Modular data centers are becoming more common. What started as modified shipping containers has now evolved into complex prefabricated buildings, delivered in multiple parts and assembled on site.
I recently had the joy of being involved in moving 6,000 cores and 1/2 PB of disk from a traditional data center to an HP EcoPOD (now just the HP POD 240a). The promise was lower cost to build, lower cost to cool, less time to deploy, etc. Pictures after the break.
I can't comment on the first claim; I'm just a customer of the space. The second remains to be seen, but I expect it to hold. The last is only true if your own organization allows it.
I do have my disappointments: small cold aisles make installing large, heavy items difficult, and vertical PDUs block some styles of cable management arms. Look at the images below for examples.
I do like it, and I think it is a good path for the HPC space. The POD is rated at 24 kW per rack. Even with products like the Dell C8000 series or HP s6500 series nodes, it is difficult to create that much power draw. You will also notice in our pictures that the folks at Michigan, who purchased this, decided against external power conditioning, so every rack has 12U of UPS, and about half have 18U of UPS, for IT and cooling power conditioning. This decision limited how much equipment could fit in the racks. This is nothing against HP; the decision was out of their hands. I would love to have that extra 12U in every rack: that is space for 24 nodes, or 384 cores.
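The back-of-envelope math behind those numbers can be sketched as follows. The density assumptions (two half-U nodes per rack unit, 16 cores per node) are mine, inferred from the 12U / 24 nodes / 384 cores figures quoted above:

```python
# Rack space lost to in-rack UPS gear, and what it costs in compute.
# Assumed densities: 2 nodes per U (half-U sleds), 16 cores per node,
# which reproduces the post's "12U ... 24 nodes ... 384 cores" figures.
UPS_RACK_UNITS = 12   # rack units consumed by UPS gear in every rack
NODES_PER_U = 2       # assumed half-U compute sleds
CORES_PER_NODE = 16   # assumed dual 8-core sockets

nodes_lost = UPS_RACK_UNITS * NODES_PER_U
cores_lost = nodes_lost * CORES_PER_NODE
print(f"{nodes_lost} nodes, {cores_lost} cores per rack")  # 24 nodes, 384 cores
```

At densities like these, 12U per rack is real capacity, which is why the external-power-conditioning decision stings.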
I like the high power per rack. I like the management system presenting power draw and cooling equipment status. I like that the entire POD is UL listed and includes integrated fire suppression and monitoring. I like the ambient air cooling; in Michigan we have a lot of cold days, so there is no reason to run chillers. This should save a lot on power costs.
TL;DR version: I like it. See below for pictures.
"You don't like it?"
ReplyDeletehttps://www.youtube.com/watch?v=yil9wlfa0yo
("No, I don't like it.")
Great picture of the inside - I like it. : )
Delete