Looking for a nice little 2U box that can house eight 2-slot GPUs, not to mention large amounts of everything else? If so, take a long look at the Gigabyte GS-R22PHL; it has almost everything you could ever want in a 2U box.
Gigabyte calls the GS-R22PHL a “supercomputing server,” but I would consider it more of an HPC building block, and a very flexible one at that. The base board is what you know and love: two Romley/Xeon E5-2600 sockets on an Intel C602 chipset, 16 DIMM slots supporting up to DDR3/1600, and eight 2.5″ drive bays. The supercomputing part comes in when you notice it packs up to eight 2-slot GPUs into a 2U form factor, an incredible packaging job if there ever was one. If this sounds good, it gets better when you look at the details, but first look at the machine itself.
The box packed tightly with goodies
Starting at the drives, you can use the Intel SATA/RAID controller or pick the optional LSI 2208 SAS-6 card in one of the PCIe slots not devoted to the GPUs. If you are serious about your computing, and you wouldn’t buy this type of machine if you just dabble, then don’t even consider skipping the LSI card; it is just dumb to skimp on real storage. Same with the RAM: officially you can cram in 16 registered DIMMs totaling 512GB, or 32GB per DIMM. If you really want, this board will support up to 128GB of unregistered ECC DDR3, but again, why skimp on the important bits to save a pittance? Inphi-buffered DIMMs aren’t cheap, but they make 32GB sticks of DDR3/1600 possible, and they are registered too.
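For the arithmetic-inclined, the two memory configurations above work out as follows. This is just a sketch of the capacity math from the article; the per-DIMM sizes are the article's figures, and the variable names are mine.

```python
# Capacity math for the GS-R22PHL's 16 DIMM slots, per the article's figures.
SLOTS = 16

# Registered DIMMs: 32GB sticks (e.g. Inphi-buffered DDR3/1600) max out the board.
rdimm_total_gb = SLOTS * 32

# Unregistered ECC DDR3: the stated 128GB cap implies 8GB per stick.
udimm_total_gb = SLOTS * 8

print(rdimm_total_gb)  # 512
print(udimm_total_gb)  # 128
```

The gap between the two, a factor of four, is why skimping on registered DIMMs in a box like this makes so little sense.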
Getting back to the basic server bits, they are there, starting with an Aspeed AST2300 remote KVM setup that thankfully supports up to 1920×1200, a minor miracle in the server world. Power is also redundant, with two 1600W hot-plug PSUs in back. One unexpectedly nice touch is that they are 80+ Platinum certified, useful at the wattages we are talking about for this box. If you have eight GPUs running near TDP, that doesn’t leave much margin with these PSUs though. That said, I doubt most setups can push all the cards that hard 100% of the time; you will likely hit bottlenecks elsewhere long before the GPUs max out.
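To see why the margin is thin at full tilt, here is a back-of-the-envelope power budget. The TDP and overhead numbers are illustrative assumptions on my part, not Gigabyte specs; only the PSU rating comes from the article.

```python
# Rough power budget for a GS-R22PHL-class box.
# GPU/CPU TDPs and the "everything else" figure are assumed, era-typical values.
GPU_TDP_W = 300   # assumed: a 2-slot compute GPU of the period
CPU_TDP_W = 135   # assumed: one Xeon E5-2600
OTHER_W = 250     # assumed: DIMMs, drives, fans, NICs, losses
NUM_GPUS = 8
NUM_CPUS = 2

load_w = NUM_GPUS * GPU_TDP_W + NUM_CPUS * CPU_TDP_W + OTHER_W
capacity_w = 2 * 1600  # both PSUs delivering, i.e. no redundancy left

print(f"Estimated peak load:   {load_w} W")
print(f"Combined PSU capacity: {capacity_w} W")
print(f"Headroom:              {capacity_w - load_w} W")
```

Under these assumptions the box draws roughly 2900W at peak against 3200W of combined supply, so with one PSU out, a single 1600W unit could not carry a full GPU load. At maximum tilt the pair is capacity, not redundancy, which is exactly the thin margin noted above.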
The GS-R22PHL is also well equipped for I/O, with two 10GbE ports standard and two 10GbE SFP+ ports optional. Intel supplies the X540 for the copper connection; a Broadcom BCM57810 does the optical duties. Both are about as vanilla as you could expect, no surprises here. There are also two USB2 ports should you want to plug in a Wi-Fi dongle, wireless mouse, or one of those color-changing blinky things you get at trade shows that make all the other data center tenants jealous. Only the coolest supercomputers have 7-color USB mood lighting, and this board makes it possible not once but twice.
Cards in racks by the 2/3rds dozen
We are saving the best part for last, the GPUs, or more likely Xeon Larrabees. Why is this the best part? Take a look at the packaging above: two GPUs are put in a cage, and four of those cages are stuffed sideways into the GS-R22PHL. This is what makes the 2U form factor possible; all the other 8-GPU chassis I have seen are 4U or more. Consider me impressed by this packaging job, more so because it is a generic 2U server, not engineered to a specific requirement à la Facebook or Google RFPs.
This packaging job is made possible because the rest of the machine is packed into the center aisle quite tidily, an amazing feat when you consider it has two sockets, 16 DIMMs, two chipsets, eight drives, and all the associated heat sinks and fans. But wait, there’s more. Remember those two 1600W PSUs? How about room for two more PCIe cards? Cables? Can’t forget those either; they take room and block airflow. The Gigabyte GS-R22PHL may not look like much from the outside, but once you think about it, it is a damn amazing box. It supports just about everything you could possibly ask for in a GPU compute server, and does it in an astounding 2U. I really can’t think of a better packaging job for a non-custom machine, can you? S|A