The best tech shown at IDF so far was the new SeaMicro box from AMD: say hi to the SM15000. The bigger news is the new storage boxes that add 5PB of storage to the SM15000 line in a sane way.
To the CPU geeks out there, the most important part of the announcement is the pair of new server cards, one with a Piledriver Opteron and one with Ivy Bridge. Both are hot-swap compatible with the older Atom and Sandy Bridge cards, and should be a nice upgrade for those in need. We showed you the AMD card months ago; the specs on it are an 8-core Opteron at 2.0, 2.3, or 2.8GHz with up to 64GB of DRAM. The Ivy Bridge card uses a Xeon E3-1265Lv2, and the name should immediately identify the spec to most readers. If it doesn't, that would be a 2.5GHz quad-core part that supports up to 32GB of memory.
Opteron on the left, Xeon on the right
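For readers who want the two cards side by side, here is the announcement boiled down to a data structure; the clock grades and memory ceilings are from the announcement, while the Xeon core count is Intel's published spec for the E3-1265Lv2, not a SeaMicro figure:

```python
# The two new SM15000 server cards side by side.
cards = {
    "AMD Opteron (Piledriver)": {
        "cores": 8,
        "clock_ghz": (2.0, 2.3, 2.8),  # three speed grades offered
        "max_dram_gb": 64,
    },
    "Intel Xeon E3-1265Lv2 (Ivy Bridge)": {
        "cores": 4,                    # quad core, per Intel's spec sheet
        "clock_ghz": (2.5,),
        "max_dram_gb": 32,
    },
}

for name, spec in cards.items():
    print(f"{name}: {spec['cores']} cores, "
          f"{'/'.join(map(str, spec['clock_ghz']))}GHz, "
          f"{spec['max_dram_gb']}GB max DRAM")
```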
This is an interesting split: the Xeon should have more CPU grunt, but the AMD card offers more cores to sell as VMs, with more memory per core. Don't discount the value of this to marketing wonks; it matters more than mythical performance numbers to many end users. The 64GB memory ceiling also strongly implies that AMD has a four-channel memory solution, so the Opteron on the AMD card is likely a 2-die solution.
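A quick sanity check of that inference, as a minimal sketch; the 16GB-per-channel ceiling and the 2 DDR3 channels per Piledriver die are our assumptions based on DIMM sizes of the day, not AMD-confirmed figures:

```python
# Sketch: why a 64GB ceiling hints at a 2-die Opteron package.
# Assumptions (not confirmed by AMD): at most 16GB per memory channel
# with period DIMMs, and 2 DDR3 channels per Piledriver die.
MAX_GB_PER_CHANNEL = 16
CHANNELS_PER_DIE = 2

card_max_gb = 64
channels_needed = card_max_gb // MAX_GB_PER_CHANNEL  # 4 channels
dies_implied = channels_needed // CHANNELS_PER_DIE   # 2 dies

print(f"{channels_needed} channels -> {dies_implied} dies")
```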
The box itself is new too, mandated by a new fabric. The SM15000 has the same electrical connections, it has to in order to support the same cards, but the number of connections that make up the fabric is different. In SeaMicro terminology, this is extending the fabric outside of the box. Strictly speaking it doesn't leave the box; the added torus width is there to put two SAS ports on each I/O blade. It looks like this.
Back of the box, SAS on either side of the NICs
The SAS ports connect to a standard SAS switch and then to a SAS expander box, both available from multiple vendors if you don't like the SeaMicro versions. In aggregate, you can have 1408 HDDs for a claimed total of over 5PB of storage per SM15000 server. Because it uses basic SAS hardware rather than expensive Fibre Channel SANs, it is cheap. Better still, it is essentially one big direct-attached storage (DAS) array, so the management overhead is close to zero. The drives are logically 'connected' to each server instance, so you just manage your servers and their drives.
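The 5PB claim is easy to sanity check against drive sizes of the day; a minimal sketch assuming 4TB HDDs (the drive size is our assumption, only the 1408-drive count and the 5PB total are SeaMicro's numbers):

```python
# Sketch: does 1408 HDDs get you 'over 5PB'? Assumes 4TB drives, the
# largest commonly shipping at the time (our assumption, not SeaMicro's).
DRIVES = 1408
TB_PER_DRIVE = 4

total_pb = DRIVES * TB_PER_DRIVE / 1000  # decimal units, as vendors count
print(f"{total_pb:.3f} PB")              # 5.632 PB, comfortably over 5PB
```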
All in all, it is a great idea. If the microserver category fits your workload, this array could drop the cost of storage by an order of magnitude or more. Better yet, you don't need an expensive SAN expert to babysit it; you see drives, RAID, and everything else you are already used to dealing with. Two new cards, one new storage array, and a lot of cost that end users don't have to pay anymore.S|A
Charlie Demerjian