The idea is simple enough: take some of the PCIe switch silicon that PLX is known for and build a top-of-rack switch out of it. From the front it looks like any other 10/40Gb switch, with 32 QSFP+ ports plus a few management ports and assorted blinky lights in fashionable ‘southwestern’ tones. Open it up and it is, well, pretty sparse. Three chips, two PSUs, and a bunch of passive components make up the majority of the box.
Looks like any Ethernet switch minus the Ethernet
The first question everyone asks when they hear about this device is, “Why?” The answers are numerous, starting with the fact that you can pass TCP/IP over it if you really want to, and it consumes about half the power of Ethernet while doing so. Running traffic up and immediately back down that stack is a bit of a waste though, and by “a bit” we mean pretty stupid, with no real upside.
Each port carries four PCIe3 lanes, so 32Gbps of bandwidth in each direction. Since there is no software stack to navigate, latency is painfully low too. If you never packetize and then depacketize the data, the reasons for the lower latency and power use are pretty obvious. Better yet, you can directly touch memory on remote systems via DMA/RDMA, a capability TCP/IP can’t offer in any sane way.
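For the curious, the per-port number falls out of standard PCIe Gen3 figures. A quick back-of-the-envelope sketch (the signaling rate and 128b/130b encoding are from the PCIe Gen3 spec; the variable names are ours):

```python
# Back-of-the-envelope bandwidth for one QSFP+ port carrying four
# PCIe Gen3 lanes. 8 GT/s per lane and 128b/130b line coding are
# standard Gen3 figures; everything else follows from multiplication.

GT_PER_LANE = 8.0          # PCIe Gen3 signaling rate, GT/s per lane
ENCODING = 128.0 / 130.0   # Gen3 128b/130b line-code efficiency
LANES_PER_PORT = 4

raw_gbps = GT_PER_LANE * LANES_PER_PORT   # 32 Gb/s each direction, raw
usable_gbps = raw_gbps * ENCODING         # ~31.5 Gb/s after encoding

print(f"raw: {raw_gbps:.0f} Gb/s, usable: {usable_gbps:.2f} Gb/s per direction")
```

Note that Gen3’s 128b/130b coding only shaves off about 1.5 percent, unlike the 20 percent tax that Gen1/Gen2’s 8b/10b coding imposed, so the raw and usable numbers are nearly the same.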
So: lower latency, lower power, more bandwidth, DMA, and all of it over standard QSFP+ cables. What is the downside? Some may get nervous over the DMA capabilities, but that group probably isn’t a good target market for this device anyway. The real question is why Intel would be pushing this type of device for the datacenter. Unfortunately we have no clue about that one.S|A