Intel has changed their Xeon lineup and ballooned both the SKUs and the prices. SemiAccurate has told you about some of this in the past and now we can give you more detail.
In early May Intel announced their new branding for the Xeon line, spreading it across four categories: Platinum, Gold, Silver, and Bronze. SemiAccurate exclusively told you about the price increases and the tiering of features at the same time. Now we can release the SKUs and the details, and it is a pretty monumental change.
The biggest of these changes is, as we told you in May, that Intel is now artificially cutting out features and selling them back to customers in the same way they do for the i3/i5/i7/i9 lines. In a non-artificial change they are also adding FPGAs and Omnipath connections on an extended die/MCM, something that could be very useful to some user categories. Let's take a look at what the new Xeons are offering, starting with the SKU stack.
Intel Skylake-SP Xeon SKUs
OK that one is an eye test chart but there is a lot to see here. Rather than get all wordy about it, let's just use the official explanation chart, it is fairly logical. The one thing we will point out is that the F suffix on the parts is for Fabric, aka Omnipath, not FPGA. The FPGA variants are officially MIA, to be released sometime next year. Think Purley refresh/Cascade Lake, not released on Purley.
Intel decoder ring for Skylake-SP
That brings us to the first elephant in the room, defeaturing. You might recall AMD went to great lengths to point out that all of their Epyc CPUs are fully featured. All have full memory capability (2TB/socket), all have the full 128 PCIe3 lanes, and the largest can do what the smallest can. The only things AMD differentiates on are the traditional cores, speed, and socket counts. Intel on the other hand cuts just about everything useful at one tier or another.
On the Platinum front you get it all. Almost. This line goes up to 28C/56T, has the full 38.5MB of cache, and has all three UPI links available. This last bit may seem pointless but it is what enables 4S and 8S configurations. Yes, we know the Haswell and Broadwell-EP lines could do 4S with two QPI links, but that was a hack, they weren't fully connected. To do it right you need three links for a 1-hop 4S configuration or a 2-hop 8S. Platinum and some Gold Skylake-SP Xeons have three UPI links.
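A quick way to see why the link count matters: with three UPI links per socket, a 4S box can be fully meshed, while two links force a ring with a two-hop worst case. The little model below is our own illustration of that topology arithmetic, not anything from Intel.

```python
# Worst-case hop count between sockets for two simple topologies.
# Illustrative sketch only; real UPI routing is more involved.

def worst_case_hops(sockets: int, links_per_socket: int) -> int:
    if links_per_socket >= sockets - 1:
        return 1  # fully connected: every socket one hop from every other
    # Otherwise assume a ring, like the 2-link 4S Haswell/Broadwell setups.
    return sockets // 2

print(worst_case_hops(4, 3))  # 1 hop: fully connected Skylake-SP 4S
print(worst_case_hops(4, 2))  # 2 hops: the old Haswell/Broadwell-EP hack
```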
Platinum also has six DDR4 channels supporting two DIMMs per channel at 2666MHz. If you use 128GB DIMMs that means 1.5TB per socket. Sort of. As long as you buy a CPU with an M suffix, meaning 1.5TB memory support. Without that you can only access 768GB per socket regardless of how many DIMMs you put in or what size they are. For this privilege you pay almost exactly $3000 per socket, the 8180 costs $10,009 vs the 8180M at $13,011. On the low-end the cheapest 1.5TB model is the Gold 6134M at $5217 against the plain 6134 at $2214. Yes, that is a 135% price increase for a single feature on the 6134.
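The capacity and pricing figures above are easy to sanity-check. This back-of-envelope sketch just redoes the arithmetic from the numbers quoted in this article; none of it is an official Intel formula.

```python
# Memory cap: 6 channels x 2 DIMMs per channel x 128GB DIMMs.
channels, dimms_per_channel, dimm_gb = 6, 2, 128
per_socket_gb = channels * dimms_per_channel * dimm_gb
print(per_socket_gb)  # 1536GB, the 1.5TB M-suffix ceiling

# M-suffix premium at the top and bottom of the stack.
top_premium = 13011 - 10009            # 8180M vs 8180
low_increase = (5217 - 2214) / 2214    # 6134M vs 6134
print(top_premium)                     # 3002, "almost exactly $3000"
print(round(low_increase * 100, 1))    # 135.6, the ~135% jump
```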
On the plus side, Intel claims only a small percentage of customers need more than 768GB, likely in the 5% range. Intel is quite correct about this, and because a 64GB DIMM costs ~$900 and 128GB DIMMs are priced in used-car territory, that number is small for a reason. That said, as SemiAccurate pointed out earlier, this tiering breaks the business models for some very lucrative Intel customers.
SemiAccurate countered the small percentage argument by asking whether you would purchase a CPU knowing it had a memory cap. Sure, the overwhelming majority of buyers never upgrade memory, but many think they may, and aspiration is a powerful motivator in sales. Even if such a cap doesn't dissuade many buyers from purchasing a Xeon, it sure doesn't hurt AMD's marketing efforts. SemiAccurate has directly confirmed that this cap has cost Intel orders.
Back to the Platinum features, the line has two FMA units per core (something we won't detail here) along with Turbo and HT. The Platinum line pulls in the old E7 RAS features as well, something you would expect from an 8S-capable CPU, you need them here. For this generation Intel adds Node Controller support, again something out of the scope of this article. The Platinum line is recognizable by the 81xx numbering.
Moving on to the Gold Skylake-SP Xeons, we have two sub-lines, 61xx and 51xx, and they are very different beasts. The 61xx line runs from 12C to 22C and has most of the features of the Platinum line. This means three UPI links, DDR4/2666 support, dual FMA units, RAS, and node controller support. It looks to us like the 61xx is an XCC (the largest of the three Skylake-SP dies) variant like the Platinum 81xx CPUs. It shares all the features that allow the CPU to scale beyond 2S and has more than the 18C supported by the mid-range HCC die. The <=18C 61xx parts are likely fused-off XCC dies as well.
That brings us to the lower of the Gold CPUs, the 51xx line. These parts lose one FMA unit per core and one UPI link, leaving two, only support DDR4/2400, and carry lower but still quite advanced RAS features compared to the 81xx and 61xx lines. The 51xx line does support 4S configs, but like the Haswell/Broadwell-EPs, they aren't fully connected 4S configurations. This strongly suggests that the 51xx Gold Xeons are HCC dies (the middle of the three physical implementations).
Silver 41xx is up next with progressively fewer features. It takes the 51xx Gold line's base and caps the core count at 12. There are still two UPI links but they run at 9.6GT/s instead of the 10.4GT/s of the higher Xeon lines. Clocks are capped at 2.2GHz base but, as with their higher featured brethren, they can turbo higher. Other than that, they are basically a dead ringer for the consumer Skylake parts with added ECC capabilities. The core count cap and lack of a third UPI link strongly suggest Silver Xeons are cut from the smallest LCC die.
At the bottom of the line are the Bronze parts, known as the 31xx Xeons. They have an 8C cap and run at up to 1.7GHz base. DDR4 is limited to 2133MHz and turbo goes bye-bye as well. Other than that they are the same as the Silver line. Bronze is going to be the small business pedestal server CPU of choice from Intel's Skylake lineup, and it will probably sell in large numbers to non-datacenter buyers.
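Pulling the tiers and suffixes above together, the public naming scheme fits in a few lines. This is a toy decoder based on the branding described in this article; the tier digits, the 768GB default, and the M/F suffix meanings are as stated above, and everything else is illustrative.

```python
# Toy Skylake-SP SKU decoder: first digit gives the tier, suffixes
# follow the decoder ring (M = 1.5TB memory, F = Omnipath fabric).

TIERS = {"8": "Platinum", "6": "Gold", "5": "Gold",
         "4": "Silver", "3": "Bronze"}

def decode_sku(sku: str) -> dict:
    model = sku.rstrip("MF")  # strip known suffix letters
    return {
        "model": model,
        "tier": TIERS.get(model[0], "unknown"),
        "max_memory_gb": 1536 if sku.endswith("M") else 768,
        "fabric": sku.endswith("F"),
    }

print(decode_sku("8180M"))  # Platinum with the 1.5TB memory unlock
print(decode_sku("6134"))   # Gold, capped at 768GB per socket
```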
This all brings us back to the pricing and comparisons to the last generation Broadwell-EP line. Prior to the new metals nomenclature the old Xeon line was differentiated mainly by socket count support and QPI links. Parts with three QPI links for 4/8S were E7s and parts with two QPI links for 2/4S were called E5s. Core counts went up to 24 for E7s and 22 for E5s even though they came from the same die. It made sense, kind of, but we got used to it.
Believe it or not the new branding makes a lot more technical sense. Platinum and 61xx Golds are XCC die based, 51xx Golds are HCC, and both Bronze and Silver are LCC. This will simplify things a great deal in the long run and probably make for cleaner messaging. The problem is how do you compare old E5/E7s to new? Do you go by core count? Socket count? Other features? On paper some things make sense until you look at the pricing.
The Platinum line is clearly priced where the E7s were, IE really high. Gold 61xxs are priced below the $4155 of the 2699v4 Broadwell but support fully connected 4S via three UPI links. On the other hand they max out at 22C with the 6152, which costs $3655. What is apples to apples until Apple makes a new Mac Pro with Purley? SemiAccurate is going to simply compare the top of the new line with the top of the old line regardless of anything else. Why? That is what Intel did in their performance presentations, they routinely compared the 2699v4 to the 8180, so that is what we will do.
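Since Intel itself pairs the 2699v4 against the 8180, a crude per-core list-price comparison using the figures quoted in this article at least frames the jump. It ignores clocks, IPC, and features entirely, so treat it as framing, not a benchmark.

```python
# Per-core list price for the old and new flagships (article figures).
flagships = {"E5-2699v4": (22, 4155), "Platinum 8180": (28, 10009)}

for name, (cores, price) in flagships.items():
    print(name, round(price / cores, 2))
# E5-2699v4 ~ $188.86/core, Platinum 8180 ~ $357.46/core
```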
In the end the new Skylake-SP/Purley Xeons are an interesting step forward in some regards and half-finished in others. The FPGA variants and 3D XPoint support are delayed until next year, but the rest is finally here. Top line prices went up radically; some others may have gone down along with features, like when comparing a 2699v4 to a Gold 6152, but since Intel did not give out test systems we can't say if performance tracks the price increases or decreases. The one thing that does take a sharp turn is the progressive de-featuring of the lines, something Intel has wisely avoided in Xeons until today's launch. If AMD didn't have a competitive lineup with Epyc, this could have been a very lucrative move for Intel. It is going to be interesting to see how these new CPUs do in the market, buyers have a lot more than usual to digest this time around.S|A