Why SemiAccurate called 10nm wrong

Analysis: Close in some ways, way off in a big one

A few months ago SemiAccurate claimed Intel killed their 10nm process; we were wrong. Instead, what appears to be happening is that the 10nm process has been severely deprecated because it makes no economic sense to carry on with it.

When we first wrote that 10nm was dead there was a slew of criticism directed our way, something not unusual for SemiAccurate. What was unusual was that most of the criticism asked why we were attacking Intel and saying bad things about the company. This is odd because it is very clear that none of the critics actually read the article, the sub-headline of which is, “This is actually a good thing for the company” and the second line of which is, “Before you jump to conclusions, we think this is both the right thing to do and a good thing for the company.”

Call us overly jaded, but we don’t think this is exactly harsh criticism, quite the opposite. It was, and still is, our opinion that the Intel 10nm process will never be financially viable, so knifing it was a very smart move. If you look at the public numbers presented by Intel at their Manufacturing Day in early 2017, they stated in no uncertain terms that the first iteration of 10nm would be denser, lower power, and slower than the 14nm process of the time.

The second iteration of 10nm (10+) would be denser, lower power, and at best equal to the 14nm process of 2017. It wouldn’t be until the third iteration of 10nm (10++), due in 2020, that 10nm would surpass 14nm for performance. Again, this is Intel’s own data from 2017, which also stated that 10nm would be out in 2017. Do recall that the comparison point is the 14nm of 2017; the graphs presented assumed that 14nm development would stop in 2016, something Intel has been very vocal in saying has not been the case.

History has shown that 10nm did ship in 2017 as promised; it just didn’t actually work, have double digit yields, or get a single working GPU that was shown off publicly. It was indeed a PR stunt and one we will disregard for this article; we feel 10nm didn’t actually ship because it didn’t work, but feel free to disagree. The point is that since then 14nm has made significant steps forward while 10nm doesn’t seem to have done the same. Why is this important? Unless Intel is on track to meet the 2017 schedule, it is unlikely the 10+ and 10++ variants are on time. It is even less likely that they will meet the original performance goals, much less exceed them.

So we have an unspecified 10nm process due out in Q4 of 2019 for desktop and ‘mid-2020’ for server, which seems on track to match current 14nm products on performance and be a little better on power. Best case. If everything works. And if the 10nm and 10+ generations are skipped and we jump right to 10++, which is pulled in and better than expected. And if 14++ wasn’t improved over the past 2-3 years. Bets anyone?

This is why we said that Intel killing 10nm was a smart move: spending ~2 years and a few billion dollars to debug something that, again best case, ends up roughly equal to the product before it doesn’t seem to be a good use of time, money, and engineering resources. It would be much better to cut 10nm and put all the resources into 7nm in hopes of pulling it in and actually having an unquestionably better product line based on it.

The one landmine we haven’t mentioned in this article is yield. Given that 10nm was initially yielding in the single digit range for non-fully working devices, and vastly less for fully working ones, it wasn’t economical to produce products based on it. Now that many of those technical hurdles appear to have been surmounted, it looks like yields are vastly better. Anyone want to bet that yields still aren’t close to where the 5+ year old 14nm variants are?

Based on our best information, 10nm is nowhere near the yields of 14nm despite the PR statements from Intel. Why is this important? Performance and energy use determine ASPs. When a new product is underwater on TCO versus the older one, you have to lower prices or do less ethical things to move the new device. End users really don’t care about the die area of a chip, even if some OEMs do for a few classes of devices like phones.

Die area and yields affect cost and margins. 10/10+/10++ are significantly smaller than their 14nm predecessors, so that facet of cost is obviously lower. Process costs are quite the opposite: running costs are undoubtedly higher even if we forget about R&D expenses. Yield is an open question, however, and if you look at Intel’s rate of transition from 45nm to 32nm, 32nm to 22nm, and 22nm to 14nm, you will see an interesting trend of longer tails for devices on the older process. (Note: Disregard the cherry-picked PR statements; this is based on what Intel was selling to OEMs and ODMs.)

This lengthened tail, especially for lower ASP and lower margin devices, tracks very closely with the yield numbers SemiAccurate has been hearing. In short, the picture over the last 4-5 generations of process may not be as rosy as it is publicly described. With this in mind, think about yields on 10nm and what that does to costs. ASPs are capped by the performance of 14nm products, area is less than half of 14nm for the same transistor counts, process costs go up a bit, and yield, well, it is an open question.
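To make the cost arithmetic concrete, here is a minimal sketch in Python using the textbook dies-per-wafer approximation and a cost-per-good-die calculation. Every number in it (wafer cost, die size, yield fraction) is an invented placeholder, not an Intel figure; it only illustrates how a large yield gap can swamp the benefit of a die-area shrink.

```python
import math

# Minimal sketch with invented placeholder numbers: how die area and yield
# combine into cost per good die. Nothing here is an Intel figure.

def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> float:
    """Classic gross dies-per-wafer approximation with an edge-loss term."""
    radius = wafer_diameter_mm / 2.0
    return (math.pi * radius ** 2) / die_area_mm2 \
        - (math.pi * wafer_diameter_mm) / math.sqrt(2.0 * die_area_mm2)

def cost_per_good_die(wafer_cost_usd: float, wafer_diameter_mm: float,
                      die_area_mm2: float, yield_fraction: float) -> float:
    """Wafer cost divided by the number of dies that actually work."""
    good_dies = dies_per_wafer(wafer_diameter_mm, die_area_mm2) * yield_fraction
    return wafer_cost_usd / good_dies

# Hypothetical comparison: a mature process vs. a denser, pricier, low-yield one.
mature = cost_per_good_die(wafer_cost_usd=6000, wafer_diameter_mm=300,
                           die_area_mm2=150, yield_fraction=0.85)
newer = cost_per_good_die(wafer_cost_usd=9000, wafer_diameter_mm=300,
                          die_area_mm2=75, yield_fraction=0.35)

print(f"mature process: ~${mature:.0f} per good die")
print(f"newer process:  ~${newer:.0f} per good die")
```

With placeholders like those, the denser process still comes out costing more per good die; halving the die area can’t keep up with a yield that is a fraction of the mature one, which is the shape of the argument above.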

SemiAccurate will go out on a limb here and say that 10nm yields won’t be close to 14nm for the first year of its volume production. We will go further out on that limb and say that production costs for 10nm chips will be significantly higher than those of 14nm devices of similar performance. Why? Yield and what Intel has directly stated about the performance of those devices. In short, the rosiest scenario for 10nm/10+ devices is that they will be about equal to Intel’s 14nm line while costing significantly more to produce. This is why we said knifing it was a good thing.

But Intel did not knife it completely; they just severely deprecated it. Any guesses why? While this explains why we thought knifing 10nm was a good thing, it still doesn’t answer the question of why we said it was dead. The reason is that the deprecation involved taking three of the four fabs that were slated to produce 10nm CPUs and moving them to different processes.

When we wrote the original piece there were four fabs slated to transition to 10nm. One of these has been backported to 14nm, something which can’t be undone in a time frame relevant to the 10nm transition. Two of the remaining fabs installed lots of EUV tools which are meant for the 7nm process, not the 10nm process. This effectively precludes those facilities from producing 10nm.

This left one fab slated for 10nm, and try as we might, we couldn’t get definitive information on it. Meanwhile we had several sources confirming the information about the three other fabs and telling us that 10nm was unquestionably dead. As it turns out they, and SemiAccurate, were wrong; 10nm is coming out in some form in Q4/2019.

That leaves two more open questions, the process itself and volumes. Officially Intel claims the 10nm process is in good shape, on schedule, and unchanged from previous versions. The first two are laughable; the third is an actually debatable point. Setting aside the semantic argument that the old 10nm process didn’t work and the current one does, _SOMETHING_ changed. What, if anything, actually did change is an open question. This one will have to wait until Tech Insights gets a new 10nm device and does a teardown on it. Our bet is that the 10nm process as seen in the 8121U has been significantly modified to get to the version that comes out in Q4/19.

Volumes are less of an open question, or at least the general range is. Based on what Intel is doing with fabs and equipment, it is unquestionable that 10nm volumes are severely reduced. Even if Intel adds another 10nm fab, it looks like 10nm wafer throughput will be no more than half of what it was slated to be just a few years ago. From where things sit now, it looks like the wafer throughput on 10nm will be ~1/4 of what was planned earlier. Why? Remember what we said about economic viability?

So if volumes are so low, costs so high, performance so… meh, and all the rest, why is Intel bothering? The best theory SemiAccurate has heard is that there are certain technical hurdles on 10nm that needed to be solved because the same technologies are used on 7nm. If these challenges were not overcome on 10nm, the same work would need to be done under the banner of 7nm anyway, so why not just fix 10nm and make some devices while you are at it? That said, this is only a guess, but it seems logical enough.

With volumes so low, what is Intel going to produce on 10nm? That is where things get interesting.

Note: The following is analysis for professional level subscribers only.

Disclosures: Charlie Demerjian and Stone Arch Networking Services, Inc. have no consulting relationships, investment relationships, or hold any investment positions with any of the companies mentioned in this report.
