Market Impact
The big question is what the market impact of the new Ice Lake Xeons will be. The perspective you take will be influenced by personal affinity, but there are objective points in this launch we can use to discuss how it will impact the market.
First and foremost, if one has an AVX-512/VNNI application, needs Optane PMem, or simply has a hard-line organizational mandate to use Intel Xeon, then the choice is very simple: Xeon is your answer. The converse is also true: if your organization is looking to the future and wants to diversify to ensure Intel does not have the market leverage to extract pricing as it did in 2015-2019, then one will probably look at AMD EPYC 7003 “Milan” parts or even something from Arm.
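As an aside for readers unfamiliar with VNNI: the headline DL Boost instruction, VPDPBUSD, fuses a four-way unsigned-int8 by signed-int8 multiply with a 32-bit accumulate, which is why it helps quantized inference. Here is a plain-Python model of its per-lane behavior (the function name and list-based lane layout are illustrative, not Intel's API):

```python
def vnni_dpbusd(acc, a_u8, b_s8):
    """Model of AVX-512 VNNI VPDPBUSD: for each 32-bit accumulator lane,
    multiply four unsigned-8 x signed-8 byte pairs and add the products."""
    out = []
    for lane, acc_val in enumerate(acc):
        quad = range(4 * lane, 4 * lane + 4)
        acc_val += sum(a_u8[i] * b_s8[i] for i in quad)
        out.append(acc_val)
    return out

# Two lanes, four byte-pairs each: one instruction's worth of int8 dot product.
print(vnni_dpbusd([0, 0], [1, 2, 3, 4, 5, 6, 7, 8], [1, -1, 2, -2, 1, 1, 1, 1]))
# [-3, 26]
```

On real silicon this runs across a full 512-bit register per cycle, which is where the inference throughput claims come from.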
Realistically, the interesting part of the market is not the part that is staunchly in one camp or the other. Instead, it is the thin slice that is truly competitive. In that segment open to options, this is how we see the market:
- For those who simply wanted PCIe Gen4 and 8-channel memory from Intel, and had been looking to AMD for those features, Ice Lake is a perfect response.
- If one avoided Optane PMem because it required costly large-memory “L” SKUs, then Ice Lake is a great response.
- If one wanted a full 32 cores to fill a VMware vSphere per-socket license, or 2x 16-core Microsoft Windows Server licenses, then Intel just made a big move. AMD can no longer point to Intel and say license capacity will be wasted due to insufficient cores.
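To make the licensing point concrete, here is a rough sketch of the per-socket math, assuming vSphere's rule of one per-CPU license covering up to 32 cores and counting Windows Server core licensing in 16-core units (a simplification of the actual 2-core pack rules; the function names are ours):

```python
import math

def vsphere_licenses(cores_per_socket: int, sockets: int) -> int:
    # One vSphere per-CPU license covers a socket with up to 32 cores.
    return sockets * math.ceil(cores_per_socket / 32)

def winserver_16core_units(total_cores: int) -> int:
    # Windows Server core licensing counted in 16-core units
    # (simplified: real rules sell 2-core packs with per-server minimums).
    return math.ceil(max(total_cores, 16) / 16)

# A 2P 32-core Ice Lake system fully uses two per-socket vSphere licenses...
print(vsphere_licenses(32, 2))     # 2
# ...while a 2P 64-core EPYC system needs four.
print(vsphere_licenses(64, 2))     # 4
# A 32-core socket also lines up exactly with 2x 16-core Windows units.
print(winserver_16core_units(32))  # 2
```

In other words, at 32 cores per socket Intel now fills the common license boundaries exactly rather than leaving paid-for capacity on the table.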
All is not perfect for Intel though. AMD still has a strong position when:
- Workloads that utilize large caches
- Configurations needing 160-162 PCIe lanes
- Single-socket configurations
- Consolidation plays where higher core counts reduce socket count
- Keeping a second vendor in the mix to apply pricing pressure to Intel
- Running on platforms already PCIe Gen4 certified on AMD
The fact is that Intel is touting acceleration that will not help every application, while AMD offers raw core count and large caches. AMD has also been driving the PCIe Gen4 ecosystem since 2019, where Intel is only arriving now. If you want to run NVIDIA A100 GPUs, partners will support them on Intel, but AMD has effectively been the de-facto Ampere training platform for a year. The same goes for non-Intel NICs, drives, and other accelerators. Intel will catch up rapidly, but at launch, AMD is simply further ahead here.
POWER9 is only relevant to those who need Power at this point. IBM is not competitive in the mainstream 2P market.
Arm is interesting. Still, Arm is winning when there is a mandate to go Arm. One does not switch an enterprise data center to an architecture that does not work with existing workloads unless there is a very good reason to and it is an organizational mandate. That is a mandate that is being heard more often, but we are still waiting for something to spark mass adoption.
Our sense is that with the Intel Ice Lake Xeon, Intel effectively did what it needed to. It is now competitive in the heart of the market. A big part of AMD’s success has been through being able to yield larger chips and enable PCIe Gen4, and Intel has taken steps in both directions.
AMD is not going to capture 70% of the market with this generation. They are too supply-constrained at TSMC to do this. At the same time, the number of OEMs that are telling us that AMD is now double-digit percentage of their revenue has gone up and one can see that many vendors are now treating AMD EPYC on relatively equal footing with Intel Xeon with similar numbers of platforms for each. Compare this with when the AMD EPYC 7001 series launched and AMD had many commitments to make servers, but it took time for those to hit the market and really get out there and many were treated as experiments. Now, large vendors such as Dell, HPE, Lenovo, Supermicro, Quanta, and others are launching full series of AMD series servers along with Intel servers.
For the first time, we are going to see actual competition on merits rather than binary decision points such as “is a system available from the vendor we use?”, “does this have PCIe Gen4?”, and so forth. That is great for the market.
Why Ice Lake and Whitley Are a Dead End
Despite all of the market impact discussion, both Ice Lake and its Whitley platform are dead ends. Intel will tout its 10nm process today, but that process was so delayed that this platform will be the last of its kind.
Looking past Ice Lake Xeons, we have Sapphire Rapids. Sapphire Rapids will bring CXL for the first time; CXL sits atop PCIe Gen5. We will also see a transition to DDR5 in that timeframe. Beyond that, Intel has said Sapphire Rapids will not be a monolithic die. Instead, to offer expanded features and de-risk future chips, it will use a multi-die design.
Let us pretend we are covering the Sapphire Rapids launch in 2022; here is what we expect to cite as the benefits of Sapphire Rapids over Ice Lake Xeons:
- PCIe Gen5 support
- DDR5 support
- First-gen CXL support for memory pooling
- Greatly expanded core counts due to the multi-die approach
- More customization options due to the multi-die approach
- New acceleration options
- Higher TDP to go along with the additional cores
- Additional focus on local caching and memory hierarchy will be required to enable the above
Given just the above, one can see why Sapphire Rapids is going to be a big deal for the industry, and why AMD EPYC 7004 “Genoa” is being designed to compete with Sapphire Rapids in 2022.
The somewhat strange part of the Intel Ice Lake launch is that it brings PCIe Gen4 after around nine years of supporting Gen3, yet about a year from now we will be using PCIe Gen5 with CXL. Intel is launching DDR4-3200 support that it had in Cooper Lake in 2020 and AMD had in 2019, but in about a year we will be on DDR5. Intel is touting how its single monolithic die is superior to AMD’s multi-die approach, yet it has also said that monolithic dies are not the way forward and is fully embracing multi-die designs. We can see that direction in the 40+ pieces of silicon going into the new Intel Xe HPC GPU, “Ponte Vecchio,” that will sit alongside Sapphire Rapids. If this is Intel’s direction, Ice Lake’s monolithic die is the end of an era.
A worry in the industry is that if Sapphire Rapids is on time or even pulled in, and Ice Lake availability is more like June than May, customers will simply skip Ice Lake. Intel will still sell a ton of these chips, but this has been a constant theme in the industry for over a year. Sapphire Rapids is such a huge jump that even next to the big jump that is Ice Lake, the next generation is a paradigm shift.
Final Words
First off, we need to congratulate the teams at Intel for simply getting Ice Lake out. As we noted at the start of this piece, the journey to 10nm was like Homer’s ten-year Odyssey. Getting Ice Lake out undoubtedly puts Intel in a competitive position with AMD and removes some (not all) of AMD’s, and even some of the Arm ecosystem’s, biggest binary win points. PCIe Gen4 on x86 is no longer AMD-only, nor is the 32-core level.
To be sure, AMD still has a good product, and AMD did not take dominant share even when it was clearly the better option. The fact that Intel held ~90% market share while its competitor had more cores, lower power consumption per core, PCIe speed and lane advantages, memory bandwidth advantages, and a two-year head start on a new generation should not be lost on folks. Intel dropped prices by 60% about a year ago and is now aggressively pushing prices down as AMD has sought to raise prices to capture the extra value created by its new chips. Intel did not need to take the performance crown; it just needed to show up with something competitive again, as it was in the EPYC 7001 generation.
For STH readers, this is great. AMD platforms will have been in the market longer, and will therefore be a bit more mature, but Intel will quickly follow. By July 2021 we should have an extremely competitive market where pricing can make a true difference throughout large areas of the market. We mentioned that AMD’s strategy of raising prices may also have been in anticipation of needing to discount more with Ice Lake, and that will likely be true.
Overall, 2021 is the most exciting year in the server space in a decade, and this is happening just on the cusp of server architectures completely changing in 2022.
@Patrick
What you never mention:
The competitors to Ice Lake for HPC AVX-512 and AI inference workloads are not CPUs; they are GPUs like the A100, Instinct MI100, or T4. That is why next to no one is using AVX-512 or DL Boost.
Dedicated accelerators offer much better performance and price/perf for these tasks.
BTW: still nothing new on the Optane roadmap. It’s obvious that Optane is dead.
Intel will say that it is “committed” to the technology, but in the end it is about as committed as it was to Itanium CPUs as a zombie platform.
Lasertoe – the inference side can do well on the CPU. One does not incur the cost to go over a PCIe hop.
On the HPC side, acceleration is big, but not every system is accelerated.
Intel, being fair, is targeting having chips that have a higher threshold before a system would use an accelerator. It is a strange way to think about it, but the goal is not to take on the real dedicated accelerators, but it is to make the threshold for adding a dedicated accelerator higher.
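The threshold framing above can be put into a toy cost model: a dedicated accelerator only wins once a request is large enough to amortize the PCIe round trip. All numbers here are made up for illustration:

```python
def best_device(batch, cpu_us_per_item, gpu_us_per_item, pcie_roundtrip_us):
    """Toy model: the GPU pays a fixed PCIe round-trip per request;
    the CPU processes items more slowly but with no transfer cost."""
    cpu_us = batch * cpu_us_per_item
    gpu_us = pcie_roundtrip_us + batch * gpu_us_per_item
    return "cpu" if cpu_us <= gpu_us else "gpu"

# Small requests stay on the CPU; large batches justify the hop.
print(best_device(1, 50, 5, 200))    # cpu
print(best_device(100, 50, 5, 200))  # gpu
```

Raising per-core inference throughput with VNNI effectively raises the batch size at which the crossover happens, which is the “higher threshold” being described.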
“not every system is accelerated”
Yes, but every system where everything needs to be rewritten and optimized to make real use of AVX-512 fares better with accelerators.
——————
“the inference side can do well on the CPU”
I acknowledge the threshold argument for desktops (even though smartphones show how well small on-die inference accelerators work, and WinML will probably bring that to x86), but who is running a server with just very small inference tasks before going back to other things?
Servers that do inference jobs are usually dedicated inference machines for speech recognition, image detection, translation, etc. Why would I run those tasks on the same server as a web server or a DB server? The threshold doesn’t seem to be pushed high enough to make that a viable option. Real-world scenarios seem very rare.
You have connections to so many companies. Have you heard of real intentions to use inference on a server CPU?
Even Facebook is doing distributed inference/ training on CPUs. Organizations 100% do inferencing on non-dedicated servers, and that is the dominant model.
Hmmm… the real issue with using AVX-512 is the down clock and latency switching between modes when you’re running different things on the same machine. It’s why we abandoned it.
I’m not really clear on the STH conclusion here tbh. Unless I need Optane PMem, why wouldn’t I buy the more mature platform that’s been proven in the market and has more lanes/cores/cache/speed?
What am I missing?
Ahh okay, the list prices on the Ice Lake SKUs are (comparatively) really low.
Will be nice when they bring down the Milan prices. :)
@Patrick (2) We’ll buy Ice Lake to keep live migration on VMware. But YOU can buy whatever you want. I think that’s exactly the distinction STH is trying to show
I meant for new server applications, not legacy ones like FB’s.
Facebook is trying to get to dedicated inference accelerators, like you reported before with their Habana/Intel Nervana partnerships, or this:
https://engineering.fb.com/2019/03/14/data-center-engineering/accelerating-infrastructure/
Regarding the threshold: Fb is probably using dedicated inference machines, so the inference performance threshold is not about this scenario.
So the default is a single Lewisburg Refresh PCH connected to 1 socket? Dual is optional? Is there anything significant remaining attached to the PCH to worry about non-uniform access, given anything high-bandwidth will be PCIe 4.0?
Would be great if 1P 7763 was tested to show if EPYC can still provide the same or more performance for half the server and TCO cost :D
Sapphire Rapids is supposed to be coming later this year, so Intel is going 28c->40c->64c within a few months after 4 years of stagnation.
Does it make much sense for the industry to buy ice lake en masse with this roadmap?
“… a major story is simply that the dual Platinum 8380 bar is above the EPYC 7713(P) by some margin. This is important since it nullifies AMD’s ability to claim its chips can consolidate two of Intel’s highest-end chips into a single socket.”
I would be leery of buying an Intel sound bite. It may distract them from focusing on MY interests.
Y0s – mostly just SATA and the BMC, not a big deal really unless there is the QAT accelerated PCH.
Steffen – We have data, but I want to get the chips into a second platform before we publish.
Thomas – my guess is Sapphire really is shipping 2022 at this point. But that is a concern that people have.
Peter – Intel actually never said this on the pre-briefs, just extrapolating what their marketing message will be. AMD has been having a field day with that detail and Cascade Lake.
I don’t recall any mention of HCI, which I gather is a major trend.
A vital metric for HCI is inter-host link speed, and AFAIK AMD has a big edge?
Patrick, did you notice the on package FPGA on the Sapphire Rapids demo?
Patrick, great work as always! Regarding the SKU stack: call me cynical but it looks like a case of “If you can’t dazzle them with brilliance then baffle them with …”.
@Thomas
Gelsinger said: “We have customers testing ‘Sapphire Rapids’ now, and we’ll look to reach production around the end of the year, ramping in the first half of 2022.”
That doesn’t sound like the average joe can buy SPR in 2021, maybe not even in Q1 22.
Is the 8380 actually a single die? That would be quite a feat of engineering getting 40 cores on a single NUMA node.
I was wondering about the single die, too. fuse.wikichip has a mesh layout for the 40 cores.
https://fuse.wikichip.org/news/4734/intel-launches-3rd-gen-ice-lake-xeon-scalable/
What on earth is this sentence supposed to be saying?
“Intel used STH to confirm it canceled which we covered in…”