Welcome Back Intel Xeon 6900P Reasserts Intel Server Leadership


Intel Xeon 6 Platforms

Now, the challenge is that delivering (at least) four different high-level P-core/E-core and socket combinations means there is a lot of variability in the specs. You will see a lot of “Up to” here because Intel differentiates features depending on the core type and SKU.

Intel Xeon 6 Granite Rapids AP Launch Xeon 6 Family Specs

A great example of this is that the Xeon 6700E has 88 PCIe Gen5 lanes, but the slide says 96 lanes because it does not just cover the 6700E series. The P-cores, however, can get up to 136 PCIe Gen5 lanes for single-socket designs in the R1S configuration that we covered. Intel also has 12-channel memory with MCR DIMM support on the P-cores, and more UPI links in the 6900 series for more inter-socket bandwidth.

Intel Xeon 6 6700 And 6900 Platform 3

At STH we have been talking about CXL a lot for the past several years. Intel Xeon 6 supports CXL 2.0 Type 1, Type 2, and Type 3 devices on the 64 lanes that support CXL.

Intel Xeon 6 CXL 2.0 1

Of course, there are caveats to things like CXL Type 3 devices: the P-core variants get interleaved memory options in both sockets, but the E-core variants do not. We have seen the CXL Heterogeneous Interleaved mode and it is really cool. Imagine having ~8 additional channels of memory bandwidth and running DIMMs and CXL memory modules together in one giant pool of memory with bandwidth striped across all of them. Sierra Forest does not get that. Instead, CXL memory can either be its own NUMA node or run in a hardware-assisted flat memory mode.
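
On Linux, a Type 3 expander configured as its own NUMA node shows up as a memory-only node with no CPUs attached. Here is a minimal sketch, assuming a Linux host where the CXL memory has already been onlined as system RAM, that walks sysfs to show each node's CPUs and capacity so the CXL node is easy to spot:

```python
#!/usr/bin/env python3
# Minimal sketch: list NUMA nodes and flag memory-only nodes (e.g., a CXL Type 3
# expander exposed as its own NUMA node). Assumes a Linux host where the CXL
# memory has already been onlined as system RAM.
import glob
import os
import re

for node_path in sorted(glob.glob("/sys/devices/system/node/node[0-9]*")):
    node = os.path.basename(node_path)

    # CPUs assigned to this node; an empty cpulist means a memory-only node.
    with open(os.path.join(node_path, "cpulist")) as f:
        cpulist = f.read().strip()

    # Per-node MemTotal is reported in kB in the node's meminfo file.
    mem_kb = 0
    with open(os.path.join(node_path, "meminfo")) as f:
        for line in f:
            match = re.search(r"MemTotal:\s+(\d+) kB", line)
            if match:
                mem_kb = int(match.group(1))
                break

    kind = "memory-only (possibly CXL)" if not cpulist else f"CPUs {cpulist}"
    print(f"{node}: {mem_kb / 1048576:.1f} GiB, {kind}")
```

Running `numactl --hardware` gives a similar view; the point is that on Sierra Forest the CXL capacity sits in its own node (or behind flat memory mode) rather than being striped with the local DIMMs.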

Intel Xeon 6 CXL 2.0 2

What we know in this generation is that Type 3 memory expansion devices are going to be the big driver for CXL adoption.

Intel Xeon 6 Granite Rapids AP Launch CXL 2.0

Running in flat memory mode, Intel can add memory capacity, potentially at a lower cost.
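
To put rough numbers on the capacity and cost angle, here is a back-of-the-envelope sketch; the DIMM configuration, card capacities, and $/GB figures are placeholder assumptions for illustration, not Intel or vendor pricing:

```python
# Back-of-the-envelope sketch of flat memory mode capacity and blended cost.
# All capacities and $/GB figures below are placeholder assumptions.
native_ddr5_gb = 12 * 96      # assumed: 12 channels x 96GB RDIMMs per socket
cxl_gb = 4 * 256              # assumed: 4 CXL expander cards x 256GB each

ddr5_cost_per_gb = 4.0        # assumed $/GB for new DDR5 RDIMMs
cxl_cost_per_gb = 3.0         # assumed $/GB for the CXL-attached memory

total_gb = native_ddr5_gb + cxl_gb
blended = (native_ddr5_gb * ddr5_cost_per_gb + cxl_gb * cxl_cost_per_gb) / total_gb

print(f"Flat-mode pool: {total_gb} GB ({cxl_gb / total_gb:.0%} CXL-attached)")
print(f"Blended cost:   ${blended:.2f}/GB vs ${ddr5_cost_per_gb:.2f}/GB for all-DDR5")
```

Even with modest assumptions, putting a slice of the capacity on cheaper CXL-attached memory pulls the blended cost per GB down, which is the argument for flat memory mode.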

Intel Xeon 6 Granite Rapids AP Launch CXL 2.0 In Flat Mode

Just to make this a bit more concrete, with something like an Astera Labs Aurora A1000 card, we can simply put it in one of the PCIe Gen5 x16 riser slots.

Astera Labs Aurora A1000 With 4x 64GB DDR5 DIMMs 2

We can fill the card with 4x 64GB DDR5 RDIMMs and get another 256GB of memory capacity at about the same latency as adjacent-socket memory (e.g., the memory pool connected to the opposite CPU in a 2-socket server). We can then do this four times and get an extra 1TB of memory in the server and roughly 8 channels of DDR5 worth of bandwidth.
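
The math behind those numbers is straightforward; in the sketch below the PCIe and DDR5 figures are theoretical peak rates (assuming DDR5-6400 for the channel comparison), so the channel equivalence depends on whether you count one or both PCIe directions:

```python
# Quick arithmetic for the 4x Aurora A1000 example. PCIe and DDR5 numbers are
# theoretical peak rates used for illustration only.
cards = 4
dimms_per_card = 4
dimm_gb = 64

extra_capacity_gb = cards * dimms_per_card * dimm_gb
print(f"Added capacity: {extra_capacity_gb} GB")   # 1024 GB, i.e. an extra 1TB

pcie_gen5_x16_gbs = 64.0        # ~64 GB/s per direction for a PCIe Gen5 x16 link
ddr5_6400_channel_gbs = 51.2    # 6400 MT/s x 8 bytes per DDR5 channel

one_way = cards * pcie_gen5_x16_gbs
print(f"Added bandwidth: ~{one_way:.0f} GB/s per direction, "
      f"~{one_way / ddr5_6400_channel_gbs:.0f} DDR5-6400 channels one-way or "
      f"~{2 * one_way / ddr5_6400_channel_gbs:.0f} counting both directions")
```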

Astera Labs Aurora A1000 With 4x 64GB DDR5 DIMMs 3

CXL has another trick, however. Suppose you are a hyper-scale customer with loads of working DDR4 modules. In that case, you can use a compatible CXL DDR4 controller card and plug those memory modules into your DDR5 server using a similar approach to the one above. We have a ton of DDR4 memory and cannot wait until we can get a CXL DDR4 memory shelf that we can cable up (using retimers) and use with these systems.

Intel Xeon 6 CXL 2.0 3

We recently saw the Marvell Structera, which can handle up to twelve DDR4 DIMMs so they can be used as a CXL 2.0 memory pool for DDR5 servers.

Marvell Structera A 2504 5

For hyper-scalers who have lots of DDR4 being decommissioned, this can be a big source of cost savings even if only 1/3 of memory is recycled from previous generations.
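
As a rough illustration of what that 1/3 could be worth, here is a sketch where the fleet size and $/GB figures are made-up assumptions rather than hyper-scaler or vendor numbers:

```python
# Rough sketch of the DDR4 recycling economics; fleet size and $/GB are
# illustrative assumptions, not hyper-scaler or vendor figures.
fleet_memory_gb = 1_000_000          # assumed memory footprint of a new deployment
recycled_fraction = 1 / 3            # portion served by recycled DDR4 behind CXL
new_ddr5_cost_per_gb = 4.0           # assumed $/GB for new DDR5 RDIMMs
recycled_cost_per_gb = 1.0           # assumed amortized $/GB for CXL controller + reuse

recycled_gb = fleet_memory_gb * recycled_fraction
savings = recycled_gb * (new_ddr5_cost_per_gb - recycled_cost_per_gb)
print(f"Recycled: {recycled_gb / 1024:.0f} TiB of DDR4")
print(f"Estimated savings: ~${savings / 1e6:.1f}M vs. buying all-new DDR5")
```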

Marvell Structera X 2404 CXL Enabled DDR4 Memory Recycling

Next, let us get to the performance.

4 COMMENTS

  1. Wow, can’t even hide Patrick’s love affair with Intel anymore, can we? Intel has not even properly launched this, yet it’s 128c Intel vs 96c Genoa, but AMD will have the same 128c in 2 weeks’ time… just be honest finally and call it servingintel.com ;-)

  2. Yawn… Still low on PCIe lanes for a server footprint when GPUs and NVMe storage are coming fast and furious. Intel needs to be sold so someone can finally innovate.

  3. Love it or not, the numbers are looking good. For many, an important question will be yield rates and pricing.

    I wonder why Epyc is missing from the povray speed comparison.

    One thing I’d like to see is a 4-core VM running Geekbench 6 while everything else is idle. After that, Geekbench for an 8-core VM, 16-core, 32-core, and so forth under similar circumstances. This sort of scaling analysis would help determine how well matched the MCR DIMM memory subsystem is to the high-core-count processors. That is just the kind of investigative journalism needed right now.

    As an aside, I had to work through eight captchas for this post.

  4. The keyword would be availability. I checked just now, and these newer parts don’t have 1k tray pricing published yet, so I am not sure when they would be available. It felt painful to restrict the on-premises server procurement specification to 64 cores to get competitive bidding across vendors. Hope for the best.
