Why Servers Are Using So Much Power: TDP Growth Over Time

Intel 2P Server Processor TDP Ranges 2008 To 2024 Estimated

Depending on where you are in a server refresh cycle or building out AI clusters, one thing is clear: liquid cooling may get much of the attention, but power is often the long pole in the tent and the biggest challenge in build-outs. While many factors drive this, a big one is that servers use more power as performance and capacity increase. In this article, we will look at dual-socket server CPUs and NVIDIA SXM GPUs and how their TDPs have increased over time.

Over the next few weeks, we are going to have a series of articles on data center power.

As always, we suggest watching the video in its own tab, window, or app for the best viewing experience.

Server CPU TDPs are Rapidly Increasing

I have tried to make this chart a few times but have been paralyzed by indecision over which parts to include when charting TDP over time. Is an 8-socket Xeon EX low-volume high-end part going to make the cut? What about a workstation-focused part? In the end, I went back through STH, looked at the types of chips we were deploying in dual-socket servers, and decided to use those. Are these charts perfect? Absolutely not. There are judgment calls being made to map a progression of server CPUs over time.

Taking a look at the dual-socket Intel Xeon TDPs over time, it should be clear that the trendline is accelerating to a degree that we simply did not see for over a decade.

Intel 2P Server Processor TDP Ranges 2008 To 2024 Estimated

Between 2008 and 2016, power consumption remained relatively stable. Skylake (1st Gen Intel Xeon Scalable) in 2017 was the first generation where we saw a notable increase in TDP. After Cascade Lake in 2019, TDPs have been racing upward at both the higher and lower ends. Cooper Lake was largely based on Cascade Lake, but it had a few enhancements and was deployed at Facebook, for example. One could rightly argue that it was primarily a public launch for 4-socket (4P) servers, but we added it in. If you want to imagine the chart without Cooper Lake, replace the 2020 bar with Cascade Lake and you would have four years of TDP stagnation. Starting with Ice Lake in 2021, both the top-end and bottom-end SKUs saw their TDPs increase.

A quick note here: while we have previously talked about this quarter’s Granite Rapids-AP hitting a 500W max TDP, and Q4’s AMD EPYC-Next (Turin) at 500W, these are still unreleased products and we do not have the full SKU stacks. We used a placeholder for the low end of the TDP range and took both companies’ 500W figures as the high end. That may change.
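As a rough back-of-the-envelope illustration of the pace, here is a minimal sketch (in Python) that computes the implied annual growth rate of the top-bin TDP, using the 205W Cascade Lake ceiling discussed below and the 500W figure both vendors have talked about for 2024. The inputs are approximate, so treat the output as directional only.

```python
# Back-of-the-envelope: implied annual growth rate of the top-bin server CPU TDP.
# 205W is the Cascade Lake ceiling (Xeon Platinum 8280, 2019, discussed below);
# 500W is the figure both vendors have discussed for their 2024 parts.

def annual_growth_rate(tdp_start: float, tdp_end: float, years: int) -> float:
    """Compound annual growth rate of TDP over the given span."""
    return (tdp_end / tdp_start) ** (1 / years) - 1

rate = annual_growth_rate(205, 500, 2024 - 2019)
print(f"Implied top-bin TDP growth, 2019-2024: ~{rate:.1%} per year")  # ~19.5% per year
```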

Taking a look at the AMD side, here is what we would see charting these parts out. Given that the Opteron 6000 series was popular at the time, and we did many G34 reviews back in the day, we are using those for the older parts. When the Intel Xeon E5-2600 came out in 2011, we heard server vendors say that the Xeon E5 was displacing the Opteron 6000 series.

AMD 2P Server Processor TDP Ranges 2008 To 2024 Estimated

The Turin low-end TDP might be a bit low, but what is clear is that TDP is going up. If you compare this to the Intel chart above, it is easy to see that AMD was ratcheting up TDP faster than Intel. In the industry, you may have heard that, before the latest Xeon 6E launch, AMD was the more efficient option. That is because AMD EPYC, from Rome through Genoa/ Genoa-X/ Bergamo, had a significant process lead and was able to pack many more cores into a slightly higher TDP. As an example, the top-end 2019 Cascade Lake Xeon Platinum 8280 was a 205W part with 28 cores, or about 7.3W per core. The AMD EPYC 7H12 was an HPC-focused 64-core part at 280W, or about 4.4W per core. While it used more power, power efficiency went up significantly.
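For reference, the per-core figures above fall out of a simple division. Here is a minimal sketch using the TDPs and core counts quoted above:

```python
# Watts per core for the two 2019 parts discussed above.
parts = {
    "Intel Xeon Platinum 8280": (205, 28),  # (TDP in watts, cores)
    "AMD EPYC 7H12":            (280, 64),
}

for name, (tdp_w, cores) in parts.items():
    print(f"{name}: {tdp_w}W / {cores} cores = {tdp_w / cores:.1f}W per core")
# -> about 7.3W per core for the Xeon and 4.4W per core for the EPYC
```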

GPU TDPs are Following the Same Trend

GPUs are another area where TDPs have been increasing significantly. Although we do not normally pre-share data like this, we wanted to validate that we were using the right SXM GPU power figures and years, so we sent this chart to NVIDIA and got the OK. GPUs are often announced and become widely available in different quarters, and NVIDIA GPUs frequently use configurable TDPs, so we wanted to confirm the chart was directionally correct.

NVIDIA SXM GPU TDP Growth 2016 To 2025 Estimated

Moving from 300W to 700W may seem in line with the CPU TDP increases, but another aspect to remember is that there are usually eight GPUs in an SXM system. In 2016, as NVIDIA was transitioning to SXM, the typical deep learning servers were 8-way or 10-way GeForce GTX 1080 Ti machines, before NVIDIA’s EULA and hardware changes pushed the market to data center GPUs. These were often 2.4-3.1kW systems.

10x NVIDIA GTX 1080 TI FE Plus Mellanox Top

We looked at the newer version of that design in our NVIDIA L40S review last year.

A modern AI server is much faster, but it can use 8kW. That is pushing companies to upgrade to liquid cooling to lower power consumption and upgrade their rack power solutions.
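To put those system-level figures in context, here is a rough sketch of how the power budgets add up. The ~250W board power for the GTX 1080 Ti is a published figure; the host overhead values (CPUs, memory, NICs, fans, storage) are illustrative assumptions rather than measurements.

```python
# Rough system-level power budgets. The host_overhead_w values are
# illustrative assumptions, not measured numbers.

def system_power_kw(gpu_tdp_w: float, gpu_count: int, host_overhead_w: float) -> float:
    """Approximate wall power of a GPU server, in kilowatts."""
    return (gpu_tdp_w * gpu_count + host_overhead_w) / 1000

# 2016-era deep learning box: 10x GTX 1080 Ti at roughly 250W each
print(f"10x GTX 1080 Ti box: ~{system_power_kw(250, 10, 500):.1f} kW")   # ~3.0 kW

# Modern 8-way SXM system with 700W GPUs
print(f"8x 700W SXM server:  ~{system_power_kw(700, 8, 2000):.1f} kW")   # ~7.6 kW
```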

In 2025, we fully expect these systems to use well over 10kW per rack as each of the eight accelerators starts to tip past the 1kW TDP mark. In North America, there are still a lot of 15A/20A 120V racks that cannot even power a single power supply of these large AI servers.
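For anyone doing the branch-circuit math, here is a minimal sketch, assuming the common North American practice of limiting continuous loads to 80% of the breaker rating:

```python
# Continuous power available from legacy North American branch circuits,
# assuming the usual 80% continuous-load limit on the breaker rating.

def usable_watts(volts: float, amps: float, continuous_factor: float = 0.8) -> float:
    """Continuous wattage a branch circuit can supply."""
    return volts * amps * continuous_factor

for amps in (15, 20):
    print(f"120V {amps}A circuit: ~{usable_watts(120, amps):.0f}W continuous")
# -> ~1440W and ~1920W, far short of a single multi-kilowatt AI server power
#    supply, let alone the full 8kW+ system.
```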

While we often talk about CPU and GPU TDPs, much more is going on.

7 COMMENTS

  1. Phrasing it in terms of EV power consumption is really interesting. This rack uses as much power as 3 or 4 fully loaded semi-trucks barreling down the highway at 70mph, 24/7/365. That’s astonishing.

  2. One thing not really mentioned, but if you know you know….

    Why we only talk about power requirements rather than focusing on cooling requirements: the power usage directly informs the BTU load for cooling. 8kW of power into any silicon electronics ≈ 8kW of waste heat. Unlike a motor or lights, which impart work or light as well as waste heat (where heat = lost efficiency), a CPU is all waste heat. The transistor gates do not appreciably move or do work to reduce the heat (not work in the abstract sense, but in the physical sense). A 500W CPU puts out 500W of heat at load.

    This is why a colocation/datacenter will only bother billing you for power usage, not cooling; one equals the other. The facility will certainly cool things, but power in is heat out.

    As to why water cooling now? Because the parts are so high-powered and packed so close together. You are packing 120kW per rack. You could still cool the aisle for 120kW of power, but you have to remove a Tesla’s worth of power usage every hour, which means big in-case fans and big fins on the heatsinks. Facility water will have giant cooling towers outside, which have far more room to dissipate the heat. These are usually direct-to-air, though, and do lose water to evaporation.

  3. As a point of reference a DECsystem-20 KL from around 1976 took 21kW for the processor not counting disks, terminals and other peripherals.

    The way I see it, AMD introduced their 64-bit instruction set around 2000, after which x86 systems started being considered as possible servers. The dot-com bubble burst, and by 2010 the cost-effectiveness of x86 was more important than performance.

    14 years later, power consumption has gone up because the market niche changed. Those x86 systems are now the top-tier performers, while ARM (from the Raspberry Pi to Ampere Altra) fills the low-power, cost-effective roles.

  4. In terms of power efficiency, rather than raw power draw, are there any places where we are seeing regressions?

    I know that, on the consumer side, it’s typical for whoever is having the worse day in terms of design or process or some combination of the two to kick out at least a few ‘halo’ parts that pull downright stupid amounts of power just so that they have something that looks competitive in benchmarks; but those parts tend to be outliers: you either buy from the competitor that is on top at the time or you buy the part a few notches down the price list that’s both cheaper and not being driven well outside its best efficiency numbers in order to benchmark as aggressively as possible.

    With the added pressure of customers who are operating thousands of systems (or just a few, but on cheap colo with relatively tight power limits), do you typically even see the same shenanigans in datacenter parts; and, if so, do they actually move in quantity, or are they just there so that the marketing material can claim a win?

    My understanding is that, broadly, the high TDP stuff is at least at parity, often substantially improved, in terms of performance per watt vs. what it is replacing; so it’s typically a win as long as you don’t fall into the trap of massively overprovisioning; but are there any notable areas that are exceptions to the rule? Specific common use cases that are actually less well served because they are stuck using parts designed for density-focused hyperscalers and ‘AI’ HPC people?

  5. what would be the theoretical power burn of a modern server if we were still stuck on 45nm Nehalem process? :)

  6. While the TDP keeps going up, the efficiency of the chips is much better, or at least the overall CPU is much faster. The E5-2699 v4 was a 22c/44t CPU with a 145W TDP, or 6.59W/core. The EPYC 9754 is a 128c/256t CPU with a 360W TDP, or 2.81W/core. The Broadwell Xeon has a TDP 2.35x higher per core than the EPYC and is quite a lot slower core for core as well.

  7. “A modern AI server is much faster, but it can use 8kW.”
    “In 2025, we fully expect these systems to use well over 10kW per rack”

    Dang. How are they going to manage that kind of exponential efficiency increase in less than a year? Using only 25% more power for an entire rack vs. a single system today. Impressive to be sure.
