Cavium ThunderX2 Context: The Most Important Arm Data Center Release
Every organization has its own evaluation criteria for adopting a new platform. We wanted to focus on four key lenses that potential buyers evaluating the technology may use: the ecosystem, socket performance, platform performance, and the competitive landscape. There are other lenses we see folks in the industry using, but these four will come up in every conversation.
Although there is a lot of focus on the raw performance and general impressiveness of the ThunderX2 platform, we must acknowledge that very few systems are installed into a greenfield ecosystem today. Instead, systems are installed alongside existing applications and against competitive products in the market, so there is much more than raw performance to look into.
Cavium ThunderX2 Ecosystem
When we look at the Cavium ThunderX2 ecosystem, we see a much broader set of products and logos than we saw for the original ThunderX launch. At launch, the Cavium ThunderX2 already has a number of key customers for the new chip, especially in the HPC space.
What you will notice about that list is that these are large HPC vendors and labs along with companies from the US, Asia, and Europe. That should give you some idea of where the chip is seeing the most traction at this point. It is also intriguing because this is the same market that Intel targeted with Xeon Scalable and AVX-512.
Looking at the broader ecosystem, one can see that a large number of companies are involved in the Arm ecosystem at this point. This is well beyond what we saw in 2016, showing how far things have come in the past two years.
Cavium highlighted a few OEM platforms from HPE, Atos, and Cray. All three companies spoke at the launch event in San Francisco.
Gigabyte has been a major ODM partner for Cavium since the original ThunderX generation. We are in the process of reviewing a number of Gigabyte servers, and the overall build quality has improved a great deal. Gigabyte also builds servers as an ODM for other brands, such as the HPE Cloudline 2200 and 2100 series.
Overall, the number of parties involved has increased with ThunderX2’s launch. All of these companies investing in the alternative architecture have a tangible impact: using Arm servers has become more accessible to a broader swath of the server market.
Using the Cavium ThunderX2 Ecosystem
The launch of Cavium ThunderX2 coincides with a completely different ecosystem than we had in 2016 with ThunderX. With ThunderX, the world had a dual socket Arm platform, but the software side needed a lot of work. By April 2016, with the Ubuntu 16.04 LTS release, the Arm ecosystem was improved over previous generations, but at that time we felt that we needed to publish a maturity model to explain our experience. Here is that maturity model:
During our early Arm server testing, even simple tools were not available. Tools as basic as iperf3, which one was accustomed to installing with “sudo apt install -y iperf3” on the x86 side, required compiling from source in our ThunderX days. Now, installing iperf3 along with more complex packages like MariaDB/MySQL, nginx, redis, and others can happen directly from package managers.
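As a setup sketch of what that looks like today (package names are as shipped by Ubuntu; the exact `file` output will vary by distribution):

```shell
# On the ThunderX2 host (Ubuntu 18.04, aarch64) -- no cross-compiling needed:
sudo apt update
sudo apt install -y iperf3 nginx redis-server mariadb-server

# Confirm the installed binary is a native arm64 build:
file "$(command -v iperf3)"   # should report an ARM aarch64 ELF binary
```

The same commands one would type on an x86 box now work unchanged, which is the whole point.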
At STH, we use Docker extensively, and containers are a big deal in today’s infrastructure. aarch64 is now fully supported on Ubuntu 16.04 and 18.04, so it was easy to install and use Docker as we would on an x86 system, with a single exception.
Some of our x86 containers used x86-only base layers, so we had to rebuild our infrastructure from the original Dockerfiles. That process takes at most a few minutes per image.
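A minimal sketch of why the rebuild is usually painless, assuming the original Dockerfile used a multi-arch official base image (the package and entrypoint here are hypothetical stand-ins for one of our images):

```dockerfile
# Dockerfile -- unchanged from the x86 version. The official
# "ubuntu:18.04" tag is a multi-arch manifest, so building this on a
# ThunderX2 host automatically pulls the arm64 variant of the base layer.
FROM ubuntu:18.04
RUN apt-get update && apt-get install -y iperf3
ENTRYPOINT ["iperf3", "-s"]
```

Running `docker build -t iperf3-server .` on the aarch64 host then produces a native arm64 image. Only images that were saved or pushed with x86-only base layers need this rebuild step.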
Beyond Docker, items like KVM virtualization work, as do many other tools. Virtualization is a good feature, but there is at least one key difference.
If you shut down a VM on an Intel Xeon E5-2600 V3 system and then migrate it to an Intel Xeon Scalable or AMD EPYC host, it will start up and work fine. With the Cavium ThunderX2 (or other Arm/Power systems), you are changing architectures, which means you cannot simply boot an existing x86 VM. Although the ecosystem has evolved on a monumental scale, this is still not the solution we are going to recommend to enterprises running VMware or Windows Server clusters and enjoying features like live migration.
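The architecture mismatch is visible right in a guest's libvirt definition; these are hypothetical minimal fragments (machine type names vary by QEMU version), shown only to illustrate why an x86 domain cannot simply be re-registered on an Arm host:

```xml
<!-- x86 guest definition (will not boot on an Arm host): -->
<os>
  <type arch='x86_64' machine='pc-i440fx-2.11'>hvm</type>
</os>

<!-- The equivalent guest on a ThunderX2 host must target aarch64: -->
<os>
  <type arch='aarch64' machine='virt'>hvm</type>
</os>
```

The guest's disk image also contains x86 binaries, so the OS and applications inside it would need to be reinstalled for arm64, not just the definition changed.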
On the hardware side, we installed additional NICs, a Broadcom (formerly LSI) 9305-24i SAS3 controller, and several NVMe SSDs, including Intel Optane, and they all worked out of the box. This is not the experience we had two years ago and is a vast improvement.
Taking a step back, this is an important point. ThunderX in 2016 was really a developer platform. If you wanted to compile software on Arm, you could do so with some work. ThunderX2 in 2018 is a completely different story. We now have an Arm ecosystem that can support open source and closed source projects at a higher level. For example, if you use nginx for your web server, it is trivial to add an aarch64 version and bring ThunderX2 into your Kubernetes cluster or Swarm. What that means is that for many organizations the Cavium ThunderX2 is now suitable for broader DevOps and application team usage, instead of being a developer novelty.
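As one hedged illustration of mixing the two architectures in one cluster: Kubernetes labels every node with its CPU architecture, so a workload can be pinned to the ThunderX2 nodes or left free to float. The deployment name is hypothetical; depending on cluster version the label is `kubernetes.io/arch` or the older `beta.kubernetes.io/arch`:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-arm64
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-arm64
  template:
    metadata:
      labels:
        app: nginx-arm64
    spec:
      # Pin these pods to the ThunderX2 (arm64) nodes.
      nodeSelector:
        kubernetes.io/arch: arm64
      containers:
      - name: nginx
        # Official nginx image is a multi-arch manifest; the arm64
        # variant is pulled automatically on those nodes.
        image: nginx:latest
```

Omitting the `nodeSelector` lets the scheduler place pods on either architecture, provided the image is multi-arch.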
At the same time, there is a gap between using an alternative x86 architecture such as AMD EPYC and an Arm architecture in a virtualized environment and in environments with paid, supported, and pre-compiled applications. The next hurdle Arm architectures need to clear is getting enough market share that the big software vendors see the commercial interest in supporting non-x86. At the time of this writing, an ISV deciding where to spend investment dollars has a larger TAM supporting x86 than they do Arm. It is an evolutionary process to get there and the journey is underway.
I’m through page 3. I’m loving the review so far but I need to run to a meeting.
Looks like a winner. Are you guys doing more with TX2?
It’s crazy Cavium is doing what Qualcomm can’t. All that money only to #failhardcentriq
Cool chart with the 24, 28, 30, and 32 core models
Cavium needs to fix their dev station pricing. $10k+ for two $1800 cpus in a system is too much. Their price performance is undermined by their system pricing
Read the whole thing, very impressed with the TX2 performance and pricing; I think I’m going to try one out. But I was a bit bummed when I found out on page 8 that the most important thing, power usage, wasn’t properly covered and compared to the Intel and AMD systems :(
Welcome competition, always good to see that there is pressure on the market leader.
Microsoft is also working on an ARM version for windows, so this can go the right way…
I’m very confused by some of what you wrote, and the exact testing setups of these platforms are extremely unclear. To cite just one example, the Linpack test where you state:
“Our standard is to run with SMT on since that is what most of the non-HPC environments look like. This is a case where having 256 threads simply is too much. We also ran the test with 32 threads per CPU, or SMT off which yielded a solid improvement. ”
On a 4-way SMT system you get 256 threads by operating 64 cores. You claim that the CPU only has at most 32 cores and in the same statement you re-tested at 32 threads. So…. what exactly did you test? A 32 core CPU that *cannot* have 256 threads? A two-socket 64-core system that can have 256 SMT threads but that was then dropped to a single-socket configuration with only a single 32 core processor?
Please put in a clear and unambiguous table that provides the *real* hardware configurations of all the test systems.
That means:
1. How many sockets were in-use. Were *ALL* the systems dual socket? All single socket? A mixture? I can’t tell based on the article!
2. Full memory configuration. Yes I know about the channel differences, but what are the details.
3. That’s just a start. The article jumps from vague slides about general architecture to out-of-context benchmark results too quickly.
Competition in the server industry is great!
Don – 32 threads per cpu means 64 threads total right? 2x 32 isn’t that hard.
I don’t think this convinced me to buy them. But I’ll at least be watching arm servers now. We run a big VMware cluster so I’d have a hard time convincing my team to buy these since we can’t redeploy in a pinch to our other apps.
We’ll be discussing TX 2 at our next staff meeting. Where can we get system pricing to compare to Intel and AMD?
Can you do more about using this as Ceph or ZFS or something more useful? Can you HCI with this?
Love the write-up. You guys have grown so much and it shows in how much you’re covering on this which is still a niche architecture in the market.
Nice write-up, with plenty of details, on the newly launched chip. Congrats to Cavium.
Cavium launches an Arm server processor, and suddenly Microsoft shows up and reiterates it still wants >50% of data center capacity to be Arm powered. And it’s loving Cavium’s ThunderX2 Arm64 system. Together they designed two-socket Arm servers…
Looks like Cavium is taking on Intel with an Armv8 workstation. Same processor as used by Cray. Interesting. Compared to Xeon, ThunderX2 is good in all aspects like performance, bandwidth, number of cores, sockets, power usage, etc.
Competition in silicon is good for the market.
CaviumInc steps up with amazing 2.2GHz 48-core ThunderX2 part, along with @Cray and @HPE Apollo design wins, and @Microsoft and @Oracle SW support. Early days for #ARM server, but compelling story being told.
ThunderX2 Arm-based chips are gaining more firepower for the cloud.
The Qualcomm Centriq 2400 motherboard had 12 DDR4 DIMM slots and a single 48 core CPU.
The company also showed off a dual socket Cavium ThunderX2. That system had over 100 cores and could handle gobs of memory.
“With list prices for volume SKUs (32 core 2.2GHz and below) ranging from $1795 to $800, the ThunderX2 family offers 2-4X better performance per dollar compared to Qualcomm Centriq 2400 and Xeon…”
Cavium continues to make inroads with the ThunderX2 @Arm-compatible platform..
Nice Coverage. 40 different versions of the chip that are optimized for a variety of workloads, including compute, storage and networking. They range from 16-core, 1.6GHz SoCs to 32-core, 2.5GHz chips ranging in price from $800 to $1,795. Cavium officials said the chips compete favorably with Intel’s “Skylake” Xeon processors and offer up to three times the single-threaded performance of Cavium’s earlier ThunderX offerings.
The ThunderX2 SoCs provide eight DDR4 memory controllers and 16 DIMMS per socket and up to 4TB of memory in a dual-socket configuration. There also are 56 lanes of integrated PCIe Gen3 interfaces, 14 integrated PCIe controllers and integrated SATAv3, GPIOs and USB interfaces.
Kudos to Cavium…
Those power numbers look horrendous. A comparable Intel system would be less than half that draw. In fact, 800W is the realm 2P IBM POWER operates in. I get that it’s unbinned silicon and not the latest firmware, but I can’t see all that accounting for more than ~50-75W. My guess is Broadcom didn’t finish the job before the design was sold to Cavium, and Cavium had to launch it now lest it come up against the next x86 server designs (likely starting to sample late 2018).
I guess when Patrick gets binned silicon with production firmware, he’ll also have to redo the performance numbers, because it’s quite possible the perf numbers will take some hit. 800W! At least it puts paid to the nonsense about the ARM ISA being inherently power efficient. Power efficiency is all about implementation.
The performance looks quite good, but yeah the 800W are a show stopper…
The xeons and epyc processors consume way less than that.
I doubt they can get to the power consumption of the Xeons and EPYCs without lowering the max frequency and voltages quite a lot. If they can do it, then that’s great. But I have some doubts.
For the STREAM benchmark (“Cavium ThunderX2 Stream Triad Gcc7”), I assume the Intel compiler is leveraging the FMA instructions, giving Intel the boost in performance.
Where is the performance per dollar graph? Without it, this is just a list of useless results..
RuThaN – how would you propose measuring performance per dollar? All SKUs used in the performance parts have list prices that are easy to get. Discounts, of course, are a reality in enterprise gear. The ThunderX2 is sub-$1800, which is by far the least expensive.
Beyond the chips, what system/configuration are you using them in? How do you factor in the additional memory capacity of ThunderX2 versus Skylake-SP? Will that mean fewer systems deployed?
What cost for power/rack/networking should we use for the TCO analysis?
I do not think performance per dollar at the CPU level is a metric those outside the consumer space look at too heavily, versus at least looking at system cost. For example, this is a fairly basic TCO model we do: https://www.servethehome.com/deeplearning11-10x-nvidia-gtx-1080-ti-single-root-deep-learning-server-part-1/
Failure to publish measured power during *every* benchmark run is evasive. This is critical data, for the spread of workloads, and allows calculating energy efficiency.
Please be honest and report the data. Caveats are fine but failure to report is not fine.
Richard Altmaier – thank you for sharing your opinion. There are two components for sure, performance and power consumption. Both are certainly important, but for this review, performance seemed ready, power did not due to a variety of noted factors.
As mentioned, the test system we have is fairly far from what we would consider comparable to the AMD/Intel platforms that have been in our labs for more than a year. We do enough of these that it is fairly easy to see that power is higher than it should be. We do not want to publish numbers we are not confident in, lest they get used by competitors.
We also mentioned that there will be a follow-up piece to this. The other option was to publish zero power numbers. Despite your opinion, performance alone is a compelling story. Unlike the x86 side, the ARM side has never had a platform that can hit this level of performance which makes the raw performance numbers quite important themselves.
BTW – There was a well-known Intel executive also named Richard Altmaier.
Would love to see the commands used to generate these results, especially on STREAM on the 8180. I’ve not seen more than ~92GB/s with 768GB installed across all 6 channels with OpenMP parallelization across all 56 threads…