ASUS RS720A-E11-RS24U Review: AMD EPYC 7763 and NVIDIA A100 in 2U


ASUS RS720A-E11-RS24U Power Consumption

Power consumption of this server is largely dictated by the GPUs and CPUs used. With dual AMD EPYC 7763 CPUs and four NVIDIA A100 GPUs, we had a lot of power-hungry components.

ASUS RS720A E11 RS24U Power Supplies

At idle, our system was using about 0.53kW with the four NVIDIA A100 GPUs, dual AMD EPYC 7763 CPUs, and 16x 32GB DIMMs. We were fairly easily able to push the system above 1.5kW and into the 1.9-2.0kW range under load. The key implication is that roughly 2kW in a 2U chassis works out to around 1kW per U of power density. Many racks cannot handle that kind of density, so, as with all servers, we will simply remind our readers to have a plan for this kind of power density if this is a configuration you want to deploy.
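For readers planning around this kind of power density, here is a minimal sketch of how one might log wall power during a load test. It assumes the BMC supports the standard IPMI DCMI power reading command (most modern ASPEED-based BMCs do) and that ipmitool is installed; the exact output format can vary with BMC firmware, so the parsing here is an assumption, not a guarantee.

```python
import re
import subprocess
import time

def read_power_watts() -> int:
    """Read instantaneous wall power from the BMC via IPMI DCMI.

    Assumes the local BMC answers 'ipmitool dcmi power reading' and
    reports a line like 'Instantaneous power reading: 528 Watts'.
    """
    out = subprocess.check_output(
        ["ipmitool", "dcmi", "power", "reading"], text=True
    )
    match = re.search(r"Instantaneous power reading:\s+(\d+)\s+Watts", out)
    if not match:
        raise RuntimeError("BMC did not return a DCMI power reading")
    return int(match.group(1))

if __name__ == "__main__":
    # Poll once per second; pipe the output to a file during a load test
    # to see idle vs. peak draw for your own configuration.
    while True:
        print(f"{time.strftime('%H:%M:%S')}  {read_power_watts()} W")
        time.sleep(1)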

STH Server Spider: ASUS RS720A-E11-RS24U

In the second half of 2018, we introduced the STH Server Spider as a quick reference to where a server system’s aptitude lies. Our goal is to give a quick visual depiction of the types of deployments a server is targeted at.

STH Server Spider ASUS RS720A E11 RS24U

This system is clearly focused on GPU/ accelerator density, but it also has a lot more CPU, memory, and storage capacity than the ASUS ESC4000A-E10 single-socket 2U server. We do not have 3.5″ drive bays for capacity storage, and we only have dual 10Gbase-T networking in our test configuration. One could use the low-profile slot for more networking, but the system would still not be extremely dense on that side.

ASUS RS720A E11 RS24U Riser Options

We are just going to quickly note that ASUS has options for more networking density, but we are doing the STH Server Spider based on the configuration we had. Clearly, the focus of our configuration was on the GPU/ accelerator density and storage instead of having many PCIe slots for networking.

Final Words

Overall, this is a very interesting system. ASUS combines a lot into a relatively short 840mm chassis. We get two 64-core CPUs, which are roughly equivalent to three 40-core Intel Xeon Platinum 8380 CPUs in terms of compute. ASUS did a great job getting compute density to match the accelerator density.

ASUS RS720A E11 RS24U AMD EPYC 7003 Sockets With 32x DIMMs

Having the ability to add four PCIe NVIDIA A100 GPUs and keep everything cool is great. It does require adding the extra internal fans and the external fan, but then again, this is around a 1kW per U configuration, which means cooling is going to be a major challenge. While we tested with NVIDIA A100 40GB PCIe GPUs because that is what we had access to, this system could use other GPUs or network/ storage/ AI accelerators just as easily. We also have a lot of storage potential on the front of the system.
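For a four-GPU build like this, a quick way to sanity-check the cooling under load is to sample per-GPU power draw and temperature. The sketch below uses nvidia-smi's standard CSV query fields and assumes the NVIDIA driver is installed; the helper name is our own, not part of any tool.

```python
import subprocess

def gpu_power_and_temp():
    """Sample per-GPU power draw (W) and temperature (C) via nvidia-smi.

    Uses the standard nvidia-smi query-gpu fields; returns a list of
    (index, power_watts, temp_celsius) tuples.
    """
    out = subprocess.check_output(
        [
            "nvidia-smi",
            "--query-gpu=index,power.draw,temperature.gpu",
            "--format=csv,noheader,nounits",
        ],
        text=True,
    )
    readings = []
    for line in out.strip().splitlines():
        idx, power_w, temp_c = (field.strip() for field in line.split(","))
        readings.append((int(idx), float(power_w), int(temp_c)))
    return readings

if __name__ == "__main__":
    for idx, power_w, temp_c in gpu_power_and_temp():
        print(f"GPU {idx}: {power_w:.0f} W, {temp_c} C")
```

Running this periodically during a sustained load lets one confirm the GPUs are holding their boost clocks rather than thermally throttling in a dense 2U chassis.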

ASUS RS720A E11 RS24U 4x NVIDIA A100 Stack

The one item we wish this server had more of is onboard networking. The configuration does have higher-speed networking with dual 10Gbase-T, but when paired with four $10,000+ accelerators, high-end CPUs, and storage, this ends up being a configuration that can easily sell for $60,000+, which makes the dual 10Gbase-T option seem imbalanced. This can be remedied by adding a NIC in place of the PIKE II card, though.

Overall, it was really interesting to see how this PCIe Gen4 platform takes top-end NVIDIA accelerators and AMD CPUs and combines them with some of the newer server design trends, such as cabled accelerator cages/ risers and the latest BMC, to make an efficient and relatively compact 2U server.

6 COMMENTS

  1. You guys are really killing it with these reviews. I used to visit the site only a few times a month, but over the past year it's been a daily visit; so much interesting content being posted, and very frequently. Keep up the great work :)

  2. The power supplies interrupt the layout. Is there any indication of a 19″ power shelf/busbar power standard like OCP? Servers would no longer be standalone, but would have more usable volume and improved airflow. There would be overall cost savings as well, especially for redundant A/B datacenter power.

  3. Was this a demo unit straight from ASUS? Are there any system integrators out there who will configure this and sell me a finished system?
