Gigabyte H261-Z60 Storage Performance
Storage in the Gigabyte H261-Z60 is primarily driven by SATA interfaces. As a result, storage performance is not as much of a focus as it would be for the H261-Z61 NVMe variant. Still, we wanted to compare high-quality SSDs and HDDs to give a sense of what one can expect from each node.
Deploying today, unless cost is an enormous constraint, we would opt for 1.92TB SSDs over hard drives. We see the Gigabyte H261-Z60 primarily using its SATA drive bays for boot devices, with NVMe or network storage handling primary storage duties.
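For readers who want to run a quick SSD-versus-HDD comparison on a node's SATA bays, here is a minimal sketch that drives fio from Python. The device paths and the 4K random read profile are illustrative assumptions, not the exact workloads used in this review; it assumes fio is installed and the script can read the raw devices.

```python
#!/usr/bin/env python3
"""Minimal sketch: compare 4K random read IOPS on a SATA SSD vs. an HDD with fio.

Device paths below are placeholders; point them at the drives under test.
"""
import json
import subprocess

DEVICES = {
    "sata_ssd": "/dev/sdb",   # hypothetical 1.92TB SATA SSD
    "sata_hdd": "/dev/sdc",   # hypothetical nearline SATA HDD
}

def run_fio(name: str, device: str) -> float:
    """Run a short 4K random read test and return the measured IOPS."""
    cmd = [
        "fio",
        f"--name={name}",
        f"--filename={device}",
        "--rw=randread",
        "--bs=4k",
        "--iodepth=32",
        "--ioengine=libaio",
        "--direct=1",
        "--runtime=60",
        "--time_based",
        "--readonly",             # non-destructive: read-only workload
        "--output-format=json",
    ]
    result = json.loads(
        subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    )
    return result["jobs"][0]["read"]["iops"]

if __name__ == "__main__":
    for label, dev in DEVICES.items():
        print(f"{label} ({dev}): {run_fio(label, dev):,.0f} IOPS")
```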
Compute Performance and Power Baselines
One of the biggest areas where manufacturers can differentiate their 2U4N offerings is cooling capacity. As modern processors heat up, they lower clock speeds, which decreases performance. Fans spin faster to compensate, which increases power consumption and affects power supply efficiency.
STH goes to extraordinary lengths to test 2U4N servers in a real-world scenario. You can see our methodology here: How We Test 2U 4-Node System Power Consumption.
Since this was our first AMD EPYC test, we used four 1U servers from different vendors to compare power consumption and performance. The STH “sandwich” ensures that each system is heated from the top and bottom, just as it would be in a dense deployment.
This type of configuration has an enormous impact on some systems. All 2U4N systems must be tested in a similar manner or else performance and power consumption results are borderline useless.
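Our published methodology meters power externally at the rack PDU. As a rough in-band alternative, one can poll the BMC over IPMI while a benchmark runs; the sketch below is an assumption-laden illustration of that approach (hostname and credentials are placeholders, and it presumes the BMC supports the DCMI power reading command), not a substitute for external metering.

```python
#!/usr/bin/env python3
"""Minimal sketch: log chassis power draw to CSV while a benchmark runs,
using ipmitool's DCMI power reading over the network."""
import csv
import re
import subprocess
import time

BMC_HOST = "10.0.0.10"      # hypothetical BMC/CMC address
BMC_USER = "admin"          # placeholder credentials
BMC_PASS = "password"
INTERVAL_S = 5              # seconds between samples
DURATION_S = 600            # total logging window

def read_power_watts() -> int:
    """Query the BMC's instantaneous power reading via ipmitool DCMI."""
    out = subprocess.run(
        ["ipmitool", "-I", "lanplus", "-H", BMC_HOST, "-U", BMC_USER,
         "-P", BMC_PASS, "dcmi", "power", "reading"],
        capture_output=True, text=True, check=True,
    ).stdout
    match = re.search(r"Instantaneous power reading:\s+(\d+)\s+Watts", out)
    return int(match.group(1)) if match else -1

if __name__ == "__main__":
    with open("power_log.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp", "watts"])
        end = time.time() + DURATION_S
        while time.time() < end:
            writer.writerow([time.strftime("%Y-%m-%dT%H:%M:%S"), read_power_watts()])
            f.flush()
            time.sleep(INTERVAL_S)
```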
Compute Performance to Baseline
We loaded the Gigabyte H261-Z60 nodes with 256 cores and 512 threads worth of AMD EPYC CPUs. Each node also had a 10GbE OCP NIC and a 100GbE PCIe x16 NIC. We then ran one of our favorite workloads on all four nodes simultaneously for 1400 runs. We threw out the first 100 runs' worth of data and considered the system sufficiently heat soaked from the 101st run onward. The remaining runs are used to keep each machine warm until all systems have completed their runs. We also used the same CPUs in both sets of test systems to remove silicon variation from the comparison.
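The warm-up/heat-soak pattern described above is simple to reproduce. Here is a minimal sketch of it; the benchmark command, run count handling, and timing metric are placeholders rather than the actual STH workload.

```python
#!/usr/bin/env python3
"""Minimal sketch of a heat-soak benchmark loop: run a workload repeatedly,
discard the first 100 warm-up runs, and report statistics only on the rest."""
import statistics
import subprocess
import time

BENCH_CMD = ["./run_benchmark.sh"]   # hypothetical per-run benchmark command
TOTAL_RUNS = 1400
WARMUP_RUNS = 100                    # discarded so the chassis is fully heat soaked

def run_once() -> float:
    """Execute one benchmark iteration and return its wall-clock time in seconds."""
    start = time.perf_counter()
    subprocess.run(BENCH_CMD, check=True, capture_output=True)
    return time.perf_counter() - start

if __name__ == "__main__":
    timings = [run_once() for _ in range(TOTAL_RUNS)]
    soaked = timings[WARMUP_RUNS:]   # keep only post-warm-up, heat-soaked runs
    print(f"heat-soaked runs: {len(soaked)}")
    print(f"mean:  {statistics.mean(soaked):.2f}s")
    print(f"stdev: {statistics.stdev(soaked):.2f}s")
```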
As you can see, the Gigabyte H261-Z60 nodes are able to cool their CPUs essentially on par with their 1U counterparts. That is a testament to how well the system is designed. We had to alter the Y-axis here to show that there was any difference at all; with a 0-101% axis, the difference would have been less than a pixel. This is a great result for the Gigabyte H261-Z60.
That’s a killer test methodology. Another great STH review.
Great STH review!
One thing though – how about linking the graphics to full-size image files? It's really hard to read the text inside these images…
Monster system. I can’t wait to hear more about the EPYC 7371s
I can’t wait to see these with Rome. I wish there was more NVMe though
My old Dell C6105 burned in a fire last May and I hadn’t fired it up for a year or more before that, but I recall using a single patch cable to access the BMC functionality on all 4 nodes. There may be critical differences, but that ancient 2U4N box certainly provided single-cable access to all 4 nodes.
Other than the benefits of html5 and remote media, what’s the standout benefit of the new CMC?