ASRock Rack ROMED6U-2L2T Test Configuration
Our work on the ROMED6U-2L2T review started during the AMD EPYC 7002 “Rome” era and continued past the AMD EPYC 7003 “Milan” launch. Here is the test configuration we used:
- CPUs: AMD EPYC 7232P, EPYC 7502P, EPYC 7713
- Memory: 6x 16GB DDR4-3200 ECC RDIMM, 6x 32GB DDR4-3200 ECC RDIMM
- Network: Mellanox – NVIDIA ConnectX-6 (1x 200GbE port)
- Storage: 2x Intel S3710 400GB
- Power: EVGA P2 850W
- OS: Ubuntu 20.04.2 LTS
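As an aside for anyone assembling a similar build, here is a minimal sketch (assuming a Linux host, dmidecode installed, and root access) of how one might confirm that all six DIMM slots, and therefore all six memory channels, are actually populated. The slot naming and field layout can vary by BIOS, so treat this as a guide rather than a definitive tool:

```python
import re
import subprocess

# Minimal sketch: list populated DIMM slots via dmidecode (requires root).
# Slot naming ("Locator") varies by BIOS, so treat the output as a guide only.
out = subprocess.run(
    ["dmidecode", "-t", "memory"], capture_output=True, text=True, check=True
).stdout

populated = [
    block
    for block in out.split("\n\n")
    if "Memory Device" in block and "No Module Installed" not in block
]

print(f"Populated DIMM slots: {len(populated)}")
for block in populated:
    locator = re.search(r"^\tLocator:\s*(.+)$", block, re.M)
    speed = re.search(r"^\tSpeed:\s*(.+)$", block, re.M)
    print(f"  {locator.group(1) if locator else '?'}: "
          f"{speed.group(1) if speed else 'unknown speed'}")
```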
There are certainly a few notes from this build that are worth sharing. Perhaps the biggest is that we had to use a beta BIOS to get AMD EPYC 7003 series support. That worked for us, but using something labeled Lab/Beta may not be what one wants in production. Still, the fact that it works, and that EPYC 7003 series support is listed for the platform, makes us think there is a clear path to mainstream EPYC 7003 support.
The CPUs are very interesting. Since there are only six memory channels, the idea of using a 64-core CPU is intriguing. One gets more memory bandwidth than the standard AMD Threadripper, but less than the Threadripper PRO and the 8-channel EPYC 7002/7003 configurations. Most of our readers are likely going to use “P” series parts since these are priced at a discount for single-socket configurations. To us, the sweet spot is likely the EPYC 7402P/EPYC 7502P and the new EPYC 7443P/EPYC 7543P. These provide a good mix of value, core count, and memory bandwidth per core. AMD also kept a number of lower-power, 4-channel-optimized SKUs such as the EPYC 7282 that may be very interesting to users of a platform like this. You can learn more about those SKUs here:
Perhaps the most surprising aspect of this system is that there are so many options given the rich I/O capabilities.
ASRock Rack ROMED6U-2L2T Performance
In terms of performance, we already have a long line of EPYC benchmarks, so instead we wanted to focus on the impact of the 6-channel memory configuration across a few of the different CPU options compared to our reference numbers. Of course, if we wanted a big number, something like STREAM on the higher-end parts would show roughly a 25% reduction, but we wanted to look at something closer to a real-world use case. Also, some of these results are within +/- 1% of our standard figures, which we consider test variation and not significant.
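That 25% figure falls out of the channel count directly. Here is a quick back-of-the-envelope calculation (theoretical peak numbers only, assuming DDR4-3200 and a 64-bit channel; measured STREAM results land well below these):

```python
# Theoretical peak bandwidth per DDR4-3200 channel: 3200 MT/s x 8 bytes = 25.6 GB/s
per_channel_gbs = 3200e6 * 8 / 1e9

for channels in (4, 6, 8):
    print(f"{channels} channels: {channels * per_channel_gbs:.1f} GB/s theoretical peak")

# 4 channels: 102.4 GB/s
# 6 channels: 153.6 GB/s
# 8 channels: 204.8 GB/s  -> six channels is a 25% reduction versus eight
```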
Perhaps the best way to summarize the above is with a few key performance takeaways:
- For the 4-channel optimized EPYC 7232P, we saw very little impact from the memory bandwidth decrease. This is expected given that six channels is more than this SKU is optimized to use.
- For the 32-core EPYC 7502P, we saw some impact, but it was nowhere near the 25% one would see in an entirely memory bandwidth-bound workload. We can expect little to no loss for applications that have relatively lower memory access needs.
- With the 64-core EPYC 7713 (there is also an EPYC 7713P that should perform the same) we saw a somewhat greater impact, but it was not alarming. If one simply wants 64 cores and all of the I/O in an mATX platform, this performance is going to make sense. If one is optimizing for a more balanced configuration, then lower core count options may be more reasonable.
These results are going to vary based on configuration options and workload. In general, it is worth noting that some SKUs take a hit when fewer memory channels are used. Realistically, if a larger motherboard/chassis is an option, it is better from a performance standpoint to get a motherboard with all eight memory channels and populate them when using higher-end SKUs. Again, this makes logical sense, but we want to point it out for our readers.
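For readers who want to get a rough memory bandwidth number on their own configuration, the proper tool is the multi-threaded STREAM benchmark (C/OpenMP). As a minimal illustration of the triad kernel only, here is a hedged NumPy sketch; the array size is an assumption, and a single Python process will not saturate a multi-channel EPYC memory subsystem the way STREAM does:

```python
import time
import numpy as np

# Rough STREAM-triad-style kernel: a[i] = b[i] + scalar * c[i]
# This is not the official STREAM benchmark; treat the result as a lower bound.
n = 100_000_000        # ~0.8 GB per array; adjust to fit available RAM
scalar = 3.0
b = np.full(n, 1.0)
c = np.full(n, 2.0)
a = np.empty(n)

best = float("inf")
for _ in range(5):     # take the best of a few runs
    t0 = time.perf_counter()
    np.multiply(c, scalar, out=a)   # a = scalar * c
    np.add(a, b, out=a)             # a = a + b
    best = min(best, time.perf_counter() - t0)

# STREAM counts 24 bytes moved per element for triad (read b, read c, write a).
print(f"Triad bandwidth: {24 * n / best / 1e9:.1f} GB/s")
```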
We are just going to note that we are not doing power and noise testing on this platform since those are impacted more by the chassis and configuration options than by the motherboard itself.
Next, we are going to discuss the market perspective followed by our final words.
ASRock Rack should start building boards with onboard SFP connectors for Ethernet >= 10G.
That would make them much more usable.
Thanks for this long-awaited review!
I think it’s overkill for a high-density NAS.
But why did you put a 200Gb NIC in it and say nothing about it?
Incredible. Now I need to make an excuse that we need this at work. I’ll think of something
Hello Patrick,
Thanks for the excellent article on these “wild” gadgets! A couple of questions:
1. Did you try to utilize any of the new Milan memory optimizations for the 6-channel memory?
2. Did you know this board has a second M.2? Do you know where I can find an M.2 that will fit that second slot and run at PCIe 4.0 speeds?
3. I can confirm that this board does support the 7H12. It is also the only board that I could find that supports the “triple crown”: Naples, Rome, and Milan (I guess Genoa support is pushing it!)
4. The Cerberus case will fit the Noctua U9-TR4 and cools the 7452 and 7502 properly.
5. Which OS/hypervisor did you test with, and did you get the ConnectX-6 card to work at 200Gbps? With VMware, those cards ran at 100Gbps until the very latest release (7.0U2).
6. Which BIOS/firmware did you use for Milan support? Was it capable of “dual booting” like some of the Tyan BIOSes?
Thanks again for the interesting article!
Beautiful board
@Sammy
I absolutely agree. Is 10GBase-T really prevalent in the wild? Everywhere I’ve seen that uses >1G connections does it over SFP. My server rack has a number of boards with integrated 10GBase-T that goes unused because I use SFP+ and have to install an add-in card.
Damn. So much power and I/O in an mATX form factor. I now have to imagine (wild) reasons to justify buying one for playing with at home. Rhaah.
@Sammy, @Chris S
Yes, but if they had used SFP+ cages, there would be people asking why they didn’t use RJ-45 :-) I use a 10GbE SFP+ MikroTik switch and most of the systems connected to it are SFP+ (SolarFlare cards), but I also have one mobo with 10GBase-T connected via an Ipolex transceiver. So yes, if I were to end up using this ASRock mATX board, it would also require an Ipolex transceiver :-(
My guess is that ASRock thinks these boards are going to be used by start-ups, small businesses, SOHO, and maybe enthusiasts who mostly use Cat5e to Cat6A cabling, not fiber or DAC.
The one responsible for the Slimline 8654-8i to 8x SATA cable on AliExpress is me. Yes, I had them create this cable. I have a whole write-up on my blog, and I also posted it here on the STH forums. I’m in the middle of having them create another cable for me to use with next-gen Dell HBAs as well.
The reason the cable hasn’t arrived yet is that they have massive amounts of orders, with thousands of cables coming in, and they seemingly don’t have spare time to make a single cable.
I’ve been working with them closely for months now, so I know them well.
Those PCIe connectors look like surface mounts with no reinforcement. I’d be wary of mounting this in anything other than a flat/horizontal orientation for fear of breaking one off the board with a heavy GPU…
@Sammy and @Chris S… what is the advantage of SFP+ 10Gb over 10GBase-T?
I remember PCIe slots going without reinforcement for years, Eric. Most servers are horizontal anyway.
I don’t see these in stock anymore. STH effect in action
@Patrick, wouldn’t 6-channel memory be perfect for 24-core CPUs such as the 7443P, or would 8 channels actually be faster? Isn’t it 6*4 cores?
@Erik, answering for Sammy:
SFP+ lets you choose DAC or MM/SM fiber connectors. 10GBase-T is distance-limited anyway and can electrically draw more power than SFP+ options.
Also, 10G existed as SFP+ a lot longer than 10GBase-T has. I’ve got SFP+ connectors everywhere and SFP+ switches all over the place. I don’t have many 10GBase-T cables (Cat6A or better is recommended for any distance over 5 ft), and I only have one switch around that does 10GBase-T at all.
QSFP28 and QSFP switches can frequently break out to SFP+ connectors as well, so newer 100/200/400Gb switches can connect to older 10G SFP+ ports. SFP+ to 10GBase-T conversions exist but are relatively new and power hungry.
TL;DR: SFP+ became a server standard of sorts in a way that 10GBase-T never did.
One thing I noticed is that even if ASRock wanted SFP+ on this board they probably couldn’t fit it. There isn’t enough distance between the edge of the board and the socket/memory slots to fit the longer SFP+ cages.
Thanks Patrick, that is quite a nice find. Four x16 PCIe slots plus a pair of M.2 slots, a lot of SATA, and those 3 slim x8 connectors is, IMO, nearly a perfect balance of I/O. Too bad about the DIMMs. Maybe ASRock can make an E-ATX board with the same I/O balance and 8 DIMMs. I still think the TR Pro boards with their 7 x16 slots are very unbalanced.
My main issue with the 10GBase-T (copper) ports is the heat. They run much hotter than the SFP+ ports.
I believe there is a way to optimize this board with its 6 channels if using the new Milan chips. I was looking for some confirmation.
Patrick, thank you for your in-depth review. It hit every mark on what I was looking for. The ASRock ROMED6U-2L2T is a fantastic board for the I/O; I can’t find another that works like a data center server in mATX form. Is there a recommended server-style or other case that you or anyone has tested this board in, along with the fans, cooling, and power required for a fully populated build that maintains server-level reliability? I love that this has IPMI.
I would like to see a follow-on review with 2x 100G running full memory channels on this board, and running RoCEv2 and RDMA.
Great review, thank you. I am still confused about what, if anything, I should connect to the power sockets labeled 1 and 2 in the Quick Installation Guide. The PSU’s 8-pin 12 V power connector fits, but is it necessary?
@Martin Hayes
The PSU 8-pin/4+4 CPU connectors are what you use to connect there. You can leave the second one unpopulated; however, with higher-TDP CPUs it is highly recommended for overall stability.
Thanks Cassandra – my power supply only has one 8-pin/4+4 CPU plug, but I guess I can get an extender so that I can supply power to both ports.
OMG, this is packed with more features than my dual-socket Xeon. I must try this. Only ASRock can come up with something like this.