Gigabyte ME33-AR0 AMD EPYC 8004 Motherboard Review


Gigabyte ME33-AR0 Block Diagram

Taking a look at the block diagram, we can see how this is all wired.

Gigabyte ME33 AR0 Block Diagram

There are 80 PCIe Gen5 lanes, 48 of which go to the PCIe x16 slots. The 1GbE NICs connect through the CPU as well. Compared to single-socket Xeon systems from years ago, this is much less complex. Another way to look at it is that these platforms are designed to replace dual-socket Xeon servers from 3-7 years ago. Not only does this simplify the system architecture to a single socket, but it also removes the PCH that would have been used to connect components like SATA drives and the 1GbE Broadcom NIC.
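If you want to confirm how those slot-connected lanes actually train once a system is up, a quick check from Linux works. Below is a minimal Python sketch (an illustration, not part of the review) that reads the standard sysfs attributes to print the negotiated link width and speed for every PCIe device:

```python
# Minimal sketch: list negotiated PCIe link width/speed from Linux sysfs.
# Useful for confirming that slot-connected devices train at Gen5 x16.
import glob
import os

for dev in sorted(glob.glob("/sys/bus/pci/devices/*")):
    width_path = os.path.join(dev, "current_link_width")
    speed_path = os.path.join(dev, "current_link_speed")
    if not (os.path.exists(width_path) and os.path.exists(speed_path)):
        continue  # device exposes no PCIe link attributes
    with open(width_path) as f:
        width = f.read().strip()
    with open(speed_path) as f:
        speed = f.read().strip()
    print(f"{os.path.basename(dev)}: x{width} @ {speed}")
```

The same information is available from `lspci -vv` in the LnkSta field; the sysfs route is just easier to script.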

Gigabyte ME33-AR0 Management

Management is handled by an ASPEED AST2600 BMC. This is the industry standard baseboard management controller of this generation.

Gigabyte ME33 AR0 ASPEED BMC

Gigabyte has a standard management interface based on the AMI MegaRAC platform, and it includes features like iKVM functionality.
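Because the stack is MegaRAC-based, the BMC also exposes a DMTF Redfish REST API alongside the web UI. As a hedged sketch, here is how basic system inventory could be pulled with Python; the BMC address and credentials are placeholders, and verify=False is only appropriate for the self-signed certificate these BMCs typically ship with:

```python
# Minimal sketch: query a MegaRAC BMC's Redfish service for system inventory.
# BMC address and credentials are placeholders; adjust for your unit.
import requests

BMC = "https://192.0.2.10"          # placeholder BMC IP
AUTH = ("admin", "unit-password")   # credentials from the unit's label

resp = requests.get(f"{BMC}/redfish/v1/Systems", auth=AUTH, verify=False)
resp.raise_for_status()
for member in resp.json().get("Members", []):
    system = requests.get(f"{BMC}{member['@odata.id']}",
                          auth=AUTH, verify=False).json()
    print(system.get("Model"), "-", system.get("PowerState"))
```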

Gigabyte Management Dashboard

Something that Gigabyte has started doing is using unique per-unit passwords for its management interfaces.

Gigabyte ME33 AR0 Unique Password

This is something that has become the industry standard due to new regulations. We covered this some time ago in our piece: Why Your Favorite Default Passwords Are Changing.
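That per-unit password is also the first credential worth rotating. A minimal sketch of doing so through Redfish is below; /redfish/v1/AccountService/Accounts is the DMTF-standard location, but the account name, address, and passwords here are placeholders, so confirm the exact layout on your firmware build:

```python
# Minimal sketch: rotate the admin password on a Redfish-capable BMC.
# All addresses and passwords below are placeholders.
import requests

BMC = "https://192.0.2.10"                    # placeholder BMC IP
AUTH = ("admin", "factory-unique-password")   # password from the label

accounts = requests.get(f"{BMC}/redfish/v1/AccountService/Accounts",
                        auth=AUTH, verify=False).json()
for member in accounts.get("Members", []):
    url = f"{BMC}{member['@odata.id']}"
    acct = requests.get(url, auth=AUTH, verify=False).json()
    if acct.get("UserName") == "admin":
        # PATCH the account with a new strong password
        r = requests.patch(url, json={"Password": "N3w-Str0ng-Pass!"},
                           auth=AUTH, verify=False)
        r.raise_for_status()
        print("admin password updated")
```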

Gigabyte ME33-AR0 Performance

Overall, we tried several different processors in this motherboard, including the AMD EPYC 8534P(N) SKUs and the 8324P(N) parts.

Gigabyte ME33 AR0 AMD EPYC 8004 Performance To Baseline

We are testing three platforms, including this one, at the same time. Realistically, they all achieve roughly similar performance with the same CPUs being cycled between them.

Next, let us get to our key lessons learned.

14 COMMENTS

  1. With the CPU & RAM placement like this, what can one do with these PCIe slots? Are they only suitable for NICs?

  2. Goodness, such an I/O-heavy board with only 1G ethernet? I fail to understand why mid-range or higher-end boards don’t have at least one 10G port and a 2.5G port

  3. Hello,

    I am running HPC servers for the FEA consulting that I do. Could you comment on the idle power of this board? I am actively looking for HPC server solutions with low idle power consumption to replace an older 4-socket Xeon system. My current dual-socket EPYC 9554 system pulls around 475W at idle, and my older 4-socket Xeon system pulls close to 750W at idle.

  4. Indeed Eric, a word about the placement of the PCIe slots would be interesting. What’s the idea here? Always use risers? Did you talk to Gigabyte about it?

  5. These boards aren’t usually bought by those who use GPU accelerators;
    I figure mostly NICs, HBAs, or U.2 breakout cards

  6. That PCIe placement is a Gigabyte trademark, I guess, and the most terrible design that I have seen.

    You can’t use any card that is bigger than the x16 slot. Quick reminder: almost any RAID controller has cables that go out the side, not the top.

    You wanna install something like a PERC H755? Hah, shame on you, you only have 1(!) slot to do it; otherwise the card will sit on top of the memory DIMMs.

    And the spacing between the PCIe slots is more for GPUs (since they are two slots wide), but nope, you can’t use it for that.

    So I guess it’s just for HBA/retimer cards, but those are ALSO bigger than the x16 slot, so the cables from them will go into the CPU heatsink. Yikes.

    I just don’t understand why Gigabyte continues to do this..

  7. In 2024, at least 10GbE should be the bare minimum standard.

    If I were the designer of this board, I would have swapped the placement of the M.2 with the DIMM/CPU.

    That at least would render the PCIe slots usable….

  8. That is the stupidest board layout I have ever seen. They should have left out the PCIe slots and sold it cheaper, since they are blocked by the CPU & RAM.

  9. Folks, I see that no one in the commenting crowd gets this. This is a dedicated home/lab/SOHO server board, as were the previous AR0-series boards.

    Thanks to no high-power components, besides the CPU, this board is fully compatible with desktop/tower cases as well as ANY cheap/simple rack chassis.

    The absence of a HOT 10GbE chip is a FEATURE. Not a bug. Install an SP3 Noctua, and there ya go!

    There are 3 x16 slots for suitable NICs, HBAs, or x4 NVMe carriers. All of these are full-height/half-length cards. If someone needs spinning rust, there are 16 native SATA ports for some ZFS goodness.

    Lastly, the bottom slot is full x16 PCIe gen4, so even a monster GPU can be supported in most cases.
