The Gigabyte ME33-AR0 is the company’s single-socket AMD EPYC 8004 motherboard. We have started our EPYC 8004 “Siena” series, and this is an understated, perhaps even under-hyped, platform that a lot of folks could be taking advantage of. Gigabyte’s take on a Siena server platform is unique compared to others on the market. Let us get into it.
Gigabyte ME33-AR0 Hardware Overview
The motherboard is going to look very different from most server boards, but the size is a standard E-ATX 12″ x 13″. That means it will integrate into many different chassis. If you are reading this and doing a double-take on the motherboard, we will go through why it looks the way it does in this article.
The new motherboard uses the AMD SP6 socket. One could be excused for thinking it looks a lot like the Naples through Milan (and Threadripper) SP3 socket. An easy way to tell the two apart is that SP6 has six memory channels and up to two DIMMs per channel, for twelve DIMM slots total.
In the socket go the AMD EPYC 8004 CPUs that scale up to 64 cores using Zen 4c cores, the same cores found in “Bergamo”. One way to think about this is that it is like half a Bergamo from a maximum core count and DDR5 memory channel perspective.
Putting the CPU at the front of the motherboard allows for the system to direct airflow over the CPU heatsink. It also allows for all of the memory channels to be present alongside a number of PCIe slots.
Just to show an example of this, we can look at the Gigabyte G242-Z10 or the Gigabyte MZ32-AR0 as previous-generation examples of how this works. Now, Gigabyte is bringing a similar style of platform using newer technology.
Next to the DIMM slots on the top of the motherboard are two PCIe Gen5 x4 M.2 slots.
Gigabyte also has three x8 MCIO connectors on the leading edge of the motherboard. Two are on the top side.
The other is on the bottom. These give a total of 24 lanes, or enough for six x4 NVMe drives. Gigabyte also includes an MCIO to SATA cable for those who want to use SATA drives. Two of the MCIO connectors can be used for SATA, giving a total of 16 SATA drives.
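For those keeping score on the storage math, here is a quick sketch in Python. The connector counts come from the board itself, while the per-drive lane figures (x4 per NVMe drive, one SATA port per lane) are typical assumptions rather than anything Gigabyte publishes, and the variable names are ours.

```python
# Illustrative lane math for the ME33-AR0's edge MCIO connectors.
# Connector counts are from the overview above; lane mappings are typical assumptions.
mcio_connectors = 3        # x8 MCIO connectors on the leading edge
lanes_per_mcio = 8
lanes_per_nvme = 4         # a typical U.2/U.3 NVMe drive uses a x4 link
sata_capable_mcio = 2      # two of the connectors can be used for SATA

total_lanes = mcio_connectors * lanes_per_mcio   # 24 PCIe lanes
max_nvme = total_lanes // lanes_per_nvme         # 6 x4 NVMe drives
max_sata = sata_capable_mcio * lanes_per_mcio    # 16 SATA drives (one port per lane)

print(f"MCIO lanes: {total_lanes}, max x4 NVMe: {max_nvme}, max SATA: {max_sata}")
```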
There are four PCIe x16 slots. Three are PCIe Gen5 x16. The fourth, or the top slot in this photo, is a PCIe Gen4 x16 slot.
For management, there is an ASPEED AST2600 BMC.
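As a quick aside, here is a minimal sketch of what driving a BMC like this out-of-band can look like, using ipmitool called from Python. The BMC address and credentials below are placeholders, not Gigabyte defaults, so treat this as an assumption-laden example rather than a copy-paste recipe.

```python
# Minimal sketch: poll a BMC out-of-band with ipmitool over the LAN interface.
# BMC_HOST, BMC_USER, and BMC_PASS are placeholders for illustration only.
import subprocess

BMC_HOST = "192.0.2.10"   # hypothetical BMC address
BMC_USER = "admin"        # placeholder credentials
BMC_PASS = "password"

def ipmi(*args: str) -> str:
    """Run an ipmitool command against the BMC and return its stdout."""
    cmd = ["ipmitool", "-I", "lanplus", "-H", BMC_HOST,
           "-U", BMC_USER, "-P", BMC_PASS, *args]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

# Chassis power state and the sensor list (temperatures, fans, voltages)
print(ipmi("chassis", "power", "status"))
print(ipmi("sensor", "list"))
```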
Networking is handled by a Broadcom BCM5720. This is a dual port 1GbE NIC chip.
The top of the motherboard has the ATX power connector and the CPU power connectors.
Next, let us get to how this is all connected.
Comments

Hi Eric,
Could you please provide some info about the power consumption of the board?
Thanks,
Emil
With the CPU & RAM placed like this, what can one do with these PCIe slots? Are they only suitable for NICs?
Goodness, such an I/O-heavy board with only 1G ethernet? I fail to understand why mid-range or higher-end boards don’t have at least one 10G port and a 2.5G port.
What an odd board.
Hello,
I am running HPC servers for the FEA consulting work that I do. Could you comment on the idle power of this board? I am actively looking for HPC server solutions with low idle power consumption to replace an older 4-socket Xeon system. My current dual-socket EPYC 9554 system pulls around 475W at idle, and my older 4-socket Xeon system pulls close to 750W at idle.
Sheesh. What a weird layout.
I hope no one needs any long PCIe cards.
Indeed Eric, a word about the placement of the PCIe slots would be interesting. What’s the idea here? Always use risers? Did you talk to Gigabyte about it?
These boards aren’t usually bought by those who use GPU accelerators.
I figure mostly NICs, HBAs, or U.2 breakout cards.
That PCIe placement is a Gigabyte trademark, I guess, and the most terrible design that I have seen.
You can’t use any card that is larger than the x16 slot. Quick reminder that almost any RAID controller has cable connectors that face the side, not the top.
You want to install something like a PERC H755? Hah, shame on you, you only have one (!) slot where that works; otherwise you will be on top of the memory DIMMs.
And the spacing between the PCIe slots is more for GPUs (since they are two slots wide), but nope, you can’t use it for that.
So I guess it’s just HBA/retimer cards, but those are ALSO bigger than the x16 slot, so their cables will run into the CPU heatsink, yikes.
I just don’t understand why Gigabyte continues to do this.
Is that 150W figure for the whole board? Because that would truly be amazing, if not impossible.
It is 2024; 10GbE should be the bare minimum standard.
If I were the designer of this board, I would have swapped the placement of the M.2 slots with the DIMMs/CPU.
That at least would render the PCIe slots usable…
so close, so far
That is the stupidest board layout I have ever seen. They should have left out the PCIe slots and sold it cheaper, since they are blocked by the CPU & RAM.
Folks, I see that no one in the commenting crowd gets this. This is a dedicated home/lab/SOHO server board, as were the previous AR0 series boards.
Thanks to having no high-power components besides the CPU, this board is fully compatible with desktop/tower cases as well as ANY cheap/simple rack chassis.
The absence of a HOT 10GbE chip is a FEATURE. Not a bug. Install an SP3 Noctua, and there ya go!
There are 3 x16 slots for suitable NICs, HBAs, or x4 NVMe carriers. All of these are full-height/half-length cards. If someone needs spinning rust, there are 16 native SATA ports for some ZFS goodness.
Lastly, the bottom slot is full x16 PCIe gen4, so even a monster GPU can be supported in most cases.