Years ago, the 4U 24-bay 3.5″ storage server was a popular offering. As time progressed, organizations sought greater density, and we started seeing servers offer storage on both sides of the chassis, with 36x 3.5″ bays being common. While it will be classified as a 36-bay system, the Gigabyte S452-Z30 actually has 42 hot-swap bays. More importantly, those bays do not include the internal storage and expansion options, making this a very expandable platform. Unlike storage servers of yesterday, this PCIe Gen4 platform utilizes a single AMD EPYC 7002 (and presumably future EPYC 7003) processor, greatly reducing cost while still providing more connectivity than a dual-socket Intel Xeon solution (128x PCIe Gen4 lanes versus 96x PCIe Gen3 lanes.) In our review, we clearly have a ton of ground to cover, so let us get to it.
Gigabyte S452-Z30 Overview
Lately, we have been splitting our hardware overview section into two parts. First, we are going to discuss the external features of the system. We are then going to discuss the internal components that make the server work.
Gigabyte S452-Z30 External Overview
The front of the 4U chassis is dominated by 24x 3.5″ bays. There are also two USB 3.0 ports on the right rack ear. One of the key advantages of the 4U 36-bay form factor is that the entire system is only 625mm deep, which means it can fit in relatively short racks compared to many top-loading systems.
Something awesome about this generation is that Gigabyte has switched to tool-less drive trays for all 42 hot-swap bays. The one exception is if you want to directly mount a 2.5″ drive in a 3.5″ drive bay. Otherwise, at 4 screws per tray across 42 trays, this design saves 168 screws. That can save hours in configuration and servicing time, so this is certainly a high point of the solution.
On the left front rack ear, we have a number of buttons. The red button has more open space around it than we are used to, which gives it a bit of a lower-quality feel.
Also, if we look between the rack ear and the drive trays, we can see some exposed wiring running from the chassis to the rack ear. This works fine in practice; we tried to intentionally hit the cables when inserting a drive and were not able to. In the future, we hope Gigabyte can update this rack-ear design and make small changes to the button and the exposed wiring. As we will see, the rest of the server is fairly forward-looking, but this is a small detail that would go a long way in perception.
Moving to the rear of the unit, there is a lot going on. Starting at the bottom, there are 12x 3.5″ drive bays. These are also tool-less and SAS/SATA, just like the front-panel bays.
Above the 3.5″ bay section, we see two 1.2kW 80Plus Platinum power supplies. Although this feels like a big system, its power requirements are relatively reasonable, in large part due to the single-socket AMD EPYC 7002 design.
The 2.5″ drive configuration on the rear is certainly more modern than we have seen previously. There are two 2.5″ SATA hot-swap bays, primarily for the OS boot media. Above the motherboard, we have four 2.5″ NVMe/SATA hybrid drive bays. One can put a pair of Optane SSDs in for write caching and a pair of large TLC/QLC drives for read caching, while mirroring both pairs.
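For those wondering how that might look in practice, here is a minimal sketch using ZFS. The pool name (tank) and NVMe device paths are assumptions for illustration; also note that while ZFS can mirror the log pair, cache devices are always striped:

    # Mirrored Optane pair as the write-intent log (SLOG)
    zpool add tank log mirror /dev/nvme0n1 /dev/nvme1n1
    # TLC/QLC pair as the read cache (L2ARC); ZFS stripes cache devices rather than mirroring them
    zpool add tank cache /dev/nvme2n1 /dev/nvme3n1

Other storage stacks partition read and write caching differently, but the four hybrid bays give one the flexibility either way.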
On the rear I/O panel, we get legacy serial and VGA connectors, three USB 3.0 Type-A ports, a management LAN port, and two 1GbE ports. The 1GbE ports are handled by an Intel i350-AM2 NIC, which is a significantly better option than two Intel i210-AT NICs. The Gigabyte team did a great job using the more costly, but better, NIC here.
The other item we see is an array of low-profile expansion slots. The S452-Z30 does not have full-height slots, but that is common in the 36-bay form factor. There is another slot in the middle of the rear I/O: an OCP NIC 2.0 slot. We are going to discuss these slots in more detail in our Internal Overview section.
Before we get to the Internal Overview, we wanted to point out the green tabs. In hyper-scale data centers, such as with Facebook’s OCP platforms, green signals a surface designed to be handled. This may not be evident, but one can actually undo a number of screws on the side of the chassis, then pull the motherboard section out.
There are enough wires attached, as you will see in our Internal Overview, that this is unlikely to be a casual service option. At the same time, it means Gigabyte can make a single chassis and swap different platforms in. It also provides a way to service the components under the motherboard. In older 36-bay systems, this required either hand/arm contortions or a lot of disassembly.
Next, let us get to our internal overview of the system to see how this works.