AIC SB201-TU 2U 24-bay NVMe Storage Server Review

AIC SB201-TU 2U Internal Hardware Overview

Here is the internal view of the server. From front to back, we have the storage bays, then the storage backplanes, fans, CPUs and memory, and finally the I/O expansion.

AIC SB201 TU Internal Overview

The drive backplane is designed for NVMe or SAS/SATA. Our system is configured for 24x NVMe storage, so all twelve connectors are PCIe Gen4 x4.

AIC SB201 TU Backplane 1

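Since each bay is wired as PCIe Gen4 x4, a quick way to confirm that installed drives actually trained at the expected link speed and width is to read the standard PCI attributes Linux exposes in sysfs. Here is a minimal sketch, assuming a Linux host with the drives enumerated under /sys/class/nvme; nothing here is specific to the AIC chassis.

```python
#!/usr/bin/env python3
"""Report the negotiated PCIe link speed and width for each NVMe controller.

Assumes a Linux host: /sys/class/nvme/<ctrl>/device is a symlink to the
underlying PCI device, which exposes current_link_speed/current_link_width.
"""
from pathlib import Path

for ctrl in sorted(Path("/sys/class/nvme").glob("nvme*")):
    pci_dev = ctrl / "device"
    try:
        model = (ctrl / "model").read_text().strip()
        speed = (pci_dev / "current_link_speed").read_text().strip()
        width = (pci_dev / "current_link_width").read_text().strip()
    except OSError:
        continue  # attribute missing or controller went away
    # A Gen4 x4 bay should report "16.0 GT/s PCIe" and a width of 4
    print(f"{ctrl.name}: {model} -> {speed}, x{width}")
```
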
The drive backplanes also have fairly large slots for airflow between the drives and into the rest of the system.

AIC SB201 TU Backplane 2

Here are the six fans. These are not in hot-swap carriers, but that is a lot of cooling for a 24-bay chassis.

AIC SB201 TU Fans

The airflow guide is a hard plastic unit that was easy to install and remove.

AIC SB201 TU Top Airflow Guide

That airflow guide is there to cool two LGA4189 sockets. These are for 3rd Generation Intel Xeon Scalable "Ice Lake" CPUs.

AIC SB201 TU CPU Sockets

Here is a shot of the 2U CPU coolers that come with the server. The motherboard supports up to 270W TDP CPUs, but the base configuration is set for 165W TDP CPUs. We realized this when we tried using Xeon Platinum 8368 CPUs. CPUs of that class are not needed for NVMe storage servers, and Intel has plenty of SKUs at 165W or below.

AIC SB201 TU Heatsink

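The 165W ceiling is a cooling and configuration limit rather than something the CPUs report directly, but the RAPL package power limit (PL1) usually defaults to the installed CPU's TDP, so it makes for a quick sanity check on a running system. This is a minimal sketch, assuming Linux with the intel_rapl powercap driver loaded; firmware can override these limits, so treat the output as a hint rather than a specification.

```python
#!/usr/bin/env python3
"""Print the RAPL long-term power limit (PL1) for each CPU package.

Assumes Linux with the intel_rapl powercap driver loaded. PL1 typically
defaults to the CPU's TDP, but firmware may override it; reading these
files can require root on some kernels.
"""
from pathlib import Path

for zone in sorted(Path("/sys/class/powercap").glob("intel-rapl:*")):
    name = (zone / "name").read_text().strip()
    if not name.startswith("package"):
        continue  # skip core/dram sub-zones
    limit_uw = int((zone / "constraint_0_power_limit_uw").read_text())
    print(f"{name}: PL1 = {limit_uw / 1_000_000:.0f} W")
```
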
Taking photos of the eight DDR4-3200 DIMM slots per socket (sixteen DIMMs total) was challenging given all of the cabling above them. AIC has another version of the motherboard with two DIMMs per channel, but that requires a proprietary form factor motherboard.

AIC SB201 TU CPU Sockets And Memory

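As a point of reference, eight channels of DDR4-3200 per socket works out to a substantial theoretical memory bandwidth. Here is the back-of-the-envelope math as a small script; these are peak figures, not something we measured on this system.

```python
# Theoretical peak memory bandwidth for this board's 1DPC configuration.
CHANNELS_PER_SOCKET = 8
TRANSFER_RATE_MT_S = 3200        # DDR4-3200
BYTES_PER_TRANSFER = 8           # 64-bit channel

per_channel_gbs = TRANSFER_RATE_MT_S * BYTES_PER_TRANSFER / 1000   # 25.6 GB/s
per_socket_gbs = per_channel_gbs * CHANNELS_PER_SOCKET             # 204.8 GB/s
print(f"Per channel: {per_channel_gbs:.1f} GB/s")
print(f"Per socket:  {per_socket_gbs:.1f} GB/s")
print(f"Dual socket: {2 * per_socket_gbs:.1f} GB/s")               # 409.6 GB/s
```
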
Behind the CPUs and memory, we have features like the OCP NIC 3.0 slot, which is the primary networking slot.

AIC SB201 TU OCP NIC 3.0

The OCP NIC 3.0 slot has an airflow guide that we had a love-hate relationship with. It was effective, but it was significantly harder to install than the hard plastic CPU airflow guide. Then, about one time in five, it just popped into place perfectly. Perhaps it was just us, but this felt like something that could be improved.

AIC SB201 TU OCP NIC Airflow Guide

Next to the OCP NIC 3.0 slot is the Lewisburg PCH. This is an Intel C621A PCH.

AIC SB201 TU Lewisburg PCH Area

Then there are the PCIe retimers. There are three x16 retimers and two x8 retimers. These are needed to maintain signal integrity on the longer cable runs between the PCIe slots and the front 2.5″ NVMe bays. The shorter runs from the front of the motherboard do not need retimers. This is something we started to see more of in the PCIe Gen4 generation, and in the PCIe Gen5 generation we will see even more retimers.

AIC SB201 TU PCIe Gen4 Retimer Cards For NVMe

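To put the retimer count in context, here is rough lane and bandwidth math for the front bays against what two Ice Lake sockets provide (64 PCIe Gen4 lanes per CPU). These are theoretical peak figures assuming roughly 1.97GB/s per Gen4 lane after 128b/130b encoding, not measured results.

```python
# Rough PCIe Gen4 lane and bandwidth budget for the 24 front NVMe bays.
BAYS = 24
LANES_PER_BAY = 4                        # each bay is Gen4 x4
LANES_PER_CPU = 64                       # Ice Lake Xeon Scalable
GBS_PER_GEN4_LANE = 16 * 128 / 130 / 8   # ~1.97 GB/s per lane after encoding

front_lanes = BAYS * LANES_PER_BAY
cpu_lanes = 2 * LANES_PER_CPU
print(f"Front-bay lanes needed: {front_lanes}")              # 96
print(f"Lanes from two CPUs:    {cpu_lanes}")                # 128
print(f"Theoretical front-end bandwidth: "
      f"{front_lanes * GBS_PER_GEN4_LANE:.0f} GB/s")         # ~189 GB/s
```
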
Something that we failed to photograph, simply because we did not want to disturb the cabling, is a pair of M.2 slots. Below this bouquet of cables, there are two PCIe Gen3 x2 M.2 slots for additional storage.

AIC SB201 TU Cables Over M.2 Area

Next, let us get to the system’s block diagram.
