Inspur Electronic Information NF3180A6 Review: A 1U AMD EPYC 7003 Milan Platform with a HUGE Cooler


Inspur NF3180A6 Internal Hardware Overview

Here is a look inside the server with the front drive bays to the right and the rear I/O to the left to help orient our readers in this section.

Inspur NF3180A6 Internal Overview

Behind the drive bays is a fan partition. You can see the honeycomb airflow guide on the front of the fans as well.

Inspur NF3180A6 Fan Partition

This fan partition is made up of eight dual-fan modules. A nice feature is that these are a higher-end design that does not use cabled connections to the motherboard. This is common in 2U servers, but many 1U servers do not have this feature.

Inspur NF3180A6 Hot Swap Fans

Behind the fans are PCIe connectors that this system is not using. Our sense is that, in a denser NVMe configuration, these would be populated with cables running to front NVMe bays.

Inspur NF3180A6 Slim And Fans 1

Here is another angle showing how the fans and this PCIe Gen4 I/O area work together.

Inspur NF3180A6 Slim And Fans 2

Behind that partition, the big feature is the single AMD EPYC 7002/7003 SP3 socket. In our server, we have a single AMD EPYC 7763 64-core part. While that is still a PCIe Gen4/DDR4 generation CPU, it has 64 cores, so in normal x86 (non-accelerated) computing it is competitive with many of the Sapphire Rapids Xeon parts. Another huge benefit is that, as a single-socket server, one can use "P" SKUs that offer significant discounts for single-socket-only servers.

Inspur NF3180A6 AMD EPYC CPU And Memory 3
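Since this is a single-socket, 64-core part, a quick way to sanity-check what the OS actually sees is the Linux sysfs CPU topology. The sketch below is a generic Linux example, not Inspur-specific tooling, and the NUMA node count it reports will depend on the NPS setting in the BIOS.

```python
# Minimal sketch: count logical CPUs, physical cores, and NUMA nodes via
# Linux sysfs. On a single AMD EPYC 7763 we would expect 128 threads and
# 64 cores; the NUMA node count depends on the NPS (NUMA-per-socket) setting.
import glob
import os

logical_cpus = os.cpu_count()
numa_nodes = len(glob.glob("/sys/devices/system/node/node[0-9]*"))

# Physical cores are the unique (package_id, core_id) pairs.
cores = set()
for topo in glob.glob("/sys/devices/system/cpu/cpu[0-9]*/topology"):
    with open(os.path.join(topo, "physical_package_id")) as f:
        pkg = f.read().strip()
    with open(os.path.join(topo, "core_id")) as f:
        core = f.read().strip()
    cores.add((pkg, core))

print(f"Logical CPUs: {logical_cpus}, physical cores: {len(cores)}, NUMA nodes: {numa_nodes}")
```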

The server also uses DDR4-3200 (1DPC) and DDR4-2933 (2DPC), which makes the memory much lower cost than the new DDR5 generation. Our system is filled with 16x DDR4 DIMMs for a 2DPC setup.
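To put the two memory speeds in perspective, here is a back-of-the-envelope calculation of theoretical peak bandwidth, assuming the standard eight DDR4 channels of a single SP3 socket and an 8-byte data path per channel; real-world throughput will be lower.

```python
# Rough theoretical peak memory bandwidth for the two supported DDR4 speeds.
# Assumes 8 channels per single SP3 socket and 8 bytes per transfer.
CHANNELS = 8
BYTES_PER_TRANSFER = 8  # 64-bit channel width

def peak_bandwidth_gbs(mt_per_s: int) -> float:
    """Theoretical peak in GB/s for a given DDR4 transfer rate (MT/s)."""
    return CHANNELS * BYTES_PER_TRANSFER * mt_per_s / 1000

print(f"1DPC DDR4-3200: {peak_bandwidth_gbs(3200):.1f} GB/s")  # ~204.8 GB/s
print(f"2DPC DDR4-2933: {peak_bandwidth_gbs(2933):.1f} GB/s")  # ~187.7 GB/s
```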

Here is the server with the full airflow guide configuration. There is a baffle that guides air from the fans to the CPU heatsink. There are also guides around the DDR4 memory so that the memory does not consume too much airflow.

Inspur NF3180A6 AMD EPYC CPU Cooler 1

Here is a slightly different angle to help give some perspective.

Inspur NF3180A6 AMD EPYC CPU Cooler And Memory 4

At this point, many of our readers will notice a key feature: the heatsink. The heatsink has extensions via heat pipes to help add surface area for cooling.

Inspur NF3180A6 AMD EPYC CPU Cooler And Memory 3

Here is a top view of the AMD EPYC processor, memory, and cooling just for some sense of scale.

Inspur NF3180A6 AMD EPYC CPU Cooler And Memory 1

Next to the memory, we have an OCP slot for internal mezzanine cards like a SAS controller.

Inspur NF3180A6 Internal OCPA 3

There is a fairly large area in the server dedicated to this function.

Inspur NF3180A6 Internal OCPA 2

That OCP area is in front of the power supplies. Next to the power supplies, we see this large PCIe connector with a big pull tab.

Inspur NF3180A6 Big PCIe Cable Connector

Here is that connector plus the ASPEED AST2500 BMC.

Inspur NF3180A6 ASPEED AST2500 BMC

On the other side, next to the DIMMs, we have a number of connectors. Those include two 7-pin SATA connectors.

Inspur NF3180A6 2x 7 Pin SATA

Behind this area is the cooler extension and then additional I/O.

Inspur NF3180A6 OCP NIC 3.0 And Riser Area Above

Part of that is the OCP NIC 3.0 slot, which has an NVIDIA ConnectX-5 100GbE card installed.

Inspur NF3180A6 OCP NIC 3.0 Slot With NVIDIA ConnectX 5
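If you want to confirm the NIC's negotiated link speed from the OS, Linux exposes it through sysfs. The short sketch below assumes a Linux host; the interface name is hypothetical and will differ on your system.

```python
# Minimal sketch: read the negotiated link speed of a network interface
# from Linux sysfs. The interface name is hypothetical.
from pathlib import Path

IFACE = "enp1s0f0"  # hypothetical name; substitute the ConnectX-5 port on your system

speed_mbps = int(Path(f"/sys/class/net/{IFACE}/speed").read_text().strip())
print(f"{IFACE}: {speed_mbps // 1000} Gbps link")  # expect 100 Gbps for 100GbE
```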

Next, let us get to the system topology.

3 COMMENTS

  1. Is this a “your mileage can and will vary; better do your benchmarking and validating” type question, or is there an at least approximate consensus on the breakdown in roles between contemporary Milan-based systems and contemporary Genoa ones?

    RAM is certainly cheaper per GB on the former; but that’s a false economy if the cores are starved for bandwidth and end up being seriously handicapped; and there are some nice-to-have generational improvements on Genoa.

    Is choosing to buy Milan today a slightly niche (if perhaps quite a common niche, like memcached, where you need plenty of RAM but don’t get much extra credit for RAM performance beyond what the network interfaces can expose) move; or is it still very much something one might do for, say, general VM farming so long as there isn’t any HPC going on, because it’s still plenty fast and not having your VM hosts run out of RAM is of central importance?

  2. Interesting, it looks like the OCP 3.0 slot for the NIC is fed via cables to those PCIe headers at the front of the motherboard. I know we’ve been seeing this on a lot of the PCIe Gen 5 boards, and I’m sure it will only get more prevalent as it’s probably more economical in the end.

    One thing I’ve wondered is, will we get to a point where the OCP/PCIe card edge connectors are abandoned, and all cards have are power connectors and slimline cable connectors? The big downside is that those connectors are way less robust than the current ‘fingers’.

  3. I think AMD and Intel are still shipping more Milan and Ice than Genoa and Sapphire. Servers are not like the consumer side where demand shifts almost immediately with the new generation.
