The Inspur NE5260M5 is designed to bring flexible, accelerated compute options to the edge server market. We first covered the Inspur NE5250M5 in March 2019 as part of our GTC 2019 coverage, where it was shown as an edge server solution that could use features such as NVIDIA T4 GPUs for AI inferencing. Ever since then, we have wanted to review the server. In the meantime, our review target evolved, so we now have the Inspur NE5260M5 server instead. The two platforms are similar in many ways, but the NE5250M5 is designed with fewer storage bays and front power supplies, while the NE5260M5 we are reviewing has rear power supplies and more 2.5″ hot-swap storage. If you need power in the front of the chassis, our advice is to read this review and then check out the other version of this system. Now on to our review.
Inspur NE5260M5 Hardware Overview
We are going to break our hardware overview section into two parts. First, we are going to look at the external hardware. Second, we are going to look inside the system to show how the system works.
Inspur NE5260M5 External Hardware Overview
Opening the lid we can see the power distribution board and storage on the left, two riser assemblies in the middle, and an airflow guide strategically directing the flow of air.
Perhaps one of the most important features of the NE5260M5 is its size. This is a 2U server designed to fit standard 19″ racks. While 2U servers are very common, what makes this server unique is that it is only 430mm, or just under 16.93″, deep. For telecom deployments and other space-constrained deployments such as retail, having a short-depth chassis is important for the rack infrastructure at the edge. Although the chassis is compact, the server still packs dual Intel Xeon CPUs, hot-swap storage, and plenty of PCIe expansion.
On the left side of the chassis, we have six hot-swap 2.5″ drive bays. Usually, these will be SATA/SAS bays. Whereas the NE5260M5 has these six bays in front, the NE5250M5 has power supplies mounted up front, giving it two 2.5″ drive bays but all cabling in the front of the chassis. Something else that is interesting here is that this particular server is an Inspur Systems server assembled in the USA. While we did an article and video on Visiting the Inspur Intelligent Factory Where Robots Make Cloud Servers, Inspur now has a new Silicon Valley manufacturing site. I have gotten a quick look at the new site under construction, but we have not done a tour or video of the facility yet.
In the center of the system, we have three full-height PCIe I/O slots. We are going to discuss risers later in this hardware overview. The front I/O has two USB 3.0 ports, two SFP+ 10GbE ports, a VGA port, and a management port.
As you can see from the right side of the chassis front, there are three additional PCIe I/O slots. In our test system, we have Mellanox ConnectX-5 dual-port 100GbE NICs.
In the rear of the unit, we have four fan modules and redundant power supplies. The power supplies are 1.3kW units. Again, some installations may require all cabling to be on the front of the chassis. If that is the case, you would look at the NE5250M5 where these PSUs are on the front of the chassis instead.
The four fan modules are hot-swap units with two fans in each. The handles may look a bit oversized at first, but they provide a hard standoff between the rear of the chassis and walls or other obstructions. That way the server can still get airflow, and the tabs on the PSUs and fans are not broken. In the short-depth edge installations these servers are designed for, servicing the machine from both sides of a rack is not always possible, so this is a small but important touch.
Next, we are going to open the system and look at the hardware inside the server.
One of the server's engineers notes that the NE5260M5 also has several SKUs for the front storage: six 2.5″ NVMe SSD bays, four NVMe plus two SATA bays, or two NVMe plus four SATA bays.