AMD is not officially entering the lower-cost single-socket server market, but it is starting to feel like it. On the heels of our popular ASRock Rack 1U4LW-B650/2L2T Review, Giga Computing (formerly Gigabyte Server) is announcing a new line of 1U AMD Ryzen servers.
Giga Computing Goes Full AMD Ryzen Server
Here are the four new 1U servers with redundant power supplies. There is even a 10GbE model, the R133-C11.
| | E133-C10 | R133-C10 | R133-C11 | R133-C13 |
|---|---|---|---|---|
| Application | Edge | General-purpose | General-purpose | General-purpose |
| CPU Support | Up to 105W TDP | Up to 170W TDP | Up to 170W TDP | Up to 170W TDP |
| Memory | 4 x DDR5-5200 (ECC/non-ECC) UDIMM | 4 x DDR5-5200 (ECC/non-ECC) UDIMM | 4 x DDR5-5200 (ECC/non-ECC) UDIMM | 4 x DDR5-5200 (ECC/non-ECC) UDIMM |
| Expansion Slots | 1 x dual-slot (Gen5 x16), 1 x FHHL (Gen4 x4) | 1 x dual-slot (Gen5 x16) for GPU, 1 x FHHL (Gen4 x4) | 1 x dual-slot (Gen5 x16) for GPU, 1 x FHHL (Gen4 x4) | 1 x dual-slot (Gen5 x16) for GPU, 1 x FHHL (Gen4 x4) |
| Storage | 2 x 2.5” SATA, 1 x M.2 Gen4 x4 | 4 x 3.5”/2.5” SATA, 1 x M.2 Gen4 x4 | 4 x 3.5” SATA, 4 x 2.5” SATA, 1 x M.2 Gen4 x4 | 4 x 3.5”/2.5” SATA, 1 x M.2 Gen4 x4 |
| LAN Ports | 2 x 1GbE, 1 x IPMI | 2 x 1GbE, 1 x IPMI | 2 x 10GbE, 2 x 1GbE, 1 x IPMI | 2 x 1GbE, 1 x IPMI |
| BMC | Aspeed AST2600 | Aspeed AST2600 | Aspeed AST2600 | Aspeed AST2600 |
| Power Supply | Redundant 550W | Redundant 800W | Redundant 800W | Redundant 800W |
These also have PCIe Gen5 x16 slots and a BMC for IPMI. All of them come with four DDR5 slots that support ECC or non-ECC UDIMMs. You can learn more about ECC DDR5 here.
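If you end up deploying one of these, a quick way to sanity-check that ECC is actually working under Linux is to read the counters exposed by the kernel's EDAC subsystem. The snippet below is a minimal sketch, assuming the platform EDAC driver (amd64_edac on Ryzen) is loaded; if no memory controllers show up, ECC reporting is not active on the system.

```python
#!/usr/bin/env python3
"""Minimal ECC sanity check via the Linux EDAC sysfs interface.

Assumes the platform EDAC driver (e.g. amd64_edac on Ryzen) is loaded.
"""
from pathlib import Path

EDAC_MC = Path("/sys/devices/system/edac/mc")

def report_ecc_counters() -> None:
    controllers = sorted(EDAC_MC.glob("mc[0-9]*"))
    if not controllers:
        print("No EDAC memory controllers found; ECC reporting is not active.")
        return
    for mc in controllers:
        ce = (mc / "ce_count").read_text().strip()  # corrected (single-bit) errors
        ue = (mc / "ue_count").read_text().strip()  # uncorrected (multi-bit) errors
        print(f"{mc.name}: corrected={ce} uncorrected={ue}")

if __name__ == "__main__":
    report_ecc_counters()
```

A steadily climbing corrected-error count on one module is exactly the early warning you buy ECC UDIMMs for in the first place.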
Final Words
This is a really exciting time. AMD has not formally entered this market on the server side with an EPYC offering, so it is not an area we expect to see companies like Dell or HPE enter in the near term. On the other hand, AMD has started to work with some of its partners, like Giga Computing, to bring new price, performance, and power capabilities to the lower-cost single-socket market.
Hopefully, we get to look at the Giga Computing R133-C11 or one of the other models in the near future. Our results with the 1U Ryzen servers have been better than expected thus far.
It will be interesting to see what, if anything, Intel feels like doing about this. It is hardly an existential threat in general, but having AMD's position on low-end servers be "just use desktop processors" doesn't make Xeon E look very exciting unless the prices on those stay nice and close to the equivalent Core parts.
A few years ago this type of what I might call “PC as a data center quality server” gear would have come in handy…
…Covid && finding "critical infrastructure" under people's desks: in any IT-centric firm (in my case, a small/medium financial firm that grew 5X in headcount during my 9-year tenure) you might find that under-desk "personal server" (Win or Linux) that became a key piece of an application, yet never matriculated to the data center.
Covid starts spooling up and people start unspooling from corporate HQ in Manhattan (NYC -> WFH). The DR space in my suburban data center (where I had to be on-site during Covid) starts to look like a Noah's Ark of everything from NUCs to full-sized PCs. None of them have dual power (not that it would have been helpful in an office-space-level power environment anyway), no BMC/IPMI, and none are hooked into the DC's per-server monitoring system.
So now everyone's "favorite office plant," the box they sat next to at work every day and kept an eye on, suddenly became my unmonitored, unlabeled, missing-from-every-elevation-diagram, no-remote-console-access "petting zoo" to keep running.
It was amazing how many people thought that merely by being in the corporate DC that their pet servers were somehow now blessed with 24×7 support, in a perfect power environment (no one tripping over power cords, no issues with the cleaning staff and vacuum cleaners, nice big power strips meaning no need to think through how many PCs were plugged into a circuit, unlimited bandwidth on a shared 1-Gbit network, etc).
This was in a data center with a handful of thousands of dual-socket Xeon/EPYC servers, an 80/20 cattle-to-pets ratio, and, on a really good day, a minimum of 1K servers per on-site sysadmin/data center tech.
…
Eventually, as I found it untenable to spend 10~15 minutes just to find which server to power-cycle ("Carl, can you reboot XYZ5?"), or to do something slightly more elegant if a monitor and keyboard were nearby, I started to push back, gently at first (it's never a good idea to raise the emotional temperature of one's customers during a stressful event)…Stay firm, but try logic: "Hi XYZ, if you took your critical-for-your-app, under-your-desk-in-the-office PC and left it in the lobby of Equinix over in Secaucus, what level of support would you expect?" Hmmm.
Well, as per usual, I MacGyver'd/muddled through: NUCs-on-a-shelf in a formal data center rack, a couple of those dual-power-cord/single-outlet ATS units for PCs in a rack, and some apps relocated to dual-socket servers (OpenStack perhaps caught a few apps also).
…
Yeah, a formal, data-center-quality "baby server" in a 1U format would have come in handy in said timeframe (though I imagine the s t r e t c h e d thin logistics chains at the time would have hindered my ability to obtain said creature in any case). Hmmm.
Life seldom presents one with complex equations where every variable can be optimized at the same time.
I think Ryzen is being considered for server roles because of ECC memory support combined with good performance, price and power efficiency. I’d definitely be interested if STH focused on exploring the reliability of ECC memory by using a mechanism to generate bit flips in a future analysis of this hardware.
The fact that AMD never disabled ECC in hardware, and that 65W AMD CPUs are very, very efficient and probably enough for most server applications, is making server manufacturers rethink their strategy. And they have integrated graphics now.
It is the one nightmare Intel doesn't want. And because it doesn't have a heterogeneous core setup, it works really well for VMs; well, better than Raptor/Alder Lake for a start.
The Zen 4 memory speed drop when you put in four sticks is just too great to use the current chips as server parts. You are either limited to 2x32GB of DDR5 or stuck with 4x32GB at speeds slower than last-gen boards with DDR4.
The PCIe lanes are also still so limited. It's hard to mirror an NVMe drive when you don't have a second slot (or the second slot is off the chipset). Speaking of the chipset, using ASRock Rack gear as an example, far too many items are hung off the chipset and then funneled into the four lanes connecting it to the CPU. If there were a way to break one PCIe Gen5 lane into four Gen3 lanes, you could get some NVMe storage, but alas. So despite the PCIe speed bumps, the overall impact is that you still only have x4 lanes = one NVMe drive.
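For anyone wanting to confirm where their NVMe drives actually landed, here is a rough sketch, assuming Linux sysfs; the path-depth count is only a loose heuristic for CPU-attached versus chipset-attached slots, not a definitive answer.

```python
#!/usr/bin/env python3
"""Rough sketch: print the negotiated PCIe link for each NVMe controller.

Assumes Linux sysfs. A deeper path under /sys/devices means more bridges
between the CPU and the drive, which usually hints at a chipset-attached slot.
"""
from pathlib import Path

def describe_nvme_links() -> None:
    for ctrl in sorted(Path("/sys/class/nvme").glob("nvme[0-9]*")):
        pci_dev = (ctrl / "device").resolve()  # PCIe function backing the controller
        speed = (pci_dev / "current_link_speed").read_text().strip()
        width = (pci_dev / "current_link_width").read_text().strip()
        depth = len(pci_dev.relative_to("/sys/devices").parts)
        print(f"{ctrl.name}: {pci_dev.name} link={speed} x{width} path-depth={depth}")

if __name__ == "__main__":
    describe_nvme_links()
```

If a drive you expected to run at Gen4 x4 off the CPU reports a narrower or slower link, that is usually the chipset funneling at work.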
@fuzzy You started to see Intel finally stop some of the crazy artificial segmentation in Alder Lake, moving to just needing a workstation chipset to enable ECC support. The biggest problem with their old strategy was that the low-end workstation and server variants were taking way too long to get to market after the consumer part release. These are all the same dies. They need to simplify things for their engineering teams and just make one set of processors/chipsets that OEMs can turn into whatever end product they want. The bleeding has started at Intel and they need to get way more efficient.
I really hope ASRock or Giga release a 1U Ryzen server that does away with SATA and goes full flash with U.2/U.3. That opens the possibility of deploying QSFP28 in the PCIe slot and having some local storage that keeps up with the system's overall performance. A system with x16 lanes for storage and x8 lanes for networking, paired with the high frequency and low power of Ryzen, would make a beast of a firewall appliance.
@Colin totally agree, I am also looking at flash-only servers. ASRock previously listed a model called the 1U2E-B650/2L2T with two U.2/U.3 hot-swappable drives, but it is now gone from their product page.
The Giga servers at least have a second PCIe slot that can be used for a second NVMe drive, but you would lose the hot-swap capability.
Does anyone know what "for GPU" means for the main PCIe slot? I'm assuming it can run a 100G NIC just fine?