Dell EMC PowerEdge T140 Management
We are going to focus our management discussion on iDRAC and some of the tools used to manage an individual server. The company has offerings to manage fleets of servers, as we saw in our Dell EMC PowerEdge MX review, but those could fill several additional reviews. Instead, when you access the server’s remote management interface, you will see a standard interface that is equivalent to the company’s Intel Xeon experience.
The Dell EMC PowerEdge T140 utilizes iDRAC 9. One of the first things we noticed was the snappy responsiveness of the web UI. With this generation, Dell EMC upgraded the CPU that runs iDRAC. This means that the PowerEdge server is able to collect more data, send more data to fleet management controllers, and more notably, render pages faster.
We use a lot of these web management tools since our lab has racks of gear from dozens of vendors. Some, such as Supermicro’s IPMI, are fast but have far fewer features. Some have a lot of features but are slow. For example, if you have used a Lenovo Xeon E5 generation system’s IMM, you have had time to contemplate whether a sundial is an appropriate tool for timing the page loads. With iDRAC 9, the system is responsive.
If you have remote administration teams that sit on another continent, iDRAC is a pleasant experience, while some other, slower solutions are rough on the admins.
The dashboard provides a simple UI to see status at a glance and directly launch IPMI management. One can also place the system into lockdown mode from the More Actions menu in the event you need to increase security.
The iKVM feature is a must-have for any server today, as it has one of the best ROIs when it comes to troubleshooting. iDRAC 9 features several iKVM console modes, including Java, ActiveX, and HTML5.
Modern server management solutions such as iDRAC are essentially embedded IoT systems dedicated to managing bigger systems. As such, iDRAC has a number of configuration settings for the service module so that you can, for example, set up proper networking.
In terms of monitoring, iDRAC has a basic dashboard that shows key statistics. This can often be used as a sanity check.
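For a scripted version of that sanity check, the same readings are exposed through iDRAC 9’s Redfish API. Below is a minimal sketch using Python and the requests library; the iDRAC address and credentials are placeholders, and you should verify the endpoint path against your firmware’s Redfish schema:

```python
# Minimal sketch: read thermal sensors from iDRAC 9 over Redfish.
# The address, credentials, and verify=False are lab-style placeholders;
# confirm the endpoint path against your iDRAC firmware's Redfish schema.
import requests

IDRAC = "https://192.168.0.120"   # hypothetical iDRAC address
AUTH = ("root", "calvin")         # placeholder credentials

resp = requests.get(
    f"{IDRAC}/redfish/v1/Chassis/System.Embedded.1/Thermal",
    auth=AUTH,
    verify=False,                 # lab only: iDRAC ships with a self-signed cert
)
resp.raise_for_status()

for sensor in resp.json().get("Temperatures", []):
    print(sensor.get("Name"), sensor.get("ReadingCelsius"))
```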
Although the individual metrics are interesting, the bigger-picture implication is that this monitoring can feed into larger monitoring and management solutions. Dell EMC has its own tools and is also active in the industry, so if you use a third-party tool, there is a high probability that iDRAC is supported.
Part of that data collection involves simple tasks such as inventorying systems. It is common for servers purchased at different times to have slightly different configurations. iDRAC can offer this data to management tools.
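As a rough illustration, the same inventory is available over Redfish, so a short script can compare configurations across a mixed fleet. The sketch below uses placeholder addresses and standard Redfish ComputerSystem fields; verify both against your own iDRACs:

```python
# Sketch: pull basic inventory from a few iDRACs and print it side by side.
# Hosts and credentials are hypothetical; field names follow the standard
# Redfish ComputerSystem schema as iDRAC 9 implements it.
import requests

HOSTS = ["192.168.0.120", "192.168.0.121"]   # hypothetical iDRAC addresses
AUTH = ("root", "calvin")                    # placeholder credentials

for host in HOSTS:
    system = requests.get(
        f"https://{host}/redfish/v1/Systems/System.Embedded.1",
        auth=AUTH,
        verify=False,                        # lab only: self-signed certs
    ).json()
    print(
        host,
        system.get("Model"),
        system.get("SerialNumber"),
        system.get("ProcessorSummary", {}).get("Model"),
        system.get("MemorySummary", {}).get("TotalSystemMemoryGiB"),
    )
```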
A standout feature we wanted to show is the BIOS configuration page. One can make BIOS changes via a web UI. That is astounding. If you (or your IT admin) have ever had to make a BIOS change using a legacy method, this is a huge benefit. Using iKVM was a major upgrade to the process, but it was still onerous. An admin would remotely reboot a system and furiously hammer the DEL, F2, or other keys to ensure they made it into the BIOS. From there, some of the less advanced 2018 BIOS setups still look like their UI designers idolized early 1990s DOS programs. Some of the more advanced BIOS setups look more like Windows 2000-era programs. By making the BIOS configurable through iDRAC, one can use a modern UI and avoid that unpleasant process.
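The same BIOS attributes are also reachable programmatically. Here is a hedged sketch of reading them and staging a change over Redfish; the attribute name is only an example, staged changes still need a BIOS configuration job and a reboot to apply, and you should check which attributes your firmware actually exposes:

```python
# Sketch: read BIOS attributes from iDRAC 9 over Redfish and stage a change.
# "LogicalProc" is an example attribute name only; print the full Attributes
# dictionary to see what your BIOS exposes. Staged settings apply after a
# BIOS configuration job and reboot, which this sketch does not create.
import requests

IDRAC = "https://192.168.0.120"   # hypothetical iDRAC address
AUTH = ("root", "calvin")         # placeholder credentials

bios = requests.get(
    f"{IDRAC}/redfish/v1/Systems/System.Embedded.1/Bios",
    auth=AUTH,
    verify=False,
).json()
print(bios["Attributes"].get("LogicalProc"))

# Stage a pending change; it takes effect via the next BIOS config job/reboot.
requests.patch(
    f"{IDRAC}/redfish/v1/Systems/System.Embedded.1/Bios/Settings",
    auth=AUTH,
    verify=False,
    json={"Attributes": {"LogicalProc": "Disabled"}},
)
```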
From the screenshots provided, you can see there is a lot more to iDRAC 9. If you want to learn more, go try it.
Our key takeaway here is that if you are accustomed to managing Dell EMC PowerEdge servers in your data center, the lower-cost tower server is going to feel the same. That is a great job by the Dell EMC team to keep the functionality virtually the same across platforms.
Dell EMC PowerEdge T140 Test Configuration
The Dell EMC PowerEdge T140 came to us and we augmented its configuration for our benchmarking. Here is what we used:
- Server: Dell EMC PowerEdge T140
- CPU: Intel Xeon E-2186G (6 core/ 12 thread)
- Memory: 4x 16GB ECC UDIMMs
- Storage: 1TB 3.5″ HDDs, 1x 400GB Intel DC S3710
- Storage Controller: Dell EMC PERC H330
- PCIe Networking: Mellanox ConnectX-4 Lx
- Power Supply Configuration: Single 365W
These types of servers are often ordered with a single DIMM for cost-sensitive segments of the market. Our configuration with 4x 16GB DIMMs is meant to match the higher-end CPU. We also cycled several CPUs through to get performance figures at different ends of the market.
Unlike our previous Intel Xeon E-2100 series reviews, we now have data from ten different CPU options that we can present in our performance section, as we have gotten several more SKUs in the lab over the last month. That includes all of the 6 core/ 12 thread and 4 core/ 8 thread options on the market today, along with the Core i3-8100 and Core i3-8300.
Dell EMC PowerEdge T140 Topology
As modern systems get more complex, we have added topology into our testing methodology. How devices are connected is becoming a bigger concern for our readers. Here is the system’s topology:
On the Intel Xeon E-2100 series, we see a relatively simple topology. This is a single-chip design with a PCH, so it aligns closely with almost a decade of architectures in this space. With future platforms, this will not necessarily be the case, so the topology is at least worth noting when looking back at this system in a few quarters.
Next, we are going to look at the Dell EMC PowerEdge T140 performance before we get to power consumption and our final words.
Reader Comments

Hello,
How about HDD cooling fans? I had temperature issues with 4 HDDs in my T20 and had to switch to another case.
How many SATA connectors are there? The case/mobo looks like the PowerEdge T110 II, which we really like at our Uni. Unlike this one, in the T110 II the front 5.25″ bays could be replaced by 3 or 4 HDDs for ZFS mirrors (pool) by adding a small PCIe SATA card. I do hope that the PCIe lanes support some basic graphics card (at least an NVIDIA GeForce GT 1030) to get remote “ssh -X” for some virtualisation software. (The PE T110 II has tons of issues with any graphics card, as there is a kind of <20 W limit.)
When the third image in a server review is a latch, you know you’re reading one of STH’s crazy in-depth reviews. Nothing says hands-on like featuring a part meant to put a hand on.
Any word on whether an inexpensive StarTech M.2 adapter can be used to accommodate a Samsung 983 DCT (MZ-1LB960NE) M.2 NVMe drive and have the T140 boot from it? I think Dell removed such NVMe boot ability from the T30.
Recently built a T140 for a client needing a bare metal SQL box with some solid per-core grunt, and the Xeon E’s are top of the class in that dept. Basically, the T140 is a single-socket Xeon workstation with iDRAC bolted on. That’s fine, because there’s always been a server line in that class and they’ve always been good value. I went with Intel SSDs in RAID 1, but with the BOSS card the server can do Intel VROC, which is native NVMe RAID… at least according to Dell.
Is there any advantage in having a Xeon E-xxxxG processor on that machine? Will the graphics capabilities of the chip be used for anything, or is there any wiring to make it available?
Hi Paulo – not for display output; however, you can use features like Quick Sync.
I bought myself one, it’s really nice.
One thing that annoys me, though, is that the E-2136 I bought it with is capped for some reason at 4.18 GHz instead of going all the way up to 4.5 GHz as it should. And it doesn’t hit the 80 W while testing single-threaded, so it doesn’t throttle. I’m testing all this with Intel Extreme Tuning Utility to get the whole picture.
Dell capped it for some reason.
Actually, the limit is exactly 4.2 GHz under Linux; under Windows I only got to 4.18. But under Linux, running cooler (70°C) with undervolting, on a half-hour compile it stays at 4.199 or 4.2 GHz, never passing that, even by 1 MHz, which is annoying since nowhere is it written that you’re limited by Dell.
I need to add more SATA HDDs. I already used an HDD/DVD caddy and installed an SSD, but I still have one more 1TB SATA disk to add beyond the existing four.
Any suggestions?
Hello
I just got the T140 with Intel 2126G (80 W), H330 controller and 1x 1 TB HDD.
My config should consume less power than the one you tested, since your config includes the Intel E-2186G rated for 95 W.
My power meter says 220 V / 50 Hz, 51 W, power factor 0.87, 0.26 A.
Why do you see just 35 W at idle?
Hello, we just got one T140 with 2126G, 1TB HDD and 8GB DDR4 ECC for a client. Working upgrade:
3x8GB Samsung DDR4-2666 CL19 ECC (M391A1K43BB2-CTD -> 32GB)
Gigabyte Nvidia GT 1030 Low Profile 2G (GV-N1030D5-2GL, integrated video disabled)
Samsung 860 Pro 1TB SATA SSD (HDD for backup)
It will be used as an entry server-workstation with dual displays.
With iDRAC9, is an additional license required to get remote desktop capabilities, like with iLO?
Licenses are needed to get iDRAC9 on a T140 to show the console display. I had to buy a VGA-to-HDMI adapter to get a view of the console for installation. A little disappointing for a server. The noise level is low, but still too much for a living room.
Hello,
I am trying to decide between this one and the HPE ML30 Gen10. Which one would you choose and why?
Regarding the CPU Turbo issue:
I’d guess it’s a deliberate BIOS limit on this board; any E-series Xeon I’ve seen runs at most at its all-core Turbo frequency.
A comment above says that iDRAC9 as included with the T140 does not include a remote console feature. Is there an upgrade or license that adds that? And if so, does it ruin the economics of the server as is the case with the ProLiant servers that are a notch above the MicroServer Gen10 Plus?
Configured one of these for various test loads and have a few comments. First of all, it indeed does not support NVMe boot, which is a bit odd for a 2019 machine. Not a real problem, since I can always use a bootloader on USB to continue booting from NVMe.
The option to use the Dell BOSS card also means you are limited to the two older SSD models BOSS supports, and the card itself is just a few bulk components for a hefty price. Obviously, if you already have a fleet of Dells and can acquire these things cheaply, it’s a different situation, and it also enables things in iDRAC and so on.
Also, while the machine has multiple PCIe slots, it considers most cards “third party devices” and automatically ramps the fans up to painful levels. Now, there is a switch that can be used to ignore “3rd party device fan control” via racadm, but for some reason my NVMe drive wasn’t considered a third-party PCIe card but a PCIe SSD, meaning there was nothing to do. I found out that by downgrading iDRAC to 3.30 you could manually control the fans via undocumented IPMI codes (disgusting), so I tried downgrading. And in the end, just installing 3.30 fixed the issue without any manual tuning; the fans are set to auto and not running at 100% even with the unholy non-Dell third-party PCIe device installed.
The decision to set fans at 100% when encountering “unknown devices” is just stupid. What’s the point of PCIe slots if you are not going to allow standard-compliant devices to run easily? Is the case so badly designed that a typical PCIe device power draw could cause issues?
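For anyone fighting the same fan behavior, the switch mentioned above is usually toggled with remote racadm. A hedged sketch via Python’s subprocess is below; the attribute path and value strings come from community posts and may differ by firmware, so check “racadm help system.thermalsettings” on your own box first:

```python
# Sketch: toggle the third-party PCIe card fan response through remote racadm.
# The attribute path and values are assumptions from community posts; verify
# with "racadm help system.thermalsettings" on your firmware before relying on it.
import subprocess

IDRAC_IP = "192.168.0.120"            # hypothetical iDRAC address
USER, PASSWORD = "root", "calvin"     # placeholder credentials

def racadm(*args: str) -> None:
    """Run a remote racadm command against the iDRAC."""
    subprocess.run(
        ["racadm", "-r", IDRAC_IP, "-u", USER, "-p", PASSWORD, *args],
        check=True,
    )

# Show the current setting, then ask iDRAC to stop ramping fans for unknown cards.
racadm("get", "system.thermalsettings.ThirdPartyPCIFanResponse")
racadm("set", "system.thermalsettings.ThirdPartyPCIFanResponse", "Disabled")
```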
Does it make sense to buy a Xeon-based processor in 2021, now that ARM servers may be coming soon with lower cost and better performance? Please suggest what you think. Thanks.