ICC Vega R-116i Power Consumption
For this, we wanted to share a few data points we captured during testing to show the range of potential power consumption for the solution, from idle to completely heat soaked.
- Idle: 0.10kW
- STH 70% CPU Load: 0.24kW
- 100% Load: 0.31kW
- Maximum Recorded: 0.33kW
There is variability here based on configuration. Note these results were taken using a 208V Schneider Electric / APC PDU at 17.5°C and 71% RH. Our testing window shown here had +/- 0.3°C and +/- 2% RH variance.
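As a quick back-of-the-envelope illustration, here is how those wall-power figures translate into annual energy use and cost. The electricity rate and 24x7 duty cycle are assumptions for illustration, not part of our testing:

```python
# Rough annual energy/cost estimate from the measured wall power figures above.
# The electricity rate and always-on duty cycle are assumptions, not measurements.
HOURS_PER_YEAR = 24 * 365          # 8,760 hours
RATE_USD_PER_KWH = 0.12            # assumed utility rate

measurements_kw = {
    "idle": 0.10,
    "70% CPU load": 0.24,
    "100% load": 0.31,
    "maximum recorded": 0.33,
}

for label, kw in measurements_kw.items():
    kwh_per_year = kw * HOURS_PER_YEAR
    print(f"{label}: {kwh_per_year:,.0f} kWh/year "
          f"(~${kwh_per_year * RATE_USD_PER_KWH:,.0f}/year)")
```

Even at the maximum recorded draw, that is under 2,900 kWh per year, which is modest for a 1U server.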
STH Server Spider: ICC Vega R-116i
In the second half of 2018, we introduced the STH Server Spider as a quick reference to where a server system’s aptitude lies. Our goal is to start giving a quick visual depiction of the types of parameters that a server is targeted at.
As you can see, the server does not aim to have the highest storage, networking, and accelerator density. Instead, it is focused on delivering eight fast cores and accompanying fast memory to reduce latency and improve response times.
Final Words
When we started this review, I was wary that it would be a system with some consumer parts thrown into a 1U case. I was wary that the quality would not be up to par. I was dead wrong on both counts.
The ICC Vega R-116i is a well-engineered machine with the company touching virtually every facet of the solution down to the actual motherboard and cooling design. Even the BIOS has pre-configured overclocking settings for specific types of workloads. ICC certainly went the extra mile to deliver a unique and solid solution.
At the same time, if you are accustomed to high-end Dell EMC PowerEdge or HPE ProLiant gear, you will notice that some of the custom-designed air baffles and connectors are not in the ICC system. It is fairly easy to tell that this is not a server that sells 100,000+ units per year. On the other hand, it also feels extremely well built. Using the chair analogy, if Dell EMC is the Herman Miller of the server world, ICC is the bespoke chair maker that builds a chair specifically to meet your dimensions and needs.
Overall, the ICC Vega R-116i performed well for us, running stable throughout the weeks the system spent crunching benchmarks and numbers in our data center lab. The overall build is excellent, and you can tell how ICC engineered the various aspects of the system down to small details like putting a skin on the web management tool and using custom DIMMs. For those in the HFT realm, or others who need this type of performance, this is an excellent solution.
Yes, I’m sure the buy/sell orders get sent out MUCH quicker than with some loser 4 GHz processor when both are on common 1 GbE networks….
(This is a moronic concept, IMO; catering to stupid traders thinking their orders ‘get thru quicker’? LOL!)
It’s for environments where latency matters, and the winner (e.g. the fastest one) takes all. And as you say, if 1 GbE is “common” then there are certainly other factors that could be differentiators. That 25% clock advantage could shave 20% off decision time, or let you run that much more involved/complex algorithms in the same timeframe as your competitors.
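For readers checking that arithmetic, a minimal sketch, assuming decision time scales inversely with clock speed:

```python
# If execution time scales inversely with clock (a simplifying assumption),
# a 5.0 GHz part completes the same work in 4.0/5.0 = 80% of the time.
base_clock, fast_clock = 4.0, 5.0           # GHz
speedup = fast_clock / base_clock           # 1.25x -> the "25% clock advantage"
time_saved = 1 - base_clock / fast_clock    # 0.20 -> "20% off decision time"
print(f"{speedup:.2f}x clock, {time_saved:.0%} less decision time")
```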
Calling people whose reality you don’t understand stupid reveals quite a bit about yourself.
Does the CPU water cooler blow hot air over the motherboard towards the back of the case?
Mark, this is a very healthy industry that pushes computing. There are firms happy to spend on exotic solutions because they can pay for themselves in a day.
@Marl Dotson – for use in a latency-sensitive situation you would install a fast InfiniBand card in one of the PCIe expansion slots. You certainly would not use the built-in Ethernet.
That cooler looks to be very similar to Dynatron’s, probably an Asetek unit.
https://www.dynatron.co/product-page/l8
@David, I’m not familiar with the specific properties of the radiator used here; however, the temperature of the liquid in the closed-loop system should not exceed a 4-8°C delta T.
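For a rough sense of why a single-digit delta T is plausible, the coolant temperature rise follows ΔT = P / (ṁ · c_p). A sketch with assumed (not measured) figures for a 1U AIO loop:

```python
# Back-of-envelope coolant delta T: dT = P / (mass_flow * c_p).
# Heat load and flow rate are assumptions typical of this class of AIO cooler,
# not measurements of this specific unit.
heat_load_w = 250.0                  # assumed CPU package power, W
flow_l_per_min = 1.5                 # assumed pump flow rate
c_p = 4186.0                         # specific heat of water, J/(kg*K)

mass_flow = flow_l_per_min / 60.0    # kg/s (water: ~1 kg per liter)
delta_t = heat_load_w / (mass_flow * c_p)
print(f"Coolant delta T: {delta_t:.1f} °C")   # ~2.4 °C, within the 4-8 °C bound
```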
@Dan. Wow. Okay. At least one 10Gb card would be the basic standard here: 1Gb LAN is a bottleneck as @hoohoo pointed out. Kettle or pot?
Follow-up: is this an exciting product? 5 GHz compute on 8 cores is great, but there are several other bottlenecks in the hardware, in addition to the NIC.
Hi,
I would like to make a recommendation concerning the CPU charts for the future.
It would be helpful if you could add the core and thread count next to the name of each CPU, for example EPYC 7302P (C16/T32). It would make it easier for us to see the differences, since it’s a little hard to keep track of each model’s specs in our heads.
Thank you
A couple of comments here assume that the onboard 1GbE is the main networking. You’ve missed the Solarflare NIC. As shown, it’s a dual SFP NIC in a low-profile PCIe x8 format. That’s at least a 10GbE card according to their current portfolio. It may even be a 10/25GbE card: https://solarflare.com/wp-content/uploads/2019/09/SF-119854-CD-9-XtremeScale-X2522-Product-Brief.pdf
For those commenting on the network interface – this is the entire point of the SolarFlare network adapter. This card will leverage RDMA to ensure the lowest latency across the fabric. The Intel adapters will either go unused or be there for boot/imaging purposes due to native guest driver support, which negates the need to inject SolarFlare drivers into the image.
It is unlikely you would see anything that doesn’t support RDMA (be it RoCE or some ‘custom’ implementation) used by trading systems. You need the low latency provided by such solutions; otherwise, all of the local improvements from high-clock CPUs/RAM, etc., are lost across the fabric.
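To illustrate the baseline such kernel-bypass stacks improve upon, here is a minimal sketch of conventional socket latency tuning. The endpoint and payload are hypothetical, and a real trading deployment would use kernel-bypass or RDMA verbs rather than this path:

```python
# Minimal sketch of baseline TCP latency tuning; kernel-bypass stacks
# (e.g. Onload, RDMA verbs) replace this kernel path entirely in practice.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Disable Nagle's algorithm so small messages are sent immediately
# instead of being coalesced into larger packets.
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
# Endpoint and payload are hypothetical, for illustration only.
sock.connect(("198.51.100.10", 9001))
sock.sendall(b"NEW_ORDER...")
sock.close()
```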
For some perspective here, I think HedRat, Alex, and others are spot on. You would expect a specialized NIC to be in a system like this and not an Intel X710 or similar.
The dual 1GbE I actually like a lot. One can serve as an OS NIC to get all of the non-essential services off the primary application adapter, and one can serve a potential provisioning network. That is just an example, but I think this is the NIC configuration I would expect.
There is a specialized NIC and even room for another card, be it an accelerator or a different, custom NIC. If you have to hook your system up to a broker or an exchange, you’ll have to use standard protocols. The onboard Ethernet may still be useful for management or Internet access.
Does the storage support NVMe SSDs?