Power Consumption and Noise
Power for these units was provided by a 120W power supply. It is a generic “Replacement AC Adapter,” and generally, these have not been great over the years.
The power consumption was not what we expected. The 2x 10GbE and 4x 2.5GbE system with the Core i7 ended up being the lower-power system of the two. In Proxmox VE, we saw 12-14W at idle. At 100% load, the system was usually in the 39-40W range.
We saw the package power hitting 35W on both systems, and that was a bit shocking. We would have expected, or at least liked to have seen, a lower figure even at the cost of performance.
The noise on this one was also surprising. In our 34dba studio, we were hitting 36-37.5dba at 100% load. You might be able to hear it through a $1,000 Sennheiser microphone in the video, but this is not a loud system. It also feels like a system that should be louder, and that might be because the entire time we were testing it, we wanted to put a 40mm Noctua fan in it.
Key Lessons Learned
The first key lesson learned here is cooling. Here we have an attempt at a hybrid cooling setup, albeit one that saves a few cents on the thermal paste interfacing with the chassis. Since the system uses the chassis as a cooling element, the chassis gets hot.
What feels like the big miss of the system is cooling for the other side of the chassis. Even one or two 40mm fan mounting points and vents could make this much better. We did not see it fail, but, again, we feel the bottom section has too many components to go without some sort of airflow.
On the CPU side, here is the interesting part. Even though it was only a $20-40 upgrade, we actually would not get the Core i7-10510U again. You can see the performance and our Geekbench 6 challenge in the performance section, but there is something not captured in raw throughput.
These two charts show the system with stress-ng running, and if you look at the pink middle bars, it should become apparent what is going on. The Core i7 above is thermally limited and keeps trying to boost clocks, albeit with perhaps a 100-200MHz loaded advantage over the Core i5. What happens is that it is constantly running into limits, and thus the cores have to change P-states. If you look at Intel’s higher-end networking processors, AMD EPYC Bergamo, or Arm chips, they tend to focus on minimizing clock speed transitions since those transitions introduce jitter. The Core i5 is not ideal, but it is much more consistent under load at only a slightly slower clock speed than the Core i7. Our pick would therefore be the Core i5 if we purchased another one.
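If you want to see this behavior on your own unit, here is a minimal sketch of one way to watch it: sample each core’s reported clock from the Linux cpufreq sysfs interface while stress-ng runs. The sysfs paths are the standard ones on most distributions; the sampling interval and count are arbitrary.

```python
#!/usr/bin/env python3
# Minimal sketch: sample per-core clock speeds from sysfs every 0.5 s while a
# load such as stress-ng runs, to see how often cores bounce between P-states.
# Paths assume a standard Linux cpufreq sysfs layout; adjust if yours differs.
import glob
import time

FREQ_GLOB = "/sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_cur_freq"

def read_freqs():
    freqs = {}
    for path in sorted(glob.glob(FREQ_GLOB)):  # lexicographic order, fine for a quick look
        cpu = path.split("/")[5]               # e.g. "cpu3"
        with open(path) as f:
            freqs[cpu] = int(f.read()) // 1000  # kHz -> MHz
    return freqs

if __name__ == "__main__":
    for _ in range(60):                        # roughly 30 seconds of samples
        sample = read_freqs()
        print(" ".join(f"{cpu}:{mhz}MHz" for cpu, mhz in sample.items()))
        time.sleep(0.5)
```

Watching the per-core figures jump around on one CPU while staying flat on the other is exactly the jitter difference described above.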
The final key lesson learned is more of a “what the heck?” moment. This system may have ports that could make it a low-end Core i7 desktop, but it is using a NAS motherboard with custom form factor NICs and even six unused internal SATA ports. It was completely different from what we were expecting. When we ordered the units, we thought perhaps they had a low-cost riser to a standard PCIe NIC or OCP NIC, but we were very wrong on that front.
Key lessons learned: this system has many of them.
Final Words
At STH we get to see so many of these systems. Reviewing the next generation of a slightly faster system or one with an extra port is important for completeness. On the other hand, systems like this that have some crazy engineering are absolutely more fun.
The multiple 2.5GbE plus dual SFP+ 10GbE port configuration is one we have heard requested so many times in our other reviews that we think it is exactly the market being targeted with this system. It is somewhat eye-opening that the easiest way to build this system was to use a NAS motherboard with a custom NIC form factor, rather than just changing the sheet metal to accept a standard NIC.
Overall, this is one where I think we liked the concept, but it feels like it is going to get a refinement revision in the future, hopefully with one or two fans added.
Where to Buy
We purchased our units from AliExpress, but the team also found a brand carrying them on Amazon. Generally, the AliExpress brands like Topton are less expensive, and pricing changes a lot. Here are affiliate links to both storefronts.
- AliExpress under Topton Core i5
- AliExpress under Topton Core i7
- Amazon under Moginsok Core i5
- Amazon under Moginsok Core i7
Note: We may earn a small commission if you purchase through these links. That is how we have the budget to purchase these units for reviews.
I’d honestly love to see the 10Gb units as Debian Kubernetes hosts.
Or even better, find a compatible 4-6 bay SSD chassis (making use of those onboard SATA ports). With how cheap second-hand ~250-500GB enterprise SATA drives are, this is potentially looking like the most cost-effective way to make a strongly performant CephFS cluster.
I’m thinking the x520 would get the latency values low enough that one could realistically deploy high performance distributed storage in a home lab, without having to sacrifice their firstborn… I might have to give this a shot!!
As I read this article, I thought I was reading another installment of “What’s absurd to you is totally normal for us” on the “Fanless tech” website.
So this device has SATA ports that you may or may not be able to use. That’s a waste of a dollar or more in cost. And those Molex ports – more waste.
And those “over the top” CPUs that make the case hot. Does the case get as hot as a Texas concrete road at the peak of summer heat when it doubles as a frying pan?
Ok, interesting product but…a perfect case of how to stuff 10 pounds of stuff in a 2 ounce baggie.
No thanks.
Do the SFP ports negotiate at lower multigig speeds? Like 5 and 2.5?
@Michael, not if they continue to be routed to an Intel 82599 / X520 series network controller.
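For anyone who wants to check what their own unit will advertise, here is a small sketch that just dumps a port’s link modes via ethtool; the interface name is a placeholder and will differ on your install.

```python
#!/usr/bin/env python3
# Quick check of the link modes one of the SFP+ ports advertises.
# Look for the "Supported link modes" and "Speed" lines in the output.
import subprocess

IFACE = "enp2s0f0"  # placeholder name for one of the X520 SFP+ ports

out = subprocess.run(["ethtool", IFACE], capture_output=True, text=True, check=True)
print(out.stdout)
```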
Do they offer AMT management? Poor-mans KVM…
This is an excellent little box, add a bunch of SATA drives for bulk storage, and you got yourself an all-in-one node. Get 3 nodes, and now you have a Ceph cluster, and you can use DAC to have each node connect to two other nodes, not even going to need a switch. 2.5G for some inter-node communication and you’re essentially ready for Proxmox and Ceph or Rancher HCI.
@Patrick Did you try ESXi 8/8U1 during your testing? The VMware HCL lists a handful of X520-based NICs as compatible with 8, using the ixgben driver. Wondering if it could be coaxed into working if it’s not out of the box.
The review states, “We then tried the Core i7 system after it was done with weeks of stress-ng, and it failed Geekbench 6 as well. We did multiple tests, and it happened several times at different parts of the Geekbench 6 (6.0 and 6.10) multi-core tests.”
Since the system is not stable enough to complete one of the tests, how is this going to work as a server? Would extra cooling make it stable? Would downclocking the RAM fix the crashes? What about CPU speeds?
Without knowing why the system is unstable and how to fix it, I don’t see how the conclusions could be anything other than “doesn’t work, stay away.”
Earlier today, the YouTube thumbnail was nauseatingly positive about these little units, faulty as they were. Then the article says we don’t like the heat, the noise, the TDP, the stability, the chosen components…
Hey guys, running around today. Eric – we literally had stress-ng and iperf3 running on this unit for 30 days straight. The only thing that crashed it was Geekbench 6, but not 5. All we can do is say what we found. Steve – we tried 8. I usually try to stay safe on VMware since the HCL is so much pickier than other OSes.
MOSFET – I can tell you that is completely untrue. This was scheduled to go live at 8AM Pacific. I was heading to film something in Oregon, so everything was scheduled. The thumbnail only says 10GbE 2.5GbE, and that is all it has said. The title has not changed either. I am not sure what is “nauseatingly positive” when it is just factually the speed of the ports.
Man, I literally just bought (and had delivered) a Moginsok N100 solution with 4 x 2.5Gbit ports. Not sure how I missed this, as it would have completely changed my calculus.
Since Geekbench crashes at different places each run, this points to a hardware malfunction. My suspicion is continuous stress-ng creates too predictable a load to crash the system.
If I were to guess, the on-board DC-to-DC power regulators aren’t able to handle sudden changes of load when a fast CPU switches back and forth from idle to turbo boost (or whatever Intel calls it). Such power transients likely happen again and again during Geekbench as it runs each different test.
Repeated idle followed by all cores busy may not happen frequently in real-world use, but crashing at all is enough to ruin most server deployments I’d have in mind.
I wonder if the system would be stable after decreasing the maximum CPU clock speed.
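For anyone who wants to test that theory, here is a rough sketch of one way to cap the maximum clock on a Linux install via the cpufreq sysfs interface (run as root). The 2.3GHz cap is just an assumed test value, well below the i7-10510U’s boost clock; where cpupower is installed, `cpupower frequency-set -u 2300MHz` does the same thing.

```python
#!/usr/bin/env python3
# Rough sketch: cap every core's maximum cpufreq to rule out boost-related
# instability. Run as root. The 2300000 kHz (2.3 GHz) cap is only an example
# value, roughly the all-core region rather than the single-core boost clock.
import glob

CAP_KHZ = "2300000"  # assumed test value, adjust to taste

for path in glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_max_freq"):
    with open(path, "w") as f:
        f.write(CAP_KHZ)
    print(f"{path} -> {CAP_KHZ} kHz")
```

If Geekbench then completes reliably, that would point toward boost transients rather than a broader hardware fault.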
@Chrisyopher – Your N100 is great.
@Eric – we did not see it on the Linux-Bench scripts either. Those are pounding workloads over 24 hours. We only show a very small portion of those scripts here but those would be more short duration bursts and long bursts. There is something amiss, I agree, but it does not seem to be a simple solution.
I wonder if Geekbench is doing something with the iGPU that stress-ng doesn’t touch.
This whole thing looks like an interesting platform in completely the wrong case and cooling solution.
It also reinforces that Intel (or someone) needs a low-cost, low-power chip with a reasonable number of PCIe lanes. Ideally ECC too, although that’s getting close to Xeon-D and its absurd pricing.
The dual-port X520 on a PCIe 2.0 x4 interface is actually not able to run at full speed (it can only push ~14Gbps with both ports pulling traffic together). The older R86S with the Mellanox NIC is OK because that card runs at PCIe 3.0. This is a downgrade.
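The back-of-the-envelope math behind that ~14Gbps ceiling looks roughly like this; the protocol overhead figure is an assumption and varies with payload size:

```python
# Back-of-the-envelope check on the ~14 Gbps ceiling for an X520 on PCIe 2.0 x4.
# The ~14% protocol overhead figure is an assumption (TLP/DLLP headers, flow
# control); real numbers vary with payload size.
GT_PER_LANE = 5.0          # PCIe 2.0: 5 GT/s per lane
ENCODING = 8 / 10          # 8b/10b line encoding
LANES = 4
PROTOCOL_OVERHEAD = 0.14   # assumed packet/flow-control overhead

raw_gbps = GT_PER_LANE * ENCODING * LANES            # 16 Gbps after encoding
effective_gbps = raw_gbps * (1 - PROTOCOL_OVERHEAD)  # ~13.8 Gbps usable

print(f"raw: {raw_gbps:.1f} Gbps, effective: {effective_gbps:.1f} Gbps")
```

That lands right around the ~14Gbps the card can actually move with both ports loaded.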
@Michael, normally it is the transceiver which takes care of that – most 10GBaseT transceivers can do 2.5 and 5 GbE besides 10GbE.
Oh if only it had SFP28!
(just kidding)
Very interesting device with two SFP+ ports. I wonder if they have a new Alder Lake based NAS board coming out, so getting rid of the old stock.
Topton has been having issues with these with barebones customers. Customers are putting in off-the-shelf components, and they struggle in the heat and hang. I am not sure Topton’s supplied RAM/SSD are any better than anyone’s personal depot of parts, but these do run hot, and the SSD should probably have a heat sink along with adding a Noctua fan.
You have to admit, Topton and others have their ears to the home server/firewall market by adding the 10GbE ports. But to keep the prices down they are going through the back of the supplier warehouses looking for the right combination to complete the checklist.
With as hot as they run, I would surmise they don’t have a very long useful life.
I’d be interested in seeing packet per second benchmarks on these devices running pfsense or opnsense. Having an understanding of how well the platforms perform with a standardized routing or NAT workflow would be interesting.
Something I’m missing in this review is detailed routing tests, since this device is clearly intended to be used as a router.
This means that many people would buy this to put pfSense on it, so it would be nice if you included stats on that too.
pfSense LAN -> LAN (2.5 and 10gbit), LAN -> WAN (both 2.5 and 10gbit with firewall), and VPN (IPsec, WireGuard, etc.) speeds.
More technical people will understand that due to the x4 PCIe 2.0 link the 10gbit experience will be mediocre at best, but non-technical people should also get this kind of info :-)
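For reference, here is a rough sketch of how such a throughput matrix could be collected with iperf3 from a LAN client. The target addresses are placeholders for iperf3 servers sitting on the other LAN segment, behind the WAN, and across the VPN; packet-per-second testing with small UDP frames would need different flags.

```python
#!/usr/bin/env python3
# Sketch of a simple router throughput matrix: run iperf3 from a LAN client
# against servers on each path (LAN->LAN, LAN->WAN, over the VPN).
import json
import subprocess

TARGETS = {
    "lan-to-lan": "192.168.10.50",  # assumed iperf3 server on the other LAN segment
    "lan-to-wan": "203.0.113.10",   # assumed iperf3 server on the WAN side
    "vpn":        "10.8.0.2",       # assumed iperf3 server across WireGuard/IPsec
}

def run_test(host: str) -> float:
    """Return mean received throughput in Gbps for a 30-second, 4-stream run."""
    out = subprocess.run(
        ["iperf3", "-c", host, "-t", "30", "-P", "4", "--json"],
        capture_output=True, text=True, check=True,
    )
    result = json.loads(out.stdout)
    return result["end"]["sum_received"]["bits_per_second"] / 1e9

if __name__ == "__main__":
    for name, host in TARGETS.items():
        print(f"{name}: {run_test(host):.2f} Gbps")
```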
What units support Coreboot? What is the purpose of having a firewall that can be accessed at the chip level and defeated by a powerful adversary?
I bought many small PCs for pfSense installations. All the rackmounts have failed after 4 years (PSU or SSD). The fanless units were much more reliable but not recommended for non-technical users because you never know if the unit is ON or OFF since the LEDs are sometimes swapped between disk and power! It also by default tries to boot a not-installed Windows with Secure Boot instead of pfSense. If you have customers using it in a remote location, that’s an issue. Also, too many pfBlocker rules can cause the unit to get too hot and crash, especially when stupid users put the unit in a closed cabinet!
My policy is to use second-hand Dell R210, R220, and R230 servers with the iDRAC Enterprise option and a new SSD. You just have to choose a CPU with AES hardware acceleration in order to improve OpenVPN and IPsec speeds. The iDRAC option allows you to repair everything when a software update goes wrong: just ask the user to plug their mobile phone into a PC on the same LAN in USB tethering mode and you are done.
Yes, a rackmount server is much bulkier than a mini PC, but these Dells have half the depth of other models like the R610, and the 3GHz or 3.5GHz CPUs perform much better on single-core tasks than the more expensive models.
I tried to order from AliExpress, and my order was canceled; the i7 is out of stock, and they informed me that “it is expected that we will launch an 11th generation cost-effective CPU in mid-September, you can buy it at that time.”
Happy with the previously reviewed firewall, I’d love to see more reviews. I want SFP28 next, absolutely.
Nadiva – Recording that video in about a week and a half.
I wonder how hard it would be to put this thing in a little NAS case with 6×2.5″ bays and some fans… it’s pretty much ideal, but I definitely like the NAS option.
If anyone has information on where to buy the motherboard alone, please write here.
Looking forward to the updated R86S review!
Just got the new one with an Intel U300E and DDR5! Going to try it out with Proxmox VE later.
https://imgur.io/VO0PEMa
@Noobuser
Do you have any updates about this model? I’m looking for a future router with 10GbE SFP ports for home.
Is it stable? Does it crash in tests like the previous ones? Does it get hot?
Hi,
would it be possible to know how the console port works on this device? What kind of operations are supported through this console port? Thank you very much.
Thank you