Supermicro SYS-112C-TN Block Diagram and Topology
The big change with this system is the topology. There is no longer a PCH in the Intel Xeon 6 platforms, which is a huge shift. As a result, everything goes into the CPU. Another big change is that the platform has a huge number of PCIe lanes.
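If you want to see how that flat topology presents itself to the OS, here is a minimal sketch, not from our testing, that walks the standard Linux sysfs PCIe attributes and prints each device's negotiated link width and speed. Everything in it is generic Linux; no Supermicro-specific paths are assumed.

```python
# Minimal sketch: enumerate PCIe devices and their negotiated links via sysfs.
# With no PCH, the root ports all hang directly off the Xeon 6 root complexes.
import glob
import os

for dev in sorted(glob.glob("/sys/bus/pci/devices/*")):
    addr = os.path.basename(dev)
    try:
        with open(os.path.join(dev, "current_link_width")) as f:
            width = f.read().strip()
        with open(os.path.join(dev, "current_link_speed")) as f:
            speed = f.read().strip()
    except OSError:
        continue  # not every device exposes link attributes
    print(f"{addr}: x{width} @ {speed}")
```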

Somewhat related to the topology is the motherboard itself. Despite the size of the 1U system, this is the entire motherboard. There are no traditional PCIe slots. Instead, it is a relatively tiny board with M-XIO x16 slots for the risers, the OCP NIC/ AIOM slot, a DC-SCM slot for management, and the rest of the I/O presented as MCIO connectors.

Another small but important change is that the Intel Xeon 6781P has two compute tiles, so in NPS=2 mode it looks almost like a top-end 4th Gen Intel Xeon Scalable system. The big differences are that the caches are much larger and that this is all a single physical socket.
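For those curious how the two compute tiles show up, here is a quick sketch, assuming a Linux host with that mode enabled, that reads the standard sysfs NUMA entries and prints the CPU list per node. On a setup like this you would expect two nodes from the single socket.

```python
# Minimal sketch: list NUMA nodes and their CPUs from standard Linux sysfs paths.
# A single Xeon 6781P in NPS=2-style mode should present two nodes.
import glob
import os

for node in sorted(glob.glob("/sys/devices/system/node/node[0-9]*")):
    name = os.path.basename(node)
    with open(os.path.join(node, "cpulist")) as f:
        cpus = f.read().strip()
    print(f"{name}: CPUs {cpus}")
```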

Next, let us quickly move on to the management.
Supermicro SYS-112C-TN Management
The server uses an ASPEED AST2600 BMC for its out-of-band IPMI management functions.

In the interest of brevity, the Supermicro IPMI/ Redfish web management interface is what we would expect from a Supermicro server at this point.

Of course, there are features like the HTML5 iKVM as we would expect, along with a randomized default password. The old ADMIN/ ADMIN credentials will not work; you can learn more about why in Why Your Favorite Default Passwords Are Changing.
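Since the BMC exposes a standard Redfish service alongside the web UI, here is a minimal sketch of pulling the service root and listing systems with Python. The BMC address and credentials are placeholders; on these servers you would use the randomized password from the service tag, and the self-signed BMC certificate check is skipped for brevity.

```python
# Minimal sketch: query the BMC's standard Redfish service root and list systems.
import requests

BMC_HOST = "https://bmc.example.local"   # placeholder BMC address
AUTH = ("ADMIN", "randomized-password")  # placeholder credentials

# verify=False because BMCs typically ship with self-signed certificates.
root = requests.get(f"{BMC_HOST}/redfish/v1/", auth=AUTH, verify=False).json()
systems_uri = root["Systems"]["@odata.id"]
systems = requests.get(f"{BMC_HOST}{systems_uri}", auth=AUTH, verify=False).json()
for member in systems.get("Members", []):
    print(member["@odata.id"])
```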
Next, let us talk about performance.
Supermicro SYS-112C-TN Performance
We used this platform in our Intel Xeon 6700P and 6500P Granite Rapids-SP for the Masses Initial Benchmarks and First Look piece with the Intel Xeon 6781P.

Since we have the same platform, and this is our first 1P Intel Xeon 6700P system, we are just going to show the performance from that review.
STH nginx CDN Performance
On the nginx CDN test, we are using an old snapshot and access patterns from the STH website, with DRAM caching disabled, to show what performance looks like fetching data from disks. This requires low-latency nginx operation plus an additional step of low-latency I/O access, which makes it interesting at a server level. Here is a quick look at the distribution:

Just as a quick note, the configuration we use is a snapshot of a live configuration. nginx is one of the workloads that is very well optimized for Arm and cloud-native processors. In other words, on modern CPUs, just having more cores helps. Even with that, we were a bit surprised to see the part perform slightly better than the Xeon 6780E, Intel’s cloud-native CPU in this socket. Of course, the Xeon 6781P is benefiting from more threads and 20W more TDP headroom. We should also note that we are not using QAT offload here, which would be a significant boost for the Xeon 6 platforms since it can take away most of the SSL overhead.
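For readers who want a feel for what a latency-distribution run looks like, here is a rough sketch, not the STH harness, that hits an nginx node with concurrent requests and reports p50/p99 latencies. The URL list and worker count are placeholders; the real test replays recorded STH access patterns against the disk-backed content.

```python
# Rough sketch: fetch URLs concurrently from an nginx node and report latency percentiles.
import concurrent.futures
import statistics
import time
import urllib.request

URLS = ["http://cdn.example.local/post-1.html"] * 1000  # placeholder workload

def fetch(url: str) -> float:
    start = time.perf_counter()
    with urllib.request.urlopen(url) as resp:
        resp.read()
    return time.perf_counter() - start

with concurrent.futures.ThreadPoolExecutor(max_workers=64) as pool:
    latencies = sorted(pool.map(fetch, URLS))

print(f"p50 {statistics.median(latencies) * 1000:.1f} ms, "
      f"p99 {latencies[int(len(latencies) * 0.99)] * 1000:.1f} ms")
```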
MariaDB Pricing Analytics
This is a very interesting one for me personally. The origin of this test is that we have a workload that runs deal management pricing analytics on a set of data that has been anonymized from a major data center OEM. The application effectively looks for pricing trends across product lines, regions, and channels to determine good deal/ bad deal guidance based on market trends to inform real-time BOM configurations. If this seems very specific, that is because it is; the big difference between this and something deployed at a major vendor is the data we are using. This is the kind of application that has moved to AI inference methodologies, but it is a great real-world example of something a business may run in the cloud.
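To make that more concrete, here is a sketch of the flavor of aggregate query such a pricing-analytics job runs against MariaDB. The connection details, table, and column names are all hypothetical placeholders, not the OEM dataset’s actual schema.

```python
# Sketch of the kind of pricing-trend aggregation the workload runs.
# The schema (quotes table, discount_pct, and so on) is hypothetical.
import mariadb

conn = mariadb.connect(user="analyst", password="placeholder",
                       host="127.0.0.1", database="deals")
cur = conn.cursor()
cur.execute("""
    SELECT region, product_line, channel,
           AVG(discount_pct) AS avg_discount,
           COUNT(*)          AS deal_count
    FROM quotes
    WHERE quote_date >= DATE_SUB(CURDATE(), INTERVAL 90 DAY)
    GROUP BY region, product_line, channel
    ORDER BY avg_discount DESC
""")
for region, product_line, channel, avg_discount, deal_count in cur:
    print(region, product_line, channel, f"{avg_discount:.1f}%", deal_count)
conn.close()
```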

Here we can see a case where the P-cores really power ahead of the E-cores. Something also worth noting is that if you want to estimate the performance of the Xeon 6700P and have results for the Xeon 6980P, it is not too far off from the core count difference in many of these workloads, assuming you are not memory bandwidth bound. The 6781P is giving us a little bit more performance per core, which is common on lower core count parts.
Our baseline here is the dual-socket Intel Xeon Gold 6252. We selected that as part of our cloud-native workload series because it is what a major OEM told us they sold the most of in the 2nd Generation Xeon Scalable line. A 24-core part is not top bin, but it was fairly high in that lineup since the 28-core parts were uncommon. That still points to around a 5:1 consolidation ratio, which is great.
STH STFB KVM Virtualization Testing
One of the other workloads we wanted to share is from one of our DemoEval customers. We have permission to publish the results, but the application being tested is closed source. This is a KVM virtualization-based workload where our client is testing how many VMs it can have online at a given time while completing work under the target SLA. Each VM is a self-contained worker. This is very akin to VMware VMmark in terms of what it is doing, just using KVM to be more general.
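As a rough illustration of the "how many VMs are online" side of that test, here is a minimal sketch using the libvirt Python bindings to count active KVM guests on a host. It is not the client's harness, just the kind of check such a scaling test performs while it verifies SLA compliance.

```python
# Minimal sketch: count running KVM guests on the local host via libvirt.
import libvirt

conn = libvirt.open("qemu:///system")  # assumes a local libvirtd managing KVM
active = conn.listAllDomains(libvirt.VIR_CONNECT_LIST_DOMAINS_ACTIVE)
print(f"{len(active)} VMs online")
for dom in active:
    print(" -", dom.name())
conn.close()
```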

Interestingly enough here, the E-core Xeon is doing well with smaller VM sizes, or many smaller VMs. As the VM size increases, the P-core Xeon seems to make up for only having 80 cores instead of 144. We have the top-bin Ampere and AMD numbers on here, as well as the Intel Xeon 6980P. We need to reiterate that we are not using a top-bin Xeon 6700P SKU. This is only 80, not 86 cores, and it is the single-socket optimized part. Still, it is doing fairly well.
Next, let us discuss the power consumption of a system like this.
That server looks fun! I would love to see a bird’s-eye view of the server.
Everything seems so small.
It’s not good old SM anymore, but some super-proprietary Dell/HP-like weird overpriced systems. It all started with that disgusting “To maintain quality and integrity, this product is sold only as a completely-assembled system”.
What’s going on with all the USB 2.0 in the block diagram? It looks like USB is routed through the DC-SCM for management, and then broken out to each of the MCIO connectors. I don’t even see a USB host controller coming out of the Intel SoC. This seems like a major shift; my guess is that USB 2.0 is being used as a control plane for CPLDs and similar devices.
For a new system I feel like I’d be looking at the SSG-122B-NE316R or ASG-1115S-NE316R with 16 EDSFF drives, instead of just 12 older 2.5″.
@Iaroslav – actually, this system isn't proprietary. It's the new DC-MHS form factor.