It has been almost two years since Intel announced its Optane wind-down. While most SSDs spend the majority of their days running read-intensive workloads, there are a few applications that simply need high-endurance storage because they are under constant write pressure. Today, we are going to look at the Solidigm D7-P5810, an 800GB SSD that uses SLC NAND to deliver 50-65 drive writes per day (DWPD) of endurance and consistent performance.
Solidigm D7-P5810 800GB Overview
The Solidigm D7-P5810 arrived in its lower-capacity form: an 800GB PCIe Gen4 NVMe SSD in a 2.5″ form factor. Something Solidigm offers that not all of its competitors do is a 1.6TB version of the drive.
With fresh Solidigm branding, the Solidigm D7-P5810 has the typical metal case finish we have seen on several generations of Solidigm drives.
The drive itself is a familiar 2.5″ model. We also saw a diagnostic port on the front of the drive.
Here is the U.2 connector.
800GB in a modern SSD is not a colossal capacity. We have already shown a 61.44TB SSD. Something that is fun to think about is that if that larger drive, at 76.8x the capacity, were a 1 DWPD drive, it would have a similar total petabytes written (PBW) figure as an 800GB drive at 76.8 DWPD.
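Here is a quick back-of-the-envelope sketch of that equivalence. The 5-year warranty period is our assumption for illustration, although the comparison holds for any fixed period since capacity times DWPD is the same on both sides:

```python
# Sketch of the DWPD-to-PBW equivalence from the paragraph above.
# Assumes a 5-year warranty period for the DWPD rating (our assumption).
WARRANTY_DAYS = 5 * 365

def pbw(capacity_tb: float, dwpd: float, days: int = WARRANTY_DAYS) -> float:
    """Total petabytes written = capacity * drive writes per day * days."""
    return capacity_tb * dwpd * days / 1000  # TB -> PB

# A 61.44TB drive at 1 DWPD...
print(pbw(61.44, 1.0))   # ~112 PBW
# ...matches an 800GB drive written 76.8 times per day (61.44 / 0.8 = 76.8x).
print(pbw(0.8, 76.8))    # ~112 PBW
```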
This is not meant for capacity. Instead, this is meant for heavy write-pressure applications, such as logging, some database functions, and so forth, where consistent low-latency performance is required. Much of the industry has transitioned to larger TLC and QLC SSDs as data center SSD vendors target the capacity hard drive market. That leaves an opening in the high-performance, lower-capacity space being vacated by Optane.
Solidigm gives two different endurance figures for this drive: 50 DWPD for random write workloads and 65 DWPD for sequential write workloads. Random writes cause more write amplification because of how data ends up placed on the NAND, so the endurance rating varies with how data is written. In terms of overall PBW, this drive is rated at 73 PBW.
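Those figures are consistent with each other if we assume the usual 5-year warranty term (an assumption on our part; the term is not quoted above) and the 50 DWPD random write rating:

```python
# Sanity check of the rated endurance, assuming a 5-year warranty period.
capacity_tb = 0.8      # 800GB drive
dwpd_random = 50       # random write endurance rating
days = 5 * 365

pbw = capacity_tb * dwpd_random * days / 1000  # TB -> PB
print(pbw)  # 73.0 -> matches the 73 PBW rating
```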
By storing one bit per cell, SLC NAND holds one-third to one-quarter of the data per cell compared with the TLC and QLC NAND in other common SSDs. While that decreases capacity, it is much easier to maintain and read/write proper charge levels in SLC cells. As a result, we get higher endurance and higher performance.
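As a minimal illustration of why fewer bits per cell helps: each extra bit doubles the number of charge states the controller has to program and distinguish, which shrinks the voltage margin between levels.

```python
# Bits per cell vs. distinct charge states per NAND cell type.
# Fewer states means wider voltage margins, which is where SLC's
# endurance and latency advantages come from.
for name, bits_per_cell in [("SLC", 1), ("MLC", 2), ("TLC", 3), ("QLC", 4)]:
    charge_states = 2 ** bits_per_cell
    print(f"{name}: {bits_per_cell} bit(s)/cell -> {charge_states} charge states")
```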
Now, we have had the chance to look at one in a hands-on review. Let us get to that next.
Intel SSDs were my go-to during the long 6Gb/s SATA era (while supporting 5K Linux servers).
Dumb question:
Would the QLC 60TB running in SLC mode (15TB) perform as well as this? Or would the excess capacity slow it down?
You mentioning it and the SLC mode stuff made me realize we have the capability for 15TB SLC SSDs and nobody is really putting anything into that.
Give us back Optane!
Finally, SLC.
This is actually not true SLC; it has QLC dies run in pure pseudo-SLC mode.
Even more hilariously, the QLC drives are 5LC running in QLC mode.
Optane is too good of a tech to allow it to die. Fixes 100% of the problems with NAND.
SLC gets you part of the way there but with the same limitations as all NAND.
I just finished our new SAN – all 2S SPR, 2TB, 2x Bluefield 3 (400GbE), 2x ConnectX7 (400GbE) – 24 x NVMe in a 2U Supermicro chassis.
Tier 0 is 4 servers and 280TB of Optane storage (4x70TB) ZFS with single parity.
Tier 1 is 12 servers and 4PB of Intel Enterprise NAND and dual parity ZFS.
Tier 2 is three Supermicro 45-drive top loaders with 14TB Exos dual-port 12Gb/s SAS drives. These are limited to ConnectX6 and 2x100GbE / 200GbE.
Tier 3 is soon to be decommissioned Supermicro 90 drive top loaders.
I picked up the Optane and NAND drives for less than half cost after the Solidigm-branded drives started appearing. Still have over 2PB of NAND and 300TB of Optane in reserve.
Intel NAND was always superior to other NAND and Optane has no peer.