Something we are trying to do more of at STH is highlighting when things go less than perfectly. We now operate a fleet of several hundred nodes at any given time, and things fail. In the STH forums, we have had threads for years where members can post their hardware failures. This is the inaugural failure of 2022 for the new thread.
Farewell Seagate Exos X12 12TB Enterprise Hard Drives
These two Seagate Exos X12 12TB drives were two of 16 that we have in two 8-bay NAS units we use mostly just for backup storage.
Both drives were built in February 2019 and put into service many months later. That makes the drives less than three years old. What was quite interesting about these failures is that both drives were next to one another in the NAS. It is actually more common for drives to experience clusters of failures than individual random failures. Sometimes hard drive failures chain from one drive to the next, but the chassis, backplane, and power are other common sources of multiple disk failures. If you ever experience multiple disk failures, especially in a short time period on a single backplane, it is worth exploring chassis-level challenges.
In fact, this is not the only pair of drives that we had fail in the first two weeks of 2022. We will have a story about a doomed HPE ProLiant cluster in the near future as well. Luckily, given how we had this system configured, we did not lose any data here, so restoring from backups was not necessary.
Final Words
Many assume that buying “enterprise” drives is tantamount to having extremely reliable drives that will not fail. This is, of course, not the case. Always assume drives can fail.
As we kick off the new year, this failure was nowhere near as significant as the DIMM fire that kicked off 2021’s thread. Still, if you have failures and want to share them, check out the new 2022 share your hardware failures thread in the STH forums.
I wonder if vibration has a role to play when multiple closely located disks fail.
It has always been a mystery to me why enterprise storage racks don’t have vibration damping for each disk. Disks, fans, etc. create a lot of vibration; even closing the door on your 19″ rack can be a threat. If a loud sound can kill a disk (when fire protection kicks in), why not those vibrations? It’s a fragile piece of equipment.
We actually have a 60-bay JBOD that’s full of this exact model that was installed in late 2018. They are the same lot number and were built on the same day. Over the course of the last three years, more than half of them have failed. This is in contrast with the more than 1,000 other mechanical hard drives in and around the same rack space that have experienced normal failure rates. We think the early production versions of this model had some issues, as the warranty replacements have not failed.
Very interesting data point Lester. Thank you for sharing.
MrCal, at this point though, every disk manufacturer knows how these drives will be installed in servers. My assumption is that they take the environment into consideration when designing drives.
No manufacturer is perfect, but……
Friends don’t let friends buy Seagate.
They are the frontier/spirit airlines of hard drives.
I hate to extrapolate from mostly anecdotal experience, but across my career I’ve seen more Seagates fail, across the whole lineup, than anything else. At some point it becomes more than an anecdote.
I don’t have much spinning media anymore. However, to contribute something useful, for 12TB disks in particular, I’ve had about half of the IronWolf Pro drives I have in a larger QNAP appliance bite the dust. I’m just done.
Don’t like WD, buy WD drive, still don’t like WD.
Don’t like SG, buy WD drive, still don’t like SG.
HPC admin here.
We have 5 JBODs with these, and over 50% went bad. We had Seagate come on site; they wouldn’t tell us or our vendor what actually happened with this model. Eventually the RMAs were coming back as the 14 and 16 TB models mapped down to 12 TB. Then they stopped making them. Now Seagate is replacing the rest of our lot. They still won’t tell us what happened with the model, but avoid!
Interesting! I have lots of 10TB IronWolfs and they perform as expected. I did not go for the 12TB at the time because second best was enough, and the ones I had showed few problems. If only more people understood how fragile HDDs really are. I have had LTO drives fail, and that is so much better – a tape drive failing to write is better than a dead HDD. Yes, it costs money to replace, but it is just money; the data is still there. Having been through a couple of tape drive failures and countless HDD failures, I prefer the costlier tape drive replacement to dealing with HDDs. With all the hot spares and all, it is such a big gamble and stress. Many years ago I experienced a firmware failure on one Seagate drive series where all of my drives failed almost simultaneously, and the backup server also had the same drives, so it failed too – that was something. 3 copies, 2 places, 2 types of media. Everything else is gambling.
“Always assume drives can fail.”
Always assume drives will fail, so make sure to plan for the inevitable.
Redundant is the R in RAID.
I have nowhere near those size RAIDs, but after having a whole lot of five Seagate drives fail about ten years ago, I have only bought Western Digital Black drives and have never had one fail yet. Fingers crossed. They are the only drives I buy or recommend anymore.
There was a manufacturing defect with the X12s. I replaced a lot of them. Twice. Those that didn’t fail in the system they were in were just masked, as they almost immediately failed in another one.
We lost a lot of Seagate X12 drives too. Like others, ours is in the 40–50% range across hundreds of drives.
We had more than 10% of the X12s fail in large JBODs in less than 18 months. This was hundreds of failed drives. It’s not a population of millions, but I’d say our experiences echo what’s been said above.
How about the 16TB Exos X16 drives? I have one in my desktop that holds games, and it performs really well.
It’s silent too, and I haven’t had any Seagate drive fail on me since getting one.
I know the Exos X16 is an enterprise-class drive, but it works just fine in a desktop. I’m running a copy of HDSentinel PRO on every device I’m using.
I’m also using solid state drives, but more as a boot drive, and that’s a WD Black version.
My personal choice is still WD and SanDisk for SSDs and Seagate for hard drives.
16 × Seagate X12 (RAID10) backup pool.
3 years, differential daily backups, just over 70% of the space consumed so far. Weekly disk-hardware health check via LSI automation (cluster scan).
No failures as of yet. I keep them in a Supermicro chassis in a half-height enclosed rack, which I routinely climb over (new wire runs, etc.).
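For context, the capacity figures in that comment are easy to sanity-check. A minimal sketch (the 12 TB per-drive size comes from the article; the rest is straightforward RAID10 arithmetic, since mirrored pairs halve raw capacity):

```python
# RAID10 capacity check for a 16-drive pool of 12 TB disks.
DRIVES = 16
DRIVE_TB = 12  # Seagate Exos X12 capacity, per the article

raw_tb = DRIVES * DRIVE_TB          # total raw capacity: 192 TB
usable_tb = raw_tb / 2              # RAID10 mirrors halve it: 96 TB
consumed_tb = round(usable_tb * 0.70, 1)  # "just over 70%" used

print(raw_tb, usable_tb, consumed_tb)  # 192 96.0 67.2
```

So "just over 70%" of this pool corresponds to roughly 67 TB of backup data on 96 TB usable.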
It always comes down to ‘batch’, someone in some factory decided to do xyz instead of abc for whatever reasons, same badge and sticker, but the innards vary slightly, maybe even something as simple as tolerances. Unfortunately manufacturers aren’t in the business of coming clean as that would inevitably demand a ludicrously expensive recall.
Going forward, I’ve already decided to jump onto the Toshiba bandwagon for enterprise spinners. Seagate’s quality control is always a gamble in my personal opinion, and from the sound of things it seems we’re looking at a near 50/50 gamble on whether you get a good batch.
Brand avoidance in drives is like brand avoidance in cars–doesn’t really avoid the problem.
I think every one of us here has had a drive fail from almost every manufacturer. Sometimes in large numbers, indicating a manufacturing issue that was addressed in the replacement drives. Sometimes just one-offs at our site.
But almost any way you look at it, it’s not really smart to avoid a manufacturer based on one experience or even a series of experiences.
Case in point is Seagate. When Mr. Shugart (a hard drive pioneer) was at the helm, they were the top dog, no question about it. But then the ‘home’ computer category was born, and manufacturers flocked to make much cheaper garbage for those machines, cheap enough to entice Seagate’s core customers to shift purchasing patterns. The next thing we know, Seagate is in trouble and Mr. Shugart was ousted from his own company.
Then came a decade of garbage drives from everyone, from IBM to Seagate to WD and others, where certain drives were good but certain drives were a disaster, and you had to know which was which.
Eventually consolidation and improvements brought us to where we are today–where solid manufacturing is available to the big guys who can partition their product precisely based on the target market and price, and there’s enough plateauing in innovation that everyone is essentially on the same plane. Leaving the only thing that can hurt–the odd lemon or dud that wreaks a little havoc.
I didn’t start looking at Seagate or WD seriously again until the Exos line, and WD is basically the same solid HGST design. I’m sure each will have some mistakes in manufacturing, but unlike cars, which require a recall when this happens, all we have to do is follow the golden 3-2-1 rule and we’ll be fine. :)
The issue is that mechanical drives are hot garbage being asset-sweated at the end of their development cycle.
If you can afford the upfront price, then SSD _IS_ cheaper in the long run. They last longer, usually have greater endurance than high-capacity HDDs, and consume vastly less power; this has knock-on effects on the operational costs of your DC (and SSDs can tolerate much higher heat excursions than HDDs, so cooling requirements are reduced even further).
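The power argument above can be made concrete with rough numbers. A hedged sketch: the wattages and electricity price below are illustrative assumptions, not measured figures, and cooling overhead (the knock-on effect mentioned) is excluded:

```python
# Rough 5-year, 24/7 power-cost comparison per drive.
HDD_WATTS = 8.0       # assumed typical 3.5" enterprise HDD draw
SSD_WATTS = 5.0       # assumed typical enterprise SATA SSD draw
PRICE_PER_KWH = 0.15  # illustrative electricity price in USD
HOURS_5Y = 5 * 365 * 24  # 43,800 hours

def power_cost(watts: float) -> float:
    """Energy cost over five years of continuous operation, in USD."""
    return watts / 1000 * HOURS_5Y * PRICE_PER_KWH

hdd_cost = power_cost(HDD_WATTS)
ssd_cost = power_cost(SSD_WATTS)
print(round(hdd_cost, 2), round(ssd_cost, 2))  # 52.56 32.85
```

Under these assumptions, the per-drive energy saving is modest on its own; the argument mostly pays off at scale and once reduced cooling load is factored in.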
Be careful assuming WD is HGST. Most of those lines were sold to Toshiba.