# disk recommendation 2021



## rootbert (Jan 4, 2021)

I am looking for an upgrade on my disks ... I do not follow the developments on disks. However, I have heard that conventional magnetic recording (CMR) based disks are recommended for use with ZFS. And I have read that the WD Red disks I was quite satisfied with had some issues (some users in various places said they "are not the real WD Reds"); I don't know whether those are rumors or not. I do not need super-fast disks, but found that 5400 rpm models are rare and not cheaper either. Reliability is my utmost goal, followed by noise. I will buy 4 of them, using them in a stripe of 2 zmirrors on top of geli-encrypted partitions; they will run in my workstation, which is basically online 24/7. Size per disk should be > 10 TB. So far the "Toshiba Enterprise Capacity" disks look promising - 5 years of warranty and 2.5 million hours MTTF, and the power drain also seems low, which is nice.

According to Backblaze's disk statistics, HGST seems to deliver high-quality disks; however, they are out of my budget - their price per TB is more than twice that of the Toshiba.

What is your experience with recent(ish) disk purchases (2019, 2020), especially regarding the Toshiba and WD? - any recommendations?


----------



## SirDice (Jan 4, 2021)

rootbert said:


> However, I have heard that conventional magnetic recording (CMR) based disks are recommended for use with ZFS.


I don't know where you got this recommendation from, but it's not entirely correct. There's nothing wrong with using SSDs with ZFS.

However, compared to HDDs SSDs are still quite expensive if you want _large_ storage capacity. So in this respect HDDs are still quite useful.


----------



## rootbert (Jan 4, 2021)

There are various blog entries and sites (e.g. here - final words); especially when resilvering a ZFS pool, SMR disks seem to deliver really bad performance (not that this is a planned scenario, but I had bad experience with Seagate Archive HDDs). I am looking for spinning disks for data storage - personal photo storage and various VMs and jails for development at my job; everything performance-critical is running on NVMe SSD storage.


----------



## SirDice (Jan 4, 2021)

rootbert said:


> There are various blog entries and sites (e.g. here - final words); especially when resilvering a ZFS pool, SMR disks seem to deliver really bad performance


There are issues with SMR, yes. But the remark gave the impression it wasn't about SMR, and that conventional HDDs were being preferred over SSDs in general.


----------



## ralphbsz (Jan 4, 2021)

First step: Write down your requirements. How much capacity do you need? What speed do you need? Is your performance requirement sequential bandwidth, random seeks, a mix? How reliable do you need it? What is your backup strategy? And how much can you afford?



rootbert said:


> Reliability is my utmost goal,
> ...
> I will buy 4 of them using them in a stripe of 2 zmirrors on top of geli-encrypted partitions,


You say that you are interested in reliability. But then you build a system that uses 4 physical disks to give you 2 disks' worth of capacity, yet is only guaranteed to tolerate one fault. If the number "4" is determined by budget, and the number "2" by capacity need, then set up a 4-disk RAID-Z2 pool, and you get the ability to tolerate any two faults.
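To make that concrete, here is a small sketch (hypothetical disk names, whole-disk failures only) counting which two-disk failure combinations each 4-disk layout survives:

```python
from itertools import combinations

DISKS = ["d0", "d1", "d2", "d3"]  # hypothetical 4-disk pool

def mirror_stripe_survives(failed):
    """Stripe of two mirrors, (d0,d1) and (d2,d3): the pool survives
    as long as each mirror keeps at least one working disk."""
    return not {"d0", "d1"} <= set(failed) and not {"d2", "d3"} <= set(failed)

def raidz2_survives(failed):
    """RAID-Z2 over 4 disks survives any combination of up to two failures."""
    return len(failed) <= 2

for layout, ok in [("2x mirror", mirror_stripe_survives), ("raidz2", raidz2_survives)]:
    survived = sum(ok(list(c)) for c in combinations(DISKS, 2))
    print(f"{layout}: survives {survived}/6 two-disk failure combinations")
# 2x mirror: survives 4/6; raidz2: survives 6/6
```

In other words, a second failure kills the mirror stripe in 2 of the 6 possible two-disk combinations (when the partner of the already-failed disk dies), while RAID-Z2 shrugs off all of them.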



> According to backblazes disk statistics HGST seems to deliver high quality disks, ...


The Backblaze data is the most accurate disk reliability information that is available to the public. If you want reliability, follow their statistics. If you can't afford that, you won't get reliability.



> What is your experience with recent(ish) disk purchases (2019, 2020), especially regarding the Toshiba and WD? - any recommendations?


Disk failure rates are under a percent per year (calculate the annual failure rate from 2.5 million hours). That means that someone would have to have several hundred disks to even see a few failures in two years (purchases in 2019 or 2020). Few amateurs who post here have that many disks. To make accurate measurements of disk failure rates (more than 1 or 2 failures), someone would have to have tens of thousands of disks. There are people who have that many or more (for example EMC, HP, IBM, Oracle, or Amazon, Microsoft, Google, Tencent, Baidu), but they don't publish their statistics. Your best bet is looking at the Backblaze data.
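The arithmetic behind that statement, as a quick sketch (using the standard exponential-failure approximation to convert an MTBF/MTTF figure into an annual failure rate):

```python
import math

def annual_failure_rate(mtbf_hours: float) -> float:
    """Approximate AFR, assuming exponentially distributed failures."""
    hours_per_year = 8766  # 365.25 days
    return 1 - math.exp(-hours_per_year / mtbf_hours)

# Toshiba's quoted 2.5 million hours MTTF:
afr = annual_failure_rate(2_500_000)
print(f"AFR ≈ {afr * 100:.2f}% per year")  # prints: AFR ≈ 0.35% per year
```

At roughly 0.35% per year, you would indeed need hundreds of drives running for a couple of years to expect even one or two failures.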

I could tell you that I've had two HGST disks (one since 2014, one since 2016), and neither have failed. Their predecessors were two Seagate disks (bought in 2009 or 2010), and both failed. But (a) you are interested in recent data for different manufacturers, and (b) nobody should reach any conclusions from measurements based on just a handful of disks.


----------



## rootbert (Jan 4, 2021)

ralphbsz said:


> First step: Write down your requirements. How much capacity do you need? What speed do you need? Is your performance requirement sequential bandwidth, random seeks, a mix? How reliable do you need it? What is your backup strategy? And how much can you afford?


As I wrote: I need a minimum of 20 TB net storage. As mentioned before, performance is not important - the data transfer rates and access times of current disks via SATA are enough. The backup strategy should not bother you, since I am just interested in your experience with recent disks/manufacturers. Price: 20-30 € per TB.

I am well aware that there is not much interesting statistical data available; I just wanted to know about your personal experience with some disks/disk series/manufacturers.

And also one point which leaves room for discussion: what is your opinion on helium-filled disks? I mean, the physics behind them and the benefits are clear. However, I am a little sceptical about their end of life. A manufacturer generally designs to meet their datasheet and that's it - if a disk survives longer, it is "just a nice benefit". But do you share my suspicion that helium leaking over time leads to earlier failure compared to air-filled disks? I use older disks as cold/archive storage, which works fine - saving 160 GB of data on 6 pieces of 15-year-old 160 GB SATA drives just to give them some use. The Backblaze statistics with helium-filled HGST disks show I am wrong ... at least for one highly priced manufacturer.


----------



## drhowarddrfine (Jan 4, 2021)

They also sound like a duck when they're spinning.


----------



## msplsh (Jan 4, 2021)

I store my stuff on CMR/PMR Seagate Enterprise drives.  Used to be Barracuda ES, Constellation ES, "Enterprise Capacity", and now Exos 7e.

(I recommend these for reliability, you can determine if they meet your other parameters)


----------



## garry (Jan 4, 2021)

rootbert said:


> I am looking for an upgrade on my disks ... HGST seems to deliver high quality disks, however, they are out of my budget, their price per TB is more than twice of the Toshiba.
> 
> What is your experience with recent(ish) disk purchases (2019, 2020), especially regarding the Toshiba and WD? - any recommendations?



I used a number of WD Reds with ZFS and have had only good experience on half a dozen systems for years.  For the past year I've bought only *Seagate 4TB Terascale* HDDs because they are very quiet, low-power, and give me very fast performance - and I can get refurbished units from Amazon for $50 each.  They are the best HDDs I've ever used.  (Terascale is Seagate's older name for their enterprise 5900 rpm drives.)  I would rather run the used Terascale enterprise drives than any of the current new drives (that I can afford).

ZFS also runs great on SSDs - I switched recently from Samsung EVO to *SK hynix Gold S31 1TB* drives - they are a little faster than the Samsungs and represent some very good Korean tech.  The 1 TB SSD is now about $100.


----------



## rootbert (Jan 4, 2021)

Hm, interesting, I hadn't thought about refurbished disks; increasing the redundancy level at lower cost sounds nice, I might also consider that.
I haven't bought any disks since 2015 because I was just consolidating old hardware ... but from the time before I can say that I had a mixed bag with Seagate (the older ones with lower capacity were better reliability-wise; not so good experience with 4 pieces of Seagate Archive from 2015), bad experience with Samsung Spinpoints (maybe the reason they do not sell disks any more ;-), and good experience with WD Red and the WD yellow enterprise storage.


----------



## msplsh (Jan 4, 2021)

In re "bad experiences" the drives of various families from the same company are literally completely different beasts.  You're not going to use WD Blues, likewise, I would never use Seagate BarraCuda (no prefix, "DM") drives.


----------



## garry (Jan 4, 2021)

msplsh said:


> In re "bad experiences" the drives of various families from the same company are literally completely different beasts.  You're not going to use WD Blues, likewise, I would never use Seagate BarraCuda (no prefix, "DM") drives.


Exactly that.  I would never use Seagate BarraCuda, and yet am completely happy with Seagate's enterprise lines.


rootbert said:


> hm, interesting, I haven't thought about refurbished disks, increasing redundancy level at lower costs sounds nice...


A newer, and much faster (7200 rpm), enterprise drive is the Seagate Exos 7E8.  It has a 2-million-hour MTBF.  It is made with every nuance of engineering to get high capacity, high speed, AND very long life in cloud data centers (e.g. they are filled with helium gas).  A new one is about $140 for 4TB (and 8TB / 12TB are options).  I saw Exos 7E8 4TB *refurbs* from goHardDrive being sold on Amazon for *$90*.  This drive gives incredible throughput - about 250 MB/s sustained write.


----------



## ralphbsz (Jan 5, 2021)

rootbert said:


> And also one point which leaves some discussions: what is your opinion on helium-filled disks?


Today, they are nearly unavoidable at the higher capacities. So buy them.



> A manufacturer generally designs to meet their datasheet and that's it - if a disk survives longer it is "just a nice benefit", ...


No, that's wrong, and way too cynical. Manufacturers first design to meet their legal and financial obligations. So for example, if a drive has a 5-year warranty, you can be pretty certain that it will last that many years, because the cost to a manufacturer of having to replace a drive (even after 4.9 years) is way too high. Profit margins on disks are razor thin, and customer returns would destroy those margins.

Once a manufacturer has done that, they design their drives to be useful to their customers. Who are their customers? Not you or me. We have to remember that over 90% of all enterprise-grade disks are sold to fewer than a dozen customers (the usual suspects: FAANG and their Chinese counterparts). So Seagate/WD/Toshiba all design disks that the likes of Microsoft, Amazon and Google want to use in-house, by the million. What do the big customers want? Foremost, reliability. While all of them use RAID-like techniques to make sure data isn't lost just because one disk drive fails (or because a giant data center catches fire or falls victim to a flood), the cost of having to store redundant information and of replacing disks is very high. Second, the big customers want low cost over the useful life of the disk, including the cost of providing power/cooling/physical space for the disk.

And somewhere hidden in that sentence is the key phrase: over the useful life of the disk. Today, disk drives are not used for longer than 5-7 years, because after that the cost of providing power and space for the disk exceeds the utility, and it becomes cheaper to replace it. So I'm quite sure that very few 1TB disks are still in use, and 4TB disks are leaving very quickly.

So to answer your helium question: If you buy a new disk now, filled with helium, you can be quite sure that you will get good service out of it for 5-7 years. If you buy it with a 5-year warranty, it is nearly certain that it will not fail within that period (again, this is statistics only). After that, you might get lucky, or you might not.



> I use older disks as cold/archive storage which works fine - saving 160GB of data on 6 pieces of 15 year old sata 160GB drives just to give them some use.


For an amateur who doesn't care about space usage and for whom using computers is a hobby, that's a fine thing to do. I somewhere have a 40MB Conner SCSI disk that was bought about 35 years ago, I should see whether it still works. I also have a handful of 1GB Falcon/Imprimis class disks somewhere, which are probably the same vintage. But please don't expect manufacturers to design things so they remain usable after 30+ years; for them that is a waste of money, brains and time.



msplsh said:


> In re "bad experiences" the drives of various families from the same company are literally completely different beasts.


And this points out the fundamental problem of using the past (experience such as Backblaze) to predict the future. You cannot extrapolate from disk model 12345, manufactured in year 20XY at manufacturing plant ABC, having been very reliable, to other models of the same manufacturer (different technology, different models, different plants) also being so. This is why people who study disk reliability for a living (there are dozens of us!) use very fine-grained data, for example tracking which manufacturing location was used for different components. So does this mean that there is absolutely no data about disk drive reliability? To first order, yes.

Here's my personal answer: Look at what manufacturer or model line does consistently well on publicly available high-statistics data, such as Backblaze. Do not listen to anecdotes from individuals (like me), because the plural of "anecdote" is not "data". On the contrary: experiences from individuals tend to be biased and blown out of proportion. If a manufacturer or model line does consistently well, for many years, you can then trust them on average to do better in the future.

Finally: GOOD BACKUPS. Your disks will fail.


----------



## diizzy (Jan 5, 2021)

It very much depends on what you're looking for; I would, however, highly recommend that you avoid SMR HDDs and brands that tend to screw around with SMART data.
In general, when it comes to "consumer" 3.5" HDDs, I've found Toshiba to be reliable, but keep in mind that all HDDs die at some point.

> Use of Shingled Magnetic Recording (SMR) technology in Toshiba Consumer Hard Drives | Toshiba Electronic Devices & Storage Corporation (toshiba.semicon-storage.com): "The introduction of Shingled Magnetic Recording (SMR) technology has enabled HDD manufacturers, such as Toshiba, to increase the capacity of their spinning platter drives beyond that of existing approaches."

In general, their X300 and N300 are good HDDs which tick all the boxes (there are some differences between the two series, however); they also work fine with HBAs such as LSI 2008-based ones, for instance. If you want quieter drives, WD Purple might be of interest; I haven't tested those myself, however. Also be aware that 10 TB+ HDDs might have different physical dimensions (height).


----------



## ralphbsz (Jan 5, 2021)

There is nothing wrong with SMR disks, if one uses them correctly. Throwing traditional usage patterns at SMR disks in a performance critical situation is likely to be frustrating. On the other hand, most users are capacity or price sensitive. And most home users are so far away from being limited by disk performance that the performance impact of SMR doesn't matter.


----------



## olli@ (Jan 5, 2021)

rootbert said:


> And also one point which leaves some discussions: what is your opinion on helium-filled disks?


I’ve bought two HGST Ultrastar HE12 disks on 2017-10-05, i.e. more than three years ago. Since then, the first one is running 24/7 in my home server, used for storing multimedia files mostly, but also other generic data (the system itself is on a high-end NVMe SSD). The second one is used as a backup disk. No problems whatsoever (I do not use ZFS on them, though). The exact manufacturer ID is “HUH721212ALE600”. These are 12 TB helium disks with SATA-III interface, 7200 rpm and PMR, certified for 24/7 duty. They run surprisingly quiet, even though that was not the highest priority for me (my home server sits in the pantry, so noise is not an issue).

```
ada1 at ahcich1 bus 0 scbus1 target 0 lun 0
ada1: <HGST HUH721212ALE600 LEGNT3D0> ACS-2 ATA SATA 3.x device
ada1: 600.000MB/s transfers (SATA 3.x, UDMA6, PIO 8192bytes)
ada1: Command Queueing enabled
ada1: 11444224MB (23437770752 512 byte sectors)
```
I don’t think you need to worry about helium. According to the manufacturers, even if the disks do lose some of the helium, they will just run somewhat slower; they won’t fail until the gauge is down to 25 %. Note that you can monitor the value with smartctl(8) (SMART attribute 22). For my HGST disks it is still at 100 % after three years.
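For automated checking, that attribute can be pulled out of `smartctl -A` output; here is a sketch in Python (the sample text below is illustrative; on these HGST drives smartmontools labels attribute 22 "Helium_Level", and the real input would come from something like `smartctl -A /dev/ada1`):

```python
SAMPLE = """\
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
 22 Helium_Level            0x0023   100   100   025    Pre-fail  Always       -       100
"""

def helium_level(smart_output: str):
    """Return the normalized VALUE of SMART attribute 22, or None if absent."""
    for line in smart_output.splitlines():
        fields = line.split()
        if fields and fields[0] == "22":
            return int(fields[3])  # the VALUE column
    return None

print(helium_level(SAMPLE))  # prints: 100
```

Wire that into a cron job or your monitoring of choice and alert when the value drops below, say, 50, well before the 25 % failure threshold mentioned above.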

My recommendation is to avoid SMR disks, unless you have a specific workload that works well with SMR. Using ZFS excludes such workloads: SMR disks don’t work very well with CoW-based (Copy-on-Write) file systems in general, which includes ZFS, and resilvering time is measured in _days_ or even _weeks_ with SMR disks, whereas it is a matter of hours for CMR or PMR disks. Note that resilvering time is critical, because a disk failure during this window is “not good”.


----------



## ShelLuser (Jan 5, 2021)

It's not only a matter of brand and time; your geographical location can also be an influence here.

I live in the Netherlands and I'm not a fan of WD disks, at all. I have simply seen too many of them die in my direct and indirect surroundings, both professionally and within my hobby. My favorite brand is Seagate; I've been using those since I had an XT with IDE disks and didn't know anything about anything just yet.

Anyway, that same IDE disk, which I got around the 90's or so, still works today, yet I've also had several WD disks die on me - especially some of those "WD Book" externals (though, in all fairness, I can't rule out that only the electronics died on me, because I could still revive some of those disks using a FreeBSD rescue system).

Anyway, point being: I'll take Seagate over WD any day of the week.

Yah... so about that. I have a few friends in the US who have exactly the opposite experience; several Seagate disks which died on them, sometimes in the most bizarre way possible, but WD turned out to be extremely reliable for them.

Therefore I can only conclude that there can be a severe difference between hardware on different continents. These disks are usually shipped in batches, and if a batch has a few issues, it's not unreasonable to assume that more devices from it could be affected.


----------



## diizzy (Jan 5, 2021)

ralphbsz said:


> There is nothing wrong with SMR disks, if one uses them correctly. Throwing traditional usage patterns at SMR disks in a performance critical situation is likely to be frustrating. On the other hand, most users are capacity or price sensitive. And most home users are so far away from being limited by disk performance that the performance impact of SMR doesn't matter.


True, but they perform horribly with ZFS in general :/


----------



## ralphbsz (Jan 5, 2021)

ZFS + SMR is not a good show, agreed. It might be OK for archival / sequential-write workloads; it might even be pretty darn good for that (because of the log-structured writes, as long as there are few deletions and the cleaner doesn't need to run). On the other hand, many file systems run pretty badly on SMR until you fix them. I think ext4 on SMR is pretty good these days, but I may be biased (since I often talk to the people in charge of ext4, and I met Aghrayev (sp?) at some conference).


----------



## rootbert (Jan 6, 2021)

Hehe, you are so right ... in the end it boils down to having data on disks I probably won't buy because they are old. So after an adequate amount of time I will have personal experience and data about those disks; however, this will again be useless, since that generation of disks will be out of date by the time I buy my next batch.

Anyway, I am replacing a zpool of three disks in one of the servers I manage and will have a few weeks during the burn-in phase to do some tests concerning noise and speed ... I will order the Toshiba Enterprise Capacity 12 TB disks - the specs seem fine and the price/performance looks excellent. Then I will at least have a feeling for the noise of the disks and whether I would like them in my workstation, too.


----------



## PMc (Jan 6, 2021)

In a case like yours I would consider buying disks of two different brands. While we don't really know the reliability of current disks, we certainly know that different disks are different - and they would probably not fail at the same moment in a common environment (which is well possible with disks of the same brand and batch).
The downside here is the performance difference. This does not hurt so much in a mirrored setup, where ZFS does some load-leveling; it is more unpleasant in a RAID-Z setup, where a full stripe must always be read from all disks, and any subtle performance difference will always bring a penalty.


----------



## JohnnySorocil (Jan 8, 2021)

I am searching for an HDD for a home ZFS NAS and was going to buy the Seagate IronWolf 8TB model, until I read a random post on the net recommending Seagate Exos. That changed my mind, because it has 60 months of warranty (instead of 36) for only a 5% cost increase (around $16 in local currency).


----------



## Alain De Vos (Jan 8, 2021)

When comparing Seagate and Western Digital, it is odd that I always hear conflicting information from different persons on which to choose.


----------



## msplsh (Jan 8, 2021)

Alain De Vos said:


> conflicting information from different persons on which to choose


It's because they don't specify precisely which drive models are in use.


----------



## Sevendogsbsd (Jan 8, 2021)

I had never considered this: models within brands, as to failure rates. Personally, I have never had either a Seagate or WD drive fail, but my use is on a PC only. My NAS has 2 WD Reds, with a USB external drive as a backup. The external drive is a Seagate. The only mechanical drive I have ever had fail was a Maxtor, and that was 10-15 years ago. I am 100% SSD/NVMe now, except for my NAS, and I haven't had any of my flash media long enough (< 5 years) to see how it lasts.


----------



## gpw928 (Jan 11, 2021)

Hi Robert, there's a lot of good advice above.  Here are my random thoughts:

- The Backblaze Hard Drive Data and Stats are a great resource, but never offer a complete view of all options, and often lag the market.
- The introduction of SMR by Western Digital was sneaky (but WD seem to have responded to the bad press).  Read the datasheets of any disk you contemplate, and avoid SMR.
- SSD and NVMe are great if they fit your capacity and budget needs, but at 20 TB+ they might be a challenge.
- Choosing very large disks has a cost advantage, but an operational downside.  Re-silvering time for a 3 to 4 TB disk is measured in days.  Disk access for other applications will be severely compromised (maybe unusable) while this is happening.  The smaller the disk, the less the grief (which is why vendors like IBM are still selling truckloads of 500 GB disks into the enterprise market).
- You largely get what you pay for.
- My five CMR 3 TB WD Reds (which performed poorly in the lifetime Backblaze stats) have been quite satisfactory in my ZFS server.  One infant failure, replaced under warranty, and another very recent failure at ~8 years of continuous service.
- As a generalisation, RAIDZ gives you surprisingly good striping across the spindles (much better than mirrors), and (usually) better capacity than mirrors.  RAIDZ2 gives you striping, plus better redundancy than mirrors.  I expect that mirrors will probably perform better if you have a truly random small-block I/O pattern.
- I am slowly switching from 8-year-old CMR 3 TB WD Reds to 3 and 4 TB Seagate Exos (enterprise) disks for durability and reliability (I'm laying in spares as the WD Reds are pretty old).
- Reliability is so important to me that I am switching my ZFS server from 5 x 3 TB consumer grade disks in RAIDZ1 to 7 x 3 TB enterprise grade disks in RAIDZ2.
- My ZFS server backs up everything else on the network, but has too much data to back up itself.  It warrants as much protection as I can sensibly give it.  So a UPS, data partitioning (to easily back up some of it to external media with `zfs send`), and multi-spindle redundancy with enterprise class disks are all in play.
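As a rough sanity check on resilver times, a sequential best case is just capacity divided by sustained transfer rate (a sketch only; real resilvers on a busy or fragmented pool can take far longer than this, which is where the days-long horror stories for SMR and very large disks come from):

```python
def min_resilver_hours(capacity_tb: float, rate_mb_s: float) -> float:
    """Best-case resilver time: rewrite the whole disk sequentially."""
    bytes_total = capacity_tb * 1e12
    return bytes_total / (rate_mb_s * 1e6) / 3600

for tb in (3, 12):
    print(f"{tb} TB @ 200 MB/s: at least {min_resilver_hours(tb, 200):.1f} h")
# prints: 3 TB @ 200 MB/s: at least 4.2 h
#         12 TB @ 200 MB/s: at least 16.7 h
```

So even in the ideal case a 12 TB replacement keeps the pool degraded for most of a day, and anything that slows the rebuild (random I/O, SMR rewrites, concurrent load) stretches that window further.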


----------



## olli@ (Jan 12, 2021)

Just a reminder: The purpose of a RAID is to improve performance, reliability and availability, _but_ it is not a replacement for a backup. So, whatever kind of disk setup you have, always make sure you also have a good backup that you can store in a safe place, preferably at a different location, so it is safe from theft, fire and other adversities.


----------



## PMc (Jan 13, 2021)

olli@ said:


> Just a reminder: The purpose of a RAID is to improve performance, reliability and availability, _but_ it is not a replacement for a backup. So, whatever kind of disk setup you have, always make sure you also have a good backup that you can store in a safe place, preferably at a different location, so it is safe from theft, fire and other adversities.



This is indeed very important, because ZFS pools can go bad without any of the disks being at fault.

For instance, I am currently observing this (details removed):

```
pool: bm
state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
config:

        NAME            STATE     READ WRITE CKSUM
        bm              ONLINE       0     0     1
          raidz1-0      ONLINE       0     0     2
            ada0p5.eli  ONLINE       0     0     0
            ada2p1.eli  ONLINE       0     0     0
            ada5p1.eli  ONLINE       0     0     0
        cache
          ada3p7.eli    ONLINE       0     0     0

errors: 18 data errors, use '-v' for a list
[...]
```

As we can see, these errors did not happen on any device. And I can always create new errors by creating a snapshot and trying to send that snapshot somewhere:


```
# zfs snapshot bm/bk/data@crap
# zfs send -R -PLcepv -I bm/bk/data@zmir.210112_172101 bm/bk/data@crap > /dev/null
warning: cannot send 'bm/bk/data@crap': Input/output error
# zpool status -v bm
config:

        NAME            STATE     READ WRITE CKSUM
        bm              ONLINE       0     0     2
          raidz1-0      ONLINE       0     0     4
            ada0p5.eli  ONLINE       0     0     0
            ada2p1.eli  ONLINE       0     0     0
            ada5p1.eli  ONLINE       0     0     0
        cache
          ada3p7.eli    ONLINE       0     0     0
```

So what to do with such a pool? A reboot didn't help. There is probably no other way than creating it anew and restoring from backup.


----------



## Snurg (Jan 13, 2021)

As others already said: drive failures are unpredictable.
But they often have common causes (previous operating environment, handling mistakes, firmware bugs, ...) that result in concurrent failure.
So, in my bad experience, the worst thing for small users like me and the OP is to use several drives of the same manufacturer/type, and possibly even the same batch.

Personally, I have had failed drives from every manufacturer, so I do not have any "preferences".
In general, my personal feeling is that enterprise-grade drives (heavy bricks) tend to be much longer-lived than the other extreme, CCTV drives.

PMc: This really shocks me!
Is it really this easy to render a ZFS pool unusable, just by trying to send a snapshot?


----------



## PMc (Jan 13, 2021)

Snurg said:


> PMc
> This really shocks me!
> Is it really this easy to render ZFS pools unusable, just by trying to send a snapshot?


Short answer: no, not this way round. The snapshot-sending just triggered an error that was persistent in the pool.

A full scrub finally showed the file with the actual defect. (And as this file would be contained in any new snapshot, the snapshots also became erroneous.) Deleting that file and doing another scrub solved the matter.

But the important thing is: this error was not related to any disk failure. It was present in all mirrors, and probably caused by a memory fault, power supply spike, or whatever. In this case it concerned just a single file, which could be removed/replaced. But it is also possible for such a thing to happen within the pool's metadata, and then things become more difficult.

Bottom line: a ZFS pool is not infallible, it can get lost, and there should be a disaster recovery plan for this.


----------



## aponomarenko (Jan 18, 2021)

For reliability estimates see:

- GitHub - linuxhw/SMART: Estimate reliability of desktop-class HDD/SSD drives (github.com)
- GitHub - linuxhw/EnterpriseDrive: Estimate reliability of enterprise hard drives (github.com)
- GitHub - bsdhw/SMART: Estimate reliability of HDD/SSD drives (github.com)

----------



## msplsh (Jan 18, 2021)

The charts don't seem useful by themselves, sorted by MTBF: single-sample disks, no indication of when the errors occurred, selective reporting bias, etc.


----------



## olli@ (Jan 18, 2021)

In fact, I believe that most disks have about the same reliability, no matter the vendor. There are always people who tell you horror stories about an arbitrary vendor - and the next person will tell you exactly the opposite. If there were vendors considerably worse than others, they would be out of business very quickly.

The important thing is to buy drives that are suitable for the purpose. Do not put AV disks in a NAS RAID. Do not put a NAS disk as a single disk in a desktop machine. Do not use server disks (meant for 24/7 duty) in environments where they’re spun down and up very often. And so on. _Do not just buy the cheapest disk you can get._ Think before you buy, and look at the disk’s specifications.

Another important thing is to watch the drive temperature. Many users make efforts to cool their CPU, but they forget about the disks. Run smartd(8) and configure it to alert you if the drive temperature gets too high.
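For example, a smartd.conf entry along these lines would do it (the device name and thresholds are illustrative; `-W DIFF,INFO,CRIT` sets the temperature difference/info/critical thresholds, and `-m` sets the mail recipient):

```
# /usr/local/etc/smartd.conf
# Track all attributes; report a 4 °C temperature change,
# log at 45 °C, warn critically at 55 °C, and mail root on problems.
/dev/ada1 -a -W 4,45,55 -m root
```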

Also, if a HDD has been used in a certain orientation (e.g. horizontal) for a long time, it should not be turned into a different orientation (e.g. vertical), because this might cause problems with the bearings. Been there, done that … – SSDs don’t have this problem, of course.

And finally, some people recommend using drives from different vendors when buying disks for a RAID, in order to spread the risk. This seems to be even more important for SSDs.


----------

