# Help choosing an HD



## fernandel (Jan 28, 2018)

Hi!

After a long time I decided to buy a new HD for my iMac 11,1. First I thought to buy an SSD, but for a box from 2009 that is IMO a waste of money.
I have bad experience with Seagate (it is my second such drive) and I have two choices:
Toshiba or Western Digital (2 TB).
I will buy from the below site:
https://eshop.macsales.com/shop/hard-drives/3.5-SerialATA/

Thank you.


----------



## Deleted member 30996 (Jan 28, 2018)

I run 7200 RPM HDDs on all my ThinkPads, as they have rubber rails on the chassis as part of their mounting system, and I am used to how they run as far as execution of commands to bring up programs and such.

I was running a Hitachi 100GB HDD and switched one machine to a WD Scorpio Black 250GB HDD. After I got it set up with FreeBSD installed, I did something, started to turn to another task, and it shocked me how much faster it was than the old HDD. I have WD Blue HDDs too, but this seemed much faster.

I'd go with this one, a 2TB WD Black:

https://eshop.macsales.com/item/Western Digital/WD2003FZEX/


----------



## CraigHB (Jan 28, 2018)

I'm running older hardware and I find SSDs still provide a big improvement. For example, I have one machine with a mechanical disk and it takes much longer to do installs, so there's still some advantage to be had. I've been buying SanDisk "SSD Plus" drives; they're fairly inexpensive as far as SSDs go. I did have a little trouble with the SanDisk brand in an older laptop computer, which had some booting irregularity with the SanDisk drives I have. On that one I used a PNY disk.


----------



## xchris (Jan 28, 2018)

check the Toshiba SSHD Hybrid too


----------



## fernandel (Jan 28, 2018)

xchris said:


> check the Toshiba SSHD Hybrid too


I am not sure about hybrids, and I do not know how long my 2009 iMac will keep working... so I will go with WD, as Trihexagonal recommended.

Thank you to everybody.


----------



## chrbr (Jan 29, 2018)

I had a Kingston 128G SSD on one system for testing; now I have a mirror of 2x240G SanDisk drives in my current computer. If you have no need to store videos, which require more disk space, you will enjoy the speed - and you will also enjoy the silence.


----------



## CraigHB (Jan 30, 2018)

Yeah that's another thing about SSDs.  You don't realize how much noise mechanical drives make until you have a machine without them.  The silence ~is~ golden.


----------



## Deleted member 9563 (Jan 30, 2018)

I assume that a Mac would have room for two drives. An SSD for system files is a good idea. Definitely will add some snap. It can be quite small, and therefore cheap. As for your other choices, I'd definitely go for Western Digital.


----------



## Deleted member 30996 (Jan 30, 2018)

CraigHB said:


> Yeah that's another thing about SSDs.  You don't realize how much noise mechanical drives make until you have a machine without them.  The silence ~is~ golden.



No doubt they are quieter than a mechanical HDD, and I have had HDDs that did that clicking thingy, but I can put my ear to the palmrest of the one I'm on now and can't hear it.

I have 3 running within hand reach, my X61 .mp3 player a few feet away, and can't hear any of them.


----------



## aragats (Jan 30, 2018)

fernandel said:


> I have two choices: Toshiba or Western Digital


Keep in mind that WD has color codes: blue, green, red, black and gold (maybe more). IMO blues and greens are a waste of money: they won't last long. Blacks and golds are the way to go.


----------



## Snurg (Jan 30, 2018)

Trihexagonal said:


> ...can put my ear to the palmrest of the one I'm on now and can't hear it.


With so little sensory feedback, I find it a bit annoying if I want to find out whether the disk actually spins and the heads move.
This is one of the reasons I love disks like the Seagate Cheetahs. They give very good sensory feedback. If I place them loosely outside of the computer and let them work hard (say, build kernel/world), the drives actually move, restrained only by the cables.


----------



## Deleted member 30996 (Jan 30, 2018)

I'm getting ready to insert a Hitachi TravelStar 7K100 PATA HDD in my new ThinkPad T43 and start work on it.

That drive has a lengthy review that goes into some detail about HDD specifics and the methods they used to test it.


----------



## CraigHB (Jan 30, 2018)

Trihexagonal said:


> I have 3 running within hand reach, my X61 .mp3 player a few feet away and can't hear any of them.



On my laptop I never could hear the mechanical drive it had, though the performance was like night and day when I changed it out for an SSD; the mechanical drive that laptop came with was a really slow one. On my desktop computers, given the way the chassis is and the fact that they sit on the desk next to me, the noise from the drives is quite pronounced. The one I mainly use has SSDs. The other one I don't use as much has a mechanical drive, and I can hear that one pretty distinctly.


----------



## swegen (Jan 30, 2018)

aragats said:


> Keep in mind that WD has color codes: blue, green, red, black and gold (maybe more). IMO blues and greens are waste of money: won't last long. Blacks and golds is the way to go.



In 2012 I put together a raidz2 system that initially had six WD Greens. One died within the first year, a second after three years, and a third drive was replaced after five years. Three of the original drives are still running without errors after almost 6 years of spinning (~52000 Power-On Hours).

Cheap consumer-grade drives can still be used when you are protected against disk failures by redundancy, especially when disabling APM to reduce load/unload cycles and adjusting sysctl() kern.cam.da.default_timeout and kern.cam.da.retry_count to mimic TLER behavior.


----------



## Snurg (Jan 30, 2018)

swegen said:


> ... WD greens... Three of the original drives are still running without errors after almost 6 years of spinning


The problem with the Greens is that they park the heads after a few seconds of inactivity (I don't remember how much exactly - 8 seconds? 12 seconds?).
This is why I retired the only WD Green I ever bought, after I noticed the park count far exceeded the lifetime specification after only a few months of use.
There is a firmware patch from WD to turn off that behavior. If you didn't apply it, I'd be really curious about the SMART data of the surviving drives.


----------



## swegen (Jan 30, 2018)

The first thing I did with the Greens was to disable that "Intellipark" feature (I'm glad I read about it beforehand). So the _load cycle count_ attribute is about the same as the _power cycle count_. That value on those drives is less than 100.

I have noticed that newer consumer drives from Seagate also have this head-parking syndrome, but you can stop that behavior by disabling APM.
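For anyone who wants to check or change this on FreeBSD, here is a sketch using smartmontools and camcontrol(8). The device name ada0 is a placeholder for your own setup; note that WD's Intellipark timer itself is separate drive firmware and, as far as I know, needs WD's own wdidle3 utility rather than APM settings:

```shell
# Compare SMART attribute 193 (Load_Cycle_Count) with attribute 12
# (Power_Cycle_Count); a load-cycle count climbing far above the
# power-cycle count is the tell-tale sign of aggressive head parking.
smartctl -A /dev/ada0 | egrep 'Load_Cycle_Count|Power_Cycle_Count'

# Disable APM on the drive entirely (camcontrol's apm subcommand with
# no -l level disables it).  Not persistent across power cycles.
camcontrol apm ada0

# Or set the least aggressive APM level instead (1 = maximum power
# saving, 254 = maximum performance).
camcontrol apm ada0 -l 254
```

If the drive forgets the setting on power-up, the command can go into a startup script.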


----------



## drhowarddrfine (Jan 30, 2018)

Hard Drive Failure Rates 2017


----------



## Deleted member 9563 (Jan 31, 2018)

How NOT to evaluate hard disk reliability: Backblaze vs world+dog


----------



## mefizto (Jan 31, 2018)

Hi swegen,


swegen said:


> . . . and adjusting sysctl() kern.cam.da.default_timeout and kern.cam.da.retry_count to mimic TLER behavior.


Can you please elaborate, provide a reference?

Kindest regards,

M


----------



## swegen (Jan 31, 2018)

By default the timeout is set to 30 seconds and retry count to 4. Reducing these values allows a redundant pool to continue delivering data without hiccups when a desktop drive without TLER is failing.

/etc/sysctl.conf

```
# mimic TLER behavior
# note: error recovery is useful in cases where you lost your redundancy!
kern.cam.da.default_timeout=7
kern.cam.da.retry_count=1
kern.cam.ada.default_timeout=7
kern.cam.ada.retry_count=1
```
This link has some additional information about TLER and desktop drives.


----------



## mefizto (Feb 1, 2018)

Hi swegen,

thank you for your reply.

Kindest regards,

M


----------



## Deleted member 30996 (Feb 1, 2018)

FYI, I can hear that Hitachi TravelStar in my T43.

I've had tinnitus half my life but it does not affect my ability to hear well.


----------



## ralphbsz (Feb 2, 2018)

drhowarddrfine said:


> Hard Drive Failure Rates 2017



Warning: The hard drive failure rates, as reported by Backblaze, apply to those drives in their environment and with their workload.  They should not be extrapolated without a detailed understanding of the failure mechanisms and the general operation of disk drives.
For example, just because the worst few drives reported by them are Seagate doesn't mean that all Seagate models are bad, nor that all Seagate drives are individually bad.  They report on models that are typically obsolete by the time they have gathered good statistics.  One also has to see their failure rates in the perspective of system size: for a person who has 1 or 2 drives, the difference between an annual failure rate of 1% and 2% is not relevant in a single year, as either makes failures very improbable - and either way it doesn't help with the effects of a disk failure once one occurs.



OJ said:


> How NOT to evaluate hard disk reliability: Backblaze vs world+dog



Warning: While Henry Newman is an expert on HPC, his analysis (or rather rebuttal) of the Backblaze data is spotty and incomplete.

The important lesson is: Since you can't predict failures, and can't even very well predict the failure rates of drive models, you need redundancy to survive.  Independent of whether you believe Backblaze or Henry Newman, you have to be prepared for disk failures.  And with today's disk sizes and failure probabilities, if you want good data reliability, you better prepare for double faults (finding a second fault in the surviving drives when repairing a disk failure).


----------



## PacketMan (Feb 2, 2018)

aragats said:


> Keep in mind that WD has color codes: blue, green, red, black and gold (maybe more). IMO blues and greens are waste of money: won't last long. Blacks and golds is the way to go.



I agree.

My WD Green drive died so fast I was mad. It was partly my own ignorance then, but it was light-duty home use, so I think my anger was still justified. I have since been buying WD Red drives for my FreeBSD NAS servers (yes, I have more than one) and so far so good. If I were in desperate need and Red was not on the shelf I would buy Black or Gold, but I think Gold would be a waste of money for home NAS use.


----------



## Deleted member 9563 (Feb 2, 2018)

I have WD Blues from when they started making them, and they still work. In fact I've got Blues running 24/7 that are some years old.

We all seem to have anecdotal reports on HDD life spans. For me it's that in the last 20 years or so, all my WD drives still work, and all the Seagates have died, although there are only a few of those. It seems to be a personal thing, and if I was to take a scientific approach to making a choice I'd say ask an astrologer.


----------



## shepper (Feb 2, 2018)

OJ said:


> It seems to be a personal thing and if I was to take a scientific approach . . .



There is a datacenter operator that keeps track of various HDD models and has published results for the last several years:

BackBlaze HDD Stats.  This is about as close as you are going to get to "science".  WDC and HGST look good, but a single drive failure, in a group of relatively new 4TB Red drives, may have skewed the results.  Without that one failure WDC/HGST would be an obvious winner.

I use Black WDC drives at home and feel the up-charge, compared to Blues, is worth it for me.


----------



## rigoletto@ (Feb 2, 2018)

I never buy WD, and Seagate only if it is the only option. I always prefer HGST <-- but now that is WD too.


----------



## ralphbsz (Feb 4, 2018)

shepper said:


> BackBlaze HDD Stats.  This is about as close as you are going to get to "science". ...


The published BackBlaze data is the best *publicly available* information on drive reliability.  There are several academic studies published in FAST conferences, but they remove the identity of the disk drives.

There is much better data available, but only within large companies that use disk drives (HP, EMC, IBM, Dell, Oracle, ...), and within the drive manufacturers themselves.  The companies that use millions of drives per year do keep track of the reliability statistics rather carefully.  They also track how drive reliability correlates with temperature, vibration, workload, and so on.  But that information is never released to the public.

My personal statistics: Of the Seagate drives I have bought, all have failed within 5 years of use (every single one, none survived).  Of the Hitachi / HGST drives, none have ever failed, and several are still in use after 8 or 10 years.  With WD it's a mixed bag, some work well, some die.  I have only 1 or 2 Toshiba disks in old laptops, and those were thrown away before the disks failed.  Somewhere in the basement are also 30-year old 600MB and 1GB disk drives, which still function (they are only powered up once every few years); I think they were made by CDC and Fujitsu.


----------



## Deleted member 30996 (Feb 4, 2018)

I still have the IBM 80GB HDD that came with my '98 Gateway tower; it's what I used in my pfSense box for a couple years before retiring it.

It's an electricity hog, which is why I retired it, but I'm pretty sure I could pull it out, fire it up, and the HDD would still be working.


----------



## CraigHB (Feb 5, 2018)

I got a pile of 80GB WD drives for cheap some years ago.  I still have one machine that uses them.  Never had a failure with those.

I've used Western Digital drives in DVRs over the years and those run 24/7.  The first drive I used for that purpose was the lower-service-life model, not sure on the label color.  It started getting sketchy after about 3 years.  I've since gone to the highest-service-life WD drive for the DVR.  I have one that's been running for about five years now.

On my main desktop computers I'm using SanDisk SSDs; SanDisk, I believe, is a division of Western Digital.  Though I also like PNY a lot.  Haven't had any trouble with the SanDisk drives.  The main reason I went with SanDisk is their affiliation with Western Digital.  I trust that maker.


----------



## Snurg (Feb 5, 2018)

Seagate's consumer grade drives have always been quite bad on the average.

However, as I already wrote, I use cheap used SAS 15k drives from ebay, which are normally 5-8 years old and ran 24/7 for many years.
Of a dozen of these (enterprise grade) drives that were from Seagate I had only one failure in the last 4 years. (Thanks to ZFS it was just a matter of exchanging for another one and resilvering)
As a private user, especially when it's a single drive without redundancy, a dead drive costs much more than just $15 work time for swapping the broken drive.
So I think there is a good reason not to save a few bucks on the drive.

My laptop had a built-in consumer-class Seagate drive, and when that one started to exhibit minutes-long delays when accessing, shortly after the warranty expired, I quickly got a new WD Black replacement and copied the data to it (dd'ing took an eternity due to the delays mentioned). That one still works (though it's only 2.5 years old).


----------



## trev (Feb 13, 2018)

I've been using Seagate hybrid hard disks in my Mac Minis (all 6 of them) for the last few years (counted back... OMG... nine years) and have had only one failure: the first drive I bought, which started suffering from a couple of bad blocks last year and so was replaced before the inevitable disaster struck in May of last year. The one that failed had been running FreeBSD 24x7 for 8.5 years. As always, your mileage may vary.


----------



## flipper_88 (Feb 14, 2018)

Here is my least favorite drive manufacturer: HGST.


----------



## Deleted member 9563 (Feb 14, 2018)

ralphbsz said:


> The published BackBlaze data is the best *publicly available* information on drive reliability.



Also shepper

I think you either missed my earlier posted link or perhaps you disagree with it and aren't mentioning that. It turns out the Backblaze data is not statistically interpreted correctly and is not useful or correct. See here: https://www.theregister.co.uk/2014/02/17/backblaze_how_not_to_evaluate_disk_reliability/ It seems like sales tricks are more popular and successful in IT than statistics, but it is nice to see that someone decided to speak up. The original myth lives on, though.


----------



## shepper (Feb 14, 2018)

Statistics can be sales tricks, but it all starts with data.  In the link I provided, the 4TB WD had an 8.87% failure rate, but the number of drives tested (45) with 4113 drive days indicates that they had been used less than 100 days each. Note that the HGST 8TB drive had a similar number of drives tested and drive days, but no failures.

Probably the most egregious use of statistics was by the statisticians hired by the tobacco industry.

Dell used to "burn in" a newly ordered computer, but I believe the term was misleading.  Manufacturing defects tend to fail early; I view Dell's process as less burning in and more weeding out manufacturing defects.  Was the early WD failure due to poor engineering and materials, or a manufacturing defect covered under warranty?


----------



## Deleted member 9563 (Feb 14, 2018)

shepper said:


> Was the early WD failure due to poor engineering and materials or a manufacturing defect covered under warranty?


That's a reasonable point, but when the difference is only one and a half drives it seems to me that there is not enough statistical basis to be meaningful.


----------



## ralphbsz (Feb 14, 2018)

OJ said:


> I think you either missed my earlier posted link or perhaps you disagree with it and aren't mentioning that. It turns out the Backblaze data is not statistically interpreted correctly and is not useful or correct. See here: https://www.theregister.co.uk/2014/02/17/backblaze_how_not_to_evaluate_disk_reliability/


It's complicated.

On one hand: Backblaze is (to my knowledge) still the only source of disk reliability statistics that's publicly available without vendor/model information having been removed.  Backblaze's raw data seems trustworthy, since it would make no sense for them to forge the data.  But in its blogs, Backblaze people may reach conclusions that over-interpret their raw data, by going outside the limits of good taste in statistics.  I have no opinion on whether they do that or not; I look at the raw data only, and I'm capable of doing my own statistics.

On the other hand, Henry Newman's rebuttal of Backblaze's data is mostly just incorrect.

To begin with, he complains that the bulk of Seagate failures in the old Backblaze data was caused by a small number of disk models, which even Seagate admits have a hardware problem, and that therefore they should be ignored.  But that doesn't change the (undisputed) fact that customers bought those disks, paid for them, and didn't get their money or their data back after Seagate admitted the hardware problem; and if you calculate the average reliability of all Seagate drives, you need to include *all* Seagate drives, not exclude some that Seagate *after the fact* declared to be faulty.

Then Henry Newman complains that some of these drives are over 5 years old, and he claims that "disk drives last about 5 years" (direct quote from his writing).  Sorry, but that statement is nonsense; the disk manufacturers specify AFRs or MTBFs of ~1 million hours, which works out to about 114 years.  If, as Henry is implying, all disks fail within 5 years, or perhaps at exactly 5 years of age, they would violate that spec by a huge margin (their MTBF would be about 45K hours, not 1M hours).  But Henry's ludicrous statement contains a grain of truth: given the progress of disk performance/capacity, the economic lifetime of many disk drives is about 5 years; after 5 years, it becomes economically advantageous to take large disk subsystems out of production and move the data to newer (higher capacity, lower energy/space consumption) subsystems.

Then Henry talks about the bit error rate of the drive, and claims that if you use a disk long enough you will get an uncorrectable error; here he fails to distinguish between a drive failing and a drive having a single uncorrectable error.  Finally, Henry didn't read the Backblaze statistics carefully enough, and his complaint about 120% of drives failing is pointless, since Backblaze explicitly tells us how their numbers are collected and calculated.
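The MTBF arithmetic above is easy to verify with a back-of-the-envelope sketch (nothing vendor-specific here, just unit conversion with one year taken as 8766 hours):

```shell
# A spec MTBF of 1,000,000 hours corresponds to roughly 114 years of
# continuous operation, i.e. an annualized failure rate under 1% --
# nothing like "all drives die at 5 years".
awk 'BEGIN { printf "%.0f years\n", 1e6 / 8766 }'

# Conversely, if every drive really failed by the 5-year mark, the
# implied MTBF would be on the order of 44,000 hours, not 1M hours.
awk 'BEGIN { printf "%.0f hours\n", 5 * 8766 }'
```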

Backblaze is not in the business of selling disks; and in their blog they have even explained that they mostly ignore their reliability statistics themselves when making purchasing decisions.  If anyone else tries to use the Backblaze data to make purchase decisions, they have to understand the data first.



shepper said:


> Statistics can be sales tricks, but it all starts with data.  In the link I provided, the 4TB WD had an 8.87% failure rate, but the number of drives tested (45) with 4113 drive days indicates that they had been used less than 100 days each. Note that the HGST 8TB drive had a similar number of drives tested and drive days, but no failures.


That doesn't surprise me at all.  Things like this do happen.

Anecdote from my former professional life: I was involved with shipping a product that contained several thousand disk drives, all of the same manufacturer and model (I will not disclose which manufacturer and which model, nor what the product or the customer were).  Within the first few weeks of operation, we had a failure rate of roughly 10% (which for a system with that many disks is a lot of dead disks).  This is for good-quality enterprise disk drives from a reputable manufacturer, which had been burned in by the disk manufacturer, and then "burned in" again by the system integrator (where burn-in means: a quick multi-hour test before shipping the system to the customer).  We ended up replacing all the disks with product from a competing disk manufacturer.  Why am I telling this story?  To demonstrate that sometimes real-world problems occur that are specific to one disk model, or to a specific production batch of disks.  In that sense, it does not surprise me that Backblaze observed an 8.87% failure rate of one specific batch of disks within 100 days (had it been statistically significant); been there, done that, got the T-shirt, in a statistically significant unintentional experiment.



> Dell used to "burn in" a newly ordered computer, but I believe the term was misleading.  Manufacturing defects tend to fail early; I view Dell's process as less burning in and more weeding out manufacturing defects.  Was the early WD failure due to poor engineering and materials, or a manufacturing defect covered under warranty?


Burn-in for disk drives is more complicated.  Today's disk drives are supposed to be limited to ~550 TByte of total IO in a year.  At "full speed" (about 250 MByte/s fully sequential), it takes only ~4 weeks to reach the annual limit.  On the other hand, we also know that initial failures of disk drives can often take several weeks to appear, if the failure is caused by problems with contamination, the spindle bearing, the seals of the enclosure, or the lubrication layer on the platters.  So a complete burn-in that is likely to catch the bulk of early failures is no longer possible without exceeding the annual workload of the disk.  From this viewpoint, a systems integrator (such as Dell) no longer has the capability of performing burn-in of disk drives, and simply has to trust the disk manufacturer.  And as the examples above show, things can go wrong with that trust relationship.
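The workload-limit arithmetic checks out; a quick sketch, using the rated figures from the paragraph above (actual ratings vary by model):

```shell
# 550 TB/year rated workload at ~250 MB/s sustained sequential I/O:
# 550e12 bytes / 250e6 bytes-per-second = 2.2 million seconds,
# which is about 25 days -- roughly 4 weeks of full-speed I/O
# exhausts the drive's entire annual workload rating.
awk 'BEGIN { printf "%.1f days\n", 550e12 / 250e6 / 86400 }'
```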


----------



## swegen (Feb 24, 2018)

I recently replaced two disks (one at a time) in my working raidz2 pool.
The new disks I installed were:

Seagate BarraCuda 4TB, 2 platters, 5400 RPM, 256MB, ST4000DM004
WD Blue 4TB, 4 platters, 5400 RPM, 64MB, WD40EZRZ

The Seagate is a drive-managed SMR disk with a larger cache to compensate for the shingling process, but the resilvering took over 3 times longer than with the WD. Reading speed is still fine.

```
# Seagate
# resilvered 2.35T in 57h20m with 0 errors on Thu Feb 22 03:15:08 2018 (11.9 MiB/s)
dd if=/dev/ada3 of=/dev/null bs=1M
4000787030016 bytes transferred in 27562.923048 secs (145151043 bytes/sec) 138.4 MiB/s

# WD
# resilvered 2.35T in 16h55m with 0 errors on Fri Feb 23 06:54:27 2018 (40.5 MiB/s)
dd if=/dev/ada4 of=/dev/null bs=1M
4000787030016 bytes transferred in 28698.639272 secs (139406855 bytes/sec) 132.9 MiB/s
```
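The resilver rates in the status lines can be reproduced from the sizes and wall-clock times (ZFS reports binary units, so 2.35T is taken as TiB here):

```shell
# Seagate SMR: 2.35 TiB resilvered in 57h20m
awk 'BEGIN { printf "%.1f MiB/s\n", 2.35 * 1024 * 1024 / (57*3600 + 20*60) }'
# WD CMR: 2.35 TiB resilvered in 16h55m
awk 'BEGIN { printf "%.1f MiB/s\n", 2.35 * 1024 * 1024 / (16*3600 + 55*60) }'
```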


----------



## Snurg (Feb 24, 2018)

swegen This is an interesting comparison. The raidz parity concept shows increasing disadvantages compared to mirroring, the bigger and slower the drives become.

And this becomes even worse depending on system load. When resilvering can take weeks, the risk of another disk dying while resilvering becomes substantial.

I recently saw a chart from 2009 comparing raidz2 resilver times on arrays with different kinds of drives. The raidz2 resilver time on 600GB 7200rpm SATA consumer drives was many times larger than on 15k SAS drives: the former took up to 8 hrs, the latter about 1h 15min. Enterprise SATA drives were in between.  That is still a bit more than the 50-60 minutes I experience when I take out a 600GB 15k drive for stashing away as a backup and resilver the mirror using another drive.


----------



## Crivens (Feb 24, 2018)

I once came across some data from Sun about where the sweet spot is when it comes to size and resilver time. When disaster strikes, it does so mostly during resilvering (stress for the remaining drives). For raidz2 this was about 500GB. So I built my storage server using 8 disks from two manufacturers, bought in two shops, so I got different production batches. Now, if a batch/type fails together, all is well. Speed is not that important as the connection is the limit anyway.


----------

