# Any Recommendations For A New Hard Drive Purchase?



## RedPhoenix (Jun 20, 2019)

Hey guys.  It's going to be my birthday on July 19th!  So Mom is getting me a 2TB Hard Drive, which of course is going into my Server.  But I've seen a few kinds... One is Western Digital's Blue series, which I believe they say is for Servers. Can I get by with just a plain ol' 2TB Hard Drive that can really be used for anything, but in my case, for my Server?  I ask because there are just so many models I could choose from... Then, I hope to attach the 300GB one I have now to a RAID controller for ZFS.  I'll have a bunch of room for Virtual Machines and Jails, whoopeeee!  Thanks for any replies, guys!


----------



## Phishfry (Jun 20, 2019)

I would consider dropping the 2TB requirement and considering SSDs. One-terabyte SSDs are around $140 and they are literally 3-4x quicker. Then you could use the 300GB for backups.
Remember that Virtual Machines take quite a disk-speed hit, so you want to start with something reasonably quick.
There is also the NVMe drive factor. I am buying Samsung PM983 Enterprise 1TB drives for the same cost as a retail SSD.
Booting could be an issue with these on older motherboards.


----------



## Eric A. Borisch (Jun 21, 2019)

I’ve been very pleased with HGST drives for servers.


----------



## Phishfry (Jun 21, 2019)

If you do stick with hard drives, note that they sell them like car batteries.
You buy a battery with a 24-month warranty, and it lasts maybe 25 months.
Buy a hard drive with a 24-month warranty, and it lasts how long?
Point is, 5-year-warranty drives are more reliable than 2-year drives.


----------



## Phishfry (Jun 21, 2019)

Here is a great barometer. In one hand, hold an aluminum-body consumer drive. In the other, a 5-year SAS drive.
Blindfolded, I can tell which is built to last.
The SAS drives use heat-treated parts which weigh more and are much denser. Designed for 24x7x365 operation.


----------



## Phishfry (Jun 21, 2019)

Even with SAS drives you have to know what you're looking at. There is a whole racket called "mid-line drives".
These are SAS drives with a 3-year warranty.

I know you're probably going to stick with SATA drives. There are some great deals on old OEM LSI controllers on eBay for SAS.


----------



## RedPhoenix (Jun 21, 2019)

Phishfry said:


> I would consider dropping the 2TB requirement and considering SSDs. One-terabyte SSDs are around $140 and they are literally 3-4x quicker. Then you could use the 300GB for backups.
> Remember that Virtual Machines take quite a disk-speed hit, so you want to start with something reasonably quick.
> There is also the NVMe drive factor. I am buying Samsung PM983 Enterprise 1TB drives for the same cost as a retail SSD.
> Booting could be an issue with these on older motherboards.


Ok, I'll consider that.  We also have a limited budget though (I have Autism, so I live with Mom).  But $140 doesn't sound unreasonable, especially for a Boot Drive...  Yeah, this Server is a friend's and mine.  He put Debian on it, which was great, but FreeBSD with ZFS... You get the story, as I'm no fanboy, but I saw the benefits of this great OS.  I may just settle on the HDD option, but the SSD one is one worth considering. Thanks!


----------



## RedPhoenix (Jun 21, 2019)

Eric A. Borisch said:


> I’ve been very pleased with HGST drives for servers.


Ah yes, HGST... I love that brand.  Thanks for the advice!


----------



## RedPhoenix (Jun 21, 2019)

Phishfry said:


> If you do stick with hard drives, note that they sell them like car batteries.
> You buy a battery with a 24-month warranty, and it lasts maybe 25 months.
> Buy a hard drive with a 24-month warranty, and it lasts how long?
> Point is, 5-year-warranty drives are more reliable than 2-year drives.


So you're thinking about the top of the line, then... Well, I should also mention that this Server is just used for backup, and occasional reading...  The VMs will be used for a Honeynet, mostly, while some others will be for Dev purposes.


----------



## RedPhoenix (Jun 21, 2019)

Phishfry said:


> Here is a great barometer. In one hand, hold an aluminum-body consumer drive. In the other, a 5-year SAS drive.
> Blindfolded, I can tell which is built to last.
> The SAS drives use heat-treated parts which weigh more and are much denser. Designed for 24x7x365 operation.


Ah, ok then!  SAS... I'll keep that in mind!  I bet they have it at Staples here.


----------



## RedPhoenix (Jun 21, 2019)

Phishfry said:


> Even with SAS drives you have to know what you're looking at. There is a whole racket called "mid-line drives".
> These are SAS drives with a 3-year warranty.
> 
> I know you're probably going to stick with SATA drives. There are some great deals on old OEM LSI controllers on eBay for SAS.


Ok then... So SAS definitely sounds like something I should consider.  I'll also look into the LSI Controllers.


----------



## Sevendogsbsd (Jun 21, 2019)

I currently have 2 (Hitachi?) 15k RPM SAS drives (300GB) running on an LSI controller in my build server. They are probably 9-10 years old. They are hot and a little loud but, as Phishfry said, dead reliable. Of course I don't run this thing very often, so the drives will most likely last a very long time.


----------



## PMc (Jun 21, 2019)

Eric A. Borisch said:


> I’ve been very pleased with HGST drives for servers.



Are these still available?
I highly recommend them!


```
9 Power_On_Hours          0x0012   091   091   000    Old_age   Always       -       66450
10 Spin_Retry_Count        0x0013   100   100   060    Pre-fail  Always       -       0
12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       608
192 Power-Off_Retract_Count 0x0032   087   087   000    Old_age   Always       -       15890
193 Load_Cycle_Count        0x0012   087   087   000    Old_age   Always       -       15891
194 Temperature_Celsius     0x0002   187   187   000    Old_age   Always       -       32 (Min/Max 18/68)
196 Reallocated_Event_Count 0x0032   100   100   000    Old_age   Always       -       55
197 Current_Pending_Sector  0x0022   100   100   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0008   100   100   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x000a   200   200   000    Old_age   Always       -       0
```

Errors appear above 65°C - but they don't seem to do harm. And that one's only a consumer drive (Deskstar), yet it runs in continuous operation. I have a couple of them, all fine.
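If you want to watch these attributes over time rather than eyeball them, here is a minimal sketch (my own illustration, not an official tool) that parses the table `smartctl -A` prints, assuming the usual column layout shown above:

```python
# Parse the attribute table from `smartctl -A` output and return
# a dict of attribute name -> raw value (the last numeric column).
def parse_smart_attributes(text):
    attrs = {}
    for line in text.splitlines():
        fields = line.split()
        # Attribute rows start with a numeric ID; the raw value is field 10.
        if len(fields) >= 10 and fields[0].isdigit():
            attrs[fields[1]] = int(fields[9])
    return attrs

# Sample rows in the same format as the output above.
sample = """\
  9 Power_On_Hours          0x0012   091   091   000    Old_age   Always       -       66450
197 Current_Pending_Sector  0x0022   100   100   000    Old_age   Always       -       0
"""
attrs = parse_smart_attributes(sample)
print(attrs)  # {'Power_On_Hours': 66450, 'Current_Pending_Sector': 0}
```

On a real box you would feed it the output of `smartctl -A /dev/ada0` and alert when things like Reallocated_Event_Count or Current_Pending_Sector start climbing.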


----------



## Phishfry (Jun 21, 2019)

RedPhoenix said:


> Ok then... So SAS definitely sounds like something I should consider.  I'll also look into the LSI Controllers.


I don't know if this was such a great suggestion. There are plenty of enterprise 5-year SATA drives.
They do carry a heavy price premium.

On the other side of the coin, I don't know that I would feel comfortable with the SMART output that PMc is showing.
So, much of this comes down to what you want to spend to be happy.


----------



## PMc (Jun 21, 2019)

Phishfry said:


> On the other side of the coin, I don't know that I would feel comfortable with the SMART output that PMc is showing.



Point is, you're definitely not supposed to bring them above 60°C in operation. So I thought, let's see what happens next - and that was 30,000 hours ago.


----------



## Deleted member 9563 (Jun 21, 2019)

There are of course lots of opinions on this. In my amateur experience, only Seagate drives from one specific generation have failed, and that's a few years back, so probably not relevant any more. It seems that most drives do just fine in a home environment and it doesn't really matter what it is. I'm just rebuilding my main desktop and happened to notice the original installation date that I had written on the main drive. It's a Western Digital Black 500GB drive, and it's been running continuously 24/7 for 6 years now. I expect it will keep going for a long time more.

Failures are rare, and only a professional user who goes through many drives will see them. The chance that an amateur will see a failure is very small. 

Flames welcome.


----------



## PMc (Jun 21, 2019)

Six losses in 31 years, two of them pilot errors (wrong power supply; and one bought used and sent in a parcel without any cushioning - the spindle bearings failed after two more years), with an average of 4-8 disks running.
The other four:

- Seagate 40 MB - died after a substantial lifetime
- WD 80 GB - died after a substantial lifetime
- IBM 36 GB 15k - died after a 10+ year lifetime
- WD 500 GB - died within warranty - was highly neurotic from the beginning, and the replacement is just as neurotic (but now I know how to handle that).


----------



## Eric A. Borisch (Jun 21, 2019)

PMc said:


> Are these still available?
> I highly recommend them!











Internal SSDs: Shop Internal SSDs for Computer, Laptop, NAS | Western Digital - "Shop internal SSDs to save mission-critical business data, PC games, or home backups." (www.westerndigital.com)
				




IBM's HDD division was purchased by Hitachi, and then by WD... it's the Ultrastar line (5-year-warranty enterprise HDDs) that are the heirs to HGST (né IBM)...









HGST - Wikipedia (en.wikipedia.org)


----------



## Sevendogsbsd (Jun 21, 2019)

1 drive loss in 26 years, but not in server use, desktop use only. Drive was a Maxtor. Had mostly WD and some Seagates. Currently run desktop on Samsung SSDs, had those 3 years. Time will tell...


----------



## ralphbsz (Jun 21, 2019)

OJ said:


> Failures are rare, and only a professional user who goes through many drives will see them. The chance that an amateur will see a failure is very small.
> 
> Flames welcome.


Not a flame, just a philosophical excursion about statistics.

The MTBF of disk drives is spec'ed by manufacturers as a million hours or more (sometimes 1.5 or 2 million for enterprise-grade drives). Having been a user of many thousands of disk drives professionally, and not being able to share accurate statistics, my summary is that the disk manufacturers are sort of honest. The actual measured MTBF is perhaps half or two thirds of the specification. Now, some of those failures are probably not the fault of the drive (temporary overheating, too much vibration, flaws in power supplies), so this is not intended as criticism of Seagate, WD, Hitachi, Toshiba and friends. This is experience from drives that are installed in professional-grade enclosures (not computer cases but dedicated disk enclosures), with high-quality power supplies, and installed in well-managed data centers. Customers who spend millions on their storage systems tend not to risk those systems due to inadequate environments; that would be dumb. So let's take a real-world MTBF (under professional conditions) of about 1/2 to 1 million hours:

Lesson #1: The MTBF of disk drives in good conditions is very high, about 50-100 years average life time = 1% to 2% annual failure rate, and therefore amateurs with just a handful of drives should very rarely see disk failures.
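To sanity-check that arithmetic, here is a quick sketch of the standard constant-failure-rate model (my illustration, not any vendor's formula): AFR = 1 - exp(-8760/MTBF), which for large MTBF is roughly 8760/MTBF.

```python
import math

HOURS_PER_YEAR = 24 * 365  # 8760

def annual_failure_rate(mtbf_hours):
    """AFR under a constant-failure-rate (exponential lifetime) model."""
    return 1.0 - math.exp(-HOURS_PER_YEAR / mtbf_hours)

# Spec'ed vs. "real-world" MTBF values from the discussion above.
for mtbf in (2_000_000, 1_000_000, 500_000):
    print(f"MTBF {mtbf:>9,} h -> AFR {annual_failure_rate(mtbf):.2%}")
```

A 0.5-1 million hour MTBF lands at roughly 0.9-1.7% per year, matching the 1% to 2% quoted above.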

Quick observation: it is quite possible to break a disk. Drop it (just a little bit) while it is spinning and seeking (laptop disks are better at handling that). Drop it from a foot's height onto a granite table while powered down. Cook it (the 68 degrees above is bad news). Vibrate it all the time while writing. Connect it to a power supply whose 12V line can be 11V or 14V depending on the phase of the moon. Swap the 5V and 12V pins when making your own power cables (did that once). Some of these things will kill the drive outright, sometimes with smoke coming out; others will just reduce its lifetime (and data reliability) massively. But in most cases, these extreme mistakes don't happen: few people solder their own power cables, and most buy good-quality cases and power supplies.

Lesson #2: But the MTBF as seen by amateurs is much below what large systems professionals see, because they don't have good environmental controls (temperature goes up and down), good enclosures, good power supplies. Still, it is not catastrophic; drives should not die like flies unless you abuse them.

But even in those professional settings, you have to exclude certain effects to get to these MTBF numbers. First is infant mortality: you have to bake in new drives for a week or two, and drives that fail during that time will be cheerfully replaced by the manufacturer, and do not count towards MTBF. Second is manufacturing problems that escape. We once had a whole delivery of drives (several thousand, single manufacturer, single model) that had an infant mortality approaching several % per week, and that mortality kept on going for several months. This was a "manufacturing escape": due to a mistake, a whole batch of drives had skipped the manufacturer's quality control (oops), and happened to be low quality (double oops). The manufacturer cheerfully took the drives back, and (probably not cheerfully) gave us several M$ to compensate us for our troubles, and to make our customer less unhappy. This is human error, was acknowledged and corrected by the manufacturer, and should not be counted towards the MTBF number. Now, how would an individual amateur user who only bought 1 or 2 drives have handled it? They don't have the metrics to demonstrate to the vendor that the problem is pervasive, they don't have hardware and software teams that can do autopsies on defective drives themselves, and they don't have teams of lawyers to negotiate settlements.

Another case where drives had high failure rates was a system that was shipped to a city in a tropical country with really bad air quality, and stored there in a non-air-conditioned warehouse for half a year, unpacked. When it was finally turned on, many disks (about a third!) had electrical shorts, which were due to corrosion from sulfur in the atmosphere. Actually, our field service technicians ended up finding drops of a corrosive liquid on the PC boards of the drives: condensation from the atmosphere, containing sulfuric acid. Again, because we were a big company, we were able to diagnose what had gone wrong, and work with all stakeholders to come to an equitable solution. And again, this should not be counted towards the MTBF of the drive itself. But how would an amateur who lives in this city have handled it? He doesn't have ready access to air-chemistry analysis, he doesn't know how long the store had the drive on a shelf, and he doesn't have teams of lawyers to negotiate.

Lesson #3: For an amateur, systemic effects can mask the inherent good reliability of quality drives, and they may get lots of failures. Tough luck.

It is now widely known and reported that Seagate Barracuda drives (in particular the 1TB model) have had serious reliability problems. If you average those into Seagate's overall MTBF, then the result looks pretty bad for Seagate. I don't know whether Seagate ever had programs where they refunded those, extended the warranty, or had other arrangements with large users (I never worked with large quantities of that model professionally). Real-world example: between a colleague of mine and me (he also worked on the disk subsystems for a large storage systems vendor), we went through 7 of these 1TB Barracuda drives at home (he had 5, I had 2), all of which died within a few years (in some cases before 2 years), and we know that our enclosures/power supplies/environment were at least OK. That failure rate is completely incompatible with the quoted 1M hours, and points more towards something in the range of a few 10K hours. He got some replaced under warranty, and we both threw the rest into the trash. But since I had been burned by that, I followed the fate of other Seagate drives later, and found that this problem did not repeat for other models.

Lesson #4: Some drive models just suck, and will die quickly. So quickly that even an amateur with a small number of drives (1...5) will have serious problems in a small number of years (1...5). But you can't extrapolate from a few bad models to all models in a series, and much less to a vendor.

And finally, look at the famous Backblaze data. It is the best data set on disk quality that is freely accessible; there is better data out there, but it is not accessible. You clearly see that on average, Seagate is less reliable than the others, and that's not just one model, but systematic. But Seagate does not suck: their annual failure rate may be up to 2% and 3% for some models, but it is nowhere near the 30% or 50% that my friend and I saw, which would be catastrophic.

In summary: the reliability of drives is complicated, and at the amateur level just not predictable. Not enough statistics. You may get very unlucky.
What can you do about this?

- Think about the value of your data. If it is worth nothing, and you will not feel bad if it is all gone suddenly, and your time for re-installing the system after a disk failure is worth nothing, then stop reading. All others, keep going.
- Use RAID. At the very minimum, mirroring of two drives. If this is your only defense, it is not good enough; a two-fault-tolerant system is better or even necessary.
- If you are mirroring or RAIDing, consider using different drives (different models or even vendors) in a pair. That way, a systemic problem with a particular model is less likely to wipe you out. But it can be a bit tricky (different capacity, different performance, and in a RAID system the overall performance tends to be dominated by the slowest disk).
- Take backups. That way a disk failure becomes an inconvenience (down for an hour or a day), and perhaps a small data loss (the data from the last 23 hours may be gone), but not a catastrophe. Remember that RAID is not a panacea; it does not protect against correlated failures, it is not 100% reliable, and it does not protect against human error.
- You have installed RAID already, right?
- Make a plan ahead of time: what will you do if a drive dies? Know the commands to resilver your RAID. Know where your backups are stored. Don't store the only documentation for how to restore from backup on the drive that you are backing up. Do a test run, regularly. A backup that has never been restored is not actually a backup; it might be a blank tape. Not a joke - it happened to my wife's company once: after their disks died, they discovered that their clueless sysadmin had set them up with RAID-0, and had dutifully written a blank tape every night, labelled it, and put it into the fireproof safe. Not fun.
- Your RAID is functioning well, you are monitoring disk health, and you have set up an automatic monitoring system, right?
- Think about what other disasters you want to protect against (because a disk failure is not a disaster, it is an expected operational situation). Are you worried about a fire or flood destroying the place where both your original disks and the backup are physically located? Are you worried about intruders stealing your hardware? Are you worried about someone snooping on you? A good backup system can deal with this, but at some cost.
- Inject a test fault into your RAID system (for fun, just pull a disk physically out), and watch it resilver automatically, and your cellphone beep because you got an e-mail from the monitoring system. That's when you can stop having anxiety attacks.
True story: About 25 years ago, I interviewed for a job at the storage systems research department of one of the largest and most prestigious computer companies in the world (two letters, not three). My host and future manager gave me a little tour of the computer room for fun, and showed me the main server the group used (in those days, a group of 15-20 people used a single large computer), and the two "big" RAID arrays connected to it (in those days, "big" meant dozens of disks). He then proceeded to pull a disk out of the running production machine, and hand it to me. I was flabbergasted. What this really demonstrated was: the guy was (and continues to be) very smart, and knew the reliability of his systems, and the value of impressing a person they might potentially hire. I gave him back the disk, he put it back in, the disk array resilvered for a few more seconds, and everything was fine.


----------



## rigoletto@ (Jun 21, 2019)

Eric A. Borisch said:


> I’ve been very pleased with HGST drives for servers.



HGST is Western Digital now.


----------



## PMc (Jun 21, 2019)

Eric A. Borisch said:


> Internal SSDs: Shop Internal SSDs for Computer, Laptop, NAS | Western Digital (www.westerndigital.com)
> ...



Yep - the Hitachi resp. HGST drives are IBM drives (the Deskstar just as well), until the whole branch went to WD, and they will most likely dissolve into the WD portfolio.
For my part, I had a Hitachi and an original WD side by side in my desktop for quite a while, and there is a huge difference; I would trust a half-wrecked Hitachi/HGST a lot more than a brand-new WD.


----------



## Phishfry (Jun 22, 2019)

PMc said:


> So I thought, let's see what happens next - and that was 30,000 hours ago.


In automobile terms, you've been riding on Empty too long...
As long as you're happy, I am happy.
At 50K hours, mine are retired.
I do appreciate the heat factor. It is a killer.


----------



## tedbell (Jun 22, 2019)

Every hard drive I ever lost was a Maxtor. My 19-year-old 15GB Samsung drive still works perfectly, with no bad sectors.


----------



## ralphbsz (Jun 22, 2019)

Somewhere in the basement, I have two really good disks. One is a CDC/Imprimis/Seagate Wren, the other a Falcon (and I can't remember whether those were made by Maxtor or Fujitsu). They are both 1GB SCSI, and were bought in the late 80s or early 90s. Last time I booted those computers, they were both working; that was about 5 or 10 years ago.


----------



## rigoletto@ (Jun 22, 2019)

It always amazes me how ralphbsz is able to remember the name and model of almost every single piece of hardware he has touched in his life. I barely remember the brand of the disk I am using now.


----------



## PMc (Jun 22, 2019)

Phishfry said:


> In automobile terms, you've been riding on Empty too long...
> As long as you're happy, I am happy.
> At 50K hours, mine are retired.



When it has run 5 years, it will very likely run the next 5 years. And there is a mirror, and there is a backup, and there is a copy of the backup, and those things that really cannot be recreated with some effort go on a stick, and there is another stick, and at least one of them should usually be offsite. So there are two possible risks:
1) somebody runs a cruise missile into my home when I'm away. Then maybe only a stick survives, and some valuable things are probably gone, and it takes a week or two to rebuild. (But then there will be other things that take longer to rebuild.)
2) somebody runs a cruise missile into my home when I'm at home. Then there is no problem at all. Anymore.

I just can't throw away good working hardware.


----------



## ralphbsz (Jun 22, 2019)

PMc said:


> 1) somebody runs a cruise missile into my home when I'm away. Then maybe only a stick survives, and some valuable things are probably gone, and it takes a week or two to rebuild. (But then there will be other things that take longer to rebuild.)
> 2) somebody runs a cruise missile into my home when I'm at home. Then there is no problem at all. Anymore.



That's one of the problems with amateurs doing backups: they tend to not think about realistic threat scenarios, and what would matter. For example, I'm not worried about a burglar coming into my house and stealing my server. Because if he does, he will probably steal a lot of other things that are much more emotionally important to me (family heirlooms, musical instruments), much more expensive (musical instruments), much more short-term useful (tools), or much more dangerous (guns and ammo, but those are in a really good safe). Plus, when they're done they will probably set fire to the house. Similarly, if the whole house burns down, and I lose the last few weeks of records from my weather station (because the off-site backup is only updated once in a while), that's really not a big problem. Not having a house, and not having any of my stuff is much more important. At that point, I will be super happy that I have things like bank records and important documents on the off-site backup disk.

On the other hand, if a disk dies (which has happened at least 3 times in the last 5 years, although for one of them the problem wasn't the disk, it was the crappy USB 2.0 interface), I really don't want to lose data. Having to spend two days carefully restoring from the offsite backup in that case would be highly annoying. Instead, I just order a spare disk online, it shows up 2 days later, I swap it in (half an hour of work), start the resilver, and go back to my glass of wine.

People who make really good backups, and then store the backup right next to their computer, need to understand that even a small fire or electrical problem (lightning) can wipe them out. You need some geographic diversity. And you need to think about what "geographic" means; in view of recent California fires, having the backup in your neighbor's house is probably not good enough. Actually, in my previous job we had a very sad story about a big customer: They had a complete ready-to-go backup data center, in the other tower of the world trade center. Not good.

About two years ago, we actually had to evacuate our house because of a wildland fire that was scarily near (fortunately, ultimately nothing bad happened). The first thing that went into the car was the box with passports etc. from the safe, and the local backup disk. The next thing was a small box of family heirlooms. Only after that did we put in sensible things: Spare clothes, sturdy shoes, blankets, musical instruments (because they are valuable and portable). Eventually, I actually put the whole server into the car. Then we had the smart idea that we should have some drinking water, in case we need to sleep in the car on the side of the road, so we tossed a case of water bottles and a case of soda (coca cola) on top of it. Unfortunately, a sharp corner on the computer poked a hole in a can of soda, and I ended up with a very sticky sugar-coated server. Fortunately, it was all external, and took only half hour to clean up once we were back to unpacking.

P.S. Long-term memory: good. Short-term memory: Not so good. Alzheimer's disease is communicable; you get it from your parents.


----------



## Phishfry (Jun 22, 2019)

I also have a Stacker disk-compression card. It still works; should I put that back in service?








DOUBLE YOUR DISK SPACE WITH COMPRESSION: Stacker does it and you don't even have to open your CPU (www.baltimoresun.com)


----------



## ralphbsz (Jun 22, 2019)

I very much hope that your post about Stacker is meant as humor. If yes, you have succeeded, and I'm laughing.


----------



## PMc (Jun 23, 2019)

Phishfry said:


> I also have a stacker disk compression card. It still works, should I put that back in service?
> 
> 
> 
> ...



Well, the point is: do you have a FreeBSD driver for it?

Among my valuables is an Orchid graphics card, built mid-'88 and sold as "Designer VGA": 800x600 with 256 colors, a beautifully short and very compact design for the PC/XT, with a Tseng ET3000 chip - and there is an X11 driver for that, although no longer actively distributed.
Or, similarly beautiful, built in 1990, very long, very tall, full of TTL chips: the WD7000FASST SCSI controller - and that one seems to be fully supported in base:
`-rw-r--r--  1 root  wheel  36665 Feb  5 21:37 /usr/src/sys/dev/wds/wd7000.c`
Then there is a bunch of beauties in PCI-X 64-bit design - for which it is now also getting increasingly difficult to find a place to plug them in.


----------



## Phishfry (Jun 23, 2019)

PMc said:


> Then there is a bunch of beauties in PCI-X-64 design


That is as far back as I have saved. I am saving an old Gateway server board with a ServerWorks chipset and multiple PCI-X-133 slots.
The main reason I kept it is that it has 3 IDE 40-pin connectors, making it ideal for shuffling around disk contents.


ralphbsz said:


> I very much hope that your post about Stacker is meant as humor. If yes, you have succeeded, and I'm laughing.


Yes, the last ISA-slot device I personally installed was an Intellicall controller card for a business job during Y2K meltdown work.
I had several challenging legacy machines that needed special motherboards in 1999, in preparation for the big impending doom.
There was lots of money flowing in the computer world due to that hype.

I bought my Stacker compression card from 'Service Merchandise', bundled with a drive. I can't remember which drive.


----------



## Eric A. Borisch (Jun 23, 2019)

Phishfry said:


> bought my stacker compression card from 'Service Merchandise' bundled with a drive. I can't remember what drive.



Service Merchandise. There is a name I haven’t heard in a while. Did it have a conveyor belt from the upstairs stock room?


----------



## Phishfry (Jun 23, 2019)

Service Merchandise was awesome. Bought my first gun there as well as my first nice watch.
They were a retailer that sold computer parts in the early days. Catalog operation with a retail presence.
Much like Staples that the original poster alluded to.
Sometimes you have to get parts where-ever you can.
I used to buy stuff from the Office (MAX/DEPOT) chains. Mostly little stuff like CDs/DVDs and memory sticks.
For serious hardware it was "Computer Shopper" magazine for mail order. Later, CompUSA for comparisons.
Lately it's all eBay.


----------



## Datapanic (Jun 23, 2019)

I was going to give my recommendation, but I changed it.  Don't use so many happy faces!


----------



## rootbert (Jun 23, 2019)

if you trust statistics ;-) https://www.backblaze.com/blog/2018-hard-drive-failure-rates/


----------



## Phishfry (Jun 23, 2019)

I like the Toshiba drives on the Backblaze list. Surprised to see them using security-DVR drives.


> MD04ABA500V Toshiba 5TB 5400RPM


I also could not find a single retailer carrying it, or the 4TB version that is also on the list: MD04ABA400V


----------



## ralphbsz (Jun 23, 2019)

There is newer data from Backblaze at https://www.backblaze.com/blog/backblaze-hard-drive-stats-q1-2019/
They are absolute heroes for publishing their failure statistics every quarter. It is the best openly available data for disk reliability, broken down by model and manufacturer. Unfortunately, the per-model data is not very useful for amateurs, since they only have good data after using a model for a considerable period, by which point that drive is often no longer available in the consumer market. They also only use nearline enterprise drives, while many amateurs use consumer drives. But one can easily draw conclusions that are generally predictive. My favorite graph is below (annual failure rate, lower is better). That pretty clearly tells you which disks to buy.
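For reference, Backblaze's AFR numbers are annualized from drive-days and failure counts; here is a quick sketch of that calculation (the counts below are invented for illustration):

```python
def annualized_failure_rate(drive_days, failures):
    """Failures per drive-year, expressed as a percentage (Backblaze-style)."""
    drive_years = drive_days / 365.0
    return 100.0 * failures / drive_years

# Hypothetical example: 1,000 drives observed for 90 days, 2 failures.
afr = annualized_failure_rate(1000 * 90, 2)
print(f"{afr:.2f}%")  # 0.81%
```

This is also why per-model numbers need a lot of accumulated drive-days before they mean much, which is exactly the lag problem described above.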


----------



## RedPhoenix (Jun 24, 2019)

Phishfry said:


> I don't know if this was such a great suggestion. There are plenty of enterprise 5-year SATA drives.
> They do carry a heavy price premium.
> 
> On the other side of the coin, I don't know that I would feel comfortable with the SMART output that PMc is showing.
> So, much of this comes down to what you want to spend to be happy.


Yeah, I think I'll settle on WD Blue, since I don't exactly have the luxury to go big on this one.


----------



## RedPhoenix (Jun 24, 2019)

tedbell said:


> Every hard drive I ever lost was a Maxtor. My 19-year-old 15GB Samsung drive still works perfectly, with no bad sectors.


Wow, that's *INCREDIBLE*, which says a lot about some Drives today. :O


----------



## RedPhoenix (Jun 24, 2019)

ralphbsz said:


> There is newer data from Backblaze at https://www.backblaze.com/blog/backblaze-hard-drive-stats-q1-2019/
> They are absolute heroes for publishing their failure statistics every quarter. It is the best openly available data for disk reliability, broken down by model and manufacturer. Unfortunately, the per-model data is not very useful for amateurs, since they only have good data after using a model for a considerable period, by which point that drive is often no longer available in the consumer market. They also only use nearline enterprise drives, while many amateurs use consumer drives. But one can easily draw conclusions that are generally predictive. My favorite graph is below (annual failure rate, lower is better). That pretty clearly tells you which disks to buy.


"I see", said the blind man.  Thanks for that!  I'll check out Backblaze more often.


----------



## RedPhoenix (Jun 24, 2019)

ralphbsz said:


> Somewhere in the basement, I have two really good disks. One is a CDC/Imprimis/Seagate Wren, the other a Falcon (and I can't remember whether those were made by Maxtor or Fujitsu). They are both 1GB SCSI, and were bought in the late 80s or early 90s. Last time I booted those computers, they were both working; that was about 5 or 10 years ago.


Wow, that's also impressive!


----------



## RedPhoenix (Jun 24, 2019)

PMc said:


> Yepp - the Hitachi rsp. HGST drives are IBM drives (the deskstar just as well), until the whole branch went to WD, and will most likely dissolve into the WD portfolio.
> I for my part had an Hitachi and an original WD side by side in my desktop for quite a while, and there is a huge difference, and I would trust a half-wrecked Hitachi/HGST a lot more than a brand-new WD.


Ok then...  Something to consider. Thanks!


----------



## RedPhoenix (Jun 24, 2019)

ralphbsz said:


> Not a flame, just a philosophical excursion about statistics.
> 
> The MTBF of disk drives is spec'ed by manufacturers as a million hours or more (sometimes 1.5 or 2 million for enterprise grade drives). Having been a user of many thousands of disk drives professionally, and not being able to share accurate statistics, my summary is that the disk manufacturers are sort of honest. The actual measured failure rates are perhaps half or two thirds of the specification. Now, some of those failures are probably not the fault of the drive (temporary overheating, too much vibration, flaws in power supplies), so this is not intended as criticism of Seagate, WD, Hitachi, Toshiba and friends. This is experience from drives that are installed in professional-grade enclosures (not computer cases but dedicated disk enclosures), with high-quality power supplies, and installed in well managed data centers. Customers who spend millions on their storage systems tend to not risk those systems due to inadequate environments, that would be dumb. So let's take a real-world MTBF (under professional conditions) of about 1/2 to 1 million hours:
> 
> ...


Wow..... That was quite a post, and I loved every second of it!  Quite engaging, and informative!  Thanks for sharing your wealth of knowledge and experience!
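For reference, MTBF specs like the ones quoted above translate to annual failure rates roughly as follows. This is a sketch assuming a constant failure rate (the usual spec-sheet model) and the small-AFR linear approximation:

```python
HOURS_PER_YEAR = 24 * 365.25  # ~8766 hours

def mtbf_to_afr(mtbf_hours: float) -> float:
    """Approximate annualized failure rate (percent) for a given MTBF.

    Assumes an exponential lifetime model with a constant failure rate;
    the linear approximation is fine while the AFR is small.
    """
    return HOURS_PER_YEAR / mtbf_hours * 100

# Spec-sheet 1M-hour MTBF vs. the ~1/2 real-world figure cited above:
print(round(mtbf_to_afr(1_000_000), 2))  # ≈ 0.88 % per drive-year
print(round(mtbf_to_afr(500_000), 2))    # ≈ 1.75 % per drive-year
```

So even "half the spec" still means only a couple of percent of drives failing per year, which matches the Backblaze fleet averages.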


----------



## RedPhoenix (Jun 24, 2019)

PMc said:


> Yepp - the Hitachi rsp. HGST drives are IBM drives (the deskstar just as well), until the whole branch went to WD, and will most likely dissolve into the WD portfolio.
> I for my part had an Hitachi and an original WD side by side in my desktop for quite a while, and there is a huge difference, and I would trust a half-wrecked Hitachi/HGST a lot more than a brand-new WD.


Thanks, PMc!  Wait, did I already reply to your post?


----------



## RedPhoenix (Jun 24, 2019)

Sevendogsbsd said:


> 1 drive loss in 26 years, but not in server use, desktop use only. Drive was a Maxtor. Had mostly WD and some Seagates. Currently run desktop on Samsung SSDs, had those 3 years. Time will tell...


Ah, yes... The proverbial test.


----------



## usdmatt (Jun 24, 2019)

If it’s for a server I would consider the WD Reds instead of the Blues. We’ve been using them for years and they’ve been pretty solid.


----------



## Sevendogsbsd (Jun 24, 2019)

I have WD reds (NAS) in my NAS - guessing that's what you mean. They are quiet (SATA) and supposed to be long lived. We shall see!


----------



## RedPhoenix (Jun 24, 2019)

Sevendogsbsd said:


> I have WD reds (NAS) in my NAS - guessing that's what you mean. They are quiet (SATA) and supposed to be long lived. We shall see!


Yes, the famed litmus test, so to speak.  Just dive in and find out!


----------



## gpw928 (Jul 11, 2019)

Sevendogsbsd said:


> I have WD reds (NAS) in my NAS - guessing that's what you mean. They are quiet (SATA) and supposed to be long lived. We shall see!


I have five WD Reds in my ZFS server.  They are 3 TB WD30EFRX drives, now 6.5 years old.  One failed a couple of years ago; otherwise no problems (but I wish I had gone RAIDZ2, not RAIDZ1).

The Backblaze stats for 2017 had a fair sample of these exact drives.  The annualized failure rate was 5.06%, which is 2.5 times the average across all their drives, so a poor outcome.

"WD Red" covers a multitude of different capacities, and I strongly suspect that one drive may not perform the same as another WD Red of a different capacity.

The HGST drives generally seem to score well, but they are expensive.  The virtue of the Backblaze stats is that studying them does provide real insight into the bargains.

Cheers,
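A back-of-envelope sketch of why the RAIDZ2 regret above is reasonable: with five drives at that measured 5.06% AFR, the chance that *some* drive fails in a given year is substantial. This assumes independent failures, which is optimistic (same batch, same enclosure, same power):

```python
def p_any_failure(n_drives: int, afr: float) -> float:
    """Probability that at least one of n independent drives fails in a year,
    given a per-drive annualized failure rate (as a fraction, not percent)."""
    return 1 - (1 - afr) ** n_drives

# Five drives at the 5.06% AFR Backblaze measured for the WD30EFRX sample:
print(round(p_any_failure(5, 0.0506), 3))  # ≈ 0.229
```

Roughly a one-in-four chance per year of entering a degraded state, during which RAIDZ1 cannot survive a second failure while resilvering. RAIDZ2's extra parity covers exactly that window.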


----------



## Sevendogsbsd (Jul 11, 2019)

Good to know, will check out the HGST drives if I have a failure. Thanks! Mine are fairly low use: they are in a Synology 718+ NAS and spend most of the day sleeping...


----------



## gpw928 (Aug 7, 2019)

The Backblaze Hard Drive Stats Q2 2019 are out.


----------

