Backblaze publishes its hard drive obituary for Q2 2018

We're into the third quarter of 2018, and that means it's time for yet another Backblaze quarterly report. It's interesting to see the results of the cloud backup outfit's abuse of hard drives from every vendor—though Backblaze itself cautions readers against reading too much into its data. The latest report includes results from 98,265 hard drives, including Backblaze's first-ever reliability results for 14-TB spinny disks.

The vast majority of the drives the company has in service are from Seagate. That's not a new development; despite the dark shadow that Backblaze's first report cast over Seagate's brand, the backup folks overwhelmingly prefer that manufacturer's drives. HGST is the next-most-prevalent disk provider, followed well behind by Western Digital and finally by a scant 190 Toshiba disks.

The new 14-TB drives that Backblaze just added are actually from Toshiba, though. The spinners use conventional magnetic recording (CMR) technology instead of the slower shingled magnetic recording (SMR) method, meaning they should be just as fast as any other 7200-RPM hard drive. Backblaze only has 20 of the 14-TB drives so far—not enough to show up in the chart above—but the company says it's pleased enough with their performance that it has ordered an additional 1,200 units.

Backblaze notes that while the quarterly results are surely interesting, the most salient data it has concerns lifetime failure rates. The chart above displays reliability data on a great many hard drives and actually paints Western Digital in the worst light. Two of that company's drives (the Red 3 TB and Red 6 TB) have annualized failure rates greater than 4%. It should be noted that those figures cover a combined total of only 614 drives, though.
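
For readers curious about the arithmetic, Backblaze's annualized failure rate boils down to failures per drive-day, scaled up to a full year. Here's a minimal sketch of that calculation; the quarter length and failure count below are invented for illustration, not figures from the report.

```python
# Illustrative sketch of the annualized failure rate (AFR) arithmetic.
# The failure count below is invented for the example, not Backblaze's data.

def annualized_failure_rate(failures: int, drive_days: float) -> float:
    """AFR as a percentage: failures per drive-day, scaled to a 365-day year."""
    return failures / drive_days * 365 * 100

# Hypothetical example: 614 drives running for a full ~91-day quarter
# with 7 failures works out to an AFR a bit over 4%.
drive_days = 614 * 91
print(f"AFR: {annualized_failure_rate(7, drive_days):.2f}%")  # ~4.57%
```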

The backup provider also remarks that the overall failure rate for all of its drives has dropped to 1.8% after factoring in the latest quarter's data. That is apparently the lowest overall failure rate the cloud storage service has ever seen.

Backblaze brings up an interesting detail in its analysis. Studying two Seagate drives—one targeted at consumers, and one at enterprise customers—the company finds that reliability is apparently an extremely minor concern when choosing between the two versions. Instead, it says that other factors like price and performance matter more when comparing the two.

Further, the company is still using spinning disks for all of its storage. In its last quarterly report, Backblaze found that the speed benefits of SSDs for its servers simply weren't worth the extra spend. If you'd like to pore over the latest report's data yourself, you can head over to the company's site for the crunchy numbers.

Comments closed
    • ronch
    • 1 year ago

    Just got a 3TB Seagate Barracuda last month. Crossing my fingers and toes.

    • FireGryphon
    • 1 year ago

    On iOS Firefox I can’t see any of the images for this article. What’s wrong?

      • drfish
      • 1 year ago

      Unofficial webp experiment, thanks for the feedback.

        • thedosbox
        • 1 year ago

        Same issue now with firefox on windows. I can see images on the gallery and “more stuff” side bar, just not the article image. Ad images are also visible.

    • Chrispy_
    • 1 year ago

    I’ve been using Ironwolfs as bulk storage in RAID1 (personally) and RAID6 (corporate archive server) and they’ve been faultless.

    Only a sample size of 26 drives, but in the time I’ve had them I’ve lost a couple of 7200-RPM enterprise drives that cost 4x more per GB on the premise of increased burn-in testing and quality control. A Seagate and an HGST Helium, not that it makes any difference for the minuscule sample size I’m talking about.

    I’m just glad I moved away from WD Reds at home. They were starting to sound rough after just a couple of years so I ebayed them and don’t regret the capacity/speed/noise-level improvements in trading up to Ironwolfs.

    I’m just glad WD are doing well in the SSD space because it seems like their mechanical drives are rapidly becoming the worst possible choice you can make.

      • Ummagumma
      • 1 year ago

      I just finished “retiring” 20+ 4TB and 10+ 3TB WDC Reds (non-Pro version) after 4 to 5 years of service at home; I needed more space!

      Some of my WDC non-Pro RED drives had 9,000+ operating hours when they were “retired”. None of my drives developed any “noise” issues or accumulated SMART errors during operation. It could just be my luck compared to yours.

      I have no plans to “flea bay” my drives, as they are now mounted in “cold storage” servers (also great for experimenting) or held as spares for those same “cold storage” servers.

        • just brew it!
        • 1 year ago

        9,000 hours is only a little more than a year of continuous operation; I would certainly [i<]hope[/i<] that they're still in pretty good shape. I currently have four 3TB WD Reds in service, which are nearing 30,000 power on hours. No issues yet, fingers crossed. Two home servers ago, most of the drives were 1TB HGSTs. I kept that server for a very long time, and some of the drives were past 50,000 hours by the end.
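
        As a rough check on those power-on-hour figures, dividing by the hours in a year gives years of continuous operation; a trivial sketch of the arithmetic:

        ```python
        # Convert SMART power-on hours to years of continuous (24/7) operation.
        HOURS_PER_YEAR = 24 * 365  # 8,760 hours

        for hours in (9_000, 30_000, 50_000):
            print(f"{hours:>6} h ≈ {hours / HOURS_PER_YEAR:.1f} years of 24/7 operation")
        # 9,000 h is just over a year; 30,000 h about 3.4 years; 50,000 h about 5.7 years.
        ```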

          • Chrispy_
          • 1 year ago

          My reds were 3TB models. If you look at Newegg user reviews, the 3TB Reds were hit or miss, but there are many more negative reviews for the 3TB models than other sizes.

          I guess if you got a good one it’ll stay good but there must have been a bad batch of something that affected a disproportionate number of 3TB Reds.

          • LoneWolf15
          • 1 year ago

          I “retired” my 3TB Reds, but only relatively speaking. They went from my server into the 4-bay Thecus NAS that is an iSCSI target for my server’s backup. The server got 3TB HGST 7.2k NAS drives for an increase in performance.

          I have extra 3TB Reds to last me for awhile when one in the NAS fails, but they’ve been running several years now without a hint of issues. The HGST drives are probably a year and a half, maybe two now, with no failures either; of course, I also have a caching RAID controller, and that may reduce wear a little.

        • MOSFET
        • 1 year ago

        I have four Samsung HDDs (on a shelf) from a 2010 Dell server that have over 60,000 hours on each of them, and not one of the drives has a SMART error.

        9000 is nothing. Over 9000 is something. (Tongue in cheek ok)

    • Klimax
    • 1 year ago

    Just a note: All images in this article are Webp and won’t show in IE 11 nor MS Edge.

    Is it possible to have fallback to regular PNG?

    Thanks.

    ETA:
    BTW: Source article loads correctly.

    • jihadjoe
    • 1 year ago

    Went right to the rightmost column and got a huge surprise seeing an HGST drive has the highest fail rate, until I looked to the left and saw it was a pretty small sample size.

    • ozzuneoj
    • 1 year ago

    How many people load up their favorite SMART diagnostic program to check the status of their drives every time one of these articles pops up?

    I use Hard Disk Sentinel to monitor my Toshiba 3TB drive. I bought a pair of them a couple years ago for cheap ($80) and then found out last year that people were having a lot of problems with them. Now I try to keep a pretty close watch on them. One is in my main PC and the other is in my HTPC\File Server. I do backups frequently… >_>

    If I had to buy another drive, I have no idea what I’d buy. Toshiba was very highly praised when I bought mine, and yet here we are. Previously I had a Samsung F1 1TB and it is still going strong.

      • Leader952
      • 1 year ago

      [quote<]How many people load up their favorite SMART diagnostic program to check the status of their drives every time one of these articles pops up?[/quote<] No need to "load up their favorite SMART diagnostic program" just install CrystalDiskInfo and configure it to be Resident and Startup under the Function Dropdown of the program. That way it is always checking SMART status. [url<]https://crystalmark.info/en/software/crystaldiskinfo[/url<]

      • Waco
      • 1 year ago

      I check SMART attributes and email on changes that are significant every hour. 😛

      You can’t really go wrong – a 4% AFR is still not *that* bad for a piece of spinning rust. All of them are generally well below that with few exceptions.
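
      For anyone who wants to roll their own version of that kind of hourly check, here is a minimal sketch (not the commenter's actual setup): it assumes smartmontools (smartctl) is installed and a local SMTP server accepts mail, and the device names, watched attributes, and addresses are placeholders.

      ```python
      # Minimal sketch: poll SMART attributes hourly (e.g. from cron) and email when
      # a watched attribute's raw value changes. Assumes smartmontools is installed
      # and a local SMTP server accepts mail; devices, attributes, and addresses are
      # placeholders. Typically needs to run as root to read SMART data.
      import json
      import smtplib
      import subprocess
      from email.message import EmailMessage

      DEVICES = ["/dev/sda", "/dev/sdb"]   # drives to watch (placeholder)
      WATCHED = {"Reallocated_Sector_Ct", "Current_Pending_Sector", "Offline_Uncorrectable"}
      STATE_FILE = "smart_state.json"

      def read_attributes(device: str) -> dict:
          """Return {attribute_name: raw_value} for one drive via smartctl's JSON output."""
          out = subprocess.run(["smartctl", "-A", "--json", device],
                               capture_output=True, text=True, check=False)
          table = json.loads(out.stdout).get("ata_smart_attributes", {}).get("table", [])
          return {a["name"]: a["raw"]["value"] for a in table if a["name"] in WATCHED}

      def main() -> None:
          try:
              with open(STATE_FILE) as f:
                  previous = json.load(f)
          except FileNotFoundError:
              previous = {}

          current = {dev: read_attributes(dev) for dev in DEVICES}
          changes = []
          for dev, attrs in current.items():
              old = previous.get(dev, {})
              for name, value in attrs.items():
                  if old.get(name) != value:
                      changes.append(f"{dev} {name}: {old.get(name)} -> {value}")

          if changes and previous:  # stay quiet on the very first run
              msg = EmailMessage()
              msg["Subject"] = "SMART attribute change detected"
              msg["From"] = "smart-monitor@example.com"
              msg["To"] = "admin@example.com"
              msg.set_content("\n".join(changes))
              with smtplib.SMTP("localhost") as smtp:
                  smtp.send_message(msg)

          with open(STATE_FILE, "w") as f:
              json.dump(current, f)

      # Schedule hourly, e.g. with cron: 0 * * * * python3 smart_check.py
      if __name__ == "__main__":
          main()
      ```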

      • morphine
      • 1 year ago

      Stablebit Scanner for the win.

      • DancinJack
      • 1 year ago

      I still have nothing bad to say about the HGST drives in my own systems, and the ones I’ve put in other people’s systems. I’m probably going to pick up a six or eight TB one here pretty soon.

    • DPete27
    • 1 year ago

    When you think about it, Backblaze was smart to start this reporting. They’ve essentially aired the dirty laundry on hard drive reliability to the world, so if a company makes a failure-prone drive, people are going to know. It also pits all the major players against each other to achieve better reliability than their competitors for marketing reasons. As a positive side effect of this competition, reliability presumably improves across the board and Backblaze doesn’t have to replace as many hdds.

      • frenchy2k1
      • 1 year ago

      Be careful how you use it, though.
      As Backblaze itself notes, their usage (dozens of HDDs in a pod, originally poorly isolated against both temperature and vibration) is NOT what most of those drives were designed for.

      This is a bit like taking a Mustang or other muscle car and running it offroad. It will be beaten by a pickup truck and may get damaged more easily.
      Most consumer drives were never designed to run 24/7 in a server environment and would break more than expected.

      As time went on, they improved the design of their pods, and HDDs became more reliable, integrating more of the high-tech vibration reduction and sensors originally found only in server products.

      So, nice sampling, but not that useful if you do not run a similar pod at home…

        • HERETIC
        • 1 year ago

        Read in conjunction with Hardware France’s failure rates, it paints a decent picture.
        [url<]https://www.hardware.fr/articles/962-6/disques-durs.html[/url<] Separately they have their flaws, but together they give me enough info to be able to make an educated choice.

        • Chrispy_
        • 1 year ago

        Having said that, a lot of people now use home NAS boxes. Sales of one- and two-bay NAS enclosures and pre-built external drives with an ethernet socket or Wi-Fi have skyrocketed in the last few years.

        Typically, these drives will be always-on, running in poorly-ventilated plastic enclosures, and likely jammed in a cupboard near the main internet router.

        The concept of a consumer desktop hard drive is dying off and the reality is that NAS drives and NAS enclosures are outselling desktop drives so much that most e-tailers here actually have more choice in the NAS range than every other type of drive combined.

          • Ummagumma
          • 1 year ago

          And most will not be attached to anything more than a cheap outlet strip, or maybe a cheap surge-suppressor-equipped outlet strip.

          Power surges can be very damaging to any type of electronic equipment (even refrigerators and other household appliances), but most gerbils already know that.

          So if the heat doesn’t get these devices, a really good power surge (or a drunk buddy with a beer) will kill them.

            • Chrispy_
            • 1 year ago

            Yeah, a desktop drive is powered by (typically) an 80+ standard, Japanese capacitor-equipped, active-PFC switching power supply with stable voltage regulation and relatively low ripple.

            Your external drive is usually powered by a 5VDC Chinese wall-wart that is so low-tech that it barely qualifies as ‘electronic’.

            • strangerguy
            • 1 year ago

            You are a conspiracy theorist if you think the likes of WD/Seagate etc. are gonna cheap out on their ext HDD PSUs, when it’s actually more costly and difficult to do intentional planned obsolescence than not, since doing so will most likely backfire within their warranty periods.

            • just brew it!
            • 1 year ago

            I am occasionally that drunk buddy with a beer. Fortunately my server lives in the crawlspace. Can’t think of any reason I’d ever be drinking in there!

            • Waco
            • 1 year ago

            You just described a very good reason my server is in the boiler room. I’m very unlikely to drunk admin something down there. 😛

            • just brew it!
            • 1 year ago

            I also like putting the server in the crawlspace because it doesn’t add noise or heat to my office. It’s also less dusty in there, since the crawlspace is closed up when I’m not moving things in and out.

    • Takeshi7
    • 1 year ago

    I’m so proud of Zak for using the proper term, “conventional magnetic recording” instead of “PMR” that other publications use when referring to non-shingled drives.

      • psuedonymous
      • 1 year ago

      As of today, and for the last few years, the two are synonymous: perpendicular recording has been the standard for many years and will continue to be for at the very least the near future (HAMR has repeatedly failed to transition from the lab to production, and MAMR looks like it will go the same way). AFAIK, no LMR consumer drive has been produced in the last decade (I’ll add the ‘consumer’ caveat because there may be applications, like the military world, where decade-old designs remain in production).

        • Takeshi7
        • 1 year ago

        But they aren’t synonymous. In a Venn diagram of modern hard drives, SMR and CMR would be mutually-exclusive subsets of PMR.

          • cygnus1
          • 1 year ago

          I think he was saying that PMR and CMR are synonymous. In general, if a drive is shingled it will get labeled SMR, and non-shingled (what is really CMR) will get labeled as PMR. Yes, they’re both technically PMR, but that’s not how the labels get used. As far as the public is concerned, there’s PMR and SMR, and THEY are mutually exclusive technologies.

            • Takeshi7
            • 1 year ago

            That’s because the public has been misled by tech journalists into thinking that PMR is the opposite of SMR. But really, the press should have been more diligent with their terminology and been using CMR from the beginning. Now we have to retrain everybody to understand that SMR drives still use PMR technology.

            • cygnus1
            • 1 year ago

            Truth be told, even calling it conventional magnetic recording is “wrong” as there was a time when perpendicular was new too and it still hasn’t been around as long as what came before it. Regardless, the public (including the press) has come up with an understanding of what means what.

            I’m not a fan of the situation myself. I look at it like this: think of it like when electric guitars came out. You used to just have guitars. There were variations, but they all worked the same. But then a new kind of guitar came out, and what used to just be a guitar was suddenly an acoustic guitar. It got a new name even though it didn’t change. Now, in that case there was a good word for describing the original that differentiated it from the new without any overlap. We don’t really have a good word for the original PMR that can differentiate it from the subset of PMR that is shingled recording. What I would’ve done, in hindsight, is go with cPMR and sPMR, to denote conventional from shingled PMR. I think that would’ve been the clearest option. Why we had to stick with only a three-letter acronym, I don’t get.

            Honestly, I don’t think anyone needs to be retrained. I think what needs to happen is that you need to accept the way the rest of the world has decided to label hard drive recording technology. If the hard drive’s recording technology is mentioned, it’s going to be PMR or SMR. If it doesn’t say shingled, then it’s not shingled.

        • Krogoth
        • 1 year ago

        PMR always had a dirty little secret, though: it is less reliable and more prone to mechanical failure than the older LMR. That is the main reason why PMR was never adopted back during the early days of magnetic media (1960s-1970s). PMR only came to the fore again when LMR started to run into the superparamagnetic effect over a decade ago.

        It is not a coincidence that HDD reliability took a massive nosedive when PMR started to be adopted en masse. Warranties were also trimmed down as a result.

    • Waco
    • 1 year ago

    I wonder how many years of beating/matching the industry average it’ll take for Seagate to recover consumer mindshare.

    /just bought 8 of the 8 TB Barracuda Compute drives, no regrets

      • Srsly_Bro
      • 1 year ago

      RAID 0?

        • Waco
        • 1 year ago

        ZFS, RAIDZ2 w/ a cold spare.

      • just brew it!
      • 1 year ago

      I’ve noted before that they seem to be following the same general trajectory IBM/HGST did after the “Deathstar” fiasco. In both cases, the problematic drives seem to have been a “wake up call” that caused the company to focus more on quality over the ensuing several years.

        • Waco
        • 1 year ago

        They’ve maintained their cost lead as well – so I’m not going to complain!

      • GrimDanfango
      • 1 year ago

      It’s going to take them quite a bit longer to regain my trust after the >£1000 it’s cost me as a contractor to replace 7 out of the 12x3tb drives in my server over the span of about 2 years. I’m still astounded that Backblaze’s stats are even as low as they are – two of the remaining five are already showing signs of going the same way soon. At least it’s slowed for the last year… looks like the healthy 3 might hang in there for ages.

      It isn’t normal to lose 75% of a batch of drives inside of 4 years, is it?

      Still haven’t had a single smart-bit out of place on any of the 7 HGSTs I’ve put in there.

        • Waco
        • 1 year ago

        You used consumer drives in a server for a customer? Yikes.

        3 TB drives seem to be problematic from every manufacturer, though.

          • GrimDanfango
          • 1 year ago

          No, I used Seagate Enterprise Value (Constellation CS) drives. Supposedly rated for 24×7 enterprise use. But essentially just rebadged DM001s it turned out.

          It was for my own server… I do visual effects work.

            • Waco
            • 1 year ago

            Ah, understood. Sorry for the bad luck!

            • GrimDanfango
            • 1 year ago

            I guess I should’ve been more suspicious when I saw Amazon was flogging them off cheap at the time I bought them.
            I get the feeling if I’d been more on-the-ball, I might have realised that the reliability issues were already known about by then.

            Buy cheap, buy twice 😛

            • Waco
            • 1 year ago

            I’ve been shucking 8 TB drives from externals for my home NAS. $130 per drive is too hard to pass up!

            • Arbiter Odie
            • 1 year ago

            Woah. Are they the icky shingled ones, or actual honest to spinning-rust normal drives?

            • Waco
            • 1 year ago

            Standard 5900 RPM drives. You toss any hope of a warranty though.

            • Chrispy_
            • 1 year ago

            The data’s the only value in a hard drive, though; usually the last thing you want from a failed drive is to get a replacement of the exact same type that let you down.

            I know that’s not really how it works, but psychologically when brand A lets them down, people want to use brands B-Z instead.

            • Waco
            • 1 year ago

            Yep. People, in general, are pretty irrational.

        • just brew it!
        • 1 year ago

        [quote<]It isn't normal to lose 75% of a batch of drives inside of 4 years, is it?[/quote<] Definitely not normal. But if a particular model has a defective design (or bad firmware), it can happen. You'll note that Backblaze has long since retired all of their 3TB Seagates, so they're not showing up in the chart. Go back and look at some of their stats from a few years back: [url<]https://www.backblaze.com/blog/hard-drive-reliability-stats-for-q2-2015/[/url<] Those drives you replaced wouldn't happen to have been the ST3000DM001 model, would they?

          • GrimDanfango
          • 1 year ago

          Not the DM001, no, but the “Enterprise” badged version of the DM001, the 3TB Constellation CS.

          Yeah, was surely a defective model. I recall Seagate very aggressively undercutting the rest of the market with the consumer model in particular, must have cut a few too many corners in their effort to flood the market.

      • Ninjitsu
      • 1 year ago

      I’m considering buying a 1TB FireCuda soon, although I may just get a 2TB in the end.
