WD Red drives will let you add storage 10TB at a time

We all love solid-state storage. Modern SSDs are fast, dense, and not too expensive, relatively speaking. That is, relative to what they used to cost. SSDs are still pretty expensive compared to hard drives, so if you need to cram tons of terabytes of data into a server somewhere (and you don't work for a national laboratory), you probably want spinning rust. WD will now let you do just that, ten terabytes at a time, with its new 10TB Red and Red Pro hard drives.

Western Digital's Red series of hard drives is intended for use in consumer and small-business RAID arrays. The drives have less aggressive head parking than the company's other disks, and NAS-tuned firmware with support for Time-Limited Error Recovery. Instead of spending up to two minutes trying to recover data from a bad sector, the disks will time out in just seven seconds (although the period is configurable).
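
If you want to check or tweak that timeout yourself, smartmontools exposes it through the drive's SCT Error Recovery Control settings. Here's a minimal sketch, assuming smartctl is installed and the drive sits at the hypothetical /dev/sda; the timeout values are given in tenths of a second:

    # Query and set a drive's SCT Error Recovery Control (the mechanism
    # behind TLER) by shelling out to smartctl. Run as root.
    import subprocess

    DEVICE = "/dev/sda"  # hypothetical device path

    # Print the current read/write recovery timeouts.
    subprocess.run(["smartctl", "-l", "scterc", DEVICE], check=True)

    # Set both timeouts to 7.0 seconds (70 tenths of a second).
    subprocess.run(["smartctl", "-l", "scterc,70,70", DEVICE], check=True)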

The Red drives aren't new, of course—they've been around for some five years. We reviewed the 4TB model back when it debuted and had some pretty nice things to say about it. Last year, WD bumped the line's maximum capacity up to 8TB. The standard Red drives use a 5400-RPM spindle speed, while the Red Pros spin their disks at 7200 RPM. There are also a couple of Red 2.5" drives in 750GB and 1TB capacities, both spinning at 5400 RPM.

Of course, you'll pay a premium for packing that many platters into one place. Newegg asks $400 for a standard Red disk with 10TB capacity. The Red Pro is just a bit more at $460. Those prices come out to about 4.0¢ and 4.6¢ per gigabyte, respectively, for the mathematically impaired. While those prices might seem high compared to other mechanical drives, that's still just a sixth of the cost-per-gigabyte of even the cheapest SSDs. The new 10TB Reds will be available June 17.
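
For the curious, the per-gigabyte arithmetic is easy to sanity-check in a few lines of Python, using the decimal gigabytes drive makers advertise:

    # Cost per gigabyte for the two 10TB Reds at Newegg's asking prices.
    for name, price_usd, capacity_gb in [("Red", 400, 10_000),
                                         ("Red Pro", 460, 10_000)]:
        cents_per_gb = price_usd * 100 / capacity_gb
        print(f"{name}: {cents_per_gb:.1f} cents per gigabyte")
    # Red: 4.0 cents per gigabyte
    # Red Pro: 4.6 cents per gigabyte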

Comments closed
    • Krogoth
    • 3 years ago

    So much pr0n capacity.

    • southrncomfortjm
    • 3 years ago

    Would a Red drive be any good in an HTPC? Or should I go for a Blue? I’d like to add a super-sized HDD to my seriously out-of-date storage solutions.

    Was also considering a high capacity HGST Deskstar.

      • Waco
      • 3 years ago

      Any single drive, unless you have a good backup, is a great way to lose data.

      A mirrored pair of them would be a good start. For media consumption literally any drive will do.

    • Chrispy_
    • 3 years ago

    I wish manufacturers would make it more obvious whether a drive is shingled or not.

    I’m using 10TB shingled drives for sequential archiving, and when they’re NOT dealing with sequential archiving they’re practically unusable. Speeds so slow that we’re talking about a regression to the days when disks were measured in gigabytes rather than terabytes.

    • RedBearArmy
    • 3 years ago

    I for one would like to see higher-capacity 5400-RPM 2.5” drives.
    Why can’t we have some of those, WD? (and that’s without shingles)

      • HERETIC
      • 3 years ago

      Basic math tells me we’re limited by size.

      Seagate has the ST5000LM000 at 5TB,
      but that is SMR.
      Best I can find from WD is the WD30NPVX-00N2PT0 at 3TB.

    • Binglewood
    • 3 years ago

    When are the 8TB and 10TB Black drives coming out? We’ve got Red, Purple, and Gold now.

      • Zan Lynx
      • 3 years ago

      15k and 10k drives are dead. No one is going to be buying a spinning WD Black when you can purchase an NVMe drive instead. The speed difference from 7,200 RPM to 15k is insignificant compared to the jump from any spinner to an SSD.

    • DragonDaddyBear
    • 3 years ago

    With that much data able to be lost in one fell swoop, they should consider a minimum purchase of 2 drives or give a discount to encourage the practice.

    To DTD’s point, the odds of a bit error on a full read are high enough that RAID 5/6 should be outright not considered and RAID 1/10 should be the starting point. Or, better, a file system that can handle bit correction and has a similar function to the RAID levels.

      • travbrad
      • 3 years ago

      Hopefully people buying 10TB drives (especially WD Reds) know that a single drive is a disaster waiting to happen. They aren’t exactly going to be putting these in your average pre-built PC, and most people would buy an external drive if they buy a new hard drive at all.

        • EzioAs
        • 3 years ago

        Typo there. You probably meant TB instead of GB…

        …unless we’re back in 1999.

          • travbrad
          • 3 years ago

          Yep, definitely meant TB 🙂 Fixed. No need to start worrying about Y2K again.

      • just brew it!
      • 3 years ago

      Provided the drive and controller correctly report unrecoverable reads to the OS, and you have scrubbing enabled for the array, this level of unrecoverable errors should be tolerable in practice. The scrub pass will detect the bad sector and rewrite it with good data. If the physical sector is bad due to a media defect, the drive itself will reallocate it to a spare when it is rewritten.

      Yes, there's still some risk; if you happen to have a drive fail outright between scrub passes and there's an undetected bad sector out there that developed since the last scrub, you may lose data. If even this level of risk is intolerable you can either go with RAID-6, or a 3-drive RAID-1 for additional redundancy.

      Many Linux distros configure the md (software RAID) subsystem to initiate an automatic background scrub of all arrays on the first Sunday of every month.
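
      For reference, those scheduled scrubs ultimately just poke md's sysfs interface. A minimal sketch of kicking one off by hand, assuming a software array at the hypothetical /dev/md0 and root privileges:

      # Start an md "check" scrub and poll its progress via sysfs.
      from pathlib import Path
      import time

      md = Path("/sys/block/md0/md")  # hypothetical array

      (md / "sync_action").write_text("check\n")  # begin the read-and-verify pass

      while (md / "sync_action").read_text().strip() == "check":
          # sync_completed reads like "12345 / 67890" (sectors done / total)
          print("progress:", (md / "sync_completed").read_text().strip())
          time.sleep(60)

      # Count of blocks whose redundant copies disagreed during the check:
      print("mismatch_cnt:", (md / "mismatch_cnt").read_text().strip())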

        • DragonDaddyBear
        • 3 years ago

        I totally agree, but it’s still a lot of data to scrub on a rebuild. It would take days to do a parity check on a drive this size, no? MD is awesome stuff. I think RAID-Z would be better, though.

        Of concern to me is if someone does this in Windows. The default there is NTFS, not ReFS. I can see this being marketed to people as the “best” thing out there because it has the biggest number. And a consumer would totally go and buy it because they want the “best.” Or an intermediate user might use the onboard RAID controller. Lots of issues that the average person may not realize.

          • just brew it!
          • 3 years ago

          Yeah, depending on how much other activity there is and what has been configured as the scrub throttling rate, it could take a while. Even if allowed to run flat out on an otherwise mostly idle system it’ll probably take a couple of days. I’ve noticed it running into Monday or Tuesday on my server, and the largest drives I have are only 4TB; but IIRC I’ve got it throttled back to make sure it stays out of the way.
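
          To put rough numbers on the throttling trade-off (the sustained rates below are illustrative assumptions, not measurements; md's floor and ceiling live in /proc/sys/dev/raid/speed_limit_min and speed_limit_max, in KB/s):

          # Naive scrub-duration estimate: capacity divided by sustained rate.
          def scrub_hours(capacity_tb, rate_mb_s):
              return capacity_tb * 1e6 / rate_mb_s / 3600

          for rate in (50, 100, 180):  # assumed MB/s, throttled to unthrottled
              print(f"10TB at {rate} MB/s: {scrub_hours(10, rate):.0f} hours")
          # 10TB at 50 MB/s: 56 hours
          # 10TB at 100 MB/s: 28 hours
          # 10TB at 180 MB/s: 15 hours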

          I guess we’re approaching the point where it’ll be normal to just have a scrub running continuously in the background. It reminds me of the story (possibly apocryphal?) of how the crew that paints the Golden Gate Bridge just paints continuously — by the time they finish painting the whole thing, it’s time to start over again.

    • Duct Tape Dude
    • 3 years ago

    The other main difference I find important between Pro and non-Pro is the non-recoverable read errors per bits read. On the Pro it’s <1 error per 10^15 bits read; on the non-Pro it’s <1 error per 10^14 bits read. 10^14 bits is 12.5TB.

    This means with the 10TB non-Pro, if you fill the drive, you have up to an 80% chance you won’t read the whole drive back perfectly. Yikes.
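
    As a sanity check on that figure: the 80% comes from reading the expected error count as a probability. Modeling UREs as independent events instead (an assumption; the spec is a worst-case bound, not a measured rate) puts the chance of at least one error a bit lower:

    import math

    bits_read = 10e12 * 8  # a full 10TB drive, in bits
    rate = 1 / 1e14        # non-Pro spec: <1 URE per 10^14 bits read

    expected = bits_read * rate
    print(f"expected UREs: {expected:.2f}")              # 0.80

    # Poisson model: probability of at least one URE during the full read.
    print(f"P(>=1 URE): {1 - math.exp(-expected):.0%}")  # 55%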

      • DragonDaddyBear
      • 3 years ago

      With drives of this size, one should mandate ZFS or some similar new FS that has bit correction built in.

      • DragonDaddyBear
      • 3 years ago

      Is it just me, or is it crazy that the higher-RPM drive is the more accurate one?

        • just brew it!
        • 3 years ago

        Not necessarily. Stricter QC and tighter tolerances all around could make for a faster and more reliable drive.

      • SecretSquirrel
      • 3 years ago

      I would assume that a non-recoverable read error simply means a given sector read has too many bit errors for the error correction to salvage, and that the disk will have to re-read the sector to get good data, so performance will be impacted.

      I would not interpret this as a read error that makes it all the way back to the host. I could be wrong though.

        • meerkt
        • 3 years ago

        I think it’s a hard error, not a retry.

        What I don’t get, though, is the nature of the error. Is it 1 bit (seems unlikely), or is it a whole jumbled sector (seems likely)? If it were 1 bit, and assuming 4K sectors, you would just have to try 32K bit flips and check each against the checksum.
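
        That brute-force idea is easy to demonstrate in miniature. A toy sketch, with CRC32 standing in for the sector checksum (real drives use far stronger ECC; this only shows the combinatorics):

        import zlib

        SECTOR = 4096  # bytes, i.e. 32,768 candidate bit flips

        def recover_single_flip(corrupt, crc):
            # Try flipping each bit in turn and test against the checksum.
            for i in range(SECTOR * 8):
                corrupt[i // 8] ^= 1 << (i % 8)  # flip bit i
                if zlib.crc32(corrupt) == crc:
                    return bytes(corrupt)        # checksum matches: recovered
                corrupt[i // 8] ^= 1 << (i % 8)  # undo, try the next bit
            return None  # more than one bit was bad

        good = bytes([0x5A]) * SECTOR
        bad = bytearray(good)
        bad[1234] ^= 0x10  # simulate a single flipped bit
        assert recover_single_flip(bad, zlib.crc32(good)) == good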

          • Duct Tape Dude
          • 3 years ago

          As I understand it, the nature of a non-recoverable read error is beyond whatever the internal ECC and drive retry algorithms can muster for an entire sector. That’s up to 4k of lost data.

          All hard drives silently re-write any data that is found to be “weak” (low magnetic signal) or had a bit flip that the ECC could tolerate, and non-recoverable reads happen when data is corrupted beyond knowing what to rewrite. So just reading the entire drive end-to-end can help prevent bit rot by letting the drive’s internal algorithms automatically check every sector.

          Of course, the point of Time-Limited Error Recovery is to tell the drive to give up early when attempting to recompute weak data, because you’d rather let a RAID controller or filesystem fix the error by pulling the correct data from the other drives. And, you know, having your entire RAID halt while one drive does a double-take in disbelief seems really, really annoying.
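
          On the read-everything point above, the refresh pass really is that simple. A minimal sketch, assuming a raw block device at the hypothetical /dev/sdb and root privileges (it only reads, but be careful with raw devices anyway):

          # Read a drive end to end so the firmware can spot and rewrite
          # weak sectors along the way. A URE would surface as an OSError.
          CHUNK = 8 * 1024 * 1024  # 8MB reads

          total = 0
          with open("/dev/sdb", "rb", buffering=0) as dev:
              while True:
                  data = dev.read(CHUNK)
                  if not data:
                      break
                  total += len(data)
          print(f"read {total / 1e12:.2f} TB with no unrecoverable errors")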

        • Duct Tape Dude
        • 3 years ago

        Non-recoverable read errors make it back to the host; recoverable read errors are silently fixed, as I understand it.

        • psuedonymous
        • 3 years ago

        “I would assume that a non-recoverable read error simply means that a given sector read has too many bit errors for the error correction to salvage the read and that the disk will have to re-read the sector to get good data and so performance will be impacted.”

        If it can re-read the sector and succeed, it is by definition a Recoverable Read Error. A Non-Recoverable Read Error is when the drive tries that and is still unable to successfully read, fails the read operation, and has to add the sector to the g-list.

        • just brew it!
        • 3 years ago

        You are confusing recoverable and non-recoverable read errors. A non-recoverable error means the drive cannot return the correct data for that sector to the host.

      • RedBearArmy
      • 3 years ago

      You gotta clean your glasses for those S.P.E.C.I.A.L. PDFs.
      I caught myself on this more than once.
      E.g., they are equal: Pro is <10 per 10^15 and non-Pro is <1 per 10^14.

      • colinstu12
      • 3 years ago

      Red drives normally have 3-year warranties and Red Pros have 5-year warranties, too.

      • Waco
      • 3 years ago

      I bet if you measure, they are identical.

        • cmrcmk
        • 3 years ago

        This. I suspect those statements are more about warranty limits than actual mechanical differences.

        • Duct Tape Dude
        • 3 years ago

        DID YOU JUST SUGGEST THE NEXT TECHREPORT DRIVE ENDURANCE CHALLENGE?

        Because I would be totally interested in this.

          • Waco
          • 3 years ago

          Actually that would be pretty interesting. Read drives 24/7 and log errors…

      • Chrispy_
      • 3 years ago

      What you say is true, but the most likely content for a consumer drive of this capacity is media, specifically photos and recorded video.

      So, in 10TB of photos and movies of your kids growing up, 1 single pixel in a 20-megapixel image has a 4-in-5 chance to be wrong? I can live with that to save $60. I don’t think that single pixel is honestly worth $0.06, let alone $60. It’s definitely not “Yikes” territory to me.

        • egon
        • 3 years ago

        Flipping a single bit can screw up a lot more than a single pixel:

        https://arstechnica.com/information-technology/2014/01/bitrot-and-atomic-cows-inside-next-gen-filesystems/

        Note that article is discussing the slightly different issue of silent corruption, where unlike an unrecoverable error, the drive doesn't even register that anything's gone wrong.

          • Duct Tape Dude
          • 3 years ago

          I don’t like this article because in reality, a single bit will never just flip on a disk without the ECC doing its job. It’s a great demonstration of what happens when one bit is corrupted, but hard drives work in terms of sectors and automatically correct for a single bit flip or two (and will rewrite the sector properly in case of a single bit flip). If enough bit flips happen beyond the ECC, the entire 4k will be marked unreadable, and that will surely corrupt a JPEG more than just one pixel would.

          And another note: a drive actually is aware of unrecoverable errors like we’re talking about with the manufacturer’s rating. It gets logged to SMART. There may be other failures upstream, though (such as the motherboard’s storage controller, a bad SATA cable, etc.) that cannot be detected by the drive.

        • Duct Tape Dude
        • 3 years ago

        It’d be an entire sector (4k) that gets messed up. In digital formats, that’s an entire picture ruined or a video glitch of up to a few seconds. It’s enough to be annoying, but you’re right–it’s not the end of the world for consumers.

        The thing is, smaller drives typically aren’t safe from this limit either. It’s just a fact of life–for every 12.5TB of data you’re statistically going to get at least 1 URE. The other reality is there are so many other things that may be less reliable, such as a drive/motherboard/storage controller failure.

        But when you are building a RAID 5 system (or any system with a 1-drive failure tolerance), and the RAID has to be rebuilt, you now have a serious issue that the rebuild will likely fail with these huge drives, and then you lose the entire array. That’s an expensive problem to have, and an even more expensive one to solve, because it forces you to invest in a 2-drive-failure-tolerant system like RAID6 (or equivalent). As others have mentioned, ZFS/BTRFS/ReFS or any modern checksumming FS is the most effective way to combat UREs.
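
        Putting rough numbers on that rebuild risk, under the same independent-error assumption as the earlier estimate (the spec is a pessimistic bound, so treat these as illustrations):

        import math

        URE_RATE = 1 / 1e14     # non-Pro Red: <1 error per 10^14 bits
        DRIVE_BITS = 10e12 * 8  # one full 10TB drive

        # A RAID 5 rebuild has to read every surviving drive in full.
        for survivors in (3, 5, 7):
            expected = survivors * DRIVE_BITS * URE_RATE
            p_fail = 1 - math.exp(-expected)
            print(f"{survivors + 1}-drive array: P(>=1 URE) ~ {p_fail:.0%}")
        # 4-drive array: P(>=1 URE) ~ 91%
        # 6-drive array: P(>=1 URE) ~ 98%
        # 8-drive array: P(>=1 URE) ~ 100%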

          • Chrispy_
          • 3 years ago

          Yep, I’m running RAID 6 across a 50TB (usable) 7-drive Ironwolf Pro backup array, simply because rebuild times are too long for RAID 5.

          I probably should know this (but I don’t, because it’s not as if I have any choice), but is VMFS a checksumming FS?

            • davidbowser
            • 3 years ago

            I should know this too (I used to work for VMware) but I had to look it up. VMFS5 does not, and I have seen nothing in the new docs that indicates that VMFS6 does.

            That said, VMFS6 does have support for the 4k AF format drives (512 emulation mode), which means that the drive itself would have better ECC. It’s not the best, but it’s better than a kick in the head.

            http://www.yellow-bricks.com/2016/10/18/vsphere-6-5-whats-new-vmfs-6-core-storage/

          • Waco
          • 3 years ago

          Modern RAID controllers can be configured to allow a rebuild even with a URE. It means corruption, but it’s better than losing the whole array. They also scrub on pre-configured intervals to catch bad sectors prior to disk failure.

          It’s not the doom and gloom many say it is. I love ZFS, but it’s not the only answer.

    • DancinJack
    • 3 years ago

    Until HGST NAS drives give me a reason to stop buying them, I’ll continue down that road.

    10TB is nice, if a bit expensive.

      • HERETIC
      • 3 years ago

      Best I can tell, WD 8TB and 10TB helium-filled drives are basically HGST clones.

        • slaimus
        • 3 years ago

        They are actually HGST drives. The UL listing number from the picture (E182115) is the one issued to HGST. Plug E182115 into http://database.ul.com/cgi-bin/XYV/template/LISEXT/1FRAME/gfilenbr.html
