WD introduces Black 4TB desktop drive

Western Digital hasn’t bumped up the capacity of its high-end Black mechanical hard drive since the 2TB model debuted over three years ago. That older drive was eventually replaced by an upgraded variant with a 6Gbps SATA interface, but the total capacity remained unchanged. Today, there’s a new addition with twice the storage: the Black 4TB.

Aside from doubling the capacity of its predecessor, the new Black has much in common with existing members of the family. The spindle speed is 7,200 RPM, the interface is 6Gbps SATA, and the DRAM cache is 64MB. The Black 4TB also features a dual-stage actuator whose design originated in the Black 2TB. This mechanism uses a second arm to move the drive head with additional precision, helping to keep the head on track with higher-density platters.

Although the spec sheet doesn’t reveal how many platters the new Black uses to reach 4TB, I suspect the drive is a five-platter design like the Western Digital RE 4TB unveiled in September. The enterprise-oriented RE drives have long shared a common hardware foundation with the consumer-focused Blacks. It’s also worth noting that the Hitachi Deskstar 7K4000 4TB is a five-platter design; Western Digital purchased Hitachi’s storage division last year.

The Black 4TB should be available through “select distributors and resellers” soon. According to the press release, the suggested retail price is $340, or a little more than the going rate for the Deskstar 7K4000. That Hitachi drive gets only three years of warranty coverage, so we’d expect the Black to be priced at a premium. Neither Seagate nor Samsung offers a 4TB internal drive, although Seagate does have an external GoFlex product with that capacity. You have to crack open the case if you want to connect the underlying 3.5″ drive via SATA, likely voiding the GoFlex’s shorter two-year warranty.

We should have our hands on the Black 4TB shortly. Expect a full review just as soon as we can run the drive through our test suite. We may have to gather some other 4TB drives to keep it company.

Responses to “WD introduces Black 4TB desktop drive”

  1. You’re believing manufacturer specs again. That’ll come back and bite you if you do that in the real world 😉

    StorageReview measured an absolute best-case large sequential read of 193.0MB/s, but I will admit, that thing just about [i<]averages[/i<] more than the 150MB/s effective SATA-I limit (once you take 8b/10b encoding into account). I'll call that a win (at last) for mechanical storage. \o/ /rejoicing and party poppers.
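For anyone who wants to check the encoding math, here's a quick back-of-envelope sketch (line rates come from the SATA generation names; nothing here is a measured figure):

```python
# Effective SATA payload bandwidth after 8b/10b encoding:
# 10 bits on the wire carry 8 bits of data, so 20% of the raw rate is overhead.
def effective_mb_per_s(line_rate_gbps: float) -> float:
    data_bits_per_s = line_rate_gbps * 1e9 * 8 / 10  # strip encoding overhead
    return data_bits_per_s / 8 / 1e6                 # bits -> megabytes per second

for gen, rate in [("SATA-I", 1.5), ("SATA-II", 3.0), ("SATA-III", 6.0)]:
    print(f"{gen}: {effective_mb_per_s(rate):.0f} MB/s")
# SATA-I: 150 MB/s, SATA-II: 300 MB/s, SATA-III: 600 MB/s
```

That 150MB/s payload figure is why a 193MB/s burst, but only a ~150MB/s average, counts as "just about" saturating SATA-I.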

  2. Seagate’s 3TB (1TB/platter) drives average over 160MB/s and top out over 210MB/s: [url<][/url<]

  3. Between photos, videos, my iTunes library, installed OSes and apps, and my DAW sample libraries, my data footprint is HUGE and it is getting bigger every year. I’m closing in on 1.5TB now just for the DAW, and that is not including my downloads partition, which partly lives on a NAS.

    [i<]The fact is, I have a [u<]lot[/u<] of data, period![/i<] Whining about "a lot of data to lose" or "a lot of data to lose in one shot" is not even in my reality. In fact, to me it seems a little petty to complain about that instead of seeing the possibilities that high-capacity devices bring. So let's talk for a moment about those possibilities.

    Remember, I have a vision. To achieve any semblance of success in that vision, I need to figure out how to solve the problems. I've found that whining is counterproductive and doesn't make me feel all that much better anyway. Go figure.

    For somebody like me the choice is: one big drive (and a second drive installed for backups), or three or four little drives (and some number of little drives for backups). There are pluses and minuses for every choice, but the short of it for me is that fewer drives reduce my heat, noise, power consumption, and space requirements inside the box. I'm already running a minimum of TWO physical hard drives: one for the system and a second for the automated nightly backups. (I'm not discussing SSDs as part of this strategy, but they too need infrastructure.)

    I typically buy the very largest drives for use as backup media. Using automated Acronis tasks to take full and incremental images, you need a backup drive that's larger than the drives in production just to maintain full backups, incrementals, differentials, backup versioning, and so forth. As my capacity needs grow, I'll buy a new set of larger drives and roll the old backup drives in as my daily system drive (and its cold-imaged clone copies). For example: when my system drive was a 1TB drive, I had three 1TB drives (one for the system and two for emergency bootable clones) and a couple of 1.5TB drives for warm backups. When the 1TB drive got too small for my system, I used one of the 1.5TB drives for the system and brought some new 2.0TB drives online for backups. And so forth. Now I'm running my system on a 2.0TB drive and I use a 3.0TB drive for backups.

    If I just sat here and whined that this was "a lot of data to lose," I might as well give up on any sort of vision. But that's not gonna happen.

  4. I still have relatives who won’t buy a car with automatic door locks or windows because “OMG, that’s just more stuff to break”. Damned fogies!

  5. I agree! Sometimes I think I’d want to know just who is downthumbing people who offer a well considered and cogent observation.

    Then I remember that’s why I don’t join the R&P forum. Part of me would just rather not know that much about everybody else here… 😛

  6. Heh, ‘hammer’ is one word.

    I would have used the phrase [i<]'the outer 5% of the disk is just able to reach SATA-I speeds if you do a purely-synthetic, best-case sequential read test [url=<]like TR did when they reviewed it[/url<][/i<]. But yeah, actually I was unaware that they'd made a 7200RPM disk able to sustain more than 130MB/s so we're getting there, slowly 😉

  7. Because the greens are the same drive, when the 4TB Reds debut there will be a green equivalent to be had for less.

  8. The Red drives are RE4 drives with a gimped spindle speed. I had to listen to Western Digital go on about them at a lunch-and-learn. Basically, if you want RE4 drives but can’t afford them, buy the Reds.

  9. While I agree a 7200rpm drive is overkill for a HTPC/media server, the WD Black drives aren’t THAT much more expensive than Red drives, especially considering they come with a better warranty and higher performance. The price gap between Red and Black drives is about the same as the gap between Red and Green.

    Usually the highest-capacity drives carry a price premium anyway, at least at first. It’ll probably be better to just stick with 3TB Reds for a while.

  10. Yes, I knew that. Which is why I was questioning your comment that reads “If you are not running them in raid you might as well go with the green drives since they are essentially the exact same drive with firmware enabling TLER.”

  11. The Hitachi 7k3000 drives have been able to hammer against the SATA I theoretical limit for a while now.


  12. There isn’t one, and there isn’t one for the Reds either, because they are the same drive with different firmware.

  13. It has a 5 year warranty according to our Google overlords, which is pretty good these days for a consumer drive. I think all “Black” drives do.

    I’d still backup my data though. A warranty won’t get you your data back, and 4TB is A LOT of data.

  14. The article suggests that this drive will have a longer warranty than the 3 years of the Hitachi, but doesn’t actually state what the WD warranty is.

  15. I’ve seen and read the XT reviews here on TR (as well as commented on them); that isn’t a direct replacement for 64MB caches that barely do anything.

  16. Don’t kid yourself, Jimmy. If the cow had the chance, he’d eat you and everyone you care about.

  17. Here is the formal WD announcement re. these drives:


  18. They need to add a White line of drives. They’ve got Green and Red, Black and Blue, but it’s reverse racism not to have White!

  19. [url<][/url<]

  20. [quote<]Neither Seagate nor Samsung offers a 4TB internal drive, although Seagate does have an external GoFlex product with that capacity[/quote<]

    Seagate does have a nearline product out, the Constellation ES.3 4TB... and I think the 4TB Barracuda XT, which is an older drive, is finally available to consumers? [url<][/url<]

    From the areal density spec I believe they are 1TB platters. 😀 (Or at least the 566 Gbit/sq. in. range is in line with what Hitachi publishes for their 7K1000.D, which we know is a single-platter design... I could be wrong tho?) [url<][/url<]

  21. *edited for spelling

    The RE versions are still going for $400-490.

    I can’t complain about the consumer release because incremental progress in the mechanical space keeps the lights on until SSDs fully overtake the market. I am hoping it pushes down the price of the 3TB Reds because they are my new NAS and RAID standard.

    On a side note (triggered by some comments in the thread): I am now running a Crucial Adrenaline caching SSD in front of my 2TB Samsung drive for my desktop. Very good results so far with the new version of Nvelo Dataplex software that supports secondary drives (non boot). I get response times on par with my M4 boot drive for my commonly used data (Windows explorer directory listing, search, picture edits, etc.).

  22. I can understand down voting for my other comments but this one just proves that you guys are trolling 😛

  23. Five platters means ten heads for parallelism,
    4TB means a higher areal density per platter,
    7200RPM means more high-areal density goodness per second.

    [b<]Will we actually see a mechanical hard drive able to saturate the original 1.5Gbps SATA-I spec, at last?[/b<] The 5th-generation VelociRaptor came close at about 145MB/s sequential read, but that just hammers home the point that SSDs were bumping up against the bandwidth limit of SATA-III almost as soon as it was released, whilst mechanicals (to this day) struggle to saturate a decade-old interface.
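A hedged sketch of why platter density and spindle speed set the ceiling comment 23 asks about: sustained sequential throughput is roughly track capacity times rotation rate. The track sizes below are illustrative assumptions, not WD specs:

```python
# Rough model: sustained sequential rate = bytes per track x revolutions per second.
# At 7,200 RPM the drive completes 120 revolutions per second.
def seq_rate_mb_s(track_mb: float, rpm: int = 7200) -> float:
    return track_mb * rpm / 60  # one track's worth of data read per revolution

def track_needed_mb(target_mb_s: float, rpm: int = 7200) -> float:
    return target_mb_s * 60 / rpm  # track capacity needed to hit a target rate

print(track_needed_mb(150))  # 1.25 MB/track to saturate SATA-I's 150 MB/s payload
print(seq_rate_mb_s(1.6))    # 192.0 MB/s, in line with the 193 MB/s measurement
```

So a ~1.6MB outer track at 7,200 RPM lands right around the measured best-case figure, and anything past ~1.25MB/track clears SATA-I.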

  24. Thank you, Grigory. You’re not the only one tired of this nonsensical cliché every time a storage capacity is mentioned.

  25. The more recent comments (post-flood) about the Greens are that they are not built like they used to be. I see equally bad reviews on both the Greens and the Samsung/Seagate equivalents.

  26. I’m not disagreeing with you that the hardware hasn’t actually gone bad. Still, there are a lot of people who see their Green drives disconnecting and send them back as defective.

  27. This. I have had more than a few WD Greens in RAID 10 in the past few years without any issues of drives dropping, failing, etc.

  28. Impossible!
    -Thailand flood.
    -Something something capacity rampup.
    -Something something economic context.

    Duopoly? Psssshhh…

  29. The head-parking issue has been fixed for a while AFAIK and I seem to remember a pretty easy fix for it as well.

    As for the lack of TLER – most software RAID setups are pretty lenient of slow drives. I wouldn’t use WD Green drives for a parity RAID setup but for mirroring and striping they have no real drawbacks.

  30. Ugh. Yes, that’s what a longer TLER would do, but it doesn’t mean the *physical drive hardware* itself has gone bad.

  31. The latter issue though is not a drive hardware issue per se, the drive just gets dropped from an array by the controller. They should be able to be added back to the array because the drive itself isn’t actually bad.

  32. There are two issues I know of that cause green drives to fail early; One was a head-parking issue on the EARS and EACS series drives. The other affects all the slower drives in that they take too long to respond when they can’t read a file (vibrations or bad sector) and the controller shelves the whole disk as “bad” just because you had one slow read, one head crash, or one bad sector.

  33. A “me too!” post, but like btb and Waco said, there’s nothing intrinsically wrong with using Greens for RAID arrays; as stated, it’s almost always software RAID in any case, so there’s no funky controller voodoo going on. What hampers the WD Greens is the lack of TLER (plus a couple of what are tantamount to Windows-specific power-saving features), which means that the drive can go offline for minutes at a time in the event it finds a bad block and tries to recover from it – most RAID arrays will then go “ARGH! The drive is dead!” and kick it out of the array.

    Notwithstanding that it was WD who removed the TLER feature from the greens in the first place and then reintroduced it for a “reasonable fee” in the Reds range, I’ve been running WD greens in mdadm arrays for years. Once I disabled their overly aggressive head-parking (the infamous load cycle count/LCC issue) they’ve been running very smoothly. I’ll still be replacing them with reds as and when though.

    Agree that the reds haven’t existed long enough to make any judgements on their reliability, but since they’re basically just greens with some “RAID optimised” tweaks I doubt there’ll be much difference.

    FWIW, I have an array of five 3TB greens and one 3TB red in RAID10, and a mix of WD and samsung 2TB drives in an eight-spindle RAID6 and haven’t had any operational problems with either.

  34. Why? Other than slowing down the array to the speed of the slowest disk I see no real reason to avoid this condition.

  35. I have a 2TB Black that I got a few years ago for ~170CDN. It’s a great drive, but not as good as a Hitachi 4TB that I bought this year for ~200CDN (I got it that cheap through one of NCIX’s blowout sales). The Hitachi drive is even faster than the WD. I love WD, but I wish they were more competitive on price.

  36. The Red drives haven’t been around long enough yet to have much of a record of anything.

  37. On the subject of high-capacity disk concerns with RAID setups, throughput IS the major concern. When high availability is key, striping over 3TB drives is precarious enough that you either go RAID 6 or risk a second drive failure taking all your data before the array is done rebuilding. That is due to the amount of time it takes to bring up a new disk in the event of a failure, because of its capacity. You can work with it, but it’s a big concern these days.
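The rebuild window comment 37 worries about is easy to put rough numbers on. A minimal sketch, assuming the rebuild streams at a steady average rate (real arrays rebuilding under load are slower, so treat these as lower bounds):

```python
# Hedged estimate of how long an array member takes to rebuild.
# avg_mb_s is an assumed sustained rate, not a manufacturer spec.
def rebuild_hours(capacity_tb: float, avg_mb_s: float) -> float:
    return capacity_tb * 1e6 / avg_mb_s / 3600  # TB -> MB, then seconds -> hours

print(f"{rebuild_hours(3, 100):.1f} h")  # 3TB member at 100 MB/s: ~8.3 hours
print(f"{rebuild_hours(4, 100):.1f} h")  # 4TB member at 100 MB/s: ~11.1 hours
```

An 8-to-11-hour window where a second failure loses the array is exactly why the comment leans toward RAID 6 at these capacities.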

  38. Don’t know why you are getting so worked up over a statement. It is, in fact, a lot of data to lose at once if it died. System administration is a whole other concern.

  39. Given that WD tunes the firmware to spin the platters based on the power consumption of the drive, not spindle speed (which can vary from drive to drive), it is not a wise choice to use WD Green drives in RAID configurations.

  40. Well, that’s a fair point but it does not apply to RAID setups and incremental backups. (Both with their own sets of problems but still.) You see the problem in the bandwidth of the storage devices which is a lot more realistic than to complain about the capacity. (“The capacity is too damn high!”)

  41. Are you implying that using WD Green drives in RAID arrays (which are usually [i<]software RAID[/i<] these days) kills them?

  42. I was thinking the exact same thing. Every dang time a drive sets a new record for most amount of data on a single drive someone ALWAYS goes “omg but thats a lot of data to looz if it failz!!!”

    Think im gonna start posting that every single time a new drive with a high capacity is announced as well, except i will pair my comment with a trollface.

  43. I have 2x3TB Reds in RAID 1, and 2x3TB Greens in RAID 1. Haven’t had problems with any of them, but since the Reds are specifically made for RAID setups, I plan to stick with them in the future. The price difference is rather small anyway.

  44. The Reds have not even been out long enough to say that they are any more reliable. Also, the Greens were not meant to be used in a RAID environment, something WD explicitly tells you, but people continued to use them and complained when they died.

  45. Other than the Green drives having a horrible record of dying and the Red drives not? The Reds may have better QA or may be cherry-picked Greens. Red drives also have a 50% longer warranty and claim to be designed for better handling of RAID environments (vibration and heat).

  46. Even when a floppy was “big”, it still only held a few minutes of data to copy/restore. Restoring a single 4TB HD can take many hours.

    The opposite of restoring is backing-up. If someone could back-up and restore 4TB of data in under 10 minutes, they probably wouldn’t say “it’s a lot of data”.
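Putting rough numbers on that point: a minimal sketch, assuming an idealized sustained transfer rate (real backup pipelines are slower):

```python
# Back-of-envelope restore math; all rates here are assumptions, not specs.
def restore_hours(capacity_tb: float, mb_s: float) -> float:
    return capacity_tb * 1e6 / mb_s / 3600  # TB -> MB, then seconds -> hours

def rate_for_minutes(capacity_tb: float, minutes: float) -> float:
    return capacity_tb * 1e6 / (minutes * 60)  # MB/s needed to finish in time

print(f"{restore_hours(4, 150):.1f} h")       # ~7.4 h at a best-case 150 MB/s
print(f"{rate_for_minutes(4, 10):.0f} MB/s")  # ~6667 MB/s for a 10-minute restore
```

A 10-minute restore of 4TB would need roughly 45x the drive's own best-case sequential rate, which is why nobody treats it as quick.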

  47. Every single effing time. A whole floppy disk was also a lot of data to lose at once in the mid-eighties. Should we have stopped there? Care for your data (backups, RAID setups) or risk it. It is your choice, nobody else’s.

  48. If you are not running them in raid you might as well go with the green drives since they are essentially the exact same drive with firmware enabling TLER.

  49. Mhmmm, just slap another platter on that thing and pretend the world is fine and dandy.

    Seagate and WD really need to get better engineers to design their caching algorithms… And mesh them with like 4GB of flash.

  50. Yuppers, even a TB is a lot of data to lose. It’s one of the reasons why I clone my full drives and put the clone in a fireproof safe in a separate building.