Toshiba’s latest hard drives store 14 TB without shingles

We love our SSDs around these parts, but when it comes to inexpensive storage of large volumes of crucial data like Bruno's collection of cat pictures, rusty old hard drives are still tough to beat. Spinning platter drives with 14 TB of storage capacity have been around for a couple of months, but Toshiba says its MG07ACA-series drives are the first of that size to use conventional magnetic recording rather than the slower shingled magnetic recording. The company also claims the new drive is the first with nine platters inside its aluminum housing.

The MG07ACA avoids the pitfalls of SMR by stacking a lot of platters inside a helium-sealed enclosure. For comparison, the shingled arrangement of tracks in competing SMR drives requires rewriting adjacent tracks whenever the contents of an existing track are updated, slowing down the write process. Toshiba's offering uses no such tricks and is potentially faster than its competition. The platters' rotational speed of 7200 RPM also helps keep ones and zeroes moving quickly by mechanical drive standards. All models use the familiar SATA 6 Gbps interface.
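
The rewrite penalty described above is easy to sketch. Here's a toy model (the band size and track numbers are made up for illustration; real SMR geometry and firmware are far more complex) of why an in-place update is cheap on a conventional drive but expensive on a shingled one:

```python
# Toy model of SMR vs. conventional (CMR) write amplification.
# In a shingled band, tracks overlap like roof shingles, so updating one
# track forces a rewrite of every later track in the band; a conventional
# drive simply updates the target track in place.

def cmr_tracks_written(track_index: int, band_size: int) -> int:
    """Conventional recording: an update touches only the target track."""
    return 1

def smr_tracks_written(track_index: int, band_size: int) -> int:
    """Shingled recording: the target track plus every downstream track
    in the band must be rewritten (read-modify-write of the band tail)."""
    return band_size - track_index

band = 20   # tracks per shingled band (illustrative, not Toshiba's figure)
update = 3  # update the 4th track (0-indexed) of the band

print(cmr_tracks_written(update, band))  # 1 track rewritten
print(smr_tracks_written(update, band))  # 17 tracks rewritten
```

In practice, drive-managed SMR firmware softens this with caching and band-level indirection, which is why the slowdown tends to show up under sustained random writes rather than light desktop use.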

Toshiba didn't provide any throughput or latency specifications, but it did say the drives have a 2.5 million-hour (285 years) mean time to failure. The company also says that besides a 40% increase in capacity over the previous-generation MG06ACA drives, the new models should offer a 50% improvement in power efficiency, measured in W/GB. The new series also includes eight-platter models with 12 TB of capacity.

Toshiba is shipping MG07ACA drives to its customers starting today. The drives are intended for use in cloud and business datacenters, so don't expect these particular models to appear at Amazon or Newegg for the time being.

Comments closed
    • mcarson09
    • 2 years ago

    Can’t wait to see that smart error telling you to refill the helium.

    • ptsant
    • 2 years ago

    Is it true that helium leaks inevitably, and that loss of data is certain to happen at some point?

      • faramir
      • 2 years ago

      Helium leaks inevitably. That’s a fact (it cannot be stored permanently even in containers far better than the enclosure of an HDD).

      Now, does that result in data loss? I doubt it; it should merely decrease performance and increase the error rate. It doesn’t outright cause data loss.

        • just brew it!
        • 2 years ago

        I wouldn’t be so sure about that. The helium is needed to maintain proper head flying height. Since helium is better at leaking through/past seals than normal air, as the helium leaks the overall amount of gas inside the drive will drop. At some point you don’t have enough gas left to keep the heads flying correctly, at which point the drive is effectively a paperweight.

        You might be able to compensate for less helium to an extent by changing the rotational speed of the platters, but I doubt they’re doing that.

        Some (if not all) helium filled drives have a SMART attribute to tell you how much helium you’ve got left. IIRC I also read somewhere that the value of this attribute is calculated by monitoring head flying height.

          • sluggo
          • 2 years ago

          Why would helium (or any other gas) leak against a pressure differential? I’m probably missing some basic physics here, but if there’s 1 standard atmosphere’s worth of helium behind a seal, and 1 standard atmosphere’s worth of something else on the other side, what’s motivating the helium to leak out?

          I guess there’s more than 1 atm of helium inside?

          EDIT: oops, I meant to reply to faramir’s post, got jbi’s by mistake

            • D@ Br@b($)!
            • 2 years ago

            Agree.
            Only because of temperature rise, the pressure will increase a few %.
            So if the enclosure is sealed enough to prevent air coming in, only a few % will escape.

            • JustAnEngineer
            • 2 years ago

            Helium molecules are very small. They're so small that they can actually diffuse into metal, not just through flexible seals.

            • just brew it!
            • 2 years ago

            That’s not how mixtures of gasses work. If there’s more helium inside than outside, there will be a net leakage of helium out of the drive even if the net pressures are equal.

            Edit: Plus what JAE said.

            Edit 2: A first-hand example from the world of beer… a typical bottle of beer is pressurized to an atmosphere or two above ambient with CO2. But even though the pressure inside the bottle is significantly higher than the pressure outside, there is still a net diffusion of oxygen through the seal and into the bottle, eventually causing the beer to go stale (oxidation). You can get special caps (which cost a bit more) which use a seal material which is less permeable to oxygen. These oxygen barrier caps make a very noticeable difference in the flavor of the beer once you get out a few months. Bottles with barrier caps maintain their aroma and flavor much better over time than bottles with standard caps. But even with the standard (non-barrier) caps, bottles will hold their CO2 pressure for years because CO2 molecules are larger than oxygen molecules, and diffuse through the seal more slowly.
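
The partial-pressure argument in this subthread can be sketched numerically. A minimal model, assuming permeation through a seal scales with each gas's own partial-pressure difference (the permeability numbers here are invented purely for illustration):

```python
# Sketch of why helium escapes even at equal total pressure: diffusion
# through a seal is driven per-species by partial-pressure differences,
# not by the total pressure difference. All numbers are illustrative.

def net_flux(permeability: float, p_inside: float, p_outside: float) -> float:
    """Net permeation rate for one gas species (positive = escaping)."""
    return permeability * (p_inside - p_outside)

# Helium-filled drive at 1 atm total, ordinary air at 1 atm outside:
helium = net_flux(permeability=5.0, p_inside=1.0, p_outside=0.0)
nitrogen = net_flux(permeability=1.0, p_inside=0.0, p_outside=0.78)

print(helium > 0)    # True: helium diffuses out...
print(nitrogen < 0)  # True: ...while nitrogen diffuses in,
                     # even though total pressures started equal.
```

The same per-species logic explains the beer-cap example: oxygen diffuses into the bottle against a higher total internal pressure because its partial pressure inside is near zero.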

        • TheRazorsEdge
        • 2 years ago

        If the drive has enough helium to survive 10+ years of outgassing, who cares?

        On the other hand, if it has problems after 18 months then it’s basically garbage.

        Haven’t helium-filled drives been on the market for a while now? What are their expected lifetimes based on historical data?

        I tried googling it, and I see a lot of scaremongering accompanied by zero data.

          • just brew it!
          • 2 years ago

          The expected lifetime is presumably at least as long as the warranty. Most He drives seem to have 5 year warranties. So it’s probably still a little too early to tell.

          • mcarson09
          • 2 years ago

          It’s a Toshiba hard drive. You’ll be lucky to get 12 months out of the thing.

        • D@ Br@b($)!
        • 2 years ago

        Isn’t helium only in there because of better heat transfer?
        The gas also needs to cushion the head, but isn’t air good enough for that task?
        Or are the aerodynamic specifications too specific for air to work?

          • just brew it!
          • 2 years ago

          Helium behaves differently than regular air: there's lower friction (so reduced power consumption), and it facilitates a lower head flying height.

      • GTVic
      • 2 years ago

      Indubitably and death is just as certain. Timing is everything though.

    • GTVic
    • 2 years ago

    12.73 TB actual.

      • Waco
      • 2 years ago

      12.73 TiB, you mean. It is 14 TB.

        • GTVic
        • 2 years ago

        I don’t see TiB on my computer, just TB, GB, MB and KB.

          • Waco
          • 2 years ago

          Actually, you do see TiB, GiB, MiB, and KiB.
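
For reference, the arithmetic behind this subthread: drive makers quote decimal terabytes (10^12 bytes), while most operating systems report binary tebibytes (2^40 bytes), often labeled "TB":

```python
# The 14 TB / 12.73 TiB discrepancy: marketed capacity is decimal
# terabytes (10**12 bytes); the OS divides by binary tebibytes (2**40).

capacity_bytes = 14 * 10**12          # 14 TB as marketed
tebibytes = capacity_bytes / 2**40    # what the OS will display

print(round(tebibytes, 2))  # 12.73
```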

    • ronch
    • 2 years ago

    Personally I think mechanical hard drives are still great. Just because we have SSDs doesn’t mean hard drives are second rate. They have their place, just as optical drives do, at least for me.

      • just brew it!
      • 2 years ago

      Yup. My desktop has a SSD for the OS drive, but there’s still a pair of HDDs (in RAID-1) for my data. Going SSD for everything is just too damn expensive.

      And “The Cloud” (where, TBH, most data is stored these days) still relies on mechanical HDDs. It’s been my day job for the past couple of years. There may be some SSD caching to speed things up, but you can bet the back-end storage is still spinning rust, due to cost/GB. Tape even still has a place, for infrequently accessed archival data.

        • rudimentary_lathe
        • 2 years ago

        Depends on what you’re doing in the cloud. The clouds I work with are all SSD. You can rent an all-SSD VPS with enough grunt to serve hundreds of thousands of pages a day for $5 or $10 a month (see Linode, Digital Ocean, etc). Why would anyone run a web server or database server on spinning rust when the vastly superior technology is so cheap?

        I’m with you though on still having spinning rust in my desktop. That said, I would like to replace them eventually – the noise, though minor, can be annoying.

          • Thrashdog
          • 2 years ago

          I’m with another firm now, but my last employer had all of their data in Amazon S3 (which is, I believe, still HDD-based) accessed via local flash-based caching and synchronization devices. Given what it cost to store all of their data (my industry is very graphics-heavy) on mechanical drives, I don’t want to know what all-SSD storage would cost.

          • just brew it!
          • 2 years ago

          I should’ve said “bulk Cloud storage”, where the primary concern is cost, not performance. Obviously for an application or database server you’ll generally want SSDs.

    • Blytz
    • 2 years ago

    I am wondering, given current production methods, how many more platters they can cram in before they need to change the design.

      • just brew it!
      • 2 years ago

      They can probably still make the platters thinner by going to a stiffer/stronger material. Not sure how cost-effective that would be.

        • Blytz
        • 2 years ago

        Ask a question get downvoted.

      • GTVic
      • 2 years ago

      Back to 5.25″ full height drives I say.

      https://vignette.wikia.nocookie.net/uncyclopedia/images/0/08/5.25_inch_MFM_hard_disk_drive.JPG/revision/latest/scale-to-width-down/300?cb=20091129162959

    • jensend
    • 2 years ago

    [url=https://www.theonion.com/fuck-everything-were-doing-five-blades-1819584036<]**** Everything, We're Doing Nine Platters[/url<] I'm telling them to stick two more platters in there. I don't care how. Make the platters so thin they're invisible. Put some on the PCB. I don't care if they have to cram the ninth platter in perpendicular to the other eight, just do it!

      • jensend
      • 2 years ago

      erm, what is up with the bbcode problem above?

        • just brew it!
        • 2 years ago

        Well, without seeing the original bbcode it’s kind of hard to tell.

        • bhtooefr
        • 2 years ago

        My guess is that a language filter isn’t happy.

        However, as The Onion’s now using Kinja, you can remove any words that aren’t the hostname and domain from the URL: https://www.theonion.com/1819584036

    • egon
    • 2 years ago

    Bit of a physical resemblance to WD’s and Hitachi’s helium drives. The helium-based WD Reds and Hitachi He8/He10 were practically identical in appearance, while this one looks kinda sorta based off the same, just modified.

      • HERETIC
      • 2 years ago

      Pretty sure when WD bought Hitachi’s HD section, part of the deal was for Toshiba to get some tech.

      • Bauxite
      • 2 years ago

      Toshiba got at least one complete Hitachi factory and patent/licensing stuff per WTO when WD bought them up and relabeled it all HGST.

    • Flying Fox
    • 2 years ago

    9 platters, does it mean it is taller/thicker than regular 3.5″ drives?

      • just brew it!
      • 2 years ago

      That would be a monumentally stupid move. This drive is aimed at datacenter use, where it will need to fit into standard server hot-swap bays.

      As a guess, they’ve slimmed the platters down, put them closer together, and/or slimmed down the motor to make more room for platters.

      • Ummagumma
      • 2 years ago

      Probably not any thicker than the usual height for a 3.5 inch drive.

      Though I would suspect the platters are as thick as possible while still fitting into the case to prevent unwanted flexing of the surface.

      • willmore
      • 2 years ago

      Owww, could we go back to 3.5″ ‘Half-Height’ drives? I’ve got some 2GB and 4GB Barracudas in a crate in the basement. Man, the array of them running all at once was momentous.

        • just brew it!
        • 2 years ago

        Why not go all the way back to 5.25″ full-height drives? This is from back in the day when “full-height” meant double the size of what we now think of as “normal” 5.25″ bays. A full-size contemporary optical drive is actually “half-height”, according to the original bay sizes.

        I remember dealing with a lot of full-height 5.25″ drives at my day job, circa early- to mid-1990s. IIRC the capacities were in the 500MB to 1.5GB range. I think they were 3600 RPM (but don’t quote me on that, my brain has undergone a lot of bitrot in the intervening 2+ decades).

        • MOSFET
        • 2 years ago

        You almost had it, but oh the extra o – Momentus!

        And the few thin Cudas I saw along the way were craptastic.

    • blastdoor
    • 2 years ago

    Here’s a trivia question (that I don’t know the answer to):

    In what year was the total installed hard drive storage capacity of the planet last less than 14 TB?

      • meerkt
      • 2 years ago

      The following paper presents data in a rather unclear manner. But based on the tables on pages 6 and 81-82, and my general assumptions about the average HDD capacity back then (not necessarily home computers), I would guess late 70s to early 80s.

      http://science.sciencemag.org/content/suppl/2011/02/08/science.1200970.DC1/Hilbert-SOM.pdf

        • just brew it!
        • 2 years ago

        Seems plausible to me. That’s when HDDs first became available for PCs, which would be when total installed capacity would’ve started to ramp up.

        • blastdoor
        • 2 years ago

        Cool — thanks!

    • Sargent Duck
    • 2 years ago

    Bruno needs to share his cat pictures.

      • morphine
      • 2 years ago

      Oh, but I have: https://techreport.com/news/32918/day-of-the-ninja-shortbread. That's one of mine.

    • End User
    • 2 years ago

    You don’t want shingles.

      • Ummagumma
      • 2 years ago

      I hear there is a vaccine for that…. /ducks

        • aspect
        • 2 years ago

        I don’t want my hard drives to have autism.

      • Waco
      • 2 years ago

      For general storage use they’re perfectly capable. Don’t install your OS on them, don’t use them for streaming video recording…and that’s about it.

        • just brew it!
        • 2 years ago

        Wouldn’t streaming video recording be one of the things they’re OK at? Sequential writes are fairly SMR-friendly.

          • Waco
          • 2 years ago

          You would think, but file systems (due to metadata updates) are pretty good at interrupting that workflow especially when fragmentation comes into play.

          The newer drive-managed drives are far far better, but they’re still not great once they’ve been written to poorly even a single time.

          ZFS plays pretty nicely with them even with some not-so-ideal workloads, but with larger arrays the average write size ends up too small to be performant pretty quickly.

            • just brew it!
            • 2 years ago

            Heh. Fragmentation is on my sh*t list right now (day job).

            Turns out there’s a really weird interaction between ext4’s “delayed block allocation” feature (which is enabled by default, and is supposed to reduce fragmentation), and the posix_fadvise() system call (which allows you to tell the kernel that data you've just written/read won't be accessed again any time soon, thereby freeing up buffer space). With certain workloads, these two features apparently conspire to cause the free space on your disk to quickly become hopelessly fragmented, making it impossible to write files contiguously.
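
For the curious, a minimal sketch of the posix_fadvise() usage being described, via Python's os module (Linux-oriented; the path, data, and sizes are purely illustrative):

```python
# Write a file, then advise the kernel that we won't re-read it soon,
# so its pages can be dropped from the page cache (POSIX_FADV_DONTNEED).
import os

def write_and_drop_cache(path: str, data: bytes) -> None:
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        os.write(fd, data)
        os.fsync(fd)  # data must be on disk before DONTNEED can drop it
        if hasattr(os, "posix_fadvise"):  # not available on all platforms
            os.posix_fadvise(fd, 0, len(data), os.POSIX_FADV_DONTNEED)
    finally:
        os.close(fd)

write_and_drop_cache("/tmp/fadvise_demo.bin", b"x" * 4096)
```

The advice is just a hint; whether it interacts badly with ext4's delayed allocation, as described here, depends on the workload and kernel version.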

            • Waco
            • 2 years ago

            I dislike all things EXT. 🙂

            I see someone likes ext* filesystems. I really, really don’t. They’re okay-ish for desktop workloads, but they’re all pretty bad at everything else…and there are better options for desktop filesystems. Some of my hate comes from Lustre using ldiskfs under the hood, which is a modified ext filesystem that shares many of the same annoying attributes.

            • just brew it!
            • 2 years ago

            Well, the downvote didn’t come from me; I’ve got mixed feelings on EXT* myself. As I’ve gotten deeper into storage tech at the day job, I’ve come to realize that there’s definitely some dirty laundry there.

            I still use it on my home desktop and server (since it is the path of least resistance and “good enough”), but learning ZFS is on my to-do list.

            • Waco
            • 2 years ago

            No worries, I figured it wasn’t you. ZFS is definitely worth learning!

            • DrCR
            • 2 years ago

            Interesting comments, though for less insightful reasons I’ve also avoided EXT#.

            In the native Linux realm, to what filesystems do you bias towards for OS and storage respectively?

            • just brew it!
            • 2 years ago

            Ext4 is still the best supported by the Linux ecosystem overall. Btrfs is an interesting next-gen file system (SUSE defaults to it now IIRC), but less mature than either ext4 or ZFS. ZFS has a lot going for it, but hasn’t gained much traction in the Linux space because distros generally haven’t bundled it due to lingering questions over whether the license is GPL-compatible. (Canonical broke with this in Ubuntu 16.04 LTS, making it available in the official repos.)

            Ext4 for your OS volume is still likely to yield the fewest surprises, and when bad things do happen there will be an abundance of info available online to help you troubleshoot.

            I get the feeling ZFS is probably the wave of the future, at least for storage volumes. OTOH, if the ZFS licensing issue continues to be a hindrance, then it’ll be btrfs, assuming it continues to mature.

            • Bauxite
            • 2 years ago

            ZFS is really easy to get and use on Linux; just don’t boot off it in most cases. The license thing really hasn’t been an issue for actual deployments. ButterFS is still failing to mature; it has had quite a while, yet there’s too much cream and not enough solid stick.

            • Waco
            • 2 years ago

            Boot support is getting better and better. I’ve lost data to BTRFS twice (the two times I trusted it) and I have yet to lose data with ZFS with the exception of hardware failure that I couldn’t control or predict.

            • just brew it!
            • 2 years ago

            I realize getting it working shouldn’t be difficult these days, but as long as many of the major distros refuse to include it in their official repos adoption will be slow.

            • MOSFET
            • 2 years ago

            I still read it as B-Tree-F-S.

            • Waco
            • 2 years ago

            My answer is ZFS, followed by ZFS, followed by ZFS.

            I’m a bit of a zealot, but I also manage well over 100 PiB of ZFS storage and it has yet to let me down. 🙂

            • DrCR
            • 2 years ago

            Thanks Waco, Bauxite, and JBI for the further replies. I’m still running JFS on my old but still solidly reliable C2D mdadmed NAS. I’ll have to take a more serious look at ZFS in the Linux space.

        • Bauxite
        • 2 years ago

        For certain values of “capable”, to paraphrase a fun physics saying. Write amplification in spinning rust, woooo!

        They are quite a regression without an enterprise management layer above them, about like using a 10-15 year old drive in the same price bracket. The savings isn’t being passed on to consumers either, and the labeling is buried deep, if it’s advertised at all.

        Do not pass go, do not collect $200, no shingles. These polished turds should only be sold to cloud vendors and the like.

        The two* remaining drive vendors (Toshiba was a WTO-mandated spinoff of part of HGST, so I’m sticking with two) are about as nimble as telecoms, though, so we have weird but awesome stuff like shuckable helium PMR drives for dirt cheap.

          • just brew it!
          • 2 years ago

          “Write amplification in spinning rust, woooo!”

          At least spinning rust shouldn't have much additional wear and tear from the write amplification. Yes, it's a little additional wear on the head-positioning mechanism, but at least you're not wearing out the media.

          “Do not pass go, do not collect $200, no shingles. These polished turds should only be sold to cloud vendors and the like.”

          "Cloud vendors and the like" will probably be the lion's share of mechanical storage sales soon, if they aren't already. Even people who buy them for home use probably have a "private cloud-like" use case, e.g. archive/backup of content from other devices, and streaming media.

    • smilingcrow
    • 2 years ago

    I had shingles once and it hurt so much that I shouted for so long that if you had recorded it at 24 bit 96KHz it would just about have filled a formatted 14TB drive so maybe they could name this model after me!

    • UberGerbil
    • 2 years ago

    This does remind me I have to get that vaccination one of these days…

    • Duct Tape Dude
    • 2 years ago

    Surprised it’s not hydrogen-filled, I heard they’re twice the density and will fuel explosive growth in 2018.

      • UberGerbil
      • 2 years ago

      Oh the humanity!

      • modulusshift
      • 2 years ago

      I dunno, that sounds like it’ll bomb to me.

    • chuckula
    • 2 years ago

    “Toshiba didn't provide any throughput or latency specifications, but it did say the drives have a 2.5 million-hour (285 years) mean time to failure.”

    And no, that doesn't mean if you get a drive you should expect it to work for 285 years.
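
For what it's worth, the standard way to read an MTTF figure: assuming a constant failure rate (the usual interpretation of vendor MTTF claims), 2.5 million hours translates into an annualized failure rate of roughly 0.35%, not a 285-year service life:

```python
# Convert a quoted MTTF into an annualized failure rate (AFR),
# assuming an exponential (constant-rate) failure model.
import math

mttf_hours = 2.5e6
hours_per_year = 8760

afr = 1 - math.exp(-hours_per_year / mttf_hours)  # P(drive fails within a year)
print(f"{afr:.2%}")  # ~0.35% of a large fleet expected to fail per year
```

In other words, the figure describes fleet-level statistics over the drive's rated service life, not the expected lifetime of any individual unit.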

      • Takeshi7
      • 2 years ago

      But if you get two drives, one of them should work longer than 285 years.

        • Duct Tape Dude
        • 2 years ago

        The real secret is to buy a new drive and a dead drive. The new drive will last 570 years.

          • chuckula
          • 2 years ago

          THANK YOU MATHEMATICS!

      • Wirko
      • 2 years ago

      Let alone sue Toshiba afterwards.
