
HGST’s 10TB drive uses custom software to access shingled platters

HGST first revealed its 10TB cold-storage drive back in September. The 3.5" mechanical unit combines helium-filled internals with Shingled Magnetic Recording (SMR) to hit a new capacity milestone, and it's finally ready for prime time. Customers will start getting production drives in a couple of weeks. The Ultrastar Archive Ha10 won't be available through conventional channels, though, so don't expect it to hit online retailers. That's because HGST's host-managed SMR implementation requires substantial customization on the software side.

SMR crams more data onto the platters by overlapping the individual tracks like rows of shingles on a roof. This layering is typically managed by the drive, without host-level intervention. That approach is great for compatibility, but according to HGST, it can result in inconsistent performance and dramatic slowdowns over time. The firm claims its host-managed solution can smooth out those wrinkles with custom software.
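To see the trade-off in miniature, here's a toy model of a shingled zone (a sketch of the concept only, not HGST's firmware logic): appending at the write pointer costs one track write, while updating earlier data forces a rewrite of every track shingled on top of it.

```python
# Toy model of a shingled zone. Tracks overlap like roof shingles, so
# rewriting one track clobbers every track layered on top of it.
class ShingledZone:
    def __init__(self, num_tracks):
        self.tracks = [None] * num_tracks
        self.write_pointer = 0  # next track that can be written safely

    def append(self, data):
        # Sequential write at the write pointer: one physical track write.
        self.tracks[self.write_pointer] = data
        self.write_pointer += 1

    def rewrite(self, index, data):
        # In-place update: must also re-shingle every later track.
        tail = self.tracks[index + 1:self.write_pointer]  # data we'd clobber
        self.tracks[index] = data
        for offset, saved in enumerate(tail):
            self.tracks[index + 1 + offset] = saved  # write it all back
        return 1 + len(tail)  # physical writes for one logical write

zone = ShingledZone(num_tracks=8)
for block in ["a", "b", "c", "d"]:
    zone.append(block)
print(zone.rewrite(0, "A"))  # 4: one logical update, four physical writes
```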

The simplest path to SMR support involves "large modifications" to the storage driver and block layer. SMR hooks can also be integrated into the file system and applications. That's all possible in Linux now with an open-source SDK available on GitHub. HGST also expects some native driver and file system support for host-managed SMR by late 2016, but only for server platforms. Archival drives aren't meant for consumers.
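In practice, "host-managed" means the software stack, not the drive, tracks each zone's write pointer and guarantees that writes stay sequential. Here's a minimal sketch of that bookkeeping; the names are hypothetical and this is not the SDK's actual API, just the invariant the modified driver, block layer, or file system has to uphold.

```python
# Hypothetical host-side bookkeeping for host-managed SMR. A real stack
# issues zone commands to the drive, which rejects out-of-order writes
# itself; this just models the rule the host must never violate.
class HostManagedDisk:
    def __init__(self, num_zones, zone_size):
        self.zone_size = zone_size
        self.write_pointers = [0] * num_zones  # mirror of drive state

    def write(self, zone, offset, length):
        wp = self.write_pointers[zone]
        if offset != wp:
            # The drive would fail this command, so the software stack
            # must be modified to ensure it is never issued.
            raise ValueError(f"non-sequential write in zone {zone}: "
                             f"wp={wp}, requested offset={offset}")
        if wp + length > self.zone_size:
            raise ValueError("write would overflow the zone")
        self.write_pointers[zone] = wp + length

    def reset_zone(self, zone):
        # Space is reclaimed a whole zone at a time: reset, then rewrite.
        self.write_pointers[zone] = 0
```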

The Ultrastar Archive Ha10 is derived from HGST's Ultrastar He8, an 8TB drive based on traditional perpendicular recording tech. It inherits the He8's helium-filled internals—and the thinner platters and reduced power consumption made possible by helium's lower density. Seven platters are stacked inside the chassis, and perhaps more impressively, they rotate at 7,200 RPM. Don't expect impressive performance, though. Compared to the He8's 205MB/s sequential specs, the Ha10's 157MB/s read rate and 68MB/s write speed are plodding at best and crawling at worst.
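For a sense of scale, a quick back-of-the-envelope calculation (decimal units, sustained best-case rates) shows what those numbers mean for whole-drive operations:

```python
capacity = 10e12     # 10TB in bytes
read_rate = 157e6    # 157MB/s sequential read
write_rate = 68e6    # 68MB/s sequential write

print(f"full read:  {capacity / read_rate / 3600:.1f} hours")   # ~17.7
print(f"full write: {capacity / write_rate / 3600:.1f} hours")  # ~40.8
```

Filling the drive end to end takes the better part of two days, which is workable for an archive that's written once and read many times.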

Slower write speeds are inherent to shingled recording, limiting the technology's appeal to archival environments that only write data once. More specifically, the Ha10 is targeted at active archives, where data is read frequently at first and less so over time—think social media and online storage rather than a replacement for tape-based archival backups. The big "hyperscale" customers in HGST's sights already run custom software, so they shouldn't be fazed by the coding requirements attached to host-managed SMR.

Unlike some archival alternatives, the Ha10 comes with an enterprise-class five-year warranty. HGST claims the drive has a better load/unload tolerance, lower error rate, and higher MTBF spec than the competition, which isn't too shabby for a product that also happens to offer more storage.

Responses to “HGST’s 10TB drive uses custom software to access shingled platters”

  1. I always thought that it was better than using it for balloons. Or to change a person’s voice.

  2. I didn’t see any compelling arguments against 5.25″, aside from the writer speculating that platter droop due to gravity over a larger surface area might cause an issue with tighter modern tolerances. That’s just speculation though, and we don’t know if this would actually occur or be an issue.

    Higher latency/access time would definitely NOT be a problem, and might even be an improvement over this shingled model, even with larger platters at a lower rpm. I’d expect the biggest problem by far to be cases/enclosures/racks no longer built to physically accept 5.25″, but if the drives were compelling enough, that would slowly change. Power draw might also be higher, but lower-rpm and higher inertia of the platters may offset this somewhat.

  3. Yes, I know, but given how dire NASA's budget is and how quickly any talk of deep-space exploration gets taken off the table, I'd say there isn't enough helium around to satisfy demand in the near future (this century). Once it's gone, what the h*ll do we use then?

  4. Helium isn't that finite. It is constantly being generated by alpha decay from heavier elements buried in the Earth. Almost all of the primordial helium that formed with the Earth was gone long before humanity existed.

    The problem is that it takes geological timescales for sufficient quantities to accumulate through alpha decay.

    There’s plenty of it elsewhere in the solar system.

  5. I don’t like it that these drives are using a precious, finite gas to achieve high-density storage. There must be another, less wasteful, way to accomplish this. I hope to God those upcoming storage technologies mature soon.

  6. It turns out there are [url=http://www.howtospotapsychopath.com/2012/09/12/bring-back-boat-anchor-drives/]some good reasons[/url] why we're unlikely to see large-diameter drives again. Taller drives may be easier to bring back, though.

  7. Both are ongoing processes. Helium comes pretty much entirely from alpha decay of various radioactive isotopes.

  8. Interesting point. I guess I've thought of SSD and NVMe optimizations as a part of the necessary evolution to take advantage of the benefits that flash-based storage has over magnetic, whereas SMR and other hardware/software modifications on the mechanical side have seemed more hacky.

    Probably not a terribly fair comparison on my part.

  9. As I understand it, Earth isn’t making helium anymore. But then neither is she making oil.

  10. We should tell those guys working on fusion research to step it up already. We need more heliums!

  11. [quote]But....am I the only one who's concerned that "large modifications" to the storage driver and block layer are necessary as capacity creeps upwards?[/quote] I am concerned about the "large modifications," but does it really correspond to an increase in size, or to a shift in the underlying technology? Newer types of SSDs require modifications as well (NVMe, for example) to be supported by an OS.

  12. There are vast amounts of helium we let go in natural gas, but recovering it that way is more expensive than the dirt-cheap federal supply. Like many supposed shortages, it's only a shortage in terms of what an individual might be willing to pay. (Then again, if we were charging the full cost, people wouldn't waste it in party balloons either. Supply, demand, price signals; funny how that works.)

  13. Psssh, amateur, you watch your pr0n less and less over time in favor of new stuff? It's all about the classics.

  14. This. Pressure differences between gases are a nightmare. Silicones, rubbers, pretty much any kind of gasket that would be considered affordable has some amount of gas permeability. They're not perfect seals, basically.

  15. Oh man, it’s funny ’cause it’s true. I used to work for a large local white box shop that sold a whole lot of SuperMicro gear to local dev studios and SMBs. Like, so much gear that we were getting the same pricing as NewEgg on SuperMicro hardware.

    So many disasters along the lines of what you’re describing. Sales person: “Pssh. Green drives are fine for this RAID 5 storage server! It’ll save the customer thousands!” (On the onboard LSI fakeraid, of course.)

    After constant array failures due to bad config: “Pssh. It’s totally that stupid LSI controller. We’ll put an Adaptec in there.”

    After drives kept dropping with revised config: “Pssh. Stupid WD is intentionally making the Green drives not work in RAID. We’ll use Seagate 7200.11 drives — they’re way better anyway.”

    After massive failure rates on the .11 drives for both mechanical issues and firmware: “Pssh. Okay, no more consumer drives in there. Let’s go with Seagate ES2 drives.” (Built, of course, on the same fundamentally broken .11 internals.)

    And so on. I really could keep going.

    I seriously went through versions of the above with half a dozen different corporate sales people, each refusing to learn from the lessons of previous config failures from other members of the sales team.

    It was a very Dilbert-y job some days.

  16. Those are drive-managed, no special software required. They do funky things in streaming/random write workloads for extended periods of time. Perfect for consumers, somewhat fun to deal with when high bandwidth and consistency are required.

    That said, CoW filesystems can manage to run on host-managed drives perfectly fine for consumer applications with some changes.

  17. “Archival drives aren’t meant for consumers.”

    You mean like Seagate's 8TB Archive HDD that's been selling to consumers for $250? It sounds like the HDD has reached its peak capacity if SMR is all that's left for larger storage. That, and marketing the highest-capacity drives as archive drives, like it's the end of it all.

  18. Besides the heat transfer issues, it takes a stronger seal to hold a significant pressure differential as compared to maintaining a hermetic seal between two different gas compositions at similar pressure.

  19. Thermal conductance would be my guess.
    With a vacuum you end up with radiation as the only means of heat dissipation, which sucks.

  20. I’m no aerospace engineer, but I don’t understand why a sealed hard drive case with a slight vacuum for a smaller air bearing would be any worse than a hermetically-sealed He drive.

    I’m sure there’s a good reason, and I may be a little annoyed about wasting helium on hard drives.

    EDIT: Instead of replying to everyone I’m just editing this. Thanks for the insight, all. Makes a lot of sense.

  21. As far as I’m concerned, none of the top-tier enterprise storage providers are touching anything higher than 6TB, and most are sticking to 4TB.

    That’s because the reliability and delicacy of these things gets worse as they try more and more methods of squeezing bits into the same old magnetic platters.

    Access time is the biggest issue; even in cheap-and-deep SANs, low-IOPS drives manage to be a notable problem despite all the intelligent controller management and cache. We build arrays based on a 7,200-RPM drive having 80 IOPS. Realistically, any array which results in an end user of the storage seeing fewer than 20 IOPS is a disaster, making these disks very usage-specific.
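    In sketch form, that sizing arithmetic looks like this (illustrative numbers only):

```python
# Pool ~80 IOPS per 7,200-RPM spindle and check what each consumer
# of the array actually sees.
IOPS_PER_SPINDLE = 80

def per_user_iops(num_drives, num_users):
    return num_drives * IOPS_PER_SPINDLE / num_users

# 24 spindles shared by 100 users: 19.2 IOPS each, under the
# 20-IOPS "disaster" threshold despite a shelf full of drives.
print(per_user_iops(24, 100))
```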

  22. As a storage admin, I look forward to being told by someone they can buy a Supermicro full of these things for way less than my Enterprise NAS appliances.

    What? Performance? It’s a RAID, it’s gonna perform!

  23. And these extreme-capacity disks are meant for rackmount environments, where total volumetric data density across multiple drives is more important. 2.5″ drives offer better density than 3.5″ or 5.25″, all else being equal…

  24. The low-hanging fruit is to bring back the Quantum Bigfoot: 5.25″ hard drives, but that has its own problems.

  25. They are doing what they have to do to get capacities up.

    That’s the only way hard drives will stay relevant as SSDs get cheaper and larger.

    There were some projections that were posted here (or somewhere else) a couple months ago showing that hard drive capacities were going to grow at a pace much faster than any time in recent history. It was a simple little graph.

  26. I wouldn't mind going back to the 5-1/4" sizes again.
    Gimme another Bigfoot 🙂

  27. What we have today are 3½” LP (19 mm tall) discs. I haven’t seen any 3½” half-height or 5¼” half-height hard-drives since the mid-90s, and it was in the 80s that I last saw any 5¼” full-height hard-drives. Mainframes still had 8″ HDD packs that looked like washing machines when I was in high school and college.

    The first half-height 3½” hard-drive that I purchased for my own use was a Seagate ST-157N. You could still find a very few CDC Wren and Conner 5¼” full-height drives on the market in the mid-80s, but they were already a rarity. When the 157N’s stiction failures became intolerable, I replaced it with a 5¼” half-height ST-296N in the early 90s.

    Most cases still include 5¼” half-height bays for optical drives, but I don’t expect any of these larger form factors to make a return.

    P.S.: I found this:
    [url]http://en.wikipedia.org/wiki/Hard_disk_drive#FORM-FACTORS[/url]

  28. If we keep spinning discs, I expect to see 3.5″ disks that are double the height. Not exactly convenient but it gets the desired space and with a lot of room to spare. Most cases would need to be modified to fit this though.

  29. 10TB on a single drive — bully for them.

    But….am I the only one who’s concerned that “large modifications” to the storage driver and block layer are necessary as capacity creeps upwards? And that the 8TB drives from Seagate and HGST are only spec’d for cold storage due to the necessary SMR technology?

    Seems as though high-density mechanical storage really is hitting a wall. Haven’t heard of anything other than SMR and/or helium to allow for capacity increases beyond 5 or 6TB drives, though I’m sure someone else will point out something that’s “a few years out” if it’s in the works.