Sony and IBM team up to create 330 TB tapes

The oldest gerbils probably think about magnetic tape in the context of data storage for Commodore VIC-20s and the like. The youngest gerbils probably think of tapes as the way their parents used to record TV shows before the days of online streaming and DVR boxes. Magnetic tape is alive and well in the realm of large-scale backup operations, and Sony and IBM have worked together to create a prototype system that can store a mammoth 330 TB of data on a single cartridge with a volume of one third of a liter. For comparison's sake, a standard 3.5" hard drive displaces about 0.4 L and currently tops out at a capacity of 12 TB.
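Those figures pencil out to a striking density gap. A quick sketch using the article's round numbers (330 TB in a 0.33 L cartridge versus 12 TB in a 0.4 L hard drive):

```python
# Volumetric storage density: prototype tape cartridge vs. 3.5" hard drive.
tape_tb, tape_liters = 330, 0.33   # prototype cartridge
hdd_tb, hdd_liters = 12, 0.4       # 12 TB 3.5" drive

tape_density = tape_tb / tape_liters   # TB per liter
hdd_density = hdd_tb / hdd_liters

print(f"tape: {tape_density:.0f} TB/L vs. HDD: {hdd_density:.0f} TB/L")
print(f"roughly {tape_density / hdd_density:.0f}x the bytes per liter")
```

That works out to roughly 33 times the bytes per liter of the densest hard drives of the day.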

Achieving the ludicrous 201 Gb/in² areal density needed to cram that much data into such a tiny volume requires closing the gap between the surface of the tape and the magnetic head. Sony contributed a new lubricant that bonds to the magnetic surface of the tape and reduces friction between the contact point of the magnetic head and the tape. IBM brought its 48 nm-wide tunneling magnetoresistive (TMR) read-write heads, advanced servo control technology, and signal processing algorithms that enable the high storage density.

The magnetic layer in the tape is also special. The prototype tapes have a magnetic layer that is applied using sputter deposition to achieve a more uniform crystalline structure. The sputtering method produces a nano-grained magnetic layer with an average grain size of 7 nm. Sony says this deposition method and a suite of other technologies allow packaging more than a kilometer (3280 ft) of tape inside a cartridge measuring 4.3" x 4.9" x 0.96" (or 10.9 cm x 12.5 cm x 2.5 cm).
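A back-of-the-envelope check of the claimed density (assuming standard half-inch tape, the quoted 201 Gb/in², and ignoring servo tracks and error-correction overhead, which eat into the usable area):

```python
# How many meters of half-inch tape does 330 TB need at 201 Gb/in²?
CAPACITY_BITS = 330e12 * 8   # 330 TB of user data, in bits
AREAL_DENSITY = 201e9        # bits per square inch
TAPE_WIDTH_IN = 0.5          # half-inch tape
M_PER_IN = 0.0254

area_in2 = CAPACITY_BITS / AREAL_DENSITY          # square inches of media
length_m = (area_in2 / TAPE_WIDTH_IN) * M_PER_IN  # meters of tape

print(f"~{length_m:.0f} m of tape")  # ~667 m
```

The ~667 m result leaves a comfortable margin under the kilometer-plus of tape Sony says fits in the cartridge, which is where the formatting overhead goes.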

Neither company provided any details about when to expect a product based on the new tech to hit the market, but when that happens, it will be in the realm of datacenter administrators with large budgets and data that cannot be lost under any circumstances. Users looking to back up their cat videos and gameplay footage will probably have to wait a bit longer.

Comments closed
    • adampk17
    • 2 years ago

    Imagine if these tape drives hooked up via the floppy drive interface like those Colorado drives popular in the '90s. LOL!

    • davidbowser
    • 2 years ago

    Maybe they are using a new type of sputtering, so it's "a thing", but sputtering itself has been around for a while. A few careers ago, when I was in R&D (mid '90s), it was the standard method for laying down a layer of conductor (gold was good) a few atoms thick on a ceramic substrate.

    I’m guessing it’s new because my gut tells me getting a magnetic material to go down that thin with traditional sputtering wouldn’t work. I’m guessing it would be either too thick or too clumpy.


    • DPete27
    • 2 years ago

    My company still backs up their measly 1TB of server storage onto tape every night…

      • Waco
      • 2 years ago

      Hopefully they test their backups occasionally too. 🙂

    • NTMBK
    • 2 years ago

    Finally, enough storage for my ~~por~~art collection!

    • albundy
    • 2 years ago

    bring back the 8-track!

      • Chrispy_
      • 2 years ago

      I know, right?
      LTO-1 through 7 are only 4-track, but Ford was putting 8-track into their cars back in the '60s.

      Sony and IBM need to catch the hell up.

    • Chrispy_
    • 2 years ago

    We moved away from LTO tape simply because the transfer rates are too slow. What's the point in being able to save 330TB of data to a single cartridge if it takes three weeks at the *theoretical maximum sequential rate* and more like eight weeks in practical real-world use?

      • Waco
      • 2 years ago

      You’re doing archival/backup storage wrong if the bandwidth of a single tape bothers you. 🙂

      Further, if you do need everything from the tape, getting 95%+ of sequential speeds is pretty darn easy.

        • Wirko
        • 2 years ago

        “Single tape”? Do we need 16 of them in RAID 0?

          • Anonymous Coward
          • 2 years ago

          You’d be well advised to have more than one copy of your data.

            • Waco
            • 2 years ago

            Multiple copies are too expensive if the data is big. Multiple copies also aren’t inherently safe, you need multiple protected copies if you truly can’t lose something.

          • stdRaichu
          • 2 years ago

          Most SME tape libraries will have at least two and usually four or six tape drives in them in order to get all the data from an overnight ingress written in time to avoid backing up during the business day, and you’ll generally want to factor in keeping a drive or two free for when you need to run emergency restores without busting through your backup window SLA.

          We’ve got a bunch of Quantum tape robots at our main data centre, each I think with 32 tape drives. It dupes out to two tapes for all our production data for storage in two separate storage facilities (nonprod stuff only gets written out once).

          Not a patch on our old beautiful Sun/Storagetek tape robot though (SL8500 IIRC?). If it’s your kinda thing there’s plenty of lovely videos on youtube of tape robots in action.

          • Waco
          • 2 years ago

          Certainly not striped with no protection. Some type of erasure across multiple if you need more bandwidth? Absolutely.

      • jihadjoe
      • 2 years ago

      IIRC LTO-5 was doing 500-800MB/s back when HDDs and SSDs were at SATA2, so for pure sequential bandwidth tape was faster. This new medium, at 201 Gb/in², should be super fast. At 8 ips it'll be doing 200GB/s. That's in line with the fastest NVMe SSDs.

      Of course the biggest drawback will always be the seek time.

        • stdRaichu
        • 2 years ago

        We’ve still got some LTO5 tape libraries at work, and throughput is more along the lines of 150MB/s IIRC (at least at the level of a single tape) – it takes about 3-4hrs to write one out (compared to about 6hrs for our LTO6).

        If you're dealing with tape at all, seek times *should* be irrelevant - because you should never be seeking, and always reading/writing a whole tape sequentially from your staging pool.

          • Anonymous Coward
          • 2 years ago

          Although with a 330TB tape you might not want to read the whole thing out. 🙂

            • stdRaichu
            • 2 years ago

            ~300TB tapes are the sort of thing that'd be great for doing your year-end backups that go into cold storage for seven years, so you should only ever be putting a tape into a drive if you're planning to write the whole thing or just read a bit of it back 😉

            Note that, similar to hard drives, increases in areal density generally mean faster sequential reads and writes – a quick look at Wikipedia suggests that LTO 7 through 10 can write at 300, 420, 700 and 1000MB/s respectively. Assuming the new tech could sustain writes at 1GB/s (and hopefully much faster), you should be able to write out a whole 300TB tape well within 4 days.
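            That write-time arithmetic, as a quick sketch (rates are the quoted figures; load, seek, and verify time are ignored):

            ```python
            # Full-tape write time at a sustained sequential rate.
            def write_days(capacity_tb, rate_mb_s):
                seconds = capacity_tb * 1e12 / (rate_mb_s * 1e6)
                return seconds / 86400  # seconds per day

            print(f"300 TB @ 1000 MB/s: {write_days(300, 1000):.1f} days")      # ~3.5 days
            print(f"330 TB @ 300 MB/s (LTO-7): {write_days(330, 300):.1f} days")  # ~12.7 days
            ```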

          • Chrispy_
          • 2 years ago

          Yeah, we stopped using LTO at LTO6, and we were getting about 1.5Gbit/s (~200MiB/s) after hardware compression.

          Sadly, even with a twin-drive autoloader we were spending more than two weeks backing up our 200TB of deduped data, even with a disk staging server. 200TB is just too much data at ~400MB/s to be quick enough for disaster recovery backups, and with 2Gb/s Fibre connections to CoLo datacenters, it's just faster, easier, cheaper, and more flexible to throw a SAN into a CoLo, do a bit-level sync of the whole company dataset, and snapshot every few hours as well.

          One full backup a fortnight (plus incrementals) became a permanently-synced off-site copy of the data with instant recovery to any 2-hour increment going back six weeks. I can't honestly see a need for tape anymore, at least not in your average-sized company. Perhaps if your business is the long-term storage of non-latency-sensitive datasets (like, 10 minutes access time) reaching into hundreds of petabytes or even exabytes, then a 330TB tape is going to be amazeballs. I just hope the areal density means massively improved read/write speeds, because at current LTO7 speeds, a 330TB tape would take a week and a half to go.

            • stdRaichu
            • 2 years ago

            Syncs and snapshots aren't backups though, and there's a significant ongoing cost in keeping data hot or warm in that fashion (ditto for paying to keep it in various types of cloud), but that's firmly into "how long is a piece of string?" territory depending on your business methodology and any statutory regs.

            If you only had a twin drive robot for a 200TB dataset then I’d say someone vastly underspent on your backup solution. I think we’re fortunate in that our veteran storage manager was once in a firm that suffered a catastrophic SAN failure that clobbered the DR “copy” as well, and then found that they didn’t have enough tape capacity to restore in under a week (which ultimately resulted in the firm going under).

            Surprised you saw such poor throughput speeds on the tape drives though – a back-of-a-fag-packet calculation shows we appear to get at least 85% of the 300MB/s you'd expect.

            Agreed though that a company of a few hundred people is unlikely to have anything approaching enough data to make tape backups economic.

            • Chrispy_
            • 2 years ago

            What do you mean, we EXCEEDED the theoretical max throughput of LTO6? That is not poor throughput; that is attaining 97-100% of the absolute maximum 160MB/s and then getting a little bit extra with on-the-fly 1.2x hardware compression.

            You’re confusing LTO6 with LTO7 I think. We’ve been out of tape for years, for the reasons you explained.

            Regardless, all of these conversations are sort of a moot point when discussing single-digit TB capacities of LTO. This crazy IBM/Sony thing is **55x higher capacity** than an LTO7 tape! That means it is going to take much longer to write than anything we've ever seen before, unless they've increased the write performance by more than 55x. I **highly** doubt that, since if they had managed 55x faster tapes, ***THAT*** would be the headline, not the capacity!!

            • Waco
            • 2 years ago

            By the time these are out I’d bet we’ll see multiple GB/s out of them.

    • Ifalna
    • 2 years ago

    *Imagines random 4K writes on that thing*

    Well given the lubricant, I don’t think this is designed for frequent read/write passes. Probably more like “write once and read only if you have to”.

      • CuttinHobo
      • 2 years ago

      Perfect boot drive, amirite?

        • jihadjoe
        • 2 years ago

        Is this I.T.’s version of recommending a Hayabusa as the ‘perfect starter bike’?

          • strangerguy
          • 2 years ago

          More like the personal mobility vehicle for the crippled but with unlimited range as a ‘perfect starter bike’. You can go anywhere if you waited long enough!

    • derFunkenstein
    • 2 years ago

    4.3″ x 4.9″ x 0.96″ (or 11 cm x 13 cm x 25 cm).

    2.5cm I’m guessing?

      • Generic
      • 2 years ago

      The conversions really don’t match either.
      • 11 x 13 x 2.5 → 4.3″ x 5.1″ x .98″
      • 4.3″ x 4.9″ x .96″ → 10.9 x 12.4 x 2.4

        • derFunkenstein
        • 2 years ago

        Good point. The link to the Sony Global press release has these figures:

        4.29 in. x 4.92 in. x 0.96 in. (109.0 mm x 125 mm x 24.5 mm)

        So TR rounded a bit and precision was lost.

          • odizzido
          • 2 years ago

          going from 24.5mm to 250mm is a pretty terrible rounding error.

            • DrCR
            • 2 years ago

            Hey, what’s an order of magnitude between friends.

    • just brew it!
    • 2 years ago

    The main things tape has going for it are low cost per byte stored and robust archival characteristics. The fancy media required for this tech may very well kill the cost advantage, and until it has more of a track record the archival characteristics are a big question mark.

      • lem18
      • 2 years ago

      “more of a track record”, heh

    • Anonymous Coward
    • 2 years ago

    I should point out that the power of *The Cloud* allows almost anyone to store their cat photos on tape at a very low price. In fact that's the cheapest storage offered on AWS.

      • Waco
      • 2 years ago

      The power of *The Cloud*, where your data is as important as everyone else's (read: not very important). 😛 Give this 8-9 years based on current lab stunts -> product timelines.

        • Anonymous Coward
        • 2 years ago

        Meh, in the cat photo class, the cloud providers are vastly more professional than any user can be expected to be, and also super easy to use. Best practices from the best professionals applied on a global scale via automation and standardization.

        As for future availability, like everything else, the APIs and services offered in the cloud will crustify as people grow to depend on them, and they should be there longer than all but the largest on-site data centers. Every work of humanity has a finite shelf life, but I have no reason to suspect that the cloud providers will do poorly.

          • Waco
          • 2 years ago

          Except that they already do poorly… :shrug:

            • Anonymous Coward
            • 2 years ago

            Who’s doing what poorly?

            I swear people can’t rationally evaluate cloud services because they are afraid for their own jobs. But from my experience, I’d choose the cloud in any organization except perhaps the very largest and best funded.

            • Waco
            • 2 years ago

            I have no fear of cloud services; there is literally zero chance they'll ever be utilized for the data I shepherd at work.

            What I don’t like is their protection domains, costs for storage, and utterly terrible speeds given the hardware you have to throw at them to make them “safe”.

            I’m not sure what “largest” and “well funded” mean, but we spent 5 years investigating cloud storage systems and found ourselves wanting. We built our own since none of them offered the type of bandwidth, data protection, and efficiency that we needed.

            • Anonymous Coward
            • 2 years ago

            Fair enough. I admit the storage options are low performance compared to a good SAN, but I'll work around that. Yeah, an instance hooked up at 500 Mbit/s per CPU core sounds terrible; I thought so too, but we work around it and have good productivity. I'm not even sure we're saving money. We might be spending more, but productivity pays the bills.

            • Waco
            • 2 years ago

            Traditional SANs are far too slow as well (for our purposes, anyway). We generally have 100+ Gbps available per node and it’s an utter waste to get a handful of GB/s out of one. 🙂

            Yet, I’m still excited about the prospect of tapes like these being deliverable in the next decade.

            • Anonymous Coward
            • 2 years ago

            I think you're one of those customers I had in mind when I was saying "biggest and best funded". 🙂 Not much on a cloud platform is going to give 100 Gbps per node, except local SSD.

            • Waco
            • 2 years ago

            Not the best funded, just the most frugal. 🙂 It’s incredible how tight budgets can be when your requirements are astronomical.

            Our current system deploys, with servers and Infiniband backbone, at just under 5 cents per GB at the ~60 PB scale. With enough clients, it'll move data at somewhere around 0.5 TB/s once we get our new stack deployed.

            But anyway – it’s an odd world. Cloud vendors are slowly moving towards multiple tiers of protection within one namespace. I wonder where they got that idea… 🙂

      • mcarson09
      • 2 years ago

      Any data you put in the cloud is no longer owned by you. There are court rulings about this.
