Seagate’s Shingled Magnetic Recording tech layers tracks to boost bit densities

Since about 2006, hard drives have used perpendicular recording technology to store data. That method has enabled platter capacities up to 1TB and drives as large as 4TB. Perpendicular recording is starting to bump into physical limits, though. Seagate says the read and write components of current technology can’t get any smaller. Neither can the associated drive tracks, which are down to 75 nanometers in width.

According to the firm, a new approach is needed if areal densities are to continue their upward trajectory. That new approach is called Shingled Magnetic Recording, or SMR.

Shingled recording preserves the perpendicular bit orientation of its predecessor. However, it fundamentally changes the way in which those bits are organized. Instead of arranging individual tracks with space in between, shingled recording lays tracks on top of each other in a staggered fashion—much like the shingles on a roof.

Traditional (left) versus shingled (right) track layouts. Source: Seagate

As the diagram illustrates, the read head is much narrower than the write head. This size difference allows the tracks to overlap without affecting the drive’s ability to read the data. The overlap poses a problem when data is rewritten, though. Because the write head covers the read portion of the next track, that data has to be "picked up" before the rewrite can occur. The displaced data then needs to be written back to its original location, displacing the data in the following track. And so on.

To prevent rewrites from cascading down too many tracks, Seagate arranges the tracks into bands. The precise layout of these bands will be different depending on the drive’s target application. Increasing the number of tracks per band will raise the storage density, but it will also slow rewrite performance.
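The rewrite mechanics are easier to see in code. Below is a minimal Python sketch of a single band under the simplest possible firmware policy; the data structures and function are hypothetical illustrations, not Seagate’s actual implementation:

```python
# Toy model of one SMR band. Writing a track clobbers the shingled
# portion of every track after it, so that data must be read first
# and written back, one pass per displaced track.

def rewrite_track(band, index, new_data):
    """Rewrite one track and return the number of extra passes."""
    displaced = band[index + 1:]      # data the wide write head will clobber
    band[index] = new_data            # the rewrite itself
    band[index + 1:] = displaced      # put the displaced tracks back
    return len(displaced)             # each displaced track costs a pass

band = ["t0", "t1", "t2", "t3"]          # a 4-track band
print(rewrite_track(band, 1, "t1-new"))  # 2 extra passes (t2 and t3)
```

Rewriting the last track in a band costs nothing extra, while rewriting the first track drags the whole band along, which is why wider bands trade rewrite speed for density.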

Seagate says it’s already shipping drives with SMR technology, though the first product isn’t set to debut until next year. That drive promises a 25% increase in storage density: 1.25TB per platter and up to 5TB per drive.

SMR looks like a clever technology, and I’m eager to test the first drives based on it. That said, Seagate needs a better promo video. This short introduction has nothing on HGST’s classic Get Perpendicular clip.

Seagate will also have to convince folks that SMR is worth the rewrite penalty. That might be an easier task than topping a disco-infused technology demonstration. SSDs have long since replaced mechanical drives as the go-to solution for high-performance PC storage, relegating HDDs to secondary and high-density storage. SMR’s benefits may outweigh the penalty for those applications.

Comments closed
    • jsavona
    • 9 years ago

    Disclaimer: Not a storage technologies expert.

    Hopefully write combining/command queuing can help mitigate this somewhat, but effective track width is reduced from 75nm to somewhere between 40nm and 60nm, and the drive head will need to move for each extra write and wait for the platter to spin back to the same radial location.
    Also, unlike NAND, where sectors are mapped to LBAs (logical block addresses) on write (to enable wear leveling) and TRIM un-assigns the mapping, hard disk LBAs are largely fixed (aside from the relocated ‘spare’ sectors used to mask bad sectors) and have no unused flag to track sectors that haven’t been allocated.
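    To make jsavona’s contrast concrete, here is a toy Python sketch; the class and function names are hypothetical, and real FTLs and drive firmware are vastly more involved:

    ```python
    # Hypothetical toy contrast between NAND-style dynamic LBA mapping
    # (with TRIM) and a hard drive's essentially fixed LBA layout.

    class NandMapping:
        """SSDs remap an LBA to a fresh physical page on every write
        (enabling wear leveling); TRIM drops the mapping so the old
        page is known to be garbage."""
        def __init__(self):
            self.table = {}        # LBA -> physical page
            self.next_page = 0

        def write(self, lba):
            self.table[lba] = self.next_page   # always a fresh page
            self.next_page += 1

        def trim(self, lba):
            self.table.pop(lba, None)          # page can be erased lazily

    def hdd_physical(lba, spares=None):
        """On a hard drive the physical location is a fixed function of
        the LBA, apart from a few relocated bad sectors ('spares')."""
        return (spares or {}).get(lba, lba)
    ```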

    • cygnus1
    • 9 years ago

    With a non-volatile cache it could be made atomic.

    • bcronce
    • 9 years ago

    They could probably read an entire group of them in one pass, then write back the entire group in another pass.

    It wouldn’t be atomic, so there is the potential for data loss if they don’t make sure they have some caps to maintain power for at least one write.

    • cygnus1
    • 9 years ago

    Does it really require so many passes for a single band? I would think they have enough sense to pick all the writes to a single band out of the command queue, read the entire band, and then rewrite the band with all writes included? So wouldn’t a write to a non-empty band be very similar to NAND (read, modify, write) and not require a separate pass for each track in the band?
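    For what it’s worth, the coalescing cygnus1 describes could look something like the following hypothetical sketch (read_band and write_band stand in for drive internals we don’t know):

    ```python
    # One read-modify-write cycle per band, regardless of how many
    # queued writes target it - analogous to updating a NAND erase block.

    def flush_band(band_id, queued_writes, read_band, write_band):
        """Apply all queued (track, data) writes for one band in a single
        read pass plus a single sequential write pass."""
        band = read_band(band_id)           # pass 1: pick up the whole band
        for track, data in queued_writes:   # merge every pending write
            band[track] = data
        write_band(band_id, band)           # pass 2: lay the band back down
    ```

    Under that policy the per-band cost is bounded at two passes, which is presumably why a deep command queue would matter so much for these drives.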

    • jsavona
    • 9 years ago

    math correction 🙂

    If a band is 2 tracks wide, there is a 50% chance that underlying data will need to be picked up and rewritten in an extra pass.
    If a band is 3 tracks wide, there is a 66% chance that underlying data will need to be picked up and rewritten in one (33%) or two (33%) extra passes.
    If a band is 4 tracks wide, there is a 75% chance that underlying data will need to be picked up and rewritten in one (25%) or two (25%) or three (25%) extra passes.

    So, if a band is x tracks wide, there is a 1-(1/x) chance of needing 1 to x-1 extra passes to write data.

    A write transfer rate of 150MB/s becomes 15.79MB/s (ignoring the read-before-rewrite penalty) if the drive has to pass over the data an average of 9.5 times ((0+1+2+...+18+19)/20), using a 20-track band as an example.
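    Checking that arithmetic in Python (this mirrors the comment’s simplification of dividing the raw transfer rate by the average number of extra passes):

    ```python
    tracks = 20
    extra_passes = sum(range(tracks)) / tracks   # (0+1+...+19)/20
    print(extra_passes)                          # 9.5
    print(round(150 / extra_passes, 2))          # 15.79 (MB/s)
    ```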

    • cygnus1
    • 9 years ago

    With the right firmware and enough NAND write buffer, I’d trust this. I would hope they have the drive support TRIM so that they can mark used sectors as clear to rewrite without penalty.

    • just brew it!
    • 9 years ago

    “Lead-based solder was another victim of this, and the more ecology-friendly substitutes aren’t as durable or tolerant.”

    Yeah, no kidding: https://techreport.com/forums/viewtopic.php?f=29&t=81577

    • just brew it!
    • 9 years ago

    Yes, I’m sure it is done with a highly focused laser. The additional time I’m speculating about isn’t due to affecting adjacent bits (and rewriting them); it is due to the fact that heating something to help flip its magnetic field has got to take more time since you’ve got to wait for it to heat up. Current drives using conventional heads take less than a nanosecond to write each bit.

    Hmm… maybe it is possible to focus the laser far enough ahead of the write head to eliminate the delay. Hadn’t thought of that before.

    • Tom Yum
    • 9 years ago

    The main problem is the inability to shrink the write heads; that’s what’s causing the current plateau in areal density. As the bits get smaller, the magnetic domains must be made less sensitive to prevent them from flipping due to thermal fluctuations, which is done by increasing the domains’ resistance to change (their magnetic coercivity). And the smaller the write head becomes, the harder it is to generate enough magnetic flux to overcome that coercivity (which has to increase as domains shrink) and write the ever-decreasing domain sizes. The read head doesn’t have this problem, as it simply needs to sense the change in magnetic flux, not create a field strong enough to change it.

    This shingle tech allows the read heads to continue to decrease in size (increasing track density) at the expense of having to rewrite data overwritten in neighbouring tracks on the HDD.

    • jihadjoe
    • 9 years ago

    There’s also that platter salting thing that was reported around 2011. Supposedly gives 6x the space for HDDs. I wonder what happened to that…

    • MarkG509
    • 9 years ago

    This reminds me of the move from 512B to 4KB sectors a few years ago. Early on, there was a lot of read-modify-write going on until BIOSs, O/Ss and filesystems caught up – learning to use the 4K sectors and maintain sector alignment.

    If read/write heads turned into hydras with one head per shingle, then they could possibly write the whole band in a single pass. We’d probably need OSes to add 16K or 64K sector support.

    Couple that with newer disk interfaces (SATA Express, straight PCIe, etc.) and this might be a good way to source/sink the bandwidth using parallel “shingle” reads/writes.
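    The alignment issue MarkG509 alludes to is easy to demonstrate; this is a generic illustration, not tied to any particular drive:

    ```python
    # A partition that doesn't start on a 4KiB boundary turns every 4K
    # write into a read-modify-write on a 4K-sector drive.

    PHYSICAL_SECTOR = 4096

    def is_aligned(start_byte):
        return start_byte % PHYSICAL_SECTOR == 0

    print(is_aligned(63 * 512))    # False: the legacy 63-sector partition offset
    print(is_aligned(2048 * 512))  # True: the 1MiB offset modern tools default to
    ```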

    • Krogoth
    • 9 years ago

    CRTs aren’t dead. They’ve been reduced to niches that need their advantages (color accuracy, true blacks, superior contrast). The main reason CRTs were discredited in the mainstream is the new eWaste disposal standards that were pushed out in the early 2000s. CRTs are among the worst offenders for eWaste due to their heavy use of heavy metals (cadmium and lead) for shielding. Lead-based solder was another victim of this, and the more ecology-friendly substitutes aren’t as durable or tolerant.

    Optical media is still #1 for mass physical media distribution on a cost-per-GB basis, by a good margin. Flash will never be as cheap for bulk physical media distribution. Digital distribution, on the other hand, is putting a nice dent in mass physical media distribution.

    HDDs are more stable than SSDs for archival data storage and are more cost effective on a $$$-per-GB and physical-volume basis. SSDs are also running into their own walls. The only thing SSDs killed was fast-RPM HDDs.

    • Bensam123
    • 9 years ago

    Yeah, how they heat it would depend a lot on their implementation though. They could do something like that with a laser (I don’t think it would affect the bits).

    • f0d
    • 9 years ago

    holographic storage for low cost consumer use is still a long way away

    HAMR is a go for the 2014/2015 timeframe

    • just brew it!
    • 9 years ago

    HAMR is still on the table, AFAIK. I’m guessing that the stumbling block is write speed, since the media needs to be physically heated. HAMR may be more suited for use in a hybrid flash/magnetic drive where the flash can absorb bursts of write activity, which then get flushed out to disc in the background.
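    A hybrid arrangement like the one speculated about here might behave like this toy sketch (entirely hypothetical; write_to_disc stands in for the slow heated write path):

    ```python
    from collections import deque

    # Flash absorbs host writes at full speed; the slower HAMR-style
    # magnetic write drains the buffer off the critical path.

    write_buffer = deque()

    def host_write(block):
        write_buffer.append(block)     # fast path: land the write in flash

    def background_flush(write_to_disc):
        while write_buffer:
            write_to_disc(write_buffer.popleft())  # slow heated magnetic write
    ```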

    • Bensam123
    • 9 years ago

    I thought there was other tech, like holographic storage and HAMR, that was supposed to be arriving sometime soon? Stuff like that has been announced over the last few years, but it doesn’t seem to be arriving.

    This definitely seems like it’s ready to hit the market as soon as they want it to (whatever “already shipping” means), but I can only imagine how this will affect data loss as well.

    You know shingles may actually be pretty sweet if they combine it with their SSHDs. The SSD will swallow the write penalty and the HD will give it ridiculous size.

    Perhaps they’re already thinking about and trying to implement this? This is just the first to make it to market. If they can figure out their SSHD caching and performance issues that could completely dominate the market.

    • just brew it!
    • 9 years ago

    If the drive is at all fragmented, even “sequential” writes aren’t sequential on the media, so you’re amplifying the bad effects of fragmentation.

    Furthermore, doing copy-on-write and block remapping are likely to be disastrous on a device with non-trivial seek time, since they will result in logically sequential blocks being scattered all over the physical media. Even your “sequential” accesses become effectively random (and slow)!

    To make matters even worse (as if they weren’t bad enough already), all of the meta-data required to manage that massive block remapping scheme (it sounds like you’re proposing the potential remapping of every sector on the drive?) needs to be stored somewhere as well. This means there will be more seek and rotational latency delays as the drive has to chase down all of that meta-data. I suppose you could put a fast flash-based meta-data cache in the drive, but at that point the additional cost of the flash (and a more complex controller) have likely eaten up any economic advantage you had from using the denser media in the first place.
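    The scattering effect is simple to model; here’s a toy Python illustration (hypothetical, with 100 blocks standing in for a whole drive):

    ```python
    import random

    # Count the seeks a logically sequential read would incur: every
    # discontinuity in the physical layout costs a head movement.

    def seeks(physical_layout):
        return sum(1 for a, b in zip(physical_layout, physical_layout[1:])
                   if b != a + 1)

    layout = list(range(100))   # freshly written: physically sequential
    print(seeks(layout))        # 0

    random.shuffle(layout)      # after heavy copy-on-write remapping
    print(seeks(layout))        # typically 97-99: nearly every block seeks
    ```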

    • Bensam123
    • 9 years ago

    ‘A lot’ isn’t relative at all… no sir.

    • Bensam123
    • 9 years ago

    To a certain extent. I don’t think there are really any servers built for drives like that, though. It would just be consumers that benefit (as they’re the ones with 5 1/4 drive bays available). So I don’t know if it would actually be lucrative for Seagate to make a special run of drives (I’m sure producing 5 1/4 platters would be its own feat). They’d also cannibalize their normal high-end consumer storage market (4TB drives).

    It would be interesting from a performance standpoint as well. The outside edge would be running super fast, unless they slow the inside down.

    • madmilk
    • 9 years ago

    You’re making the implicit and faulty assumption that every workload writes to completely random sectors. I don’t think it matters to most people that their hard drive goes from “crap” at random I/O to “somewhat more crap” at random I/O, when SSDs are clearly the better choice for such tasks. Additionally, techniques like copy-on-write and block remapping could be used to gain most of the performance back. I wonder if these drives will require TRIM?

    • Haserath
    • 9 years ago

    This would have made more sense if they shared write space instead of read space (see the first image).

    Instead, it looks like a rewrite will deliver 1/3 the performance (a single write versus a read plus two writes), or less if it cascades further.

    I think Seagate is worried about Samsung’s V-NAND tech reaching HDD capacity in a few years.

    • peartart
    • 9 years ago

    Nothing beats the sneakernet.

    • Deanjo
    • 9 years ago

    “(yes i said bigfoots was recent tech lol)”

    Lol, not to worry, I still think that MFM and RLL were done with alien technology. “A WHOLE 5 MB ON ONE 5 1/4 FULL SIZED DEVICE? GTFO!”

    • f0d
    • 9 years ago

    exactly
    i wouldnt use it for critical storage but as a backup and (for me) a large scratch drive (somewhere to store my bluray collection before converting it down) – i have almost a hundred blurays and im trying to convert them for my htpc (which takes ages to do with the settings i have in handbrake – motion est 64/qprd all frames and everything turned to 11 heh)

    and heck even a full height drive like you said would be good – its not like speed would matter much, i just said bigfoots because it was actually done before with relatively recent technology (yes i said bigfoots was recent tech lol)

    • f0d
    • 9 years ago

    people thought the same back since ever
    i remember people telling me my 40mb (yes megabyte) hdd was way too much – nobody would ever use that amount of space and it would be too much data to lose all at once if i did fill it

    here we are with 4TB drives and people are still saying the same thing

    • Deanjo
    • 9 years ago

    I agree with you. I wish they would come out with a 5 1/4 drive that was reasonably priced with HUGE capacity for backups and media.

    • Starfalcon
    • 9 years ago

    Yeah, no reason not to do this except the constant move in tech to keep making things smaller and smaller. As a side note, I have several Bigfoot drives and they still work fine…if a little slowly.

    • PopcornMachine
    • 9 years ago

    Seems like an awful lot of extra work for just a 25% increase in space.

    If there weren’t high-density SSDs out, then it might make sense.

    But there are and it doesn’t.

    • f0d
    • 9 years ago

    ah ok thanks – my memory fails me from something so many years ago

    personally i would buy one if they were 6-10TB in size as i have 5 2TB drives all full and a few more 1TB drives and i would jump at the chance to replace them with 1 or 2 “bigfoots” if possible – sure i could get 4TB drives now but atm the price/storage doesnt really seem worth it for them

    im sure its a niche thing and i guess not many others have the same storage requirements but i would totally jump on the chance to get a few “bigfoots” (or any 6+TB drive)

    and as i said – need something to put in those 5 1/4 bays too 🙂

    • Waco
    • 9 years ago

    Write once, read many (or almost never) is a typical workload for an archive…especially if you treat the HDDs like tapes. 🙂

    • Deanjo
    • 9 years ago

    Bigfoots were 3600 and 4000 RPM.

    • f0d
    • 9 years ago

    i kind of agree for half height drives (as silly as it might seem at first)

    as was said before most people use hard drives for storage now and the hit they will take to rpm probably wouldnt affect most people that have fast ssd’s or even fast hard drives as their main storage

    anyone remember the bigfoot? it was a 4200rpm 5-1/4 half height hdd with very large capacities (i still have one somewhere 🙂
    since the largest amount of storage is on the outer tracks a 5-1/4 drive with newer perpendicular tech would make much larger hard drives than we have today

    besides we need something to put in those 5-1/4 bays 😛

    • ColeLT1
    • 9 years ago

    It’s possible now. At my company, the (new) desktops and laptops have SSDs, our SAN has 24x400GB SSDs, and the servers have SD cards for the hypervisor plus Samsung SSDs for local Citrix XenApp and profile storage. Only BackupExec writes to 3x2TB drives, which then dupe to tape. We store a TB of scanned images on a NL-SAS SAN, which uses rebranded SATA3 7,200-RPM drives.

    • Aliasundercover
    • 9 years ago

    What were the givebacks related to perpendicular recording? It wasn’t directly responsible for the move from 512B to 4K sectors, only a share of it, as that change came from the general increase in density and the corresponding need for more error correction. It cost money of course, but our money bought better hard drives.

    This shingle business and its rewrites make for a nasty regression. Remind me if I have forgotten, but the only time so far that hard drives became worse was the sector-size change. That was a minor annoyance, mostly because it called for changes in long-calcified practices. Shingles do somewhat resemble 4K sectors if you grant them a 1,000:1 difference in scale, something I am reluctant to do.

    • ChronoReverse
    • 9 years ago

    Well, we don’t know the details of the implementation. I suspect Seagate engineers worked something out so that speeds will at least be similar. At the very least, read speeds will be faster.

    It’s probably because they couldn’t do this that HAMR still isn’t out.

    • ClickClick5
    • 9 years ago

    I’m just saying it is a lot. Like the people saying why do you need a 3TB boot drive. It is just a lot of space! Now, if all backed up, then yeah all is ok. But still….lots of space.

    • ClickClick5
    • 9 years ago

    I swapped a switch with a 10Mb one as a joke. That was lovely.

    • just brew it!
    • 9 years ago

    I would be very leery of using drives based on this tech on any system that isn’t on a battery backup. A power interruption during a write will mean that you lose not only the data which was being written at that instant (as with a normal drive), but whatever data was stored on the adjacent track as well.

    Also, WTF does
    “Seagate says it’s already shipping drives with SMR technology, though the first product isn’t set to debut until next year.”
    mean? That they’re shipping engineering samples?

    • Chrispy_
    • 9 years ago

    Ultra 3D Cloud Recording Technology HD+ II (Championship Edition *Turbo*)

    • Chrispy_
    • 9 years ago

    But the speeds went up too, then: capacity increase *and* speed increase. This is a small capacity increase for seemingly large speed penalties.

    • Chrispy_
    • 9 years ago

    Yeah, I’m happy enough with 100MB/sec transfer speeds too.

    Have you tried copying a couple of Blu-ray rips over 100Mbit ethernet, though? That is so slow that it actually *hurts*.

    • ChronoReverse
    • 9 years ago

    Eh. To put it into perspective, when perpendicular recording came out, platter capacity increased from 160GB to 188GB.

    That’s less than 18%.

    • bittermann
    • 9 years ago

    Ultra 3D Cloud Recording Technology HD+

    • Aliasundercover
    • 9 years ago

    This is far too much pain for +25% capacity. All that rewriting means lower reliability with large quantities of data at risk to power failure. It isn’t just new writes but old data which is normally much safer.

    There must be more to this, that or the hard drive people have lost it.(*) Perhaps at 2x or 4x capacity this sort of thing would make sense. Hard drives would wind up having remapping layers keeping track of pre-cleared sets of tracks ready for fast writing. All the complexity of SSDs with yet lower performance. Forget 4k “advanced format” sectors. May as well make them 4MB.

    * If the display folks can chase 3D down a rat hole I guess the HD people can have their turn.

    • sparkman
    • 9 years ago

    This technology could be useful for archival drives where performance is not a top priority.

    I can’t *wait* for SSD’s to make archival the only market for spinny disk drives like these. We already got rid of CRT’s, CD’s/DVD’s are nearly dead for common PC desktop use, and the hard drive will be the next domino to fall to the almighty silicon wafer.

    • bthylafh
    • 9 years ago

    I expect lackwits have been saying that since the first multimegabyte drive, if not earlier. It’s like they don’t run backups, therefore nobody else does.

    • albundy
    • 9 years ago

    get Shingles? no, that didn’t sound right. http://www.youtube.com/watch?v=xb_PyKuI7II

    • albundy
    • 9 years ago

    that’s what everybody’s been saying since the 100GB drive came out, yet here we are.

    • A_Pickle
    • 9 years ago

    Two things:

    1.) I am willing to bet Seagate’s engineers have considered this, and…

    2.) Hard drives are increasingly more about storage, not speed. Speed is for SSD’s. (That said, I’m pretty happy with my Hitachi 1 TB drives and their 100 MB/sec transfer speeds).

    • ClickClick5
    • 9 years ago

    Still, my concern is this: that is a lot of data to lose on one drive. 4+TB gone…poof.

    • Shouefref
    • 9 years ago

    This stretches possibilities too much for a slim advantage: +25% would be a lot for computing speed, but not for HD space.

    • CasbahBoy
    • 9 years ago

    Ultra 3D Cloud Recording Technology HD

    • Chrispy_
    • 9 years ago

    “Increasing the number of tracks per band will raise the storage density, but it will also slow rewrite performance.”

    If a band is 2 tracks wide, there is a 50% chance that underlying data will need to be picked up and rewritten in an extra pass.
    If a band is 3 tracks wide, there is a 66% chance that underlying data will need to be picked up and rewritten in two extra passes.
    If a band is 4 tracks wide, there is a 75% chance that underlying data will need to be picked up and rewritten in three extra passes.

    So, if a band is *x* tracks wide, there is a 1-(1/*x*) chance of needing *x* extra passes to write data.

    I’m no *rocket surgeon*, but from what I know of hard disk technology, the write speeds are going to completely nosedive for minimal percentage improvements, especially if they’ve only managed 25% so far. A transfer rate of 150MB/s becomes 15MB/s if the drive has to pass over the data an average of 10 times more (using a 20-track band, as an example).

    • axeman
    • 9 years ago

    If it has to rewrite multiple tracks to perform one write, it would seem that write performance would suffer. Curious to see tests of these drives.

    • Narishma
    • 9 years ago

    Better yet, 3D Cloud Recording Technology. Because everybody knows the cloud makes everything better.

    • Deanjo
    • 9 years ago

    I say bring back full height 5 1/4 drives. Lots of room for storage capacity then.

    • chµck
    • 9 years ago

    Good thing they aren’t using NAND mem

    • lilbuddhaman
    • 9 years ago

    The reliability sounds terrible: you’re possibly writing several times more for every write than on a traditional drive, for only 25% more room?

    • ChronoReverse
    • 9 years ago

    You could have said the exact same thing when they invented perpendicular recording.

    In the end I’m curious as to how well it performs (speed and density) and whether it has implications for reliability.

    • ronch
    • 9 years ago

    This thing, while fairly innovative, feels like it’s just a stopgap solution to the problem of how to continuously increase bit densities.

    • ronch
    • 9 years ago

    Needs a better promo video and better marketing. How about… 3D Recording Technology?
