Western Digital brings Advanced Format to Caviar Green

For quite some time now, mechanical hard drives have stored data in 512-byte chunks called sectors. That sector size worked for lower capacity points, but as areal densities rise, it’s become increasingly inappropriate for new drives. As a result, the industry has decided to transition to a 4KB sector size dubbed Advanced Format.

So-called legacy formatting schemes sandwich each 512-byte sector between Sync/DAM and ECC blocks that handle data address marking and error correction, respectively—and also take up space. You still need those blocks with Advanced Format, but only every 4KB rather than every 512 bytes, which translates to a dramatic reduction in overhead. This approach allows Advanced Format to make more efficient use of a platter’s available capacity, and Western Digital expects it to boost useful storage by 7-11%, depending on the implementation. Current 500GB/platter products stand to see an increase in useful capacity of about 10%, which is really quite impressive.

Advanced Format’s reduction in overhead. Source: Western Digital.

Since not all operating systems can handle 4KB sectors natively, Western Digital’s Advanced Format implementation divides each 4KB physical sector into eight 512-byte logical sectors. The drive’s firmware performs all the necessary translations, and according to Western Digital, there’s no loss in performance as long as partitions are properly aligned. Windows 7 and Vista should align new partitions properly on their own, but you’ll have to download the free WD Align application to align partitions correctly under Windows XP. WD Align is also necessary if you’re using a disk-cloning utility to create partitions under Vista or Win7.
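For readers who want to sanity-check alignment themselves, here’s a minimal sketch (not WD’s tool) of the arithmetic involved: a partition is aligned when its starting LBA, as reported by a utility such as fdisk or diskpart, is a multiple of eight 512-byte logical sectors. The example LBAs are illustrative.

```python
# Minimal alignment check for a 512-byte-emulated Advanced Format drive.
# A partition is 4KB-aligned when its starting LBA (counted in 512-byte
# logical sectors) is divisible by 8, since 8 x 512 bytes = 4,096 bytes.

LOGICAL_SECTOR = 512                                        # bytes per logical sector
PHYSICAL_SECTOR = 4096                                      # bytes per physical sector
SECTORS_PER_PHYSICAL = PHYSICAL_SECTOR // LOGICAL_SECTOR    # = 8

def is_aligned(start_lba: int) -> bool:
    """True if the partition starts on a 4KB physical-sector boundary."""
    return start_lba % SECTORS_PER_PHYSICAL == 0

# XP's classic starting offset of sector 63 is misaligned; the 1MB (sector
# 2048) offset used by Vista and Windows 7 is aligned.
for lba in (63, 2048):
    print(lba, "aligned" if is_aligned(lba) else "misaligned")
```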

Although Western Digital’s Advanced Format implementation uses the drive’s firmware to translate requests, users won’t be able to add the feature to existing drives with a firmware upgrade. The drive’s platters must be prepared for Advanced Format at the factory.

Western Digital is first rolling out Advanced Format in its Caviar Green line. 500GB drives featuring the new formatting scheme are scheduled to start shipping this week and should be followed quickly by higher capacity points. You’ll be able to identify an Advanced-Format-compatible drive by its model number, WD10EARS, or by stickers on the drive and its packaging. These new models will also feature larger 64MB caches (previous Greens topped out at 32MB), although Western Digital doesn’t expect their street prices to be any higher.

Comments closed
    • blubje
    • 10 years ago

    Back in 1992, I was using a floppy disk formatter that would format normal 1.44MB floppy disks at 1.48MB (a 2% boost), 1.6MB (11%), and, if you were lucky (firmware dependent), 1.92MB (33%) capacity, doing roughly the same thing.

    The larger capacities only required a driver addition (a 1KB TSR for the stranger ones). The 1.48MB format didn’t even need a special driver.

    In other words, WD is choosing to make you buy another hard drive instead of releasing their Advanced Format software. Big surprise there, they wanna make money =P. I suspect their newer “Advanced Format” drives will cost roughly the same as their corresponding non-AF counterparts.

      • UberGerbil
      • 10 years ago

      Yeah, because floppy drives work exactly like hard drives. If it were possible to do this via a driver change, people would have been doing it on Linux for some time now.

    • beetlebud
    • 10 years ago

    What happens when you try to ghost this drive to a new one?

    • Anonymous Hamster
    • 10 years ago

    Some people don’t seem to be clear on exactly what’s going on here, so let me give a shot at explaining.

    Let’s imagine you’re the drive controller and you’ve been given an instruction to write a block of data to a given location on the drive. We’re going to skip over the head seeking to the correct track/cylinder and assume that has already been done. Now the controller has to wait for the correct block to rotate under the head.

    In order for the drive to write the block at the correct location on the platter, it first has to have the disk head in read mode as it searches for the start of the block. Each block really contains several parts: a sync header, the data itself, the ECC (error correction code), and a gap. As the sync header goes under the head, the drive controller identifies which block it is. Once the correct block has been identified, the drive switches over to write mode, then writes out the data and the ECC.

    Because of several sources of variability (motor spin speed, thermal expansion, etc.), the actual physical location where the block was written to may move around slightly each time that block is written. Because of that, the gap area is necessary so that when one block is written, there is no chance that it will overwrite the sync header for the next one.

    By making the blocks bigger, they reduce the number of sync headers and gaps that are needed, allowing more space for actual useful data.

    The drive controller CPU can make this change transparent to the operating system. Reads are unaffected, since the drive always reads extra data into its cache anyway. It’s writes that may require changes.

    If only a few 512-byte blocks within a 4K block require writing, then the drive must first read the whole 4K block into its buffer, change the 512-byte blocks in the buffer, then write out the whole 4K block again. This is a read-modify-write cycle rather than just a pure write, and it can slow things down. That’s why it’s better if the OS also uses the larger block sizes.

    As far as changing existing drives to use the larger block size, I imagine that it’s possible, but it would require the manufacturer to develop very low-level reformat software that would run on the drive controller CPU. This kind of low-level stuff is usually just done at the factory. They’d much rather have it done under carefully controlled conditions at the factory than worry about all the possible ways it might go wrong in a user’s home.
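To make the read-modify-write point above concrete, here is a toy sketch of 512-byte emulation on a 4KB-sector drive. The class and method names are invented for illustration, and real firmware does vastly more (caching, remapping, ECC), but the control flow is the part that matters.

```python
# Toy model of 512e emulation: 512-byte logical writes on 4KB physical sectors.
PHYSICAL = 4096
LOGICAL = 512
RATIO = PHYSICAL // LOGICAL          # 8 logical sectors per physical sector

class ToyDrive:
    def __init__(self, physical_sectors: int):
        self.media = [bytearray(PHYSICAL) for _ in range(physical_sectors)]

    def write_logical(self, lba: int, data: bytes):
        """A sub-sector write forces a read-modify-write cycle."""
        assert len(data) == LOGICAL
        phys, offset = divmod(lba, RATIO)
        buf = bytearray(self.media[phys])                    # read the whole 4KB sector
        buf[offset * LOGICAL:(offset + 1) * LOGICAL] = data  # modify one 512B slice
        self.media[phys] = buf                               # write the whole 4KB back

    def write_physical(self, phys: int, data: bytes):
        """An aligned, full 4KB write needs no preliminary read."""
        assert len(data) == PHYSICAL
        self.media[phys] = bytearray(data)

drive = ToyDrive(physical_sectors=4)
drive.write_logical(9, b"\xaa" * LOGICAL)   # lands in physical sector 1, offset 1
```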

    • Jordan4u
    • 10 years ago

    Manufacturer Western Digital has introduced a new format for the data blocks of its Caviar Green hard drives. The data block size increases from 512 bytes to 4 kilobytes, which should reduce the amount of overhead. I have gone through this at http://www.techarena.in/news/21805-western-digital-brings-advanced-format-data-blocks-caviar-green-hard.htm

    • mzap
    • 10 years ago

    How about if I want to use a few of these 4KB-sector drives in a RAID array? How do I align them? I.e., how do I make the RAID controller align them correctly?

    • indeego
    • 10 years ago

    WD should know better than to use a JPG for that graphic <.< 😉

    • dustyjamessutton
    • 10 years ago

    I know my RAID 0 setup has block sizes. I have a choice of 64KB or 128KB, and I use 64 because it uses less space. No matter what your file size is, if it’s below 64KB, it will always take up at least 64KB. Same for the 128 setting: even if the file is 1KB, it uses 128KB. At least this is what I’ve noticed. I’m assuming these settings are similar to the new Advanced Format.

      • indeego
      • 10 years ago

      -[

      • Meadows
      • 10 years ago

      No, Advanced Format is hardware-level.

      • ew
      • 10 years ago

      That is only true if you also set the block size of your file system to 64kb. If you didn’t then your file system is likely using 4kb block size.

      • just brew it!
      • 10 years ago

      You’re confusing your RAID stripe size with disk sector and/or file system allocation unit size. They are all different things. Setting your allocation unit size equal to the RAID stripe size is both unnecessary, and very wasteful of disk space (unless the average size of all of the files on your system is very large).

    • jcw122
    • 10 years ago

    Couldn’t you already do this yourself when you format a hard drive? If this is the same thing, I’ve known about this for years.

      • derFunkenstein
      • 10 years ago

      no, you’re confusing clusters with sectors.

      • just brew it!
      • 10 years ago

      No, formatting the drive only allows you to change the size of the allocation units used by your file system. Formatting your drive with 4K allocation units implicitly maps each allocation unit to 8 physical disk sectors, when using a traditional hard drive (with 512 byte sectors).

      What WD is doing is changing the underlying physical sector size used on the platters, which is a different thing.

    • jcw122
    • 10 years ago

    duplicate…way to have a delete feature.

    • dustyjamessutton
    • 10 years ago

    Unless Linux already supports this, there will be an update for Linux at most a week after the drive is released. But that’s just a vague guess.

    • Mr Bill
    • 10 years ago

    Interesting, I wonder how this might apply to SSD drives.

    • iatacs19
    • 10 years ago

    If Windows Vista and 7 natively support this new 4KB sector size, why is the firmware translation needed? Would the drive not use this feature when an OS that supports the new format is detected?

      • NeronetFi
      • 10 years ago

      l[

      • just brew it!
      • 10 years ago

      No, we are talking about the low-level format of the platters, which the OS does not have control over. The low-level format is determined by the firmware in the drive.

    • eloj
    • 10 years ago

    Hope there’s a jumper to turn off that ugly “MS Windows” firmware hack.

      • UberGerbil
      • 10 years ago

      …..what?

        • ssidbroadcast
        • 10 years ago

        I am totally LMAO right now.

          • OneArmedScissor
          • 10 years ago

          Hahahahahahahaha I keep trying to read that in different ways and I laugh every time.

        • eloj
        • 10 years ago

        Let me rephrase that in the vain hope that even you will understand me: “I would be happy were there a way to disable that ugly firmware hack which I won’t be needing on my linux servers”

        HTH. HAND.

          • TO11MTM
          • 10 years ago

          Uh, that doesn’t help me. The firmware changes the low level formatting of the drive, which has nothing to do with Windows, *nix, *BSD, or even DOS.

          The Utility for XP is a user run tool…. So I still don’t get it?

          • just brew it!
          • 10 years ago

          It’s not a firmware hack, it’s a way of forcing Windows XP to align its partitions on 4K boundaries so that partitions are guaranteed to start on a 4K physical sector boundary.

          • crazybus
          • 10 years ago

          I don’t think it’s a Windows issue. Hasn’t Windows supported 4k sectors since Vista? It’s more like every other device along the chain that is hardcoded to 512 bytes.

            • just brew it!
            • 10 years ago

            Edit: Nevermind, I had a brain fart.

    • yuhong
    • 10 years ago

    q[

    • crazybus
    • 10 years ago

    I wouldn’t expect a performance penalty unless you’re using a partition cluster size smaller than the sector size, which would be stupid and is impossible as far as Windows is concerned.

    edit: reply fail.

    • UberGerbil
    • 10 years ago

    BTW, if you want an analogy, you can think of this as “jumbo frames” for HDs. The principle — making the data payload larger to reduce the percentage of overhead due to per-unit bookkeeping — is pretty much the same.
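To put rough numbers on that analogy: the exact sync/DAM, ECC, and gap sizes aren't published, so the per-sector overhead figures below are assumptions picked purely to illustrate how trimming per-sector bookkeeping can land in the 7-11% range WD quotes.

```python
# Back-of-envelope format efficiency, legacy 512B sectors vs. Advanced Format.
# The overhead byte counts are illustrative assumptions, not WD's real figures.
DATA = 512                 # user data per legacy sector
OVERHEAD_512 = 65          # assumed sync/DAM + ECC + gap bytes per 512B sector
OVERHEAD_4K = 100          # assumed overhead per 4KB sector (larger ECC word)

legacy = (8 * DATA) / (8 * (DATA + OVERHEAD_512))   # eight small sectors
advanced = (8 * DATA) / (8 * DATA + OVERHEAD_4K)    # one big sector, same payload

print(f"legacy efficiency:   {legacy:.1%}")
print(f"advanced efficiency: {advanced:.1%}")
print(f"capacity gain:       {advanced / legacy - 1:.1%}")   # roughly 10%
```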

    • Nuclear
    • 10 years ago

    Most likely, 4KB was chosen because it’s the same as most SANs (HP EVA and NetApp).
    /edit
    Oh, and I forgot: NTFS’s default block size is 4KB.

    • l33t-g4m3r
    • 10 years ago

    Do we really need a new drive?
    Couldn’t you just low-level format an old drive to do this, provided there is software capable of it?

      • UberGerbil
      • 10 years ago

      No. This is the translation between the platters and the controller electronics within the drive. The software running on your CPU, no matter how low-level, doesn’t see any of this.

    • bdwilcox
    • 10 years ago

    Polishing brass on the Titanic.

      • Kurotetsu
      • 10 years ago

      You can say that when good SSDs don’t cost a paycheck to buy.

        • bdwilcox
        • 10 years ago

        Why bother changing it now, with all its introduced changes, complexity, and bugs when mass market SSD adoption is right around the corner? This might have been viable three or four years ago, but now? It’s a little too little, a little too late.

        And it’s not like the old model was choking the life out of hard drives. It was eating up a little more room for overhead. With drives as big as they are does that matter enough to warrant a systemic overhaul? They should just stick with the existing model until SSDs are mainstream and then jettison the HD model completely.

        Unless, of course, this is setting the stage for a change they’d like to make for SSDs as well.

          • UberGerbil
          • 10 years ago

          Hint: typical NAND page size is 4KiB (block erase size is usually 512KiB)

          • OneArmedScissor
          • 10 years ago

          It wasn’t viable 3 or 4 years ago, because hard drive sizes were a fraction of what they are now.

            And it will matter even more in the future. This adds capacity. HDDs are going to beat SSDs on price per GB until there’s some radical change in transistor technology. Shrink them all you want, but they will hit a physical limit, and HDDs will continue adding more and more gigabytes every time their density increases.

            • UberGerbil
            • 10 years ago

            Except that /[

            • OneArmedScissor
            • 10 years ago

            But if they have a home file server, then that just means the HDD is moving there, and drives will have an even greater need for high capacity, as the total amount of information in a household must be consolidated.

            I don’t think it’s very likely that many people will have a home file server, anyways, even in 2013. That’s three years away. I don’t believe I know a single person who is aware of the entire concept, much less someone who actually uses one.

            While *[

            • bdwilcox
            • 10 years ago

            No, bdwilcox isn’t making it seem as if HDDs are already done for and should be given up on. bdwilcox is saying that for the time that HDDs will still be viable, a radical, problem-inducing change isn’t worth the headache; the cost-to-benefit ratio for this one is pretty slim. Instead, stick with the standard model with its near-universal support, wait for SSDs to become the standard, and then get rid of mechanical hard drives for good. Fewer problems, fewer headaches, more compatibility, and easier support. The cost is slightly higher overhead in disk usage. With drive capacities at the sizes they are and drive prices as low as they are, it’s not much of a trade-off.

            If hard drives had 10-20 years in usable life, sure, start the transition. But hard drives probably have 2-4 years before they’re replaced by SSDs so why cause chaos now?

            • UberGerbil
            • 10 years ago

            What chaos are you anticipating?

            • bdwilcox
            • 10 years ago

            BIOS incompatibility, motherboard incompatibility, OS incompatibility, disk utility incompatibility, corrupted drives from using the wrong settings/BIOS/motherboard/utility. Tons of technical articles that need to be revised and updated. Tons of technical equipment and patches that will need to be validated. And on top of all that, the added support complexity. No thanks.

            • yuhong
            • 10 years ago

            On the other hand, to go beyond 2TiB with 512-byte sectors you need to move to GPT with similar classes of issues.

            • OneArmedScissor
            • 10 years ago

            There’s just no way they’re going away in 2-4 years. There’s going to be 2-4 more years’ worth of information to store in that time, and the SSDs won’t even have reached price per GB ratios of today’s HDDs!

            1.5TB drives are $110. On the low end of things, single platter 500GB drives have been $50 for about a full year now. 3.5″ platter density is a very short time away from going up another step to about 640GB per platter.

            Right now, it’s about $300 for a 120GB SSD, and you’re only allowing enough time for cutting transistors down to 1/4.

            Think of what the internet will be like in 2-4 years. Server farms will have to be enormous. Without this change, that would be a ton of wasted money on buying additional HDDs.

            • turrican
            • 10 years ago

            /[

            • Shining Arcanine
            • 10 years ago

            They will either cease production of that size or require such purchases to be done in bulk.

          • Welch
          • 10 years ago

          HDDs have one thing going for them right now: capacity. In order to make that selling point even more appealing, they are working on an issue we often see people cry about: “WHAT… IT SAID 500GB!! I DON’T GET ALL 500!!” Due to this type of overhead and the file system, you lose a really large amount of your drive. When you consider that a 2TB drive is something like 1.8TB after a format, you’re losing a good 200GB of space! That is a VERY large amount of space to lose, and some of us still run our entire system on drives not much larger than 200GB before formatting. When you consider a 7-11% increase in capacity on a larger drive (and they are only getting larger), you’re talking about some serious space. This will matter most to people running storage servers, home file servers, and large companies that house lots of data.

          Take Google, for instance. Think of how much space they have in total. From an article back in 2006 (so you know they’re probably double this by now, hehe), http://googlified.com/how-much-data-does-google-store/, Google’s search crawler uses 850TB. Let’s assume the LOWEST gain in capacity from this new change to mechanical drives, 7%… That is 59.5TB of data saved. This is a crude number, since the 7-11% is based on an INCREASE of existing space after a file system. Solid-state drives right now cannot, and will not for many years, begin to compete on a raw storage level for these types of applications. HDDs have lots of life in them.

          • Shining Arcanine
          • 10 years ago

          This change is because of SSD adoption, as it allows the platters to store more data in less space. If it were not for SSDs, this would not have been done.

            • just brew it!
            • 10 years ago

            I seriously doubt that SSDs were the primary motivator for this. Most of the increased efficiency is due to the elimination of 7/8ths of the inter-sector gaps. SSDs don’t need (or have) inter-sector gaps; the gaps exist to give the read/write electronics in a mechanical hard drive enough time to switch on and off, and sync up between sectors. SSDs /[

            • UberGerbil
            • 10 years ago

            Well, maybe he meant that the HD guys are doing this to allow them to eke out more space and thus better compete with SSDs on a $/GB basis for a little longer.

            But the reality is that this has been in the works for quite a while, and for reasons independent of SSD adoption rates: the looming 2TB MBR barrier with 512B sectors, and the inefficiency of 512B sectors in general. Those factors would be enough to push the HD industry into this change even if SSDs didn’t exist.

      • Farting Bob
      • 10 years ago

      I fully expect to use large mechanical HDDs for years to come. Maybe an SSD will find a place as my system drive once prices of the quality SSDs drop to $1/GB.
      But I have a 6TB file server; HDDs will have a home there for a good long time.

      • CasbahBoy
      • 10 years ago

      SSDs are great, but they’re not good for everything yet and will not be for some time still.

      • UberGerbil
      • 10 years ago

      HDs will have a life in data centers for a long time to come, and efficiency for installations that buy drives in the dozens or hundreds is a pretty significant factor.

      • StashTheVampede
      • 10 years ago

      File systems are really where you should point your finger. Most file systems (today) aren’t really designed with SSDs in mind (sure, they work, but they weren’t built from the ground up with low latency in mind).

      Spinning platters will have several years of life left, especially since SSDs haven’t yet hit the sizes of spinning platters at a cost that users like (sub-$200, imho).

        • UberGerbil
        • 10 years ago

        I don’t think building a file system “from the ground up with low latency in mind” would make as much difference as you seem to think. The assumption that write-speed = read speed (especially for small writes) seems to be a bigger problem wrt file system design decisions on SSDs.

    • Spotpuff
    • 10 years ago

    What exactly does a larger cache do for hard drives? I’m a bit confused as to why drives have such large caches. Is it for the pagefile?

      • Vaughn
      • 10 years ago

      I’ve been wondering that myself. Pretty soon we will see drives with 128MB caches, but what is the point?

      • UberGerbil
      • 10 years ago

      In the past (when they were smaller and the system was less adept at caching things itself) it was metadata, mostly. These days it also makes NCQ work better. (We hit a point of diminishing returns a while ago, but you do want your cache to grow somewhat proportionally to the storage it is backing).

      • Blazex
      • 10 years ago

      http://en.wikipedia.org/wiki/Disk_buffer should help with your question

        • flashbacck
        • 10 years ago

        I think spotpuff was commenting that the performance increase with cache sizes beyond 16MB has so far been pretty dismal, not that he didn’t know what a disk buffer was.

          • Spotpuff
          • 10 years ago

          Right.

          I know what it is, it’s just… I mean, if the buffer was 1GB, what would the point be? If your data isn’t in system RAM, then the limiting factor is still the drive reading data off the platter, isn’t it?

      • moritzgedig
      • 10 years ago

      For this drive it helps because the controller can store incompletely written (<4KB) clusters for later optimized reading and then (re)writing.

    • ClickClick5
    • 10 years ago

    Western Digital (WD) is /[

    • Sahrin
    • 10 years ago

    Why’d they stop at 4K? I know that there’s also the factor of losing usable space because the block size is larger, but I’d imagine it’s pretty straightforward to build an extremely conservative usage model which tells you where the ideal is (hey, it might be 4K!). My bet is that the average file size of today’s users is probably in the tens of MB (at least); go three or four standard deviations out from that and set your block size there, saving you a whole lot more capacity!

      • UberGerbil
      • 10 years ago

      There’s always a granularity trade-off between large chunks (which tend to be more efficient for IO) and small chunks (which have less wasted “slack” for things that aren’t large enough to fit into a chunk…which includes the last chunk of any file larger than 1 chunk). Have a look at your cookies folder, for example: it’s easy to have 10,000 cookies or more, and they are each on the order of a few hundred bytes. On a volume with 512-byte sectors, they’re each wasting perhaps a hundred bytes or so, so even 10K of them is only wasting 1MB. On a volume with 4K sectors, those same 10K of cookies are wasting something like 27MB. If the sector size was larger, the wasted slack would be proportionally larger again. And as you make the sector size larger and larger, more and more files fit into a single sector with slack left over, so over the entire disk the problem gets even worse. You have a lot of small files on your disk, even if you don’t notice them, so it adds up. (Have a look at your Temporary Internet Files folder for another example).
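A quick sketch of that slack arithmetic; the file count and average cookie size below are rough assumptions in the same spirit as the comment, so the output is ballpark rather than an exact match for its 1MB/27MB figures.

```python
# Expected slack (wasted tail space) for many small files at two sector sizes.
def slack(file_size: int, alloc_unit: int) -> int:
    """Bytes wasted in the file's last allocation unit."""
    remainder = file_size % alloc_unit
    return 0 if remainder == 0 else alloc_unit - remainder

files = 10_000          # e.g. a big cookies folder
avg_size = 400          # bytes; "a few hundred bytes" per cookie (assumed)

for unit in (512, 4096):
    wasted = files * slack(avg_size, unit)
    print(f"{unit}B sectors: ~{wasted / 2**20:.1f} MB of slack")
```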

      Moreover, on x86 the page size is 4KiB. Regardless of how big a program is on disk, its code gets paged in and out of the system in chunks with 4KiB granularity. Large files are often memory-mapped for performance, which uses paging, and data resources in programs (vs raw data files) are paged as well. So 4KiB is kind of a “special” size (which is why NTFS defaults to that as its cluster size; Advanced Formatting just makes 1 sector = 1 cluster)

      Aside/tangent: Surprisingly (considering how long ago it was decided upon) 4KiB is still close to the optimal page size for desktop usage. There’s some evidence that for server loads the sweet spot is higher, which is why Itanium uses an 8KiB page size. But I saw an Intel presentation where they simulated different page sizes for x86 and found that the benefit to going up to 8KiB, while real in some cases, was very small (and certainly not worth the headaches and enormous software revisions that would be necessary). And the special cases where /[

        • shank15217
        • 10 years ago

        Actually, x86 page sizes can be as big as 1GB (Opterons support it) or 2MB (standard). They are called huge pages, and are used for certain database optimization steps in large-memory systems.

          • UberGerbil
          • 10 years ago

          Yeah, I’m well aware of that, and in fact I mentioned it at the end (as an aside, because it’s pretty much irrelevant to this discussion). Windows 7 uses large pages for the kernel, btw, as do some other OSes.

      • emorgoch
      • 10 years ago

      l[

      • derFunkenstein
      • 10 years ago

      yeah, OS files are almost all smaller than “10s of MB” and the files I create are almost all 1MB or under, unless we’re talking about recorded audio. That’d be a relatively small number of files.

      Besides, 4KB lines up pretty well with the default cluster size in an NTFS partition, and it would make sense not to have your clusters be smaller than your sectors (which is what you’d have if you created an NTFS partition with 4KB clusters on a hard drive with sectors of 8KB or larger).

      • ew
      • 10 years ago

      Average file size isn’t as important as the number of files. You always (usually) get an average of 1/2 cluster size of overhead per file.

      Of course, the hard disk doesn’t know anything about files, so it doesn’t matter. File storage is the task of the file system (another thing the hard disk has no knowledge of).

      • eofpi
      • 10 years ago

      I suspect the real reason is extending MBR compatibility. MBRs can’t fully partition a disk larger than 2TB if it has 512-byte sectors. With 4KB sectors, this limit goes up to 16TB.

      They’ll have the same problem in a few years, but by then the number of machines running XP will be way down, so GPT can be used instead.
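The arithmetic behind those limits, for anyone curious: an MBR partition entry stores the starting LBA and sector count in 32-bit fields, so the addressable range is 2^32 sectors times the sector size.

```python
# MBR's 32-bit LBA fields cap addressable capacity at 2**32 sectors.
for sector_size in (512, 4096):
    limit = 2**32 * sector_size
    print(f"{sector_size}-byte sectors: {limit / 2**40:.0f} TiB limit")
# 512-byte sectors  -> 2 TiB
# 4096-byte sectors -> 16 TiB
```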

        • yuhong
        • 10 years ago

        AFAIK, boosting the sector size is already being used by RAID cards to work around the 2TB limit. But it also creates compatibility issues.

      • Meadows
      • 10 years ago

      4 KiB is the ideal, Windows NTFS hard drive formatting defaults to it, and NTFS compression doesn’t support anything higher (note: this depends on how you format, NOT what the drive itself uses).

      It’s the current “sweet spot”, unless you set NTFS to use bigger allocation units to try and combat fragmentation.

      • NeronetFi
      • 10 years ago

      I just searched my C: drive for files that are at least 4KB and found 53,537 files 🙂

    • elty
    • 10 years ago

    So shouldn’t the new 500GB become 550GB?

      • cygnus1
      • 10 years ago

      No, because now when you format it, it will actually format to 500GB.

        • Flying Fox
        • 10 years ago

        Still stuck in the GB vs GiB land?

      • ImSpartacus
      • 10 years ago

      I’m thinking that it’s a 500GB drive either way.

      With smaller sectors and more overhead, maybe you lose more drive space when you format? With the larger sectors the overall capacity will be closer to 500GB?

      But that’s speculation. Anybody know for sure?

    • Helmore
    • 10 years ago

    Does it have any effect on performance?

      • Jive
      • 10 years ago

      “according to Western Digital, there’s no loss in performance as long as partitions are properly aligned.”

        • 5150
        • 10 years ago

        So don’t use XP.

      • khands
      • 10 years ago

      l[

        • Stargazer
        • 10 years ago

        I’m getting the feeling that they’re only talking about the (lack of a) performance penalty for the firmware translations here.

        If that’s the case I’d actually expect a general *increase* in performance since the lowered overhead will result in more data being accessed when covering a given section of the HD platter.

          • Trymor
          • 10 years ago

          That’s what I was thinking. BUT, you would think WD would be pimping the performance increase if that were the case :-/

            • derFunkenstein
            • 10 years ago

            I’d not expect them to really talk about performance at all in a “green” product. Wait for the Caviar Black to get updated, and if there’s no talk of performance increases then we can say for sure “no”.

            • Trymor
            • 10 years ago

            It would make sense to use the green line for a trial of new tech. If performance increases are found, they can roll it out on the Black series after a little tweaking, as you mention. If not, no egg on face…heh.

            • Stargazer
            • 10 years ago

            Yeah, I found that a bit weird too. However, the alternative would seem to be that there’s some bottleneck that would prevent the drives from utilizing the maximum spin rate (with less overhead you’ll cover more data at a given spin rate, and if the effective data rate remains the same with reduced overhead, that would seem to imply a lower utilized spin rate(*)).

            Further, the data rate is higher on the outer tracks, so if there’s a bottleneck (from translation, increased ECC, or whatever), I’d still expect the data rate to increase on the *inner* tracks, unless the bottleneck somehow also changes with track location.

            Speaking of ECC, the Western Digital Information Sheet also mentions improved ECC.

            “Advanced Format technology improves burst error correction by 50% through the use of larger ECC (error correction code) code word.”

            Edit: (*) At least for sustained transfers
