A-Data serves up extra capacity for SandForce SSDs

If you’ve been paying attention to the SSD market, you’ll know that drives based on SandForce controllers tend to offer less storage capacity than the competition. They typically stick to 60, 120, 240, and 480GB capacities rather than the 64, 128, 256, and 512GB sizes favored by rivals. Now, A-Data has announced a series of “expanded capacity” SSDs based on tweaked SandForce firmware. The new firmware offers additional storage by reducing the amount of NAND capacity set aside for use exclusively by the controller. According to LSI, which now owns SandForce, this move also eliminates the RAID-like RAISE redundancy scheme that protects SandForce-based SSDs from data loss due to unexpected flash failures.
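Those capacity points map neatly onto overprovisioning percentages under the convention SandForce drives are usually quoted with, where spare flash is expressed as a fraction of the user-visible capacity. A rough sketch of the arithmetic, using the article's figures (the function name is ours, not from any vendor tool):

```python
# Overprovisioning as spare flash divided by user-visible capacity,
# the convention SandForce drives are usually quoted under.
def overprovisioning(flash_gb, usable_gb):
    return (flash_gb - usable_gb) / usable_gb

print(round(overprovisioning(128, 100) * 100))  # 28 -> enterprise-style SandForce
print(round(overprovisioning(128, 120) * 100))  # 7  -> typical consumer SandForce
print(round(overprovisioning(128, 128) * 100))  # 0  -> A-Data's expanded-capacity drives
```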

Cutting an SSD’s spare area can affect performance, but A-Data’s specifications suggest the new drives will be competitive with current offerings. The flagship XPG SX900 can purportedly crank out 85,000 4KB random-write IOps and push sequential read and write speeds as high as 550 and 530MB/s, respectively. One rung down the ladder, the Premier Pro SP900 offers nearly identical speed ratings; the only difference is its 520MB/s sequential write rate, a drop of just 10MB/s. Both of those models come with 6Gbps Serial ATA connectivity, and A-Data also has a cheaper Premier SP800 that’s limited to 3Gbps SATA speeds. The SP800 is restricted to 32 and 64GB capacities and offers about half the performance of its 6Gbps siblings.

The XPG reportedly uses synchronous NAND, while the Premier Pro is equipped with asynchronous memory. We’ve observed substantial differences in real-world performance between other SandForce SSDs with similar memory configurations, making us a little dubious that A-Data’s specs represent the true performance of the new drives—and their expanded-capacity firmware. When the last generation of SandForce SSDs switched from enterprise-style 28% overprovisioning to the 7% OP typical of consumer drives, performance dropped with both sequential and random writes.

A good argument can be made for giving up a little performance in exchange for more capacity. SSDs are plenty fast, but they’re almost always too small for all the data, games, and applications one might want to store. We’ll have to put these higher-capacity drives to the test to get a better sense of the trade-offs involved.

Comments closed
    • Bensam123
    • 8 years ago

    Before people run around and go ‘oh noes, not RAISE, what will we do when our flash pages go bad?!?!?’ I will point out that through personal experience and what I’ve read from people online, I’ve NEVER heard of an SSD dying gracefully from pages giving up. You just wake up one day, turn on your computer, and you get a fizzle and kapoof. SSDs give you little to no warning, especially compared to their brethren with spinners.

    This is coming from someone who uses absolutes very minimally. I’ve honestly never heard of an SSD dying from reaching max page writes.

      • demani
      • 8 years ago

      Wait, using absolutes minimally from the guy who claimed no bandwidth limits on DL-DVI? 😉

      The thing is, it’s not really an issue whether or not you have personally heard of it. EVERY manufacturer says that it will happen eventually. They aren’t just saying that to cover their asses; there is a legitimate reason they say that. You can’t stop physics (you can only hope to contain it). And most haven’t even been in use long enough to hit the limit. Give it another two years and let’s see what happens (though inexpensive upgrades may push out the oldest and smallest drives beforehand).

      Also, my understanding is that SSDs don’t die per se (i.e. not usable), they just move into read-only mode.

      To article:
      Well, if SandForce weren’t so balky, I’d say that bit of redundancy is a shame to lose. But given that the practical result of everything they tried to do was a less-than-stellar track record, maybe crossing one of the tricks off the list may help them make things work better going forward.

        • Bensam123
        • 8 years ago

        Aye… and that coming from someone who still can’t find the limits on the spec…

        Not sure what physics has to do with anything in this case (although we all love to relate everything to physics). While SSDs might seem like the new thing, they really have been out for quite some time now. Plenty of time for high-transaction file servers to run into the write-cycle limits.

        Yes, they are supposed to move into a read-only mode. My whole point was they never reach that point. There are reliability problems with SSDs in general, and they don’t move gracefully into that mode. They simply pop and fizzle one day, well before they ever reach their page limits.

    • Compton
    • 8 years ago

    I know I would rather give up a little capacity in exchange for better performance and longevity. Overprovisioning gives both. I don’t have a problem with SF ditching RAISE as I don’t think it works very well in practice.

    Anyway, Intel has given it the heave-ho as well in the 520, so this was bound to come down the pike. You get 32GB more from a 512GB drive than from a 480GB one, and you can use that bounty to OP the drive yourself.

      • ImSpartacus
      • 8 years ago

      Since you can always overprovision it yourself, there’s no issue for the consumer market.

      • willmore
      • 8 years ago

      I was going to write a long post that most people would TL;DR, but it comes down to what you just said.

      RAISE and overprovisioning are different things. If someone ran the numbers and thinks that RAISE won’t pay off vs. the cost of doing it, then that’s great (Intel doing it is a pretty sound indicator). If it were just fly-by-night A-Data, I’d worry.

      For me? I’m happy with my 120G SF drive with an extra 8G left unallocated to provide both RAISE and overprovisioning. I like my data and my availability of it.

      • BobbinThreadbare
      • 8 years ago

      With SSDs I would give up performance for capacity, but I agree longevity is paramount.

    • flip-mode
    • 8 years ago

    [quote]SSDs are plenty fast, but they're almost always too small for all the data, games, and applications one might want to store.[/quote]Yes, but increasing the size of each tier by a few gigabytes isn't really fixing that at all. The real fix is time - another couple of years for advancements here and there to bring 240 and 480GB drives down to "democratic prices".

      • cegras
      • 8 years ago

      Once I can buy 512GB or a terabyte for maybe … $200-300, I’ll consider jumping entirely to SSD.

      • ImSpartacus
      • 8 years ago

      No, the fix is TLC NAND. OCZ claims that it is 33% cheaper than MLC NAND.

      So a 256GB drive like the M4 could be had for less than $200. At that price, you could afford 27% overprovisioning (serious overkill) before you get below 1GB/$.

      Eventually MLC will hit similar price points, but TLC should help bring prices down quicker.

        • Visigoth
        • 8 years ago

        TLC has its own set of long-term reliability problems, which I suppose none of us wants to see in an already trouble-prone SandForce SSD…

        It’s a good thing that Intel’s exclusive firmware appears to have stabilized the SandForce controller once and for all.

          • ImSpartacus
          • 8 years ago

          Not at 20% overprovisioning, it doesn’t!

          Don’t forget that current MLC consumer drives use like 7% overprovisioning (and the A-Data drives use 0%!). But at such a cheap price, you could afford to do a lot better than 7% and still get a good deal.

          Remember, no one is locking you into a certain level of overprovisioning. Just partition your drive with however much you want.

    • indeego
    • 8 years ago

    A good argument can be made for giving up a little performance in exchange for more [b]capacity[/b]

      • Dissonance
      • 8 years ago

      Fixed. Thanks.
