New Fusion-io SSDs can hit 1.4GB/s write speeds

While Intel has made headlines with its X25-E Serial ATA solid-state drives, Fusion-io has been tackling the enterprise solid-state storage concept in an altogether different way. In company tradition, the newly announced ioDrive Duo SSDs plug into PCI Express interfaces instead of Serial ATA—but they’re considerably faster than Fusion-io’s previous-gen products.

How much faster? Fusion-io quotes top sustained bandwidth of 1500MB/s for reads and 1400MB/s for writes, with a respective 185,000 and 167,000 IOPS. (That’s with the 320GB model. Others have notably lower write performance, although they can still purportedly reach more than a gig per second.)
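
As a quick back-of-envelope check on how those two sets of numbers relate (an illustrative Python sketch, not anything Fusion-io publishes; the vendor likely measures each peak under a different workload), dividing sustained bandwidth by IOPS gives the average transfer size at which both peaks could hold at once:

# Implied average transfer size if bandwidth and IOPS peaked together.
# Figures are the quoted 320GB-model numbers; the uniform workload is an assumption.
read_bw, read_iops = 1500e6, 185_000     # 1500 MB/s, 185,000 IOPS
write_bw, write_iops = 1400e6, 167_000   # 1400 MB/s, 167,000 IOPS

print(read_bw / read_iops / 1024)    # ~7.9 KB per read
print(write_bw / write_iops / 1024)  # ~8.2 KB per write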

As far as we can tell, the firm achieves these speeds by sticking two physical SSD modules on a single PCIe card and using them together, likely in a RAID config. Fusion-io says you can set up the modules in a RAID 1 array for redundancy, although that probably induces a performance hit.

On the capacity front, Fusion-io will start offering 160GB, 320GB, and 640GB ioDrive Duo SSDs in April, and it plans to introduce a 1.28TB model in the second half of the year. The two lowest-capacity variants are based on single-level cell flash memory, so they should last longer and presumably cost more. Oh, and all models support PCI Express x8 or PCI Express 2.0 x4 interfaces. You can peruse a detailed PDF spec sheet over here.
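
Those two slot options are equivalent on paper, which is presumably why both are offered. Assuming the x8 option means first-generation PCIe (the article doesn't say), a quick sanity check with the standard per-lane rates:

# Raw PCIe link bandwidth for the two supported slot configurations.
# Per-lane rates are the standard ones: ~250 MB/s (PCIe 1.x), ~500 MB/s (2.0).
for name, lanes, per_lane in [("PCIe 1.x x8", 8, 250), ("PCIe 2.0 x4", 4, 500)]:
    print(name, lanes * per_lane, "MB/s")  # both: 2000 MB/s, above the 1.5GB/s peak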

Comments closed
    • grantmeaname
    • 11 years ago

    Update: Fusion-io announced that it will be rebadged as the ioDrive Duo GTS in light of poor sales performance.

    • jstern
    • 11 years ago

    Just yesterday I was thinking about how cool it would be if SSDs were this fast. And here we are, just one day later.

    And when I saw the picture of this SSD, disappointment hit me, because I thought I’d misread and they were talking about some graphics card.

    • dustyjamessutton
    • 11 years ago

    Holy expensive hardware batman!

    • gunmuse
    • 11 years ago

    And the $500 SCSI SSDs that already exist are not better why? Sorry, I could build a HUGE array for the price of one of these. Standard server configs that put Linux and/or /tmp on the drive will crash an SSD because of the high rewrite rate. That forced us to move to very high RAM and RAM drives for high-rewrite-rate information, solving our speed issue and putting us back onto standard drives in a RAID configuration.

    Just because someone built a Ferrari doesn’t make it the fastest car in every race. This has limited applications, and besides, the cost is insane for what it does.

      • Joshvar
      • 11 years ago

      Speed density, that’s why. This takes up a single PCIe slot; reaching the sustained reads and writes this offers would take many drives, plus a controller to manage them, which means multiple points of failure. This goes in, drivers get loaded, and you’re good to go. Simple to manage. It has its pluses, for sure. While it is expensive, some people are willing to pay for this level of performance… when a device like this saves people-hours in wait time, it’s very easy to justify. It’s also got the performance of a SAN, which is one of the major reasons for purchasing a SAN in the first place, so it definitely has a place.

    • bubba
    • 11 years ago

    Just got a call from Fusion-io in response to their Contact Sales Now! form. Nothing is bootable yet, but they are working on it. Pricing is:
    80GB – $3K
    160GB – $7.2K
    320GB – $14.4K

    Youch!

      • Peldor
      • 11 years ago

      Are you sure that’s not the previous model of their drives? The new model doesn’t appear to have an 80GB option.

      • Thanato
      • 11 years ago

      Wow, so much for the average Joe considering this product. I could buy a new Toyota Yaris for the price of 320 gigs.

        • indeego
        • 11 years ago

        But it can’t play Crysis <.<

          • pot
          • 11 years ago

          Neither can an SSD =P

    • ludi
    • 11 years ago

    Ah, the circle of technology. I’ve pulled expansion cards that looked like this out of 286/386 hardware.

    • Joshvar
    • 11 years ago

    I have actually used one of these. It’s very, very fast, but keep a few things in mind:

    1) The model we were using cost roughly the same as a fairly nice car now (it was around $30k).
    2) You won’t find these in Fry’s or at Newegg any time soon.
    3) No one wants to boot off of this. Even the most aggressive IT shops limit boots to a maintenance window once per month, so having a bootable one doesn’t make a lick of difference. They want to keep anything that could keep clients or employees waiting (such as a database) on here, in order to justify its very high cost.
    4) While it was fast, you still need some beefy CPUs (or tuning) to keep up with its random read/write capabilities, because any application that’s read- or write-limited probably won’t be once this is in place.

      • 5150
      • 11 years ago

      I can see alignment making a big difference with this.

        • Joshvar
        • 11 years ago

        Yeah, in order to get from about 50% of the expected performance to 99%, everything had to be pretty much aligned (from the 8k blocks in the database on down). I’m not a sysadmin, so I didn’t do much in that part, but there were lots of questions related to the filesystem and OS config that were asked 🙂 Very different from my predominantly SAN-based experience. I was tempted to try ASM against it as a raw device… didn’t have time to go there though.
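
        The arithmetic behind that alignment effect is simple; here is a minimal sketch, assuming a hypothetical 4 KiB flash page (the ioDrive’s real geometry isn’t public here). An 8k block that starts mid-page touches three pages instead of two, i.e. 50% more physical I/O, which is in the same ballpark as the roughly-half performance seen before alignment:

        # How many flash pages one I/O touches, assuming a 4 KiB page size.
        PAGE = 4096

        def pages_touched(offset, size, page=PAGE):
            first = offset // page
            last = (offset + size - 1) // page
            return last - first + 1

        print(pages_touched(offset=0, size=8192))    # aligned 8k block: 2 pages
        print(pages_touched(offset=512, size=8192))  # misaligned:       3 pages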

    • vikramsbox
    • 11 years ago

    It’s time that HDDs started catching up with CPUs and memory. While CPUs are getting faster and faster and highly multithreaded, HDDs are slow to catch up. To make HDDs work fast, we need to buy the more expensive models (costing as much as a decent dual-core CPU) and arrange them in RAID configs.
    I’ve a ’cuda 500GB model and it drags as soon as I put in simultaneous read/write ops! Many HDDs fail at random read/writes and multithreaded read/writes.
    The perfect HDD would cost the same as present models, have good capacity, and excel at random and multithreaded seeks. When’s that going to happen?

      • Krogoth
      • 11 years ago

      FYI, HDD = hard disk drive.

        • UberGerbil
        • 11 years ago

        Let’s just say “mass storage” and leave it at that.

        • MadManOriginal
        • 11 years ago

        If you actually read the post he was talking generically, where HDD = ‘storage.’

          • BooTs
          • 11 years ago

          If you actually read the post he was rambling about something unrelated to this news post.

            • MadManOriginal
            • 11 years ago

            No, he stated the direction he’d like to see storage move. This article is about a storage product so it’s related.

            • vikramsbox
            • 11 years ago

            Thanks, MMO, for seeing the point. Some years back, CPUs were the most expensive components of a system build. Now, they seem to have become the cheapest! Storage is the slowest component and is expensive (compared to the CPU). HDDs suck at random and multithreaded read/writes, and SSDs are not reliable (there was an article some days back that SSDs fail after a few months in server environs).
            Poor multithreaded app performance indicates that all heads move together; why not change that design so that heads move independently? We’d at least get good, fast storage till SSDs become cheaper and more reliable.
            In short: to fully utilize a $100 CPU, we have to buy two $90 HDDs or a $300 SSD, plus a $100 motherboard, to play catch-up with the CPU. It’s hilarious! But at the consumer’s cost.

            • TheTechReporter
            • 11 years ago

            I don’t want to be too anal retentive here, but while HDDs will _never_ catch up with RAM (or cache, of course), SSDs might.

            And sure, SSDs that are as fast as RAM and as cheap as HDDs are a nice pipe dream, but so is living in a solid gold mansion and flying a jetpack to work. The point is, we’re simply _not_ going to get there any time soon, if ever.

            Improving our storage technology isn’t nearly as easy as you seem to think.

            • grantmeaname
            • 11 years ago

            If SSDs ever caught up to RAM, RAM would be eliminated from the system.

            Cache hierarchy; google it.

            • ludi
            • 11 years ago

            “why not change that design so that heads move independently? We’d at least get good, fast storage till SSDs become cheaper and more reliable.”

            That already exists; it’s called a RAID array. If you understand how modern hard drives operate, then you know that any attempt to combine the hardware into one chassis, assuming existing technologies, would be enormous, hot, and have a noticeably decreased MTBF, without the option of pulling and replacing the bad disk in the event of a failure. You would also lose the option of rebuilding the lost volume on a replacement disk (assuming the comparable array had been configured for redundancy).

            SSD technology is presently where graphics technology was right about the time the original GeForce was released: the groundwork has been laid, the current products are spectacular if a bit expensive, the direction for future products is understood, and the competition is fierce. Mechanical disks will be obviated in a few short years.
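
            The MTBF point follows from standard series-system reliability arithmetic: with roughly constant failure rates, the rates of the added mechanics sum, so the combined MTBF shrinks in proportion. A minimal sketch with purely hypothetical numbers:

            # With constant failure rates, rates add: MTBF_sys = 1 / sum(1/MTBF_i).
            # The 600,000-hour figure is illustrative, not from any spec sheet.
            def system_mtbf(component_mtbfs):
                return 1.0 / sum(1.0 / m for m in component_mtbfs)

            print(system_mtbf([600_000]))      # one mechanism:   600,000 hours
            print(system_mtbf([600_000] * 4))  # four mechanisms: 150,000 hours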

            • DrDillyBar
            • 11 years ago

            We need OS-level changes to data paths now.

            • UberGerbil
            • 11 years ago

            Hard drive capacities have grown almost a thousand-fold in a decade, while getting cheaper both in absolute terms (how much you pay for a mainstream “big enough drive”) and massively cheaper in $/B. That’s much better than CPUs.

            No, speeds haven’t increased the way CPU speeds have, though again in that decade we have seen remarkable improvements: at the turn of the century, people would’ve called you crazy if you’d predicted throughput pushing past 100MB/s from mainstream 7200-rpm drives. But there are fundamental physical limitations to spinning metal platters, especially when space, cost, and power are constraints (we could levitate the disk in a vacuum and spin it at crazy speeds, for example, but you’d need a containment vessel the size of a dorm refrigerator, with power demands to match).

            Yeah, having parallel heads moving independently yadda yadda yadda — it’s obvious to you, which means it was obvious to the engineers in the field decades ago. If there was a way to do it (again, within the constraints of consumer products) they would have done it long ago. And the reality, as Ludi points out, is that just adding more disks gets you much of the way there with no extra fancy engineering required.

            The reality is that we’re hitting the limits of non-solid-state storage. Solid state has been in the game for a long time, but it could never keep up. Every time a solid state tech (bubble memory, MRAM, etc) seemed like it was poised to take over, the adoption of bulkier media types (first bitmaps, then audio, then video) ensured that hard drives with their lower $/MB won the battle. But we seem to be hitting the end of that progression: 1080p seems to be the last bulky format that is going to see widespread adoption for a while, and meanwhile Moore’s law has been helping NAND catch up. Now, NAND faces some serious challenges in the future too: it’s not at all clear Moore’s Law will continue to hold for much longer. But there are other solid state techs on the horizon, and something is going to win out. And in the meantime, the data that is really performance sensitive (which doesn’t include audio and video, which stream quite nicely off cheap, huge disks) already fits into commonly-available SSDs. They’re big enough, and as they drop 50% per year or more, they’ll soon be cheap enough.

            So complaining about hard drives, and fantasizing about ways to make them faster, is really a lot of wasted effort.
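
            That “50% per year or more” decline compounds quickly; at that rate, prices fall by 8x in three years. A one-liner to see it (the starting price is hypothetical):

            # Compounding the 50%/year price decline cited above.
            price = 10.0  # hypothetical starting $/GB
            for year in range(1, 4):
                price *= 0.5
                print(year, price)  # 5.0, 2.5, 1.25 -- an 8x drop by year 3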

            • indeego
            • 11 years ago

            “SSDs are not reliable (there was an article some days back that SSDs fail after a few months in server environs).”

            Would really like to see this article. Please link. <.<

            • UberGerbil
            • 11 years ago

            Yeah, I’d be interested in seeing that too.

            • vikramsbox
            • 11 years ago

            Guys, please get me right. I’m not complaining. What I’m saying is that the companies in the storage space adopted the tech very late. We need companies to adopt good, cheap tech. HDD tech may have improved in theory, but look at the reliability issues that we have. 7200-rpm drives today are much less reliable and more clattery than before.
            As for the articles on SSD reliability, please see Tom’s Hardware; they have added a new story on how CPU power-saving tech slows down SSDs too.
            Stories: 6 new SSDs, Jan 28, 2009.
            I hope everyone now gets that I’m not being anal retentive. We see others for what we are. I pointed out that storage companies were slow off the block, as a result of which fast platforms are expensive overall.
            Thanks.

            • indeego
            • 11 years ago

            You lost me after saying “see Tom’s Hardware”.

            • Arag0n
            • 11 years ago

            HDD performance at 7200 rpm drops with CPU power saving too… Just run some benchmarks in Windows Vista on the Balanced and Performance power plans, and you will see the difference between them is around 5-15%, depending on the test and system. This is not only happening with SSDs…

            Anyway, I think it’s pointless to dismiss reviews just because of the website that publishes them. You should critique the tests or the results they show you.

            Not long ago it was published that an AMD 9950 with DDR2 1066 at 5-5-5-15-20 gets a bandwidth measurement in SiSoft Sandra of around 7.5GB/s… Well, I can tell you that I can reach around 9-10GB/s without problems with DDR2 800 at 4-5-5-14-18… All I want to say is that everyone has their own methods and viewpoints. It’s like never listening to ads because they lie… sure they do, but they can help you figure out what your options are; then you must determine which is best for you.

    • flip-mode
    • 11 years ago

    That’s amazing. So what, can we expect the same performance from consumer drives in 5 or so years?

    • Shinare
    • 11 years ago

    Would love to be able to RAID-5 three to five of these for my database crap… that’s probably not possible, being PCIe and all.

      • Veerappan
      • 11 years ago

      I don’t see why you couldn’t use software RAID in Linux/other to do this, other than probably needing a ridiculous processor/system to keep up with it.

      • Zymergy
      • 11 years ago

      Nope, you’ll need an NF200 chip to do that! (/sarcasm)
      This is one of those items that has ZERO mention of price, because if you have to ask, you can’t afford one….

    • spiritwalker2222
    • 11 years ago

    I’d like to see pricing. It’d be nice if they sold for around $1K for the 320GB version.

      • Peldor
      • 11 years ago

      Googling suggests the…

    • kamikaziechameleon
    • 11 years ago

    price, price, price?

      • titan
      • 11 years ago

      If you have to ask, you can’t afford it.

      • 5150
      • 11 years ago

      OMG I’m going to get 2 of these and run them in RAID 0 FTW!!11! Left 4 Dead will load in like 2.4 seconds!!1eleventy!1 w00t!! Di zumbys!

    • Sargent Duck
    • 11 years ago

    Nice, very nice. Even the card itself looks pretty snazzy.

    • wingless
    • 11 years ago

    I remember Uber Gerbil posted a link back in October stating that the original Fusion-io drive was waiting for a firmware update that would make it bootable.

    http://www.dvnation.com/Fusion-IO-IODrive-SSD-Solid-State-Disk-Drive-Review.html

    WHAT HAPPENED TO THAT IDEA?!

    • Meadows
    • 11 years ago

    The funny thing is, you can’t have a “1.28” TiB SSD unless you boost capacity asymmetrically with some wear-leveling chip(s).

    If we consider their “GB” notation to stand for “GiB” (which would make sense, given powers of two), then the next step would be 1.25 TiB, or 1280 GiB. I hope they won’t combine SI with binary for the tebibyte drives, which would lead to déjà vu, invoking “1.44 MB” floppy disks.

      • Anomymous Gerbil
      • 11 years ago

      They can have any amount of GB or GiB they desire; it all depends on how much space they set aside for wear levelling, etc., and how they account for that in the advertised spec.

        • Meadows
        • 11 years ago

        That’s true, but still: what’s the point of a weird number? Laymen won’t have a clue why it’s exactly that, and people with a little know-how will think it’s fishy (or at least strange).

          • DancingWind
          • 11 years ago

          Well, I just don’t see the average Joe buying this product…

            • Meadows
            • 11 years ago

            Think about the future. As for the present (as I said), the number will surely puzzle system admins.

            • titan
            • 11 years ago

            I’m not puzzled at all.

            • derFunkenstein
            • 11 years ago

            it’s 2×640. I’m not saying it’s right, but it’s not a mystery to see where it comes from.

            • UberGerbil
            • 11 years ago

            Yes, if they call it a “1280” — which would be entirely in line with the naming of the rest of their product line — it will make perfect sense and not puzzle anybody. It would even be accurate, in that it correctly describes the product as having twice the capacity of the 640.

            The only way the issue comes up and they become targets for criticism is if they restart their numbering scheme for the Terabyte era and they start using the wrong notation (with the resulting inaccurate numbers). But unless and until an “ioDrive 1.28” comes out, you’re just complaining about hypotheticals.

            If it were up to me, I’d stick with their current notation. Having a 1280, and then a 2560, and so on, works fine for quite a while; numbering systems where most of the numbers are to the right of the decimal (1.25) make for awkward names. They can always restart their branding when the 10TiB era arrives.

            • TheEmrys
            • 11 years ago

            When does the average Joe ever buy anything for a PCI Express slot? Average Joes don’t upgrade graphics. Or anything beyond RAM. And they pay the stupid Geek Squad to do it for them.

          • eitje
          • 11 years ago

          in this theoretical situation, why would laymen be wondering about exactness of the figures to begin with?

            • Meadows
            • 11 years ago

            I would sure as hell start a class action lawsuit over this.

            • titan
            • 11 years ago

            Because they lied to you about….

            • UberGerbil
            • 11 years ago

            Why? If the advertised size is the size available for use, and there’s some amount of additional blocks set aside for wear-leveling, what exactly is the basis of your complaint?

            • BooTs
            • 11 years ago

            I think he’s suffering from rambusitis.

      • UberGerbil
      • 11 years ago

      Look, their last-gen 640GB (actually 640GiB) card used twelve 64GiB chips:
      http://www.slashgear.com/gallery/showimage.php?i=23718&c=38

      That’s 10 chips to get to the advertised capacity, plus two more as reservoir. The shroud on the back of this new card makes it hard to count, but the pic in the PDF is clearer, and it looks like there are 24 chips on this one. Which would make sense, since it really appears to be two units on a single card. So that’s 20 chips to get to 1280GiB, or 1.28TiB, and four chips as reservoir.

        • Meadows
        • 11 years ago

        Except you divide by 1024, which makes 1280 GiB equal 1.25 TiB.

          • UberGerbil
          • 11 years ago

          Oh, I see what you’re objecting to. Yes, 1280×2^30 = 1.25×2^40.

          I suspect this is PR stupidity. I imagine the actual product will be labeled 1280, ie they’re still measuring in GiB. That would be in keeping with the rest of the lineup. Given their history, I would expect them to switch to TiB (rather than TB) if they decide to go that route with the naming scheme. But the marketing drones haven’t had that beaten into their heads yet, since the product doesn’t yet actually exist.
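
          To put the thread’s unit arithmetic in one place: 20 data chips of 64GiB give 1280GiB, which is 1.25TiB; a “1.28” only appears if the 1280 is read as decimal gigabytes. A quick check:

          # Binary vs. decimal reading of "1280 GB", per the exchange above.
          GIB, TIB = 2**30, 2**40
          print(1280 * GIB / TIB)       # 1.25 -> 1280 GiB is 1.25 TiB
          print(1280 * 10**9 / 10**12)  # 1.28 -> only if the "GB" are decimal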

        • Hattig
        • 11 years ago

        4 chips as reserve or as parity for error correction?

    • Peldor
    • 11 years ago

    Benchmarks are forthcoming, yes?

    Check out the management bios…the Woz is on board!

      • paulWTAMU
      • 11 years ago

      I can’t see TR buying a 10-grand hard drive to benchmark. Or a company giving them one to bench. That’s an insane amount of cash, but it does inspire some (pointless) geek lust.

    • MadManOriginal
    • 11 years ago

    I still don’t get why this company doesn’t make an ioDrive that is bootable. Afaik there aren’t technical limitations because it could just act like any other add-on RAID card. The only reason I can think of is that it’s a business decision where they figure that companies will only buy these for storage and not boot drives.

      • Meadows
      • 11 years ago

      It’s a pity, because with one of these drives, you wouldn’t even have time to fetch your coffee cup while an OS installs, let alone make coffee.

        • Helmore
        • 11 years ago

        You would need a very fast DVD drive to pull that off though.

      • notfred
      • 11 years ago

      You don’t want to boot off this drive, that’s not what it is for. This is more about fast access on servers that probably only reboot once every 6 months at most.

      I would love one for my compile server, but I can’t afford it.

      • UberGerbil
      • 11 years ago

      They actually were promising that they would make it bootable (some of the reviews of the last gen one mentioned this promise). As a practical matter for their market it doesn’t matter much: it’s really no different than the way many (most?) servers are set up: one small stupid drive for boot and then a huge/fast array for all the heavy lifting. Once a system is up and running everything it needs off the boot drive is part of the in-memory working set, so performance doesn’t matter, and nobody cares how long it takes to boot because that only happens for maintenance.

      From a technical standpoint it is extra work/expense for them to make it look like a SCSI device with the configuration info to show up in the BIOS, etc. If they ever decide to target a more mainstream market they’ll have to do it, which is probably why they’re talking about it, but clearly that’s not a priority ATM.

        • MadManOriginal
        • 11 years ago

        Yeah, I get why it’s not necessary for servers. It seems they just want to stick to the high-margin enterprise market, which is understandable. At some point, someone will come out with an SSD card that’s bootable, though. Actually, I’d be curious to see how this stacks up against an array of many small SSDs on a quality RAID controller.

          • UberGerbil
          • 11 years ago

          Actually, I wouldn’t be surprised if Intel does a proprietary interface in their chipsets that works with a PCIe version of their SSDs (they might even do a proprietary connector and cable, and just put that connector alongside the SATA one on their regular SSDs). They’re already putting a “non-volatile cache controller” into Ibex Peak; the next step is to make that cache be your entire secondary storage.

      • jwb
      • 11 years ago

      Because most of the logic is actually implemented in their driver, not in their hardware. Your host CPU does a lot of work to keep this card running.

    • Farting Bob
    • 11 years ago

    Whoa. That’s quick.
