Intel introduces 910 Series PCIe SSD

Intel is no stranger to the world of enterprise-grade SSDs. Its first offering for that market, the X25-E, was released way back in the fall of 2008. That was a 2.5″ drive, and it’s since been replaced by the 710 Series, which has a similar form factor and SATA interface. Today, Intel breaks away from Serial ATA with the 910 Series PCI Express SSD.

The 910 Series is built on a half-height, half-length PCI Express x8 card designed to squeeze into low-profile servers. Although the card occupies a single slot, it packs multiple circuit boards. The 400GB model is a dual-board affair, while the 800GB flavor serves up a triple-stacked NAND sandwich. Both conform to the gen-two PCI Express standard. Future versions will tap PCIe 3.0 “and beyond,” Intel says.

As one might expect, the 910 Series is made up of multiple logical SSDs. The 400GB and 800GB drives have two and four NAND controllers, respectively. These chips are the same as what’s found in Hitachi’s latest Ultrastar SSD; they pair Intel’s controller tech with Hitachi’s SAS interface logic. The controllers communicate with the system through an LSI SAS-to-PCIe bridge chip.

Rather than resort to RAID or virtualization schemes to present the multiple controllers as a single drive, the 910 Series shows its component SSDs to the operating system as separate units. Users are free to combine those units in software RAID arrays. They can also choose to address the component drives individually, although the 910 Series can’t be used as a boot device.
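
On Linux, combining the component drives is typically a job for the md layer. The sketch below is a minimal illustration, not anything Intel ships: it stripes the 800GB card's four component SSDs into a single RAID 0 volume with mdadm. The device names and chunk size are assumptions; the 400GB card would expose only two units.

```python
# Hypothetical sketch: striping the 910 Series' component SSDs into one
# volume with Linux software RAID (mdadm). Requires root and mdadm installed.
import subprocess

# Assumed device names; the component drives may enumerate differently.
COMPONENT_DRIVES = ["/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde"]

def create_stripe(md_device="/dev/md0", chunk_kib=128):
    """Create a RAID 0 stripe across the component SSDs."""
    cmd = [
        "mdadm", "--create", md_device,
        "--level=0",
        f"--raid-devices={len(COMPONENT_DRIVES)}",
        f"--chunk={chunk_kib}",
        *COMPONENT_DRIVES,
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    create_stripe()
```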

Intel equips the 910 Series with its own brand of high-endurance MLC NAND. This flash memory is built on the same 25-nm process as the NAND that populates Intel’s other SSDs, but only the best chips are cherry-picked for use in the company’s server products. Dubbed HET, the higher-grade NAND will purportedly last 30 times longer than the flash memory in consumer SSDs. Intel says the 910 Series 400GB can withstand 7 petabytes of writes over its lifespan, and the 800GB drive is supposed to handle double that. Both are covered by a five-year warranty.

Capacity | Lifetime endurance | Max sequential read | Max sequential write | Max random read | Max random write | Price
400GB    | 7 PB               | 1GB/s               | 0.75GB/s             | 90k IOps        | 38k IOps         | $1,929
800GB    | 14 PB              | 2GB/s               | 1GB/s                | 180k IOps       | 75k IOps         | $3,859

The 910 Series’ performance will obviously depend on how its component SSDs are configured, but Intel says the drive can achieve sequential transfer speeds up to 2GB/s and random I/O throughput as high as 180,000 IOps. With a little help from Linux kernel modifications, the drive’s random I/O performance can be raised even higher, according to Intel. There’s more performance to be gained if you’re willing to pump a little extra juice into the card, too. The 910 Series pulls 25W from the PCIe slot in its default configuration, but Intel’s software can increase that power draw to 28W and deliver a sequential write speed of 1.5GB/s in return.

Obviously, the 910 Series is rather expensive. The prices aren’t outrageous given the cost of other enterprise SSDs, though. Intel’s 710 Series 300GB will set you back nearly $1300 at Newegg right now, or about $4.33/GB. The 910 Series costs less than five bucks per gig, and it should be quite a bit faster.
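
For the curious, the per-gigabyte math is straightforward; the quick sketch below simply reproduces the comparison using the list prices from the table above and the quoted Newegg price for the 710 Series.

```python
# Price-per-gigabyte comparison using the figures quoted in the article.
drives = {
    "710 Series 300GB": (1300, 300),   # approximate Newegg street price
    "910 Series 400GB": (1929, 400),   # list price
    "910 Series 800GB": (3859, 800),   # list price
}

for name, (price_usd, capacity_gb) in drives.items():
    print(f"{name}: ${price_usd / capacity_gb:.2f}/GB")

# Output:
# 710 Series 300GB: $4.33/GB
# 910 Series 400GB: $4.82/GB
# 910 Series 800GB: $4.82/GB
```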

Comments closed
    • rimsha
    • 8 years ago

    it is expensive
    check more detail: http://www.gadget-mag.com/intel-ssd-910-series-with-a-pci-e-interface/

    • ShadowEyez
    • 8 years ago

    Good stuff, but at this point it’s still aimed at the professional data center/hosting market, not the enthusiast home user. 2 grand for a drive? A few 240GB SATA SSDs are only around $550, which is still a lot for the storage component, but at least you can boot from them.

    The only PCIe SSD for the home enthusiast with a somewhat realistic price tag is the OCZ RevoDrive series. When PCIe becomes easily bootable, they get rid of the SATA/SAS-to-PCIe converter/bridge (which in theory could make them cheaper and faster), and they go down in price by a factor of 5, they could hit the mainstream.

    The HDD is the last big bottleneck in the typical modern PC, and flash cells via SSDs have done a lot to eliminate it; moving to PCIe seems like a natural choice, but the transition is likely to be a little painful, since software and other parts of the hardware have to catch up.

      • cynan
      • 8 years ago

      You’d be hard pressed to buy a 240GB SSD for $550 these days. The new Intel and Vertex 4 drives might come close. Many top-performing SSDs around that size have dropped to $300 or less (e.g., Mushkin Chronos Deluxe, Crucial M4, Samsung 830).

    • kamikaziechameleon
    • 8 years ago

    It’s funny: we haven’t even worked all the kinks out of SSDs to begin with, and we’re already starting the transition to PCIe… Anyone else see a problem with this?

      • Peldor
      • 8 years ago

      No. PCI/PCIe controllers have been around a long time.

        • Flatland_Spider
        • 8 years ago

        Originally they were cards with a bunch of RAM slots or chips on them.

          • Forge
          • 8 years ago

          Gigabyte i-RAM! I wanted one of those so bad. I’d take a SATA 6Gbps or PCIe version for DDR3 memory in a heartbeat. They really should have built a good rechargeable battery into it to make it seem semi-volatile. As they were, they missed some of the good points of SSDs.

            • willmore
            • 8 years ago

            I remember a SCSI bus-attached one from a few decades back. It came with a battery so that it might survive small power losses. Solid state for storage acceleration isn’t exactly a new idea. It sure has made some great progress recently, though.

            • derFunkenstein
            • 8 years ago

            The price of RAM is coming down, but it’s still more expensive per GB than SSD, and that’s before you buy the card/interface. Theoretically it’d be faster, but probably not by a ton.

        • JohnC
        • 8 years ago

        Yea… Texas Memory Systems has been manufacturing such PCIe cards for a long time already, as well as other solid-state storage systems that have been used successfully in many commercial applications (I believe CCP, the company behind EVE Online, has been using their RamSan drives for a very long time for their servers).

      • stdRaichu
      • 8 years ago

      I don’t think a single invention throughout the entirety of human history has ever had all the kinks ironed out, especially not before others start to move on to different (and hopefully better) things. Wheels still wear out, levers still break, hard drives still crash.

    • kamikaziechameleon
    • 8 years ago

    TRIM support???

      • Farting Bob
      • 8 years ago

      Please, it doesn’t even have boot support. Why the hell they can’t do that I don’t know. Can anybody explain why it seems so damn hard to boot from PCIe SSDs?

        • Thatguy
        • 8 years ago

        I’ve always wondered that and would also like an answer. I would guess it has something to do with how PCIe powers its devices, or the lack thereof when the PC is off?

        • stdRaichu
        • 8 years ago

        Not sure it counts as an explanation as such, since all it really requires is a BIOS boot ROM and OS driver support (there’s not yet any standard for PCIe storage), but both of those things cost money, especially in the server market. Enterprise customers (of which I am one, and which these drives are definitely targeted towards) will typically be booting off either a SAN or local SAS (i.e., a RAID of platter or SSD drives) and using one of these IOPS monsters either solely for caching or to house the hot sections of databases. As such, boot support will be a low priority, whereas high performance, high reliability, and high margins are high priorities.

        Simple matter of fact, these drives are complete and total overkill for the OS/system drive itself, which, in most company setups, won’t house the programs or data (or even temp space) at all.

        And yes, it’s frustrating. I would dearly love to see a unified standard for PCIe boot but that’s at least a couple of years away.

        http://community.fusionio.com/products/f/18/p/40/1081.aspx?PageIndex=1

        • Vaughn
        • 8 years ago

        Good question, but I don’t see the point of running your OS off this drive. A small SSD would be fine for just the OS, and this drive would be where you store your database and the rest of your workload.

        Two SATA 6Gbps drives in RAID 0 would be enough to push data rates that match up with what this drive can do performance-wise.

    • Bensam123
    • 8 years ago

    Stuff like this is really awesome and also seems like it’s in a really weird transitional period, all at the same time. If Windows had better caching, there would almost be no need for SSDs. For that matter, if mechanical HDs had better caching in and of themselves, it would really decimate the need for SSDs too. Both approaches are pretty much hybrid drives.

    It’s weird… after all these years, mechanical HDs have remained relatively unchanged even though they’ve had the option to improve caching on the drive itself. It’s such a common-sense idea you’d think they would’ve thought about throwing a gig or two of volatile memory on the HDs themselves. Or that Windows would have the ability to use extra memory as a pure HD cache. Heck, Intel is sorta doing this with their new chipsets and mSATA SSDs.

    I do truly appreciate something like this, but a good question is why the need has arisen in the first place. Not necessarily the need for a faster storage medium, but rather why storage isn’t already better balanced against the other extremely fast components in the system. If you have 16/24/32/64 gigs of DDR3 dangling off the processor sitting idle, you’d think someONE someWHERE would’ve been like… hey… if we make this better at caching reads and writes, we could change the way the world views storage.

    After playing around with different caching schemes and filesystems (ZFS has truly impressed me), I think Microsoft should really get its act together and stop playing around with BS shiny toys like Metro.

      • Firestarter
      • 8 years ago

      Excuse me, better caching? For a consumer, Windows 7 + a ton of RAM is as good as it’s going to get cache-wise. Provided you keep the cache warm (that is, don’t shut down or hibernate the computer), it will cache everything you regularly use. And it just doesn’t cut it! Caching can never actually replace good-performing storage; it can only help mitigate and hide the effects of bad performance. Having 16GB of RAM filled to the brim with cached data doesn’t help a lot when you suddenly break your pattern and touch data that you haven’t used for quite a while.

      SSDs aren’t just about improving the median performance of a PC; they’re about improving worst-case performance so effectively that we forget about the worst cases. Case in point: OS updates. You can’t effectively cache the process of updating critical OS files and verifying that they’ve been updated correctly. On a regular system running Windows 7, the update process can be dreadfully slow. Slap an SSD in there, and suddenly it becomes just a minor nuisance. All the caching and RAM in the world is never going to make that happen.

        • ish718
        • 8 years ago

        And mechanical HDDs will always have to spin, and they use much more power than SSDs. SSDs for the future!

          • NewfieBullet
          • 8 years ago

          While in general you’re correct, I don’t think there’s a mechanical hard drive that requires anything close to the 25W that this SSD uses.

        • Deanjo
        • 8 years ago

        “Case in point: OS updates. You can’t effectively cache the process of updating critical OS files and verifying that they’ve been updated correctly.” Correction: Windows can’t; other OSes can just fine.

        • Bensam123
        • 8 years ago

        I straight up disagree. I’ve been using FancyCache for close to a year while it’s in beta, and everything loads extremely fast. For instance, when playing a game, after it loads a map or assets once, everything else loads insanely fast; this goes for things outside of gaming too, including web browsing. Without it, Win7 grinds my HD constantly, even after reloading similar data over and over again.

        Windows sucks at caching.

        A cache can also do write caching. Deferring writes for just a couple seconds is sometimes all it takes for a mechanical HD to stay on top of things.

        Why can’t you cache OS updates? Do each update incrementally in memory and then apply the patch in one go. If you lose power while it’s writing, it’s no different than losing power halfway through a patch that was writing directly to the HD. It will need to hit the drive to reverify such files, but that can be done in the background, and it’s a worst-case scenario. I don’t think I would justify the cost of an SSD with OS updates, especially when much of the cost can be deferred by having a better cache and simply having the OS handle data better.

        Excuse you.

      • tay
      • 8 years ago

      Sigh no. This is for database and enterprise workloads.

        • Bensam123
        • 8 years ago

        Why?

          • Flatland_Spider
          • 8 years ago

          Because 1.5GB/s is like drinking from a fire hose. This is about getting data out as quickly as possible, and there are very few people who have workloads that could exploit this technology. Graphics/audio people and developers are two groups I can think of.

          Businesses have quite a few workloads that can exploit the abilities of this card, and keeping hungry, hungry CPUs fed is one of them.

          Fusion-io has some case studies about their cache cards:
          http://www.fusionio.com/

            • Bensam123
            • 8 years ago

            I can buy 16 gigs of memory for $60 that is ungodly fast and complement it with a mechanical storage device that is ridiculously cheap in $/GB; does that mean this is meant for corporations?

            Tay’s comment is in response to mine, which isn’t directly related to the news snippet, BTW.

      • bcronce
      • 8 years ago

      “If Windows had better caching there would almost be no need for SSDs.”

      Windows already does caching, but people keep disabling the service because they claim it makes Windows faster.

        • Bensam123
        • 8 years ago

        Erm… for those of us who don’t disable it, it’s still pitiful at it.

      • bandannaman
      • 8 years ago

      Such a friendly lad! I can feel the positive energy. I’m sure you’ll make lots of friends here!

        • Bensam123
        • 8 years ago

        Thanks… Top of the mornin’ to you too sir!

      • Krogoth
      • 8 years ago

      I think the largest problem is the *lack* of demand for faster I/O on the mainstream front.

      There’s no killer app in the mainstream that *needs* 100MB/s+ of bandwidth on the I/O end. HDDs are sufficient for the most part without costing an arm and a leg.

      The only thing that SSDs brought to the mainstream table is faster access speed (lower latency), which is why they make systems feel snappier, like the 15K-RPM HDDs of yesteryear, but their STR performance isn’t that much better (except for PCIe units).

      Improving memory caching is just a band-aid for the problem, because programs and the OS still need to pull the data off the much slower, non-volatile media (HDD/SSD) into system memory/on-board cache.

        • Bensam123
        • 8 years ago

        I don’t know about that… with MS vouching for and concentrating on ultra-fast boot times for their OS, and people who enjoy having a fast storage subsystem, I think this is just as relevant as SSDs are; it’s just a different method for attaining a better storage subsystem.

        It’s weird how some of you are making a distinction between a cache being TOO fast and a mechanical drive being too slow, but you’d readily support an SSD and probably even own one… it’s almost like there is a bias in here somewhere…

        I don’t think better caching is a silver bullet, but it most definitely is a very prominent solution for a lot of the problems we’re having with storage. It’s like turning your mechanical drive into a hybrid drive without buying a hybrid storage drive… or really buying anything except a piece of software. Drives really should be doing this to begin with.

        As caching algorithms get better, so do the worst-case scenarios for HDs, since the cache can start doing some pretty fancy predictions that anticipate data accesses before they happen and cache the data to begin with: like being able to tell which files are connected, rather than just what the disk accesses regularly. Really, Intel or AMD would be awesome at working on a piece of software like this, as they already do pretty hardcore prediction inside a processor.

        A cache can also do its accesses in the background when the HD isn’t getting pummeled by urgent data requests.

        • derFunkenstein
        • 8 years ago

          “100MB/s+ of bandwidth on the I/O end.” Pardon?

          • Krogoth
          • 8 years ago

          I/O end = the bandwidth of your non-volatile media.

          You don’t need that kind of bandwidth to pull up your favorite game, browse the web, and play your music/audio files.

          The demand doesn’t exist in the mainstream market, because there’s no killer app for it.

          It is a different story for enterprise and datacenter markets though.

        • Waco
        • 8 years ago

        Lack of demand? You’re kidding right?

        I’m assuming by “STR” you mean sustained transfer rates… it’s both easy and cheap to build an array of two drives that’ll sustain 1 GB/s reading and writing.

          • Krogoth
          • 8 years ago

          1GB/s STR is certainly obtainable, but it isn’t cheap or easy to pull off.

          It is still in the enterprise/datacenter tier and will remain there for a number of years.

    • internetsandman
    • 8 years ago

    I wanna be excited for this… but that review of OCZ’s RevoDrive kinda spoiled high-performance storage for me. I wanna see a review done on this drive; I want Intel to bring back my faith and confidence in these kinds of technologies, confidence that was so rudely shattered by OCZ.

      • Draphius
      • 8 years ago

      Well, this works differently than OCZ’s setup and sounds like it’s more flexible, but it takes more knowledge to get configured properly for your setup. I still think PCIe SSDs have a ways to go, though, IMO, especially when you account for cost.

        • DrDillyBar
        • 8 years ago

        Agreed. Both.

        • kamikaziechameleon
        • 8 years ago

        SSDs have a way to go as a primary form of storage when you consider cost; people use them as OS drives but not to store music and media yet. When that transition happens, we’ll finally know that SSDs are in a good place.

      • kamikaziechameleon
      • 8 years ago

      I agree. SSDs haven’t even been sorted out on SATA; why are they in a hurry to pick up PCIe???

        • Flatland_Spider
        • 8 years ago

        Higher throughput. Speed matters when the storage subsystem is the bottleneck.
