Notes from TR’s next-gen storage testing

In between work on other projects, I’ve been running exploratory tests on PCIe SSDs to see how their performance characteristics differ from the SATA drives we’ve been reviewing for the past few years. This research is part of a larger effort to come up with a new collection of tests for our next-gen storage suite.

Our old suite dates back to 2011, and though it’s been tweaked here and there, it’s long overdue for a major overhaul. The tests we conceived three years ago aren’t ideal for the latest SATA drives, let alone the coming wave of PCIe SSDs. So, I’ve been testing some newer drives to see what it takes to exploit their full potential.

SSDs have been bottlenecked by the Serial ATA interface for quite some time. The 6Gbps host connection is the obvious limitation, but it’s not the only one. SATA is also constrained by the AHCI protocol, which was conceived in an era of much slower mechanical drives.

Fortunately, the table is set for a PCI Express revolution. Windows 8.1 offers native support for NVM Express, a newer protocol designed specifically for PCIe SSDs. Intel’s 9 Series chipsets have their own provisions for PCIe drives, and compatible M.2 slots can be found on most enthusiast-oriented motherboards from that camp.

The DC P3700 (top), M6e (right), and 850 Pro (left)

There aren’t a lot of purebred PCIe SSDs on the market right now, but we can get a sense of what’s possible from Intel’s DC P3700. This datacenter drive comes on a half-height expansion card with a beefy heatsink. It has a four-lane Gen3 interface, and it’s based on the NVM Express protocol. It’s also extremely expensive; our 800GB sample sells for a whopping $2,600 at Newegg.

Although the P3700 works in standard desktop systems, we haven’t been able to boot Win8.1 on the thing. The motherboards we’ve tried don’t even show the drive as a boot option in the firmware. That’s not entirely surprising given the P3700’s target market, but it does highlight the fact that not all PCIe SSDs are fully supported in the PC arena.

Plextor’s M6e is a whole other story. This drive has a tiny M.2 2280 form factor, and it’s only $220 for 256GB. The dual-lane Gen2 interface is a good fit for 9-series motherboards, and we haven’t had any issues booting Windows. However, the M6e uses AHCI instead of NVMe, so it’s not a truly next-gen product.

For comparative reference, we’ve also been running preliminary tests on Samsung’s 850 Pro 512GB. It’s the fastest SATA SSD we’ve encountered to date, making it a good control of sorts.

Our first batch of results comes from Iometer, which lets us tweak the queue depth and the number of workers (threads, basically) hammering the drive with I/O. The number of concurrent I/O requests is the product of the number of workers and the queue depth. For example, one worker at QD32 produces 32 concurrent requests—the same as for a four-worker config at QD8.
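
To make that arithmetic concrete, here’s a minimal sketch in Python; the worker/queue-depth pairs are only the two examples mentioned above, not our full test matrix.

```python
# Minimal sketch of the load math described above. The (workers, queue depth)
# pairs are just the two examples from the text, not our full test matrix.

def concurrent_requests(workers: int, queue_depth: int) -> int:
    """Each worker keeps queue_depth requests in flight, so the totals multiply."""
    return workers * queue_depth

for workers, qd in [(1, 32), (4, 8)]:
    total = concurrent_requests(workers, qd)
    print(f"{workers} worker(s) at QD{qd} -> {total} concurrent requests")
```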

Even with the lightest load, the P3700 more than doubles the sequential speeds of the other SSDs. Its read performance scales up aggressively as the queue depth rises, but there’s less improvement with writes. Interestingly, the P3700’s sequential speeds drop in our four-worker tests, at least versus single-worker configs with the same number of simultaneous requests.

In the write speed test, the four-worker setups produce similar slowdowns on the M6e and 850 Pro. There’s less of an impact in the read speed test, where the performance of those drives is fairly consistent across our six load configurations. Neither the M6e nor the 850 Pro hits substantially higher speeds under heavier loads.

Additional workers do help with random I/O, at least for some of the SSDs. Check out the peak 4KB random write rates:

The P3700 and M6e both get a boost from additional workers. The gains are bigger with heavier loads, especially on the Intel SSD. Check out the 50% jump in IOps from one worker at QD32 to four workers at QD8.

Curiously, the 850 Pro doesn’t respond well to loads spread across multiple workers. Its random write rate drops substantially when we switch from one worker to four, even when the total number of concurrent requests remains the same. That’s a shame, because the 850 Pro actually outperforms the M6e with a single worker.

Those random write peaks are much higher than the sustained rates that each SSD achieves. Here’s a closer look at how the drives compare across a 30-minute test. Click the buttons below the graph to switch between the various worker-and-queue combos.

All the drives peak early before trailing off as the clock ticks. The speed and shape of the decline is different for each one, in part because of the large differences in overprovisioned area.

The M6e 256GB and 850 Pro 512GB allocate roughly the same percentage of flash to overprovisioned area, but since the Samsung has a higher total capacity, it has more of this “spare” area to devote to accelerating incoming writes. The P3700 800GB has an even higher total capacity, but that’s not all. Like most server-grade gear, it also sets aside a much larger percentage of its flash as overprovisioned area.
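
For readers curious about the arithmetic behind those spare-area percentages, here’s a rough sketch; the raw-NAND figure is a hypothetical example rather than a published spec for any of these drives.

```python
# Rough sketch of the spare-area arithmetic. The raw-NAND figure below is a
# hypothetical example, not a published spec for any of the drives above.

def overprovisioning(raw_flash_gib: float, user_capacity_gb: float):
    """Return spare flash in GiB and as a percentage of the raw flash."""
    user_gib = user_capacity_gb * 1e9 / 2**30   # marketed GB -> GiB of flash exposed to the user
    spare_gib = raw_flash_gib - user_gib
    return spare_gib, 100 * spare_gib / raw_flash_gib

# Hypothetical 256GB drive built from 256GiB of raw NAND
spare, pct = overprovisioning(raw_flash_gib=256, user_capacity_gb=256)
print(f"{spare:.1f} GiB spare, {pct:.1f}% of raw flash")   # ~17.6 GiB, ~6.9%
```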

The P3700 is a beast, and so is this test. I can’t think of a client application that generates an uninterrupted stream of random I/O for any considerable length of time. One of the biggest challenges with developing this new suite is balancing our desire to push drives to their limits with the need to present performance data that’s actually relevant to desktop workloads.

Sequential speeds don’t waver over longer tests, so there’s no need to draw those results out over time. The same goes for random read rates. IOps is the most commonly used metric for random I/O, but we think response times can be more instructive.
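
For context on how the two metrics relate, Little’s law ties IOps and mean response time together at a given number of outstanding requests. The sketch below uses made-up numbers purely for illustration, not measured results.

```python
# Little's law: outstanding I/Os = IOps x mean response time, so at a steady
# queue depth the mean latency falls out of the IOps figure. Numbers are made up.

def mean_latency_us(outstanding_ios: int, iops: float) -> float:
    """Mean response time in microseconds, assuming the queue stays full."""
    return outstanding_ios / iops * 1e6

print(mean_latency_us(outstanding_ios=32, iops=400_000))   # 80.0 us
print(mean_latency_us(outstanding_ios=4, iops=100_000))    # 40.0 us
```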

All the SSDs are in the same ballpark up to four simultaneous requests. The M6e and 850 Pro slow down considerably after that, and they really struggle under our heaviest load. The P3700’s response times get slower, as well, but not by nearly as much.

Thanks to our resident developer, Bruno “morphine” Ferreira, we have another storage benchmark with a configurable load. RoboBench is based on Windows’ robocopy command, which can be run with up to 128 simultaneous threads. With the aid of a RAM drive, we can use RoboBench to test read, write, and copy speeds with real-world files. Here’s a taste of how RoboBench scales when reading files:

The work test uses tens of thousands of relatively small spreadsheets, documents, web-optimized images, HTML files, and the like. Read speeds increase dramatically to start, but the gains peter out as the thread count rises.

The media test comprises much larger movie, RAW, and MP3 files. Four threads are sufficient to reach top speed even on the P3700.

Robocopy defaults to eight threads, so that’s probably a good test to use along with the single-threaded config. It’s more difficult to make a case for testing additional configurations, in part because of the time required to secure-erase and pre-condition SSDs before any test that writes to them.
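
RoboBench itself is our in-house wrapper, but for the curious, robocopy’s thread count is controlled with its /MT switch. Here’s a hedged sketch of what a scripted read pass might look like; the paths are placeholders, and the exact options RoboBench uses may differ.

```python
# Hedged sketch: timing a robocopy read pass from the SSD under test to a RAM
# drive. Paths are placeholders; RoboBench's actual options may differ.
import subprocess, time

def robocopy_read(source_dir: str, ramdisk_dir: str, threads: int = 8) -> float:
    """Copy a file set with N copy threads and return the elapsed time in seconds."""
    start = time.perf_counter()
    subprocess.run([
        "robocopy", source_dir, ramdisk_dir,
        "/E",              # include subdirectories
        f"/MT:{threads}",  # multithreaded copy, 1-128 threads (default is 8)
        "/NJH", "/NJS", "/NFL", "/NDL", "/NP",  # quiet output for scripted runs
    ], check=False)        # robocopy signals success with nonzero exit codes, so don't raise
    return time.perf_counter() - start

print(robocopy_read(r"D:\work-fileset", r"R:\scratch", threads=8))
```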

The above results provide a small taste of what we’re working on for future SSD reviews. I have a tendency to go a bit overboard with testing, but I’m trying to exercise more restraint this time around. We’ll see how that works out. Stay tuned.

Comments closed
    • Turd-Monkey
    • 5 years ago

    I’d love to see an Intel 730 added to this mini comparison. It’s not that great from a value perspective, but it would still be interesting to see how much the “hand me down” enterprise firmware helps for these tests.

    • stdRaichu
    • 5 years ago

    Damn, I seem to have missed this one. Well, figured I’d chip in anyway since it’s either that or help someone make egg nog.

    Some observations from my own IO testing at home and at work:
    - We make extensive use of sysinternal's diskmon to capture a trace of how an application uses its storage; it'll give you a timestamped rundown of read/write requests along with their destination discs and the request sizes.
    - As well as handy info, the output from diskmon can also be used to build an approximation of the IO pattern in iometer or similar. Given that a great deal of applications are impossible to benchmark, or run in batch mode, this is priceless.
    - It's not really worth testing queue depths, at least on a single machine level, more than about 16 - it's great for utterly thrashing the disc subsystem and making benches look good but it's almost impossible to attain on anything but server-grade hardware. I ran a test on my workstation (windirstat analysing the drive, aftershot indexing four lots of RAW images at a time, and two 2008R2 VMs installing SP1, and that was enough to max out the CPU) and queue depth never exceeded 8. Even our thirty-VMs-per-server blades rarely exceed a QD of 32.
    - It's not so much of an issue now, and it's something Anand touched on years ago, but some SSDs behaved differently depending on the sort of IOs that were going on at the time. Specs saying "50,000 4k read IOPS!" are all well and good but don't tell the whole story - in a real world scenario you're also going to be reading and writing lots of different sized IOs at more-or-less the same time, and performance of some controllers just fell through the floor when you had craptons of random access. Not so much of an issue these days though.
    - We usually pair the diskmon with a perfmon trace loading Processor/% Privileged Time/<All Instances> and Processor/% Processor Time/<All Instances>. This'll allow us to spot if/where storage is becoming bottlenecked on the CPU and whether it's bottlenecked by the hardware/driver or the application (and indeed whether the application will scale to more than one CPU). Care has to be taken when interpreting the results on a machine with SMT enabled, of course.
    - Following on from that, we've established that most of our IO is CPU-limited and we've already passed the point where SSDs frequently sit twiddling their thumbs whilst the CPU puffs and pants. Particularly on the lower end of the CPU scale, and especially where the application(s) doing the IO are single-threaded.

    On the workstation side of things, most SSDs these days feel practically indistinguishable from one another, so we buy primarily on a) reliability and b) price-per-GB, with testicles-to-the-vestibules performance a tertiary consideration after long-term speed/TRIM/GC performance. All IMHO, IME, my £0.02, YMMV etc etc, and I'll be interested to see if your testing disagrees with ours!

    • nerdrage
    • 5 years ago

    Typo: On the 4th IOMeter graph (the one with the buttons), the left-most button should read “1 worker QD1” rather than “1 worker QD4”.

    • Luminair
    • 5 years ago

    I need a real-world test to see the value of it all. We know from past tests that a big synthetic difference does not necessarily show itself in real-world testing.

    My favorite test so far is an Anandtech one where they show how long the test took to run. This is even more understandable than real-world transfer rates. It’s real world time. Device X took 10 minutes to run the test, and device Y took 5 minutes. This I can relate to.

    I would enjoy a test pattern which reproduces some of the crazy stuff we do in real life: running windows update, steam update, torrent downloads, file serving, and 100 browser tabs paging to disk because they ran out of RAM, all simultaneously. The times we tap our fingers on the desk, waiting. Graphing how long it takes each device to finish that process would be golden!

      • DPete27
      • 5 years ago

      Yeah, we’re starting to get into the realm of performance that far exceeds the needs of typical consumer usage, and the difference between drives for those uses is increasingly minuscule. That said, I don’t want TR to lose sight of that reality. Enterprise workloads are different, yes, but make sure you’re keeping your feet on the ground.

      Anandtech is onto an interesting angle: performance consistency. They’ve found that SSD manufacturers are only now starting to improve consistency where previous drives were “bursty”. I’d like to see TR’s take on that.

      • EndlessWaves
      • 5 years ago

      Yeah, the only reason to do individual SSD reviews is to find the differences and explain when they will matter. Otherwise you just want basic (half-)yearly group tests of everything on the market to ensure there aren’t any lemons and point out any changes.

      Real-world recording of multiple loads would be great, and I’d like TR to focus on feature differences too. One of the things I came across recently was how difficult it is to secure-erase an SSD; you only have to look at Crucial’s recommended method to see that it’s an area that needs further work:
      http://forum.crucial.com/t5/Crucial-SSDs/SSDs-and-Secure-Erase/ta-p/112580

      • dragosmp
      • 5 years ago

      It sounds great, but the problem with that, if I recall one of Damage’s replies from a while back, is that the workload isn’t repeatable; threads aren’t necessarily scheduled the same way between runs, so on the same config you could end up with a different result each time. It could work if all of the above could be recorded as a trace to be played back on the tested devices.

        • bwcbiz
        • 5 years ago

        Yeah, most tech sites don’t have the time/resources to do 10+ runs on each drive so they can get a good statistical mean. In the best case, I typically see 5 runs where they throw out top and bottom outliers.

      • September
      • 5 years ago

      I’d go even further and say that what matters most (and needs to be tested) is how much lag is removed from “short” tasks. I almost don’t care how much time a long task takes – anything over a minute and I switch to something else or walk away (yes, I’m truly a GenX ADHD child). My reason for wanting a fast SSD is to eliminate all short delays while I’m actively “working” – eliminating the delays between repeated inputs is what can make ME faster.

    • LoneWolf15
    • 5 years ago

    Thanks, TR crew, for working on this. Continuing to update testing standards for both speed and reliability, and testing products so that we can get good ideas of what fits our budget and performance needs, is a great thing.

    Merry Christmas.

    • aceuk
    • 5 years ago

    FWIW, Microsoft released a hotfix last month (KB2990941) to add NVMe support to Windows 7.

    http://support.microsoft.com/kb/2990941

      • September
      • 5 years ago

      Thanks!

    • ptsant
    • 5 years ago

    Does this translate to any advantage for the casual home user? What is the difference in common tasks like loading stuff, application start-up and processing big files on the disk (photos etc)?

    Anyway, if I had an M.2 slot, I would buy the Plextor. Every Plextor I’ve bought has worked flawlessly.

      • nanoflower
      • 5 years ago

      I suspect that what Damage said during the podcast is true: for the casual home user, it makes little difference in performance which SSD you choose. Sure, you can get some performance advantage if you are somewhat picky and willing to spend a little bit more money, but the performance boost is something most users won’t notice. It’s going from a mechanical HD to an SSD where the casual user really sees that performance increase.

        • VincentHanna
        • 5 years ago

        People say this consistently, but I don’t think it’s one hundred percent accurate. People have a threshold for latency that they expect for certain tasks. That doesn’t mean that we aren’t capable of discerning the difference between two levels of performance side by side, but rather that we can be comfortable with either. Most people don’t realize how snappy Windows 8 is until they compare it directly with Windows 7 or try to go back.

        Nearly 3x the sequential read speed of an 850 Pro is substantial, 20% higher access rates are substantial, and from what I see up there, those are the low numbers. PCIe slots are also closer to the metal than SATA ports. You really don’t think numbers like that would affect boot-up and load times noticeably for your PC gamers, your professional photographers, and your 3D modelers? Anyone who works with large files?

        • AdamDZ
        • 5 years ago

        I played with 2x and 4x SSDs in RAID0 and couldn’t tell a difference in real-life use between 530MB/s and 1800MB/s: games, web, Photoshop, etc. I went back to individual drives for simplicity’s sake. I thought games would load more than twice as fast, but that didn’t happen.

          • UnfriendlyFire
          • 5 years ago

          Team Fortress 2 loads slower than Windows 7’s startup.

          Seriously.

          • Krogoth
          • 5 years ago

          It is because, with SSDs, load times for mainstream applications such as games are CPU-bound.

            • UnfriendlyFire
            • 5 years ago

            TF2 doesn’t fully use the CPU. 60%-80% on one core, around 20% on the other core.

            • Krogoth
            • 5 years ago

            It is still limited by the CPU and game engine. The Source engine was never designed to load up GiBs’ worth of content for a gaming session. Titanfall is another example of this.

            • VincentHanna
            • 5 years ago

            I’m not necessarily talking about LAUNCHING programs, which can definitely involve waiting on the CPU; I’m talking about opening/saving large RAW files or databases once a program is launched. Not only are these tasks not necessarily bottlenecked by the CPU, but depending on how the program buffers, you can actually watch the SSD catch up. This should also apply to loading large, rich maps once in games, too… making your Batman: Arkham Knight experience noticeably better, but not necessarily improving your BF5 experience by nearly as much, for obvious reasons.

            • bwcbiz
            • 5 years ago

            Considering that SSDs perform operations on the order of microseconds, DRAM runs in the tens of nanoseconds, and CPUs in the sub-nanosecond range, I don’t think “CPU-bound” is quite the proper term here. What you’re actually seeing is a combination of access times at the different storage levels combined with the CPU deciding what needs to be accessed next. CPU-bound comes into play once the AI and game engine are running and the game has to track hundreds or thousands of entities and model their actions. Engine-bound or OS-bound is a distinct possibility in many cases. The engine can force the CPU to wait while it reloads an area from disk rather than caching it somewhere in memory. And one of the reasons AMD came up with Mantle was that graphics were still single-threaded in DX11, despite the fact that graphics cards are massively parallel. The OS can also come into play for things like code-signing and validation.

            And last but not least, if the app does any kind of interaction over the internet, that will impact your performance every time it runs. Even at its best, internet service times are similar to mechanical hard drives. Think of multi-player or always-on DRM for worst-case scenarios.

            • Krogoth
            • 5 years ago

            You realize that games are programs at heart, and the CPU needs to process, organize, and compile all of the data into a usable format?

            That’s why one of the most noticeable benefits from upgrading to a faster CPU is faster load times with your programs.

            Unfortunately, it is difficult to make the process multi-threaded, so the benefits of multi-core chips aren’t noticeable unless you are trying to load up several instances at the same time, but that’s typically the workload of a server/workstation, not a mainstream system.

          • VincentHanna
          • 5 years ago

          Again though, RAID is a different animal. RAID doubles write performance at the expense of access times (it adds two more storage controllers to the mix), and it offers a less-than-1:1 improvement to read performance as well, with diminishing returns. Using RAID to make your boot-up times faster compares, in a few ways, with hiring a second head chef at a restaurant: maximum I/O capacity is improved dramatically, while cook times are bounded primarily by the recipe you are using. 100 chefs working in tandem could not improve the start-to-finish cook time of a beef wellington by much more than 20%.

          The numbers above come from moving beyond the limitations of AHCI and SATA, so comparing exotic AHCI/SATA-bound setups isn’t really on point. I’ve played around with RAM disks, for instance, and I think they make a difference with certain types of programs. If you want your beef wellington faster, you need a new way of cooking it.

    • Irascible
    • 5 years ago

    “I have a tendency to go a bit overboard with testing”

    Oh yeah.

    • UnfriendlyFire
    • 5 years ago

    Any updates about the idle/load power consumption? Those are very important factors for laptops.

    An SSD sucking down 5W at load and 1W at idle isn’t that much of a concern for a desktop system, especially if it has a low cost per GB.

    For laptops? You’re going to need a bigger battery, and that’s not an option for even some of the business laptops (especially if the battery isn’t easily removable). Consumer laptops? Good luck.

    • ikjadoon
    • 5 years ago

    “One of the biggest challenges with developing this new suite is balancing our desire to push drives to their limits with the need to present performance data that’s actually relevant to desktop workloads.”

    I think this is key for the upcoming tests. In a GPU review, I understand and/or directly relate to most benchmarks. The same is true for CPU review, but to a lesser extent.

    RAM and SSD reviews? Maybe one or two pages where I will learn about the useful performance.

    Queue depths, workers, even response time: I don’t exactly understand why these are relevant. And, to be honest, looking at real-world tests that show the majority of SSDs within seconds or less of each other (though costs varying wildly), I’m not sure how relevant they are.

      • Krogoth
      • 5 years ago

      Outside of certain workloads and usage patterns, mainstream SSDs are more than adequate. Their primary benefit over HDDs has always been random access speed, and it is a night/day difference.

        • ikjadoon
        • 5 years ago

        I think those “workloads and usage patterns” need to be made explicit in this new storage bench. And, if they’re not relevant for TechReport’s audience, I think they can safely be de-emphasized or removed.

        From what little I know, these are all enterprise features. Why are we testing consumer drives on their enterprise performance?

        It’s not like GPU reviews show pages upon pages of CUDA or crypto performance.

          • Klimax
          • 5 years ago

          Not removed. There are way too many ways for programs to access storage; just because somebody doesn’t know about a particular case existing doesn’t mean it doesn’t exist in reality. Just try compiling a large C++ project with significant optimization in VS 2013 and you’ll quickly discover that a massive CPU alongside the best SSD (like Intel’s PCIe one) is almost required if you don’t want to wait minutes for completion. (And if one uses a “debug” build or lower optimizations, they might not need the CPU, but the hard drive will still be the bottleneck…)

          Another: working-copy checkouts (SVN, and to some extent Git and Hg, depending on the repos).

          There are way too many ways to generate large workloads even on a client; go to workstations and you are bound to find some nice cases. On servers they’re guaranteed.

          For one, I like seeing those extreme tests; I don’t see them often enough.

            • nanoflower
            • 5 years ago

            Yes, but again you are getting into something closer to enterprise loads. I’ve been there myself and used IObench to help qualify solutions but that doesn’t really reflect the typical user. That being said I’m not against including some tests that reflect those heavy loads as an addition to the more normal workload tests. Just so long as the focus is kept on something that more closely reflects the typical user and not those extreme cases.

            • kuraegomon
            • 5 years ago

            Counter-counter-argument: I’d say a significant portion of TR’s readers are “pro-sumers”. I.e. are significantly more likely to have at least a few use cases where queue depths and multiple workers matter.

            For instance, a common workflow for me is to be using my main desktop at home for typical single-user tasks in the host (Windows 7) OS, while running one (or possibly two) VMs where a lengthy multi-threaded compile is running, and an enterprise application suite is running in the background. In this context, not only does maintaining responsiveness matter, but shaving 5 minutes off that 10-15 minute build is going to be worth a lot to me in the long run. Today, my disk setup of choice is dual SATA SSDs in RAID0, but I’m looking forward to an NVMe-based config because I expect to receive a perceptible benefit from it.

            If you’re not interested in some of the more involved analysis, feel free to skip over those pages. But we’re a pretty heterogeneous audience here at TR – and the primary reason this is a go-to site for many of us is because of the comparatively rigorous reviewing methods that have become so hard to find elsewhere.

          • dragontamer5788
          • 5 years ago

          Well, the standard performance benchmarks are:

          1. File Copy
          2. Cold Bootup time

          Beyond that, I’m thinking maybe file search would be the other task “normal” people do.

          In any case, the synthetic benchmarks are there for those who understand synthetic benchmarks. “Consumer” benchmarks look at a problem with a ton of confounding factors, while synthetic benchmarks are more “pure” (but harder to understand / conceptualize).

          TR should definitely just do both however if time permits.

    • Krogoth
    • 5 years ago

    SSD cards are the new “SCSI”: professional-tier, offering higher performance at a cost if you want/need it. PCI Express versions were just a stop-gap solution while NVM Express is the real deal. NVM Express is really more about changes at the firmware and software level, so it doesn’t require hacks to get working as a bootable device.

    SATA Express is going to be the next-generation solution for the mainstream market.

    I don’t see SSD cards ever becoming mainstream because of the lack of a killer application that would benefit from their bandwidth. SATA Express is going to be cheaper, have legacy support, and have more than sufficient performance for mainstream users. ~1GiB/s is more than enough for your p0rn collection and gaming needs. SATA Express is the new “PATA”.

    I’m more surprised that Intel hasn’t pushed for another form factor. They already have SoC solutions, and now both SATA Express and NVM Express can remove the need for 2.5″, 3.5″, and 5.25″ devices from the chassis. NVM Express is going to completely change the rackmount arena. You will start seeing more and more “daughterboards” for your storage needs.

      • ikjadoon
      • 5 years ago

      SATA Express, though, looks fugly: http://www.kitguru.net/wp-content/uploads/2014/02/SATA-Express-connector.jpg

      It's not IDE, but I'm not excited about that at all. M.2 looks great, however: I like the idea of a mini-ITX system + M.2 via NVM Express. Look, no hard drives, ma! 😀

      I agree completely: I'm waiting for the day we completely scrap 2.5" -> 5.25" drives from computers. That's why I got the Corsair Air 540: no utterly useless bays blocking airflow and disrupting the clean front-to-back lines. We had a discussion about this before, but I'm also excited for ATX to die. A standard made decades ago....

        • Krogoth
        • 5 years ago

        You realize that SATA Express allows devices to plug directly into it? The cable is only there for legacy support (2.5″, 3.5″, and 5.25″ devices). M.2 was a prototype of sorts for SATA Express.

          • ikjadoon
          • 5 years ago

          Huh? SATA Express, from this Anandtech article (http://www.anandtech.com/show/7843/testing-sata-express-with-asus/2 ):

          “Officially SATA Express (SATAe from now on) is part of the SATA 3.2 standard. It’s not a new command or signaling protocol but merely a specification for a connector that combines both traditional SATA and PCIe signals into one simple connector.”

          SATA Express is just that monstrosity of a connector. M.2 is my preferred connector for NVM Express, not SATA Express.

          And, what exactly do you mean, “devices plug directly into it”? That SATA Express connector is placed on some motherboards perpendicularly: it’s going to be like a stick of tall RAM? Not very elegant. M.2, laying flat along the motherboard, sounds a lot more sturdy and clean.

            • September
            • 5 years ago

            M.2 is the solution for boot drives, period. For now x2 Gen2 SATA may be common because it is the most compatible for booting, but once NVMe is supported on most motherboards that will be the future with x4 Gen3.

            I won’t ever use SATA Express. Not ever. In any build. Or at work. It will be a niche artifact that is purged from the ecosystem. It’s ungainly and ugly, with bulky connectors and wide cables, and it’s confusing for non-Krogoths.

            SATA3 will always be around for standard SSDs that can be moved between systems and for mechanical drives. PCIe will be there for special high-performance storage besides the boot drive – for video editing and HP databases. M.2 will be the bolted-down boot drive on every motherboard.

            /rant

            • Krogoth
            • 5 years ago

            SATA-Express has more headroom than SATA-600, which is currently a bottleneck for the newest generation of 2.5″ SSDs. The extra cables are the only way to overcome this limitation; USB 3.0+ likewise has extra pins for bandwidth and power delivery.

            M.2 is going to be only for portables and “extras” on higher-end desktop motherboards.

            PCIe SSD cards are going to be prosumer/enthusiast-tier, just like their spiritual SCSI predecessors.

      • Bauxite
      • 5 years ago

      “PCI Express versions were just a stop-gap solution while NVM Express is the real deal.”

      Did you even read the article? PCI Express is the bus interface and is a “stop-gap” solution for diddly squat; it’s as close to the CPU as you’re going to get in anything not baked into silicon. No new dumb form factors needed for servers and workstations. AHCI and NVMe are protocols, NVMe is already here, and that PCI Express SSD supports it. http://www.nvmexpress.org/products/

      I know a place already running a couple terabytes of those P3700s to crunch data and they are amazing. I’m holding out for the P3500 to show up in the channel; it’s a lot cheaper for people who don’t write dozens of TB/day. Still overkill for an enthusiast, but why not.

        • Krogoth
        • 5 years ago

        The older generation of PCIe SSD cards required bridge chips and used older protocols in order to work correctly, since PCI Express was never designed to handle storage needs on its own. That’s why there were known limits, such as not being usable as a boot device without hacks.

        NVM Express is a rework (mostly firmware) of the PCIe controller that allows it to handle storage needs on the fly and doesn’t require bridge chips or legacy support.

        NVM Express technically doesn’t require a new form factor, but that doesn’t stop storage companies from creating artificial ones to promote market segmentation as seen with SAS and SATA.

      • NTMBK
      • 5 years ago

      SATA Express is a monstrosity, and thankfully seems to be dead in the water. Not seen a single drive that supports that fugly connector. M.2 NVMe, or GTFO.

      Intel are at least creating thin mini-ITX; it’s pretty darn small, and specifying the socket position allows for much more efficient cooler designs than the old “tower of power” designs. You can finally get the cooler integrated into the chassis, which we lost when BTX died.

      They don’t seem to be doing an awfully good job of pushing it, though- they seem more interested in selling their NUCs.

        • Krogoth
        • 5 years ago

        SATA Express isn’t a monstrosity. The cabling design was done entirely because of legacy support. There’s still a massive amount of SATA devices. It would be utterly foolish to create a standard that wasn’t compatible (M.2 isn’t).

        It isn’t dead in the water either, since it has only been a few months since the first-generation SATA Express boards came to market. SATA Express is in the same place SATA-I was back in the day (it took almost two years for SATA HDDs to appear outside of tech demos, and even longer for ODDs). It will probably take a while for SATA Express devices to appear, since there’s no mainstream application that needs anything more than what SATA-600 can muster.

        M.2 will be exclusive to portables and tiny form factors like older “mobile” I/O standards (ExpressCard, PCMCIA). It might be thrown in as “Extras” for higher enthusiast-tier motherboards.

        NVM Express is going to be exclusive to the prosumer/enterprise market like its spiritual predecessor, SCSI.

        • dragontamer5788
        • 5 years ago

        Both SATA Express and M.2 are part of SATA revision 3.2.

        Multiple form factors have been proposed, but they’re all electrically similar. Both are being pushed by the SATA-IO organization.

        https://www.sata-io.org/sata-express
        https://www.sata-io.org/sata-m2-card

    • praxum
    • 5 years ago

    Micron has a bunch of PCIe SSD Options.

    http://www.micron.com/products/solid-state-storage/bus-interfaces/pcie-ssds

    I have used these at work in Dell R720 servers; they are extremely fast.

      • chuckula
      • 5 years ago

      Intel & Micron have worked together in flash development. Do you know if those PCIe SSDs use a special controller from Micron or the Intel controller?

      • September
      • 5 years ago

      Where do you buy these? Nothing at newegg, some iffy listings on Amazon…

    • sweatshopking
    • 5 years ago

    DAMN BRO. THAT’S SOME MOTHER FLIPPING PERFORMANCE.

      • bthylafh
      • 5 years ago

      It made your mom do a 180.

        • sweatshopking
        • 5 years ago

        Nope. Deanjo does that. They’re the same age, and his bronzed farming body DRIVES HER CRAY CRAY
