Threadripper owners can now fire up their NVMe RAID arrays

Combining NVMe drives for enhanced throughput or redundancy on high-end desktop platforms has been a contentious issue. Intel promised NVMe RAID capability on its X299 platform, but the company restricts booting from an array to arrays built from its own SSDs. On top of that, the blue silicon giant will likely charge users a premium for an "NVMe key" and has yet to release the drivers and firmware needed to make it happen. AMD's Threadripper platform has lacked NVMe RAID entirely, until now. The silicon underdogs from Santa Clara announced this morning that NVMe RAID levels 0, 1, and 10 are coming to the X399 platform as a complimentary upgrade through a BIOS update. There will be no restrictions on the brands or models of the six drives that can go into an NVMe array.

AMD claims that an experimental six-disk RAID 0 array scaled to a perfect 6x the speed of a single drive when reading and 5.4x when writing. The company's community update page says the array achieved a ludicrous 21.2 GB/s when reading from the sextet of drives. The post doesn't specifically say what kind of drives AMD used to achieve that figure, though its screenshots suggest a collection of 512 GB Samsung 960-series drives.
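
For the curious, a quick back-of-the-envelope check shows how those numbers hang together. This is just a sketch; the ~3.5 GB/s single-drive figure below is an assumption based on the rated sequential reads of Samsung's 960-series drives, not a number from AMD's post:

    # Sanity-checking AMD's claimed RAID 0 scaling (Python).
    # The 3.5 GB/s single-drive read speed is an assumed drive rating,
    # not a figure from AMD's community update.
    array_read_gbs = 21.2    # AMD's claimed six-drive read throughput
    drives = 6

    per_drive = array_read_gbs / drives
    print(f"Implied per-drive read speed: {per_drive:.2f} GB/s")    # ~3.53 GB/s

    assumed_single = 3.5     # assumed 960-series sequential read, in GB/s
    print(f"Scaling vs. assumed single drive: {array_read_gbs / assumed_single:.1f}x")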

While AMD's NVMe RAID functionality doesn't require software keys or specific types of drives, not everything is sunshine and roses. NVMe RAID support requires a BIOS update from the motherboard manufacturer, and users must change their disk configuration from SATA or AHCI to RAID. Threadripper owners with an existing RAID array can't perform an in-place driver or BIOS upgrade to add NVMe RAID capability. Users with a SATA RAID array have to back up all their data and break down the array before installing a BIOS update that adds NVMe RAID support. If that array is bootable, Windows 10 needs to be installed afresh, a task that presumably goes pretty fast with six speedy drives in a RAID array, 16 to 32 hardware threads, and four memory channels.

The feature only supports Windows 10, at least for now. Gerbils already ripping up threads or considering AMD's X399 platform can read more on the company's NVMe RAID support page and the community update page.

Comments closed
    • pirate_panda
    • 2 years ago

    “complementary upgrade”

    Did you mean complimentary, as in free?

      • Chrispy_
      • 2 years ago

      No, the upgrade just tells you that you are wise and smart beyond your years for buying a Threadripper, then comments on how well you are looking today, and have you changed your hair? It suits you.

        • geniekid
        • 2 years ago

        Oh hell. That’s all I ever really wanted from my computing platform.

        • dpaus
        • 2 years ago

        Given that Apple mostly caters to the fashion market these days, the reality that Siri doesn’t do this means sum1 got sum ‘splainin’ ta do…

          • NeelyCam
          • 2 years ago

          Wow, dpaus is alive!!!

            • dpaus
            • 2 years ago

            Do you have ANY IDEA what the interest rate is these days on overdue beer-n-wings??

            • Chrispy_
            • 2 years ago

            I’m guessing NeelyCam is not going to like the answer 😉

    • Bauxite
    • 2 years ago

    There is an Epyc board out there set up to use 96 (!) of the 128 lanes for NVMe drives, typically via a U.2 backplane in a chassis, but probably possible with a bunch of M.2 adapters if you want. That would make one hell of a ZFS array or Hadoop/Ceph/whatever node; you’d probably actually need the other 32 lanes just for NICs able to handle that much bandwidth.
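
    Rough lane math backs that up. A quick sketch; the ~985 MB/s of usable bandwidth per PCIe 3.0 lane is a common rule-of-thumb figure, not something from any board’s spec sheet:

    [code]
    # Back-of-the-envelope PCIe 3.0 bandwidth for a 96-lane NVMe Epyc setup.
    # ~985 MB/s usable per lane is a rule-of-thumb estimate, not a spec quote.
    MB_PER_LANE = 985

    nvme_lanes, nic_lanes = 96, 32
    drives = nvme_lanes // 4                         # each NVMe drive takes x4
    storage_gbs = nvme_lanes * MB_PER_LANE / 1000    # aggregate storage bandwidth
    nic_gbs = nic_lanes * MB_PER_LANE / 1000         # what's left for the NICs

    print(f"{drives} x4 drives, ~{storage_gbs:.0f} GB/s of storage bandwidth")
    print(f"~{nic_gbs:.0f} GB/s for NICs, roughly {nic_gbs * 8 / 100:.1f} 100GbE ports")
    [/code]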

      • lesd
      • 2 years ago

      All in a 1U: [url=https://www.hpe.com/us/en/product-catalog/servers/cloudline-servers/pip.hpe-cloudline-cl3150-g4-server.1010129707.html]HPE CL3150[/url]

      • msroadkill612
      • 2 years ago

      Up to 24x NVMe is a common spec for Epyc boards.

    • Shinare
    • 2 years ago

    I’m just wondering what workstation application would benefit from such an increase in I/O. I know in my day-to-day job a single NVMe drive is already dang fast, and it doesn’t seem like things are I/O bound. But none of my users do video editing or medical imaging or anything along those lines.

      • ludi
      • 2 years ago

      GIS and mapping.

      • msroadkill612
      • 2 years ago

      This blurs the boundaries between memory and storage.

      Imo, the most pervasive use will be standing in for scarce and costly memory and serving as workspace for scratch files.

      A simple example: if you had a triple-NVMe RAID 0 array and set Windows swap and temp files to reside there, you could live well on a mere 8GB of DDR memory.

      A better example is Vega’s HBCC allowing such storage to be used as a GPU cache extender.

    • gmskking
    • 2 years ago

    Can’t wait until these are cheap/big enough to replace mechanical drives for bulk storage. Shouldn’t be too much longer.

      • DPete27
      • 2 years ago

      Don’t hold your breath.

      • Waco
      • 2 years ago

      It’ll be a long time if you want them to be the same cost/bit stored. They’re at a 10-20x penalty in cost today.

        • just brew it!
        • 2 years ago

        This. Cost/bit of HDD will also continue to drop; I’m not sure if flash will ever reach price parity.

        Heck, people have been predicting the death of tape for years too, and it’s still around.

          • the
          • 2 years ago

          Tape has the benefit of crazy capacity that can be stored offline safely. In this era of large data breaches, having archival data physically offline is looking more like a feature.

          Improvements in spinning disk have been slowing down in commercial models. From the sounds of it, Western Digital and Seagate still have a slew of improvements in their labs, but the cost of bringing them to market increases the price per gigabyte. If that gets too close to flash, why not just go flash?

          The other problem disk faces is that while prices may drop, they’re not dropping at the rate flash prices traditionally have. (This year is really the first year flash prices have started to level off.)

            • just brew it!
            • 2 years ago

            I believe flash is approaching the point at which technology limitations will slow down increases in density and cost/GB. We probably need to start looking beyond flash for the next big leap.

          • Waco
          • 2 years ago

          Tape is good for a decade+ of improvements at the minimum. Flash prices are only high due to demand outstripping capacity – once more fabs come online (which takes a long time and megabucks) they can pummel HDDs out of existence.

            • just brew it!
            • 2 years ago

            I have to wonder if flash is getting close to hitting a density wall though. Planar flash already hit the wall where the cells were getting too small to reliably support TLC; that’s why we’re seeing 3D flash with increasing numbers of layers.

            HDDs also have an opportunity to improve cost/GB by moving to alternate form factors which can accommodate more platters. This amortizes the cost of the motor, interface electronics, and controller port across more platters than conventional designs.

            • Waco
            • 2 years ago

            Sure, but now it’s almost entirely production bound – more density just means more GB shipped. If production capacity was available to match HDDs in terms of GB shipped…we wouldn’t have HDDs any more.

            • just brew it!
            • 2 years ago

            I’m still not convinced that the actual cost of production is low enough for that to happen. Making 3D chips is fundamentally a much more complex operation than coating a metal or glass disc and polishing it.

            Yes, I expect HDDs to all but completely disappear from the consumer market. But I think they’ll continue to have a place in datacenters.

            Much the same way that tape was once a consumer tech, but now is used only in enterprise applications.

            • Waco
            • 2 years ago

            I hope you’re right, to be honest. I like dealing with HDDs far more than cheap consumer flash…even SMR HDDs. 😛

            • frenchy2k1
            • 2 years ago

            ^THIS
            HDDs are not going anywhere as a technology.
            It may seem like the rate of density growth has slowed, but that’s false; HDDs have become bigger and cheaper faster than flash has.
            We went from 4TB to 6TB to 8TB and now 10TB HDDs in a few years.
            Look at their road maps and see that they plan to continue this growth.

            On the other hand, this has become irrelevant for client computing. Most people are happy enough with 1TB or less. Cell phones typically still have <100GB. Laptops are fitted with 512GB or less of SSD. In that space, HDDs make no sense anymore, because the price penalty you pay for SSDs is easily offset by the performance gains.

            TLDR: consumers use SSDs, as the price for small ones is competitive, while mass storage has moved to the cloud onto HDDs (+flash cache).

            • Waco
            • 2 years ago

            Eh, I’ve seen the roadmaps (including the non-public ones) and I’m not convinced. 🙂

            In the time we went from 4 TB to 12 TB in HDDs, we went from 256 GB to 30+ TB SSDs…

            • just brew it!
            • 2 years ago

            Yeah, but the higher capacity SSDs are stupid expensive, whereas there’s only a slight price premium (in $/GB) for the larger HDDs.

            • Waco
            • 2 years ago

            Sure, but they’re capacity constrained at the fabs, so it makes sense. If raw “cheap” flash gets into the 5 cent/GB territory it’s going to be hard to justify HDDs even for enterprise uses unless they drop in price by quite a bit.

            Personally I’m just hoping for the latter – ultra cheap HDDs would make me very happy. 🙂

    • chuckula
    • 2 years ago

    I’d be curious to see what the pros/cons of this solution over a plain software RAID-0 would be [hint hint followup article TR].

    Putting 6 NVME drives in a RAID-0 array sounds like a lot of fun until one of the NVME drives decides to go over the cliff.
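
    The math on that cliff is worth spelling out. A quick sketch, assuming independent failures and a purely illustrative 2% annual failure rate per drive:

    [code]
    # Odds of losing an N-drive RAID 0 array in a year, assuming independent
    # failures. The 2% annual failure rate is an illustrative assumption.
    p_drive = 0.02

    for n in (1, 2, 6):
        p_array = 1 - (1 - p_drive) ** n    # RAID 0 dies if ANY member dies
        print(f"{n} drive(s): {p_array:.1%} chance of losing the array")
    [/code]

    Six drives at 2% apiece works out to about an 11% chance of losing everything in a given year, versus 2% for a single drive.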

      • kuraegomon
      • 2 years ago

      I can think of many applications for the kind of super-high-throughput, high-IOP storage that RAID0 (or 0+1) NVMe would give you. It’s just that if I was deploying those applications, I sure wouldn’t be doing it on Threadripper/TR4. Epyc or Xeon multi-socket would be my weapons of choice.

      • Waco
      • 2 years ago

      Totally depends on the *type* of software RAID. Windows RAID, in my experience, has been fine for performance but lousy for actually writing the bits properly in the long run. Mdraid has a good track record but I’ve never built a high-performance RAID with it.

      I’d love to see the comparison though. 🙂

        • chuckula
        • 2 years ago

        The chips that go into these motherboards have all kinds of cores that spend a lot of time not doing much. Might as well put them to work.

          • just brew it!
          • 2 years ago

          Presumably you bought all those cores because you wanted to use them for computational tasks. Doing software RAID at NVMe speeds is going to be pretty CPU intensive.

          OTOH, I’d be willing to bet that this “native” solution is just “fake RAID” anyway. I.e. bare minimum BIOS support to maintain and boot from the array, with RAID duties during normal operation handled in software (but hidden down in the device driver).

          So meh.

            • chuckula
            • 2 years ago

            Outside of a few microbenchmarks, even most of TR’s multi-threaded benchmarks leave plenty of unused resources in a 1950X, and there are very few workloads out there that rail the CPU’s ALUs at 100% while also railing disk I/O (which is not the same as RAM I/O).

            Basically, even in a heavily utilized Threadripper system there are going to be free resources and you might as well put them to use.

      • dragontamer5788
      • 2 years ago

      [quote]I'd be curious to see what the pros/cons of this solution over a plain software RAID-0 would be [hint hint followup article TR].[/quote]

      The general rule of thumb seems to be software RAID for the win. Ultimately, these motherboard drivers end up using the CPU for the RAID calculations anyway. I'd imagine that software RAID 0 (ie: Windows Storage Spaces or Linux LVM/mdRAID) wouldn't be much slower and would also be a heck of a lot more reliable.

      If there's some sort of secret sauce with regards to caching or whatnot, maybe the BIOS RAID will be worth it. Further testing is definitely required.
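
      For the Linux half of that comparison, a minimal mdraid sketch (device names are hypothetical; it needs root and destroys any data on the drives):

      [code]
      # Sketch: assembling a six-drive NVMe RAID 0 with Linux mdadm.
      # Device names are hypothetical; run as root. DESTROYS data on the drives.
      import subprocess

      drives = [f"/dev/nvme{i}n1" for i in range(6)]

      subprocess.run(
          ["mdadm", "--create", "/dev/md0",
           "--level=0",                        # RAID 0: striping, no redundancy
           f"--raid-devices={len(drives)}",
           *drives],
          check=True,
      )
      [/code]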

        • SlappedSilly
        • 2 years ago

        I always figured BIOS RAID was about boot drives. Software RAID can’t help there (I assume).

    • Krogoth
    • 2 years ago

    RAID doesn’t really make much sense for solid-state media due to the nature of how they typically fail. They already yield an insane amount of I/O throughput from a single drive, especially NVMe units.

      • stdRaichu
      • 2 years ago

      …but you know some people will just want 2× insane I/O 😉

      Regardless of the failure mode, if one of your RAID 1 pairs goes south it’ll still be nice to know you can drop another one in and carry on as if nothing happened. One thing I’m curious about, though, is how one would tell which NVMe device has failed, not having any I/O or failure lights…? Hmm, this sounds like a job for RGB LEDs.
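
      Short of RGB, the usual answer is polling each device’s health from the OS. A sketch using the nvme-cli tool (device names hypothetical):

      [code]
      # Sketch: checking per-device NVMe health with nvme-cli, since the
      # sticks have no failure LEDs. Device names are hypothetical.
      import subprocess

      for dev in ("/dev/nvme0", "/dev/nvme1"):
          print(f"--- {dev} ---")
          # a nonzero "critical_warning" in this output flags a sick drive
          subprocess.run(["nvme", "smart-log", dev], check=True)
      [/code]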

      • the
      • 2 years ago

      Depends. NVMe has opened up the wide world of high-speed storage. There are applications that genuinely require it, and NVMe RAID assists there. Think crazy things like 8K video capture. Not a consumer thing, but something professionals will want in their workstations.

      • Waco
      • 2 years ago

      I have an 8-way RAID0 array on my desktop. It’s just a bunch of cheap SATA drives, but it’ll do 4 GB/s pretty easily in many workloads.

      It was worth it for the fun factor alone, and it’s backed up regularly in case of drive failure.

      • End User
      • 2 years ago

      I think the benefits (so fast and redundancy) far outweigh the negatives:

      “Comparison with hard disk drives

      An obvious question is how flash reliability compares to that of hard disk drives (HDDs), their main competitor. We find that when it comes to replacement rates, flash drives win. The annual replacement rates of hard disk drives have previously been reported to be 2-9% [19,20], which is high compared to the 4-10% of flash drives we see being replaced in a 4 year period. However, flash drives are less attractive when it comes to their error rates. More than 20% of flash drives develop uncorrectable errors in a four year period, 30-80% develop bad blocks and 2-7% of them develop bad chips. In comparison, previous work on HDDs reports that only 3.5% of disks in a large population developed bad sectors in a 32 months period – a low number when taking into account that the number of sectors on a hard disk is orders of magnitudes larger than the number of either blocks or chips on a solid state drive, and that sectors are smaller than blocks, so a failure is less severe. In summary, we find that the flash drives in our study experience significantly lower replacement rates (within their rated lifetime) than hard disk drives. On the downside, they experience significantly higher rates of uncorrectable errors than hard disk drives.”

      [url]http://0b4af6cdc2f0c5998459-c0245c5c937c5dedcca3f1764ecc9b2f.r43.cf2.rackcdn.com/23105-fast16-papers-schroeder.pdf[/url]

      Nobody say "RAID is not a substitute for a backup"!

        • dragontamer5788
        • 2 years ago

        “Serious” solutions are generally handled at a higher, filesystem software level now, like Windows Storage Spaces or ZFS from the *NIX world.

        The big question for RAID 0 is simply one of speed. Because a lot of web-hosting jobs are database- or storage-heavy (think YouTube, which serves lots of parallel video streams, or Google, which needs to search through tons of data very quickly), some of those jobs are even handled by RAM disks.

        Since PCIe flash is cheaper than RAM by a significant factor, and RAID 0 can tie drives together for very high performance, we might be entering an age where PCIe flash is a superior option to DDR RAM disks (mostly for the cost-effectiveness of the solution; I’d expect PCIe flash to be much slower, but also much cheaper).

      • davidbowser
      • 2 years ago

      If you have the means…

      [url]https://www.youtube.com/watch?v=GV2Y2kIUkIs[/url]
