ASRock Ultra Quad M.2 Card puts four NVMe SSDs in a PCIe slot

These days there are motherboards out there with three, four, and even five M.2 sockets. Most boards only have one or two, though. If you've got a need for a bundle of NVMe storage, something like the ASRock Ultra Quad M.2 card is probably what you'll want. This card is covered in an attractive brushed-aluminum heatsink parted by a 50-mm centrifugal fan. Inside, there are four M.2 sockets. The good folks at TechPowerUp got a close look at the card at CES, and here it is in all its glory.

Photo: TechPowerUp

ASRock makes direct comparisons to the similar Asus Hyper M.2 x16 Card. The differences between the two cards are subtle but significant. The ASRock product has a slightly larger fan, and it takes power from a 6-pin PCIe power connector. It also mounts its M.2 sockets differently—the Asus card mounts the drives parallel to the PCI Express slot, while the ASRock Ultra Quad M.2 has its sockets angled up from the bottom of the card.

Slide from ASRock, via TechPowerUp

According to ASRock, the angled arrangement of the M.2 sockets minimizes trace distance from the sockets to the PCIe slot. The back panel of the card has activity lights for all four sockets, along with ventilation for the heat that four NVMe drives are likely to produce. ASRock attaches thermal pads to the heatsink over the M.2 sockets, and the Ultra Quad card comes with a configuration utility to control fan speed.

It's worth noting that these cards won't be fully functional in most desktop motherboards. There's no PCIe switch on board, so the host motherboard has to support PCIe bifurcation of its x16 slot. That feature is common on high-end desktop platforms, and indeed ASRock recommends the card as an NVMe RAID solution for Intel X299 and AMD X399 systems. According to TechPowerUp, ASRock expects the Ultra Quad M.2 to go for $70 when it launches.
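
If you're curious whether a given board is actually splitting its x16 slot four ways, one quick sanity check on a Linux box is to count the NVMe controllers that enumerate and read each one's negotiated link width out of sysfs. The short script below is only an illustrative sketch of that check using standard sysfs attributes; it isn't a utility ASRock provides.

    #!/usr/bin/env python3
    # List every NVMe controller the system enumerated and report the PCIe
    # link each one negotiated. With this card in a properly bifurcated x16
    # slot, you'd expect four controllers, each at x4.
    # Illustrative sketch only; not something ASRock ships with the card.
    import glob
    import os

    for ctrl in sorted(glob.glob("/sys/class/nvme/nvme*")):
        pci_dev = os.path.realpath(os.path.join(ctrl, "device"))
        try:
            with open(os.path.join(pci_dev, "current_link_width")) as f:
                width = f.read().strip()
            with open(os.path.join(pci_dev, "current_link_speed")) as f:
                speed = f.read().strip()
        except FileNotFoundError:
            width, speed = "?", "?"
        print(f"{os.path.basename(ctrl)}: {os.path.basename(pci_dev)} "
              f"at x{width}, {speed}")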

Comments closed
    • davidbowser
    • 2 years ago

    For comparison, NVMe RAID cards in this form factor go for $400.

    HighPoint SSD7101A-1

    https://www.newegg.com/Product/Product.aspx?Item=N82E16816115217

    • Bauxite
    • 2 years ago

    Coincidentally…well no, not really 😉 ASRock was the first to bring bifurcation to consumer boards. They support it on quite a few now.

    The power connector actually makes sense, as for various reasons PCIe slots may limit themselves to <25W of power draw instead of ~75W. Things like the new 800p Optane SSDs are going to push the typical power envelope of M.2 up a bit. It doesn't hurt either, as more power connections generally help circuit stability.

    • DPete27
    • 2 years ago

    Why is that an axial fan instead of a centrifugal fan?

      • RdVi
      • 2 years ago

      Yep, it has a steep blade pitch, so it should work well enough for the purpose, I would assume, but it's definitely not centrifugal.

        • GrimDanfango
        • 2 years ago

        It also definitely cost 10p to source, and will emit a constant ear-piercing whine from the moment it’s turned on.

        Nice device for a nice price, but you know that fan cable is getting unplugged within minutes of installation 😛

          • Wirko
          • 2 years ago

          It says “variable speed”, so there will be variable ear-piercing whine, not constant. Imagine your neighbour drilling holes into the ceiling.

        • Chrispy_
        • 2 years ago

        The intake shroud that exposes only the blades nearest the hub, and the lack of a frame for the fan, mean that this will work as a centrifugal fan no matter what shape the blades really are. (source: qualified fluid-dynamics engineer)

      • RAGEPRO
      • 2 years ago

      You’re right of course, my mistake. Sorry about that.

        • DPete27
        • 2 years ago

        I think you had it right the first time. Centrifugal fans are like those in laptops, and I think one would work much better in this implementation. This is definitely an axial fan.

    • wingless
    • 2 years ago

    $70?! That’s an excellent price. I’d like to see if it works with older systems in some fashion as well.

      • Kjella
      • 2 years ago

      It won't. PCIe bifurcation is what lets you split one x16 physical slot into four x4 logical slots. With no switch on the card either, the last three SSDs won't work at all. And that's why it's cheap: it's just traces passing through plus a cooling solution, while the mobo/SSDs do all the heavy lifting.

      Even if they did support it, you'd need 16 free PCIe lanes; mainstream systems wouldn't have enough spare unless you dropped the GPU. So it takes a rather niche system to be able to use this card in the first place…
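
      Rough lane math, if anyone wants it spelled out (back-of-the-envelope numbers for a generic mainstream platform, nothing measured on real hardware):

          # Back-of-the-envelope lane budget for a mainstream (non-HEDT) desktop.
          # Generic illustrative figures, not any specific board.
          CPU_LANES = 16       # PCIe lanes off a mainstream desktop CPU
          GPU_LANES = 16       # a graphics card normally claims all of them
          CARD_NEEDS = 4 * 4   # four M.2 sockets at x4 each

          free = CPU_LANES - GPU_LANES
          print(f"Lanes left for the card: {free} of {CARD_NEEDS} needed")

          # Drop the GPU to x8 and the card still gets only half of what it
          # wants, which is why only two of its four M.2 sockets would come up.
          free = CPU_LANES - 8
          print(f"GPU at x8: {free} lanes free -> {free // 4} working M.2 sockets")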

        • Chrispy_
        • 2 years ago

        Don't a lot of higher-end motherboards (and almost all of the Ryzen/Threadripper ones) support full PCIe 3.0 x16 in at least two slots? Isn't it only Intel's asinine product segmentation that keeps their mainstream processors deprived of PCIe lanes?

        I feel sorry for the people who bought 7700Ks billed as high-end chips when they (and their platform) are really quite low-rent in terms of features and bandwidth, but four NVMe drives is not the sort of thing going into an economical machine. This will be paired with a Ryzen 7, Threadripper, Skylake-X, etc., where there are plenty of PCIe lanes remaining even after 16 are dedicated to the GPU.

          • the
          • 2 years ago

          Even if you have enough PCIe lanes available, this may still not work. The PCIe root complex can drive only a fixed number of devices, regardless of lane width. This card essentially puts four devices into a slot where the root complex was expecting a single device.

          This is why cards with bridge chips on them are preferred, as the root complex assigns the entire slot's bandwidth to the bridge chip itself.

            • Bauxite
            • 2 years ago

            I wouldn't say preferred; bridge chips have their own caveats and potential issues. Easier to deploy and more widely supported, definitely. If you do have bifurcation and know it, it's technically lower latency, etc.

            • jts888
            • 2 years ago

            I'm with Bauxite and don't consider a switched fan-out card a clear win. A 32-lane PEX 8732 switch sucks down an extra ~6W and hasn't exactly gotten cheaper since Broadcom bought PLX.

            Moreover, most new on-die PCIe controllers support high degrees of bifurcation (e.g., Zeppelin has 2 blocks of 4+4+4+2+2 lanes) and pretty high overall lane counts, so the strongest proposition IMO for a switched card would be to have 4 NVMe sticks sharing an x8 slot and still individually being able to hit 4 GB/s transfers.
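
            To put theoretical numbers on that x8-sharing idea (the usual PCIe 3.0 figures, before protocol overhead; rough math, not a benchmark):

                # Theoretical PCIe 3.0 throughput, counting only 128b/130b encoding
                # and ignoring packet/protocol overhead.
                lane_gbps = 8.0 * (128 / 130) / 8   # ~0.985 GB/s per lane

                x4_drive = 4 * lane_gbps            # one NVMe stick at x4
                x8_uplink = 8 * lane_gbps           # switched card's uplink to the host

                print(f"One x4 drive:  ~{x4_drive:.2f} GB/s")
                print(f"x8 uplink cap: ~{x8_uplink:.2f} GB/s")
                # Behind a switch on x8, any single drive can still burst to ~3.9 GB/s,
                # but all four together share the ~7.9 GB/s uplink.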

            All that said, a card like the ones from Asus and ASRock should cost under $30 given that there's virtually nothing on it, but I doubt they'll ever reach that point given the size and general spendthriftiness of the target demographic.

            • the
            • 2 years ago

            The problem with bifurcation on modern consumer systems is that the primary x16 PCIe slot is often already sharing lanes with other devices on the motherboard. For example, populating the onboard M.2 slots on some systems will drop the primary PCIe slot down to x8. In that scenario, two of the sockets on this carrier card would not function.

            The other nice thing about a switched card is that it can be installed into older systems that don't support bifurcation. There are plenty of Sandy Bridge-era systems still in use, for example.

            • msroadkill612
            • 2 years ago

            Which makes 32 lanes per Zeppelin die, therefore 64 for TR and 128 for Epyc.

            So what happened to the missing lanes on Ryzen? Only 24 are in evidence, afaik.

            Further, can we assume the RR APU will be 16 lanes including the chipset, as it is half a Zeppelin (one CCX)?

            Maybe they are held in reserve, much as Intel gimps resources on their chips. When Intel offers more lanes, so will Ryzen.

          • Bauxite
          • 2 years ago

          If your consumer BIOS does not have a specific entry for bifurcation, this won't work at all.

          Many (all?) TR boards have it, it's pretty common on X299, and it started popping up on some mainstream-socket boards back around the Z170 and X99 refresh, but it's still uncommon there.

          Various C6xx chipset E5 boards have been doing this for a long time.

          • Airmantharp
          • 2 years ago

          In my opinion, this part exceeds ‘high-end’; this is a boutique part with very specific use cases, as most boards have plenty of connectivity for nearly every heavy consumer and workstation workload.

          • derFunkenstein
          • 2 years ago

            No single-die Ryzen chip has enough lanes to do 2×16. But that's not even the problem here – the motherboard has to support multiple devices in the same slot. That's why the board has to support bifurcation.

            • Chrispy_
            • 2 years ago

            Ah okay.

            I assumed it was all down to the CPU, since the PCIe lanes in the two primary slots usually come directly from the CPU, and it's the other x4 and x1 slots that usually run off the PCH or MCP or whatever the southbridge gets called by your chosen vendor.

          • msroadkill612
          • 2 years ago

          Exactly.

          For god's sake, it's a 16-lane card. If you haven't got 16 spare lanes, you have a major problem.

          That rules out ~all Intel and Ryzen systems with a GPU installed.

          You may use it in 8-lane form if you run your GPU at 8 lanes. That frees enough lanes for the card to run two of its four M.2 sockets.

          This card's true potential is only available to lane-rich TR/Epyc mobos. Its arrival is testimony to the market success of Zen.

          Within 6 months, RAID NVMe will be at the top of most wish lists, and even premium Intel systems will be excluded or very restricted.

          Onboard Intel NVMe ports are frauds. They, and other resources, must share the same bandwidth (4 lanes = 4 GB/s less some overhead) that a single good NVMe drive needs. It works OK if you use single drives alternately, but fast RAID arrays need many times that bandwidth.
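
          Rough math on that shared link, using the usual theoretical figures (the ~3.5 GB/s per-drive number below is just an assumed fast-drive spec for illustration):

              # Everything hung off the chipset funnels through a DMI 3.0 link,
              # which is electrically equivalent to PCIe 3.0 x4.
              # Theoretical figures; the per-drive number is an assumption.
              lane_gbps = 8.0 * (128 / 130) / 8   # ~0.985 GB/s per PCIe 3.0 lane
              dmi_link = 4 * lane_gbps            # shared chipset uplink
              drive_peak = 3.5                    # assumed fast-NVMe sequential read, GB/s

              for n in (1, 2, 4):
                  want = n * drive_peak
                  print(f"{n} drive(s) want ~{want:.1f} GB/s; the DMI link offers ~{dmi_link:.1f} GB/s")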

        • mikewinddale
        • 2 years ago

          What would happen if you put this into an x16 physical slot that was running at x8? (Many motherboards have two x16 slots that run at x8/x8 when both are occupied.)

          • jts888
          • 2 years ago

          As you'd probably suspect, you'd just have two working NVMe sockets on the card. I don't think there's any reason to suspect that the disabled lanes on the AIB slot would cause interference with the adjacent slot.
