Corsair Neutron NX500 SSD says: “that’s not a heatsink, mate”

Corsair's had SSDs in its lineup for a while now. The Force Series MP500 was the company's most recent foray into the field, and we liked it enough to slap a TR Recommended award on it. However, Corsair hadn't launched a drive of the PCIe add-in card variety until today. Meet the Neutron NX500.

The Neutron NX500 packs MLC NAND flash and comes as a half-height, half-length PCIe x4 card. The drive should be capable of pushing sequential data at up to 3000 MB/s for reads and 2400 MB/s for writes. Those figures are pretty healthy as-is, but the quoted 300K random read IOPS and 270K random write IOPS should whet the appetite of any storage aficionado. Corsair claims the custom-designed heatsink wrapped around the drive lets it sit 20° C cooler than most M.2 NVMe SSDs. Whatever your feelings on the subject may be, we figure more heatsink is always better than less.

The drives are available in 400 GB, 800 GB, and 1.6 TB capacities, all covered by a five-year warranty. The endurance rating for the two smaller models is a healthy 698 TBW. You can get either of those smaller drives from Corsair's webshop right away. The Neutron NX500 400 GB will set you back $320, and the 800 GB version is going for $660. Corsair says the 1.6 TB variant is coming later this month.
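For those keeping score at home, the per-gigabyte math is easy enough to check. A quick sketch in Python (the prices and capacities are the MSRPs quoted above; the 1.6 TB model is omitted since its price hasn't been announced):

```python
# Per-gigabyte launch pricing for the two available NX500 capacities.
drives = {
    "NX500 400GB": (320, 400),  # (price in USD, capacity in GB)
    "NX500 800GB": (660, 800),
}

for name, (price_usd, capacity_gb) in drives.items():
    print(f"{name}: ${price_usd / capacity_gb:.3f}/GB")
# NX500 400GB: $0.800/GB
# NX500 800GB: $0.825/GB
```

Either way you slice it, you're paying north of $0.80 per gigabyte.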

Comments closed
    • crystall
    • 2 years ago

    “That’s no heatsink” – Obi Wan

    • HERETIC
    • 2 years ago

    Full review on Tom’s. This sums it up perfectly:
    “The NX500 is over the top, but there is already a superior product that you can turn to if you decide to invest this much cash in an SSD. The better product simply costs less and delivers more.”

    • the
    • 2 years ago

    Is this a true add-in card or is it really M.2 + adapter card + massive heat sink to actually hide that it is using an adapter card?

    Edit:
    Apparently it is a true add-in card from [url=https://www.pcper.com/reviews/Storage/Corsair-NX500-400GB-NVMe-HHHL-SSD-Review-One-Flashy-SSD<]various reviews.[/url<] The PCB looks to have an enterprise variant in the works, as there is room for various capacitors to account for sudden power loss. Kinda disappointed that all the NAND channels are in use, as I'd like to see higher-capacity drives in the PCIe form factor vs. M.2.

      • MOSFET
      • 2 years ago

      I see your point, but either way, isn’t it still a PCIe 3.0 x4 add-in card? I admit, I’ve been satisfied enough with two Plextor 512M8PeY’s that I haven’t bothered to check. One is a mean VMFS6 datastore and the other is my Win10/VMWorkstation datastore drive. (Btw, NTFS on NTFS is slower than NTFS on VMFS6, or VMware’s NVMe driver is better.)

        • the
        • 2 years ago

        Some of the early add-in cards were just M.2 sticks with a carrier card to fit into a 4x PCIe slot. They worked and functioned the same, but removing the M.2 drive from the carrier voided the warranty. This was lame, as the add-in card version was readily in stock whereas the M.2 version was harder to come by due to its popularity. Actually, I think it was Plextor who was doing this.

        Performance between an M.2 + carrier and a pure M.2 should be the same, but an add-in card grants an opportunity to provide better cooling. Some M.2 slots are located near hot-running GPUs, which can push M.2 SSDs into thermal throttling. Simple heatsinks on an M.2 stick can prevent throttling in most cases but won’t fit alongside everything else on a motherboard. Mounting heatsinks onto an M.2 stick on a carrier card is generally not an issue.

        In the case of the NX500, the add-in card design is going to be used for another product down the line so there is a common PCB. Corsair is also doing it right by providing good cooling to the SSD controller and NAND. So performance of the NX500 add-in card should be the same as an M.2 version that isn’t thermal throttling.

    • ronch
    • 2 years ago

    Cool. Is it made in Australia?

      • PrincipalSkinner
      • 2 years ago

      It’s not made of wool.

    • JosiahBradley
    • 2 years ago

    With HEDT platforms gaining popularity and the price of these drives, why not 8x PCIe?

      • RAGEPRO
      • 2 years ago

      Because that would make it unusable in a huge portion of desktop motherboards without forcing the graphics card to x8 mode. (That said, most motherboards’ PCIe x4 slots are only 2.0 version, so…)

        • willmore
        • 2 years ago

        He said HEDT. You should be swimming in at least 8x slots, if not 16x.

          • RAGEPRO
          • 2 years ago

          Yes. Except, this is a product that has to make money. Most people’s desktops are not HEDT. This is not being sold exclusively to HEDT customers. Ergo, see my last post.

        • Takeshi7
        • 2 years ago

        I already have to force my graphics card into x8 mode when I use an NVMe SSD (unless I want to put it in the PCIe 2.0 slot from the chipset. Yuck). So I may as well use that SSD slot in x8 mode as well. Otherwise I’m just wasting 4 PCIe 3.0 lanes.

          • RAGEPRO
          • 2 years ago

          To be frank it is very unlikely you would ever see any difference, even in massive file copy operations, between PCIe 2.0 x4 and 3.0 x4. 2GB/sec is around the limits of most NVMe SSDs for sequential reads, much less writes, or random accesses.

            • Takeshi7
            • 2 years ago

            But the point is, if I’m already taking 8 lanes away from my GPU, I may as well use all 8 of those lanes on an SSD, instead of just using 4 and wasting the other 4.

            Also no, PCIe 2.0 2GB/s would be a major bottleneck, especially in reads. The 960 EVO can reach 3.5GB/s sequential reads. And that would be exacerbated because the chipset has to share that bandwidth with all of the other devices connected to the chipset.

            • RAGEPRO
            • 2 years ago

            Except, MY point is, you may as well use the PCIe 2.0 x4 slot because it makes no real difference in the performance of your drive.

            • Takeshi7
            • 2 years ago

            3.5GB/s is a very noticeable difference in performance compared to 2GB/s. And if you’re copying data to another SATA SSD, that’ll drop to 1.4GB/s.

            You can’t seriously say it makes no real difference.

            • RAGEPRO
            • 2 years ago

            What SSD do you have that does 3.5GB/sec? What SATA SSD do you have that does 1.4GB/sec?

            • Takeshi7
            • 2 years ago

            Samsung 960 Pro is rated for 3500MB/s. And I’m not saying a SATA SSD does 1.4GB/s. I’m saying that if my SSD is on the chipset’s PCIe 2.0 slot, it is limited to 2GB/s, and other SATA devices on the chipset have to share that bandwidth, so using the SATA port would cut around 600MB/s of bandwidth from the 2GB/s.

            • RAGEPRO
            • 2 years ago

            Sure, one of the (if not the very?) fastest NVMe SSDs that exists will give up some performance when doing [b<]sequential reads[/b<] into RAM, or across 10GbE. Otherwise it's unaffected. If you have 10GbE set up you probably have an HEDT system where all this is moot, and sequential copies into RAM are very rare indeed. Not really impressed by that argument. (The [i<]peak[/i<] random read performance of the 1TB and 2TB 960 Pro drives is 1.76GB/sec.) On the other thing, you're a little confused, I think. Devices connected to the chipset have to share the DMI 3.0 x4 link (4GB/sec) to the CPU, but the PCIe 2.0 x4 connection to the PCH is not shared. Even assuming you have a SATA device that can write at ~560MB/second, you're still not saturating that bidirectional link in either direction. You might lose a SATA port entirely when connecting an M.2 device, but that's another issue entirely.

            • Takeshi7
            • 2 years ago

            I’m on Z97 which is DMI 2.0 (2GB/s), and I have 2 SATA SSDs in RAID 0. If I’m using an encryption application all of the data has to go through the DMI link to the CPU to be encrypted and then back to the chipset. Trust me it’s not that out of the question to bottleneck the chipset. I think they should still make PCIe x8 SSDs so that I can connect it to the CPU without wasting 4 PCIe 3.0 lanes.

            • RAGEPRO
            • 2 years ago

            They do (and x16), but they’re enterprise hardware. 🙂 To be perfectly clear, I’m [u<]not[/u<] endorsing this statement, but the view of those companies is that if you [i<]need[/i<] that kind of storage performance, you should be on HEDT or server hardware. This all just kinda leads back to the same thing I told willmore—most people can use or make use of an x4 SSD, most people can't (or won't) make use of an x8 SSD.

            • juzz86
            • 2 years ago

            Or if not that, then at least a switch that can divert four lanes away and leave the other twelve for the GPU, rather than divert eight away and use four.

            My Z97 board does this too (ASRock one, with Ultra M.2).

            • Waco
            • 2 years ago

            The PCIe spec (and architecture) doesn’t allow for non-base 2 lane counts.

            • juzz86
            • 2 years ago

            Thought it might be along those lines, thanks mate. +1
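The lane-count arithmetic tossed around in the thread above follows straight from PCIe's per-lane rates. A quick sketch (the nominal per-lane payload figures come from the PCIe spec's signaling rates and encoding overhead; the helper function is ours, for illustration only):

```python
# Approximate one-direction PCIe link bandwidth after encoding overhead.
# PCIe 2.0 runs 5 GT/s with 8b/10b encoding   -> 500 MB/s per lane.
# PCIe 3.0 runs 8 GT/s with 128b/130b encoding -> ~985 MB/s per lane.
PER_LANE_MBPS = {"2.0": 500, "3.0": 985}

def link_bandwidth_gbs(gen: str, lanes: int) -> float:
    """Nominal one-direction bandwidth of a PCIe link, in GB/s."""
    return PER_LANE_MBPS[gen] * lanes / 1000

print(link_bandwidth_gbs("2.0", 4))  # 2.0  -- a chipset-attached x4 slot
print(link_bandwidth_gbs("3.0", 4))  # 3.94 -- a CPU-attached x4 slot
print(link_bandwidth_gbs("3.0", 8))  # 7.88 -- the hypothetical x8 SSD
```

Those numbers are why a 2.0 x4 slot caps out right around what midrange NVMe drives can read sequentially, while 3.0 x4 leaves headroom for the fastest drives.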

      • Forge
      • 2 years ago

      Because M.2 is set up for a maximum of four lanes of PCIe. This is an M.2 card on a carrier, with a heatsink.

        • K-L-Waster
        • 2 years ago

        Actually no….

        [url<]https://www.pcper.com/files/imagecache/article_max_width/review/2017-08-10/170630-180533.jpg[/url<] Real actual PCB, not a carrier.

    • FuturePastNow
    • 2 years ago

    This seems unnecessary.

      • MOSFET
      • 2 years ago

      What in the fack would make you say that? I love faster storage.

    • PBCrunch
    • 2 years ago

    These appear to be priced for future sales and rebates.

    At MSRP there are lots of better options.

      • JustAnEngineer
      • 2 years ago

      I’m looking at the [url=https://pcpartpicker.com/product/Ykbkcf/samsung-960-evo-500gb-m2-2280-solid-state-drive-mz-v6e500<]$468/TB[/url<] Samsung 960 Evo. That’s a solidly-performing PCIe 3.0 x4 drive for half the price of the Corsair drive.

        • just brew it!
        • 2 years ago

        Maybe they’re positioning this as a product for slightly older systems that lack an M.2 slot, or systems in which the M.2 slots are already occupied. It seems competitive with other PCIe slot based SSDs.

          • JustAnEngineer
          • 2 years ago

          Lacking a free M.2 slot is a problem that [url=https://www.newegg.com/Product/Product.aspx?Item=N82E16815256014<]$20[/url<] will fix.

        • MOSFET
        • 2 years ago

        It is a very solid drive, performance-wise and all around. The 850 EVO at the right price is still an awesome drive too, but it’s no 960 EVO. To put it this way: I think Samsung hit a two-run homer with the 850, and then they hit a grand slam with the 960.

    • Waco
    • 2 years ago

    $.83 per GB? Ouch. No thank you.

      • rems
      • 2 years ago

      Those heatsinks are expensive to make man!

    • derFunkenstein
    • 2 years ago

    Came for the Crocodile Dundee reference, stayed for the SSD.

      • Mr Bill
      • 2 years ago

      Needs a blood groove.
