Toshiba’s XG6 SSDs put 96-layer NAND to good use

When we reviewed Toshiba’s XG5 NVMe SSD, it was one of the fastest drives we’d tested. Although that was the first drive we fondled that came equipped with Toshiba’s 64-layer BiCS 3D flash memory, at the time Toshiba had already announced that it had 96-layer chips in development. That was a year ago, and now the XG5’s successor is here. Meet the Toshiba XG6 NVMe SSD. As you’ve no doubt already guessed, this drive uses fourth-generation TLC BiCS flash, stacking 96 layers of NAND cells in each chip.

If you’re like us, hearing “96-layer flash memory” probably gets you thinking about extremely high-density SSDs. These drives aren’t bad on that front, packing up to 1 TB of storage onto a single-sided, 80-mm M.2 card. The real story here is the XG6’s speed, though. Toshiba says these drives can perform sequential reads at up to 3180 MB/s and writes at “nearly” 3000 MB/s. Random performance is no less impressive: the XG6 is spec’d for 355K IOPS on random reads and 365K IOPS on random writes.
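For a rough sense of how those random-I/O numbers stack up against the sequential figures, here’s a quick back-of-envelope conversion. The 4-KiB transfer size is our assumption (it’s the usual size for random-I/O ratings), not something Toshiba spells out:

```python
# Back-of-envelope: convert the quoted random IOPS figures into bandwidth.
# The 4-KiB transfer size is an assumption, not part of Toshiba's published spec.

def iops_to_mbps(iops: int, transfer_bytes: int = 4096) -> float:
    """Approximate throughput in MB/s for a given IOPS figure."""
    return iops * transfer_bytes / 1_000_000

print(f"Random read : ~{iops_to_mbps(355_000):.0f} MB/s")   # ~1454 MB/s
print(f"Random write: ~{iops_to_mbps(365_000):.0f} MB/s")   # ~1495 MB/s
```

Under that assumption, the XG6 would push roughly 1.4-1.5 GB/s of small random I/O, a bit under half of its sequential read rate.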

Toshiba says it achieved the XG6’s “industry-leading” sequential write performance by optimizing the drives’ controller. Said controller supports NVMe 1.3a and connects over the usual PCIe 3.0 x4 interface. These drives are intended for OEMs, and the company says it’s already sampling the XG6 to prospective buyers. Toshiba will be showing off the XG6 series at the Flash Memory Summit on August 7, so maybe we’ll hear more then.
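As an aside, specs like these only materialize if the drive negotiates its full PCIe 3.0 x4 link. On Linux, the negotiated link speed and width are exposed through sysfs, so a few lines of Python make for a quick sanity check. This is a generic sketch rather than anything XG6-specific, and the nvme0 device name is an assumption:

```python
# Sanity-check the negotiated PCIe link for an NVMe drive on Linux.
# "nvme0" is a placeholder; adjust it for the device you actually want to inspect.
from pathlib import Path

dev = Path("/sys/class/nvme/nvme0/device")  # PCI device backing the NVMe controller

for attr in ("current_link_speed", "current_link_width",
             "max_link_speed", "max_link_width"):
    node = dev / attr
    if node.exists():
        print(f"{attr}: {node.read_text().strip()}")

# A drive like the XG6 should report a speed of 8.0 GT/s (PCIe 3.0) and a width of 4.
```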

Comments closed
    • Laykun
    • 1 year ago

    Can I get some slow 8TB SSDs please?

    • ishould
    • 1 year ago

    Can you imagine how fast these will be in RAID 0? When Tom’s Hardware reviewed two Samsung 950s in RAID 0, write speeds *doubled* and read speeds increased by about 50%. Theoretically, that’s 6 GB/s writes and 4.5 GB/s reads. Read IOPS seem to only be slightly increased by RAID 0, but again, write IOPS double.
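
    For what it’s worth, that projection is easy to reproduce from the single-drive specs in the article. The scaling factors below (writes roughly doubling, reads improving by about half) come from the Samsung 950 RAID 0 results the commenter cites, not from measurements of the XG6:

    ```python
    # Projecting two drives in RAID 0 from the single-drive specs in the article,
    # using the commenter's observed scaling factors (writes ~2x, reads ~1.5x).
    # Real-world results depend heavily on the workload and queue depth.
    seq_read_mbps, seq_write_mbps = 3180, 3000

    print(f"Projected RAID 0 reads : ~{seq_read_mbps * 1.5 / 1000:.1f} GB/s")  # ~4.8 GB/s
    print(f"Projected RAID 0 writes: ~{seq_write_mbps * 2 / 1000:.1f} GB/s")   # ~6.0 GB/s
    ```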

      • tacitust
      • 1 year ago

      Yeah, and I still would totally be unable to take advantage of that extra speed. I just ordered a bog standard MX500 stick after realizing it wasn’t even worth the extra $30 for the cheapest NVMe drive for my new HTPC/backup server build.

      • Waco
      • 1 year ago

      They’ll be about as fast as two of them if you can get the queue depth up.

      My desktop has an 8-way RAID 0 array of SSDs off of a SAS card. It scales linearly *only* if you can scale the load appropriately. It’s no faster than a single decent SATA SSD in terms of game loading 99% of the time. With a good RAID implementation, read and write IOPS/bandwidth double if you align your access and stripe sizes appropriately.
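
      Waco’s point about queue depth is easy to demonstrate for yourself. The toy sketch below issues random 4-KiB reads at several concurrency levels; fio is the proper tool for this kind of test, and the file path, sizes, and thread counts here are placeholders. Page-cache effects will also skew the results unless the file is far larger than RAM (or opened with O_DIRECT):

      ```python
      # Toy illustration of queue-depth scaling: random reads only speed up when
      # enough requests are in flight at once. Placeholder path and sizes; a rough
      # sketch, not a substitute for fio.
      import os, random, time
      from concurrent.futures import ThreadPoolExecutor

      PATH, BLOCK, READS = "/tmp/testfile.bin", 4096, 20_000  # placeholders

      def bench(queue_depth: int) -> float:
          """Return rough MB/s for READS random reads issued queue_depth at a time."""
          fd = os.open(PATH, os.O_RDONLY)
          size = os.fstat(fd).st_size
          offsets = [random.randrange(0, size - BLOCK) & ~(BLOCK - 1) for _ in range(READS)]
          start = time.perf_counter()
          with ThreadPoolExecutor(max_workers=queue_depth) as pool:
              list(pool.map(lambda off: os.pread(fd, BLOCK, off), offsets))
          os.close(fd)
          return READS * BLOCK / (time.perf_counter() - start) / 1e6

      for qd in (1, 4, 16, 64):
          print(f"QD {qd:>2}: {bench(qd):8.1f} MB/s")
      ```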

    • Chrispy_
    • 1 year ago

    Does 3D NAND have the ability to read/write to different layers simultaneously?

    I was under the impression that the more layers you have with the same package bandwidth, the lower bandwidth per layer.

    That’s irrelevant if NAND can only read/write to one layer at a time, and detrimental if it can simultaneously work on multiple layers.

      • Wirko
      • 1 year ago

      Detrimental meaning not scaling with the number of layers?
      The controller writes one block at a time, and a block may be located on one layer, multiple layers, or even all layers, but I can’t see why that detail would have an effect on performance or durability.

    • SlappedSilly
    • 1 year ago

    I’m kind of curious what those unpopulated mount points are at the end of that stick, both on the front and the back.

      • Chrispy_
      • 1 year ago

      Likely for post-production and RMA testing and possibly fast firmware updating of stock that’s been sitting in inventory before packaging.

      • willmore
      • 1 year ago

      Likely debug connectors for development. The one on the bottom looks like a pretty standard JTAG connector. The top might be a serial connection or two. It costs very little to leave them in the PCB design but not populate them during manufacture. If you wanted to remove them for manufacture, you run the risk that you’re shipping a board that you didn’t test. So, most companies choose to ship the same board they did all their testing on, but to leave off the actual connectors. Though some leave the connectors on, too. Not sure if that’s by design or if they just sort of forgot. 🙂 “It was in the BOM! And we bought 100K of them, we’re putting them on the damn boards or I have to explain to management why I bought 100K connectors that we *don’t need*! Or would *you* like to explain it to them? They *never* shoot the messenger. Oh, you agree we should just ship it as is? Excellent.”

      Edited to add: Looking at the traces on the bottom, it seems they’re both JTAG connectors, as they appear to be in parallel. So maybe it’s a choice of which JTAG connector their engineers and test people preferred?

      • Wirko
      • 1 year ago

      To connect the RGB LED strips, doh!

    • JosiahBradley
    • 1 year ago

    More layers same capacity…

      • techguy
      • 1 year ago

      But the stick is smaller!

      Super important. Who needs more capacity when you can shrink the massively bloated M.2 card down?

      Don’t get me wrong, I like Toshiba’s storage products – I have a bunch of their hard drives in my media server and they’ve been 100% reliable over the last 2.5 years. Fast, reliable, and cheap – can’t beat that.

      • DPete27
      • 1 year ago

      They can’t sell higher capacity because they’re price gouging so much that consumers won’t buy. So instead, they use higher density to reduce BoM even further = even higher profits.

    • SecretMaster
    • 1 year ago

    “Although that was the first drive we fondled that came equipped with”

    I guess that’s one way to describe how you test hardware…

      • RAGEPRO
      • 1 year ago

      What can I say? Fast storage, ahem, *arouses* my interest.

        • chuckula
        • 1 year ago

        TMI MAN! TMI!

        • uni-mitation
        • 1 year ago

        Fast storage mokkori!

        uni-mitation
