The Toshiba conglomerate isn’t having its brightest days ever. An accounting scandal, massive layoffs, and most importantly, Westinghouse’s recent Chapter 11 bankruptcy filing have left the company’s finances in an unenviable state. In response, Toshiba made preparations to sell off its healthy and profitable memory endeavors by spinning them off into a separate entity known as Toshiba Memory Corporation. Several interested parties have entered bids for TMC, and all might have been well for Toshiba, but Western Digital (who purchased SanDisk last year) felt that the terms of SanDisk’s joint venture with Toshiba gave it some right to intervene in the sale. Negotiations have since broken down, and a whole lot of arbitration and litigation have ensued. The situation is still unfolding, but the long and short of it is that TMC remains unsold for the time being.
Even with those dark clouds over its corporate umbrella, Toshiba’s memory business itself is booming. Unperturbed by the corporate-level bickering, the Toshiba-SanDisk partnership has iterated on its “Bit Cost Scalable” 3D NAND technology to reach several key milestones. In the last few weeks alone, Toshiba has announced the following BiCS technology leaps: quadruple-bit cells, 96-layer dies, and through-silicon vias. The most important thing about BiCS, however, isn’t any of the above. That honor goes to the fact that BiCS NAND is finally shipping in a client SSD. Say hello to Toshiba’s XG5.
This review will be a bit different from our usual fare, since the XG5 isn’t a retail drive. Toshiba’s XG line is sold to OEMs and system integrators rather than directly to consumers. Regardless of its target audience, the XG5 is a PCIe 3.0 x4 NVMe drive with Toshiba’s 64-layer BiCS NAND and a Toshiba controller running the show. For now, the drive is being produced in 256GB, 512GB, and 1TB flavors, but Toshiba tells me nothing is stopping it from making a whopping 2TB version if one of its partners asks for it.
That story seems to check out, since the sample unit the company sent me squeezes its terabyte of capacity into just two packages on a single-sided PCB. 64-layer BiCS TLC dies come in both 256Gb and 512Gb densities. Toshiba wouldn’t confirm which XG5 versions use which dies inside their memory packages, but it’s a safe bet that our 1TB sample uses 512Gb dies. It would follow, then, that each package should have eight dies stacked inside, but we couldn’t get confirmation from Toshiba on implementation details. Specifics of the controller and its firmware will have to remain a mystery, as well. The chip bears the same “TC58” prefix we saw on the OCZ RD400’s controller, but the firmware of that part has almost certainly been updated since then. We do know that it supports pseudo-SLC caching for burst writes that the DRAM cache can’t handle on its own, but that’s about it.
Toshiba hasn’t disseminated much technical detail about how BiCS flash actually works, but we do know it’s based on charge-trapping insulators rather than traditional floating gates. Although Toshiba’s implementation likely differs, readers can refer back to our primer on Samsung’s charge-trap-based V-NAND technology for some basic principles.
Our sample unit doesn’t feature any encryption capabilities, but Toshiba does produce a variation of the XG5 which offers hardware-based full disk encryption through the TCG Opal 2.01 standard if OEM clients should want those features in their products. The company didn’t reveal an endurance spec for the XG5, but it did say to expect the same endurance from BiCS TLC drives that we’ve seen from Toshiba’s planar 15-nm MLC drives of equal capacity.
Ordinarily we’d talk price and warranty here, but neither of those things are relevant to the XG5. You can’t just buy the drive off Newegg, since Toshiba’s only selling it to OEMs and system integrators. Similarly, issues experienced by the end user would be addressed by the brand selling the system, not directly by Toshiba. You might ask why we’re reviewing the drive, then, and here’s why: some form of the XG5 will almost certainly reach retail channels. The XG3 was essentially the same drive as the OCZ RD400. Toshiba tells us a retail equivalent of the XG5 is already in the works, and there’s no small chance that will hit shelves as the OCZ RD500.
Now, on to our results. BiCS flash spent a long time in the oven, and it’s finally time for us to see what it can do.
IOMeter — Sequential and random performance
IOMeter fuels much of our latest storage test suite, including our sequential and random I/O tests. These tests are run across the full capacity of the drive at two queue depths. The QD1 tests simulate a single thread, while the QD4 results emulate a more demanding desktop workload. For perspective, 87% of the requests in our old DriveBench 2.0 trace of real-world desktop activity have a queue depth of four or less. Clicking the buttons below the graphs switches between results charted at the different queue depths.
Our sequential tests use a relatively large 128KB block size.
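For readers who want a feel for what a sequential test like this actually does, here’s a minimal Python sketch of the same idea: read a file front to back in 128KB blocks and compute throughput. It runs against a scratch file and doesn’t bypass the OS page cache, so it’s an illustration of the mechanics rather than a substitute for IOMeter, which issues direct I/O against the raw drive.

```python
import os
import tempfile
import time

BLOCK = 128 * 1024  # 128KB blocks, the same size our sequential tests use


def sequential_read_mbps(path: str, block: int = BLOCK) -> float:
    """Read a file front to back in fixed-size blocks and return MB/s."""
    total = 0
    start = time.perf_counter()
    with open(path, "rb", buffering=0) as f:
        while chunk := f.read(block):
            total += len(chunk)
    elapsed = time.perf_counter() - start
    return total / elapsed / 1e6


# Demo against a small scratch file. A real benchmark targets the drive
# under test with direct I/O (O_DIRECT or equivalent) to keep the page
# cache from inflating the numbers.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(32 * BLOCK))
    scratch = f.name
print(f"{sequential_read_mbps(scratch):.0f} MB/s")
os.unlink(scratch)
```

Queue depth is the piece this sketch omits: a QD4 test keeps four of these requests in flight at once, which is where NVMe drives start to stretch their legs.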
The XG5 can’t quite match the eye-popping read speeds that Samsung’s 1TB 3D TLC drive posted at QD1, but the gap narrows considerably at QD4. Sequential write speeds are similar across the XG5, 960 EVO, and RD400. Overall, the XG5’s sequential speeds are definitely fast enough to run with the big dogs.
Random read response times fall in the middle of the pack, but nonetheless are much less than a single millisecond. Random write response times are very competitive, approaching the top of the charts.
The XG5 sailed easily through our basic IOMeter synthetics, so let’s hit it with some tougher tests.
Sustained and scaling I/O rates
Our sustained IOMeter test hammers drives with 4KB random writes for 30 minutes straight. It uses a queue depth of 32, a setting that should result in higher speeds that saturate each drive’s overprovisioned area more quickly. This lengthy—and heavy—workload isn’t indicative of typical PC use, but it provides a sense of how the drives react when they’re pushed to the brink.
We’re reporting IOps rather than response times for these tests. Click the buttons below the graph to switch between SSDs.
Samsung’s V-NAND drives may hit higher peak speeds, but they don’t hold on to them for quite as long as the XG5 does. The drive’s steady-state speeds appear to be on par with the 960 EVO 1TB’s. Let’s look at the actual numbers to confirm.
The 960 EVO 1TB wins here, but the gains Toshiba’s made over the OCZ RD400 are astounding. The XG5’s peak and steady-state write rates are almost double those of the RD400.
Our final IOMeter test examines performance scaling across a broad range of queue depths. We ramp all the way up to a queue depth of 128. Don’t expect AHCI-based drives to scale past 32, though—that’s the maximum depth of their native command queues.
For this test, we use a database access pattern comprising 66% reads and 33% writes, all of which are random. The test runs after 30 minutes of continuous random writes that put the drives in a simulated used state. Click the buttons below the graph to switch between the different drives. And note that the P3700 plot uses a much larger scale.
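For the curious, the access pattern itself is simple to describe programmatically. This hypothetical generator sketches the 66% read / 33% write random mix; the block count and seed are arbitrary stand-ins, not IOMeter internals.

```python
import random


def database_pattern(n_ops: int, read_fraction: float = 0.66,
                     n_blocks: int = 1 << 20, seed: int = 42):
    """Yield (op, block) pairs mimicking a 66% read / 33% write
    random-access mix like the one our scaling test uses."""
    rng = random.Random(seed)
    for _ in range(n_ops):
        op = "read" if rng.random() < read_fraction else "write"
        # Each request targets a uniformly random block on the drive.
        yield op, rng.randrange(n_blocks)


ops = list(database_pattern(10_000))
reads = sum(1 for op, _ in ops if op == "read")
print(f"{reads / len(ops):.0%} reads")  # close to 66%
```

At higher queue depths, a benchmark simply keeps more of these requests outstanding simultaneously, which is what lets NVMe drives exploit their deep command queues.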
The XG5 scales well until QD32, after which it levels off. This performance is another big win over the RD400, which only makes it to QD8 before fizzling out. Let’s take a look at a few of the contenders side-by-side.
The XG5 and RD400 are neck-and-neck until QD4, at which point the XG5 starts to drastically outpace its planar cousin. The 960 EVO 1TB scales a little faster and further than the XG5, but the Toshiba drive remains within striking distance.
Our scaling and sustained IOMeter tests gave Toshiba’s BiCS drive a big lead over the older, planar 15-nm MLC one. Samsung’s 1TB V-NAND drive maintains an edge, but the XG5 may yet give it reason to start sweating. Let’s move on to real-world performance tests.
TR RoboBench — Real-world transfers
RoboBench trades synthetic tests with random data for real-world transfers with a range of file types. Developed by our in-house coder, Bruno “morphine” Ferreira, this benchmark relies on the multi-threaded robocopy command built into Windows. We copy files to and from a wicked-fast RAM disk to measure read and write performance. We also cut the RAM disk out of the loop for a copy test that transfers the files to a different location on the SSD.
Robocopy uses eight threads by default, and we’ve also run it with a single thread. Our results are split between two file sets, whose vital statistics are detailed below. The compressibility percentage is based on the size of the file set after it’s been crunched by 7-Zip.
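To illustrate what that compressibility figure means, here’s a quick Python sketch using the standard library’s zlib as a stand-in for 7-Zip (the absolute numbers differ between compressors, but the principle is the same): compressibility is the percentage by which a data set shrinks when compressed.

```python
import os
import zlib


def compressibility(data: bytes) -> float:
    """Percent size reduction after compression; 0% means incompressible."""
    compressed = zlib.compress(data, level=6)
    return max(0.0, 100.0 * (1.0 - len(compressed) / len(data)))


# Repetitive, text-like data (think documents and spreadsheets)
# squeezes down dramatically...
print(compressibility(b"quarterly report " * 4096))
# ...while already-compressed media files, approximated here by random
# bytes, barely budge.
print(compressibility(os.urandom(64 * 1024)))
```

That’s why the distinction matters for SSD testing: some controllers (SandForce designs, most famously) compress data on the fly and post very different numbers on compressible versus incompressible workloads.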
The media set is made up of large movie files, high-bitrate MP3s, and 18-megapixel RAW and JPG images. There are only a few hundred files in total, and the data set isn’t amenable to compression. The work set comprises loads of TR files, including documents, spreadsheets, and web-optimized images. It also includes a stack of programming-related files associated with our old Mozilla compiling test and the Visual Studio test on the next page. The average file size is measured in kilobytes rather than megabytes, and the files are mostly compressible.
RoboBench’s write and copy tests run after the drives have been put into a simulated used state with 30 minutes of 4KB random writes. The pre-conditioning process is scripted, as is the rest of the test, ensuring that drives have the same amount of time to recover.
Let’s take a look at the media set first. The buttons switch between read, write, and copy results.
Hail to the king! The XG5 snags a single-threaded read record and both 1T and 8T copy records. The EVO still maintains a substantial lead in the write tests, but clearly the XG5 is no slouch when it comes to pushing files around. Next up, the work set.
Another couple of record-setting performances. This time around, the 960 EVO 1TB can’t claim any real victories over the XG5.
Overall, the XG5’s real-world performance is exemplary. It shows substantial improvement over the RD400 and trades blows with Samsung’s V-NAND equivalent. For our last page of tests, we’ll toss a Windows installation onto the drive and see how it handles.
Until now, all of our tests have been conducted with the SSDs connected as secondary storage. This next batch uses them as system drives.
We’ll start with boot times measured two ways. The bare test depicts the time between hitting the power button and reaching the Windows desktop, while the loaded test adds the time needed to load four applications—Avidemux, LibreOffice, GIMP, and Visual Studio Express—automatically from the startup folder. Our old boot tests focused on the time required to load the OS, but these new ones cover the entire process, including drive initialization.
As usual, there’s not a whole lot of separation happening here. The XG5 boots with expected SSD alacrity.
Next, we’ll tackle load times with two sets of tests. The first group focuses on the time required to load larger files in a collection of desktop applications. We open a 790MB 4K video in Avidemux, a 30MB spreadsheet in LibreOffice, and a 523MB image file in the GIMP. In the Visual Studio Express test, we open a 159MB project containing source code for the LLVM toolchain. Thanks to Rui Figueira for providing the project code.
It’s the typical smorgasbord of results. Productivity programs pose no problem for the XG5. Last up, games.
Once again, no surprises here. The XG5 1TB will competently store a billion indie titles, or about 20 Titanfalls. With our boot and load sanity checks complete, we’re out of tests to run. Check out the next page for our test methods.
Test notes and methods
Here are the essential details for all the drives we tested:
|Adata Premier SP550 480GB||SATA 6Gbps||Silicon Motion SM2256||16-nm SK Hynix TLC|
|Adata Ultimate SU800 512GB||SATA 6Gbps||Silicon Motion SM2258||32-layer Micron 3D TLC|
|Adata Ultimate SU900 256GB||SATA 6Gbps||Silicon Motion SM2258||Micron 3D MLC|
|Adata XPG SX930 240GB||SATA 6Gbps||JMicron JMF670H||16-nm Micron MLC|
|Corsair MP500 240GB||PCIe Gen3 x4||Phison 5007-E7||15-nm Toshiba MLC|
|Crucial BX100 500GB||SATA 6Gbps||Silicon Motion SM2246EN||16-nm Micron MLC|
|Crucial BX200 480GB||SATA 6Gbps||Silicon Motion SM2256||16-nm Micron TLC|
|Crucial MX200 500GB||SATA 6Gbps||Marvell 88SS9189||16-nm Micron MLC|
|Crucial MX300 750GB||SATA 6Gbps||Marvell 88SS1074||32-layer Micron 3D TLC|
|Intel X25-M G2 160GB||SATA 3Gbps||Intel PC29AS21BA0||34-nm Intel MLC|
|Intel 335 Series 240GB||SATA 6Gbps||SandForce SF-2281||20-nm Intel MLC|
|Intel 730 Series 480GB||SATA 6Gbps||Intel PC29AS21CA0||20-nm Intel MLC|
|Intel 750 Series 1.2TB||PCIe Gen3 x4||Intel CH29AE41AB0||20-nm Intel MLC|
|Intel DC P3700 800GB||PCIe Gen3 x4||Intel CH29AE41AB0||20-nm Intel MLC|
|Mushkin Reactor 1TB||SATA 6Gbps||Silicon Motion SM2246EN||16-nm Micron MLC|
|OCZ Arc 100 240GB||SATA 6Gbps||Indilinx Barefoot 3 M10||A19-nm Toshiba MLC|
|OCZ Trion 100 480GB||SATA 6Gbps||Toshiba TC58||A19-nm Toshiba TLC|
|OCZ Trion 150 480GB||SATA 6Gbps||Toshiba TC58||15-nm Toshiba TLC|
|OCZ Vector 180 240GB||SATA 6Gbps||Indilinx Barefoot 3 M10||A19-nm Toshiba MLC|
|OCZ Vector 180 960GB||SATA 6Gbps||Indilinx Barefoot 3 M10||A19-nm Toshiba MLC|
|Patriot Hellfire 480GB||PCIe Gen3 x4||Phison 5007-E7||15-nm Toshiba MLC|
|Plextor M6e 256GB||PCIe Gen2 x2||Marvell 88SS9183||19-nm Toshiba MLC|
|Samsung 850 EVO 250GB||SATA 6Gbps||Samsung MGX||32-layer Samsung TLC|
|Samsung 850 EVO 1TB||SATA 6Gbps||Samsung MEX||32-layer Samsung TLC|
|Samsung 850 Pro 500GB||SATA 6Gbps||Samsung MEX||32-layer Samsung MLC|
|Samsung 950 Pro 512GB||PCIe Gen3 x4||Samsung UBX||32-layer Samsung MLC|
|Samsung 960 EVO 250GB||PCIe Gen3 x4||Samsung Polaris||32-layer Samsung TLC|
|Samsung 960 EVO 1TB||PCIe Gen3 x4||Samsung Polaris||48-layer Samsung TLC|
|Samsung 960 Pro 2TB||PCIe Gen3 x4||Samsung Polaris||48-layer Samsung MLC|
|Samsung SM951 512GB||PCIe Gen3 x4||Samsung S4LN058A01X01||16-nm Samsung MLC|
|Samsung XP941 256GB||PCIe Gen2 x4||Samsung S4LN053X01||19-nm Samsung MLC|
|Toshiba OCZ RD400 512GB||PCIe Gen3 x4||Toshiba TC58||15-nm Toshiba MLC|
|Toshiba OCZ VX500 512GB||SATA 6Gbps||Toshiba TC358790XBG||15-nm Toshiba MLC|
|Toshiba XG5 1TB||PCIe Gen3 x4||Toshiba TC58||64-layer Toshiba BiCS TLC|
|Transcend SSD370 256GB||SATA 6Gbps||Transcend TS6500||Micron or SanDisk MLC|
|Transcend SSD370 1TB||SATA 6Gbps||Transcend TS6500||Micron or SanDisk MLC|
All the SATA SSDs were connected to the motherboard’s Z97 chipset. The M6e was connected to the Z97 via the motherboard’s M.2 slot, which is how we’d expect most folks to run that drive. Since the XP941, 950 Pro, RD400, and 960 Pro require more lanes, they were connected to the CPU via a PCIe adapter card. The 750 Series and DC P3700 were hooked up to the CPU via the same full-sized PCIe slot.
We used the following system for testing:
|Processor||Intel Core i5-4690K 3.5GHz|
|Platform hub||Intel Z97|
|Platform drivers||Chipset: 10.0.0.13
|Memory size||16GB (2 DIMMs)|
|Memory type||Adata XPG V3 DDR3 at 1600 MT/s|
|Audio||Realtek ALC1150|
|System drive||Corsair Force LS 240GB with S8FM07.9 firmware|
|Storage||Crucial BX100 500GB with MU01 firmware
Crucial BX200 480GB with MU01.4 firmware
Crucial MX200 500GB with MU01 firmware
Intel 335 Series 240GB with 335u firmware
Intel 730 Series 480GB with L2010400 firmware
Intel 750 Series 1.2TB with 8EV10171 firmware
Intel DC P3700 800GB with 8DV10043 firmware
Intel X25-M G2 160GB with 8820 firmware
Plextor M6e 256GB with 1.04 firmware
OCZ Trion 100 480GB with 11.2 firmware
OCZ Trion 150 480GB with 12.2 firmware
OCZ Vector 180 240GB with 1.0 firmware
OCZ Vector 180 960GB with 1.0 firmware
Samsung 850 EVO 250GB with EMT01B6Q firmware
Samsung 850 EVO 1TB with EMT01B6Q firmware
Samsung 850 Pro 500GB with EMXM01B6Q firmware
Samsung 950 Pro 512GB with 1B0QBXX7 firmware
Samsung XP941 256GB with UXM6501Q firmware
Transcend SSD370 256GB with O0918B firmware
Transcend SSD370 1TB with O0919A firmware
|Power supply||Corsair AX650 650W|
|Case||Fractal Design Define R5|
|Operating system||Windows 8.1 Pro x64|
Thanks to Asus for providing the systems’ motherboards, to Intel for the CPUs, to Adata for the memory, to Fractal Design for the cases, and to Corsair for the system drives and PSUs. And thanks to the drive makers for supplying the rest of the SSDs.
We used the following versions of our test applications:
- IOMeter 1.1.0 x64
- TR RoboBench 0.2a
- Avidemux 2.6.8 x64
- LibreOffice 4.3.2
- GIMP 2.8.14
- Visual Studio Express 2013
- Batman: Arkham Origins
- Tomb Raider
- Middle Earth: Shadow of Mordor
Some further notes on our test methods:
- To ensure consistent and repeatable results, the SSDs were secure-erased before every component of our test suite. For the IOMeter database, RoboBench write, and RoboBench copy tests, the drives were put in a simulated used state that better exposes long-term performance characteristics. Those tests are all scripted, ensuring an even playing field that gives the drives the same amount of time to recover from the initial used state.
- We run virtually all our tests three times and report the median of the results. Our sustained IOMeter test is run a second time to verify the results of the first test and additional times only if necessary. The sustained test runs for 30 minutes continuously, so it already samples performance over a long period.
- Steps have been taken to ensure the CPU’s power-saving features don’t taint any of our results. All of the CPU’s low-power states have been disabled, effectively pegging the frequency at 3.5GHz. Transitioning between power states can affect the performance of storage benchmarks, especially when dealing with short burst transfers.
The test systems’ Windows desktop was set at 1920×1080 at 60Hz. Most of the tests and methods we employed are publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.
Toshiba’s XG5 almost uniformly improved on the RD400 throughout our test suite, which is exactly what Toshiba set out to do with the drive. It couldn’t dethrone Samsung’s V-NAND in many of our tests, however, so it’s likely that the XG5 won’t set an overall performance record. We distill the overall performance rating using an older SATA SSD as a baseline. To compare each drive, we then take the geometric mean of a basket of results from our test suite. Only drives which have been through the entire current test suite on our current rig are represented.
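The mechanics of such a composite score are straightforward. Here’s a hedged sketch of the approach with made-up numbers (the actual basket of tests, weights, and baseline drive are our own internal choices, and the example assumes higher-is-better metrics throughout):

```python
from math import prod


def overall_rating(drive: dict, baseline: dict) -> float:
    """Geometric mean of per-test results, each normalized to a baseline
    drive. The geometric mean keeps any single benchmark with huge raw
    numbers from dominating the composite score."""
    ratios = [drive[test] / baseline[test] for test in baseline]
    return prod(ratios) ** (1.0 / len(ratios))


# Hypothetical figures for illustration only — not our actual test data.
baseline_sata = {"seq_read": 500, "seq_write": 450, "4k_iops": 90_000}
nvme_drive = {"seq_read": 2900, "seq_write": 2300, "4k_iops": 280_000}
print(round(overall_rating(nvme_drive, baseline_sata), 2))
```

A score of 1.0 would mean performance identical to the baseline drive across the basket; the normalization is what lets sequential MB/s and random IOps coexist in one figure.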
As suspected, the BiCS-equipped XG5 has made huge strides over its predecessor and almost encroaches on the 960 EVO 1TB’s turf. We can call BiCS’ client-drive debut a resounding success.
This is where we’d usually take some time to talk price and relative bang-for-buck, but without even a suggested price to refer to, that’s not really an option. In any case, you can’t directly buy the drive unless eBay sellers start shucking brand-new laptops to get at them. Therefore, we’ll save that discussion for when we get our hands on the inevitable retail version of the XG5.
BiCS has made a strong first impression. Products equipped with 64-layer BiCS TLC will be well-positioned to give the 960 EVO series some needed competition. Samsung’s 960 Pro line, however, will likely go unchallenged for a good while yet. Toshiba tells me it’s completely committed to TLC for BiCS client products, so there are no ultra-high-end MLC drives waiting in the wings to wrest the crown away from the 960 Pro. The company doesn’t believe that the performance-to-cost ratio favors MLC for client applications. It’s difficult to dispute that, seeing the numbers that 3D NAND TLC drives like the XG5 and 960 EVO manage to put up.
Nonetheless, there are a few things to look forward to. 96-layer BiCS and NAND built with through-silicon vias should offer tangible improvements when they eventually filter into consumer product lines. In the nearer term, a retail version of the XG5 might come with a custom driver to eke out a little more oomph, or maybe even one of those fancy M.2 heatsinks that are becoming more common. Only time will tell.
Toshiba Corporation’s board of directors might be sweating bullets right now, but Toshiba Memory Corporation seems to be firing on all cylinders. BiCS NAND promises a bright future for whoever eventually ends up with a controlling stake in the operation.