Intel’s Optane SSD 800P 58 GB and 118 GB solid-state drives reviewed

Intel’s 3D Xpoint memory technology has been on the market for almost a year now, but mainstream builders have yet to see an Optane product they can really sink their teeth into. Intel’s 16-GB and 32-GB Optane Memory accelerators carved out a new niche by offering SSD-like (or even greater) speeds to systems otherwise hobbled by mechanical storage. Jeff’s analysis concluded that the little drives aptly handled that use case, but Intel’s decision to restrict their use to Kaby Lake and newer Core CPUs hamstrung their market appeal.

Intel’s next 3D Xpoint client drive followed several months later in the form of the data-center-derived Optane SSD 900P.  By all accounts, the 900P is one beast of a drive, but it carries a price tag of $380 for 280 GB or $600 for 480 GB of storage. That’s simply too high for most builders to stomach. Who would spend $600 on storage when that much scratch could buy a couple of sticks of DDR4? (We kid, of course.)

Clearly, there’s some space left to occupy between the teeny-tiny Optane Memory drives and the exorbitant 900P. Today, Intel is launching a product intended to slide smoothly into that gap: the Optane SSD 800P.

Optane SSD 800P
Capacity   Max seq. read   Max seq. write   Max random read   Max random write   Price
58 GB      1450 MB/s       640 MB/s         250K IOps         140K IOps          $129
118 GB     1450 MB/s       640 MB/s         250K IOps         140K IOps          $199

The 800P is only available in 58 GB and 118 GB capacities to start with. While those capacities are still much smaller than the meaty 250-GB- and 500-GB-class SSDs the mainstream market has grown accustomed to, these drives can easily handle a Windows installation and a few applications. That sets them apart from the Optane Memory line, which Intel hopes will still entice Kaby Lake and Coffee Lake builders on a budget. Gamers looking to keep more than one or two recent AAA titles on ultra-fast storage will still need to consider an Optane SSD 900P, though.

The 800P drives are NVMe M.2 gumsticks, but unlike most expensive drives that fit that description, they only take advantage of two lanes of PCIe 3.0 bandwidth. Intel says that decision comes down to these drives’ focus on low-latency and low-queue-depth performance versus raw bandwidth at the high queue depths that would be necessary to saturate traditional NAND SSDs. More on that in a second.
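Some back-of-the-envelope math (ours, not Intel's) shows why two lanes leave the spec sheet untouched:

```python
# PCIe 3.0 runs at 8 GT/s per lane with 128b/130b encoding, so each
# lane carries roughly 8e9 * (128/130) bits per second of payload.
GT_PER_S = 8e9
ENCODING = 128 / 130

def pcie3_bandwidth_mb_s(lanes):
    """Theoretical one-way payload bandwidth in MB/s (1 MB = 1e6 bytes)."""
    return lanes * GT_PER_S * ENCODING / 8 / 1e6

x2 = pcie3_bandwidth_mb_s(2)   # ~1969 MB/s
x4 = pcie3_bandwidth_mb_s(4)   # ~3938 MB/s

# The 800P's rated 1450 MB/s sequential read fits comfortably inside
# an x2 link; the extra two lanes would mostly sit idle.
print(f"x2: {x2:.0f} MB/s, x4: {x4:.0f} MB/s")
```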

With the sticker peeled away, we can see the drives’ two 3D Xpoint packages and controller chip. Intel wasn’t ready to share details about packaging or controller implementation. All we know is that it’s an Intel controller with Intel firmware whipping Intel 3D Xpoint to extreme speeds. The company is keeping its mouth closed for the moment when it comes to technical specifics of its Optane products.

Intel isn’t at all shy about trumpeting Optane’s unique advantages versus NAND, though. The blue team promises 38% better response times than competing PCIe 3.0 x4 drives. It’s particularly bullish on the 800P’s low-queue-depth performance, where the company correctly claims that the vast majority of client workloads live.

Additionally, Intel assures us that the 800P’s sustained performance remains good-as-new regardless of how full the drive is, in stark contrast to NAND’s behavior: the performance of NAND SSDs can decline precipitously as they’re pushed closer to their total capacity. Finally, Intel touts these drives’ endurance, rating each version of the 800P for 365 terabytes written over its five-year warranty. That’s absurdly high for drives this size, and even higher than Intel’s more conservative initial spec of 200 TBW when the 800P was unveiled at CES.

These improvements over NAND are undoubtedly due to the storage media’s “cross point structure,” as Intel calls it. 3D Xpoint is addressable at the cell level, entirely circumventing the flash-translation-layer rigamarole that NAND drives must deal with as they juggle writing pages and erasing blocks without wearing out too quickly.
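To illustrate the bookkeeping that NAND controllers can skip here, consider this deliberately simplified toy model of a flash translation layer (our own sketch, not Intel's or anyone's firmware). NAND can only erase whole blocks, so overwriting a logical page means writing the new data somewhere else and remembering where it went:

```python
# Toy flash-translation-layer sketch: logical pages map to physical
# pages, overwrites go out-of-place, and stale copies pile up until a
# whole block can be erased. Cell-level-addressable media like 3D
# Xpoint sidesteps all of this bookkeeping.
class ToyFTL:
    def __init__(self, blocks=4, pages_per_block=4):
        self.free = [(b, p) for b in range(blocks) for p in range(pages_per_block)]
        self.mapping = {}   # logical page -> physical (block, page)
        self.stale = set()  # physical pages holding dead data

    def write(self, logical_page):
        old = self.mapping.get(logical_page)
        if old is not None:
            self.stale.add(old)  # the old copy can't be erased on its own
        self.mapping[logical_page] = self.free.pop(0)

ftl = ToyFTL()
for _ in range(3):
    ftl.write(0)  # overwrite the same logical page three times
# One live copy remains, plus two stale pages awaiting garbage collection.
print(len(ftl.mapping), len(ftl.stale))  # 1 2
```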

3D Xpoint’s superiority over NAND had better assert itself spectacularly in these drives, because they don’t come cheap. While neither Optane SSD 800P will set you back as big a chunk of change as the 900P will, the 58 GB and 118 GB drives still carry suggested prices of $129 and $199, respectively, working out to a whopping $2.22 and $1.69 per gigabyte.
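For reference, the per-gigabyte figures fall straight out of the suggested prices:

```python
# Cost per gigabyte at Intel's suggested list prices.
drives = {"800P 58 GB": (129, 58), "800P 118 GB": (199, 118)}
for name, (price, gb) in drives.items():
    print(f"{name}: ${price / gb:.2f}/GB")
# 800P 58 GB: $2.22/GB
# 800P 118 GB: $1.69/GB
```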

Before we dive into testing, there’s one bit of bad news. Our storage test rigs are beginning to show their age. While the Optane 800P drives are perfectly usable as secondary storage in older systems, our venerable Asus Z97 Pro refused to acknowledge the 800P as a boot device in its BIOS, making it impossible to conduct boot and load tests against the other drives in our test suite. We nonetheless managed to collect our usual complement of results from IOMeter and RoboBench, and can happily throw the Optane duo in alongside our usual SSD lineup. Let’s get to it.

 

IOMeter — Sequential and random performance

IOMeter fuels much of our latest storage test suite, including our sequential and random I/O tests. These tests are run across the full capacity of the drive at two queue depths. The QD1 tests simulate a single thread, while the QD4 results emulate a more demanding desktop workload. For perspective, 87% of the requests in our old DriveBench 2.0 trace of real-world desktop activity have a queue depth of four or less. Clicking the buttons below the graphs switches between results charted at the different queue depths.
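IOMeter is a Windows-centric tool, but the shape of a QD1 random-read measurement is easy to sketch. The snippet below is a rough, hedged stand-in for illustration only, not our actual methodology: it reads 4KB blocks at random offsets one at a time and times each request (a real benchmark would target the raw device and bypass the OS page cache, which this portable sketch can't do):

```python
import os, random, time

BLOCK = 4096                  # 4KB requests, as in our random tests
FILE_SIZE = 16 * 1024 * 1024  # small scratch file for the sketch

# Build a scratch file to read from.
path = "scratch.bin"
with open(path, "wb") as f:
    f.write(os.urandom(FILE_SIZE))

latencies = []
with open(path, "rb") as f:
    for _ in range(1000):     # QD1: issue one request, wait, repeat
        offset = random.randrange(FILE_SIZE // BLOCK) * BLOCK
        start = time.perf_counter()
        f.seek(offset)
        f.read(BLOCK)
        latencies.append(time.perf_counter() - start)
os.remove(path)

print(f"mean QD1 4KB read latency: {sum(latencies) / len(latencies) * 1e6:.1f} us")
```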

Our sequential tests use a relatively large 128 KB block size.



In these sequential tests, the 800P drives don’t shine particularly brightly. The 118 GB drive is slower than some speedy SATA drives, while the smaller drive is scarcely faster than Intel’s 760p and its 3D NAND. Let’s see if random response times look any better.



In fact, they look a heck of a lot better. The Optane 800P drives post the lowest random-read latencies we’ve ever seen, period, and by no small margin. Their read response times are one-fifth that of the $1250 Samsung 960 Pro 2TB. Absurd. Write response times are also blazingly fast, just not record-setting.

Intel’s talk of responsiveness was no idle boast. Let’s see if the rest of the company’s claims bear out in our remaining tests.

 

Sustained and scaling I/O rates

Our sustained IOMeter test hammers drives with 4KB random writes for 30 minutes straight. It uses a queue depth of 32, a setting that should result in higher speeds that saturate each drive’s overprovisioned area more quickly. This lengthy—and heavy—workload isn’t indicative of typical PC use, but it provides a sense of how the drives react when they’re pushed to the brink.

We’re reporting IOps rather than response times for these tests. Click the buttons below the graph to switch between SSDs.


What a sight to behold. Intel said the 800P would deliver “fresh-out-of-box” performance regardless of how much of a beating it might be taking, and our graphs completely agree. The drives’ high peak write speed remains remarkably constant throughout the 30-minute period. The 118 GB drive suffers frequent dips to a lower rate, but it always bounces back up to its initial higher speed, spending the lion’s share of the test period there.

The Optane 800P drives broke the x-axis scale on our steady-state graph, doubling the rate of the previously unchallenged DC P3700. As the graph of our entire test period showed, these drives barely fall from their peak rates under a sustained workload. Bravo, 800P.

Our final IOMeter test examines performance scaling across a broad range of queue depths. We ramp all the way up to a queue depth of 128. Don’t expect AHCI-based drives to scale past 32, though—that’s the maximum depth of their native command queues.

For this test, we use a database access pattern comprising 66% reads and 33% writes, all of which are random. The test runs after 30 minutes of continuous random writes that put the drives in a simulated used state. Click the buttons below the graph to switch between the different drives. And note that the P3700 plot uses a much larger scale.


The Optane drives forced us to use the much-larger vertical scale we’ve only had to break out for Intel’s data-center-class P3700. Intel’s press materials were careful to emphasize the 800P’s performance at low queue depths, and here we see why. The graphs flatline after QD4. But given the dizzying height of that line, it’s nothing to complain about, as the next graphs will show.


These graphs may flatten out, but they flatten out so far above any other drive we can compare against that it’s a moot point. The 760p is a fast drive, remember, but it looks pathetic alongside the Optane 800P. Only the data-center-class P3700 manages to crest above the 800P 118 GB, and it took it until QD64 to do so. Client workloads are highly unlikely to ever exploit that kind of parallelism, and the incredible performance of these Optane SSDs from QD1 to QD4 bolsters Intel’s claims about Optane’s unique characteristics.

Our sustained and scaling tests really hammer the point home that 3D Xpoint is a fundamentally different beast than NAND. The limitations and degradations that traditional SSDs are subject to just don’t apply to Intel’s new mojo. On the next page, we’ll put away IOMeter and see how the 800P handles real-world workloads.

 

TR RoboBench — Real-world transfers

RoboBench trades synthetic tests with random data for real-world transfers with a range of file types. Developed by our in-house coder, Bruno “morphine” Ferreira, this benchmark relies on the multi-threaded robocopy command built into Windows. We copy files to and from a wicked-fast RAM disk to measure read and write performance. We also cut the RAM disk out of the loop for a copy test that transfers the files to a different location on the SSD.

Robocopy uses eight threads by default, and we’ve also run it with a single thread. Our results are split between two file sets, whose vital statistics are detailed below. The compressibility percentage is based on the size of the file set after it’s been crunched by 7-Zip.

  Number of files Average file size Total size Compressibility
Media 459 21.4MB 9.58GB 0.8%
Work 84,652 48.0KB 3.87GB 59%

The media set is made up of large movie files, high-bitrate MP3s, and 18-megapixel RAW and JPG images. There are only a few hundred files in total, and the data set isn’t amenable to compression. The work set comprises loads of TR files, including documents, spreadsheets, and web-optimized images. It also includes a stack of programming-related files associated with our old Mozilla compiling test and the Visual Studio test on the next page. The average file size is measured in kilobytes rather than megabytes, and the files are mostly compressible.
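The compressibility figures can be reproduced in miniature with any general-purpose compressor. Here's a sketch using Python's zlib as a stand-in for 7-Zip (exact ratios will differ from the table's):

```python
import zlib, os

def compressibility(data: bytes) -> float:
    """Percent size reduction after compression, as used in the table."""
    return (1 - len(zlib.compress(data)) / len(data)) * 100

media_like = os.urandom(1 << 20)        # random bytes stand in for media files
work_like = b"row,1,2,3,alpha\n" * 50000  # repetitive text stands in for documents

print(f"media-like: {compressibility(media_like):.1f}%")  # near 0%
print(f"work-like:  {compressibility(work_like):.1f}%")   # much higher
```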

RoboBench’s write and copy tests run after the drives have been put into a simulated used state with 30 minutes of 4KB random writes. The pre-conditioning process is scripted, as is the rest of the test, ensuring that drives have the same amount of time to recover.

Let’s take a look at the media set first. The buttons switch between read, write, and copy results.



RoboBench often echoes what our first set of IOMeter tests reveals. The 800P reads very quickly despite making do with just two PCIe lanes. Its write speeds are fine, but they hover only a little beyond the reach of fast SATA drives.

The work set might better showcase the 800P drives’ stellar random performance. Let’s find out.



And indeed it does. The 800P duo grabs records for the single-threaded read and copy tests, and the margin of victory is substantial. The drives’ write performance is also good, if not quite as dominant.

RoboBench reaffirmed that sequential writes are the 800P’s weak point, but that weakness is offset by the drive’s insane random performance. Ordinarily the next page would host our boot and load tests, but as we cautioned in our introduction, our test rigs just can’t hack it with booting Optane. So flip to the next page to read about our test methods, or skip ahead to our final thoughts.

 

Test notes and methods

Here are the essential details for all the drives we tested:

  Interface Flash controller NAND
Adata Premier SP550 480GB SATA 6Gbps Silicon Motion SM2256 16-nm SK Hynix TLC
Adata Ultimate SU800 512GB SATA 6Gbps Silicon Motion SM2258 32-layer Micron 3D TLC
Adata Ultimate SU900 256GB SATA 6Gbps Silicon Motion SM2258 Micron 3D MLC
Adata XPG SX930 240GB SATA 6Gbps JMicron JMF670H 16-nm Micron MLC
Corsair MP500 240GB PCIe Gen3 x4 Phison 5007-E7 15-nm Toshiba MLC
Crucial BX100 500GB SATA 6Gbps Silicon Motion SM2246EN 16-nm Micron MLC
Crucial BX200 480GB SATA 6Gbps Silicon Motion SM2256 16-nm Micron TLC
Crucial MX200 500GB SATA 6Gbps Marvell 88SS9189 16-nm Micron MLC
Crucial MX300 750GB SATA 6Gbps Marvell 88SS1074 32-layer Micron 3D TLC
Crucial MX500 500GB SATA 6Gbps Silicon Motion SM2258 64-layer Micron 3D TLC
Crucial MX500 1TB SATA 6Gbps Silicon Motion SM2258 64-layer Micron 3D TLC
Intel X25-M G2 160GB SATA 3Gbps Intel PC29AS21BA0 34-nm Intel MLC
Intel 335 Series 240GB SATA 6Gbps SandForce SF-2281 20-nm Intel MLC
Intel 730 Series 480GB SATA 6Gbps Intel PC29AS21CA0 20-nm Intel MLC
Intel 750 Series 1.2TB PCIe Gen3 x4 Intel CH29AE41AB0 20-nm Intel MLC
Intel DC P3700 800GB PCIe Gen3 x4 Intel CH29AE41AB0 20-nm Intel MLC
Mushkin Reactor 1TB SATA 6Gbps Silicon Motion SM2246EN 16-nm Micron MLC
OCZ Arc 100 240GB SATA 6Gbps Indilinx Barefoot 3 M10 A19-nm Toshiba MLC
OCZ Trion 100 480GB SATA 6Gbps Toshiba TC58 A19-nm Toshiba TLC
OCZ Trion 150 480GB SATA 6Gbps Toshiba TC58 15-nm Toshiba TLC
OCZ Vector 180 240GB SATA 6Gbps Indilinx Barefoot 3 M10 A19-nm Toshiba MLC
OCZ Vector 180 960GB SATA 6Gbps Indilinx Barefoot 3 M10 A19-nm Toshiba MLC
Patriot Hellfire 480GB PCIe Gen3 x4 Phison 5007-E7 15-nm Toshiba MLC
Plextor M6e 256GB PCIe Gen2 x2 Marvell 88SS9183 19-nm Toshiba MLC
Samsung 850 EVO 250GB SATA 6Gbps Samsung MGX 32-layer Samsung TLC
Samsung 850 EVO 1TB SATA 6Gbps Samsung MEX 32-layer Samsung TLC
Samsung 850 Pro 512GB SATA 6Gbps Samsung MEX 32-layer Samsung MLC
Samsung 860 Pro 1TB SATA 6Gbps Samsung MJX 64-layer Samsung MLC
Samsung 950 Pro 512GB PCIe Gen3 x4 Samsung UBX 32-layer Samsung MLC
Samsung 960 EVO 250GB PCIe Gen3 x4 Samsung Polaris 32-layer Samsung TLC
Samsung 960 EVO 1TB PCIe Gen3 x4 Samsung Polaris 48-layer Samsung TLC
Samsung 960 Pro 2TB PCIe Gen3 x4 Samsung Polaris 48-layer Samsung MLC
Samsung SM951 512GB PCIe Gen3 x4 Samsung S4LN058A01X01 16-nm Samsung MLC
Samsung XP941 256GB PCIe Gen2 x4 Samsung S4LN053X01 19-nm Samsung MLC
Toshiba OCZ RD400 512GB PCIe Gen3 x4 Toshiba TC58 15-nm Toshiba MLC
Toshiba OCZ VX500 512GB SATA 6Gbps Toshiba TC358790XBG 15-nm Toshiba MLC
Toshiba TR200 480GB SATA 6Gbps Toshiba TC58 64-layer Toshiba BiCS TLC
Toshiba XG5 1TB PCIe Gen3 x4 Toshiba TC58 64-layer Toshiba BiCS TLC
Transcend SSD370 256GB SATA 6Gbps Transcend TS6500 Micron or SanDisk MLC
Transcend SSD370 1TB SATA 6Gbps Transcend TS6500 Micron or SanDisk MLC

We tested the Optane SSD 800P duo using Asus’ Hyper M.2 X4 PCIe 3.0 adapter card. All the SATA SSDs were connected to the motherboard’s Z97 chipset. The Plextor M6e was connected to the Z97 via the motherboard’s M.2 slot, which is how we’d expect most folks to run that drive. Since the XP941, 950 Pro, RD400, and 960 Pro require more lanes, they were connected to the CPU via our PCIe adapter card. The 750 Series and DC P3700 were hooked up to the CPU via the same full-sized PCIe slot.

We used the following system for testing:

Processor Intel Core i5-4690K 3.5GHz
Motherboard Asus Z97-Pro
Firmware 2601
Platform hub Intel Z97
Platform drivers Chipset: 10.0.0.13

RST: 13.2.4.1000

Memory size 16GB (2 DIMMs)
Memory type Adata XPG V3 DDR3 at 1600 MT/s
Memory timings 11-11-11-28-1T
Audio Realtek ALC1150 with 6.0.1.7344 drivers
System drive Corsair Force LS 240GB with S8FM07.9 firmware
Storage Crucial BX100 500GB with MU01 firmware

Crucial BX200 480GB with MU01.4 firmware

Crucial MX200 500GB with MU01 firmware

Intel 335 Series 240GB with 335u firmware

Intel 730 Series 480GB with L2010400 firmware

Intel 750 Series 1.2TB with 8EV10171 firmware

Intel DC P3700 800GB with 8DV10043 firmware

Intel X25-M G2 160GB with 8820 firmware

Plextor M6e 256GB with 1.04 firmware

OCZ Trion 100 480GB with 11.2 firmware

OCZ Trion 150 480GB with 12.2 firmware

OCZ Vector 180 240GB with 1.0 firmware

OCZ Vector 180 960GB with 1.0 firmware

Samsung 850 EVO 250GB with EMT01B6Q firmware

Samsung 850 EVO 1TB with EMT01B6Q firmware

Samsung 850 Pro 512GB with EMXM01B6Q firmware

Samsung 950 Pro 512GB with 1B0QBXX7 firmware

Samsung XP941 256GB with UXM6501Q firmware

Transcend SSD370 256GB with O0918B firmware

Transcend SSD370 1TB with O0919A firmware

Power supply Corsair AX650 650W
Case Fractal Design Define R5
Operating system Windows 8.1 Pro x64

Thanks to Asus for providing the systems’ motherboards, to Intel for the CPUs, to Adata for the memory, to Fractal Design for the cases, and to Corsair for the system drives and PSUs. And thanks to the drive makers for supplying the rest of the SSDs.

We used the following versions of our test applications:

Some further notes on our test methods:

  • To ensure consistent and repeatable results, the SSDs were secure-erased before every component of our test suite. For the IOMeter database, RoboBench write, and RoboBench copy tests, the drives were put in a simulated used state that better exposes long-term performance characteristics. Those tests are all scripted, ensuring an even playing field that gives the drives the same amount of time to recover from the initial used state.

  • We run virtually all our tests three times and report the median of the results. Our sustained IOMeter test is run a second time to verify the results of the first test and additional times only if necessary. The sustained test runs for 30 minutes continuously, so it already samples performance over a long period.

  • Steps have been taken to ensure the CPU’s power-saving features don’t taint any of our results. All of the CPU’s low-power states have been disabled, effectively pegging the frequency at 3.5GHz. Transitioning between power states can affect the performance of storage benchmarks, especially when dealing with short burst transfers.

The test systems’ Windows desktop was set at 1920×1080 at 60Hz. Most of the tests and methods we employed are publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

 

Conclusions

Intel’s Optane SSD 800P breaks new ground in our storage test results. Never before have we seen such breathtaking random response times. Nor have we seen a drive stubbornly cling to its peak random-write speeds while writing its entire capacity. The 800P fulfills all the claims Intel makes for it in its marketing materials. Sadly, being unable to boot the drive on our aging test platform means we don’t have enough results to compare it to the rest of our crew in overall performance rankings. And we can’t graph its price-to-performance ratio on our scatter plots, either. But we’ll soldier on and conduct our final analysis even without visual aids.

Even with this Optane duo’s world-class performance in certain workloads, $129 and $199 are incredibly dear prices for 58-GB and 118-GB SSDs. $129 can easily buy a 500-GB-class SATA drive, and $199 is enough to move up to a 500-GB-class NVMe stick or get most of the way to a terabyte of NAND. PC gamers and demanding users don’t just need speed, they need capacity, and the Optane SSD 800P can only fulfill one of those demands.

For the curious, 58 GB might be enough room for a Windows installation and some applications, but it won’t leave a lot of breathing room. The 118-GB drive might offer room for a small music collection or one or two older AAA games. No matter how you slice it, allocating the money to good old NAND instead of 3D Xpoint gets builders a lot more space to stretch their legs, and we imagine cost-per-gigabyte will remain the primary roadblock to Optane adoption.

DIY builders’ wallets are already strained by the ongoing price insanity for graphics cards and RAM, and the 800P duo’s prices are too high to squeeze into prebuilts the way Optane Memory might. The 800P is a non-starter as an upgrade path for older systems that might not boot from an NVMe SSD, too. The drives’ low-queue-depth performance is nice, to be sure, but it’s certainly not worth the price of admission as secondary storage. Cost and capacity are more important considerations in that use case, as we see it.

Intel suggests that RAIDing the larger 800P could be a route to higher capacities and even higher performance from Optane, but we see little point to that exercise when the 280-GB Optane SSD 900P lists for less money than a pair of 118-GB 800Ps, offers even more space, and doesn’t have to contend with any potential headaches of a RAID.

Nonetheless, it’s hard not to get excited for the possibilities that 3D Xpoint opens up. Despite our inability to put it to the test this time around, we’ve seen how Optane’s blistering response times translate into less time waiting around for stuff to load. And the fact that the drives suffer no performance loss as they fill up alleviates a pain point we’ve all had to deal with over the years. NAND is great tech, to be sure, but being forced to treat a portion of a drive’s capacity as unusable to preserve its performance can get frustrating. 

Overall, the high price tag and limited capacity of this duo make it difficult to recommend the Optane 800Ps to all but those curious about the potential of this technology. Intel has got a good thing going here, but it will take some sticker-slashing before we can start heaping awards on Optane. Since 3D Xpoint is Intel and Micron’s proprietary tech, though, it’s highly unlikely any Optane competitors will surface any time soon to exert price pressure. Micron’s QuantX might eventually burst forth from the shadows, but QuantX products will likely be aimed at the data center when they do appear. For the time being, moving beyond NAND in the hobbyist market will remain a rich man’s game.

Comments closed
    • JoeKiller
    • 2 years ago

    Can we get a 900P 480 GB review now?

    • ranocchia
    • 2 years ago

    58 GB does not work even for a boot drive, [url=http://www.china-prices.com/phone/7604/asus-zenfone-3-zoom-ze553kl<]unless[/url<] you do nothing on your pc...

      • chuckula
      • 2 years ago

      Please ban this account for spamming.

    • Voldenuit
    • 2 years ago

    I liked Linus Sebastian’s youtube review of these. “What is it for???”

    • Takeshi7
    • 2 years ago

    I just want 3D Xpoint Compactflash cards so that I can upgrade old computers that don’t support TRIM. Optane gets around the need to TRIM flash.

    • FranzVonPapen
    • 2 years ago

    Why was this not compared to the Intel 900P in the benchmarks? I want to know how much performance, if any, I’m sacrificing by going with the smaller-capacity 800P.

    • meerkt
    • 2 years ago

    Doesn’t appear there’s much gain in common daily uses, so no point considering the price. But what about data retention characteristics?

    BTW, I still don’t get how in the 1-32 QD graphs, the read and write graphs track exactly the same shape. And not only in this drive, but all of them.

    • Bauxite
    • 2 years ago

    Its too late now, but for a few months after launch the 280GB drive had an effective cost of about $160: selling the SC code for ~$200 was quick and easy.

    I picked up 3 (one of each version) as these things make incredible boost drives for ZFS arrays. They can even be a stand-in ramdisk for certain use cases like web caching. They are also the little known killer app for mmos or other games that heavily load lots of data in busy areas. In desktops you can adapt u.2 to m.2 easily, in fact one of the versions ships with a prebuilt cable for this.

    Demand for the codes has dropped a bit since then unfortunately.

    • SecretSquirrel
    • 2 years ago

    I may just pick one of these up — probably the 118GB version. I have a very specific use case for which these would be unbeatable — hosting the working clone of various git repositories for development. On large repositories, git is very sensitive to IO responsiveness, as are software builds. Master copy of the repo can be on github or local on NFS and the working clone on one of these drives. Hard to beat.

    While the price may be a bit high for gamers and the like, for a professional developer, I could see the productivity returns making them well worth the purchase price.

      • Bauxite
      • 2 years ago

      Pros should have bought the 900p already. The drive-caching (attempt #3456 by intel) and general consumer use of the overpriced tiny drives make very little sense other than giving intel an outlet for lesser bins or maintaining a baseline production. But for real I/O work these are killer.

    • Bensam123
    • 2 years ago

    As with SSDs when they first came out, the best option still is a hybrid solution. Not as in having two drives, but they need to figure out how to use this memory as a cache for a normal SSD (also not talking the Optane cache). Like Seagates hybrid drives they released waaaaay too late. This stuff really should be built into Intels current SSD lineup. It would truly make premium drives and add a differentiating point to SSDs besides just another manufacturer rebadging a controller.

      • tacitust
      • 2 years ago

      I don’t think Intel is planning on their Optane products being this non-competitive with SSDs on price for that long. Once they’ve released 256MB and 512MB versions and come within 20% to 30% of the price of premium SSDs, they will find a market without a need for inclusion into hybrid solutions.

        • DavidC1
        • 2 years ago

        Yea… they kinda do.

        They are aiming Optane for the performance segment, and they said they will increase that, while its their NAND SSDs that will have a capacity as the primary focus.

        I knew from the beginning Optane was suited for caching, or for NVRAM. Optane SSDs are too pricey, and the medium is limited by the interface. Caching makes sense because it has low absolute price, and NVRAM allows for full unleashing of the medium, so the price is justified.

        • Bensam123
        • 2 years ago

        It’s more of a ‘why not’ situation. Obviously the optimal solution would just to have a pure drive made out of the tech, but that’s proving troublesome much like with original SSDs. A hybrid is a good stop gap solution till they can reach a point of 1TB worth of this stuff that doesn’t cost $1200.

        It further compounds the issue that there isn’t a clear cut line between SSDs and these drives as the perceptible performance isn’t as big as a SSD vs mechanical as with the original release. This makes people less likely to opt for a essentially a really expensive SSD over a budget one.

    • brucek2
    • 2 years ago

    Seriously no one at Tech Report (or Intel, who surely would have provided one?) was able to come up with a motherboard that would boot this drive so we could get a sensible take on its obvious intended purpose?

    I understand that means results wouldn’t be directly comparable to previous tests. Doing the tests a second time on the older system to get comparable results would a be a nice ADDITION, but not a complete substitute for, reviewing it in its intended use.

    • Mr Bill
    • 2 years ago

    Dedicated Swap File drive, I’m thinking; would be better than OS partition. Or did somebody already suggest it?

      • stdRaichu
      • 2 years ago

      Using these for swap would be a helluva waste IMHO… any scenario where you’re limited by the speed of swap you’d be better off spending the money on more RAM (and I suspect most people on TR have enough RAM that their pagefile is rarely touched in any case).

      I’ve got a couple of the older 64GB ones sitting as a cache in front of some slow platters, and they work brilliantly at improving random IO.

        • bhtooefr
        • 2 years ago

        The only thing that I could see using solid state storage for swap making sense for is if you’re stuck on a 32-bit client Windows release, I guess? Then you can’t get the 36-bit PAE address space, and swap performance starts to matter.

        But if you’re stuck on a 32-bit client Windows release in 2018, you’re probably doing things that don’t need that performant of a system anyway.

          • stdRaichu
          • 2 years ago

          It’s a rare computer that runs enough 32bit apps (maxing out at 2GB memory usage per process, or perhaps 3GB if you’re able to use the switch) to obliterate the ~3.5GB address space enough to be heavily dependant on swap space.

          In any case, a regular SATA SSD would likely provide sufficient IO for that. There’s very little optimisation done in the pagefile routines and random IO in and out of it generally sucks donkeys; I don’t think there was ever any prefetch written for it back in the spinning rust days and I never saw a pagefile that was able to sustain any kind of IO workload when I tested them on SSDs back in the day.

        • derFunkenstein
        • 2 years ago

        For the part I agree, and if we were still in the days where 16GB of RAM was still $100 I’d enthusiastically agree. The current price of memory makes me hedge it a little bit, though. The 32GB version [url=https://www.newegg.com/Product/Product.aspx?item=N82E16820167427<]costs $60[/url<] where that 32GB of RAM would cost over $400. If you're memory limited and constrained for cash it might be a nice stopgap.

        • Mr Bill
        • 2 years ago

        I’ve been moving my swap drive to the physically fastest drive in the system since the days of spinning platters. But you notice I said “Swap File drive”. Some games could entirely fit on such a drive. I’ve stopped gaming (no time anymore) but I used to put my WOW partition on a dedicated fast SSD to speed up file access. It gave me a snappier system enough that I could raid successfully every week prior to Legends coming out.

    • desertdweller
    • 2 years ago

    I read and count on Techreport.com daily but, I am not sure if the reviews of many ssds are helpful anymore. I have a PC with an Intel Optane 900p ssd and after Windows update for Meltdown and Spectre and the motherboard cpu bios firmware update I have lost a significant amount of speed on my PC. I don’t know what good reviews are when they don’t reflect real life experiences that include all current bios and Microsoft updates so we can know accurately what kind of experience to expect when making hardware purchases. The inclusion of the data in these articles illustrating the effect of Meltdown/Spectre mitigations would also be of benefit in pushing Intel and the motherboard/computer OEMs to follow through with needed mitigations for this once in a lifetime computer crisis….and that is what it is and I don’t think enough focus is being placed on the problem in hardware articles. What good is an article that doesn’t give realistic information about the capability of hardware the way we are forced to actually utilize it?

      • tacitust
      • 2 years ago

      Once in a lifetime computer crisis? A little melodramatic, don’t you think? The vast majority of computer users won’t even notice the slowdown — and I’m talking 90% or more, since few people really tax their systems enough to see a difference — and has anyone noticed millions of people complaining about the terrible drop in speed of their cloud-based services? Me neither.

      Of course, there will be some who are impacted, but of those who are, there will be many for which it is a minor inconvenience, others who recently upgraded from a much slower system and will still enjoy the speed of the new one, and so on.

      There will be more articles, no doubt. There will be progress reports, lawsuits to cover, new hardware, new patches, new hacks, and so on, but I for one don’t want the press to be in the crisis mode you seem to think they should be.

        • Shobai
        • 2 years ago

        The ‘once in a lifetime’ bit caught my eye too, but I just assumed he wasn’t long for this world.

        • desertdweller
        • 2 years ago

        Intel, Microsoft, Linux, and Google think this is a crisis. 99% of the world’s computers have a problem that basically leaves them exposed to malware, and those companies are working as fast as they can to “mitigate” the effects. All I am suggesting is that TechReport.com do the right thing and post results that reflect the real-world capabilities of the hardware they review, so consumers reading the reports will know what they are actually buying. After all, Microsoft’s “mitigations” will show up on every Windows 10 computer, and wise owners will update their CPU firmware as soon as they can. So why not have the articles reflect the way future hardware purchases will perform? Truth and accuracy in reporting only result in a better-informed public, and then the computer hardware industry will be more likely to provide ever-better products. It bothers me to see reports of unachievable SSD speeds in actual real-world use. Why not show what we will really get when we upgrade our computers?

          • chuckula
          • 2 years ago

          YouWouldGetMoreUpthumbsIfYouDroppedThoseUselessSpacesFromYourTextWalls.

    • derFunkenstein
    • 2 years ago

    So those dips for the 118GB model on IOMeter, is that a heat issue?

      • cygnus1
      • 2 years ago

      My thoughts as well. Might be interesting to hear what Intel’s comment is on that.

      • Bauxite
      • 2 years ago

      Probably, if not a firmware handicap. The U.2/PCIe-card version, which starts at a little over twice the density, draws at least 10 W under load.

        • derFunkenstein
        • 2 years ago

        Yeah, and it’s not nearly as prevalent in the graph for the 58GB model. A little metal might go a long way to smoothing it out.

      • Mr Bill
      • 2 years ago

      If it were heat-induced, wouldn’t there be some curvature in the spikes rather than the on/off behavior? I’m ignorant of the hardware design, but I vote for a buffering problem.

        • Freon
        • 2 years ago

        Throttling can be abrupt. Imagine an algorithm like “while temp > 90, speed = 20%.”

        The early days of CPU protection circuits looked something like this, à la the Pentium 4.

        The design engineers may have figured it wasn’t that important to implement a smoother gradation.
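        A minimal sketch of that kind of bang-bang control in Python (the trip point and speed fractions are illustrative guesses, not anything Intel has published for the 800P):

```python
# Toy model of abrupt ("bang-bang") thermal throttling:
# "while temp > 90, speed = 20%". All values are hypothetical.
THROTTLE_TEMP_C = 90.0   # hypothetical trip point
FULL_SPEED = 1.0         # fraction of rated throughput when cool
THROTTLED_SPEED = 0.2    # hypothetical hard floor while over temperature

def controller_speed(temp_c: float) -> float:
    """Return the speed fraction for a given die temperature.

    There is no proportional band: the output snaps between two
    levels, which would show up as square-wave dips in a throughput
    trace rather than gentle curvature.
    """
    return THROTTLED_SPEED if temp_c > THROTTLE_TEMP_C else FULL_SPEED
```

        Hysteresis or a proportional band would round those dips off; a bare threshold like this produces exactly the on/off pattern described above.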

    • benedict
    • 2 years ago

    58 GB doesn’t work even as a boot drive unless you do nothing on your PC. Two years after a fresh Windows 7 install on a fresh 100-GB drive, it was 100% full without my installing anything on it. 118 GB might work for a bit, but it won’t be long before you’re forced to find stuff to delete from it to free some space. Other reviews show real-world performance to be equal to other NVMe SSDs. There’s simply no point to this product.

      • synthtel2
      • 2 years ago

      If Windows itself can’t fit in 58 GB, there’s something wrong with that install.

      118 GB would comfortably fit every bit of non-game data I’ve got, including both OSes and a few dozen gigs of dedicated free space for Windows to overflow into.

        • Chrispy_
        • 2 years ago

        Honestly, Windows 7 is so bloated with patches now that it’s genuinely about 40GB on a clean install. The WinSXS folder easily dwarfs the rest of the OS once the recommended eleventy-four-quadzillion patches since the last service pack have been applied 🙁

          • stdRaichu
          • 2 years ago

          Get KB2852386 installed and run cleanmgr and you should be able to shrink the bloat in winsxs considerably.

            • Chrispy_
            • 2 years ago

            Yeah, that patch is on all Windows 7 machines thanks to WSUS 3.0.

            It typically reduces the size of an old Windows install from 55GB to about 40GB. Microsoft stopped caring about Windows 7 install size; it needs a clean service pack but instead we got a “convenience rollup” which makes downloads and restarts less horrendous but otherwise the last time Microsoft cleaned up the OS due to patches was SP1, on February 11th 2011 – AKA “over seven years ago” 🙁

        • benedict
        • 2 years ago

        Do you only have Windows on your PC and nothing else? If the only thing you do on your PC is stare at the desktop, then 58 GB would suffice perfectly.
        Try installing a couple of Visual Studio versions, MS SQL Server, several SDKs, .NET, and additional tools. Install them all on your big drive; they’ll still insist on taking tens of GB off your boot drive. You’ll quickly reach a point where even Windows Update won’t run, because for some unknown reason it also insists on using the boot drive for all its needs.

          • RAGEPRO
          • 2 years ago

          That may be true for Windows 7, but it’s simply not true for Windows 10. I’ve got a fresh copy of Windows 10, installed just days ago on a machine here using a 60-gigabyte SSD for its primary storage, and it has over 35 gigabytes left. Fully updated, too. I do think Windows 10 just takes a lot less space. A lot of it is also simply knowing how to configure your install. Plus, if you’ve done any upgrades or [s<]Service Pack[/s<] Feature Update installs on your machine rather than doing a clean install each time, you’ll be wasting a lot of space, because Windows doesn’t clean itself up very well.

            • benedict
            • 2 years ago

            Well, give it a couple of years without formatting and tell me how well your 60GB boot drive fares.

            • Waco
            • 2 years ago

            I have two netbooks with 60 GB drives and Windows 10. Never formatted, plenty of free space. :shrug:

            • Shobai
            • 2 years ago

            As a counterpoint, my workplace just bought a number of Lenovo 11″ notebooks with “360-degree” hinges [the model number escapes me]. They have ~60GB SSDs, and Win10 reported better than 50GB of free space.

            Being S Mode devices they couldn’t run the application they were intended to, and I had to update them all to current before I could convert to Win 10 Pro.

            Once fully updated and running Win10 Pro, they had under 11GB free space remaining.

            Benedict is not wrong to point out that these are too small; the vast majority of average Joes will encounter this issue.

          • synthtel2
          • 2 years ago

          At the moment:

          Linux install: 166 GB /home/ (mostly games), 12 GB /var/cache/pacman/pkg/, 10 GB everything else
          Windows install: 27 GB games, 29 GB everything else

          My dev tooling mostly falls in the Linux everything else category, and is pretty lightweight. Windows is actually on a 128 gig drive.

        • ptsant
        • 2 years ago

        Restore points and patches etc tend to accumulate. I just removed ~20GB of restore points and update stuff by running cleanup.

      • Takeshi7
      • 2 years ago

      That’s odd because I have a Windows 10 PC that works just fine with a 32GB boot drive.

      • DavidC1
      • 2 years ago

      To be fair, NVMe SSDs have real-world performance equal to older SATA drives.

      • bhtooefr
      • 2 years ago

      Windows 10 servicing can work a lot better in that space. Windows 7 and 8 servicing is a never-ending growth pattern, but Windows 10’s practice of fresh-installing and migrating your applications, data, and settings every 6 months (that’s seriously what it does) nips that in the bud.

        • rnalsation
        • 2 years ago

        Not if you have Installer-folder bloat. Having the .msi install version of MS Office installed still makes your Windows folder huge, and it persists through build updates too.

      • rnalsation
      • 2 years ago

      How big is your “Installer” folder? If it is huge, you might want to try [url=http://www.homedev.com.au/free/patchcleaner<]Patch Cleaner[/url<] (pretty much the third-party replacement for msizap).
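      A quick way to check the folder’s footprint, sketched in Python (the `C:\Windows\Installer` path is the usual location, but the helper itself is just a generic directory-size walk):

```python
import os

def folder_size_gb(path: str) -> float:
    """Sum the sizes of all files under `path`, in gigabytes."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(root, name))
            except OSError:
                pass  # skip files we can't stat (locked or no permission)
    return total / 1e9

# e.g. on Windows: print(folder_size_gb(r"C:\Windows\Installer"))
```

      Explorer under-reports this folder because it’s hidden and system-protected, so a walk like this (run elevated) gives a more honest number.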

      • Ninjitsu
      • 2 years ago

      My current Windows 10 install is at 53GB, including the pagefile. Of course, it can exceed the 58GB threshold when it downloads updates.

      So yeah, 118GB is the minimum for an OS + Program Files drive.

    • UberGerbil
    • 2 years ago

    So the fact that they can get performance this high out of just two PCIe lanes is intriguing. We’ve always kind of known that raw bandwidth isn’t the constraint for a lot of PCIe devices — because the ~doubling in per-lane performance with each PCIe version didn’t do very much in the real world — but this really lays that bare. Looking to the future it also makes the PCIe allocation calculus at the platform level more flexible, and more interesting. We’re not there with XPoint prices yet, but being able to hang (say) eight multi-hundred GB “drives” on a single x16 PCIe card would be wild. More realistically, those motherboard M.2 slots that have looked a little bandwidth-constrained in fact may be a lot more useful than I thought.
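    A back-of-the-envelope check of how much headroom an x2 link actually has, assuming PCIe 3.0’s 8 GT/s per lane and 128b/130b encoding:

```python
# Rough raw bandwidth of an x2 PCIe 3.0 link vs. the 800P's rated speed.
GT_PER_S = 8.0            # PCIe 3.0 transfer rate per lane (GT/s)
ENCODING = 128.0 / 130.0  # 128b/130b line-encoding efficiency
LANES = 2

# GT/s * efficiency -> Gb/s per lane; * lanes; / 8 -> GB/s; * 1000 -> MB/s
raw_mb_s = GT_PER_S * ENCODING * LANES / 8 * 1000  # ~1969 MB/s
```

    Protocol overhead (packet headers, flow control) shaves that further, to very roughly 1.6–1.8 GB/s usable, which is still comfortably above the 800P’s 1450 MB/s rated sequential read.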

    At least until we’re just putting XPoint in all the DIMM slots (and the way DDR prices are going, that day may be coming sooner than later, modulo getting the rest of the hardware and software stack ready).

    • Goty
    • 2 years ago

    I’d love to play with an optane drive, but Intel’s vendor lock-in means I won’t even have the chance to give them some of my money!

      • chuckula
      • 2 years ago

      Unless you want to use these exclusively as boot drives, there’s no lock-in (and it’s unclear whether boot drives are actually “locked in”; it might just be a hardware-support glitch).

      Optane has no problem with Epyc Servers: [url<]https://www.phoronix.com/scan.php?page=article&item=intel-optane-900p&num=2[/url<]

        • cygnus1
        • 2 years ago

        edit: meant to reply to Goty

        • Goty
        • 2 years ago

        Oh, nice; I was under the impression all Optane drives were locked into the Z-series motherboards. Now to see how much lunch money I can save…

      • cygnus1
      • 2 years ago

      The only lock-in is on the tiny Optane Memory cards, the 16GB and 32GB ones, and even those are only “locked in” when used in Intel’s caching mode. They can be used as standard, albeit tiny, disks if you like; I know of people using them as cache devices for home ZFS storage servers. The 800P and 900P only require a system capable of booting NVMe devices and work on AMD or Intel systems, the same requirement as any other NVMe SSD.

    • rnalsation
    • 2 years ago

    [quote<]We used the following versions of our test applications: ... Batman: Arkham Origins Tomb Raider Middle Earth: Shadow of Mordor[/quote<] Did I just miss game load times and boot times or are they not there?

      • moose17145
      • 2 years ago

      I noted that too…

      • UberGerbil
      • 2 years ago

      They’re not there. As was stated in the review:[quote<]While the Optane 800P drives are perfectly usable as secondary storage in older systems, our venerable Asus Z97 Pro refused to acknowledge the existence of the 800P as a boot device in its BIOS, rendering it impossible to conduct boot and load tests against the other drives in our test suite.[/quote<](bottom of the first page) [quote<] Sadly, being unable to boot the drive on our aging test platform means we don't have enough results to compare it to the rest of our crew in overall performance rankings. [/quote<](first para of the conclusions page). And if you've read the boot and game load times page in other reviews, the first thing on the top of that page on every other review is: [quote<]Until now, all of our tests have been conducted with the SSDs connected as secondary storage. This next batch uses them as system drives.[/quote<]That "next batch" is the boot times [i<]and[/i<] the load times. No boot, no boot time tests, and also no game load tests. It looks like they just didn't update the "Notes and Methods" page to remove the notes on those tests.

        • rnalsation
        • 2 years ago

        So I half fail at reading. (tldr I skimmed)

      • Chrispy_
      • 2 years ago

      Those three games are larger than these drives.

        • derFunkenstein
        • 2 years ago

          Together they might be, but individually they should all fit. Tomb Raider may not fit on the 58GB model, I’m not sure, but each of them should fit on the other.

          • Chrispy_
          • 2 years ago

          Well sure, you’re [i<]technically[/i<] correct - but nobody is going to install the OS with zero options, zero applications, and just one game at a time.

            • derFunkenstein
            • 2 years ago

            No, probably not, but I wanted this review to either give me a boner about super-fast load times or shoot me down entirely because something else is the bottleneck.

            • Chrispy_
            • 2 years ago

            Well, in that case, read the 900P review 😉

    • tay
    • 2 years ago

    Could you, either in the introduction or conclusion, talk about the feasibility of using this as tiered storage, similar to how Fusion Drive works on macOS?

      • UberGerbil
      • 2 years ago

      Fusion Drive is just Apple’s name for a hybrid hard drive (aka SSHD), which is old tech that’s been around for a while. Other than some simplification for the user, there’s really not a lot of difference between that and the caching techs that use two separate drives, e.g. Intel’s “Smart Response” technology. Replace the SSD in “Smart Response” with XPoint, and you get Intel’s “Optane Memory Boost” that [url=https://techreport.com/review/31644/intel-gives-hard-drives-a-boost-with-optane-memory<]TR already reviewed[/url<].

      Bundling XPoint into an HD will simplify it a little for end users, but it’s really not a lot more interesting than the old SSD+HD hybrid tech we already had. And that’s even less interesting than it was, because there used to be a use case for it in laptops that could only fit one drive; these days, when it’s easy to shoehorn a gumstick SSD in somewhere, there aren’t many form factors where it makes sense. And what laptops even have spinning HDs, anyway? The cheapest of the cheap ones. Those aren’t candidates for XPoint in any form, at least until it comes down in price to the point where you could just use it as your only drive with no spinning atoms whatsoever.

        • derFunkenstein
        • 2 years ago

        The difference between Fusion drive and either SSHD or Smart Response is that macOS gives you the total storage capacity of both the flash and the mechanical drive. Nothing is duplicated, which is kind of scary because if the NAND dies for some reason you didn’t just lose performance; you lost the whole volume, unless you have some way to salvage the file table (which might be on the mechanical drive but I wouldn’t bet on it).

        Fusion Drive is more like a very meticulously maintained JBOD array than anything else.

          • rnalsation
          • 2 years ago

          Fusion Drive is automated tiered storage. You can also do automated tiered storage on Windows 10, on non-boot volumes.

        • DavidC1
        • 2 years ago

        “Bundling XPoint into an HD will simplify it a little for end-users, but it’s really not a lot more interesting than the old SSD+HD hybrid tech we already had. ”

        Optane doesn’t suffer from dirty-drive or full-drive performance degradation. That makes it *much* better as a caching drive. If they put a small amount where the current DRAM buffer goes on HDDs, we might get most of the benefit of DRAM-buffer speed, but with much larger capacity.

        The biggest HDDs have 256MB DRAM buffers? What about a 4GB 3D XPoint buffer? Assuming IMFT makes 4GB dies.
