Intel’s 750 Series solid-state drive reviewed

PC enthusiasts have a proud tradition of appropriating enterprise-class hardware for personal systems. Stuffing server-grade gear into a desktop can improve performance dramatically in some cases. It also unlocks immediate bragging rights over systems equipped with more pedestrian hardware.

Although hardware makers tend to frown on this practice, some have adopted it as their own. Intel, for example, has long fueled its high-end desktop platform with parts pulled from the server and workstation world. Haswell-E and its predecessors are really just Xeon CPUs repurposed for desktop use and packaged specifically with enthusiasts in mind. The company’s storage division has been getting in on the action, too. Last year, it introduced a 730 Series SSD that’s basically a rebadged datacenter drive with a few tweaks under the hood.

The 730 Series is pretty sweet, but it’s tied to a Serial ATA interface and AHCI protocol that can’t keep up with modern flash memory. That’s why Intel’s latest datacenter drives use a faster PCI Express interface backed by an SSD-specific NVM Express protocol. This new family hit servers last summer, and today, it migrates to the desktop as the 750 Series.

Under the hood, the 750 Series features the same controller as its datacenter counterparts. This proprietary Intel chip has an eight-channel NAND interface at one end and four lanes of PCIe Gen3 goodness at the other. It’s meant to connect directly to PCIe lanes in the CPU rather than through an intermediary chipset on the motherboard. (Intel’s 9-series chipsets are limited to Gen2 speeds, so they’re not fast enough to keep up.)

Four lanes of PCIe Gen3 connectivity offer up to 4GB/s of theoretical bandwidth, which is well above SATA’s top speed—and comfortably beyond the bandwidth of the dual-Gen2 M.2 slots on most motherboards. A wider pipe is only one piece of the puzzle, though. The controller is also based on the NVM Express protocol designed to replace SATA’s ancient AHCI spec.
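For a rough sense of those ceilings, here’s a quick back-of-the-envelope comparison. The line rates and encoding overheads (128b/130b for Gen3, 8b/10b for Gen2 and SATA) are standard interface figures; the script itself is just our illustration, not anything Intel publishes.

```python
# Theoretical per-direction link bandwidth after encoding overhead.
# PCIe Gen3: 8GT/s per lane, 128b/130b; Gen2: 5GT/s, 8b/10b; SATA: 6Gbps, 8b/10b.
def pcie_gen3_bw(lanes):
    return 8.0 * lanes * (128 / 130) / 8   # GB/s

def pcie_gen2_bw(lanes):
    return 5.0 * lanes * (8 / 10) / 8      # GB/s

sata_bw = 6.0 * (8 / 10) / 8               # GB/s

print(f"PCIe Gen3 x4: {pcie_gen3_bw(4):.2f} GB/s")   # ~3.94, the 750 Series' link
print(f"PCIe Gen2 x2: {pcie_gen2_bw(2):.2f} GB/s")   # ~1.00, a typical M.2 slot
print(f"SATA 6Gbps:   {sata_bw:.2f} GB/s")           # ~0.60
```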


Watch our discussion of the Intel 750 Series and other PCIe SSDs on the TR Podcast

AHCI was architected for hard drives based on mechanical platters. Those drives are a low-speed, high-latency proposition compared to the massively parallel NAND arrays behind modern SSDs. NVMe was designed from the ground up for solid-state storage, so it lacks legacy baggage from the mechanical era. It promises better performance through lower overhead and greater scalability. Where AHCI is limited to a single command queue 32 entries deep, NVMe supports up to 64k queues with 64k entries each.

Intel says the 750 Series achieves peak performance at a queue depth of 128, which is much more than AHCI can muster, yet well short of NVMe’s maximum capacity. That’s probably a good place to be at such an early stage in the protocol’s life.
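The arithmetic behind that comparison is simple enough to sanity-check; everything below just restates numbers from the two paragraphs above.

```python
# Outstanding-command capacity implied by each protocol, plus the queue depth
# where Intel says the 750 Series peaks.
ahci_capacity = 1 * 32            # one command queue, 32 entries deep
nvme_capacity = 65_536 * 65_536   # up to 64K queues with 64K entries each
peak_qd = 128                     # Intel's stated sweet spot for the 750 Series

print(f"AHCI: {ahci_capacity} outstanding commands")
print(f"NVMe: {nvme_capacity:,} outstanding commands")
print(f"QD{peak_qd} is {peak_qd // ahci_capacity}x AHCI's ceiling")
```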

Capacity | Die config | Max sequential read (MB/s) | Max sequential write (MB/s) | Max random read (IOps) | Max random write (IOps) | Price | $/GB
400GB | 28 x 16GB | 2200 | 900 | 430k | 230k | $389 | $0.97
1.2TB | 86 x 16GB | 2400 | 1200 | 440k | 290k | $1029 | $0.84

PCIe and NVMe combine to give the 750 Series crushing performance stats for both sequential and random I/O. The flagship 1.2TB config hits 2400MB/s, according to the spec sheet, quadrupling the maximum speed of Serial ATA. Versus last year’s 730 Series, the new hotness is specced for severalfold performance gains on all fronts. You don’t lose too much dropping to the base 400GB model, either.

Although the 750 Series doesn’t match the 2800MB/s sequential peak of Intel’s top datacenter SSD, the DC P3700, it does beat that drive’s random write rating. Credit the firmware, which contains “radical” changes focused on improving random I/O performance. The firmware is also configured to allocate 8-9% of the drive’s total flash capacity to overprovisioned area. That’s similar to the overprovisioning in typical consumer drives but less than the ~25% set aside by the P3700.

Like its enterprise forebear, the 750 Series uses 20-nm NAND fabbed by Intel’s joint flash venture with Micron. The chips weigh in at 16GB apiece, and they’re a lower grade than the top-shelf bin reserved for the P3700. The drive’s endurance rating is much lower as a result, but it’s still more than sufficient for typical consumer usage patterns. The 750 Series is rated to absorb up to 70GB of writes per day over the length of its five-year warranty.
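Put differently, here’s what that rating works out to over the warranty period for the 1.2TB model. The total-bytes-written and drive-writes-per-day figures below are our own arithmetic, not published Intel specs.

```python
# Convert the 70GB/day endurance rating into warranty-length totals.
gb_per_day = 70
warranty_years = 5
user_capacity_gb = 1200           # 1.2TB model

total_tb_written = gb_per_day * 365 * warranty_years / 1000   # ~128 TB
drive_writes_per_day = gb_per_day / user_capacity_gb          # ~0.06 DWPD

print(f"~{total_tb_written:.0f} TB over five years, "
      f"~{drive_writes_per_day:.2f} drive writes per day")
```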

There are no guarantees after the drive’s endurance spec is exceeded, but the 750 Series should be able to write a lot more data before reaching the raw cycling limit of its NAND. (The Intel 335 Series in our SSD Endurance Experiment wrote over 700TB before hitting that media wear threshold.) When the NAND’s limits are reached, the 750 Series is designed to slip into a “logical disable” mode that throttles write speeds severely enough to produce an effective read-only state. Intel’s other consumer SSDs are programmed to brick themselves at the next reboot, preventing users from accessing their data. The 750 Series instead emulates its enterprise counterparts, which remain in read-only mode through subsequent reboots.

Like its server-oriented siblings, the 750 Series comes in two form factors. The half-height, half-length add-in card pictured on the left slots into standard PCIe slots, and Intel throws in a full-height backplate for typical desktop enclosures. The 2.5″ version on the right is meant for traditional drive bays, though its 15-mm thickness requires more headroom than most SSDs.

Both variants use prominent heatsinks to cool the controller and NAND. The 750 Series is rated for peak power draw of 25W, so there’s a lot of heat to dissipate. Thanks to these hunks of finned metal, the drive is rated to withstand ambient temperatures up to 70°C, an important consideration for systems crowded with multiple graphics cards and other high-end components.

Instead of connecting via PCIe slot, the 2.5″ unit has an SFF-8639 jack and associated cabling. Intel ships the drive with an 18″ shielded cable from a company called Amphenol. This cable pipes signaling and clock data for the quad PCIe lanes to a smaller, square-shaped SFF-8643 connector that plugs into the host system. Instead of drawing power from that connection, the cable pulls juice from a standard PSU SATA connector.

The cabled solution purportedly delivers identical performance to the add-in card. It also leaves PCIe slots open for multiple graphics cards, which is why Intel believes the cabled version will end up being more popular than the card.

Intel says the SFF-8643 host connector can be mounted on motherboards in numerous orientations, including an edge-facing config to facilitate clean cable routing. We haven’t seen any motherboards with the requisite SFF jack onboard, though. Asus’ new Sabertooth X99 does come with a compatible connector, but the port lives on a bundled adapter card rather than on the motherboard itself. It may take some time before truly native implementations arrive.

Even with the add-in card, motherboard firmware still needs the right hooks to boot from the 750 Series and other NVMe SSDs. Intel has been working with the major firmware vendors to integrate support for NVMe drives, and UEFI version 2.3.1 has everything that’s required. Motherboard makers have to roll that revision into the firmware for their individual products, of course, but Intel tells us all Z97 and X99 boards should have access to the necessary update. Depending on the firmware, older UEFI-based boards may also work with the 750 Series.

Once you have a compatible motherboard, the next requirement is an operating system with NVMe support. Windows 8.1 has native drivers built in, and Win7 adds them via hotfix. Intel offers its own NVMe drivers, as well, and it claims they’re faster than the ones Microsoft supplies. We used the Intel drivers for all our testing. Speaking of which, let’s dig into the performance analysis on the next page.

 

The competition

To gauge the 750 Series’ performance, we tested the 1.2TB add-in card against three PCIe SSDs: Samsung’s XP941 256GB, Plextor’s M6e, and Intel’s own DC P3700 800GB. The Plextor and Samsung drives have similar per-gig pricing to the 750 Series, but they’re confined to much smaller M.2 gumsticks. They also have slower Gen2 interfaces—two lanes for the M6e and four for the XP941—and AHCI underpinnings.

The DC P3700 (left), M6e (middle), and XP941 (right)

The P3700 is priced around $3/GB, so it’s obviously in a different league. We’ve included it more for the sake of sibling rivalry than realistic competition. It will be interesting to see how the scaled-back consumer derivative compares.

Recent Serial ATA SSDs from Crucial, Intel, OCZ, and Samsung fill out the rest of the field. That group also includes a SATA 3Gbps drive from the old-timer’s league: Intel’s X25-M 160GB, which was released way back in 2009. The X25-M is marked with a darker shade of gray, while the PCIe SSDs are colored to set them apart from the SATA pack.

IOMeter — Sequential and random performance

IOMeter fuels much of our new storage suite, including our sequential and random I/O tests. These tests are run across the full extent of the drive at two queue depths. The QD1 tests simulate a single thread, while the QD4 results emulate a more demanding desktop workload. (87% of the requests in our old DriveBench 2.0 trace of real-world desktop activity have a queue depth of four or less.) Clicking the buttons below the graphs switches between the different queue depths.

Our sequential tests use a relatively large 128KB block size.



The 750 Series lands in second place throughout our sequential tests, wedged between the faster P3700 and the slower XP941. Although it narrows the gap to the datacenter drive in the four-deep read test, it’s mostly stuck at the mid-point between the two.

That said, the 750 Series hits well over 1200MB/s in the QD1 test, more than doubling the performance of the SATA drives. And it’s even faster at QD4. Regardless of the queue depth, the XP941 is at least 200MB/s behind with reads and 500MB/s behind with writes.

Next, we’ll turn our attention to performance with 4KB random I/O. We’ve reported average response times rather than raw throughput, which we think makes sense in the context of system responsiveness.
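For readers who think in IOps or MB/s instead, response time, queue depth, and throughput are tied together by Little’s law, so the translation is straightforward. The latency value below is a made-up placeholder, not one of our results.

```python
# Little's law for storage: IOps ~= outstanding requests / average latency.
block_size_kib = 4
queue_depth = 4
avg_latency_s = 100e-6        # hypothetical 100-microsecond average response

iops = queue_depth / avg_latency_s
throughput_mb_s = iops * block_size_kib / 1024

print(f"{iops:,.0f} IOps -> {throughput_mb_s:.0f} MB/s "
      f"for 4KB transfers at QD{queue_depth}")
```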



Although there’s some intermingling between the PCIe and SATA SSDs, Intel’s NVMe drives continue to occupy the top spots, with the 750 Series trailing the P3700 slightly. Both are consistently ahead of their PCIe competition, and their advantages are especially acute with writes at QD4.

The preceding tests are based on the median of three consecutive three-minute runs. SSDs typically deliver consistent sequential and random read performance over that period, but random write speeds worsen as the drive’s overprovisioned area is consumed by incoming writes. We explore that decline on the next page.

 

IOMeter — Sustained and scaling I/O rates

Our sustained IOMeter test hammers drives with 4KB random writes for 30 minutes straight. It uses a queue depth of 32, which should result in higher speeds that saturate each drive’s overprovisioned area more quickly. This lengthy—and heavy—workload isn’t indicative of typical PC use, but it provides a sense of how the drives react when they’re pushed to the brink.

We’re reporting IOps rather than response times for these tests. Click the buttons below the graph to switch between SSDs.


Note that there are two sets of results for the 750 Series. The first looks normal, with an initial period of extremely high performance as the drive’s overprovisioned area captures incoming writes. When that area becomes saturated, the write rate plummets, and the march toward a slower steady state begins. Sometimes, though, the 750 Series gets stuck around 200 IOps for the first half of the test. The same thing can happen to the P3700, too. The initial slowdown doesn’t seem to be related to temperature or to the number of IOMeter workers hammering the drive with writes.

Intel recommends pre-conditioning the 750 Series with an hour’s worth of sequential writes immediately before running IOMeter performance tests. But we didn’t pre-condition the competition, so the 750 Series didn’t get any special treatment. Perhaps that explains the anomalous results. In any case, we’re working with Intel to trace the source of the issue. We’ll update this section when we get to the bottom of it.

Apart from the anomaly, the 750 Series looks very strong. It hits nearly the same peak as the P3700, though it doesn’t have enough overprovisioned area to maintain the high for as long. Performance is reasonably consistent after the initial decline, and the IOps even tick up slightly toward the end of the test.

To show the data in a slightly different light, we’ve graphed the peak random write rate and the average, steady-state speed over the last minute of the test.

The 750 Series peaks nearly 2X higher than the next drive down the line, and it’s ahead of the M6e and XP941 by even greater margins. That lead narrows considerably in the final minute of the test, at least versus the top SATA drives. The M6e and XP941 fall to a whopping 5-6X slower than the 750 Series over the long haul, in part because their lower total capacities have less overprovisioned area.

Our final IOMeter test examines performance scaling across a broad range of queue depths. We ramp all the way up to a queue depth of 128. Don’t expect AHCI-based drives to scale past 32, though; that’s the max depth of their native command queues.

We use a database access pattern comprising 66% reads and 33% writes, all of which are random. The test runs after 30 minutes of continuous random writes that put the drives in a simulated used state. Click the buttons below the graph to switch between the different drives. And note that the P3700 plot uses a much larger scale. We’ll compare all the PCIe drives on that scale in a moment.
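For the curious, the sketch below captures the general shape of that kind of mixed random workload. It’s a rough Python stand-in rather than our actual IOMeter configuration: it targets an ordinary file, ignores O_DIRECT and alignment concerns, relies on os.pread/os.pwrite (so it won’t run on Windows), and the path, file size, thread count, and runtime are all placeholder values.

```python
# Illustrative only: worker threads issue a 66% read / 34% write mix of
# random 4KB accesses against a scratch file and report aggregate IOps.
# Page-cache effects mean this overstates what the raw drive would deliver.
import os, random, threading, time

PATH = "scratch.bin"        # placeholder scratch file on the drive under test
FILE_SIZE = 1 << 30         # 1GiB working area
BLOCK = 4096                # 4KB accesses
WORKERS = 4                 # stands in (loosely) for queue depth
RUNTIME = 30                # seconds

ops = [0] * WORKERS

def worker(idx):
    fd = os.open(PATH, os.O_RDWR)
    payload = os.urandom(BLOCK)
    rng = random.Random(idx)
    deadline = time.time() + RUNTIME
    while time.time() < deadline:
        offset = rng.randrange(FILE_SIZE // BLOCK) * BLOCK
        if rng.random() < 0.66:
            os.pread(fd, BLOCK, offset)       # 66% reads
        else:
            os.pwrite(fd, payload, offset)    # 34% writes
        ops[idx] += 1
    os.close(fd)

with open(PATH, "wb") as f:   # create the (sparse) working file once
    f.truncate(FILE_SIZE)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(WORKERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"{sum(ops) / RUNTIME:,.0f} IOps across {WORKERS} workers")
```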


Were it not for the P3700, I could say that the 750 Series completely dominates the field. “Destroying everything but the datacenter drive” doesn’t quite have the same ring to it.

The 750 Series boasts higher I/O rates right out of the gate, and it continues to ramp up across the full extent of the test. As an added bonus, performance scales particularly quickly at the lower queue depths most indicative of typical desktop workloads.

Somewhat surprisingly, the SATA-based OCZ Vectors come the closest to matching the 750 Series here. The M6e and XP941 have middling I/O rates at best, a point driven home by the graphs below. These plots compare just the PCIe drives on the expanded scale required to capture the P3700’s otherworldly I/O rates. Clickety click to switch between total, read, and write IOps.


At its peak, the P3700 manages about 3X the IOps of the 750 Series. That sounds about right given the similar difference in dollars per gigabyte.

 

TR RoboBench — Real-world transfers

RoboBench trades synthetic tests with random data for real-world transfers with a range of file types. Developed by our in-house coder, Bruno “morphine” Ferreira, this benchmark relies on the multi-threaded robocopy command built into Windows. We copy files to and from a wicked-fast RAM disk to measure read and write performance. We also cut the RAM disk out of the loop for a copy test that transfers the files to a different location on the SSD.

Robocopy uses eight threads by default, and we’ve also run it with a single thread. Our results are split between two file sets, whose vital statistics are detailed below. The compressibility percentage is based on the size of the file set after it’s been crunched by 7-Zip.

File set | Number of files | Average file size | Total size | Compressibility
Media | 459 | 21.4MB | 9.58GB | 0.8%
Work | 84,652 | 48.0KB | 3.87GB | 59%

The media set is made up of large movie files, high-bitrate MP3s, and 18-megapixel RAW and JPG images. There are only a few hundred files in total, and the data set isn’t amenable to compression. The work set comprises loads of TR files, including documents, spreadsheets, and web-optimized images. It also includes a stack of programming-related files associated with our old Mozilla compiling test and the Visual Studio test on the next page. The average file size is measured in kilobytes rather than megabytes, and the files are mostly compressible.
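In case the compressibility column needs unpacking: it’s the percentage by which the file set shrinks when compressed. The sketch below shows the general idea, using Python’s lzma module as a stand-in for 7-Zip and compressing per file rather than as one solid archive, so its output wouldn’t match our figures exactly; the path is a placeholder.

```python
# Rough compressibility estimate: how much smaller does the file set get?
import lzma, pathlib

def compressibility(root):
    raw = packed = 0
    for p in pathlib.Path(root).rglob("*"):
        if p.is_file():
            data = p.read_bytes()
            raw += len(data)
            packed += len(lzma.compress(data))
    return 100 * (1 - packed / raw)

print(f"{compressibility('work_set'):.1f}% smaller after compression")
```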

RoboBench’s write and copy tests run after the drives have been put into a simulated used state with 30 minutes of 4KB random writes. The pre-conditioning process is scripted, as is the rest of the test, ensuring that drives have the same amount of time to recover.
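A bare-bones approximation of a RoboBench read pass might look like the snippet below: time a multithreaded robocopy run from the drive under test to the RAM disk and convert the elapsed time into MB/s. The drive letters are placeholders, and this is only the gist of the method, not our actual harness.

```python
# Time an 8-thread robocopy run and convert it into MB/s. robocopy's /MT flag
# enables multithreaded copies; it also returns nonzero exit codes on success,
# hence check=False.
import pathlib, subprocess, time

SRC = r"E:\media"        # placeholder: file set on the drive under test
DST = r"R:\scratch"      # placeholder: destination on the RAM disk

def dir_size_mb(path):
    return sum(p.stat().st_size
               for p in pathlib.Path(path).rglob("*") if p.is_file()) / 1e6

start = time.time()
subprocess.run(["robocopy", SRC, DST, "/E", "/MT:8", "/NP"], check=False)
elapsed = time.time() - start

print(f"{dir_size_mb(SRC) / elapsed:.0f} MB/s with eight robocopy threads")
```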

Read speeds are up first. Click the buttons below the graphs to switch between one and eight threads.



The 750 Series scores a rare victory over the P3700 in the eight-thread media test, but its advantage is slim, and the two NVMe drives are otherwise closely matched. They have sizable leads over the competition in all but the single-threaded work test, where all the SSDs are tightly bunched.

Samsung’s XP941 is by far the biggest threat overall. It nearly catches the 750 Series in the multi-threaded media test, and it’s clearly the faster of the PCIe alternatives.

Next, we’ll look at write speeds.



Score another win for the 750 Series, this time in the single-threaded work test, where the stakes are admittedly low. The P3700 regains the lead when the thread count increases, and the XP941 almost sneaks into second place. Samsung’s M.2 drive is nowhere near the NVMe duo in the media tests, though. Write speeds are much higher in those tests, and so are the gaps between the 750 Series and its neighbors.

Last, but not least, we’ll see what happens when reads and writes collide in copy tests.



Reading and writing simultaneously produces an exaggerated version of the pattern established in the previous tests. The 750 Series and P3700 have comfortable leads throughout, and their advantages are especially pronounced in the media and eight-thread tests.

Once again, the closest competition is the XP941. Closest doesn’t necessarily mean close, though. The Samsung drive copies media files at almost half the speed of the 750 Series, and it’s over 100MB/s behind in the multithreaded work test.

 

Boot times

Thus far, all of our tests have been conducted with the SSDs connected as secondary storage. This next batch uses them as system drives.

We’ll start with boot times measured two ways. The bare test depicts the time between hitting the power button and reaching the Windows desktop, while the loaded test adds the time needed to load four applications—Avidemux, LibreOffice, GIMP, and Visual Studio Express—automatically from the startup folder. Our old boot tests focused just on the time required to load the OS, but these new ones cover the entire process, including drive initialization.

Despite besting most of its competition with ease in our other tests, the 750 Series is by far the slowest to boot the system. It lags more than 10 seconds behind most of the competition in both tests, and it loses even more ground to the M.2 leaders.

We used a slightly different motherboard revision with the NVMe SSDs, but that didn’t slow the P3700 by the same margin, so it doesn’t explain the 750 Series’ sluggishness.

Load times

Next, we’ll tackle load times with two sets of tests. The first group focuses on the time required to load larger files in a collection of desktop applications. We open a 790MB 4K video in Avidemux, a 30MB spreadsheet in LibreOffice, and a 523MB image file in GIMP. In the Visual Studio Express test, we open a 159MB project containing source code for the LLVM toolchain. Thanks to Rui Figueira for providing the project code.

None of the SSDs set themselves apart in our first batch of load tests. Maybe the situation will change with games.

Nope. Nothing to see here… except for the six-year-old X25-M G2 matching the load times of the latest SSDs, including Intel’s wicked-fast PCIe drives. Kinda puts things into perspective, doesn’t it?

 

Test notes and methods

Here are the essential details for all the drives we tested:

Drive | Interface | Flash controller | NAND
Crucial BX100 500GB | SATA 6Gbps | Silicon Motion SM2246EN | 16-nm Micron MLC
Crucial MX200 500GB | SATA 6Gbps | Marvell 88SS9189 | 16-nm Micron MLC
Intel X25-M G2 160GB | SATA 3Gbps | Intel PC29AS21BA0 | 34-nm Intel MLC
Intel 335 Series 240GB | SATA 6Gbps | SandForce SF-2281 | 20-nm Intel MLC
Intel 730 Series 480GB | SATA 6Gbps | Intel PC29AS21CA0 | 20-nm Intel MLC
Intel 750 Series 1.2TB | PCIe Gen3 x4 | Intel CH29AE41AB0 | 20-nm Intel MLC
Intel DC P3700 800GB | PCIe Gen3 x4 | Intel CH29AE41AB0 | 20-nm Intel MLC
Plextor M6e 256GB | PCIe Gen2 x2 | Marvell 88SS9183 | 19-nm Toshiba MLC
Samsung 850 EVO 250GB | SATA 6Gbps | Samsung MGX | 32-layer Samsung TLC
Samsung 850 EVO 1TB | SATA 6Gbps | Samsung MEX | 32-layer Samsung TLC
Samsung 850 Pro 500GB | SATA 6Gbps | Samsung MEX | 32-layer Samsung MLC
Samsung XP941 256GB | PCIe Gen2 x4 | Samsung S4LN053X01 | 19-nm Samsung MLC
OCZ Vector 180 240GB | SATA 6Gbps | Indilinx Barefoot 3 M10 | A19-nm Toshiba MLC
OCZ Vector 180 960GB | SATA 6Gbps | Indilinx Barefoot 3 M10 | A19-nm Toshiba MLC

All the SATA SSDs were connected to the motherboard’s Z97 chipset. The M6e was connected to the Z97 via the motherboard’s M.2 slot, which is how we’d expect most folks to run that drive. Since the XP941 requires more lanes, it was connected to the CPU via a PCIe adapter card. The 750 Series and DC P3700 were hooked up to the CPU via the same full-sized PCIe slot.

If you’ve made it this far, you might enjoy a few more shots of the 750 Series.

We used the following system for testing:

Processor: Intel Core i5-4690K 3.5GHz
Motherboard: Asus Z97-Pro
Firmware: 1304
Platform hub: Intel Z97
Platform drivers: Chipset 10.0.0.13, RST 13.2.4.1000
Memory size: 16GB (2 DIMMs)
Memory type: Adata XPG V3 DDR3 at 1600 MT/s
Memory timings: 11-11-11-28-1T
Audio: Realtek ALC1150 with 6.0.1.7344 drivers
System drive: Corsair Force LS 240GB with S8FM07.9 firmware
Storage:
  Crucial BX100 500GB with MU01 firmware
  Crucial MX200 500GB with MU01 firmware
  Intel 335 Series 240GB with 335u firmware
  Intel 730 Series 480GB with L2010400 firmware
  Intel DC P3700 800GB with 8DV10043 firmware
  Intel X25-M G2 160GB with 8820 firmware
  Plextor M6e 256GB with 1.04 firmware
  OCZ Vector 180 240GB with 1.0 firmware
  OCZ Vector 180 960GB with 1.0 firmware
  Samsung 850 EVO 250GB with EMT01B6Q firmware
  Samsung 850 EVO 1TB with EMT01B6Q firmware
  Samsung 850 Pro 500GB with EMXM01B6Q firmware
  Samsung XP941 256GB with UXM6501Q firmware
Power supply: Corsair Professional Series AX650 650W
Operating system: Windows 8.1 Pro x64

Thanks to Asus for providing the systems’ motherboards, Intel for the CPUs, Adata for the memory, and Corsair for the system drives and PSUs. And thanks to the drive makers for supplying the rest of the SSDs.

We used the following versions of our test applications:

Some further notes on our test methods:

  • To ensure consistent and repeatable results, the SSDs were secure-erased before every component of our test suite. For the IOMeter database, RoboBench write, and RoboBench copy tests, the drives were put in a simulated used state that better exposes long-term performance characteristics. Those tests are all scripted, ensuring an even playing field that gives the drives the same amount of time to recover from the initial used state.

  • We run virtually all our tests three times and report the median of the results. Our sustained IOMeter test is run a second time to verify the results of the first test and additional times only if necessary. The sustained test runs for 30 minutes continuously, so it already samples performance over a long period.

  • Steps have been taken to ensure the CPU’s power-saving features don’t taint any of our results. All of the CPU’s low-power states have been disabled, effectively pegging the frequency at 3.5GHz. Transitioning between power states can affect the performance of storage benchmarks, especially when dealing with short burst transfers.

The test systems’ Windows desktop was set at 1920×1200 at 60Hz. Most of the tests and methods we employed are publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

 

Conclusions

The 750 Series SSD is enterprise trickle-down elevated to a high art. This descendant of Intel’s latest datacenter drives is a rare beast even in PCI Express circles. With a four-lane Gen3 interface backed by the next-gen NVM Express protocol, the 750 Series delivers the future of solid-state storage today.

And oh, what a glorious future it is.

Performance is the real story. The 750 Series may not match Intel’s top server SSD in every workload, but it’s largely in the same ballpark for a third of the price. There’s no contest versus the Samsung XP941 and Plextor M6e, which have slower sequential speeds and much lower random rates. Those drives cost about the same per gigabyte, which makes the 750 Series’ premium over ordinary SATA SSDs easy to justify on speed alone.

The keys to such a sweet ride come with some strings attached, though. The biggest challenge is finding desktop tasks that can harness all the horsepower. While the 750 Series delivers incredible performance in targeted benchmarks and demanding sequential transfers, it doesn’t load big files, applications, or games appreciably faster than older SATA SSDs. Storage-bound workloads are required to get the most out of the drive.

There’s also the matter of motherboard compatibility. Although the 750 Series should work in all Z97 and X99 boards, only the latter have enough Gen3 lanes to avoid cannibalizing connectivity to a discrete graphics card. Support for the 2.5″ version’s fancy cable is spotty right now, too. It feels like the 750 Series is a little ahead of its time, and honestly, that’s part of the appeal. One of the best things about co-opting enterprise gear is pushing a system closer to the leading edge.

Another benefit is the additional goodies that tend to come with premium products. The 750 Series’ datacenter origins bestow power-loss protection and blinky diagnostic LEDs, and Intel kicks in a five-year warranty with a high endurance rating. The persistent read-only behavior at the end of the NAND’s life is a comforting bonus, too, as is compatibility with Intel’s excellent Toolbox utility.

As a high-end indulgence, the 750 Series ultimately posts the right numbers, ticks the right boxes, and incites the right emotional responses. It may not provide a palpable improvement for everyday desktop tasks in the same way the first SSDs delivered us from the sluggishness of mechanical drives, but it’s truly next-level storage by every other measure.

Comments closed
    • Delphis
    • 5 years ago

    File/Web/Database Server usage …. *drool*

    • stetrick
    • 5 years ago

    This seems like sloppy measurement to me.

    Did you clone from a single disk, or completely reinstall?
    Did you clear out all prefetches?
    Are you certain that each configuration is actually reading/writing the same amount of data from the disk under test?
    Reboot between runs?
    Did you rerun windows experience index? It has different behaviors if it thinks there is an SSD.
    Are you just testing the startup movies?

    • Sorny
    • 5 years ago

    Wouldn’t the NVMe drives significantly improve the responsiveness/throughput of having multiple VM’s on a single drive?

    Having multiple O/S’s doing work on a single drive has been a major bottleneck so far.

      • Vaughn
      • 5 years ago

      Are you talking about vm’s on a hard drive or ssds?

        • Sorny
        • 5 years ago

        I was referring to having multiple actively running VM’s (e.g., VMWare Workstation) on a single drive – SSD.

        I was wondering if having the NVMe protocol would provide a big boost in performance for each VM running on that SSD in that situation (vs. NVMe being used by a single O/S which is what most of the conversation has been about).

    • tsk
    • 5 years ago

    Hardwarecanucks has demonstrated some real world performance gains in their review, check out pages 13 and 14. [url]http://www.hardwarecanucks.com/forum/hardware-canucks-reviews/69131-intel-ssd-750-series-review.html[/url] This drive is certainly looking delicious, but I don't need it, surely I can resist the urge til prices come down... *throws money at screen*

      • smilingcrow
      • 5 years ago

      Interesting, but those tests seem atypical to me, so I’d rather focus on more typical real-world tests.

    • Bensam123
    • 5 years ago

    I know this has been brought up a few times over the years, but at these price points have you considered throwing in some raid 0 configs? SSDs definitely have been shown to be quite reliable, pairing two up or a couple should yield some interesting results. Even if that’s not something you’d see in a datacenter, it’s something you’d see in some of our computers.

    • Kurlon
    • 5 years ago

    So, in the ye olden days you could coax Windows into installing onto a ‘non bootable’ device as long as you had a sacrificial bootable volume to hold a 100mb or so bootloader + drivers partition. Does that trick still work with Win 8 and later? I’d be willing to use my old 60GB Vertex as a bootloader holder to put the main OS on the Intel…

    • ermo
    • 5 years ago

    How many parallel channels of NAND are needed to saturate SATA read speeds these days? 4x? 8x?

    To me it seems like we’re getting to the point where there is little sense in NOT buying a 128 or 256GB Crucial BX100 SSD drive, because everything else is basically limited by the SATA port rather than the on-chip hardware…?

      • Krogoth
      • 5 years ago

      AHCI overhead on top of that, but the problem is that bottleneck isn’t being felt in mainstream usage patterns. 😉

    • Klimax
    • 5 years ago

    I’ve got quite a few nice workloads which are perfectly suitable for this and its DC big brother. One of those loads can generate a 50+ deep queue in a few seconds. (Mass update of checkouts of open source code repositories)

    Of course video is next and compiling massive projects right after that… (WxWidgets, TortoiseSVN, …) and some other.

    • smilingcrow
    • 5 years ago

    Fast SSDs are starting to look like fast RAM; generally offering no significant gains in everyday usage ignoring gaming with IGPs.
    But still people are waiting for DDR4 for some reason as if it’s going to be significant performance wise.
    And others seem excited about even faster SSDs than this as if they will magically make a difference.

      • Andrew Lauritzen
      • 5 years ago

      Ultimately it’s all about balance in a system – you need to move every component forward to see real gains. That’s not to say the gains aren’t there, but if you use ~modern hardware and replace one component, it’s not necessarily going to shift the bottlenecks entirely.

      That said – on your RAM example – take a 5960x running a workload across its 8 cores and compare that with the stock quad-channel DDR4 to dual channel DDR4 clocked down to DDR3 speeds and you absolutely will see a large difference 🙂 You need those extra channels/speed to maintain a good ratio/”balance” between the computational throughput and the memory bandwidth. The same is true for other components in the system.

      So I agree that any single component is not going to “magically” make a difference in most balanced machines, but ultimately we need most of them to get faster to improve performance overall (assuming you’re even running up against performance limits; if not then the entire conversation of compute upgrades is kind of irrelevant :)).

        • smilingcrow
        • 5 years ago

        Your example is of a workstation class 8 core $1k CPU with quad memory channels so accounts for less than 1% of the home market. Workstations and servers often benefit from more memory bandwidth which is why they have it.
        But I’ve seen home users talking about waiting for DDR4 with Skylake as if it’s going to be a big deal for home users. I fail to see the excitement when you look at memory scaling with DDR3 for home users on dual channel motherboards. They often belong to a class of users that are overly impressed with synthetic benchmarks rather than focussing on real world performance.
        Hey it’s their money and if they want to spend it on cutting edge tech that looks great on paper but offers tiny gains in practice for much more money that’s their gig.
        DDR4 will be significant in terms of power consumption at least.

          • Andrew Lauritzen
          • 5 years ago

          > Your example is of a workstation class 8 core $1k CPU with quad memory channels so accounts for less than 1% of the home market.

          Absolutely, but that’s my point. You scale all of the parts of the system together and try to keep the *ratios* in check, not the absolute numbers. I am agreeing that just scaling one component is not going to be a panacea. Eventually if 6 and 8 cores become the norm in the consumer space then by that time we’ll need similar amounts of bandwidth to feed them.

          > But I’ve seen home users talking about waiting for DDR4 with Skylake as if it’s going to be a big deal for home users

          Well if you really want to talk *mass* market, is any of this really a big deal for home users these days? But ultimately – all else being equal – the RAM just needs to be as much faster as the CPU is, etc.

            • smilingcrow
            • 5 years ago

            ” You scale all of the parts of the system together and try to keep the *ratios* in check”

            Well that’s the illusion that I’m calling out as being BS. If you look at the relationship between CPU, RAM, Storage and GPU there is no constant ratio that needs to kept in check. The biggest constraints for home users seems typically to be GPU and/or CPU. If you have enough RAM of even modest speed and a modern SATA 3 SSD then the bottleneck assuming you even have one is either CPU or GPU or both. Hence my original post that SSD performance has become as irrelevant beyond a fairly low base level as RAM performance is.
            Even if consumer CPUs move from 4 to 6 or 8 cores that doesn’t mean that typical consumer applications will also require 50 or 100% more memory bandwidth to keep up. I’ve seen no hard data to suggest that is the case and all the data that I have seen points to the opposite.
            If you have a link to data which suggests otherwise please post it.

            • Andrew Lauritzen
            • 5 years ago

            > If you have enough RAM of even modest speed and a modern SATA 3 SSD then the bottleneck assuming you even have one is either CPU or GPU or both.

            Right but what you’re calling “modest speed” you’re constraining to a pretty narrow ratio. Try ripping one of the channels of memory out of a quad core and see if you don’t lose performance when all four cores are being utilized – and that’s only 2x 🙂

            I’m the first to agree that the minor deltas in memory bandwidth are not as important these days, largely due to big caches on CPUs. That said, you still need enough to cover the streaming component of workloads that don’t fit in the caches.

            > I’ve seen no hard data to suggest that is the case and all the data that I have seen points to the opposite. If you have a link to data which suggests otherwise please post it.

            Have you ever seen someone review a HSW-E with only 2 channels of DDR4 for instance? The reality is Intel wouldn’t build much more complicated memory controllers and motherboards if quad channel wasn’t useful. And indeed, all reviewers will pretty much use the stock config when testing, but don’t confuse that and memory overclocking with a thorough testing of memory bandwidth sensitivity! I’ve yet to see a review that really tests across any reasonably broad range both up *and* down from the stock configurations.

            That’s a much more telling story than the reviews of very high clocked memory (where they’ve also had to drop the timings significantly I might add). Obviously if you start from a nice balanced system and increase one component it’s not going to get much faster; that’s exactly the point I’m trying to make. But go ahead and *decrease* a component and see if it doesn’t get slower – that’s really the definition of it being in balance.

            And yes I have lots of data to this end from my work, but none that I can post unfortunately. Suffice it to say that there are a fair number of single-channel laptops in the wild (thanks OEMs!) and I’ve seen games run at literally half the speed in that config. The same thing happens on the upper end if you load up a HSW-E’s 8 cores with work, although games of course will not stress such a chip (which is incidentally why consumer chips haven’t seen much pressure to go beyond 4 cores).

            • smilingcrow
            • 5 years ago

            I mentioned gaming with IGPs in the first post.
            You can talk about workstations all you like but that’s a tiny niche so has no relevance to typical everyday usage.

            • Andrew Lauritzen
            • 5 years ago

            > You can talk about workstations all you like but that’s a tiny niche so has no relevance to typical everyday usage.

            If your argument is that “everyday usage” doesn’t require high end computing power then I fully agree. But that’s not a question of a specific component in the system – “everyday usage” works perfectly fine on a 15W dual core. Absolutely if you just want to surf the web and do office stuff or whatever you don’t need anything fancy at all.

            My point is that if you’re talking about actually stressing a system then the ratios of performance do matter. It’s very easy to model and test this stuff and so you can trust that there’s a reason why – for instance – Intel hasn’t pushed DDR3 memory speeds up much in the stock configs as you note, but also a reason why when you add more cores and so on you do need more bandwidth.

            Honestly I don’t think we’re disagreeing here, I’m just pointing out that this is more an issue of “typical everyday use” not being very demanding vs. some large single bottleneck in modern systems.

            PS: The down-voting is getting a bit immature dude… we’re having a polite discussion, no?

      • the
      • 5 years ago

      There are visions of the future of computing where there is no distinct storage system. Rather, all data exists in main memory and is directly accessible. This is an area of active research right now, as technology is on the edge of making this feasible in certain high-end niches. Some of these projects see DRAM as a large cache for SSD-backed storage while others rely exclusively on other non-volatile memory technologies.

      Regardless, software and operating systems will need to be rewritten for this new paradigm, which means it is still at least a decade away from going mainstream.

        • smilingcrow
        • 5 years ago

        It will be interesting to see how that plays out as even current cutting edge RAM and SSDs don’t seem to impact the performance of most everyday home applications.

    • Saber Cherry
    • 5 years ago

    I’d rather buy a drive that consistently lands in ~4th place than one that sometimes hits 2nd, and sometimes is dead last by a large margin. Seems flaky.

    Also, until proven otherwise, I will not believe anything that any SSD manufacturer says about what happens after the endurance is exhausted, considering that every single one in the endurance test died leaving the data inaccessible, regardless of the claimed behavior.

    • strangerguy
    • 5 years ago

    Like CPUs, RAM and HDDs, SSDs have stagnated in performance. The only thing there of any interest are GPUs.

      • Froz
      • 5 years ago

      Really? That SSD is about 2 times faster than SATA SSDs. If we saw something like that in CPUs, RAM or GPUs, no one would call it stagnation. It’s not stagnation, it’s just that anything faster doesn’t bring real-life benefits (yet?) + the drive has some big problems in real life performance.

      Yeah, the Windows boot time. Why is it so damn slow? At first I thought maybe it’s a BIOS issue that makes the POST much longer, but if you compare “bare” and “loaded” times you can see that it took 17 seconds to load 4 programs that took the oldest SSD in the test just 15 seconds… Why is that?

        • Klimax
        • 5 years ago

        Don’t know. Fairly simple installation of Windows 7 on SB with 4GB can start under ten seconds (or whatever is delay on cheap LCD)

        Not even a very loaded system with a damn number of startup apps will take longer than 30 secs to load…

        ETA: A basic install is usually MS Office 2010, free Spybot (for passive immunization), Adobe Reader, Flash, Windows Defender (or the version for 7), Xnview and maybe a few other things. An advanced install could include VirtualBox for XP and a sole DOS application.
        My own computers tend to have lots installed, like Visual Studio 2013 and maybe 2015, Netbeans, tons of games, Tortoise*, and more.

        Boot time including UEFI is from sub 10s for basic to 30s or so (don’t have recent measurement) for absolutely loaded computer. Something about 858 individual items in Programs and Features.

        The only exception I know is my own notebook. Something screwed up just before upgrade to 8.1 resulting in enormous registry, some of the longest boot times and quite bit higher memory consumption. Cause unknown. So far never encountered again.

        • the
        • 5 years ago

        I agree that SSD’s have not stagnated. They did receive a bit of a lull in between SATA3 and SATAe arriving as the first wave of SATAe controllers received delays, but they were always on the horizon.

        Then there is also the idea that this is the first wave of PCIe controllers. 4x and 8x PCIe 3.0 cards are likely coming to consumers with enough NAND channels to make use of that full bandwidth. NVMe will also improve as well. I’m predicting that Intel/AMD/ARM will start to include a high speed SSD controller on-die in their laptop/desktop SoCs with a dedicated accelerator block. Flash memory will move toward a DIMM-like form factor (though thin laptops will simply just solder the flash directly onto a motherboard much like main memory today).

        To top it all off, there is active research into replacing NAND based flash with alternatives to further boost speed and capacity while dropping latencies and power. The future is very, very exciting for SSDs over the next five years.

      • dragosmp
      • 5 years ago

      Not if you’re on 1080p

      I’d say monitors stagnated. I know one can have a 4K monitor, but at a pretty high cost. Luckily Nvidia (re)introduced FSAA

        • Firestarter
        • 5 years ago

        Monitors? 4K used to be a pipe dream and adaptive sync is ready to go mainstream, how is that stagnation? Also, I got my mother in law an IPS monitor because it was the best bang for buck, 5 years ago they were a lot more expensive

          • w76
          • 5 years ago

          Right, 4K has just become affordable and now 8K seems ready to start pouring out. I think monitors DID stagnate for quite a long time, but the dam has burst over the past year or two, ever since cheap Korean knockoffs really started forcing incumbents to innovate a bit. And, like you said, now adaptive sync too.

      • derFunkenstein
      • 5 years ago

      OK that’s the opposite of the results in the review, though people don’t seem to be noticing it in more mundane tasks (such as boot times and level load times).

        • f0d
        • 5 years ago

        i have to sort of agree with strangerguy here

        the things most of us care about (boot times and program/game loading) haven't improved much looking at the new nvme/satae hotness (intel 750)

        in fact boot times got SLOWER with this new intel drive and even when including the other new ssd drives (xp941/m6e) only the m6e showed a small improvement over the vector 180 in boot times

        other than boot times none of the new drives showed any significant improvement in program/game loading times

        have we gone back to a point where people think benchmarks matter over real world results? remember that people dismiss 3dmark just because its a benchmark and not a real world result and is the reason not many hardware sites use it anymore

    • willmore
    • 5 years ago

    On the “test notes and methods” page, the Samsung 850 Pro 500GB drive is listed twice in the table.

    • JJAP
    • 5 years ago

    If PCIe 3.0 x4 can do 4GB/s, this drive is only halfway there. How long until we see the Samsung 850 EVOs of PCIe 3.0 x4 NVMe?

    • jjj
    • 5 years ago

    Gave up on reading when I got to the price; they could have DDR perf and it still wouldn’t matter in the consumer space unless they cut the price in half.
    And you guys could have at least included a few SATA models in RAID, that’s the sane solution for higher speeds given the price here. Guess you got lost in marketing.

      • Firestarter
      • 5 years ago

      for higher sequential throughput maybe, but for everything else SATA + RAID is a kludge when applied to SSDs

      • chuckula
      • 5 years ago

      [quote]Gave up on reading when i got to the price[/quote] What, because you were so excited that you jumped on Amazon to order one? The prices for these drives are EXTREMELY reasonable given the level of performance that we are seeing. If you are really that bummed since these drives cost more than a USB stick, then by all means run your system off the USB stick, but stop complaining that enterprise-level SSD performance is now available at price levels that were considered insanely great for regular SATA SSDs only 2 years ago.

        • A_Pickle
        • 5 years ago

        If anything, high-performance PCIe SSDs have only come down in price, what with the SSD revolution and all.

      • A_Pickle
      • 5 years ago

      I mean… in what industry do you NOT pay top dollar to get the best? All the same, there exists a working solution for lower budgets: [url]http://www.newegg.com/Product/Product.aspx?Item=N82E16820785002[/url] It's not as fast as the Intel, but it's a clean, easy solution for higher performance than your standard 2.5" SATA SSD, so...

      • jihadjoe
      • 5 years ago

      Dude, it’s less than half the $/GB compared to an [b]OCZ[/b] Revodrive which doesn't even use NVMe!

    • Krogoth
    • 5 years ago

    Overkill for the majority of users, but it is a steal for content creators and prosumers. 😉

    It is like a throwback to the 15K-RPM SCSI HDDs of the old days.

      • davidbowser
      • 5 years ago

      Don’t make me bust out my old Seagate Cheetah with my Adaptec 160 SCSI controller!

      Those things sounded like a turbine spinning up. Powering up the old external arrays with staggered spin-up was something out of a sci-fi movie.

        • willmore
        • 5 years ago

        Pair of Cheetah’s, baby!

        We had a server at work that had two racks of 14 drives in it. You did not want to be near it when it was running. Holy cow! But, the BW was crazy!

        • albundy
        • 5 years ago

        i second that! still have my two Quantum Atlas 10KII 36GB drives. They were big-as-a-brick drives, twice the normal height of regular drives, but used SCA-2 80 pin connectors. it took me a while to find an SCA-2 adapter that worked with them, and even then the performance was $hit. the data writes sounded like al bundy carving the 9 commandments in stone.

          • Klimax
          • 5 years ago

          Would someday like to get them for my collection.

          BTW: The Velociraptor 10K can be loud too. It almost startled me into thinking a drive was failing…

    • odizzido
    • 5 years ago

    With the single threaded small file robocopy tests, were they slow because it was capping out a CPU?

      • Klimax
      • 5 years ago

      Correct. All that scheduling and simple processing in serial takes time.

      • chuckula
      • 5 years ago

      I’d be curious to see how Linux filesystems perform since they have historically been better about handling small file I/O. There could be a greater delta in performance when the CPU is not the limiting factor…

        • Klimax
        • 5 years ago

        The last benchmark I have seen was some years ago by Tom’s Hardware, and all three filesystems were mostly trading places with, IIRC, the most wins for NTFS.

        I suspect it would depend on which NTFS driver is used (Windows 7, Windows 8 or Windows 8.1, later Windows 10)

          • the
          • 5 years ago

          And there are a few more file systems to test today. ReFS on Windows and ZFS on Solaris/BSD/Linux are relative newcomers.

    • Chrispy_
    • 5 years ago

    That’s a reasonable $/GB price for such a high-end piece of hardware, can’t wait for NVMe to replace AHCI everywhere, either….

    What I really wanted to ask was:
    [quote]Intel's other consumer SSDs are programmed to brick themselves at the next reboot, preventing users from accessing their data.[/quote] Why? WHHHYYYYYYY? Seriously, who made that ridiculous decision? Were they punched appropriately afterwards? Sheesh.

      • chuckula
      • 5 years ago

      Never let rabid Mission Impossible fans be in charge of designing your SSDs….

      This [s]message[/s] [u]SSD[/u] will self destruct in 5 seconds....

      • GTVic
      • 5 years ago

      [quote]Why? WHHHYYYYYYY? Seriously, who made that ridiculous decision? Were they punched appropriately afterwards? Sheesh.[/quote] It took more time to write that than 99% of people (combined) will ever have to spend dealing with the issue.

      • Krogoth
      • 5 years ago

      NVMe isn’t going to replace AHCI. It is more like M.2 or SATA Express.

      NVMe is going to be the “new” SCSI.

        • Firestarter
        • 5 years ago

        Both M.2 and SATA Express interfaces support NVMe and SATA as protocols

      • MadManOriginal
      • 5 years ago

      It’s to teach the lazy consumers that they need to make backups!

      • emphy
      • 5 years ago

      I assume this was done to prevent resale of expended drives and/or for data security.

      Still seems a bit silly to me, though.

      • Prototyped
      • 5 years ago

      I’m guessing it’s involuntary.

      I read somewhere (don't ask me where) that NAND flash memory gets slower and slower to respond on [i]read[/i] as program/erase cycles progress. At some point it gets too slow to respond in time and the controller just times out. It's possible the controller goes read-only to prevent P/E cycles from progressing any further, but I'm guessing that after a power cycle it would time out and fail to initialize due to the condition of the NAND flash itself. I'm guessing the datacenter drives are set up to monitor this sort of response time more actively and stop accepting writes well before the point where they would fail to initialize.

        • tygrus
        • 5 years ago

        Datacenter drives:
        * reduce the write voltage and thus lower the write speed to reduce wear and thus increase endurance;
        * can adjust the sensing amps to change the trigger levels based on the wear so that as pages age they expect the levels to shift (especially for MLC/TLC);
        * power protected, i.e. power stored in a battery/capacitor to finish cached writes;
        * Select larger ##nm designs that are more costly but have better endurance;
        * Cherry pick the best dies closer to the centre of the wafers;
        * Larger over-provision to optimise writes and partial page re-writes to minimise writes;
        * may also monitor output levels to determine wear/life-remaining and adjust the signal processing and maintenance;
        * Controller may allow re-reading pages and attempt to correct more bit errors;
        * Sometimes the extra features are specifically built into the FLASH die and marketed as eMLC.

        They set the “datacenter” products up to better deal with data correction and marginal cells.

    • Ninjitsu
    • 5 years ago

    So it’s pretty much what Haswell-E is to the mainstream line. All the features are really nice, and the endurance rating is crazy. It’s arguably worth it if you have an appropriate use case (again, very prosumer).

    The plus point of this being an Intel product is that they’ll upgrade their chipsets and motherboards to support it, and will push the rest of the industry forward (like they tried to do the last time they found that SSDs were lacking).

    This may be useful for TR’s FCAT captures too.

      • Klimax
      • 5 years ago

      Not may. This will be useful. 4K recording is not cheap… 😉

    • Flatland_Spider
    • 5 years ago

    A comparison with Fusion-IO stuff would be nice. PCIe SSDs have been their bread and butter for a while now, and I’d like to see how they compare to Intel.

      • chuckula
      • 5 years ago

      Off the top of my head, I would expect faster but also much much more $$$$.

      The new drives from Intel are actually priced very reasonably for this class of product (but still definitely expensive compared to regular SSDs).

        • Flatland_Spider
        • 5 years ago

        I would expect them to be faster too, but as they say in sports, that’s why the race is run.

    • Ninjitsu
    • 5 years ago

    [quote]The 750 Series is rated to absorb up to 70GB of writes per day over the length of its five-year warranty.[/quote] That's downright [i]mad[/i].

      • Chloiber
      • 5 years ago

      You can reach that limit in 70 seconds… :>

      • smilingcrow
      • 5 years ago

      It does seem low for a drive of this price.

    • jihadjoe
    • 5 years ago

    I wish that read-only mode is propagated to the other consumer-line drives.

    • CheetoPet
    • 5 years ago

    I would love to see a future article looking at what actually affects level load times. If the drive isn’t the bottleneck then what is? CPU? GPU? RAM?

      • jihadjoe
      • 5 years ago

      I’d like to see that too! My guess is it’s probably a combination of everything.

      CPU comes into play when loading compressed assets, then the PCIe bus as textures go into the GPU, perhaps system RAM as a small amount will limit the working set of the CPU.

        • brucethemoose
        • 5 years ago

        Sometimes it’s limited by the code itself.

        Take Mass Effect 2. Every loading screen takes 10-20 seconds no matter how fast your hardware is, but if you remove/replace the default videos, loading is almost instant.

        Mass Effect 3 has to phone home to some slow EA server every time it starts up. Start it offline, and it loads much faster.

        In fact, I think a lot of games are limited by some internal timer/intentional pause rather than the speed of your hardware, but I don’t have any proof.

          • jihadjoe
          • 5 years ago

          Activity monitors on CPU/GPU/disk should reveal that. I mean if a game takes 20s to load and there’s zero activity on all three, then there’s probably a wait being executed.

            • Klimax
            • 5 years ago

            Or a stack trace capture, as any wait in Windows will go through a particular function. VTune can also show what the bottleneck is with regard to multithreading.

      • Krogoth
      • 5 years ago

      CPU (clockspeed) is the biggest factor in that equation. It is unfortunate that CPUs have been relatively stagnant in the past decade in this area and the operation is single-threaded so those extra cores do nothing.

      • Chloiber
      • 5 years ago

      Yes, that’s also what I’m asking myself. We reached the peak a long time ago for consumers. The difference in performance for everyday tasks (including games) is non-existent. The only reason a consumer/”prosumer” could benefit from such an SSD is actually sequential performance.
      The difference in game loading times between SSDs and RAM is actually minimal. So no matter how fast SSDs get, we won’t see any benefit anymore. The bottleneck lies somewhere else, the most obvious being the CPU.

    • DPete27
    • 5 years ago

    [quote]Intel says the 750 Series achieves peak performance at a queue depth of 128[/quote] And most consumers run a max QD of about 4(ish?)..... definitely no more than 8. Be realistic here people.

      • Ninjitsu
      • 5 years ago

      No (to the max of 4), I’ve seen Planetside 2 do 8.

      And this will be excellent for prosumers as well.

      Edited for clarity.

        • brucethemoose
        • 5 years ago

        Planetside 2 is very disk-intensive, I’m not sure why you’re getting downthumbed.

        In fact, it’s the ONLY game where I can subjectively notice the difference between a RAMDrive and my OCZ Agility 4. If the game wasn’t so unpredictable/inconsistent, it could be a great benchmark for TR.

          • Ninjitsu
          • 5 years ago

          I so used to hate waiting for the thing to load from an HDD!

          EDIT: And maybe they could use the start-up and initial map loading times as benchmarks?

            • brucethemoose
            • 5 years ago

            It’s not just the loading screens. When you’re flying/driving across the terrain, textures and trees “pop in” faster with less stuttering if you have a fast storage system.

            Loading screen benchmarks wouldn’t really do it justice, as you can fly from one base with 200 unique players to another without a single loading screen… It loads all those assets on the fly. Disk usage benchmarks might be interesting though.

        • DPete27
        • 5 years ago

        Still a far cry from SATA QD=32.

        I won’t argue the prosumer bit. I just wonder what a “consumer-grade” server would have to be responsible for to consistently exceed QD 32. (disclaimer, I have little/no experience/knowledge about enterprise hardware/workloads)

          • Klimax
          • 5 years ago

          Large project compilations, large working-copy checkouts/updates, linear or nonlinear video editing, maybe a swapfile target, video/screen recording (Techreport could use this…), some sort of large database, data processing…

          • Andrew Lauritzen
          • 5 years ago

          I run a Steam cache on my home server (which BTW is *awesome* – [url<]http://blog.multiplay.co.uk/2014/04/lancache-dynamically-caching-game-installs-at-lans-using-nginx/[/url<]) and that hits IO quite hard. Steam splits the depot up into lots of small files and grabs many of them at once over HTTPS, which then get hashed on the proxy cache and hammer the disk with random IO at high queue depths.

          I'm using a 730 to drive it at the moment, and the difference between that drive and even the 840 EVO I had in there previously is quite noticeable. I'd love to get a 750 in there to test. At this point I'm network limited though, but I have plans for Xeon-D and its nice 10Gbit ports... 😀

          I won't claim this is a mass-market usage by any means, but it's pretty awesome and I could see someone shipping a turnkey application to do it for folks in the future.

          • the
          • 5 years ago

          A Minecraft server can sometimes be limited by disk IO.

          There are consumer/prosumer use cases for virtual machines, and those tend to add their own set of IO requests on top of the requests from the host system.

          A backup target on a network can generate a lot of IOs when numerous clients kick off a backup simultaneously.

      • Andrew Lauritzen
      • 5 years ago

      And most people don’t need more than 4 cores either, but there’s still HSW-E with a very similar premium 🙂

      These things are ultimately for folks that just want the fastest stuff and have the money for it (overall high end computers are still not terribly expensive compared to other hobbies), or for people like me who need a cheap Xeon/P3700 for home server/workstation stuff 🙂 I highly appreciate having a much less expensive option available!

    • K-L-Waster
    • 5 years ago

    Interesting preview of future capabilities – but doesn’t look like there are enough real-world benefits for a home user / gamer system to make it worthwhile yet. (Not running out to replace my 840 Evo or my BX100 just yet….)

    • chuckula
    • 5 years ago

    This + 3D NAND for higher capacities and/or lower prices == Very Interesting.

    • derFunkenstein
    • 5 years ago

    Working with 4K uncompressed 12-bit 4:4:4 video? At ~40MB/frame, that 1.2TB beast will hold about 17 minutes of 30fps video.

    Which makes me realize…man…video compression is REALLY good anymore.
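
    For reference, the back-of-the-envelope math behind that figure, as a sketch assuming 3840×2160 at 30fps and decimal terabytes, which lands in the same ballpark as the ~40MB/frame estimate:

```python
# Back-of-the-envelope math for uncompressed 4K 12-bit 4:4:4 video,
# assuming 3840x2160, 30 fps, and decimal (SI) units for the drive.
width, height = 3840, 2160
bytes_per_pixel = 3 * 12 / 8           # three 12-bit samples per pixel
frame_mb = width * height * bytes_per_pixel / 1e6
drive_gb = 1200                         # 1.2 TB
seconds = drive_gb * 1e3 / (frame_mb * 30)
print(f"{frame_mb:.1f} MB/frame, about {seconds / 60:.0f} minutes on the 1.2TB drive")
# ~37 MB/frame and roughly 18 minutes of footage
```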

      • Klimax
      • 5 years ago

      Bit confused with the grammar in the last sentence. Is the sentence missing a “not”?
      BTW: Lagarith seems to be quite a good lossless codec.

        • derFunkenstein
        • 5 years ago

        Nah, I mean, you look at relatively high bitrate compressed video and compare it to totally uncompressed video and yeah, you lose some fidelity, but the % shrink of the file size is enormous.

        I’m not talking about Youtube, but something like uncompressed 12-bit 1080p works out to around 10MB/frame, or 300MB/sec for 30fps. Compare that to a BluRay, and at least in my case I just kinda marvel at the size difference and the relative quality.
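
        To put a rough number on that marvel, here is a sketch comparing uncompressed 12-bit 1080p with a typical Blu-ray video bitrate (the ~40 Mbit/s figure is an assumption; actual discs vary):

```python
# Rough comparison of uncompressed 12-bit 1080p against a typical Blu-ray
# video bitrate (~40 Mbit/s is assumed; actual discs vary).
frame_mb = 1920 * 1080 * 3 * 12 / 8 / 1e6   # ~9.3 MB/frame
uncompressed_mbs = frame_mb * 30             # ~280 MB/s at 30 fps
bluray_mbs = 40 / 8                          # ~5 MB/s
print(f"uncompressed: {uncompressed_mbs:.0f} MB/s, Blu-ray: {bluray_mbs:.0f} MB/s, "
      f"ratio ~{uncompressed_mbs / bluray_mbs:.0f}x")
```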

          • ermo
          • 5 years ago

          I’m with Klimax on this one — can you help show me how the grammar of “video compression is REALLY good anymore” is supposed to work?

          “video compression is REALLY good [s<]anymore[/s<][i<]these days[/i<]" or "video compression is REALLY [i<]no[/i<] good anymore" From your clarification, I'm leaning towards the former being what you intended to express? =)

            • derFunkenstein
            • 5 years ago

            yeah, my bad. I can see how that’s confusing.

            • A_Pickle
            • 5 years ago

            Well [i<]I[/i<] liked your phrasing, derFunk.

            • derFunkenstein
            • 5 years ago

            <3

            • Klimax
            • 5 years ago

            Thanks for clarification.

            As for compression for the final product, I create two files: one for playback, where the goal is small file size, and one for archival, where the goal is best quality. Playback is fairly easy, but archival is absolutely brutal on the CPU. (50 minutes of video can take about a whole day per pass…)

            For processing and editing, lossless codecs obviously, like Lagarith or Huffyuv.

    • anotherengineer
    • 5 years ago

    It’s too bad those 1.2TB drives are way way outside typical home enthusiast budgets. The 400GB drives would fit into that budget better.

    Any luck in procuring some 400GB drives for review?

      • smilingcrow
      • 5 years ago

      These drives are well into the range of being pointless for even typical home enthusiasts.

        • vargis14
        • 5 years ago

        I am equally unimpressed.
        BUT if I had a connection on my MB for that 1.2TB 750 and it was free, I would definitely use it 🙂

        I just want SSD prices to drop so much that we can all have multiple 2-4TB SSDs with 1TB/s speeds for everything, and platters are a thing of the past. God, it would be nice to remove the biggest bottleneck in computers, one that has been around almost forever! In another 5-10 years maybe it will come true, and we will have some new tech like holographic storage, molecular memory, or maybe some biological DNA-type storage. Imagine a living storage device in your computer that you have to feed your blood or something creepy.
        Hey, I would prick my finger once a week for a drop or 2 of blood to have a petabyte of storage that is 10 times as fast as the fastest storage we have now :)

    • adampk17
    • 5 years ago

    Meh, disappointing.

    Thanks, as always, for the reality check, TR.

    • tanker27
    • 5 years ago

    But how long will it survive under TR’s Endurance test?

    (See Geoff, you created a monster. For every SSD review I expect to see endurance numbers :P)

      • Buzzard44
      • 5 years ago

      I’d like to see that.

      Between the ability to write to it at well over a GB/s and 20nm NAND, I bet Geoff could kill it with ease.
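
      For a sense of scale, here is a quick sketch of how the rated write budget compares with the drive’s speed, using the 70GB/day, five-year figure quoted above and the 1.2TB model’s 1200MB/s sequential write spec (actual NAND endurance is typically well beyond the rating):

```python
# How fast could the rated write budget be burned through?
# Uses the 70 GB/day, 5-year rating quoted above and the 1.2TB model's
# 1200 MB/s sequential write spec; real NAND usually outlasts the rating.
rated_total_tb = 70 * 365 * 5 / 1000     # ~128 TB over the warranty period
seconds = rated_total_tb * 1e6 / 1200    # at 1200 MB/s sustained writes
print(f"rated budget: {rated_total_tb:.0f} TB, "
      f"written continuously in ~{seconds / 3600:.0f} hours")
```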

    • balanarahul
    • 5 years ago

    What’s with putting 28 dies on it? Shouldn’t there be like 32?

    20nm NAND is showing its age. $0.97/GB is a lot considering it’s a 2015 SSD.

      • Farting Bob
      • 5 years ago

      The price is justifiable for its performance in many cases.

      • Firestarter
      • 5 years ago

      “Slow” consumer SSDs were $1/GB less than 3 years ago. This drive would be a huge upgrade coming from my Samsung 830, and it would cost [i<]less[/i<] than I paid for it.

      • dmjifn
      • 5 years ago

      You’re meant to pair it with your 15-core Ivy Bridge-EX, or maybe your system with triple-channel RAM!
