Intel’s SSD 760p Series 512 GB solid-state drive reviewed

NAND from the Intel-Micron Flash Technologies foundries is a common sight in the TR labs. The fruits of Intel and Micron’s (soon-to-be-defunct) joint venture end up in a wide array of internal and removable media. It might surprise you, then, that the last mainstream Intel-branded drive we reviewed was the trailblazing 750 Series nigh on three years ago. Well, there was Optane, but that one’s a bit of an odd duck. In any event, Intel hasn’t been sitting on its hands in all that time, but the drives constituting the company’s current lineup haven’t found their way into our hands of late.

The name of the game for that lineup now is 3D NAND. Intel introduced IMFT’s second-generation, 64-layer 3D TLC into its client portfolio with the SSD 545s last summer. That left the older NVMe SSD 600p in a bit of an awkward position. The PCIe drive was left to languish on the previous-gen, 32-layer stuff and could barely outpace the new SATA drive on paper.

Today, though, Intel is completing its transition to 64-layer TLC with a trio of drives, one of which could put the 600p out to pasture. Behold the Intel SSD 760p Series.

SSD 760p Series
Capacity  Max seq. read (MB/s)  Max seq. write (MB/s)  Max random read (IOps)  Max random write (IOps)  Price
128 GB    ?                     ?                      ?                       ?                        $74
256 GB    ?                     ?                      ?                       ?                        $109
512 GB    3230                  1625                   340K                    275K                     $199

The sparse table may have tipped you off, but Intel has only released official performance figures for the 512 GB version of the 760p thus far. That’s fine, because as it happens, that’s the one the company sent us to test. Nonetheless, the drive launches today in 128 GB, 256 GB, and 512 GB capacities. 1 TB and 2 TB versions are set to follow later this quarter. Alongside the 760p, Intel is taking the wraps off of the SSD Pro 7600p Series and SSD E 6100p Series. These appear to be similar to the 760p, just targeted towards the business and embedded markets.

The performance numbers Intel is claiming for the 760p are pretty juicy: more than double the specs of the 512 GB 600p for only $10 more at launch. In fact, that’s just what the company’s press materials would like us to take away from this product. Claims like “2x performance” and “PCIe performance at near SATA pricing” abound. So whence come all these savings? Intel would have us believe it all comes down to foresight. Sticking with traditional floating-gate cells in its 3D NAND allowed IMFT to preserve smaller cell sizes where its charge-trap competitors were forced to use bigger ones, hurting their density. Additionally, Micron’s CMOS-Under-the-Array tech frees peripheral logic from having to be, well, peripheral, putting the bulk of the control circuitry under the memory itself. All these space savings give IMFT a scalability advantage in terms of sheer bits per wafer.

What Intel is less forthcoming about is what gains it may have made via string stacking. With 3D NAND, the scaling challenge comes from reliably etching features all the way through the deposited layers. As more layers are added to NAND, tooling and technique must get more sophisticated to keep up. With string stacking, you essentially call it a day when the vertical deposition gets too hard and start gluing NAND dies together. The tradeoff is more complex control logic in exchange for an easier time fabricating the individual chips. Keener eyes than mine have discerned that while Samsung continues to deposit layer after layer with each new generation, IMFT has seen fit to join two 32-layer dies together with string stacking to make its 64-layer 3D NAND.

None of these niceties are obvious to the naked eye even once the drive is denuded. The 512 GB drive’s PCB is bare on its underside, while the drive’s two NAND packages, DRAM, and controller contend for space topside. It’s a Silicon Motion-branded controller, but Intel says that it worked closely with SMI to produce a chip tuned to its own architectural and firmware requirements. The 600p used a heavily-customized SM2260, for example, and it’s likely that this chip is a similarly-tweaked SM2262. The two NAND packages atop the SSD 760p each contain eight 256-Gb TLC dies to reach the drive’s 512-GB capacity.

As we touched on previously, Intel plans to sling this thing for $200 even. The price of admission includes a five-year warranty and an endurance rating of 72 terabytes written, but unfortunately encryption acceleration is reserved for the Pro 7600p Series only. That price comes in at only $20 more than the suggested price of Samsung’s evergreen 850 EVO, but in the real world, the Samsung drive has been available for around $130 or $140 for some time. The 760p will need to provide a nice shot of extra performance to prove its mettle versus one of our value SSD favorites.

Overall, Intel is promising more bang-for-buck for an NVMe SSD than we’ve seen in a long time with the 760p. Let’s see if the drive lives up to the hype.


IOMeter — Sequential and random performance

IOMeter fuels much of our latest storage test suite, including our sequential and random I/O tests. These tests are run across the full capacity of the drive at two queue depths. The QD1 tests simulate a single thread, while the QD4 results emulate a more demanding desktop workload. For perspective, 87% of the requests in our old DriveBench 2.0 trace of real-world desktop activity have a queue depth of four or less. Clicking the buttons below the graphs switches between results charted at the different queue depths.

Our sequential tests use a relatively large 128 KB block size.
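As a back-of-the-envelope check, throughput at a given block size is just the request rate times the request size. A minimal sketch of that arithmetic (illustrative only; real drives vary with queue depth and access pattern, and the function name here is our own):

```python
# Back-of-the-envelope: throughput equals request rate times request size.
def throughput_mb_s(iops: float, block_size_kb: float) -> float:
    """Throughput in MB/s for a given request rate and block size."""
    return iops * block_size_kb / 1024

# Intel's 3230 MB/s sequential-read spec for the 512 GB 760p implies
# roughly this many 128 KB requests per second:
required_iops = 3230 * 1024 / 128
print(round(required_iops))  # → 25840
```

In other words, even "sequential" throughput is ultimately a request rate under the hood, which is why block size matters to the results.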

The 760p’s sequential reads are peppy as heck, easily breaking the 1000 MB/s barrier at both queue depths. Only the wildly more expensive DC P3700 and the Samsung fleet of NVMe SSDs are faster. Problem is, the 760p’s sequential writes just don’t have that same spring in their step. In fact, it’s the slowest PCIe sequential writer we’ve yet seen. At least it still stays ahead of the SATA pack, albeit not by much.

The budget Intel drive’s random read response times are stupid fast, but its random write response times don’t look nearly as good. An uncomfortable pattern of slow writing is starting to emerge.

The 760p’s performance is a bit of a mixed bag so far, but with the price for this drive being what it is, I’m leaving my pitchfork holstered for now. There are plenty of tests left to check out, too. Let’s get to it.


Sustained and scaling I/O rates

Our sustained IOMeter test hammers drives with 4KB random writes for 30 minutes straight. It uses a queue depth of 32, a setting that should result in higher speeds that saturate each drive’s overprovisioned area more quickly. This lengthy—and heavy—workload isn’t indicative of typical PC use, but it provides a sense of how the drives react when they’re pushed to the brink.

We’re reporting IOps rather than response times for these tests. Click the buttons below the graph to switch between SSDs.

The 760p’s peak is about where we’d expect for a PCIe drive, but its steady-state performance is substantially lower than we’d like to see. Let’s see what the peak and sustained numbers look like.

The 760p’s peak write rate is up there with the best NVMe drives around, but its steady-state rates dip below quite a few SATA drives. Among NVMe drives we’ve tested, only the 960 EVO 250GB’s steady-state rate is lower.

Our final IOMeter test examines performance scaling across a broad range of queue depths. We ramp all the way up to a queue depth of 128. Don’t expect AHCI-based drives to scale past 32, though—that’s the maximum depth of their native command queues.

For this test, we use a database access pattern comprising 66% reads and 33% writes, all of which are random. The test runs after 30 minutes of continuous random writes that put the drives in a simulated used state. Click the buttons below the graph to switch between the different drives. And note that the P3700 plot uses a much larger scale.
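For a concrete picture of what such a mixed workload looks like, here is a sketch of how a 66/33 random read/write request stream might be generated. This is our own illustration, not IOMeter's implementation; the function name and parameters are made up:

```python
import random

def database_pattern(n_ops: int, read_fraction: float = 0.66, seed: int = 1):
    """Generate a stream of random, 4 KB-aligned requests, mixing reads and
    writes in roughly the stated proportion. Illustrative sketch only."""
    rng = random.Random(seed)
    span = 512 * 1024**3  # pretend we're targeting a 512 GB drive
    return [("read" if rng.random() < read_fraction else "write",
             rng.randrange(0, span, 4096))
            for _ in range(n_ops)]

ops = database_pattern(10_000)
reads = sum(1 for op, _ in ops if op == "read")
print(f"{reads / len(ops):.0%} reads")
```

The queue-depth knob then controls how many of these requests are in flight at once, which is what the scaling charts below ramp from 1 to 128.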

A little limp for a PCIe drive. IOps do increase as we ramp up to QD64, but only by a little.

We’ve been calling out the Intel 750 Series mostly just for fun, as it’s not fair to compare really anything against it given its price, power draw, and massive flash parallelism. But Toshiba’s 64-layer BiCS-equipped XG5 is a reasonable comparison, and it scales much better than Intel’s string-stacked stuff.

Again, the 760p continues to elicit mixed feelings. But now we’re putting aside IOMeter in favor of real-world tests, so let’s see how the drive takes it.


TR RoboBench — Real-world transfers

RoboBench trades synthetic tests with random data for real-world transfers with a range of file types. Developed by our in-house coder, Bruno “morphine” Ferreira, this benchmark relies on the multi-threaded robocopy command built into Windows. We copy files to and from a wicked-fast RAM disk to measure read and write performance. We also cut the RAM disk out of the loop for a copy test that transfers the files to a different location on the SSD.

Robocopy uses eight threads by default, and we’ve also run it with a single thread. Our results are split between two file sets, whose vital statistics are detailed below. The compressibility percentage is based on the size of the file set after it’s been crunched by 7-Zip.

  Number of files Average file size Total size Compressibility
Media 459 21.4MB 9.58GB 0.8%
Work 84,652 48.0KB 3.87GB 59%

The media set is made up of large movie files, high-bitrate MP3s, and 18-megapixel RAW and JPG images. There are only a few hundred files in total, and the data set isn’t amenable to compression. The work set comprises loads of TR files, including documents, spreadsheets, and web-optimized images. It also includes a stack of programming-related files associated with our old Mozilla compiling test and the Visual Studio test on the next page. The average file size is measured in kilobytes rather than megabytes, and the files are mostly compressible.
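The compressibility figure can be approximated with any general-purpose compressor. A quick sketch using Python's zlib in place of 7-Zip (the synthetic data here just mimics the character of the two sets; it is not our actual file corpus):

```python
import os
import zlib

def compressibility(data: bytes) -> float:
    """Percent size reduction after compression (0% = incompressible)."""
    compressed = zlib.compress(data, level=9)
    return max(0.0, 100.0 * (1 - len(compressed) / len(data)))

# Random bytes behave like already-compressed media files; repetitive
# markup behaves like the documents and source code in the work set.
media_like = os.urandom(1 << 20)
work_like = b'<td class="cell">value</td>\n' * 40_000
print(f"media-like: {compressibility(media_like):.1f}%")
print(f"work-like: {compressibility(work_like):.1f}%")
```

Incompressible data like the media set defeats any controller-level compression tricks, which is part of why we test with both sets.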

RoboBench’s write and copy tests run after the drives have been put into a simulated used state with 30 minutes of 4KB random writes. The pre-conditioning process is scripted, as is the rest of the test, ensuring that drives have the same amount of time to recover.

Let’s take a look at the media set first. The buttons switch between read, write, and copy results.

The 760p grabs the crown in the single-threaded read test. Folks, this thing reads absurdly fast, especially at low queue depths. That’s good news for a client drive. Writes still look a little lackluster, but the drive manages to put a bit more distance between itself and the unwashed SATA masses, especially at 8T.

Now for the work set.

As usual, the work set compresses the spread. The 760p’s reads still look excellent, and writes look a little less lackluster than they have thus far.

RoboBench largely echoed what we learned in IOMeter. The 760p series puts up insane read speeds, but only middling write speeds for a PCIe drive. Next, we’ll ready Windows and a stopwatch to measure boot and load times.


Boot times

Until now, all of our tests have been conducted with the SSDs connected as secondary storage. This next batch uses them as system drives.

We’ll start with boot times measured two ways. The bare test depicts the time between hitting the power button and reaching the Windows desktop, while the loaded test adds the time needed to load four applications—Avidemux, LibreOffice, GIMP, and Visual Studio Express—automatically from the startup folder. Our old boot tests focused on the time required to load the OS, but these new ones cover the entire process, including drive initialization.

The 760p lands right at the top of the rankings both bare and loaded. This is a pleasant reversal of the 750 Series’ boot times, which have always been its Achilles’ heel.

Load times

Next, we’ll tackle load times with two sets of tests. The first group focuses on the time required to load larger files in a collection of desktop applications. We open a 790MB 4K video in Avidemux, a 30MB spreadsheet in LibreOffice, and a 523MB image file in the GIMP. In the Visual Studio Express test, we open a 159MB project containing source code for the LLVM toolchain. Thanks to Rui Figueira for providing the project code.

Nothing unusual manifests in application load times. The 760p handles them all just fine.

The same goes for games. The 760p is perfectly suited to storing two or three modern AAA titles.

Intel’s new drive fared well across all our boot and load tests. That marks the end of our testing, so flip the page to peruse our test methods or skip directly to the conclusion.


Test notes and methods

Here are the essential details for all the drives we tested:

  Interface Flash controller NAND
Adata Premier SP550 480GB SATA 6Gbps Silicon Motion SM2256 16-nm SK Hynix TLC
Adata Ultimate SU800 512GB SATA 6Gbps Silicon Motion SM2258 32-layer Micron 3D TLC
Adata Ultimate SU900 256GB SATA 6Gbps Silicon Motion SM2258 Micron 3D MLC
Adata XPG SX930 240GB SATA 6Gbps JMicron JMF670H 16-nm Micron MLC
Corsair MP500 240GB PCIe Gen3 x4 Phison 5007-E7 15-nm Toshiba MLC
Crucial BX100 500GB SATA 6Gbps Silicon Motion SM2246EN 16-nm Micron MLC
Crucial BX200 480GB SATA 6Gbps Silicon Motion SM2256 16-nm Micron TLC
Crucial MX200 500GB SATA 6Gbps Marvell 88SS9189 16-nm Micron MLC
Crucial MX300 750GB SATA 6Gbps Marvell 88SS1074 32-layer Micron 3D TLC
Intel X25-M G2 160GB SATA 3Gbps Intel PC29AS21BA0 34-nm Intel MLC
Intel 335 Series 240GB SATA 6Gbps SandForce SF-2281 20-nm Intel MLC
Intel 730 Series 480GB SATA 6Gbps Intel PC29AS21CA0 20-nm Intel MLC
Intel 750 Series 1.2TB PCIe Gen3 x4 Intel CH29AE41AB0 20-nm Intel MLC
Intel DC P3700 800GB PCIe Gen3 x4 Intel CH29AE41AB0 20-nm Intel MLC
Mushkin Reactor 1TB SATA 6Gbps Silicon Motion SM2246EN 16-nm Micron MLC
OCZ Arc 100 240GB SATA 6Gbps Indilinx Barefoot 3 M10 A19-nm Toshiba MLC
OCZ Trion 100 480GB SATA 6Gbps Toshiba TC58 A19-nm Toshiba TLC
OCZ Trion 150 480GB SATA 6Gbps Toshiba TC58 15-nm Toshiba TLC
OCZ Vector 180 240GB SATA 6Gbps Indilinx Barefoot 3 M10 A19-nm Toshiba MLC
OCZ Vector 180 960GB SATA 6Gbps Indilinx Barefoot 3 M10 A19-nm Toshiba MLC
Patriot Hellfire 480GB PCIe Gen3 x4 Phison 5007-E7 15-nm Toshiba MLC
Plextor M6e 256GB PCIe Gen2 x2 Marvell 88SS9183 19-nm Toshiba MLC
Samsung 850 EVO 250GB SATA 6Gbps Samsung MGX 32-layer Samsung TLC
Samsung 850 EVO 1TB SATA 6Gbps Samsung MEX 32-layer Samsung TLC
Samsung 850 Pro 512GB SATA 6Gbps Samsung MEX 32-layer Samsung MLC
Samsung 860 Pro 1TB SATA 6Gbps Samsung MJX 64-layer Samsung MLC
Samsung 950 Pro 512GB PCIe Gen3 x4 Samsung UBX 32-layer Samsung MLC
Samsung 960 EVO 250GB PCIe Gen3 x4 Samsung Polaris 32-layer Samsung TLC
Samsung 960 EVO 1TB PCIe Gen3 x4 Samsung Polaris 48-layer Samsung TLC
Samsung 960 Pro 2TB PCIe Gen3 x4 Samsung Polaris 48-layer Samsung MLC
Samsung SM951 512GB PCIe Gen3 x4 Samsung S4LN058A01X01 16-nm Samsung MLC
Samsung XP941 256GB PCIe Gen2 x4 Samsung S4LN053X01 19-nm Samsung MLC
Toshiba OCZ RD400 512GB PCIe Gen3 x4 Toshiba TC58 15-nm Toshiba MLC
Toshiba OCZ VX500 512GB SATA 6Gbps Toshiba TC358790XBG 15-nm Toshiba MLC
Toshiba TR200 480GB SATA 6Gbps Toshiba TC58 64-layer Toshiba BiCS TLC
Toshiba XG5 1TB PCIe Gen3 x4 Toshiba TC58 64-layer Toshiba BiCS TLC
Transcend SSD370 256GB SATA 6Gbps Transcend TS6500 Micron or SanDisk MLC
Transcend SSD370 1TB SATA 6Gbps Transcend TS6500 Micron or SanDisk MLC

All the SATA SSDs were connected to the motherboard’s Z97 chipset. The M6e was connected to the Z97 via the motherboard’s M.2 slot, which is how we’d expect most folks to run that drive. Since the XP941, 950 Pro, RD400, and 960 Pro require more lanes, they were connected to the CPU via a PCIe adapter card. The 750 Series and DC P3700 were hooked up to the CPU via the same full-sized PCIe slot.

We used the following system for testing:

Processor Intel Core i5-4690K 3.5GHz
Motherboard Asus Z97-Pro
Firmware 2601
Platform hub Intel Z97
Platform drivers Chipset:


Memory size 16GB (2 DIMMs)
Memory type Adata XPG V3 DDR3 at 1600 MT/s
Memory timings 11-11-11-28-1T
Audio Realtek ALC1150 with drivers
System drive Corsair Force LS 240GB with S8FM07.9 firmware
Storage Crucial BX100 500GB with MU01 firmware

Crucial BX200 480GB with MU01.4 firmware

Crucial MX200 500GB with MU01 firmware

Intel 335 Series 240GB with 335u firmware

Intel 730 Series 480GB with L2010400 firmware

Intel 750 Series 1.2TB with 8EV10171 firmware

Intel DC P3700 800GB with 8DV10043 firmware

Intel X25-M G2 160GB with 8820 firmware

Plextor M6e 256GB with 1.04 firmware

OCZ Trion 100 480GB with 11.2 firmware

OCZ Trion 150 480GB with 12.2 firmware

OCZ Vector 180 240GB with 1.0 firmware

OCZ Vector 180 960GB with 1.0 firmware

Samsung 850 EVO 250GB with EMT01B6Q firmware

Samsung 850 EVO 1TB with EMT01B6Q firmware

Samsung 850 Pro 512GB with EMXM01B6Q firmware

Samsung 950 Pro 512GB with 1B0QBXX7 firmware

Samsung XP941 256GB with UXM6501Q firmware

Transcend SSD370 256GB with O0918B firmware

Transcend SSD370 1TB with O0919A firmware

Power supply Corsair AX650 650W
Case Fractal Design Define R5
Operating system Windows 8.1 Pro x64

Thanks to Asus for providing the systems’ motherboards, to Intel for the CPUs, to Adata for the memory, to Fractal Design for the cases, and to Corsair for the system drives and PSUs. And thanks to the drive makers for supplying the rest of the SSDs.

We used the following versions of our test applications:

Some further notes on our test methods:

  • To ensure consistent and repeatable results, the SSDs were secure-erased before every component of our test suite. For the IOMeter database, RoboBench write, and RoboBench copy tests, the drives were put in a simulated used state that better exposes long-term performance characteristics. Those tests are all scripted, ensuring an even playing field that gives the drives the same amount of time to recover from the initial used state.

  • We run virtually all our tests three times and report the median of the results. Our sustained IOMeter test is run a second time to verify the results of the first test and additional times only if necessary. The sustained test runs for 30 minutes continuously, so it already samples performance over a long period.

  • Steps have been taken to ensure the CPU’s power-saving features don’t taint any of our results. All of the CPU’s low-power states have been disabled, effectively pegging the frequency at 3.5GHz. Transitioning between power states can affect the performance of storage benchmarks, especially when dealing with short burst transfers.

The test systems’ Windows desktop was set at 1920×1080 at 60Hz. Most of the tests and methods we employed are publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.



Intel’s SSD 760p turned in one of the more uneven performances we’ve seen. The drive’s read performance was amazingly fast any way you slice it, but writes were merely SATA-beating. The usefulness of quick reads vastly outweighs that of quick writes for the everyman’s workload, so we’re OK with that tradeoff at this drive’s price tag. Regardless, let’s take a look at how the numbers shake out. We distill an overall performance rating using an older SATA SSD as a baseline: for each drive, we take the geometric mean of a basket of results from our test suite, normalized to that baseline. Only drives that have been through the entire current test suite on our current rig are represented.
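Concretely, the normalize-then-geomean procedure looks something like the sketch below. The numbers are hypothetical stand-ins, not TR's actual basket of results:

```python
from math import prod

def overall_rating(drive_results, baseline_results):
    """Geometric mean of per-test scores, each normalized to the baseline
    drive. Assumes higher-is-better metrics (invert latencies first)."""
    ratios = [d / b for d, b in zip(drive_results, baseline_results)]
    return prod(ratios) ** (1 / len(ratios)) * 100  # baseline drive = 100

# Hypothetical basket, not our actual test suite:
baseline = [550, 520, 95_000, 88_000]    # imaginary SATA baseline scores
drive = [3230, 1625, 340_000, 275_000]   # the 760p's spec-sheet figures
print(round(overall_rating(drive, baseline)))
```

The geometric mean keeps one outlier test from dominating the rating the way an arithmetic mean of ratios would.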

Right at the bottom edge of the PCIe contingent, comfortably ahead of the SATA pack. That’s a great performance for two hundred bucks, as our scatter plots will reveal. In the plots below, the most compelling position is toward the upper left corner, where the price per gigabyte is low and performance is high. Use the buttons to switch between views of all drives, only SATA drives, or only PCIe drives.


The 760p 512 GB is in a cozy spot. At $0.39 per gigabyte, it’s the cheapest PCIe drive in our set. It’s even cheaper than quite a few premium SATA products, despite rising considerably higher on the performance axis. Whatever fabrication voodoo Intel is leveraging to release a drive at this price, we’re better off for it. Even if this particular drive is not for you, its aggressive pricing could exert welcome downward pressure on the rest of the PCIe SSD market.

That’s not to say that the 760p is a slam dunk. We observed it struggling to write much faster than SATA SSDs throughout the breadth of our test suite. If you’re looking for bleeding-edge write speeds, reserve your M.2 slots for something else. But as we’ve said, typical client workloads are so heavily skewed towards reads that most buyers won’t notice that the 760p writes much slower than a 960 EVO 1TB, for example.

Overall, the SSD 760p delivers everything we want to see from a budget NVMe SSD. It’s more than fast enough under light workloads for how cheap it is, and its flaws will go unnoticed by the vast majority of its target audience. Presuming it lands in stores at the company’s $200 suggested price, Intel’s SSD 760p Series certainly deserves the honor of being TR Recommended.

Comments closed
    • designerfx
    • 5 years ago

    I’d still advocate a 1TB SSD over a 500GB SSD for reasons of reads and writes. The speeds won’t even make a difference, but the lifetime of the SSD will.

    • MadManOriginal
    • 5 years ago

    I see the sexy synthetic and superheavy I/O results and want to get one of these drives. Then I see the real-world tests that apply to my use (I don’t transfer gigabytes of data in the same PC) and realize there’s little point.

    • green
    • 5 years ago

    tone is not easily expressed over the internet

    my reading of your post made it seem like you were being flippant of the technological side of things, and were much more focused on the “why aren’t new things cheaper” side of it

    i myself had forgotten what was inside the evo 850 to begin with. so i did a google image search for “500GB EVO 850 inside” and “500GB EVO 960 inside” which, among various images displayed, had ones sourced from:

    [url<][/url<] [url<][/url<] (other peoples images results may vary as google customises results based on user search history, location, etc)

    the reviews show the over/under sides of the pcb for both the 850 and 960. the 850 was easy to take a chip count on the pcb. while they did not remove the sticker/label on the 960 chips, you can at least distinguish the chip count on the over/under sides. the 960 review text makes no mention of physically stacked nand (v-nand noted) which makes the chip count easier to guess. both reviews have spec listings indicating cache size of the chips (at different capacities too!).

    from it all i gathered there might be at least some higher cost associated with the production of higher performance+density chips. whether that equates to an $80-90 dollar premium may be suspect. searching and coming to a broad conclusion of "maybe", took less than 5 minutes of caring about the original comment.

    as you were legit asking, you at least now have the method to which to draw a similar conclusion with other tech where you're looking to explain some kind of price differential in end products. the tricky part would be determining which of the parts play a significant factor in terms of cost. thankfully there are many resources for that kind of query, such as online forums, that can help clarify by posting such a question (eg. what's the difference between v-nand vs physically stacked nand). where such questions are much more interesting to answer compared to questions where it appears little effort was expended on self-research

    • DavidC1
    • 5 years ago

    There are legitimate reasons for smaller devices being expensive. It’s harder to build a system that’s smaller you know? With phones and laptops you get thinner and lighter, which makes all aspects of design more difficult.

    On top of that, they know people love thin and light so they add prices on top of that.

    There aren’t many reasons for NVMe being more expensive, other than the performance-based pricing.

    Well, I can think of one for NVMe. SATA SSDs have been in production for a while, so the cases, PCB, have been mass produced without needing design and supplier changes. NVMe is a totally new form factor, so the PCBs have to be the changed, and the design of the circuitry has to be changed, and you need to think of possible new suppliers. Eventually, as NVMe gets more popular, and more are produced, it’ll be lower priced than SATA.

    • moose17145
    • 5 years ago

    Okay. I was legit asking. But thanks for explaining it like a smart@$$.

    • green
    • 5 years ago

    [quote<]Someone want to explain that one to me... because I am seriously confused as to what makes that Gum Stick more expensive to manufacture than a "normal" 2.5" drive that would justify paying nearly twice as much...[/quote<]

    you mean other than using what appears to be 2 higher performance NAND chips in the NVMe gumstick format as opposed to 8 NAND chips in the 2.5" SATA format for the same storage capacity? (let alone having double the cache ram size)

    yea, why in the world should any company be allowed to profiteer from advances in technology. if anything, using 4 times as fewer chips should mean less than half the cost right?

    and while we're at it, same thing should apply with smartphones and laptops. the more compact the form factor, the cheaper it should be right? which means an iphonex should be well below $200 for being more than 4 times smaller than your bulk standard macbook. iwatches should be less than $10 a pop.

    • moose17145
    • 5 years ago

    I really wish they would just bring the NVMe prices down to being in line with 2.5″ SATA prices.

    A quick look on newegg tells me that simply jumping from a 500GB Samsung EVO 850 to a 500GB EVO 960 incurs a 90 dollar price premium.

    Jumping to a 512GB Intel 760p “only” incurs a 80ish dollar premium…

    Someone want to explain that one to me… because I am seriously confused as to what makes that Gum Stick more expensive to manufacture than a “normal” 2.5″ drive that would justify paying nearly twice as much…

    • albundy
    • 5 years ago

    my sandisk extreme pro was its competitor at the time, i think. i dont know…i really wanna go for nvme. i’ll wait for one more generation or prices to come down on the terry. i wish the review would include crystal diskmark numbers. all i really care about is 4k QD1 results.

    • anotherengineer
    • 5 years ago

    You were reading TR review comments while on a conference call?!?!?!

    Kids these days 😉

    • green
    • 5 years ago

    so if you bought a Samsung 950 Pro when it came out, more than 2 years later you still have absolutely no reason to upgrade

    • DavidC1
    • 5 years ago

    The 760p is actually supposed to be Intel’s high end for NAND SSDs.

    There’s a 660p coming which has same performance ratings as the 600p, but use QLC NAND to reduce cost. Rumors are that it’ll be in the $100 range for 512GB.

    • Ninjitsu
    • 5 years ago

    Yeah, I have a 128GB 320 series drive, and a 128GB 330 series drive. Not to jinx it, but they’ve been humming along nicely since around 2012.

    • Vaughn
    • 5 years ago

    I’m not even concerned about it to be honest.

    My two X25-M 160GB G2 drives in Raid 0 have been active for about 6 Years now and going to be retired this year I haven’t had a single issue related to them.

    How much data do you think I’ve written to that array in 6 years!

    • MOSFET
    • 5 years ago

    This is an important question with Intel SSDs.

    • Vaughn
    • 5 years ago

    I still have two X25-M 160GB G2 drives in Raid in my Current machine 🙂

    They will be replaced with a 1.2TB 750 drive I picked up today for a new build.

    • Ummagumma
    • 5 years ago

    Didn’t you hear the rumor about Intel Marketing:

    Since Microsoft Windows products have made reboots a common thing for PC users…

    … go ahead, UPGRADE that CPU firmware and ignore the periodic automatic reboots!!

    Besides, those reboots will keep those government spies from constantly watching your PC.

    • Klimax
    • 5 years ago

    I wonder if there would be difference when used with RST 15.x. Unfortunately it would require newer chipset and massive rebench. (Only 100 and newer series are supported by that version)

    • Klimax
    • 5 years ago

    I’d say unlikely to change ordering of SSDs.

    ETA: So I don’t think it affects recommendations of reviews.

    • benedict
    • 5 years ago

    One issue is omitted and it’s quite important imo. Will my SSD get bricked automatically once it reaches some arbitrary number of writes or days active?

    • Chrispy_
    • 5 years ago

    I’m curious about this too. TLC’s inherently slow write performance means that more channels and interleaving are needed to make it stand out from the SATA crowd.

    IMO, any NVMe drive that isn’t getting significantly more than the 530MB/s of SATA isn’t justifying the change of interface or protocol, since the equivalent SATA M.2 drive will be far cheaper.

    • derFunkenstein
    • 5 years ago

    I’d get behind this idea. Samsung 960 Pro, Intel 760p, and a SATA Samsung 860 Pro should tell us all we need to know, right?

    • ColdMist
    • 5 years ago

    My first SSD drive, an X25-M 80GB, is still going strong in my HTPC box. It gave me a smile to see that as the ‘reference’ drive for the charts.

    • Vaughn
    • 5 years ago

    I can get the Intel 750 1.2TB for $488 CAD think i’m going to pull the trigger price is too good to give up. Based on the performance numbers I can live with slightly slower boot and more power usage just for the extra capacity.

    • mczak
    • 5 years ago

    This is a very nice product. By the looks of it a new wave of low-end nvme ssds is expected from different vendors – I’d say it’s about time…
    BTW it looks like the Corsair MP500 240GB cheated in the Gimp load time benchmark 🙂

    • cmrcmk
    • 5 years ago

    Definitely. The SSD is the data source of most every activity in a system so reads are critical. Writes OTOH typically come from the internet (10’s of Mbps), flash drives/SD cards (1’s of Mbps) or optical discs (1’s of Mbps if you’re lucky). Sky-high write speeds on an SSD are nice to have, but rarely beneficial compared to the read speed.

    • UberGerbil
    • 5 years ago

    Yeah, but that doubles the re-testing load — more than doubles, actually, since in most cases the original tests were only done with Intel so they’d have to be redone pre-patch on AMD, then post-patch on both Intel and AMD.

    It’s probably enough to do that with a couple of the fastest SSDs and Optane drives — since the drives that sustain the highest volume of IO ops will show the biggest difference — plus whatever the “most common” or “popular” drive is (the 960 Evo and/or MX100, I’d guess). Maybe that would be a good poll question — what’s your go-to SSD for a build right now, or what’s the most-used SSD in your current rig?

    • bitcat70
    • 5 years ago

    Thank you for the reply and for the great job you’re doing! Might be interesting to see AMD/Intel performance delta between pre- and post-patch. Could it be a case of AMD Ryzen from the Spectre of the Meltdown?

    • weaktoss
    • 5 years ago

    Definitely. Sooner or later we’ll be overhauling the test image and bringing it up to date. It’ll be interesting to see how the numbers change then, since they’ll certainly be taking a hit of some kind. But since doing so will involve retesting everything to get a decently sized result set, it’s a big commitment. Storage testing is slow.

    Therefore, use our numbers to gauge the relative strength of drives against each other, not as an absolute indication of the performance you’ll get from them.

    • weaktoss
    • 5 years ago

    Very possible. Since it’s in all likelihood an eight-channel controller, that only allows for interleaving over two dies along each channel.

    • bitcat70
    • 5 years ago

    Would Meltdown/Spectre patches affect any of the results here?

    • DPete27
    • 5 years ago

    I’m not sure what to make of this:
    [quote<]The two NAND packages atop the SSD 760p each contain eight 256-Gb TLC dies to reach the drive's 512-GB capacity.[/quote<]

    Looking at the performance results between the 512GB 760p and the 1.2TB 750, is it possible that the controller isn't fully saturated at the 512GB capacity point?

    • auxy
    • 5 years ago

    [url=<](´・ω・`)[/url<] Nice burn tho. I giggled noisily on a conference call.

    • Waco
    • 5 years ago

    I’d much rather trade write performance for read performance. 99%+ of all IOs are read on client platforms.

    • chuckula
    • 5 years ago

    [quote<]But as we've said, typical client workloads are so heavily skewed towards reads that most buyers won't notice that the 760p writes much slower than a 960 EVO 1TB, for example.[/quote<]

    Yes, here at Intel our new catch-phrase of the decade is that the typical client won't even notice!
