Samsung’s 960 Pro 2TB SSD reviewed

Last fall, Samsung took the high-end storage market by storm with the release of its 950 Pro drives. While others had gotten lightning-fast NVMe drives to market first, Samsung’s effort was notable for combining a cohesive set of cutting-edge features at just the right time. NVMe had just gotten native Windows support, and Skylake motherboards were armed with M.2 slots and PCIe lanes aplenty to let those drives run.

One year later, it’s clearer than ever that M.2 and NVMe are here to stay, so perhaps the time is right for cautious late adopters to join in on the fun. To entice such holdouts, Samsung took the wraps off the 950 Pro’s successor at its SSD Global Summit last month. Much like its predecessor, the 960 Pro is powered by Samsung’s V-NAND in MLC configuration, gobbles up four lanes of PCIe bandwidth, slots into an M.2 2280 port, and uses the NVMe protocol instead of AHCI. Sounds like more of the same, right?

Wrong. Just look at it. The sticker is completely different! Samsung has integrated a “thin copper film” into the label on the underside of the drive. Despite looking and feeling like an ordinary (if somewhat thick) label, the new sticker supposedly dissipates heat much better than the 950 Pro’s plain label did. Nobody likes thermal throttling. Anyway, the 960 Pro comes in three bestickered capacities.

Samsung 960 Pro
Capacity  Max sequential read (MB/s)  Max sequential write (MB/s)  Max random read (IOps)  Max random write (IOps)  Price
512GB     3500                        2100                         330k                    330k                     $329
1TB       3500                        2100                         440k                    360k                     $629
2TB       3500                        2100                         440k                    360k                     $1299

The 950 Pro and 960 Pro lineups only share a 512GB version between them. Comparing the 512GB drives from each series, it’s clear that Samsung expects a whole lot more performance out of the new one. The company claims sequential speeds roughly 50% higher than the 950 Pro’s already-blistering figure. In the real world, the 950 Pro 512GB is one of the fastest drives we’ve ever tested, too. A lot of hopes are riding on that fancy stick-on heatspreader.

Samsung sent us the 2TB 960 Pro to test. Yes, two freakin’ terabytes on a drive barely larger than a stick of gum. Despite the 960 Pro’s outward similarities to the 950 Pro, the guts of the drive have actually changed quite a bit (joking about the label aside). First, the flash itself. Gone are the 32-layer, 128Gb V-NAND chips we saw in the 950 Pro, and in their place are the latest and greatest: Samsung’s third-generation, 48-layer 256Gb V-NAND. We were recently impressed by IMFT’s 384Gb 3D TLC chips, but Samsung’s 256Gb MLC chips prove that the density wars are still in full swing. All three versions of the 960 Pro bundle their V-NAND into four packages on one side of the PCB, but they vary the number of dies stuffed into each package. Our 2TB sample uses what Samsung calls Hexadecimal Die Packages, meaning that the packages each have 16 chips stacked inside.

And speaking of packaging, even the controller isn’t immune to the stacking trend. Samsung’s brand-spanking-new “Polaris” controller is making its debut underneath the drive’s DRAM in a space-saving package-on-package design. We may never get to see its pretty face. It’s what’s on the inside that counts, though, and Polaris packs five cores to shuttle data around. Well actually, four cores for shuttling data, since one of the cores is dedicated to “optimizing the communication between the host and controller.” Samsung credits a large part of the 960 Pro’s performance gain over the 950 Pro to Polaris, so we’re looking forward to seeing what effect the chip will have on speeds as it proliferates throughout the manufacturer’s SSD lineup.

All these improvements come at a cost. Samsung is asking a whopping $1300, or about 65 cents per gig, for the 2TB drive we’ve tested. As NVMe storage goes, that price isn’t really so bad. For the price of admission, you get TCG Opal-compliant 256-bit AES hardware encryption and a 5-year warranty. The 2TB version is rated to endure 1.2 petabytes written, so longevity should be no concern unless you’re some kind of SSD sadist.

No more dilly-dallying. Let’s see if Samsung can redefine top-shelf performance once again.

 

IOMeter — Sequential and random performance

IOMeter fuels much of our latest storage test suite, including our sequential and random I/O tests. These tests are run across the full capacity of the drive at two queue depths. The QD1 tests simulate a single thread, while the QD4 results emulate a more demanding desktop workload. For perspective, 87% of the requests in our old DriveBench 2.0 trace of real-world desktop activity have a queue depth of four or less. Clicking the buttons below the graphs switches between results charted at the different queue depths.

Our sequential tests use a relatively large 128KB block size.
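If you’re curious what that access pattern looks like outside of IOMeter, here’s a minimal Python sketch of a QD1 sequential-read pass using 128KB transfers. Consider it an illustration under stated assumptions rather than our actual test config: the file path is a placeholder, Python only keeps one request in flight, and the OS page cache will flatter the numbers in a way IOMeter’s unbuffered I/O does not.

# Minimal sketch of a QD1 sequential-read pass with 128KB transfers.
# Illustrative only: IOMeter issues overlapped, unbuffered I/O against the
# drive, while this reads a plain file and will be flattered by the page cache.
import os
import time

TEST_PATH = "testfile.bin"   # hypothetical large file on the drive under test
BLOCK_SIZE = 128 * 1024      # 128KB transfers, matching our sequential tests
DURATION = 10.0              # seconds to run

def sequential_read_mbps(path, block_size, duration):
    total_bytes = 0
    start = time.perf_counter()
    with open(path, "rb", buffering=0) as f:
        while time.perf_counter() - start < duration:
            data = f.read(block_size)
            if len(data) < block_size:   # wrap around at the end of the file
                f.seek(0)
            total_bytes += len(data)
    elapsed = time.perf_counter() - start
    return total_bytes / elapsed / 1_000_000   # decimal MB/s

if __name__ == "__main__":
    print(f"{sequential_read_mbps(TEST_PATH, BLOCK_SIZE, DURATION):.0f} MB/s")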



Um. It’s not often that one feels giddy after opening a results spreadsheet. The 960 Pro’s sequential speeds are astonishing. At both queue depths, it puts up the fastest read speeds we’ve ever seen, nearly breaching the 2000 MB/s barrier. And while the 960 Pro can’t claim the sequential write crown (thanks to that meddling datacenter drive, the P3700), its results are still head and shoulders above the other client SSDs. If the drive’s random response times are as strong, the 960 Pro will be a fearsome contender indeed.



Whoo! It’s been a long time since a drive has delivered such an irreproachable performance in both our sequential and random tests. Again, the 960 Pro breaks the record for random read response times. And again, only Intel’s high-end NVMe stuff stops the 960 Pro from snagging the write record. So far, this drive looks crazy fast. If this keeps up, we’ll have a new top dog in our overall performance index. Let’s see if some more nuanced testing can reveal any weaknesses.

 

Sustained and scaling I/O rates

Our sustained IOMeter test hammers drives with 4KB random writes for 30 minutes straight. It uses a queue depth of 32, a setting that should result in higher speeds that saturate each drive’s overprovisioned area more quickly. This lengthy—and heavy—workload isn’t indicative of typical PC use, but it provides a sense of how the drives react when they’re pushed to the brink.
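For the curious, the access pattern behind this test boils down to something like the Python sketch below: 4KB writes at random offsets, hammered for a fixed period, with IOps reported at the end. It’s only an approximation of what IOMeter does: the file path is hypothetical, and Python keeps just one request outstanding where IOMeter keeps 32.

# Sketch of the sustained test's access pattern: 4KB writes at random offsets.
# Illustrative only; IOMeter keeps 32 requests outstanding, this keeps one.
import os
import random
import time

TEST_PATH = "testfile.bin"   # hypothetical pre-allocated file on the drive under test
BLOCK = 4 * 1024             # 4KB random writes
DURATION = 30 * 60           # 30 minutes, matching the sustained test

def random_write_iops(path, block, duration):
    blocks = os.path.getsize(path) // block
    payload = os.urandom(block)
    ops = 0
    fd = os.open(path, os.O_WRONLY)
    start = time.perf_counter()
    try:
        while time.perf_counter() - start < duration:
            os.pwrite(fd, payload, random.randrange(blocks) * block)
            ops += 1
    finally:
        os.close(fd)
    return ops / (time.perf_counter() - start)

print(f"{random_write_iops(TEST_PATH, BLOCK, DURATION):.0f} IOps")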

We’re reporting IOps rather than response times for these tests. Click the buttons below the graph to switch between SSDs.


Hm. Something odd is going on here. For the first several minutes of the test, the 960 Pro gasps and wheezes at barely over 100 IOps. Then it springs to life, jumping to heights only probed by Intel’s NVMe drives before now. We re-ran our sustained tests a few extra times without getting meaningfully different results. We saw this sort of behavior back when we reviewed the Intel 750 Series drives, but we never reached a satisfactory conclusion as to why it occurred. We’ve reached out to Samsung in case the company can shed any light on the situation.

To show the data in a slightly different light, we’ve graphed the peak random-write rate and the average, steady-state speed over the last minute of the test.

Luckily for the 960 Pro, these graphs don’t capture the sluggish start—they only care about peak speed and the steady-state speed reached near the end of the time period. Both of those metrics cast the 960 Pro in a wonderful light.

Our final IOMeter test examines performance scaling across a broad range of queue depths. We ramp all the way up to a queue depth of 128. Don’t expect AHCI-based drives to scale past 32, though—that’s the maximum depth of their native command queues.

For this test, we use a database access pattern comprising 66% reads and 33% writes, all of which are random. The test runs after 30 minutes of continuous random writes that put the drives in a simulated used state. Click the buttons below the graph to switch between the different drives. And note that the P3700 plot uses a much larger scale.
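To make the queue-depth idea concrete, the rough Python sketch below emulates a given depth with worker threads, each keeping one 4KB request of the 66/33 read/write mix in flight against the same file. This is an illustration, not our harness: IOMeter uses native overlapped I/O, and the path, duration, and depth below are placeholders.

# Emulating a queue depth in user space: N threads, one outstanding 4KB request
# each, chosen randomly as a read (66%) or write (33%). Illustrative only.
import os
import random
import threading
import time

TEST_PATH = "testfile.bin"   # hypothetical preconditioned file on the drive under test
BLOCK = 4 * 1024
QUEUE_DEPTH = 32             # one worker thread per outstanding request
DURATION = 60.0              # seconds per queue depth in this sketch

def worker(fd, blocks, stop_event, counts, idx):
    payload = os.urandom(BLOCK)
    while not stop_event.is_set():
        offset = random.randrange(blocks) * BLOCK
        if random.random() < 0.66:
            os.pread(fd, BLOCK, offset)        # 66% random reads
        else:
            os.pwrite(fd, payload, offset)     # 33% random writes
        counts[idx] += 1

fd = os.open(TEST_PATH, os.O_RDWR)
blocks = os.path.getsize(TEST_PATH) // BLOCK
stop = threading.Event()
counts = [0] * QUEUE_DEPTH
threads = [threading.Thread(target=worker, args=(fd, blocks, stop, counts, i))
           for i in range(QUEUE_DEPTH)]
for t in threads:
    t.start()
time.sleep(DURATION)
stop.set()
for t in threads:
    t.join()
os.close(fd)
print(f"QD{QUEUE_DEPTH}: {sum(counts) / DURATION:.0f} IOps")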


Another record for the 960 Pro! It’s the first drive since the P3700 to force us to alter the standard 0-to-35000 IOps scale we like to use for this graph. The 960 Pro scales far better than anything we’ve seen recently. It offers nearly linear scaling until QD32. The 950 Pro didn’t scale anything like this, as the next set of graphs will make clear.


The 950 Pro of yesteryear scaled up to QD4 but then flatlined. The 960 Pro, on the other hand, outscales even Intel’s formidable 750 Series SSD all the way up to QD64, which is where the Samsung drive regresses in speed.

IOMeter synthetics were a massive win for the 960 Pro. Next, we see what kind of performance $1300 gets you in the real world.

 

TR RoboBench — Real-world transfers

RoboBench trades synthetic tests with random data for real-world transfers with a range of file types. Developed by our in-house coder, Bruno “morphine” Ferreira, this benchmark relies on the multi-threaded robocopy command built into Windows. We copy files to and from a wicked-fast RAM disk to measure read and write performance. We also cut the RAM disk out of the loop for a copy test that transfers the files to a different location on the SSD.

Robocopy uses eight threads by default, and we’ve also run it with a single thread. Our results are split between two file sets, whose vital statistics are detailed below. The compressibility percentage is based on the size of the file set after it’s been crunched by 7-Zip.

File set  Number of files  Average file size  Total size  Compressibility
Media     459              21.4MB             9.58GB      0.8%
Work      84,652           48.0KB             3.87GB      59%

The media set is made up of large movie files, high-bitrate MP3s, and 18-megapixel RAW and JPG images. There are only a few hundred files in total, and the data set isn’t amenable to compression. The work set comprises loads of TR files, including documents, spreadsheets, and web-optimized images. It also includes a stack of programming-related files associated with our old Mozilla compiling test and the Visual Studio test on the next page. The average file size is measured in kilobytes rather than megabytes, and the files are mostly compressible.
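As an aside, a compressibility figure like the ones in the table above can be approximated by compressing every file in a set and comparing input and output sizes. The sketch below does that with Python’s zlib standing in for the 7-Zip pass we actually used, so the exact percentage will differ a bit; the folder name is a placeholder.

# Estimate a file set's compressibility: percent reduction after compression.
# zlib stands in for 7-Zip here, so the exact figure will differ somewhat.
import os
import zlib

def compressibility(root):
    raw, packed = 0, 0
    for dirpath, _, names in os.walk(root):
        for name in names:
            with open(os.path.join(dirpath, name), "rb") as f:
                data = f.read()
            raw += len(data)
            packed += len(zlib.compress(data, 6))
    return 100.0 * (1 - packed / raw)

print(f"{compressibility('work_set'):.1f}% compressible")   # 'work_set' is a placeholder folder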

RoboBench’s write and copy tests run after the drives have been put into a simulated used state with 30 minutes of 4KB random writes. The pre-conditioning process is scripted, as is the rest of the test, ensuring that drives have the same amount of time to recover.
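The transfers themselves come down to timed robocopy runs. Here’s a hedged approximation of a single RoboBench-style pass in Python: time a multithreaded robocopy from the RAM disk to the drive under test. The drive letters and folder names are made up, and our real scripts also handle the preconditioning, single-threaded runs, and cleanup described above.

# Time a multithreaded robocopy pass, RoboBench-style. Paths are placeholders.
import subprocess
import time

SOURCE = r"R:\media"      # hypothetical RAM-disk copy of the media file set
DEST = r"E:\robobench"    # hypothetical target folder on the SSD under test
THREADS = 8               # robocopy's default; we also run single-threaded passes

def timed_copy(src, dst, threads):
    start = time.perf_counter()
    # /E copies subfolders; /NFL /NDL /NJH /NJS suppress per-file console output
    subprocess.run(
        ["robocopy", src, dst, "/E", f"/MT:{threads}", "/NFL", "/NDL", "/NJH", "/NJS"],
        check=False,      # robocopy uses nonzero exit codes even on success
    )
    return time.perf_counter() - start

print(f"Copied in {timed_copy(SOURCE, DEST, THREADS):.1f} s")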

Let’s take a look at the media set first. The buttons switch between read, write, and copy results.



For the most part, the 960 Pro lands near the top of the PCIe cluster, including record performances in 1T read and 8T write. The P3700 often beats it, but as we said before, that drive has datacenter mojo and a price to match. We won’t hold those results against the consumer 960 Pro.

Now for our work set. The 960 Pro was blisteringly fast in our IOMeter random results, so we’re confident it will do just fine against our work set, as well.



Pretty solid performance all around. The 8T write test reveals the 960 Pro’s weakest performance yet. Here, the 960 Pro gets beat out not only by the 950 Pro, but also by a couple of higher-end SATA drives: the Vector 180 960GB and 850 EVO 1TB. Our work set is hard work.

The 960 Pro put up a good performance in the work set and a great performance in the media set. It’s time for our last set of tests, in which we use the drive as the primary boot device.

 

Boot times

Until now, all of our tests have been conducted with the SSDs connected as secondary storage. This next batch uses them as system drives.

We’ll start with boot times measured two ways. The bare test depicts the time between hitting the power button and reaching the Windows desktop, while the loaded test adds the time needed to load four applications—Avidemux, LibreOffice, GIMP, and Visual Studio Express—automatically from the startup folder. Our old boot tests focused on the time required to load the OS, but these new ones cover the entire process, including drive initialization.

The 960 Pro boots at least as quickly as the 950 Pro, but not markedly quicker than your average SATA SSD. No surprises here.

Load times

Next, we’ll tackle load times with two sets of tests. The first group focuses on the time required to load larger files in a collection of desktop applications. We open a 790MB 4K video in Avidemux, a 30MB spreadsheet in LibreOffice, and a 523MB image file in the GIMP. In the Visual Studio Express test, we open a 159MB project containing source code for the LLVM toolchain. Thanks to Rui Figueira for providing the project code.

The 960 Pro can add new records for LibreOffice and Visual Studio to its belt. These might be the least impressive ones, but a record is a record. Last, let’s fire up some games.

Chalk up one last record to the 960 Pro for its speed in loading Tomb Raider, and we’re done. Using a 960 Pro 2TB as a dedicated Steam library drive would be an unconscionable waste of money, but it would do the job perfectly well.

We’re all out of tests for the 960 Pro to ace. Hit the next page for a breakdown of our test methods, or jump ahead to the conclusion if you like.

 

Test notes and methods

Here are the essential details for all the drives we tested:

  Drive Interface Flash controller NAND
Adata Premier SP550 480GB SATA 6Gbps Silicon Motion SM2256 16-nm SK Hynix TLC
Adata XPG SX930 240GB SATA 6Gbps JMicron JMF670H 16-nm Micron MLC
Crucial BX100 500GB SATA 6Gbps Silicon Motion SM2246EN 16-nm Micron MLC
Crucial BX200 480GB SATA 6Gbps Silicon Motion SM2256 16-nm Micron TLC
Crucial MX200 500GB SATA 6Gbps Marvell 88SS9189 16-nm Micron MLC
Crucial MX300 750GB SATA 6Gbps Marvell 88SS1074 32-layer Micron 3D TLC
Intel X25-M G2 160GB SATA 3Gbps Intel PC29AS21BA0 34-nm Intel MLC
Intel 335 Series 240GB SATA 6Gbps SandForce SF-2281 20-nm Intel MLC
Intel 730 Series 480GB SATA 6Gbps Intel PC29AS21CA0 20-nm Intel MLC
Intel 750 Series 1.2TB PCIe Gen3 x4 Intel CH29AE41AB0 20-nm Intel MLC
Intel DC P3700 800GB PCIe Gen3 x4 Intel CH29AE41AB0 20-nm Intel MLC
Mushkin Reactor 1TB SATA 6Gbps Silicon Motion SM2246EN 16-nm Micron MLC
OCZ Arc 100 240GB SATA 6Gbps Indilinx Barefoot 3 M10 A19-nm Toshiba MLC
OCZ Trion 100 480GB SATA 6Gbps Toshiba TC58 A19-nm Toshiba TLC
OCZ Trion 150 480GB SATA 6Gbps Toshiba TC58 15-nm Toshiba TLC
OCZ Vector 180 240GB SATA 6Gbps Indilinx Barefoot 3 M10 A19-nm Toshiba MLC
OCZ Vector 180 960GB SATA 6Gbps Indilinx Barefoot 3 M10 A19-nm Toshiba MLC
Plextor M6e 256GB PCIe Gen2 x2 Marvell 88SS9183 19-nm Toshiba MLC
Samsung 850 EVO 250GB SATA 6Gbps Samsung MGX 32-layer Samsung TLC
Samsung 850 EVO 1TB SATA 6Gbps Samsung MEX 32-layer Samsung TLC
Samsung 850 Pro 500GB SATA 6Gbps Samsung MEX 32-layer Samsung MLC
Samsung 950 Pro 512GB PCIe Gen3 x4 Samsung UBX 32-layer Samsung MLC
Samsung 960 Pro 2TB PCIe Gen3 x4 Samsung Polaris 48-layer Samsung MLC
Samsung SM951 512GB PCIe Gen3 x4 Samsung S4LN058A01X01 16-nm Samsung MLC
Samsung XP941 256GB PCIe Gen2 x4 Samsung S4LN053X01 19-nm Samsung MLC
Toshiba OCZ RD400 512GB PCIe Gen3 x4 Toshiba TC58 15-nm Toshiba MLC
Toshiba OCZ VX500 512GB SATA 6Gbps Toshiba TC358790XBG 15-nm Toshiba MLC
Transcend SSD370 256GB SATA 6Gbps Transcend TS6500 Micron or SanDisk MLC
Transcend SSD370 1TB SATA 6Gbps Transcend TS6500 Micron or SanDisk MLC

All the SATA SSDs were connected to the motherboard’s Z97 chipset. The M6e was connected to the Z97 via the motherboard’s M.2 slot, which is how we’d expect most folks to run that drive. Since the XP941, 950 Pro, RD400, and 960 Pro require more lanes, they were connected to the CPU via a PCIe adapter card. The 750 Series and DC P3700 were hooked up to the CPU via the same full-sized PCIe slot.

We used the following system for testing:

Processor Intel Core i5-4690K 3.5GHz
Motherboard Asus Z97-Pro
Firmware 2601
Platform hub Intel Z97
Platform drivers Chipset: 10.0.0.13, RST: 13.2.4.1000

Memory size 16GB (2 DIMMs)
Memory type Adata XPG V3 DDR3 at 1600 MT/s
Memory timings 11-11-11-28-1T
Audio Realtek ALC1150 with 6.0.1.7344 drivers
System drive Corsair Force LS 240GB with S8FM07.9 firmware
Storage Crucial BX100 500GB with MU01 firmware
        Crucial BX200 480GB with MU01.4 firmware
        Crucial MX200 500GB with MU01 firmware
        Intel 335 Series 240GB with 335u firmware
        Intel 730 Series 480GB with L2010400 firmware
        Intel 750 Series 1.2TB with 8EV10171 firmware
        Intel DC P3700 800GB with 8DV10043 firmware
        Intel X25-M G2 160GB with 8820 firmware
        Plextor M6e 256GB with 1.04 firmware
        OCZ Trion 100 480GB with 11.2 firmware
        OCZ Trion 150 480GB with 12.2 firmware
        OCZ Vector 180 240GB with 1.0 firmware
        OCZ Vector 180 960GB with 1.0 firmware
        Samsung 850 EVO 250GB with EMT01B6Q firmware
        Samsung 850 EVO 1TB with EMT01B6Q firmware
        Samsung 850 Pro 500GB with EMXM01B6Q firmware
        Samsung 950 Pro 512GB with 1B0QBXX7 firmware
        Samsung XP941 256GB with UXM6501Q firmware
        Transcend SSD370 256GB with O0918B firmware
        Transcend SSD370 1TB with O0919A firmware

Power supply Corsair AX650 650W
Case Fractal Design Define R5
Operating system Windows 8.1 Pro x64

Thanks to Asus for providing the systems’ motherboards, to Intel for the CPUs, to Adata for the memory, to Fractal Design for the cases, and to Corsair for the system drives and PSUs. And thanks to the drive makers for supplying the rest of the SSDs.

We used the following versions of our test applications:

Some further notes on our test methods:

  • To ensure consistent and repeatable results, the SSDs were secure-erased before every component of our test suite. For the IOMeter database, RoboBench write, and RoboBench copy tests, the drives were put in a simulated used state that better exposes long-term performance characteristics. Those tests are all scripted, ensuring an even playing field that gives the drives the same amount of time to recover from the initial used state.

  • We run virtually all our tests three times and report the median of the results. Our sustained IOMeter test is run a second time to verify the results of the first test and additional times only if necessary. The sustained test runs for 30 minutes continuously, so it already samples performance over a long period.

  • Steps have been taken to ensure the CPU’s power-saving features don’t taint any of our results. All of the CPU’s low-power states have been disabled, effectively pegging the frequency at 3.5GHz. Transitioning between power states can affect the performance of storage benchmarks, especially when dealing with short burst transfers.

The test systems’ Windows desktop was set at 1920×1080 at 60Hz. Most of the tests and methods we employed are publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

 

Conclusions

The 960 Pro broke a slew of our storage testing records, so we have little doubt it will take first place in our performance rankings. Drumroll, please…

First place, and how! It’s a meaty margin of victory. Intel’s 750 Series drive is forced to relinquish its prime position, while the once-tempting 950 Pro and RD400 suddenly seem slow and ungainly—at least in a relative sense. The 960 Pro is a genuine triumph for Samsung. Other drive makers need to either scramble production on newer, faster offerings or slash prices on the existing ones if they expect to keep up. For now, the 960 Pro stands in a performance class all its own.

So what’s the price landscape like? Let’s take a look at our time-honored scatter plots. We’re going by Samsung’s suggested prices for the 960 Pro, since the drive isn’t quite on the market yet. Use the buttons to switch between views of all drives, only SATA drives, or only PCIe drives. The most compelling position in these scatter plots is toward the upper left corner, where the price per gigabyte is low and performance is high.


The message is clear. If you were going to buy a 950 Pro or Intel 750 Series SSD, well, don’t. The 960 Pro is around the same price per gig, and it’s much, much faster. The RD400 might still be a worthwhile purchase at five cents per gigabyte cheaper, but I think I’d be inclined to shell out a bit more for the big performance jump.

Samsung 960 Pro 2TB

October 2016

Well, there you have it. Rarely have we seen an SSD turn in such an unqualified success in our tests. With one or two minor exceptions, Samsung has vastly improved on the formula it pioneered with the 950 Pro. The only beef one might have with the drive is that it’s expensive, but it’s priced right in line with its competitors and even a little bit cheaper than its predecessor. If you have to ask yourself “is this really worth the cost?” then the answer is almost certainly no. If you must have absolute top-shelf performance (whether out of quantifiable need or merely for bragging rights), you’re going to want to get your hands on a 960 Pro.

With performance like the 960 Pro’s on tap, it’s no surprise this drive is going home with our coveted Editor’s Choice award. The 960 Pro 2TB’s peerless sequential performance set our hearts ablaze in the way only a new Samsung product can. We’re thrilled with the generational improvements the company has eked out of its V-NAND, the strong debut of its Polaris controller, and the sheer ingenuity of its sticker technology. We’re excited to see some of these advances eventually filter down to the rest of Samsung’s SSD lines. Samsung will follow the 960 Pro with a more affordable 960 EVO line shortly, and we can’t wait to see what those drives will mean for more affordable NVMe goodness.

Comments closed
    • DarkUltra
    • 3 years ago

    So this drive has capacitors to help write out anything in the buffer if the power goes out:

    [url<]https://youtu.be/nwCzcFvmbX0[/url<] (skip to 2:00)

    Will the 960 Evo have that? Would this prevent something like this: [url<]https://youtu.be/-Qddrz1o9AQ[/url<]

    • Kurlon
    • 3 years ago

    So, has anyone done a comparison / write up on M.2 PCIe AIC adapters for those of us who don’t have native M.2 ports on our mobos? I suddenly seem to have a need after reading this review, it’s time to ditch the 320GB spinning rust and a 960 Pro looks about perfect for the job.

      • thecoldanddarkone
      • 3 years ago

      To answer this, as far as I know almost all passive PCIe adapters will work just fine (as long as they are rated for it). I’ve had a couple that I can give feedback on.

      I have one of these, works fine, just make sure your ssd has plenty of air.

      [url<]https://www.amazon.com/Lycom-DT-120-PCIe-Adapter-Support/dp/B00MYCQP38/ref=sr_1_1?ie=UTF8&qid=1477320526&sr=8-1&keywords=lycom+adapter[/url<]

      I also had one of these. It works very well but is definitely a bit overpriced. It has no switch for the LEDs (which is hella annoying) and comes with thermal pads for cards up to 80 mm.

      [url<]https://www.amazon.com/Angelbird-Wings-PX1-PCIe-Adapter/dp/B018U79YQK/ref=sr_1_1?ie=UTF8&qid=1477320592&sr=8-1&keywords=px1+wings[/url<]

      Both worked on my Asus x79ws, which doesn’t officially support NVMe. Both supported full speeds on the 950 Pro I had.

        • Kurlon
        • 3 years ago

        That Angelbird looks good, and annoys me at the same time. Put the finning on the OUTSIDE where it can actually do something, and yeah it needs a jumper to put out the lights.

        Likely what I’ll end up with though.

    • HERETIC
    • 3 years ago

    Hi Tony, this comment-
    “I didn’t notice any QD1 4k benchmarks in the review.”
    -is getting common. Perhaps it’s time to add them.
    Don’t know if there’s a direct correlation, but random read response time
    looks like a good indicator-
    the 960 at around 14K IOps is twice as fast as the MX300, and on your chart it has half the latency.

    Sammy is the master of QD1 4K (dare I say blazingly fast). Even the low-end 750 EVO
    is around 11K, double what other planar TLC drives from the likes of OCZ can hit.
    Again, you can see that in your random read response times.

    • wingless
    • 3 years ago

    This pleases me. The 950 Pro will now go on sale so I can buy it. It’s enough performance for my broke butt and i7-2600K system…

      • thecoldanddarkone
      • 3 years ago

      Honestly, if you’re broke, buy something else. Have you confirmed your 2600K system can even boot from NVMe? If you need an NVMe drive, look at the decently priced Plextor M8Pe drives. Right now it’s 44 cents a gig for the 512GB version. [url<]http://www.newegg.com/Product/Product.aspx?Item=N82E16820249086&cm_re=plextor_m8pe-_-20-249-086-_-Product[/url<]

    • GrimDanfango
    • 3 years ago

    If only they’d developed this new sticker technology in time for the Note 7…

    • mkk
    • 3 years ago

    As always, it’s nice to see a good slew of practical tests. Those show what matters to the vast majority of systems.

    • PrincipalSkinner
    • 3 years ago

    Fastest drive on the market that will marginally speed up your boot and load times.

      • Vhalidictes
      • 3 years ago

      It’s amazing how little that speed actually helps in day-to-day use.

        • Pwnstar
        • 3 years ago

        That’s because day2day use is CPU limited, not drive limited.

      • jihadjoe
      • 3 years ago

      The funny thing is it’s 5 seconds slower than some cheap OCZ drives when loading SOM, and is less than one second faster than the rest of the pack including the ancient X-25M in most other tests.

      For day-to-day use literally any SSD will get the job done, and there isn’t really much advantage going to an NVMe PCIe drive unless you regularly work with and transfer very large amounts of data.

      • egon
      • 3 years ago

      Where’s the bottleneck?

        • tsk
        • 3 years ago

        The software.

        • Krogoth
        • 3 years ago

        Application load times have been CPU-bound (single-threaded) since we moved onto second-generation solid-state media.

          • Waco
          • 3 years ago

          Source? Metrics of this? Or is it just hearsay?

            • Krogoth
            • 3 years ago

            Have you been sleeping for the past eight years? Almost every noteworthy reviewing site has been showing this since solid-state media became commonplace. Look at this site’s own archives if you doubt this. Here is an example from 2009 with Intel’s own X-25-M drive back in the day: [url<]https://techreport.com/review/17269/intel-second-generation-x25-m-solid-state-drive[/url<]

            It is one of the reasons why first-generation solid-state drives were a massive turn-off for most users. They were only marginally faster than the 10K-15K RPM HDDs of the day and were expensive from a GB/$$$$ ratio. Solid-state drives only started to shine after second-generation units and beyond came out. Here is another review from this site that demonstrates this: [url<]https://techreport.com/review/15931/intel-x25-e-extreme-solid-state-drive[/url<]

            • Waco
            • 3 years ago

            Those links do not corroborate your statement.

            • Krogoth
            • 3 years ago

            Either you are a weak troll or hopelessly dense. I can’t help you there.

            • Waco
            • 3 years ago

            What do you think they demonstrate?

            • Krogoth
            • 3 years ago

            SSDs in that line-up have significant differences in read/write throughput under synthetic loads, but in real-world stuff they are evenly matched in most common applications, especially in load time. There’s a jump from HDDs to SSDs, but that jump doesn’t match up with the differences in I/O throughput.

            *gasp* I wonder why?

            • Waco
            • 3 years ago

            *gasp* There’s only one CPU in the comparison. Don’t draw conclusions from the review that aren’t there.

            Go read up on Amdahl’s law and how it applies here. Go learn how the scientific method works, then realize you’re drawing conclusions based on what you want to find, not what the data suggests.

            • Krogoth
            • 3 years ago

            The aforementioned reviews have different CPUs, though. The review in the first link is powered by an E6700 @ 2.67GHz (dual-core) paired with a P45 chipset, while the review in the second link is powered by a P4 EE @ 3.4GHz (single-core) paired with a 955X chipset.

            This points out that you aren’t even bothering to read the entire contents of the reviews or thoroughly analyze the data. Considering what you had previously stated, this is rather ironic.

            Amdahl’s law is the reason why the usefulness of parallel computing is limited to certain workloads, i.e. not silly gaming and customer-tier applications. The majority of the software that is out there is single-threaded for this reason. The cost/benefits aren’t there for most developers. The main benefit of customer-tier multi-core chips is multi-tasking, not parallel computing. For this demographic, parallel computing is just icing on the cake.

            • synthtel2
            • 3 years ago

            There are no common tests of boot/load time between the two reviews, and lots of other factors differ between the two test rigs.

            [quote<]Amdahl's law is the reason why the usefulness of parallel computing is limited to certain workloads ie not silly gaming and customer-tier applications. The majority of the software that is out there is single-threaded for this reason. The cost/benefits aren't there for most developers. The main benefit for customer-tier multi-core chips is for multi-tasking not parallel computing. For this demographic, parallel computing is just icing on the cake.[/quote<] That's blatantly false, you've been called on it numerous times already, and I'm tired of reading your spam on it. Modern games are multithreaded, as is lots of other software. Amdahl's law is weaker than you think when it comes to gaming. My G3258 [url=https://techreport.com/forums/viewtopic.php?f=12&t=118650&p=1326746<]can't run whatever game[/url<] I want acceptably because you're wrong. Naughty Dog is [url=http://www.gdcvault.com/play/1022186/Parallelizing-the-Naughty-Dog-Engine<]right here[/url<] proving you wrong - they have to, [b<]as does anyone wanting to be performant on consoles,[/b<] because it's gotta run on 6 weak jaguar cores. [url=https://www.reddit.com/r/Planetside/comments/50qcpj/performance_update/d779scy/?st=iujb8y6w&sh=49f64803<]Planetside's tech[/url<] and Apple's [url=https://en.wikipedia.org/wiki/Grand_Central_Dispatch<]GCD[/url<] are doing their best to prove you wrong. [url=http://s09.idav.ucdavis.edu/talks/04-JAndersson-ParallelFrostbite-Siggraph09.pdf<]Here's[/url<] a Frostbite presentation on it from 2009. Bungie talks about it [url=http://advances.realtimerendering.com/s2013/Tatarchuk-Destiny-SIGGRAPH2013.pdf<]here[/url<]. Check how badly [url=https://www.youtube.com/watch?v=yS_sVz-8gUk<]this[/url<] experiment ended.

            • Krogoth
            • 3 years ago

            Amdahl’s law is quite alive and running strong. It has been a constant thorn in the HPC world where parallelism has been reigning king.

            Multi-threaded software is still a minority outside of the HPC/enterprise world, and that will be the case for some time yet. The majority of those “allegedly” multi-threaded customer-tier software solutions are really dual-threaded. That’s why you do not see any significant returns from moving from a dual-core to a quad-core chip for the overwhelming majority of the “multi-threaded” software solutions out there.

            The main reason why single-core chips struggle in that review has more to do with resource contention. A single-core chip not only has to handle the application in question, it has to manage resources between the OS and other background processes on a single core. Having spare cores remedies that, so the application can have a CPU core (or cores) to itself.

            It is why dual-socket platforms got their whole “smooth creaminess” meme back in their heyday, when most desktop systems were single-core and single-socket. That “smooth creaminess” experience became commonplace once multi-core systems were ubiquitous. What that reviewer did was really a “throwback” to the old days before SMP became commonplace.

            Those developer initiatives are just reactionary efforts trying to keep up the illusion that “Moore’s Law” is still alive. Most of those efforts are going to burn up once they realize that coding properly threaded software that scales well is much easier said than done.

            Planetside 2 is a poorly coded piece of obsolete junk. It is the Crysis I of online FPS games. Sony stopped caring about it years ago and sold the assets to a private investment group. Planetside 2 is dying out to the current crop of MOBAs.

            • Waco
            • 3 years ago

            You just keep doubling down on the misinformation.

            You keep making these statements without any actual evidence, then act all surprised when called on it.

            You’ve shared nothing to back up your claims.

            • Krogoth
            • 3 years ago

            You are now suggesting that Amdahl’s law is completely bogus and coding multi-threaded software with any dataset that scales well is easy as pie? 🙄

            • Waco
            • 3 years ago

            I never said, or implied, anything of the sort.

            • synthtel2
            • 3 years ago

            Watch that Naughty Dog presentation. It’s the direction the whole industry is moving, because it works. I object to it being in video format as much as anyone, but I don’t regret the time I spent on it one bit. You’ll learn a lot (unless you don’t want to).

            That’s not what’s going on with the GTA V or AoTS on a single-core CPU. Go back, actually watch the video this time, and then try saying that again with a straight face.

            • Krogoth
            • 3 years ago

            They are revisiting what programmers and software engineers had explored back almost 50 years ago with mainframes.

            The only reason the mainstream software industry is moving towards “multi-threaded” stuff is because we ran out of steam with cranking up the “MEGAHURTZ!” and IPC. They are going to find out again why parallel computing hasn’t taken off outside of its own niches. Parallel computing isn’t some magical remedy that will continue the illusion of “Moore’s Law.”

            The whole thing is just the death throes of the rapid growth that we experienced with digital computing since the discovery of the IC and transistor.

            • synthtel2
            • 3 years ago

            It has taken off, it is working, and the gains aren’t just theoretical. If you followed my links you’d know that.

            You can find plenty of material to bash in Naughty Dog’s presentation (it thrashes caches like a boss), but they’re keeping six cores legitimately busy to semi-ridiculous levels. Amdahl’s law was making it tough for them, but then they did a bit of simple pipelining so they could interleave work from two frames, and it’s safe to say Amdahl’s law isn’t a worry anymore. (The latency hit works out to not exist, before you ask.) It wasn’t even all that heavy in terms of implementation effort compared to multithreading of the old style. </tl;dr>

            Just for the lulz, I’m curious how you think anyone gets any CPU performance out of consoles at all.

            • Waco
            • 3 years ago

            Jaguar lolz

            • Krogoth
            • 3 years ago

            I’m not saying there aren’t any gains from going multi-threaded or that efforts toward it are completely fruitless. It is just that the gains are not that clear-cut, and it doesn’t magically scale up 100% when you keep throwing more cores at it.

            The whole argument about moving to multi-threading for developers is basically “Is it worth the extra time and effort to properly code well-threaded software?” The answer depends on the workload, and unfortunately for most developers it is not worth the time and effort ($$$$) for most mainstream-tier stuff.

            It is the primary reason why Intel and AMD* stuck with “quad-core” chips as the top of the line for their mainstream demographics while budget units are “dual-core,” and the silicon that has tons of cores is catered towards the enterprise, HPC, and SMB types.

            Gaming consoles run their content at reduced quality and settings versus their PC ports. They also have a lot less overhead to deal with versus a normal desktop. A game on a console requires a fraction of the time to fine-tune and optimize when you have a predictable hardware platform.

            It is not that surprising that a chip with eight pseudo-“Jaguar” cores on a specialized platform can yield similar gaming performance to a lowly Sandy Bridge/Ivy Bridge i5 on a general-purpose platform.

            • Waco
            • 3 years ago

            A single Jaguar core is equivalent to what…1/4 of a desktop SB/IB? At best?

            You’re negating your own point here, and you’ve still offered no evidence to back up your assertions.

            • Krogoth
            • 3 years ago

            That’s the benefits of having a specialized hardware and software platform versus something that is general purpose.

            Strangely enough, you aren’t providing solid evidence to support the contrary either.

            • Waco
            • 3 years ago

            You made the original claim. Back it up.

            • Krogoth
            • 3 years ago

            I did provide some evidence a while back:

            [quote<] Have you been sleeping for the past eight years? Almost every noteworthy reviewing site has been showing this since solid-state media became commonplace. Look at this site’s own archives if you doubt this. Here is an example from 2009 with Intel’s own X-25-M drive back in the day: [url<]https://techreport.com/review/17269/intel-second-generation-x25-m-solid-state-drive[/url<] It is one of the reasons why first-generation solid-state drives were a massive turn-off for most users. They were only marginally faster than the 10K-15K RPM HDDs of the day and were expensive from a GB/$$$$ ratio. Solid-state drives only started to shine after second-generation units and beyond came out. Here is another review from this site that demonstrates this: [url<]https://techreport.com/review/15931/intel-x25-e-extreme-solid-state-drive[/url<] [/quote<]

            The only rebuttal I got back was essentially “NO U! U ARE WRONG!” instead of providing evidence to the contrary. Not exactly productive, eh?

            • Waco
            • 3 years ago

            There’s nothing backing up your assertions in that quote.

            • Krogoth
            • 3 years ago

            You are not providing anything of substance either. Perhaps that’s because you cannot find, or do not have, data of your own that proves multi-core chips provide a significant reduction in load times over single-core chips for the majority of mainstream-tier applications on SSD media.

            • Waco
            • 3 years ago

            You made the claim. Not my responsibility.

            • Ninjitsu
            • 3 years ago

            [quote<] All file operations go through a dedicated thread. This offloads some processing from the main thread, however it adds some overhead at the same time. The reason why threaded file ops were implemented was to serve as a basement for other threads ops. When multiple threads are running at the same time, OS is scheduling them on different cores. Geometry and Texture loading (both done by the same thread) are scheduled on different cores outside the main rendering loop at the same time with the main rendering loop. [/quote<] [url<]https://community.bistudio.com/wiki/Arma_3_Startup_Parameters#exThreads[/url<]

            • synthtel2
            • 3 years ago

            I see multiple retcons, multiple direct contradictions of the stuff I’ve linked with nothing to back up your side, multiple large misunderstandings of the game development industry and associated tech, and zero well-informed conclusions. When you’re right it seems to be little more than a happy coincidence. You should consider fixing that.

            • Krogoth
            • 3 years ago

            The whole argument started with the claim that load times for mainstream applications are CPU-limited on second-generation SSD media because the applications in question are single- or dual-threaded for the most part.

            I provided reasons why this is unfortunately the case and explained why parallel computing isn’t some silver bullet for the problem.

            The majority of the rebuttal comes down to “NO U! YOU ARE DUMB!” and “SHOW ME PROOFS NAO!!” even after I provided some charts, and the said rebuttal has yet to provide proof to the contrary.

            For some reason it turned into a debate about the value of parallel computing itself.

            • synthtel2
            • 3 years ago

            I can’t find a single accurately represented thing in your whole post. It’s good for some lulz, thanks.

            Edit: I also can’t find a single accurately represented thing in your posts in the other recent sub-thread.

            • Ninjitsu
            • 3 years ago

            Funny thing about PS2, it benefits tremendously from an SSD 😀

            • Ninjitsu
            • 3 years ago

            I don’t agree with [i<]all[/i<] the things that Krogoth's been saying, but I do remember coming across benchmarks that seemed to imply this. In whatever testing I have done myself, I've noted that there's a lot of disk reads, and then the SSD isn't doing much, but the application hasn't fully "loaded", so the implication is that stuff is being processed. However I don't know if that's a CPU bottleneck or just an application bottleneck.

            • Waco
            • 3 years ago

            CPU bound or [i<]single-threaded[/i<] CPU bound as Krogoth is claiming? I know my quad core / eight thread CPU goes to nearly 100% usage on all threads when loading many modern games.

            • Krogoth
            • 3 years ago

            Most of those threads aren’t involved with I/O, compiling, and loading assets, though. There are only a handful of games that utilize four threads or more. They tend to be sandbox titles, strategy games, and MMORPGs. The overwhelming majority of modern games are either single-threaded (yes, it still happens in 2016 releases) or dual-threaded at best.

            The same holds true for the majority of mainstream applications. The mainstream applications that do utilize tons of threads are pretty much limited to the realm of content creation. That will likely remain the case for the foreseeable future.

            • Waco
            • 3 years ago

            Evidence required.

            • Krogoth
            • 3 years ago

            Just take a look at any CPU comparison round-up out there in the web.

            The performance and load times of the applications in question speak for themselves when you run the application across different core counts on the same architecture family. Assuming the application is [b<]coded well[/b<] and multi-threaded, there is going to be a jump from moving to dual cores. If the application does take advantage of more than two threads, you may see a jump from going to a quad-core, but it is much smaller than the previous jump.

            The problem is that coders for most “multi-threaded” mainstream applications and games don’t even bother to spend the time and $$$$ to properly thread the I/O, asset loading, and compiling. As an unfortunate consequence, these poorly coded applications do not see any significant reduction in load time when you move from a single-core chip to a dual-core chip. You get a lot better mileage in reducing load times just by overclocking the silicon.

            • Waco
            • 3 years ago

            You keep stating things, yet you’ve provided no actual evidence to back up your claims. Do you understand that?

            • Ninjitsu
            • 3 years ago

            Oh no, not single-threaded bound, no. And yeah, I’ve seen all four cores go to full with both my Core 2 Q8400 and my i5 4690K. (During loading)

        • Pwnstar
        • 3 years ago

        The CPU.

    • Neutronbeam
    • 3 years ago

    So Tony…when are you raffling off the test samples? 😉

      • derFunkenstein
      • 3 years ago

      This isn’t Hardware Canucks. /zing

      • weaktoss
      • 3 years ago

      Heh, I’d trade Jeff for a sample 1080 or two. The rest of you clowns have no shot 😀

    • Krogoth
    • 3 years ago

    Awesome workstation and scratch disk, but otherwise it is bloody overkill for the majority of desktop-tier workloads out there.

    The V-NAND cell technology is too immature/unproven to risk using this drive in a server, despite its impressive performance in server-oriented workloads.

    I hope that 960 Pro doesn’t include “secret” pyrotechnics when the drive dies. 😉

      • Neutronbeam
      • 3 years ago

      See, now I can’t tell if you’re impressed or unimpressed. Impressive.

      • cygnus1
      • 3 years ago

      [quote<] The V-NAND cell technology is too immature/unproven to risk using this drive in a server despite its impressive performance in server-ordinated workloads. [/quote<] And also that it comes in the consumer/SFF oriented m.2 form factor...

      • K-L-Waster
      • 3 years ago

      [quote<]I hope that 960 Pro doesn't include "secret" pyrotechnics when the drive dies[/quote<] When they said they were going to "blow up the data center space" I'm not sure everyone was on the same page....

      • TheJack
      • 3 years ago

      So now, are you impressed or not? You pretty much leave it open.

      • boing
      • 3 years ago

      I was thinking the same thing. Might as well buy a much cheaper SSD drive if the only practical benefit as an average desktop user means just shaving 1-2 seconds off boot and loading times.

        • Ifalna
        • 3 years ago

        Yeah, since my 120Gig SSD is getting a little claustrophobic, I’m currently thinking about buying sth bigger.

        Thought about getting a 512GB 960 EVO, but I’m not sure whether a 1TB Reactor would be the smarter move, given that my workloads hardly use the horsepower of such an M.2 drive, and there’s the need for an adapter card (Z77 board, no NVMe boot capability here).

    • TwoEars
    • 3 years ago

    What people tend to forget is that it’s the 4k performance which matters most. In the real world when you’re not just copying files this drive is more or less as fast as any other. So if you get a better deal in terms of MB/$ with a SATA solution don’t be afraid to go that way.

      • Firestarter
      • 3 years ago

      Well, looking at the random read response time and database I/O results from IOMeter at queue depth 1, this drive looks to be roughly twice as fast as my current SATA SSD, which is pretty cool I guess. The eye-watering sequential performance is a nice cherry on top of that.

      • albundy
      • 3 years ago

      I didn’t notice any QD1 4K benchmarks in the review. What I did notice is the load times, which were not really distinguishable between M.2 and SATA SSDs. So besides the I/O graphs, where will performance be noticeable? Also, what makes the AIC SSD cards’ numbers so much higher than M.2 drives’? I thought that both are using the same medium path.

        • MOSFET
        • 3 years ago

        [quote<]QD1 4k benchmarks[/quote<]

        The answer may indeed be, “Why bother with QD1?”

        [quote<]AIC ssd cards numbers so much higher[/quote<]

        Probably cooling, simply being more exposed to airflow. I recently got a pair of Plextor (Lite-On) PCIe SSDs in the add-in card form factor, and even on [b<]PCIe 2.0 x4[/b<] I have to run at least 4 VMs at a time to tell it’s a different drive than before (which was a Kingston HyperX 3K SSD). The NVMe PCIe drives are fast, but fast in the way that Broadwell-E is faster than the Pentium G3258. If you’re just booting a Windows desktop and firing up Office and Chrome, there won’t be much difference.

        • TwoEars
        • 3 years ago

        That’s exactly it. The m.2 interface is superior to sata but when it comes to 4k performance we’re still limited by how the drives function internally. You will notice improvements in file copying and when reading large assets, but other than that you won’t.

        • Pwnstar
        • 3 years ago

        Load times are actually limited by your CPU, not your SSD. All those drives are within the margin for error among each other, because it is CPU performance that really matters.

          • Krogoth
          • 3 years ago

          Load times for certain applications (not games/mainstream stuff) can be limited by memory speed too if the application is trying to recall data from memory instead of solid-state media.

    • Wirko
    • 3 years ago

    $144,000 per kilogram, or 1,800,000,000,000,000 bits per kilogram. Not bad, Samsung.

      • DPete27
      • 3 years ago

      It’s like crack, for your PC.

        • Ninjitsu
        • 3 years ago

        you mean…speed.

      • tacitust
      • 3 years ago

      Still great value compared with the (near?) infinite cost per kilo of digital products…

    • derFunkenstein
    • 3 years ago

    Wow, that’s a fast drive!

    On the sustained I/O rates test, that unsightly blob on the drop and the relatively low last half, is that a result of some sort of throttling? I’m not seeing in the text anything about an SLC-like “cache” mode for any of the NAND so I don’t think that would be a cause.

    “Low” is relative here, of course – it’s far better than anything else on that test outside of the 750 series.

      • weaktoss
      • 3 years ago

      Yeah, there’s no TurboWrite on these Pro drives. Will be interesting to see if the 960 EVO ends up outpacing the Pro in some tests because of that. The blob on the drop does look like it could very well be throttling, which would make sense after the drive had been running full-tilt for 300 seconds or so.

    • chuckula
    • 3 years ago

    [quote<]Samsung credits a large part of the 960 Pro's performance gain over the 950 Pro to Polaris, so we're looking forward to seeing what effect the chip will have on speeds as it proliferates throughout the manufacturer's SSD lineup.[/quote<] THANK YOU AMD.

      • RAGEPRO
      • 3 years ago

      For those of you confused by this comment, read the article.

      [sub<]Polaris is the name of Samsung's new SSD controller.[/sub<]

        • chuckula
        • 3 years ago

        Stop ruining all my fun!

        • cygnus1
        • 3 years ago

        Lol, people should just read the article. It’s a good review.

      • Duct Tape Dude
      • 3 years ago

      It actually is the AMD chip. Samsung’s 3D-NAND called for better massively parallel processing, and the Multi-Level AntiAliasing keeps every bit looking great no matter where it is.

      • JosiahBradley
      • 3 years ago

      What if someone strapped two of these things to the new Polaris Pro card with SSD slots….

        • derFunkenstein
        • 3 years ago

        Heh, I always figured Samsung would be the one to give new meaning to CrossFire.

          • Tirk
          • 3 years ago

          I think he was referring to this setup:

          [url<]https://techreport.com/news/30435/radeon-pro-solid-state-graphics-keeps-big-data-close-to-the-gpu[/url<]

          Since AMD has already strapped an SSD to a GPU before, it’s not out of the realm of possibility 😉

            • derFunkenstein
            • 3 years ago

            Oh, I know, I just wanted the cheap pop of making a Samsung fire joke. Sorry. Back to being inciteful. I mean insightful.
