OCZ’s RevoDrive 3 X2 240GB solid-state drive

Manufacturer OCZ
Model RevoDrive 3 X2
Price (240GB) $680
Availability Now

Ivy Bridge is coming, and with it, a wave of new motherboards based on Intel’s latest platform hub. The Benchmarking Sweatshop has been preparing for the deluge by tweaking the mix of peripheral tests we use to punish I/O ports. Real-world file transfers have been added to the mix thanks to some clever coding by our resident developer. To remove potential bottlenecks for those tests, we’ve had to source a very fast solid-state drive.

The lab is already brimming with the latest and greatest 2.5″ SSDs, providing no shortage of candidates to plug into 6Gbps Serial ATA ports and the docking station we have primed for USB 3.0 testing. These 2.5-inchers may not be quite fast enough to saturate the latest SATA and USB interfaces, but they’re as good as it gets for compatible devices. The bottleneck is unfortunate but unavoidable. Fortunately, it only applies to one half of the equation. Our file transfer tests use separate source and target drives, which frees us from the shackles of Serial ATA and USB for the other half.

Once you shed those interfaces, 2.5″ solid-state drives start to look rather, well, slow. The next step up is PCI Express, whose now-ubiquitous second generation offers about as much bandwidth per lane as 6Gbps SATA and USB 3.0—plus the ability to dedicate multiple lanes to a single device. Bottleneck, begone!

A lot of the PCIe-based SSDs on the market are multi-thousand-dollar offerings designed explicitly for enterprise applications; they’re a little outside the realm of what’s reasonable for even an expensive desktop. OCZ’s RevoDrive 3 X2 240GB is considerably more attainable, however. For not much more than a high-end graphics card, this PCI Express NAND sandwich claims it can push transfer rates up to 1500MB/s—three times the peak speed of the fastest 2.5″ SSDs. That sort of potential throughput is perfect for our motherboard testing, and it made us curious about how the RevoDrive stack might stack up against more traditional SSDs. So, we decided to take a closer look.
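As a sanity check on that 1500MB/s figure, here's the raw interface math. PCIe 2.0 and 6Gbps SATA both use 8b/10b encoding, so each carries 80% of its signaling rate as payload:

```python
# Back-of-the-envelope payload bandwidth for the interfaces involved.
# PCIe 2.0 signals at 5 GT/s per lane; 8b/10b encoding leaves 4 Gbps
# of payload per lane, or 500 MB/s. The same encoding applies to SATA.
GBPS = 1e9

pcie2_lane_mb_s = 5 * GBPS * 8 / 10 / 8 / 1e6   # 500.0 MB/s per lane
sata6_mb_s = 6 * GBPS * 8 / 10 / 8 / 1e6        # 600.0 MB/s for 6Gbps SATA

x4_budget_mb_s = 4 * pcie2_lane_mb_s            # 2000.0 MB/s across four lanes
print(x4_budget_mb_s >= 1500)                   # True: the claim fits in a x4 link
```

So the RevoDrive's four-lane connector leaves plenty of headroom for its claimed peak, while a single SATA or USB 3.0 link tops out well below it.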

Virtually an SSD

My, that’s a handsome expansion card. Not an ounce of bling adorns the RevoDrive’s black circuit boards. The matching back plate is riddled with venting holes that impart a bit of industrial style while providing a path for ambient airflow around the low-key heatsink. Under that hunk of finned metal sits OCZ’s SuperScale storage controller, which ties together four cutting-edge SandForce controller chips and a four-lane PCI Express 2.0 connector. OCZ calls this arrangement its Virtualized Controller Architecture, or VCA.

One might expect the SuperScale chip to be a simple RAID controller that stripes data across an array of SandForce SSDs. That’s not exactly what’s going on here, although OCZ has resisted our attempts to tease out precise details about how everything works. It does, however, say that the SuperScale controller “combines processing and full DMA (direct memory access) cores, as well as internal PCIe, SATA and SAS interfaces.” Indeed, there are no bridge chips to be found anywhere on the card.

There has been some speculation that the SuperScale chip is in fact a Marvell SAS controller. That wouldn’t be surprising, but we were too chicken to yank the glued-on heatsink for further inspection. OCZ has surely silk-screened its own name on the chip, anyway.

What matters more than the underlying silicon is the virtualization layer that makes four SandForce SSDs appear to Windows as a single drive. This VCA voodoo includes “unique command queuing and queue balance algorithms” that run on the SuperScale chip rather than on the host CPU. Instead of arbitrarily striping data across the SandForce controllers, VCA appears to distribute incoming I/O requests intelligently between the Native Command Queues associated with each chip. Here’s the arrangement in block diagram form:

Source: OCZ

We’ve asked for more details on how the “Complex Command Queuing Structure” works, but OCZ is thus far keeping specifics close to its chest. There doesn’t appear to be much integration between VCA and the SandForce controllers, though. OCZ says the SuperScale chip is tuned for the performance characteristics of the SandForce SF-2281, but claims “minor software changes” could adapt the scheme for other NAND controllers, including its own Indilinx Everest chip.

The RevoDrive’s NAND controllers operate as they would in a stand-alone SSD, complete with their own layer of secrecy, this time surrounding SandForce’s DuraWrite compression mojo, which remains intact under VCA. The SandForce feature set also includes a RAISE die redundancy scheme that protects against data loss due to physical flash failures. However, RAISE is disabled in the RevoDrive 3 120GB and the X2 240GB. The scheme consumes a slice of NAND capacity, and that space may be required by VCA overhead in those lower-capacity models.

Regardless of the capacity, the data flowing through the RevoDrive is protected against power loss. If the lights go out, OCZ says the card’s built-in capacitance can provide enough juice to complete all in-flight I/Os.

Designed primarily for enterprise SSDs, OCZ’s Virtualized Controller Architecture includes a number of features specific to that market. Some of those perks aren’t supported by the RevoDrive 3, including a special “recovery” mode and the ability to create a virtual pool of logical drives. Although it’s not advertised explicitly in the product literature, the RevoDrive does inherit support for the SCSI command set from VCA 2.0. OCZ also mentions compatibility with TRIM and its SCSI equivalent, dubbed UNMAP. There’s just one problem: neither works in current versions of Windows, which is kind of a big deal.

TRIM and its counterpart were introduced to combat the block-rewrite penalty associated with solid-state storage. This hit to write performance arises from the very nature of flash storage, which is much slower when writing to occupied flash pages. When data is deleted by the OS, the relevant flash pages are marked as available but not erased, allowing the SSD to run out of empty flash pages even when there’s plenty of available storage capacity. TRIM informs the solid-state drive that the flash pages associated with deleted data can be wiped; the SSD is then free to clear those pages as it sees fit.
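The page-state bookkeeping described above can be illustrated with a toy model. This is purely illustrative; real flash translation layers track erase blocks rather than single pages and are far more complex:

```python
# Toy model of the block-rewrite penalty. Pages are simplified to
# "empty" vs "occupied"; a write that lands on an occupied page must
# take the slow erase-then-write path. Not how any real FTL works,
# just the shape of the problem TRIM solves.
class ToyFlash:
    def __init__(self, num_pages):
        self.pages = ["empty"] * num_pages

    def write(self, page):
        """Returns True if the write had to take the slow path."""
        slow = self.pages[page] == "occupied"
        self.pages[page] = "occupied"
        return slow

    def trim(self, page):
        # TRIM: the OS tells the drive this page holds deleted data,
        # so the drive may erase it before the next write arrives.
        self.pages[page] = "empty"


flash = ToyFlash(4)
flash.write(0)            # fast: the page was empty
print(flash.write(0))     # True: rewriting an occupied page is slow
flash.trim(0)             # the OS deletes the data and sends TRIM
print(flash.write(0))     # False: the page was erased ahead of time
```

Without the `trim()` call, every rewrite of a previously used page takes the slow path, which is exactly the situation the RevoDrive finds itself in under Windows 7.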

Although Microsoft added TRIM support in Windows 7, it only works with AHCI devices. The RevoDrive 3 is a Storport device, which is Microsoft’s recommended interface for “high-performance buses, such as fibre channel buses, and RAID adapters.” One might expect this interface to support the SCSI UNMAP command, but that’s not the case, according to OCZ. Windows 8’s Storport driver is expected to add an extension that supports TRIM, though.

Given these limitations, the RevoDrive essentially cannot make use of TRIM and SCSI UNMAP in Windows 7. Fortunately, the garbage collection routines built into the SandForce controllers should prevent the RevoDrive’s long-term write performance from completely tanking. These algorithms, managed independently by the storage controller, periodically reorganize data to ensure that empty flash pages are available for incoming writes.

Garbage collection routines tend to run during idle periods, so the RevoDrive may be slow to recover after particularly punishing workloads. Our suite is full of demanding tests, and there are few opportunities for background garbage collection to kick in. We also test drives in a simulated used state to best represent their long-term performance, which means the RevoDrive faces an uphill battle against traditional SSDs with OS-level TRIM support.

TRIM issues aside, the RevoDrive 3 is very much a Windows-compatible storage device. Once the drivers are installed (which can be done with an existing OS or during the installation of a new one), the Revo appears as a SCSI disk. That drive can then be used as secondary storage or configured as a boot device.

The rest of the RevoDrive

The third-generation RevoDrive is available in a bunch of different flavors. Today, we’re looking at the RevoDrive 3 X2, whose suffix refers to the double-decker circuit board used to accommodate the four SandForce controllers and their associated NAND. The vanilla RevoDrive 3 uses a single circuit board but still features dual SandForce controllers. To avoid confusion, it would probably be better to call the single- and dual-board models the RevoDrive 3 X2 and X4, respectively. In my view, counting controllers makes more sense than counting circuit boards.

In addition to two board configurations, the RevoDrive 3 is available with a couple of different kinds of memory. The Max IOps versions are equipped with Toshiba’s 32-nm Toggle DDR NAND, the same memory used in the Vertex 3 Max IOps edition. The standard variants employ ONFI-compatible flash built on Micron’s 25-nm process. This asynchronous flash is similar to what’s found in OCZ’s Agility 3 SSD and other mid-range SandForce SSDs.

Somewhat surprisingly, OCZ doesn’t offer a version of the RevoDrive 3 based on synchronous NAND. That particular breed of flash is commonly found in high-end SandForce SSDs like OCZ’s vanilla Vertex 3, which is positioned just below the Max IOps model. There probably isn’t enough of a market for the RevoDrive 3 to justify a third memory tier.

Unfortunately, asynchronous NAND is definitely slower than the synchronous stuff. Micron’s datasheet for the RevoDrive’s 29F64G08CBAAA flash packages lists a per-pin transfer rate ceiling of 50 MT/s. The synchronous version of this NAND (part number 29F64G08CBAAB, for those following along at home with their decoder rings) is rated for speeds up to 200 MT/s. Both versions will endure at least 3,000 write-erase cycles, according to Micron.
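Those per-pin ratings translate directly into per-package bandwidth. These packages use an 8-bit I/O bus (the "08" in the part number), so each transfer moves one byte:

```python
# Per-package channel bandwidth implied by Micron's per-pin ratings,
# assuming the 8-bit I/O bus these packages use (one byte per transfer).
bus_bits = 8

async_mb_s = 50 * bus_bits / 8    # 50 MT/s  -> 50 MB/s per channel
sync_mb_s = 200 * bus_bits / 8    # 200 MT/s -> 200 MB/s per channel
print(sync_mb_s / async_mb_s)     # 4.0: the synchronous parts move data 4x faster
```

The controller can hide some of that deficit by striping across many packages, but the raw interface gap is fourfold.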

Capacity NAND dies Dies per package Max sequential read (MB/s) Max sequential write (MB/s) Max 4KB random writes (IOps) Price
240GB 32 x 64Gb 1 1500 1225 200,000 $680
480GB 64 x 64Gb 1 1500 1250 230,000 $1,600
960GB 128 x 64Gb 2 1500 1300 230,000 $3,150

Our RevoDrive 3 X2 240GB is the least expensive of the async X2 cards, and its specifications are detailed in the table above. OCZ adds NAND dies to hit the higher capacities, and the price climbs accordingly. The performance ratings also rise with capacity, but only slightly. The difference in sequential writes between the fastest and slowest models amounts to only 6%, and the gap in random writes is still fairly modest at 15%. All three members of the family have the same sequential read speed rating.
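Those percentages fall out of the spec table, measured relative to the slowest (240GB) model:

```python
# Percentage gaps between the 240GB and 960GB X2 models, computed
# from the spec table relative to the slowest model.
seq_write = {"240GB": 1225, "960GB": 1300}          # MB/s
rand_write = {"240GB": 200_000, "960GB": 230_000}   # IOps

seq_gap = (seq_write["960GB"] - seq_write["240GB"]) / seq_write["240GB"] * 100
rand_gap = (rand_write["960GB"] - rand_write["240GB"]) / rand_write["240GB"] * 100
print(round(seq_gap), round(rand_gap))   # 6 15
```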

Take a moment to let those projected performance numbers really sink in. We’re talking about transfer rates in excess of 1GB/s. The X2s are purportedly capable of pushing hundreds of thousands of 4KB random writes, although OCZ oddly doesn’t quote a figure for random reads.

The single-board RevoDrive 3 is available in 120, 240, and 480GB capacities—exactly half the storage at each step up the X2 ladder. With fewer controllers at their disposal, these models have lower performance ratings than X2 drives with the same capacities. The Max IOps variants of the single-board and X2 configs predictably boast higher performance specifications than their asynchronous counterparts. In the X2, switching to Toggle DDR NAND purportedly boosts sequential throughput as high as 1600MB/s, while random writes climb to 240,000 IOps.

Regardless of whether they have a second layer, all the RevoDrive 3 cards are single-slot designs. The base circuit board measures 6.6″ (168 mm) long, so it should be easy to squeeze into smaller systems.

OCZ says the X2 consumes 13.5W at idle and 14.3W when active, which seems like a lot for a 240GB SSD. But remember that this is more like four 60GB SSDs with an additional controller tacked on; given the hardware involved, the power consumption looks relatively modest.

Somewhat surprisingly given the RevoDrive’s workstation aspirations, OCZ’s warranty runs out after three years. We’re used to premium storage solutions offering five-year warranty coverage, making the shorter term a notable blemish on the Revo’s record.

To its credit, OCZ has improved the Toolbox application used to manage its SSDs. The interface is much prettier than early incarnations, and the app does a fine job of downloading and applying firmware updates. It can also secure-erase the RevoDrive if the SSD is connected as secondary storage (rather than as a boot drive). Wiping the Revo is much easier than secure-erasing an SSD RAID array, which must first be broken before member drives are wiped individually. That’ll be loads of fun with a comparable four-drive array. OCZ would do well to make the message indicating a successful secure erase more prominent, though; right now, it’s buried in the lower-left corner and barely visible.

We’d also like to see a revamped interface for monitoring SSD-specific SMART attributes. In addition to logging the total volume of host reads and writes, the RevoDrive can tell you how much life it has left and what percentage of its NAND blocks have been retired from service. These attributes pop up in a simple text window and would be better presented within the main interface.

Our testing methods

We have a full suite of performance results for literally dozens of different SSDs, but today, we’ve narrowed the field to include only models around the same 240GB capacity as the RevoDrive 3 X2. That should give us a sense of how this exotic storage solution stacks up against typical desktop SSDs, which are admittedly a fraction of the cost.

Apart from OCZ’s own Octane 512GB SSD, we don’t have anything in the Benchmarking Sweatshop that comes close to the RevoDrive’s asking price. The only matched pairs of drives are 64GB and smaller, making our RAID results poor candidates for comparison. We’ve included the Octane, along with a Western Digital Caviar Black mechanical desktop drive for reference, which gives us more than enough fodder for overstuffed graphs. Our test methods and systems haven’t changed, so the RevoDrive’s scores can be compared to those in any of our storage reviews dating back to last September.

If you’re familiar with our test methods and hardware, the rest of this page is filled with nerdy details you already know; feel free to skip ahead to the benchmark results. For the rest of you, we’ve summarized the essential characteristics of all the drives we’ve tested in the table below. Our collection of SSDs includes representatives based on the most popular SSD configurations on the market right now.

  Interface Cache Flash controller NAND
Corsair Force Series 3 240GB 6Gbps NA SandForce SF-2281 25-nm Micron async MLC
Corsair Force Series GT 240GB 6Gbps NA SandForce SF-2281 25-nm Intel sync MLC
Crucial m4 256GB 6Gbps 256MB Marvell 88SS9174 25-nm Micron sync MLC
Intel 320 Series 300GB 3Gbps 64MB Intel PC29AS21BA0 25-nm Intel MLC
Intel 510 Series 250GB 6Gbps 128MB Marvell 88SS9174 34-nm Intel MLC
Intel 520 Series 240GB 6Gbps NA SandForce SF-2281 25-nm Intel sync MLC
OCZ Octane 512GB 6Gbps 512MB Indilinx Everest 25-nm Intel sync MLC
OCZ RevoDrive 3 X2 240GB PCIe 2.0 x4 NA 4 x SandForce SF-2281 25-nm Micron async MLC
Samsung 830 Series 256GB 6Gbps 256MB Samsung S4LJ204X01 2x-nm Samsung Toggle DDR
WD Caviar Black 1TB 6Gbps 64MB NA NA

The SATA drives were plugged into the Intel 6Gbps ports hanging off our test system’s motherboard. The RevoDrive was dropped into the motherboard’s secondary PCIe x16 slot, which offers eight lanes of PCIe 2.0 connectivity linked directly to the Sandy Bridge CPU—more than enough PCIe lanes for the RevoDrive 3 X2.

We used the following system configuration for testing:

Processor Intel Core i5-2500K 3.3GHz
Motherboard Asus P8P67 Deluxe
Bios revision 1850
Platform hub Intel P67 Express
Platform drivers INF update 9.2.0.1030

RST 10.6.0.1022

Memory size 8GB (2 DIMMs)
Memory type Corsair Vengeance DDR3 SDRAM at 1333MHz
Memory timings 9-9-9-24-1T
Audio Realtek ALC892 with 2.62 drivers
Graphics Asus EAH6670/DIS/1GD5 1GB with Catalyst 11.7 drivers
Hard drives Corsair Force 3 Series 240GB with 1.3.2 firmware

Corsair Force Series GT 240GB with 1.3.2 firmware

Crucial m4 256GB with 0009 firmware

Intel 320 Series 300GB with 4PC10362 firmware

Intel 510 Series 250GB with PWG2 firmware

WD Caviar Black 1TB with 05.01D05 firmware

OCZ Octane 512GB with 1313 firmware

Samsung 830 Series 256GB with CXM03B1Q firmware

Intel 520 Series 240GB with 400i firmware

OCZ RevoDrive 3 X2 240GB with 2.15 firmware

Power supply Corsair Professional Series Gold AX650W
OS Windows 7 Ultimate x64

Thanks to Asus for providing the systems’ motherboards and graphics cards, Intel for the CPUs, Corsair for the memory and PSUs, Thermaltake for the CPU coolers, and Western Digital for the Caviar Black 1TB system drives.

We used the following versions of our test applications:

Some further notes on our test methods:

  • To ensure consistent and repeatable results, the SSDs were secure-erased before almost every component of our test suite. Some of our tests, like DriveBench and FileBench, then put the SSDs into a used state before the workload begins, which better exposes each drive’s long-term performance characteristics. In all cases, the SSDs were in the same state before each test, ensuring an even playing field. The performance of mechanical hard drives is much more consistent between factory-fresh and used states, so we skipped wiping the HDDs before each test—mechanical drives take forever to secure erase.

  • We run all our tests at least three times and report the median of the results. We’ve found IOMeter performance can fall off with SSDs after the first couple of runs, so we use five runs for solid-state drives and throw out the first two.

  • Steps have been taken to ensure that Sandy Bridge’s power-saving features don’t taint any of our results. All of the CPU’s low-power states have been disabled, effectively pegging the 2500K at 3.3GHz. Transitioning in and out of different power states can affect the performance of storage benchmarks, especially when dealing with short burst transfers.

The test systems’ Windows desktop was set at 1280×1024 in 32-bit color at a 75Hz screen refresh rate. Most of the tests and methods we employed are publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

HD Tune — Transfer rates

HD Tune lets us present transfer rates in a couple of different ways. Using the benchmark’s “full test” setting gives us a good look at performance across the entire drive rather than extrapolating based on a handful of sample points. The data created by the full test also gives us fodder for line graphs.

To make the graphs easier to interpret, we’ve greyed out the mechanical drive. The SSD results have been colored by drive maker, with the RevoDrive set apart from OCZ’s other offering in a yellow shade halfway between mustard and gold.

Weird. Although the RevoDrive has a higher average read speed than the other SSDs, its transfer rate bounces between high and low extremes with alarming frequency. Even the Revo’s maximum read speeds are substantially slower than the drive’s projected peaks.

The RevoDrive’s transfer rate profile looks even stranger in HD Tune’s write test. There are actually two oscillation patterns here. The series of regular spikes mostly matches the behavior we’ve observed from other SandForce-based SSDs. However, the spikes also follow a secondary wave that we haven’t seen before.

Average out all those peaks and valleys, and the RevoDrive 3 X2 scores lower than the Samsung 830 Series SSD by a wide margin. The drive’s average write speed barely breaks 300MB/s, which is a far cry from the drive’s published speed ratings.

These first tests were run with HD Tune’s default 64KB block size, which is obviously less than ideal for the RevoDrive. Out of curiosity, I fired up the same tests with an 8MB block size, the largest one available. The RevoDrive got a lot faster, so I ran the same tests on the Intel 520 Series and Samsung 830 Series for comparison.

The larger block size works out exceptionally well for the RevoDrive 3 X2, whose read speed averages nearly 1400MB/s. Perhaps more importantly, there’s none of the wild variance we saw in the previous tests. The other SSDs also post higher sustained read speeds than they did with the 64KB block size, but their gains are in the 16-28% range. The RevoDrive’s read speed improves by a massive 170%.

Switching to the 8MB block size also improves the RevoDrive’s write rate. The drive still exhibits a lot of oscillations in this test, but the secondary wave is gone and the average speed has tripled versus the 64KB block size. While the other SSDs enjoy performance gains, too, the improvements amount to less than 10%.

We asked OCZ if the RevoDrive is optimized for a particular block size and were told that 128KB should be “pretty close to best performance.” Not in HD Tune. The Revo only manages an average read speed of 716MB/s with the 128KB block size. Oddly, the benchmark spits out an error when trying to perform its write test with the same block size.

HD Tune’s burst speed tests are meant to isolate a drive’s cache memory. There’s no option to change the block size here.

The RevoDrive 3 X2’s read and write burst speeds are notably slower than most of the other SSDs, including those based on the very same SandForce controller—albeit only one of them.

HD Tune — Random access times

In addition to letting us test transfer rates, HD Tune can measure random access times. We’ve tested with four transfer sizes and presented all the results in a couple of line graphs. We’ve also busted out the 4KB and 1MB transfer sizes into bar graphs that drop the mechanical drive and should be easier to read.

As the line graph illustrates, the gulf in access times between mechanical and solid-state storage is vast—we’re talking multiple orders of magnitude. The SSDs are tightly packed by comparison, and the RevoDrive 3 X2 comes out looking pretty good. We typically see the biggest separation between SSDs in the 1MB read, where the RevoDrive is much quicker than its peers.

The trend continues in HD Tune’s random write test. While the RevoDrive 3 X2 falls 0.002 milliseconds short of the lead trio in the 4KB test, it’s way out ahead in the 1MB test.

TR FileBench — Real-world copy speeds

Concocted by resident developer Bruno “morphine” Ferreira, FileBench runs through a series of file copy operations using Windows 7’s xcopy command. Using xcopy produces nearly identical copy speeds to dragging and dropping files using the Windows GUI, so our results should be representative of typical real-world performance. We tested using the following five file sets—note the differences in average file sizes and their compressibility. We evaluated the compressibility of each file set by comparing its size before and after being run through 7-Zip’s “ultra” compression scheme.

File set Number of files Average file size Total size Compressibility
Movie 6 701MB 4.1GB 0.5%
RAW 101 23.6MB 2.32GB 3.2%
MP3 549 6.48MB 3.47GB 0.5%
TR 26,767 64.6KB 1.7GB 53%
Mozilla 22,696 39.4KB 923MB 91%

The names of most of the file sets are self-explanatory. The Mozilla set is made up of all the files necessary to compile the browser, while the TR set includes years’ worth of the images, HTML files, and spreadsheets behind my reviews. Those two sets contain much greater numbers of smaller files than the other three. They’re also the most amenable to compression.
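The compressibility column can be reproduced with any general-purpose compressor: compare each file set's size before and after compression. The article used 7-Zip's "ultra" preset; zlib at its maximum level stands in here for illustration.

```python
# Compressibility as the percentage reduction in size after compression.
# zlib is a stand-in for the 7-Zip "ultra" preset used in the article.
import zlib

def compressibility(data: bytes) -> float:
    """Percentage reduction in size after compression."""
    compressed = zlib.compress(data, level=9)
    return (1 - len(compressed) / len(data)) * 100
```

Repetitive source trees like the Mozilla set shrink dramatically; already-compressed media like MP3s or movies barely budge, which is why their figures sit below 1%.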

To get a sense of how aggressively each SSD reclaims flash pages tagged by the TRIM command, we’ve run FileBench with the solid-state drives in two states. We first test them in a fresh state after a secure erase. The SSDs are then subjected to a 30-minute IOMeter workload, generating a tortured used state ahead of another batch of copy tests. We haven’t found a substantial difference in the performance of mechanical drives between these states.
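The timed copies themselves are straightforward to approximate. This sketch is not FileBench (Bruno's tool isn't public); it just times a recursive copy, the Python equivalent of an `xcopy src dst /s /e /i` invocation, and reports throughput:

```python
# Minimal stand-in for a FileBench-style timed copy: measure the size
# of a directory tree, copy it recursively, and report MB/s.
import os
import shutil
import time

def copy_speed_mb_s(src, dst):
    total_bytes = sum(
        os.path.getsize(os.path.join(root, name))
        for root, _dirs, files in os.walk(src)
        for name in files
    )
    start = time.perf_counter()
    shutil.copytree(src, dst)   # recursive copy, like xcopy /s /e /i
    elapsed = time.perf_counter() - start
    return total_bytes / (1024 * 1024) / elapsed
```

With separate source and target drives, as in our tests, the slower of the two sides sets the ceiling on the reported speed.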

Fresh from a secure erase, the RevoDrive hangs with the leaders through our first wave of FileBench tests. It trades blows with the Samsung 830 Series in the tests with larger, compressed files. The other SandForce-based SSDs provide better competition in the TR and Mozilla tests, which are dominated by smaller files that are more amenable to compression.

Given the RevoDrive’s impressive peak throughput in HD Tune, it’s hard not to be a little disappointed by these results. Remember that this first batch of tests was conducted right after a secure erase that emptied all of the RevoDrive’s flash pages. Things are about to get much worse.

Our torture test fills all the RevoDrive’s flash pages, and without TRIM, the drive’s garbage collection routines aren’t fast enough to reclaim them. Thus, the RevoDrive 3 X2’s copy speeds plummet after the drive is put into a used state. The bigger the average file size, the more precipitous the performance drop, pushing the Revo way down the standings in the Movie, RAW, and MP3 tests. We’ve seen similar behavior from RAID setups that also lack TRIM support, so the results aren’t entirely unexpected. Given its prominent place on the RevoDrive’s spec sheet, though, we did initially expect TRIM to work.

TR DriveBench 1.0 — Disk-intensive multitasking

TR DriveBench allows us to record the individual IO requests associated with a Windows session and then play those results back as fast as possible on different drives. We’ve used this app to create a set of multitasking workloads that combine common desktop tasks with disk-intensive background operations like compiling code, copying files, downloading via BitTorrent, transcoding video, and scanning for viruses. The individual workloads are explained in more detail here.

Below, you’ll find an overall average followed by scores for each of our individual workloads. The overall score is an average of the mean performance scores across the individual multitasking workloads.

We run DriveBench 1.0 in a simulated used state, right after the drives have been hammered with IOMeter tests that fill all their available flash pages. That doesn’t give garbage collection algorithms any time to kick in, creating a likely worst-case scenario for the RevoDrive given its lack of TRIM support in Windows. Although the Revo manages to beat our lone mechanical drive, the PCIe SSD lags well behind all the other solid-state drives.

Let’s break down DriveBench’s overall score into individual tests to see if there might be a silver lining for the RevoDrive somewhere in the results.

Nope. The Revo is the slowest SSD across the board.

TR DriveBench 2.0 — More disk-intensive multitasking

As much as we like DriveBench 1.0’s individual workloads, the traces cover only slices of disk activity. Because we fire the recorded I/Os at the disks as fast as possible, solid-state drives also have no downtime during which to engage background garbage collection or other optimization algorithms. DriveBench 2.0 addresses both of those issues with a much larger trace that spans two weeks of typical desktop activity peppered with multitasking loads similar to those in DriveBench 1.0. We’ve also adjusted our testing methods to give solid-state drives enough idle time to tidy up after themselves. More details on DriveBench 2.0 are available on this page of our last major SSD round-up.

Instead of looking at a raw IOps rate, we’re going to switch gears and explore service times—the amount of time it takes drives to complete an I/O request. We’ll start with an overall mean service time before slicing and dicing the results.

The extra downtime afforded by DriveBench 2.0 appears to give the RevoDrive 3 X2 enough of a breather to perform background garbage collection. Despite being tested in a used state, the Revo posts the lowest mean service time we’ve ever recorded in this test. The Samsung 830 Series and synchronous SandForce drives from Intel and Corsair aren’t far behind, though.

Unlike most of the other SSDs, the RevoDrive’s median read and write service times are nearly identical. The Revo’s read performance is good enough for first place, but it’s just a little bit slower than the Samsung 830 Series with writes.

There are millions of I/O requests in this trace, so we can’t easily graph service times to look at the variance. However, our analysis tools do report the standard deviation, which can give us a sense of how much service times vary from the mean.

Score another one for the RevoDrive, at least for reads. There’s much less variance in the PCIe SSD’s read service times than there is with the other drives. However, the Samsung 830 Series again comes out ahead when we home in on write performance. The RevoDrive 3 X2 still manages a second-place performance, closely matching the standard deviation of synchronous SandForce SSDs from Corsair and Intel.

Our last bit of statistical DriveBench analysis narrows the focus to I/O requests that take over 100 milliseconds to complete—an extremely long time within the confines of a modern PC. For reasons that will become clear in a moment, we’re showing the number of 100+ ms service times rather than expressing that figure as a percentage of total I/O requests.
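In sketch form, the statistics we pull from each trace look something like this (the service times below are made-up sample values; real traces contain millions of requests):

```python
# Mean, median, standard deviation, and 100+ ms outlier count over a
# toy list of per-request service times, in milliseconds.
import statistics

service_times_ms = [0.2, 0.3, 0.25, 0.4, 120.0, 0.35, 0.3, 0.28]

mean = statistics.mean(service_times_ms)
median = statistics.median(service_times_ms)
stdev = statistics.stdev(service_times_ms)
outliers = sum(1 for t in service_times_ms if t > 100)

print(f"{median:.2f} ms median, {outliers} request(s) over 100 ms")
```

Note how a single 120-ms stall barely moves the median but inflates the mean and standard deviation, which is why we report all three.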

Over the course of our two-week trace, the RevoDrive 3 X2 logs hardly any service times longer than 100 milliseconds. Indeed, none of the drive’s write service times exceed that mark. Arguably more impressive are the read results, even though the Revo stumbled a total of 59 times over the course of the test. That’s still a huge improvement over its next closest competitor, which broke the 100-ms barrier over 1,700 times running the same trace.

IOMeter

Our IOMeter workloads feature a ramping number of concurrent I/O requests. Most desktop systems will only have a few requests in flight at any given time (87% of DriveBench 2.0 requests have a queue depth of four or less). We’ve extended our scaling up to 32 concurrent requests to reach the depth of the Native Command Queuing pipeline associated with the Serial ATA specification. Ramping up the number of requests also gives us a sense of how the drives might perform in more demanding enterprise environments.

We run our IOMeter tests using the fully randomized data pattern, which presents a particular challenge for SandForce’s write compression scheme. We’d rather measure SSD performance in this worst-case scenario than using easily compressible data.

Clearly, the RevoDrive 3 X2 needs heavier loads to achieve peak performance. The nature of the workload also matters quite a bit. The web server test is made up entirely of read requests, and the Revo’s transaction rates are much lower than the competition’s. OCZ’s PCIe SSD fares better in the other three tests, which unleash a mix of reads and writes.

Within that trio, the database test has the highest percentage of writes at 33%. The RevoDrive has its best showing there, but it doesn’t fare as well in the file server and workstation tests, which are made up of only 20% writes. When crunching a barrage of randomized I/O requests, it seems the RevoDrive prefers writes to reads. Perhaps that’s why the official spec sheet offers performance ratings for 4KB random writes but not for reads.

Boot duration

Before timing a couple of real-world applications, we first have to load the OS. We can measure how long that takes by checking the Windows 7 boot duration using the operating system’s performance-monitoring tools. This is actually the first time we’re booting Windows 7 off each drive; up until this point, our testing has been hosted by an OS housed on a separate system drive.

Booting Windows from the RevoDrive requires little more than the appropriate 32- or 64-bit driver. The OS doesn’t load faster than it does on other SSDs, though. The RevoDrive 3 X2 is 2.5 seconds behind the fastest SSD in our Windows 7 boot duration test. At least it’s still well ahead of the Caviar Black.

Level load times

Modern games lack built-in timing tests to measure level loads, so we busted out a stopwatch with a couple of reasonably recent titles.

The Duke Nukem Forever results are too close to call, but the RevoDrive is up with the leaders. It has a half-second lead over the field in Portal 2, and given the closeness of the other results, I’m inclined to call that one a definitive win. Not that half a second matters a whole lot when you’re loading a game, mind you.

Power consumption

We tested power consumption under load with IOMeter’s workstation access pattern chewing through 32 concurrent I/O requests. Idle power consumption was probed one minute after processing Windows 7’s idle tasks on an empty desktop.

We typically measure SSD power consumption at our test system’s SATA power connector. That doesn’t work with the RevoDrive, which pulls power from the motherboard’s PCI Express slot. This time around, we’ve measured the power consumption of the entire system, sans monitor, at the wall outlet. To provide a frame of reference, the Intel 520 Series was also tested this way.

The RevoDrive config consumes about 5W more at idle and double that under load. Those results are reasonable considering the number of controllers on the PCIe SSD. If you’re curious how the Intel 520 Series’ power consumption compares to other SSDs, check out this page of our review of the drive.

The value perspective

Welcome to our famous value analysis, which adds capacity and pricing to the performance data we’ve explored over the preceding pages. We used Newegg prices to even the playing field, and we didn’t take mail-in rebates into account when performing our calculations.

First, we’ll look at the all-important cost per gigabyte, which we’ve obtained using the amount of storage capacity accessible to users in Windows.
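The math itself is trivial. As an illustration (using the $680 price from the spec box and the nominal 240GB rather than the exact Windows-visible capacity behind our chart):

```python
def cost_per_gb(street_price: float, usable_gb: float) -> float:
    """Dollars per gigabyte of usable capacity."""
    return street_price / usable_gb

# Nominal capacity used here as a stand-in for the Windows-visible figure.
print(round(cost_per_gb(680.0, 240.0), 2))  # 2.83 -- "nearly $3/GB"
```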

Waiting for dollar-per-gigabyte SSDs? Stay away from the RevoDrive 3 X2, which rings in at nearly $3/GB for the 240GB model. The leading SandForce drives cost less than half that, as does the Crucial m4. Buying four OCZ Agility 3 60GB SSDs based on the same NAND and controller configuration as the Revo yields an equivalent capacity at $1.50 per gig. Of course, you’ll need some sort of RAID controller to make up for the lack of OCZ’s SuperScale chip and its VCA sidekick.

Our remaining value calculations use a single performance score that we’ve derived by comparing how each drive stacks up against a common baseline provided by the Momentus 5400.4, a 2.5″ notebook drive with a painfully slow 5,400-RPM spindle speed. This index uses a subset of our performance data described on this page of our last SSD round-up. The Intel 510 Series was actually worse than our baseline in one of the tests—100+ ms writes in DriveBench 2.0—so we’ve fudged the numbers a little to prevent that result from messing up the overall picture. We’ve also had to tweak the score for the RevoDrive in that test; it didn’t log any extra-long write service times, and the resulting zero is problematic for our baseline comparison formula.
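The actual formula is detailed in our earlier round-up, but a geometric mean of baseline-relative ratios, hypothetically sketched below with made-up numbers, shows why both a worse-than-baseline result and a zero result need fudging:

```python
from math import prod

def baseline_index(drive, baseline, lo=1.0, hi=100.0):
    """Geometric mean of baseline/drive ratios for lower-is-better metrics.
    A worse-than-baseline result is floored at `lo`, and a zero result
    (an infinite ratio) is capped at `hi`, so one pathological test
    can't dominate the index. A sketch, not our actual scoring formula."""
    ratios = []
    for test, value in drive.items():
        ratio = baseline[test] / value if value > 0 else hi
        ratios.append(min(max(ratio, lo), hi))
    return prod(ratios) ** (1 / len(ratios))

# Hypothetical mean service times in milliseconds; the zero write figure
# mimics the RevoDrive's empty 100+ ms write count in DriveBench 2.0.
momentus = {"read": 20.0, "write": 30.0, "copy": 40.0}
revo     = {"read": 2.0,  "write": 0.0,  "copy": 5.0}
print(round(baseline_index(revo, momentus), 1))  # 20.0
```

Without the cap, the zero result would push the ratio to infinity and make the overall score meaningless, which is precisely the problem we had to fudge around.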

Even being generous with the fudging doesn’t change the RevoDrive’s overall score appreciably. It’s stuck in the middle of the pack, which is a poor showing for a $700 SSD. I’m not surprised given the lack of TRIM support and the index’s focus on used-state performance with desktop workloads, though. The RevoDrive is at its best in a fresh state and at the sort of high queue depths rarely seen in desktop systems.

Now for the real magic. We can plot this overall score on one axis and each drive’s cost per gigabyte on the other to create a scatter plot of performance per dollar per gigabyte.

For everyday desktop use, the RevoDrive 3 X2 simply isn’t a very good value. At least it’s fast enough to stay out of the lower-right corner, the least attractive region on the plot. The Revo’s position all the way to the right highlights its high cost per gigabyte, though.

Speaking of cost, let’s switch gears a little and consider the RevoDrive in the context of a complete system. This time, we’ve divided our overall performance score by the total cost of our test system’s components. Those parts total around $800 before we add the cost of the SSDs.

Same story, different angle. The RevoDrive 3 X2 is simply too expensive to offer a better value proposition given its position on the performance axis.

Conclusions

There’s something very cool about the fact that OCZ’s SuperScale chip and Virtualized Controller Architecture can, without resorting to RAID, make a quartet of SandForce SSDs appear to Windows as a single SCSI drive. The RevoDrive 3 X2 becomes even more intriguing when one considers the peak throughput that arrangement provides. However, the $680 price tag on the 240GB model quickly puts things into perspective.

A more serious concern is the fact that Windows 7 doesn’t support TRIM on Storport devices like the RevoDrive 3 X2. SSD RAID arrays share a similar affliction, and this limitation is almost surely beyond OCZ’s control. However, the firm’s flat assertion that the RevoDrive supports TRIM is potentially misleading, given the OS support situation. Our results clearly show the lack of TRIM support hampering the Revo’s used-state performance in several tests. As we saw in DriveBench 2.0, though, the RevoDrive can sustain a high level of performance in real-world desktop workloads when there’s sufficient idle time for background garbage collection.

Thing is, the Samsung 830 Series actually scores better in a few DriveBench 2.0 metrics, and it’s very nearly as fast in that test overall. The RevoDrive simply doesn’t provide an appreciable performance advantage over the best 2.5″ SSDs in typical desktop tasks. Neither games nor Windows loads substantially quicker on the PCIe SSD, and even when the Revo is in a fresh state, its file copy speeds are nothing to write home about. Those results shouldn’t be affected by the lack of TRIM at all.

OCZ technically classifies the RevoDrive 3 X2 as a workstation solution, and it has more appeal in that realm. Under the right circumstances, the drive really can push obscenely high transfer rates. The Revo’s performance also ramps up nicely at the higher queue depths we’d expect from extremely demanding workloads. The nature of those workloads is key, though. The RevoDrive appears to require very large block sizes for optimal sequential throughput, and its prowess with random I/O seems more pronounced with writes than with reads.

For our purposes, at least, the Revo looks like a good fit. We needed an extremely fast secondary storage device for sequential transfers, and the RevoDrive 3 X2 is well-equipped for that task. The drive’s impressive throughput may even allow us to find weaknesses in motherboards’ PCIe implementations.

The RevoDrive also gives us a glimpse into the future of SSDs. 2.5″ drives are already close to saturating the bandwidth available in the 6Gbps Serial ATA interface, and you won’t find a new SATA spec waiting in the wings with a fatter pipe. PCI Express, or more likely a derivative spiced up with storage-specific features, is the next step for high-performance solid-state drives. With several generations of PCI Express SSDs under its belt, proprietary controller technology from its Indilinx division, and a virtualization architecture that can tie multiple controllers together, OCZ looks well-positioned for future generations.

Comments closed
    • Buzzard44
    • 8 years ago

    “the RevoDrive 3 X2 is simply isn’t a very good value” – 3rd to last paragraph of page 11.

    Good write up.

    Bad product.

    • Bensam123
    • 8 years ago

    Sad and disappointing results. OCZ seems to spit out new revisions of these things as fast as humanly possible without actually trying to improve them.

    It is true that SSDs are getting close to saturating Sata3 (still a ways off in all of the tests if saturation happens at 600MB/s), but that doesn’t stop people from achieving greater results than the theoretical maximum of one cable with RAID, which would be the equivalent competition to this particular solution.

    Since this solution itself relies on a raid scheme on a single board, it could easily be compared to, say, two or three Samsung SSDs matching up to the same size in a raid 0 array.

    • Meadows
    • 8 years ago

    What’s up with the constant spike patterns on ALL the [b<]240[/b<] GiB drives regardless of brand?

      • willmore
      • 8 years ago

      Is that only seen on 240GiB drives? I thought it was on all SF drives.

      • indeego
      • 8 years ago

      It is caching, and the inherent differences in algorithms.

      • Dissonance
      • 8 years ago

      It’s a SandForce thing, regardless of the brand. Drives based on Intel and Marvell controllers don’t exhibit the same spikes, which aren’t limited to 240GB SandForce models.

      [url<]https://techreport.com/articles.x/22415/4[/url<]
      [url<]https://techreport.com/articles.x/22358/4[/url<]
      [url<]https://techreport.com/articles.x/21672/5[/url<]

      We started seeing this first with the Force F120, which was one of the first SandForce drives to ditch enterprise-style overprovisioning for the 7-8% typical of consumer SSDs.

      [url<]https://techreport.com/articles.x/19079/3[/url<]

        • Meadows
        • 8 years ago

        Ah.

    • Chrispy_
    • 8 years ago

    It’s out of place, clearly an enterprise part that’s been re-spun to get some extra consumer sales.

    I fail to see how this can compete with two 128GB SSD’s in a RAID0 at half the cost/GB. Most new motherboards have at least two SATA 6Gb/s ports and a free ICH10R controller to make a decent job of the striping.

      • kamikaziechameleon
      • 8 years ago

      Not trolling just inquiring.

      Why is it that when a part is slower in basic functionality and it’s expensive, it’s always an enterprise or workstation part?

      It seems to be the trend that, hey, Xeons, and workstation cards, and enterprise HDDs really just don’t perform well with basic tasks. But guess what: they don’t have to, since they compete in different price categories. Why don’t more people just re-purpose consumer components and save the money? I’m constantly jumping between premium hardware in my office and my WAY cheaper consumer computer at home that I built for about 1/4 of my work machine… my home computer is frequently better at flipping web surfing and it drives me crazy that they can’t add some basic functionality for a dynamic work environment to these “premium” parts. Meanwhile I notice next to no slowdown in my work on my consumer setup.

        • Voldenuit
        • 8 years ago

        [quote<]Why don't more people just re-purpose consumer components and save the money?[/quote<]

        Reliability. Enterprise products cost more because they have more stringent quality control, higher reliability and warranty. They also tend to perform better at workstation and server tasks (see Revodrive web server tests, Quadro OpenGL and CAD tests, etc.) than their consumer equivalents.

        That said, I don't think the OCZ Revodrive is necessarily the best example for a halo workstation/server product.

          • willmore
          • 8 years ago

          Don’t forget support. That’s probably the biggest enterprise feature. All of the actual improvements to the product stem from the manufacturer wanting to make the support costs as low (for them) as possible. Benefits to the customer are often incidental.

        • Chrispy_
        • 8 years ago

        the “Enterprise” label is a very broad umbrella term. At the low end a “workstation” can be barely different to a consumer PC, whilst at the top end parts are basically bespoke, custom hardware and silicon for a small number of elite clients.

        Enterprise parts tend to be poor value for money compared to consumer parts for a combination of durability under full load 24/7, certification and dedicated support solutions. The underlying hardware is often identical (eg, SATA-based enterprise SAS drives) but the cost difference is a result of long-term compatibility testing, different firmware versions for specific applications and what is often 4-hour, on-site dedicated hardware swap-out policies.

        Going back to your workstations, a $2000 workstation is almost certainly inferior to a $1000 consumer desktop, which is why you feel like your home machine is better. Unless you are making use of the very few applications where Xeons and Opterons beat their i7 and Phenom twins, you are essentially being charged anything up to 4x the cost for little ‘everyday’ benefit.

        My personal pet peeve is in the GPU market, where the Quadros and FireGL cards are often marked up by a factor of more than eight, yet the identical hardware of the consumer versions is intentionally crippled in software for much of the product range. Unlike general office programs, CAD packages and graphics tools are co-developed with Nvidia and AMD to specifically use the enterprise-only driver functions that are deliberately handicapped in the consumer drivers. There is no option to use a Geforce instead of a Quadro if you’re not fussed about support or outright perfect accuracy. You have the all-or-nothing approach with no middle ground, where a $500 flagship card is getting trounced in professional applications by the same silicon normally used in a $99 budget card, but it’s been relabelled by BIOS or other means to make use of the non-crippled code-path in the enterprise drivers.

          • Vaughn
          • 8 years ago

          Great Post!

        • Bensam123
        • 8 years ago

        15k SAS drives perform quite well in all tasks compared to 7.2k drives and even in some areas compared to SSDs… It’s just some markets that exhibit this behavior.

    • Compton
    • 8 years ago

    I think the main takeaway is that the 256GB 830 is pretty damn excellent. Overall, the RD3x2 is 1) hampered by a ridiculous name and 2) doesn’t really fit in anywhere. When you need a PCIe solution, this isn’t what you’re gonna need. You’ll need something enterprise-y, and this is not it. When OCZ moves to an all-Marvell solution (meaning, not SF controllers), I think these drives will make a lot more sense. And they look to be headed that way.

    If you’re willing to give up TRIM, two smaller drives in R0 will yield far more impressive results, and with the right controller, performance over time will be better. By a factor of… a lot.

      • Voldenuit
      • 8 years ago

      [quote<]When OCZ moves to an all-Marvell solution (meaning, not SF controllers), I think these drives will make a lot more sense.[/quote<]

      +1

      The Corsair Performance Pro and Plextor M3 (both Marvell based drives) return performance to nearly unused levels with idle garbage collection alone.

      [url<]http://www.xbitlabs.com/articles/storage/display/marvell-ssd_7.html#sect0[/url<]

      I could see OCZ trying to shoehorn Indilinx into their Revodrives instead, though.

    • tygrus
    • 8 years ago

    Needs a wider range of tests.
    It really needs larger work queues (256+ outstanding IO’s) or multiple high IO bound apps. Single app desktop use is going to look similar or worse than a single SandForce SSD because it can’t use the parallel nature of the array of SSD controllers. QD4 on one SSD = performance of QD1 x3 on one SSD = performance of QD4 on RD3x2.
    It might look better for multitrack audio mixing or database/fileserver tasks with many users. Video mixing is normally only 2-4 streams which may not be enough to make use of all the resources.

      • willmore
      • 8 years ago

      I’d agree. The few places I’ve seen anything at all positive about this drive, it’s been where they could ramp up the number of parallel tasks accessing the drive. …and that is very much not a ‘typical desktop or workstation’ workload. Sure, some desktops and some workstations may, but it’s not *typical*.

    • slash3
    • 8 years ago

    Has anyone been able to physically track down a 240GB RevoDrive 3 X2 Max IOPS version? A friend of mine has been wanting to buy one of these, but it’s been in a perpetual state of nonexistence at retail for the last six months.

    • Plazmodeus
    • 8 years ago

    Thanks very much for this review. I don’t know how many times, faced with a sale on these at NCIX that *almost* brings them close to ‘reasonable’, I’ve almost punched the ticket. Fortunate that I didn’t.

    One question I have is how the X2 would fare in some kind of large size, high throughput application, like HD video editing.

      • willmore
      • 8 years ago

      You’d probably see little benefit from it. This drive only offers two things: very high BW for large block reads and the ability to sustain a large number of parallel reads–the more the better.

      The number of workloads that need exactly that are pretty small. And, even then, it’s only going to help for a portion of the task. See Amdahl’s Law [url<]http://en.wikipedia.org/wiki/Amdahl%27s_law[/url<]

    • internetsandman
    • 8 years ago

    Ironic that I was parading the worth of this drive around not two days ago. Seeing a drive of this caliber being eclipsed in the majority of average desktop workloads is both incredibly disheartening and slightly relieving. It’s sad that it doesn’t provide the lightning fast load times and speeds that I had been expecting, but it’s relieving that I can get an equivalent capacity RAID array of SSDs for cheaper and it would hopefully provide the instantaneousness I’m looking for

    I think TR should do a roundup of today’s high end 64 and 128GB SSD’s, both on their own and in RAID arrays of 2, 3, and even 4 drives (this PCIe card is seemingly 4 60GB drives in a similar array) to test the performance and value propositions of such solutions in comparison to this…..disappointment.

    • rhysl
    • 8 years ago

    The more I look at these results , the more the SAMSUNG stands out .. begging me to buy it ..

    Gosh ..

    • UberGerbil
    • 8 years ago

    I mentioned this in a late response to that “New Intel SSDs” news post a while back but it’s a better fit here so I’ll repost:

    It’s true that SATA is essentially a dead-end and we’re going to need an alternative with more bandwidth (and more room for growth) in the future. However, I’m not convinced sticking cards in PCIe slots is the solution in the long- or even medium-term. That may be fine for 2U+ server racks and enthusiast desktops, but the hot growth in the industry is smaller form factors, and even tricks like right-angle risers and mSATA connectors are pretty limiting. Moreover, do you really want the one PCIe slot in your ITX mobo taken up by storage? Sure, you could wedge another dedicated storage slot in there somewhere. But there’s a real virtue to being able to stick your storage on the end of a flexible cable in whatever corner it fits and hook it up to a small connector on the mobo.

    For Intel, that sounds like a job for LightPeak/Thunderbolt — and of course they have other imperatives to push that solution. But even on purely technical merits, there’s definitely something to be said for using the same cable standard for both internal and external connections. And since internal drives are easily powered through separate connectors, they really could go optical without any compromises (nor do internal drives threaten quite the same risk as running the PCIe protocol — and its free access to the system memory map — to external devices). Intel has a foot on every rung of this ladder — chipset/controller, interconnect, and storage — so it seems like a compelling and obvious thing for them to do. Whether they can get the rest of the industry to go along is a separate question, but presumably Apple at least would be on board.

    Of course, fast & durable memristors (or another NAND alternative) could arrive in the meantime and completely overturn the CPU/memory/storage hierarchy.

      • Krogoth
      • 8 years ago

      PCIe SSDs were never meant for normal desktops. They don’t have the need or demand for that level of I/O performance. HDDs are more than sufficient for this market. If they want to lower I/O latency, any of the SATA SSDs are up to the task.

      PCIe SSDs only make sense for servers/workstations where there is constant demand for more I/O performance.

        • phileasfogg
        • 8 years ago

        I don’t understand why you were downvoted -3 for a most-common-sensical observation. I couldn’t agree more, so I upvoted you.

          • willmore
          • 8 years ago

          Because he was off topic?

    • derFunkenstein
    • 8 years ago

    Wowza…what a total dud.

    • ew
    • 8 years ago

    So…

    * performance is nothing special
    * very expensive even by SSD standards
    * takes up a PCI-E slot
    * has OCZ infamous customer support

    Did I miss anything?

      • Voldenuit
      • 8 years ago

      [quote<]Did I miss anything?[/quote<]

      You missed how OCZ's Sadforce drives have like a 9+% return rate, so putting 2 of them together would give a... 20% failure rate?

        • MadManOriginal
        • 8 years ago

        So…you take a made-up [b<]return[/b<] rate, can't even double it correctly (hint: 9*2=18) and then call it a [b<]failure[/b<] rate. FUD much?

          • Voldenuit
          • 8 years ago

          Made up? Not really:

          [url<]http://www.behardware.com/articles/843-7/components-returns-rates-5.html[/url<]

          9.14% return rate on OCZ Vertex 2 240 GB. 7 out of the 8 highest SSD returns are for OCZ. And with the complicated circuitry and control logic on the RevoDrive, it's very likely that the failure rate (yes, I am compounding the two, but who in their right mind returns a high performance drive if it's working?) will be higher than a simple linear extrapolation.

            • Waco
            • 8 years ago

            The Vertex 2 is an old drive at this point that has been through MANY firmware revisions. The newest drives are far more reliable.

          • grantmeaname
          • 8 years ago

          You wouldn’t double it. You would want 1-(1-9%)^2, which is 1-(91%)^2, which is 17%.

          • JustAnEngineer
          • 8 years ago

          Math hint: 1 – (1-0.0914)² = 17.44% failure rate for a two-drive array, assuming that the component drives fail at the same rate as individual drives do. 1 – (1-0.0914)^4 = 31.85% failure rate for a four-drive array.

          For individual probabilities less than 10% (especially if they’re less than 1%), the approximation of multiplying rather than exponentiating isn’t too far off because the x² and higher order terms become quite small. (1-x)² = 1 – 2x + x² is close to 1 – 2x.

          One wonders why “RAID-0” isn’t called “AID-0” since there is no [b<]R[/b<]edundancy whatsoever.
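The compounding above can be checked in a couple of lines, assuming (as noted) independent, identical per-drive rates:

```python
def array_failure_rate(per_drive: float, n_drives: int) -> float:
    """Chance that at least one member of a striped array fails, assuming
    independent, identical per-drive failure rates."""
    return 1 - (1 - per_drive) ** n_drives

print(round(array_failure_rate(0.0914, 2), 4))  # 0.1744
print(round(array_failure_rate(0.0914, 4), 4))  # 0.3185
```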

      • indeego
      • 8 years ago

      “* takes up a PCI-E slot”

      How is this a bad thing? That is its purpose. Do you knock a SSD drive for taking a SSD port?

        • UberGerbil
        • 8 years ago

        SSD port? Is that what we’re calling SATA 6 Gb/s now?

        • ew
        • 8 years ago

        Considering a PCI-E 8x slot can easily be turned into 4 “SSD ports” or a lot of other things I don’t think it’s too hard to see how using a whole slot for one drive is a bad thing.

    • kuraegomon
    • 8 years ago

    The drive absolutely does support ATA TRIM and SCSI unmap – it’s just that the Windows 7 Storport drivers issue neither command to the hardware. I.e. it’s an OS/driver limitation, not a hardware limitation. This has been widely reported ever since the RD3 X2’s release. For independent proof, just search for “Revodrive 3 Linux TRIM”, in particular robbat2’s posts here:

    [url<]http://www.ocztechnologyforum.com/forum/showthread.php?95151-Linux-patch-support-for-RevoDrive3-RevoDrive3-X2-zDrive-R4&highlight=linux+revodrive3[/url<]

    I'm not sure why Geoff was taken by surprise by this state of affairs. I definitely think that something to the effect of "This drive is expected to offer a significantly better experience in Windows 8, due to the addition of TRIM/unmap support" should have been mentioned in the conclusions.

    That said, the Revodrive 3 X2's Linux support is rather execrable. Considering how crippled the drive is by Windows 7, OCZ should definitely have made more of an effort with Linux drivers, to highlight the capabilities of the drive.

    • eluderm3
    • 8 years ago

    [quote<]Although Microsoft added TRIM support in Windows 7, it only works with AHCI devices.[/quote<]

    TRIM is not limited to AHCI drives. It functions in IDE mode as well.

    [url<]http://en.wikipedia.org/wiki/TRIM[/url<]

    edit: providing links and fixing my poor spelling... for a 3rd time

    • DrDillyBar
    • 8 years ago

    I have the 120GB version of the first generation RevoDrive, and I love it. The cost of SSD’s has come down considerably since my purchase, but I’ve no regrets. The hardest thing about getting it working was my PCIe layout (trying to keep the graphics card in 16x mode).
    Also, my card shows that spiky read graph in HD Tune, although at a lower speed being first gen.

    • UberGerbil
    • 8 years ago

    I suspect that conclusion is not what some of the folks around here were expecting.

      • Waco
      • 8 years ago

      Considering you can get better performance from a quartet of 60 GB drives for about HALF the cost…yeah…I was a bit surprised. I wouldn’t expect the X3 to be any more reliable than 4 drives in RAID 0 either.

        • JustAnEngineer
        • 8 years ago

        The Corsair Force GT looks better, though Corsair’s RMA process is no picnic, either.

        • ew
        • 8 years ago

        Or if you spend the same amount of money you could get two 830s and raid them. Double the size and probably faster as well.

          • shank15217
          • 8 years ago

          If you use some silly software raid I highly doubt it, if you get a real RAID controller with on board cache then maybe.

            • Waco
            • 8 years ago

            Nobody uses hardware RAID for striped arrays. It’s slower.

            “Real” RAID controllers these days for striping are CPUs. Case in point – FusionIO uses software RAID for striping across their IoDrive Octals. I’ve played with them. There’s no hardware RAID controller in existence that can even hope to keep up with them.

            • JustAnEngineer
            • 8 years ago

            Processors on RAID host adapters are mostly there to handle parity calculations. In the case of a RAID-0 stripe or a RAID-1 mirror, there are no parity calculations, so an embedded processor wouldn’t have much to do.

            If you were running RAID5, you could see an improvement in write throughput with a high-end RAID card versus something handled by the CPU at the driver or OS level.

        • Compton
        • 8 years ago

        It almost looks like you can get the same performance from a 256GB 830.

        And anyway, real enterprise-y drives are typically optimized for higher queue depths — this much is true. But perhaps more importantly, they’re optimized to work well in steady state. Even with TRIM (or UNMAP) the RD would struggle under anything approaching steady state.

        So however performance is out of the box, it just could not handle enterprise-style loads. It’s really more of a workstation product, but I can’t imagine a workstation user liking the performance a couple of weeks from now when speeds are 40% lower overall.

          • indeego
          • 8 years ago

          It is an incomplete workstation product. It isn’t server ready. Anybody that built this for either needs to be fired.

            • 5150
            • 8 years ago

            I shudder to imagine the poor sap putting any kind of Sandforce drives in their servers right now. To be honest, I’m nervous about putting any kind of SSD in a server despite my good luck putting them in every desktop and laptop I purchase.

            • Waco
            • 8 years ago

            SSDs are in production servers around the world. They tend to be just as reliable if not more reliable than disks at the enterprise level.

      • Krogoth
      • 8 years ago

      Not really,

      It is just a RAID 0 of four SandForce SSDs slapped onto a PCIe card. The bottleneck is the flash chips’ write speed, which shows in the benchmark suite, and the arrangement also suffers from the overhead involved in RAID.

      This card is only good if your workload demands ultra-high reading I/O throughput (datacenters, servers).
