Western Digital’s Caviar Green 3TB hard drive

Manufacturer Western Digital
Model Caviar Green 3TB
Price (street)
Availability Now

There are two kinds of data in this world: that which we need to access quickly and that which mostly just needs to be stored. In the PC world, the former is typically made up of files associated with one’s applications, games, and operating system. You want that data on the fastest drive possible—ideally, a solid-state disk better equipped than mechanical storage to handle the random access patterns commonly associated with OS and application files.

If SSDs were cheap, we’d be using them to store everything. However, flash memory remains an exceedingly expensive proposition next to capacious platters that cost a couple of orders of magnitude less per gigabyte. When you need a lot of capacity for data that mostly just needs to be stored, be it gigabytes of RAW family photos, an MP3 collection fueled by years of rampant piracy, or the complete collection of Sex and the City episodes you ostensibly downloaded for your better half but have yet to delete, only a mechanical hard drive will do. Folks who have already upgraded to an SSD for their OS and application files will want to seek out one of a new breed of low-power hard drives spawned by Western Digital’s Caviar GP, which went on to become the Caviar Green.

Introduced more than three years ago, the Caviar GP compromised performance in the name of reduced power consumption and noise levels, while delivering what was at the time a generous terabyte of capacity. SSDs were an even more indulgent luxury back then, but the soon-to-be Caviar Green became an attractive option for folks building home-theater PCs and those in need of quiet, power-efficient secondary storage for their desktops. A new class of hard drive was born, and today, Hitachi, Samsung, and Seagate all have their own spin on the Green recipe.

However, none of them have a drive that can match the latest Green’s three-terabyte capacity. Although Seagate was first to reach the 3TB threshold with an external drive released this summer, the hard disk that lives within isn’t being offered as a bare Barracuda. Western Digital announced its own 3TB external drive earlier this month, and already, an internal version has become available.

Breaking new ground on the capacity front is nothing new for the Caviar Green family. Back in February of last year, it became the first drive line to reach the 2TB mark. Jumping from two to three terabytes over a year and a half puts one more nail in the coffin of Kryder’s law, which predicted a doubling of areal density, and thus hard drive capacity, every year. Western Digital seems intent on being secretive about the areal density of the new platters that fuel its 3TB monster. However, it has confirmed that each one packs 750GB—a 50% increase over the 500GB platters that reside in 2TB Caviars. Those last-gen platters have an areal density of 400 Gb/in², so the new ones are probably squeezing at least 600 gigabits into every square inch of platter surface area.
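As a back-of-the-envelope check on that estimate, here's the arithmetic in a few lines, assuming per-platter capacity scales roughly linearly with areal density:

```python
# Back-of-the-envelope areal-density estimate, assuming per-platter
# capacity scales roughly linearly with areal density.
OLD_PLATTER_GB = 500           # 2TB Caviar Green platters
NEW_PLATTER_GB = 750           # 3TB Caviar Green platters
OLD_DENSITY_GB_SQIN = 400      # Gb/in^2 quoted for the last-gen platters

capacity_ratio = NEW_PLATTER_GB / OLD_PLATTER_GB   # 1.5x
est_new_density = OLD_DENSITY_GB_SQIN * capacity_ratio

print(f"capacity ratio: {capacity_ratio:.2f}x")
print(f"estimated new areal density: {est_new_density:.0f} Gb/in^2")
```

The linear-scaling assumption is a simplification, which is why the text hedges at "at least 600" rather than pinning down an exact figure.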

With 750GB apiece, only four platters are needed to hit three terabytes. A 2.5TB variant is also on the way, but no three-platter model is planned at 2.25TB.

Interface speed 3Gbps
Max drive transfer rate 110MB/s
Spindle speed 5,400 RPM
Cache size 64MB
Platter size 750GB
Available capacities 2.5, 3TB
Idle acoustics 24 dBA
Seek acoustics 25-29 dBA
Idle power 5.5W
Read/write power 6.0W
Warranty length 3 years

Like previous Caviar Greens, this one rotates its high-tech magnetic turntable at about 5,400 RPM. Western Digital has always been reticent to reveal exact spindle speeds for Caviar Greens, which it says are fine-tuned for each capacity point (or, more likely, each platter config) to hit specific power and acoustic targets. However, the company has conceded that this is essentially a 5,400-RPM drive.

Somewhat surprisingly for a product that doesn’t aspire to better than “solid” performance, the new Green comes equipped with a substantial 64MB of DRAM cache memory. Western Digital isn’t moving the Green line into 6Gbps Serial ATA territory, though. The drive’s 3Gbps interface is more than fast enough considering that the spec sheet quotes a maximum disk transfer rate of 110MB/s. However, I wouldn’t put too much stock into that number given that the same one is listed for every member of the Caviar Green family, including drives with lower-density platters.

Higher areal densities lead to faster sequential transfers because more data pass under the drive head in a given moment as the platter spins. Thus, the Caviar Green 3TB’s 750GB platters should offer higher transfer rates than the 2TB drive’s 500GB discs. The amount of available outer-edge area—the fastest portion of a platter—can also affect transfer rates. Since they’re both four-platter designs, the 2TB and 3TB Caviar Greens should be even in that department.
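The relationship isn't quite linear, though: areal density is the product of linear density (bits per inch along a track) and track density (tracks per inch), and sequential transfers benefit only from the former. A rough sketch, using a hypothetical 2TB Green baseline and assuming the density gain is split evenly between the two:

```python
import math

# Areal density = linear density (bits/inch) x track density (tracks/inch).
# Only linear density speeds up sequential transfers, so if a density gain
# is split evenly between the two, throughput grows with the square root
# of the areal ratio.
areal_ratio = 750 / 500                 # 3TB vs. 2TB Green platters
linear_ratio = math.sqrt(areal_ratio)   # ~1.22x

base_read_mbps = 95                     # hypothetical 2TB Green average
print(f"expected sequential gain: {(linear_ratio - 1) * 100:.0f}%")
print(f"projected 3TB average: {base_read_mbps * linear_ratio:.0f} MB/s")
```

The even-split assumption is just that; drive makers are free to push linear and track density by different amounts from one generation to the next.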

Of course, performance is hardly the Caviar Green’s raison d’être. This drive is all about quiet, power-efficient storage, and its specs are certainly impressive on that front. We’ll test noise levels and power consumption a little later in the review to see how the new Green measures up to the competition.

Like much of that competition, the Caviar Green 3TB is covered by a three-year warranty. That’s sufficient, I suppose, but the five-year warranties attached to premium 7,200-RPM hard drives would be a welcome addition to the Caviar Green line. Despite the fact that longer warranty coverage doesn’t guarantee a lower failure rate, it would be nice to have the extra coverage for the added peace of mind, however irrational. 3TB is a heck of a lot of data to lose, and reliability is one thing we can’t test without delaying this review well beyond the point of irrelevance.

Complications on the road to 3TB

The Caviar Green 3TB uses Advanced Format to make more efficient use of its available capacity. Rather than breaking platters up into 512-byte blocks, Advanced Format relies on 4KB sectors that waste less storage capacity on overhead. Advanced Format is new enough to create compatibility issues with some software, which is why the Caviar Green uses 4KB sectors internally but presents itself as a drive with 512-byte sectors thanks to an emulation scheme WD dubs 512e.

Using 512e is all well and good on drives with capacities less than 2.19TB, but you run into problems with anything larger because Master Boot Record partition tables can only address up to 2³² blocks. With 512-byte sectors, that adds up to a maximum capacity of 2.19TB, or considerably less than the 3TB offered by the new Caviar Green. The storage industry’s answer to the MBR’s addressing limitation is the GUID Partition Table, or GPT, which can address up to 2⁶⁴ sectors. Windows XP doesn’t work with GPT partitions, so Western Digital isn’t supporting the drive under that OS, although it notes that users may be able to find workarounds using third-party controllers and drivers.
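The arithmetic behind that 2.19TB ceiling is easy to verify:

```python
# Why MBR tops out at 2.19TB: its partition tables hold 32-bit LBA
# fields, so with 512-byte sectors the addressable span is 2^32 * 512.
SECTOR_BYTES = 512
mbr_max_bytes = 2**32 * SECTOR_BYTES
print(f"MBR limit: {mbr_max_bytes} bytes = {mbr_max_bytes / 1e12:.2f} TB")

# GPT uses 64-bit LBAs instead, which is effectively unlimited today.
gpt_max_bytes = 2**64 * SECTOR_BYTES
print(f"GPT limit: {gpt_max_bytes / 1e21:.1f} ZB")
```

Note that the ceiling is a property of the 512-byte addressing scheme, which is exactly why 512e emulation alone can't get a 3TB drive past it.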

There are issues for users running Windows 7 and Vista, as well. Both support GPT and will detect a full 3TB of capacity when the Green is run as secondary storage. However, if you want to use the Caviar as a boot drive, you’ll need a 64-bit version of either OS and a motherboard with a Unified Extensible Firmware Interface (UEFI) BIOS. Motherboards equipped with UEFI BIOSes are few and far between, so Western Digital is shipping this Caviar with a HighPoint RocketRAID 62X Serial ATA card with a PCI Express x1 interface. Folks with motherboards that lack UEFI BIOSes will be able to boot off the drive if it’s connected to the HighPoint card, but they’ll still need to be running a 64-bit version of Vista or Windows 7 to exploit all three terabytes.

Interestingly, the Caviar Green had no problem booting into Windows 7 x64 when connected to our test system’s P55 storage controller. However, this system’s motherboard doesn’t have a UEFI BIOS, so we couldn’t tap the drive’s full capacity. 746GB was inaccessible even with the drive converted to GPT, although we didn’t run into any issues getting at all 3TB of capacity with the Green installed as a secondary hard drive.

That was with the Microsoft AHCI drivers built into Windows 7. We had more trouble with Intel’s latest RST storage controller drivers, which came out way back in March and are apparently unprepared to cross the 2.19TB threshold. When running the Caviar Green as a boot drive, we could only see 746GB of storage capacity, presumably from the portion of the disk beyond the 2.19TB mark. Even worse, that same 746GB was all that was available when running the Green as secondary storage! Intel is aware of the issue and has committed to address it with an updated RST driver that will be released this quarter. However, you’ll still need a motherboard with a UEFI BIOS or a compatible auxiliary storage controller to boot off the Green’s full 3TB capacity.

Or you need a Mac. The Cult of Jobs can rejoice knowing that Apple’s Intel-based systems have UEFI BIOSes and that OS X 10.5 and 10.6 both support GPT partitions. Those folks should be able to plug in the 3TB Green and use it however they wish.

Watch for motherboard makers to pounce on what looks like an opportunity to differentiate their boards with gaudy “3TB-ready” stickers. Asus has already developed an application that creates a virtual drive to give users access to the Green’s full capacity, even under Windows XP. Old XP licenses are great for closet file servers, which seem like a natural home for the first 3TB Caviar.

Our testing methods

Before dipping into pages of benchmark graphs, let’s set the stage with a quick look at the other players we’ve assembled for comparative reference. We’ve called up a wide range of competitors, including a selection of desktop hard drives, traditional notebook drives, Seagate’s Momentus XT hybrid, and a stack of pure solid-state goodness. Below is a chart highlighting some of the key attributes of the contenders we’ve lined up to face the Caviar Green 3TB.

Flash controller Interface speed Spindle speed Cache size Platter capacity Total capacity
Corsair Force F100 SandForce SF-1200 3Gbps NA NA NA 100GB
Corsair Force F120 SandForce SF-1200 3Gbps NA NA NA 120GB
Corsair Nova V128 Indilinx Barefoot ECO 3Gbps NA 64MB NA 128GB
Crucial RealSSD C300 Marvell 88SS9174 6Gbps NA 256MB NA 256GB
Hitachi Deskstar 7K1000.C NA 3Gbps 7,200 RPM 32MB 500GB 1TB
Intel X25-M G2 Intel PC29AS21BA0 3Gbps NA 32MB NA 160GB
Intel X25-V Intel PC29AS21BA0 3Gbps NA 32MB NA 40GB
Kingston SSDNow V+ Toshiba T6UG1XBG 3Gbps NA 128MB NA 128GB
OCZ Agility 2 SandForce SF-1200 3Gbps NA NA NA 100GB
OCZ Vertex 2 SandForce SF-1200 3Gbps NA NA NA 100GB
Plextor PX-128M1S Marvell 88SSE8014 3Gbps NA 128MB NA 128GB
Samsung Spinpoint F3 NA 3Gbps 7,200 RPM 32MB 500GB 1TB
Seagate Barracuda 7200.12 NA 3Gbps 7,200 RPM 32MB 500GB 1TB
Seagate Barracuda LP NA 3Gbps 5,900 RPM 32MB 500GB 2TB
Seagate Barracuda XT NA 6Gbps 7,200 RPM 64MB 500GB 2TB
Seagate Momentus 7200.4 NA 3Gbps 7,200 RPM 16MB 250GB 500GB
Seagate Momentus XT NA 3Gbps 7,200 RPM 32MB 250GB 500GB
WD Caviar Black 1TB NA 6Gbps 7,200 RPM 64MB 500GB 1TB
WD Caviar Black 2TB NA 3Gbps 7,200 RPM 64MB 500GB 2TB
WD Caviar Green 2TB NA 3Gbps 5,400 RPM 32MB 500GB 2TB
WD Caviar Green 3TB NA 3Gbps 5,400 RPM 64MB 750GB 3TB
WD Scorpio Black NA 3Gbps NA 16MB 160GB 320GB
WD Scorpio Blue NA 3Gbps 5,400 RPM 8MB 375GB 750GB
WD SiliconEdge Blue JMicron JMF612 3Gbps NA 64MB NA 256GB
WD VelociRaptor VR150M NA 3Gbps 10,000 RPM 16MB 150GB 300GB
WD VelociRaptor VR200M NA 3Gbps 10,000 RPM 32MB 200GB 600GB

Obviously, the SSD and mobile hard drive results won’t be as relevant to our discussion of the new Caviar Green. You’ll want to pay particular attention to how the Green compares to its 2TB predecessor and Seagate’s low-power Barracuda LP. The LP tops out at 2TB, which is as big as you can get Seagate’s internal hard drives at the moment.

On the SSD front, we’ve collected all the other relevant players, including drives based on Indilinx, Intel, JMicron, Marvell, SandForce, and Toshiba controllers. Although it might not seem like a fair fight, we’ve also thrown in results for a striped RAID 0 array built using a pair of Intel’s X25-V SSDs. The X25-V only runs a little more than $100 online, making multi-drive RAID arrays affordable enough to be tempting for desktop users. Our X25-V array was configured using Intel’s P55 storage controller, the default 128KB stripe size, and the company’s latest Rapid Storage Technology drivers.

The block-rewrite penalty inherent to SSDs and the TRIM command designed to offset it both complicate our testing somewhat, so I should explain our SSD testing methods in greater detail. Before testing the drives, each was returned to a factory-fresh state with a secure erase, which empties all the flash pages on a drive. Next, we fired up HD Tune and ran full-disk read and write speed tests. The TRIM command requires that drives have a file system in place, but since HD Tune requires an unpartitioned drive, TRIM won’t be a factor in those tests.

After HD Tune, we partitioned the drives and kicked off our usual IOMeter scripts, which are now aligned to 4KB sectors. When running on a partitioned drive, IOMeter first fills it with a single file, firmly putting SSDs into a used state in which all of their flash pages have been occupied. We deleted that file before moving on to our file copy tests, after which we restored an image to each drive for some application testing. Incidentally, creating and deleting IOMeter’s full-disk file and the associated partition didn’t affect HD Tune transfer rates or access times.

Our methods should ensure that each SSD is tested on an even, used-state playing field. However, differences in how eagerly an SSD elects to erase trimmed flash pages could affect performance in our tests and in the real world. Testing drives in a used state may put the TRIM-less Plextor SSD at a disadvantage, but I’m not inclined to indulge the drive just because it’s using a dated controller chip.

To make our massive collection of results a little easier to interpret, we’ve colored our bar charts by drive type. This color coding separates the SSDs from 2.5″ and 3.5″ mechanical drives and marks low-RPM versions of the latter, allowing the Caviar Green to stand out from the crowd, at least visually.

Most of our tests run on drives connected as secondary storage, so we were able to use the Caviar Green’s full 3TB with our test system’s default configuration, which uses the Microsoft AHCI drivers built into Windows 7. For the few tests that required booting off the Green, we elected to stick with the same config, since moving to the HighPoint card would’ve made the results less comparable—we’d be switching storage controllers, as well. The impact of running the Green at slightly less than full capacity should be negligible considering that our boot and system partition only amounts to 100GB, most of which is unused.

With few exceptions, all tests were run at least three times, and we reported the median of the scores produced. We used the following system configuration for testing:

Processor Intel Core i5-750 2.66GHz
Motherboard Gigabyte GA-P55A-UD7
BIOS revision F4
Chipset Intel P55 Express
Chipset drivers INF update
Storage controller drivers Microsoft AHCI 6.1.7600.16385
Memory size 4GB (2 DIMMs)
Memory type OCZ Platinum DDR3-1333 at 1333MHz
Memory timings 7-7-7-20-1T
Audio Realtek ALC889A with 2.42 drivers
Graphics Gigabyte Radeon HD 4850 1GB with Catalyst 10.2 drivers
Hard drives Western Digital VelociRaptor VR200M 600GB
Western Digital Caviar Black 2TB
Western Digital VelociRaptor VR150M 300GB
Corsair Nova V128 128GB with 1.0 firmware
Intel X25-M G2 160GB with 02HD firmware
Intel X25-V 40GB with 02HD firmware
Kingston SSDNow V+ 128GB with AGYA0201 firmware
Plextor PX-128M1S 128GB with 1.0 firmware
Western Digital SiliconEdge Blue 256GB with 5.12 firmware
OCZ Agility 2 100GB with 1.0 firmware
OCZ Vertex 2 100GB with 1.0 firmware
Corsair Force F100 100GB with 0.2 firmware
Crucial RealSSD C300 256GB with 0002 firmware
Western Digital Scorpio Black 320GB
Western Digital Scorpio Blue 750GB
Seagate Momentus 7200.4 500GB
Seagate Momentus XT 500GB
Corsair Force F120 120GB with 30CA13F0 firmware
Hitachi Deskstar 7K1000.C 1TB
WD Caviar Black 1TB
Samsung Spinpoint F3 1TB
Seagate Barracuda 7200.12 1TB
Seagate Barracuda LP 2TB
Seagate Barracuda XT 2TB
WD Caviar Green 2TB
WD Caviar Green 3TB
Power supply OCZ Z-Series 550W
OS Windows 7 Ultimate x64

You can read more about the hardware that makes up our twin storage test systems on this page of our VelociRaptor VR200M review. Thanks to Gigabyte for providing the twins’ motherboards and graphics cards, OCZ for the memory and PSUs, Western Digital for the system drives, and Thermaltake for SpinQ heatsinks that keep the Core i5s cool.

We used the following versions of our test applications:

The test systems’ Windows desktop was set at 1280×1024 in 32-bit color at a 75Hz screen refresh rate. Vertical refresh sync (vsync) was disabled for all tests.

Most of the tests and methods we employed are publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

HD Tune

We’ll kick things off with HD Tune, our synthetic benchmark of choice. Although not necessarily representative of real-world workloads, HD Tune’s targeted tests give us a glimpse of a drive’s raw capabilities. From there, we can explore whether the drives live up to their potential.

I’ve removed the SSDs from the line graphs because the data is too densely packed to be readable. Plus, Excel really doesn’t have enough colors. If you’d like an idea of how the SSD transfer-rate profiles look in comparison, check out this page of our 7,200-RPM terabyte round-up.

First, a quick primer on color coding for the bar graphs. The Caviar Green 3TB is, as one might expect, green. Low-power desktop drives are dressed in a more evergreen hue, while blue highlights the pack of 7,200-RPM desktop drives and 10k-RPM VelociRaptors. We’ve greyed out the results for the 2.5″ crowd, with separate shades for SSDs and mechanical notebook models.

Let this first graph set your expectations for the rest of our performance results. The Caviar Green isn’t particularly quick even in the realm of mechanical hard drives, which means it’s going to be quite a bit slower than the solid-state offerings that dominate the top of the standings.

The 3TB Green’s sustained read rate is, however, much faster than the 2TB model’s—by 18MB/s if we look at the averages. Seagate’s Barracuda LP is quicker than the 2TB Green but not fast enough to catch the 3TB model.

If you look at the line graph, you’ll notice that the ‘cuda’s read rates level and then plummet in a stair-step fashion across the length of the disk. The Caviar Green’s reads follow a similar profile, but the steps are less pronounced, particularly through the middle of the drive’s capacity.

Our write-speed results play out much like the reads. The Green is once again quicker than its 2TB cousin and the Barracuda LP, and this time it even beats Intel’s flagship X25-M SSD. Don’t get too excited, though. The X25-M line is notorious for its comparatively slow write rates.

Next up: some burst-rate tests that should test the cache speed of each drive. We’ve omitted the X25-V RAID array from the following results because it uses a slice of system memory as a drive cache.

The Green’s slow spindle speed doesn’t hamper short burst transfers to and from the drive’s hefty 64MB cache. While certainly not the quickest drive of the bunch, the 3TB Caviar finds itself nicely in the middle of the pack. It’s ahead of the 2TB Green but 12MB/s short of the Barracuda LP.

Our HD Tune tests conclude with a look at random access times, which the app separates into 512-byte, 4KB, 64KB, and 1MB transfer sizes. Let’s start with reads.

Betcha didn’t expect that. No, not the SSDs dominating. I’m talking about the 3TB Green edging out a couple of 7,200-RPM Seagates in random reads, at least up to the 64KB transfer size. All the drives slow considerably with 1MB random reads, but the 3TB Green still edges out the 2TB model and ties with the Barracuda LP.

The Caviar isn’t nearly as competitive with writes, where it’s well behind all of the 3.5″ desktop offerings. Only the Scorpio Blue, a 2.5″ notebook drive with a similar 5,400-RPM spindle speed, has slower access times with random writes. This is why you probably don’t want the Caviar Green as your system drive, at least on a desktop PC.

File Copy Test

Since we’ve tested theoretical transfer rates, it’s only fitting that we follow up with a look at how each drive handles a more typical set of sequential transfers. File Copy Test is a pseudo-real-world benchmark that times how long it takes to create, read, and copy files in various test patterns. We’ve converted those completion times to MB/s to make the results easier to interpret.
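That conversion is simple division, sketched here with a hypothetical completion time:

```python
# Converting an FC-Test completion time to throughput. The elapsed
# time here is hypothetical, for illustration only; the pattern size
# matches our roughly 10GB test patterns.
pattern_mb = 10 * 1024        # ~10GB test pattern, in MB
elapsed_s = 120.0             # hypothetical completion time in seconds

throughput = pattern_mb / elapsed_s
print(f"{throughput:.1f} MB/s")
```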

Windows 7’s intelligent caching schemes make obtaining consistent and repeatable performance results rather difficult with FC-Test. To get reliable results, we had to drop back to an older 0.3 revision of the application and create our own custom test patterns. During our initial testing, we noticed that larger test patterns tended to generate more consistent file creation, read, and copy times. That makes sense, because with 4GB of system memory, our test rig has plenty of free RAM available to be filled by Windows 7’s caching and pre-fetching mojo.

For our tests, we created custom MP3, video, and program files test patterns weighing in at roughly 10GB each. The MP3 test pattern was created from a chunk of my own archive of ultra-high-quality MP3s, while the video test pattern was built from a mix of video files ranging from 360MB to 1.4GB in size. The program files test pattern was derived from, you guessed it, the contents of our test system’s Program Files directory.

Even with these changes, we noticed obviously erroneous results pop up every so often. Additional test runs were performed to replace those scores.

Given its strong sustained write rates in HD Tune, I expected better from the 3TB Green in these file creation tests. The Caviar is particularly sluggish with the MP3 file set, but it’s more competitive with the other two, at least when compared with the 2TB Green. That said, the Barracuda LP has notably faster file creation speeds across all three file sets.

Switching to reads gives the Seagate drives fits with the MP3 and program file sets. The 3TB Green doesn’t have any issues with read performance, and it maintains a healthy lead over its 2TB counterpart throughout. Notice that the latest Caviar is only a little bit slower than the last-gen VelociRaptor, which has a blistering 10k-RPM spindle speed.

These copy tests combine read and write operations, so it’s not surprising to see the 3TB Caviar faltering with the MP3 file set. The drive otherwise manages to best its 2TB predecessor, although it’s still slower than the Barracuda LP across the board.

File copy speed

Although FC-Test does a good job of highlighting how quickly drives read, write, and copy different types of files, the app is antiquated enough to completely ignore the command queuing logic built into modern hard drives and SSDs. FC-Test only uses a queue depth of one, while Native Command Queuing can stack up to 32 I/O requests when asked. To get a better sense of how these drives react when moving files around in Windows 7, we performed a set of hand-timed copy tests with 7GB worth of documents, digital pictures, MP3s, movies, and program files. These files were copied from the drive to itself to eliminate any other bottlenecks.

We run this test on SSDs in a factory fresh and simulated used state since there are often performance differences between those two conditions. To put our SSDs into a simulated used state, I run our IOMeter workstation access pattern with 256 concurrent I/O requests for 30 minutes before launching into a second batch of copy tests.

IOMeter creates a massive test file that spans the entirety of a drive’s capacity, and deleting it to make room for a batch of copy tests nicely puts solid-state disks into a tortured used state. What we’ve essentially done here is filled all of an SSD’s flash pages, subjected the drive to a punishing workload with a highly-randomized access pattern, and then marked all of the flash pages as available to be reclaimed by garbage-collection or wear-leveling routines.

Mechanical hard drives aren’t subject to the block-rewrite penalty that causes SSD performance degradation as flash pages become occupied, so there’s no difference between their fresh- and used-state performance. We’ve double-checked to be sure. To avoid confusing the issue, we’ve omitted the fresh-state copy speeds of the SSDs in the graph below.

This real-world file copy test shows the Caviar Green 3TB in a more positive light than FC-Test. Here, the Green is just a smidgen ahead of our low-power 2TB drives. The 7,200-RPM desktop models are quite a bit quicker, of course, with the Spinpoint F3 boasting a copy speed 67% faster than the new Caviar’s.

Application performance

We’ve long used WorldBench to test performance across a broad range of common desktop applications. The problem is that few of those tests are bound by storage subsystem performance—a faster hard drive isn’t going to improve your web browsing or 3ds Max rendering speeds. A few of WorldBench’s component tests have shown favor to faster hard drives in the past, though, so we’ve included them here.

Just because the 3TB Green offers largely equivalent performance to the others in most of our application tests doesn’t make it a good app drive. As the Nero test nicely illustrates, the Green is substantially slower than its 3.5″ counterparts with higher spindle speeds, including the Barracuda LP. The 3TB Green does have a sizable advantage over the 2TB model in that particular test, though.

Boot and load times

Our trusty stopwatch makes a return for some hand-timed boot and load tests. When looking at the boot time results, keep in mind that our system must initialize multiple storage controllers, each of which looks for connected devices, before Windows starts to load. You’ll want to focus on the differences between boot times rather than the absolute values.

This boot test starts the moment the power button is hit and stops when the mouse cursor turns into a pointer on the Windows 7 desktop. For what it’s worth, I experimented with some boot tests that included launching multiple applications from the startup folder, but those apps wouldn’t load reliably in the same order, making precise timing difficult. We’ll take a look at this scenario from a slightly different angle in a moment.

Once again, the 3TB Green is quicker than its 2TB cousin but slower than the Barracuda LP. The difference between the three only amounts to a couple of seconds, which is far less time than it’ll take to figure out what your system needs to access the Green’s full capacity as a boot drive.

A faster hard drive is not going to improve frame rates in your favorite game (not if you’re running a reasonable amount of memory, anyway), but can it get you into the game quicker?

If you’re worried about games spilling over from your solid-state system drive and into secondary storage, pay attention to these results. The 3TB Green loads our two gaming scenarios several seconds slower than 7,200-RPM desktop drives. Seagate’s 5,900-RPM Barracuda has a lead of nearly five seconds in Modern Warfare 2, but the gap shrinks to less than two seconds in Crysis. Versus the 2TB Green, the 3TB drive saves about a second in each gaming test.

Disk-intensive multitasking

TR DriveBench allows us to record the individual IO requests associated with a Windows session and then play those results back on different drives. We’ve used this app to create a new set of multitasking workloads that should be representative of the sort of disk-intensive scenarios folks face on a regular basis.

Each workload is made up of two components: a disk-intensive background task and a series of foreground tasks. The background task is different for each workload, but we performed the same foreground tasks each time.

In the foreground, we started by loading up multiple pages in Firefox. Next, we opened, saved, and closed small and large documents in Word, spreadsheets in Excel, PDFs in Acrobat, and images in Photoshop. We then fired up Modern Warfare 2 and loaded two special-ops missions, playing each one for three minutes. TweetDeck, the Pidgin instant-messaging app, and AVG Anti-Virus were running throughout.

For background tasks, we used our Firefox compiling test; a file copy made up of a mix of movies, MP3s, and program files; a BitTorrent download pulling seven Linux ISOs from 800 connections at a combined 1.2MB/s; a video transcode converting a high-def 720p over-the-air recording from my home-theater PC to WMV format; and a full-disk AVG virus scan.

DriveBench produces a trace file for each workload that includes all IOs that made up the session. We can then measure performance by using DriveBench to play back each trace file. During playback, any idle time recorded in the original session is ignored—IOs are fed to the disk as fast as it can process them. This approach doesn’t give us a perfect indicator of real-world behavior, but it does illustrate how each drive might perform if it were attached to an infinitely fast system. We know the number of IOs in each workload, and armed with a completion time for each trace playback, we can score drives in IOs per second.

Below, you’ll find an overall average followed by scores for each of our individual workloads. The overall score is an average of the mean performance score in each multitasking workload.
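In other words, scoring boils down to dividing each trace's IO count by its playback time, then averaging across workloads. A minimal sketch with hypothetical counts and times:

```python
from statistics import mean

# DriveBench scoring as described above: traces replay their IOs
# back-to-back, so IOps is simply IO count over playback time.
# The counts and times below are hypothetical.
workloads = {
    "compiling":  (2_500_000, 4000.0),   # (IOs in trace, playback seconds)
    "file copy":  (1_800_000, 3200.0),
    "virus scan": (3_100_000, 7500.0),
}

scores = {name: ios / secs for name, (ios, secs) in workloads.items()}
overall = mean(scores.values())
for name, iops in scores.items():
    print(f"{name}: {iops:.0f} IOps")
print(f"overall: {overall:.0f} IOps")
```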

DriveBench doesn’t produce reliable results with Microsoft’s AHCI driver, forcing us to obtain the following performance results with Intel’s RST drivers. We couldn’t get DriveBench to play nicely with our X25-V RAID config, either, which is why it’s not listed in the graphs below. The app will only run on unpartitioned drives, so we tested drives after they’d completed the rest of the suite.

As we’ve mentioned, Intel’s current RST drivers don’t properly support 3TB hard drives. Instead of detecting the Green’s full capacity, we were limited to the latter 746GB of the drive. That’s still enough capacity to run our DriveBench workloads, but given that the Intel drivers aren’t handling the Green properly, I’m hesitant to draw too many conclusions from the results.

Looking at the overall DriveBench scores, we see the 3TB Caviar just barely ahead of the 2TB model. That’s not a surprise, nor is the fact that the Barracuda LP is out ahead of both. The ‘cuda’s margin of victory here is particularly impressive, though; it’s 20% quicker than the Greens.

Let’s break down the overall average into individual test results to see if anything stands out.

The Caviar’s performance with the virus-scanning workload certainly jumps out, but not in a good way. The Green is particularly slow, finishing dead last in a pack that includes a 5,400-RPM notebook model. Given that drives churn out fewer IOps with the virus-scanning workload than with any other, I suspect it’s our most demanding multitasking test.

Curious to see whether removing the multitasking element of these tests would have any bearing on the standings, I recorded a control trace without a background task.

Nope. Nothing to see here.

DriveBench lets us start recording Windows sessions from the moment the storage driver loads during the boot process. We can use this capability to take another look at boot times, again assuming our infinitely fast system. For this boot test, I configured Windows to launch TweetDeck, Pidgin, AVG, Word, Excel, Acrobat, and Photoshop on startup.

The 3TB Green puts a little more distance between itself and the 2TB model, but both are still slower than the Barracuda LP.


Our IOMeter workloads are made up of randomized access patterns, presenting a good test case for both seek times and command queuing. The app’s ability to bombard drives with an escalating number of concurrent IO requests also does a nice job of simulating the sort of demanding multi-user environments that are common in enterprise applications.

SSDs are orders of magnitude faster than mechanical hard drives in this test, and that makes graphing the results rather challenging. So we didn’t. The graphs below only have results from the mechanical hard drives. If you’d like to see how the SSDs compare, scroll down this page of our four-way 7,200-RPM terabyte comparison.

Western Digital’s newest Caviar Green has mostly been faster than the 2TB model, and that holds true with three of four IOMeter access patterns. With the file server access pattern, the 3TB Green’s transaction rates lag behind those of its predecessor by a decent margin.

Both Caviars outclass the Barracuda LP with the file server, workstation, and database access patterns. In those tests, the ‘cuda’s transaction rates flat-line until we hit 32 outstanding I/O requests—the queue depth for Native Command Queuing—before ramping up aggressively. The LP’s transaction rates scale more linearly with the web server access pattern, which is made up entirely of read requests.

Power consumption

For our power consumption tests, we measured the voltage drop across a 0.1-ohm resistor placed in line with the 5V and 12V lines connected to each drive. We were able to calculate the power draw from each voltage rail and add them together for the total power draw of the drive. Drives were tested while idling and under an IOMeter load consisting of 256 outstanding I/O requests using the workstation access pattern.
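The measurement method above boils down to Ohm's law. Here's a minimal sketch of the calculation, using hypothetical sense-resistor readings rather than our actual measurements:

```python
# A sketch of the power calculation described above. The voltage drop
# across a 0.1-ohm in-line sense resistor gives the current on each
# rail; multiplying by the rail voltage gives that rail's power draw.
# The millivolt readings below are hypothetical.

SENSE_R = 0.1  # ohms

def rail_power(v_drop, rail_voltage):
    """Power drawn from one rail, given the drop across the sense resistor."""
    current = v_drop / SENSE_R     # Ohm's law: I = V / R
    return current * rail_voltage  # P = I * V

# Hypothetical readings: 25 mV drop on the 5V line, 40 mV on the 12V line
total = rail_power(0.025, 5.0) + rail_power(0.040, 12.0)
print(round(total, 2))  # 6.05 (watts, both rails combined)
```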

Finally, the Caviar Green gets to play on its native turf, and it shines, drawing fewer watts under load than any other 3.5″ desktop drive (the VelociRaptors have 2.5″ form factors inside 3.5″ sleds). The Green’s idle power draw is also impressively low, although not quite as sparing as the Barracuda LP or terabyte Deskstar. Given its capacity, the Caviar would look even better on a watts-per-terabyte scale.

Noise levels

Noise levels were measured with a TES-52 Digital Sound Level meter 1″ from the side of the drives at idle and under an HD Tune seek load. Drives were run with the PCB facing up.

Our noise level and power consumption tests were conducted with the drives connected to the motherboard’s P55 storage controller.

I’ve consolidated the solid-state drives here because they’re all completely silent. The SSD noise level depicted below is a reflection of the noise generated by the rest of the test system, which has a passively-cooled graphics card, a very quiet PSU, and a nearly silent CPU cooler.

The 3TB Caviar is far from the quietest mechanical drive of the bunch. However, drives with fewer platters tend to generate less noise than those with more, and the Green is quieter than any other four-platter model. Under a seek load, only a couple of two-platter terabyte drives generate less noise than the 3TB Caviar. The new Green has quieter seek acoustics than the 2TB model, as well.

Most mechanical hard drives have an Automatic Acoustic Management (AAM) value that can be set between 128 and 254. Manipulating this setting tends not to affect idle noise levels, but it can dramatically impact seek noise and access times. To get an idea of the sort of performance and acoustic range available with our collection of mechanical drives, we’ve tested the seek noise level and random access time of each at the extremes of the AAM scale. By default, all of the mechanical drives had AAM disabled or set to 254, which is the most aggressive seek setting. AAM doesn’t appear to work at all on the Barracuda XT.

Fiddling with AAM levels will only cut a decibel from the 3TB Green’s seek noise levels. That’s not enough to get the drive into Spinpoint territory, and my ears were hard pressed to notice the difference from a couple of feet away. Is there much of a performance penalty to the less aggressive AAM setting?

Oh yes. The Green’s seek time jumps by a third when we push the AAM slider back to 128. Even with the drive deployed as secondary storage, I probably wouldn’t bother messing with the default AAM setting.

The value perspective

After spending pages rifling through a stack of performance graphs, it’s time to broaden our horizons a little and take each drive’s price into consideration. First, we’ll look at capacity per dollar.

To establish an even playing field for all the contenders, we’re using Newegg pricing for all the drives. Mail-in rebates weren’t included in our calculations. Rather than relying on manufacturer-claimed capacities, we gauged each drive’s capacity by creating an actual Windows 7 partition and recording the total number of bytes reported by the OS. Having little interest in the GB/GiB debate, I simply took that byte total, divided it by 10^9, and then by the price. The result is a capacity-per-dollar figure that, at least literally, is expressed in gigabytes.
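The calculation is simple enough to sketch in a couple of lines. The byte count and price below are hypothetical stand-ins, not our measured figures:

```python
# Capacity per dollar as described above: bytes reported by the OS,
# divided by 10^9, divided by the street price. The byte total and
# price here are hypothetical examples.

def gigabytes_per_dollar(byte_total, price):
    return (byte_total / 10**9) / price

# e.g. a drive reporting ~3 trillion bytes at a $240 street price
print(round(gigabytes_per_dollar(3_000_592_982_016, 240.0), 2))  # 12.5
```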

In part because it sets a new standard for overall capacity, the Caviar Green 3TB commands a rather hefty price premium compared to other mechanical desktop drives. The drive’s $240 asking price includes the HighPoint controller, of course, but that doesn’t add any gigabytes to the equation. Flagship capacity points have always been a costly proposition, so the Green’s position isn’t unexpected.

Overall performance per dollar is up next, but before we get there, we need to come up with an overall performance score for each drive. Using a single number to represent a drive’s performance across a range of different benchmark tests can be tricky business. After reading through numerous papers on the subject, we’ve settled on calculating a harmonic mean of all the results you’ve seen today. A harmonic mean can be useful for quantifying overall performance for a benchmark suite when individual test results can be compared to a reference baseline, and it’s not prone to being skewed by the fact that we have performance differences of several orders of magnitude in some cases. We just happen to have a full suite of results normalized to a performance baseline provided by an ancient 2.5″, 4,200-RPM IBM Travelstar mobile drive, and as you’ll see in a moment, the harmonic mean generates an overall score that nicely tracks with expectations based on the performance we’ve observed thus far.

I should note that we considered using an arithmetic average to calculate our overall score. However, this simple mean is easily skewed by the enormous performance gaps in IOMeter and HD Tune’s random access time tests, which are several orders of magnitude larger than the performance deltas in the other tests. The resulting overall score doesn’t track with expectations based on the performance we’ve already quantified. Weighting the average to account for those orders-of-magnitude differences would have been arbitrary at best, so we’ve settled on a harmonic mean, which seems to provide useful results.
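The difference between the two means is easy to demonstrate. In this sketch, all of the normalized scores are made up for illustration; the point is how each mean reacts to one orders-of-magnitude outlier like a random access time result:

```python
# Why a harmonic mean: with results normalized to a baseline, a single
# orders-of-magnitude outlier dominates an arithmetic mean but barely
# moves a harmonic mean. All scores below are invented for illustration.

def harmonic_mean(scores):
    return len(scores) / sum(1.0 / s for s in scores)

def arithmetic_mean(scores):
    return sum(scores) / len(scores)

normalized = [2.0, 3.0, 2.5, 3.5, 400.0]  # four typical tests + one outlier

print(round(harmonic_mean(normalized), 2))   # 3.29 -- tracks the typical tests
print(round(arithmetic_mean(normalized), 2)) # 82.2 -- swamped by the outlier
```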

Our overall score includes individual results for DriveBench and IOMeter rather than the averages we presented in the first set of value graphs. There are five DriveBench multitasking loads and four IOMeter access patterns, giving us a total of 19 test results from which to calculate the harmonic mean. This collection of tests is a little biased towards random access patterns rather than sequential transfers, but we think it strikes a good balance for drives that will store a system’s OS and applications. The power-efficiency and noise-level results have been left out to keep this a strictly performance-per-dollar affair.

Because they had to sit out at least one of the tests that make up our overall average, the PX-128M1S and X25-V RAID array haven’t been included in the graphs below. We wouldn’t recommend the former, anyway, and with two drives at its disposal, the RAID config would’ve had an unfair advantage—you know, like it’s had all day already.

Our overall performance index isn’t particularly friendly to the new Caviar Green. The drive’s slow random-write access times and poor performance with DriveBench’s virus-scanning workload certainly don’t help, as the 3TB Green lags behind all other desktop drives, including the 2TB model.

SSDs obviously have a huge advantage when we consider overall performance. However, capacity is an equally important component of any storage device. We’ve divided each drive’s overall performance score by its cost per gigabyte to get a look at overall performance per dollar per gigabyte. Try saying that five times fast.

I’ve omitted SSDs from the scatter plots for the sake of readability. Once more, if you’re curious to see how the solid-state field compares, consult this section of our last hard drive round-up.

With room to spread out, the mechanical drives make for an interesting scatter plot all on their own. The Caviar Green is the slowest of the desktop drives, and much better performance can be had without moving up much on the price axis. In fact, you can step up in performance and spend a fair bit less on both the Caviar Green 2TB and the similarly sized Barracuda LP.

Another way to look at this data is to divide each drive’s performance by the cost of a complete system built around it. The aim here is to determine whether spending a little (or a lot) more makes sense when the price premium is absorbed as part of the cost of a complete system. The step up from a $70 drive to a $95 one is hardly daunting to start, and once you factor in the cost of a complete build, the price difference practically disappears.

For our system price calculations, we’ve used our test rig as the inspiration for a base config, to which the price of each drive will be added. Our example system includes a Core i5-750, a P55-based ASUS P7P55D-E motherboard, 4GB of DDR3-1333 memory, a passively-cooled Radeon HD 4850, Antec’s Sonata III enclosure, and Windows 7. Its base price is $814.94, although you’ll probably want to tack on the cost of secondary mass storage for configurations that will use an SSD.

The Caviar’s relatively high price does it no favors here. Once more, the 3TB Green looks like a relatively poor value, at least from a performance perspective. Of course, you shouldn’t be buying a Caviar Green for its performance.


There’s something to be said for being the first hard drive maker to offer an internal model with three terabytes of total capacity. So, there, I said it. The fact is that arbitrary milestones aren’t that important, and this one in particular is complicated by the compatibility issues that arise when one moves beyond 2.19TB. I’m somewhat surprised WD is going so far as to ship the Green with a HighPoint controller card, but that approach at least ensures that users will be able to tap the Caviar’s full capacity whether it’s being pressed into service as a boot drive or secondary storage.

For me, the 750GB platters inside the Caviar Green 3TB hold the most intrigue. In the past, WD has spun up new platter technology first in the Green line, where kinks can be ironed out at lower speeds, before migrating it to high-performance 7,200-RPM models. I suspect it won’t be long until we see 750GB platters trickle into the Caviar Black family, which could raise the bar of performance for 3.5″ mechanical hard drives.

But the Caviar Green is not a high-performance hard drive, and as a result, it probably looks rather unimpressive after pages of benchmark results. Even against its low-power competition, the Barracuda LP, the Caviar is often a step behind. There’s little wisdom in fussing over minor performance differences for a product category that consciously concedes on spindle speed, though. In its natural habitat, the Caviar Green will typically be tasked with light duties like streaming media, serving as a dumping ground for backups and incoming downloads, and hosting image files for photo editing. Besides, the Caviar is quieter than the Barracuda LP, and it consumes less power under load. Then there’s the extra terabyte of capacity, which is a big deal if you’re working in a small-form-factor enclosure that only has one 3.5″ hard drive bay.

As our performance results plainly illustrate, folks probably shouldn’t be using the Caviar Green 3TB as a system drive in anything but a home-theater PC that doesn’t need to be particularly snappy. This drive is really best suited to secondary mass storage, ideally riding shotgun with a fast solid-state system drive. In that role, the Green’s low power consumption and quiet noise levels mean far more than a few megabytes per second here and there. The drive’s monstrous capacity can pay further power and acoustic dividends if it means you can run fewer drives in a system.

In the end, the Caviar Green 3TB is well-suited to home-theater PCs and simple storage, but it’s not the best value for either. That honor belongs to lower-capacity notches on the mechanical storage ladder that are unburdened by the premium affixed to new high-water marks in hard drive capacity.

Comments closed
    • swaaye
    • 12 years ago

    Oh yeah, the 128GB limit. And the 8.4GB limit. And the 528MB limit. And that’s obviously not all of the barriers that we’ve run into. 😀 How many times can we not plan well enough for the future!

    It’s true that the Intel 810 had UDMA66, but like you said, nobody wanted it. It was a decent chipset for cheap PCs running Celerons, but who really wanted a motherboard without AGP for a high-end box? That was an annoying time of Intel trying to meddle with the future by forcing Rambus junk down our throats and crippling the alternatives. VIA was an option, but their chipsets were horrible back then.

    • Bensam123
    • 12 years ago

    Derp… most people don’t know how or don’t want to use Unix, so yes, it’s good for closet file servers.

    • Firestarter
    • 12 years ago

    I agree that it’s preferable to do critical system updates without rebooting, but for the home user it’s a non-issue. A Windows XP box may not be as easy to set-and-forget as some Linux distros are, but at least the ones who have to occasionally touch it know where the various buttons are and what they do.

    • Next9
    • 12 years ago

    Users of Microsoft operating systems have to deal with “problems”. Microsoft has just convinced them that “it is normal”. For example, it is NOT NORMAL to reboot each time you install some system application or system update. Just because users are used to dealing with those problems, and just because they are brainwashed into thinking it is “normal behavior”, does not mean they do not have to deal with problems. They are just satisfied that this system somehow works (if they comply with some bizarre rules) for, let’s say, two years, until it becomes bloated and needs to be reinstalled.

    It is hard to explain to those users that better solutions may exist, and that they don’t have to do all those ugly things (which are said to be “normal”) just to make their computer work.

    • Bauxite
    • 12 years ago

    ZFS and IPv6 use 2^128, which is equivalent to a separate 64bit space for every 64 bit address. They should handle any “earthly” limits and a good number of solar systems nearby to boot.

    So after half a dozen or so iterations of various systems, it looks like we are finally learning our lesson and planning farther into the future than a few years or decades: no more of the shortsightedness of 2-digit years, 8-bit this and 16-bit that.

    32-bit barely lasted two decades; computer RAM has already blown past it, and the internet is about to.

    GPT limits might not last through our lifetime either.

    • yuhong
    • 12 years ago

    Well, you forget about how the first 160GB hard drives shipped with the Promise Ultra133. BTW, the ICH had UDMA 66 support but the transition from the 440BX to the i8xx chipsets was proving difficult (remember Rambus and the i820 chipset?). By the time of the release of the i815, the ICH2 was released at the same time offering UDMA 100 support.

    • Vhalidictes
    • 12 years ago

    You’re not an idiot. You’re fighting with a very specific problem while using XP, that 99% of users don’t have to deal with.

    This may be why many people consider Windows XP to be very stable and “good enough” for many things – except whatever bizarre COM-port scanning your application is doing, apparently.

    • Lans
    • 12 years ago

    Microsoft’s page on PAE says chipset also needs to support 36-bit addressing.

    Most code compiled nowadays has virtual segments for code, stack, heap, etc. (something like a relocation table). So by “segment+offset”, I thought you meant having a different upper 4 bits for code, stack, heap, etc… This doesn’t change the fact that you can’t use more than 4 GiB of virtual space in 32-bit applications (unless pointers are more than 32-bit), but it does allow multiple running 32-bit applications to actually make use of more than 4 GiB of physical memory.

    While I am at it, for disks, the current 48-bit LBA scheme supports 128 PiB, but most BIOSes need to find an MBR in LBA 0 to actually boot the OS. This only means you can’t place your boot loader above the 4 GiB limit (otherwise you’ll fail to boot the OS), but that is where the BIOS limit ends, as Next9 explained.

    EDIT: failed reply to #52… oh well…

    • muyuubyou
    • 12 years ago

    Actually, in 32-bit Windows the limit is 3GB, not 4GB. Most other 32-bit OSes allow 4GB (but the real limit is 64GB; I explain in the next paragraph if you’re really, really bored).

    Physically, all i686-compatible 32-bit computers (Pentium Pro and later, IIRC) can address a 36-bit space (64 GB of RAM). However, the registers are 32-bit, so if you want to address something above the 4GB line, you’d have to implement some segment+offset scheme like you had to in IBM PC-XT days (or on the PC-AT in real mode; in protected mode you had 24-bit addressing => 16MB). I reckon you’d have to do it in assembler, since no compilers seem to support it, and it would take a bunch of cycles more than direct addressing.
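The address-space sizes quoted in this thread are easy to sanity-check with a few lines of arithmetic:

```python
# Sanity-checking the addressing limits mentioned above.
MiB, GiB = 2**20, 2**30

print(2**20 // MiB)  # 1   -> 20-bit real-mode addressing: 1 MiB
print(2**24 // MiB)  # 16  -> 24-bit protected mode (PC-AT era): 16 MiB
print(2**32 // GiB)  # 4   -> flat 32-bit registers: 4 GiB
print(2**36 // GiB)  # 64  -> 36-bit physical space (PAE era): 64 GiB
```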

    Thankfully we’ve just jumped to a flat 64-bit space, a lot like Motorola did with their epic 68000, which had a 32-bit flat address space (Amiga, Atari ST, Apple Lisa, Apple Mac), while in the Intel world it was necessary to use 16-bit “segments” shifted by 4 bits plus 16-bit “offsets” to address a whopping 1MB of RAM maximum.

    • Lans
    • 12 years ago

    You sound like you really know this. 🙂 Do you know for certain Linux would boot and work fine with a single gigantic partition (presumably over 2 TiB)?

    I know Linux setups generally have /boot at the beginning of the disk, but I can’t seem to find any hard requirement for that vs. having just a single gigantic partition (generally not recommended, but I don’t think recent kernels would choke). I think it really boils down to “just in case” of needing to clone the OS onto older hardware/disks, performance (files at the beginning of the disk can generally be accessed more quickly), and maybe old bugs.

    • DrCR
    • 12 years ago

    GRUB FTW. I couldn’t do what I do without it — I dynamically hide partitions depending on boot selection. That sounds like a better solution i.e. it can do what I want, not “middleware,” to me…

    • Kaleid
    • 12 years ago

    People used to say the same thing about 640GB

    • Next9
    • 12 years ago

    It is not a theory. You can try it yourself.

    Just because of crippled BIOS implementations, you can run into various additional problems. Last month I obtained an HP laptop with BIOS/UEFI onboard, and this miracle would even fail to POST if a USB pendrive formatted with a filesystem other than FAT was connected. It would just freeze with a black screen. I can only guess why that genius BIOS was checking the USB drive’s filesystem before POST. 🙂

    • yuhong
    • 12 years ago

    I think it is by increasing the sector size presented to the host computer and translating it to the sector size used by the target drive. This would render the format incompatible with the format used when the target drive is installed internally, so expect enclosures to provide a jumper option to enable or disable this translation.
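The arithmetic behind that translation trick is straightforward (a sketch; actual enclosure firmware varies):

```python
# The MBR's 32-bit sector count scales with the sector size the
# enclosure presents to the host, which is how a bridge chip can push
# a drive past the 2.19TB mark on legacy systems.

def mbr_addressable_bytes(sector_size):
    return sector_size * 2**32  # 32-bit LBA count in the MBR

TiB = 2**40
print(mbr_addressable_bytes(512) // TiB)   # 2  -> native 512-byte sectors
print(mbr_addressable_bytes(4096) // TiB)  # 16 -> 4KB sectors presented to host
```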

    • yuhong
    • 12 years ago

    In theory. In practice, some BIOSes do try to parse the partition table. The Protective MBR helps a lot, but even that doesn’t always work.

    • indeego
    • 12 years ago

    Yikes. I would not trust this. Creates a “virtual drive” that the OS may or may not support at a future date, and could possibly break. It probably can’t be seen outside of Windows, and uninstalling the software removes the virtual drive. Rather riskier than just not using this drive as a boot device for the time being.

    • glynor
    • 12 years ago

    Exactly. We’re actually using an enterprise-class SAN, of course. And, no, I’m not at a school. (But of course, it is VERY useful to have 3TB drives in the SAN, so capacity increases are always appreciated, even if you wouldn’t use these particular drives in our system.)

    Even considering that, you need to store that data locally somewhere until it can be moved to the SAN (because even these “slow” drives are way faster than network SAN access). These high-capacity WD Green drives are perfect for those uses. And then it is nice to be able to stick one in a SATA docking station and mirror the data over to a drive so that you can take it home and continue working on your project from there.

    But my real point was that saying “no one needs” is stupid and narrow minded.

    • Suspenders
    • 12 years ago

    Let them keep going, it’s a fascinating discussion 😉

    • Next9
    • 12 years ago

    Since it is the bootloader – the MS-DOS bootloader – that looks at the MS-DOS partition table, the obvious solution is to change the bootloader, not the BIOS. I think that is clear and simple.

    GRUB is not middleware or a workaround. It is just an equivalent to the MS-DOS bootloader, but much more powerful.

    I do not insist on the BIOS. In fact I hate it. But I cannot be excited about UEFI as a replacement, because it is the same piece of shit. You replace a wrong solution (BIOS) with another wrong solution (UEFI). I have never needed an exact boot partition with an exact filesystem. Now I need an EFI system partition using FAT! It is nonsense to compare a firmware chip (small capacity and hardly accessible) with a 100MB FAT partition on your drive. It is much easier to corrupt such a partition, and I cannot even understand why the worst filesystem ever invented is used for this!

    And… what if one day GPT becomes technically inadequate and obsolete – like the MS-DOS partition table today? What would we do then? And what if some operating systems wanted to use a different, better partitioning scheme, but the majority of the Microsoft-occupied world insisted on GPT? There would be no solution then, except designing new firmware with a new boot mechanism all over again. What a waste of time and resources, when even the BIOS today allows you to boot whatever you like, with whatever partition table you like.

    Search for what the Coreboot developers think about U/EFI. You’ll realize why it is not the solution to the many ugly problems the BIOS suffers from today.

    • Sir Sagamore
    • 12 years ago

    Asus unveils DiskUnlocker

    http://event.asus.com/mb/2010/Disk_Unlocker/

    Disk Unlocker – a brand new exclusive (patent pending) technology from ASUS. This is the first software solution to overcome current operating system limitations that prevent a hard disk drive from utilizing more than 2048GB (also known as 2.2TB). With just a few clicks, Disk Unlocker taps into hidden storage space beyond the nominal 2048GB range, helping you use large hard drives to their maximum potential.

    • Krogoth
    • 12 years ago

    The BIOS and MBR are both the problem. They are legacies of the old x86 ISA.

    By default, the BIOS needs to look in a certain area of the disk for the boot partitions on a given boot disk. By design, it was meant to look at an MBR format. The problem is that GPT is not quite like MBR, and the BIOS gets confused about how to deal with it.

    For protective and compatibility purposes, GPT includes a “Protective MBR” layer. This is what the BIOS actually “sees” when it tries to look for bootable partitions on the given disk. Anything that goes beyond that layer (2TiB) never gets looked at by the BIOS. That is exactly where the problem lies. Yes, you can create partitions larger than 2TiB, or partitions beyond the 2TiB mark, on a BIOS-equipped system; they just cannot be booted from directly.

    You need middleware solutions like GRUB and HBAs, which take care of it for the BIOS, or a new, low-level solution that natively works with GPT (UEFI).

    It is kind of like the memory addressing issue with 32-bit OSes on systems with more than 4GiB of memory. A 32-bit OS can natively address only up to 4GiB of memory (it can be less due to a number of other factors). You need to utilize hacks like PAE to overcome this, or use a 64-bit OS.

    What I do not understand is why you are so insistent on keeping the BIOS alive. It is already hopelessly dated and carries too many annoying kludges from the ISA/x86 days. UEFI is a nice improvement on it. The fears of malware are nothing more than exaggerated FUD. Ever heard of BIOS viruses? They do exist, but fortunately they are rare, because they require low-level access. UEFI will be no different. If it is because of “trusted computing,” you are barking up the wrong tree.

    Edit: Okay, it is actually the boot loader that looks for partition information, not the BIOS. The problem and solutions still remain, though.

    • Meadows
    • 12 years ago

    Krogoth, I think the guy knows more than you do, so you pretty much lost 4 replies ago or something.

    • Next9
    • 12 years ago

    Yes it is. So why the hell are you using the MS-DOS partition table? It could have been abandoned many years ago, BIOS or no BIOS. The BIOS does not require any partition table. That is just FUD.

    • Krogoth
    • 12 years ago

    It does both functions.

    Geez, did you wake up on the wrong side of the bed?

    Data storage and partitioning is serious business!

    • Next9
    • 12 years ago

    The Protective MBR is not there for compatibility reasons. It is called PROTECTIVE, not compatible!!! It is designed to protect the GPT disk against obsolete OSes and applications that could damage the data.

    For compatibility reasons, you can use a HYBRID MBR, which is a different story. In a protective MBR, there is only a single partition entry covering up to 2TB as a masquerade for legacy software, and the following sectors are occupied by the GPT. In a hybrid MBR, four primary partitions are created, where the first, second, and third are the same in the MS-DOS and GPT partition tables, and the fourth is the GPT protective entry. But that is a different story.

    If you look at the EFI boot process, you soon realize that it brings the same boot process GRUB has used for many years. It uses the EFI partition to embed bootloader data. That is exactly what GRUB does. I do not see any difference.

    • Next9
    • 12 years ago

    95% of users do not know how to manage any operating system, no matter whether it is Windows or Linux. I do not promote any solutions. I just go crazy each time I see somebody promoting Windows XP as a good OS, because I fight with this shit five days a week.

    I am sorry for that “idiot”. My fault.

    • Krogoth
    • 12 years ago

    LBA 0 = Protective MBR layer. It is there for compatibility purposes, but it has a drawback: it cannot address more than 2TiB. Thus, a boot partition larger than 2TiB is not bootable on a BIOS-equipped system without workarounds.

    The reviewer ran into this problem, which is why he was forced to keep the boot partition under the 2TiB limit. The mystery is why Windows 7 forbade him from creating a second partition with the remaining capacity. (I suspect it has to do with how Windows writes its boot partition.)

    GRUB and other boot managers are middleware solutions to the problem. You can also use an HBA of some kind and boot indirectly from it (WD’s bundled solution).

    In the end, UEFI support is a more seamless and permanent solution to the problem.

    • Firestarter
    • 12 years ago

    I used a 10 year old club because it worked just fine for bashing, and that’s what I needed it for. Upgrading for the sake of upgrading is best left for applications in which being current is more important.

    • Firestarter
    • 12 years ago

    You have to consider this in the context of a home server, to be administrated by the people living there. I am not an experienced Linux administrator, and it may surprise you but I bet that’s the case for the majority of the Tech Report audience.

    As such, it is of no concern to me whether the RDP service dies first in case of some random lockup, as one may have an old 15″ CRT stuffed in that closet as well, just in case. It is also of no concern how much better Samba is than XP’s admittedly limited filesharing, because XP’s filesharing is just…

    • Meadows
    • 12 years ago

    Not to mention he said nothing about what servers might or might not be used, and potatohead assumed that no servers were being used at all.

    • Next9
    • 12 years ago

    No. The BIOS does not search for anything. The BIOS runs code from LBA 0 (the MBR), where the first 440 bytes are occupied by the bootloader and the rest is the MS-DOS partition table. It is up to you what you put in that 440-byte area. If you put the MS-DOS bootloader there, it will ask the BIOS to search the MS-DOS partition table and find the “boot” flag. Only then do all the problems appear, because of the limitations of the MS-DOS partition table.

    You can use GRUB. GRUB does not ask the BIOS to explore anything; it takes control of the boot process itself. So you can use whatever partition scheme you want – MS-DOS, GPT, or even a design of your own. If GRUB supports it, it will work, no matter what the BIOS supports, because the BIOS just runs that 440-byte portion of the bootloader. That is how it works.
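For reference, the 512-byte MBR sector described above breaks down roughly like this (a sketch; the 440-byte boot-code figure reflects modern MBRs, which reserve six bytes for a disk signature ahead of the partition table):

```python
# Byte layout of a 512-byte MBR sector: (name, offset, size).
MBR_LAYOUT = [
    ("boot code",         0, 440),  # bootstrap code, e.g. GRUB's first stage
    ("disk signature",  440,   6),  # 4-byte signature + 2 reserved bytes
    ("partition table", 446,  64),  # 4 entries of 16 bytes each
    ("boot signature",  510,   2),  # the 0x55 0xAA marker
]

# The regions tile the whole sector exactly.
assert sum(size for _, _, size in MBR_LAYOUT) == 512

def region_at(offset):
    """Return the name of the MBR region containing a byte offset."""
    for name, start, size in MBR_LAYOUT:
        if start <= offset < start + size:
            return name
    raise ValueError("offset outside the 512-byte sector")

print(region_at(0))    # boot code
print(region_at(446))  # partition table
print(region_at(510))  # boot signature
```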

    I have GPT on all my HDDs and I boot from them. None of my systems use EFI (only Intel mobos today include EFI support, and I have none of them). In addition, I do not think EFI is better than the BIOS, since it does not solve the BIOS’s biggest problems. Yes, it does solve Microsoft’s inability to write a good bootloader, but that is not the BIOS’s fault. EFI is just a more bloated, advanced BIOS, where you again need drivers for the firmware and the OS independently, and where the EFI FAT partition is suitable for a bunch of malware executed by the firmware even before the operating system starts… 🙁

    GPT is not dependent on EFI, and you can use it whenever you like, since GPT drives are self-identifying. Thus the BIOS is not the limitation. The limitation is the operating system from Microsoft, as always.

    • Krogoth
    • 12 years ago

    You are not fully understanding the issue at hand.

    The problem is the BIOS itself. It relies on the MBR format to find boot information for the boot disk. The MBR itself has a hard limit of 512 bytes (per sector) * 2^32 = 2TiB.

    512-byte sector blocks are an aging legacy of the original floppy disks and early HDDs. The addressing limit was 16-bit in pre-i386 days: 512 bytes * 2^16 = 32MiB. The i386 architecture expanded it to 32-bit, which gives us the current limit of the MBR format.

    GPT is a newer format that was developed during the early 2000s to address the implementation problems of MBR within the enterprise world. GPT not only expands the addressing space to 64-bit, it also allows for larger sector blocks. More details can be read at http://en.wikipedia.org/wiki/GUID_Partition_Table. The BIOS was never coded with booting from a GPT format in mind, and it most likely never will be, because UEFI is, for all intents and purposes, slowly replacing the venerable BIOS. Hopefully, the limits of MBR will be the catalyst for it.
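The limits worked through above can be checked with a couple of lines of arithmetic:

```python
# Sector size x addressable sector count, for each scheme mentioned above.
MiB, TiB, ZiB = 2**20, 2**40, 2**70

print(512 * 2**16 // MiB)  # 32 -> 16-bit addressing: 32 MiB
print(512 * 2**32 // TiB)  # 2  -> 32-bit MBR: 2 TiB
print(512 * 2**64 // ZiB)  # 8  -> 64-bit GPT: 8 ZiB
```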

    • eloj
    • 12 years ago

    I guess when all you have and know is a 10-year-old club, everything looks like a nail. What you said boils down to “XP is great because many don’t know any better”.

    • Falanx
    • 12 years ago

    Only if that lot of small platters had the same areal density, yes. But that's not an available option; it increases the disk stack height, the rotating mass, and the load on the bearings, which in turn affect reliability, power consumption, and heat generated.

    So, no.

    • vvas
    • 12 years ago

    Whoa, what’s with the name calling? Can we not keep it civil?

    Guys, it’s a fact that TR is a Windows-centric publication, as that is what most of its readers use and are comfortable with. So yeah, assuming someone is mostly a Windows user, an old XP license lying around …

    • Next9
    • 12 years ago

    Just because you do not know how to manage a Linux server, it must be less usable than WinXP? 🙂

    I am sorry to say it, but if you run a torrent client in Wine, you must be a lame idiot. I have never had any problems with torrents, except some bizarre trackers blocking some of the Linux clients (I do not know why).

    Samba is way more powerful than Windows sharing, and you can serve more clients using Samba on the same hardware.

    Among other things, I manage approx. 20 Windows XP servers collecting measured data. It was not my choice; XP is the only supported platform for this kind of solution. I can tell you that this parody of an operating system often fails or halts just because of the small workload produced by the connected peripherals. If one of these machines locks up, the first service to become unavailable is RDP, so you cannot reboot it remotely. If there is something NOT SUITABLE for Windows XP, it is headless remote server usage.

    • Firestarter
    • 12 years ago

    What about being familiar to many and easy to set up? To get Ubuntu to do what my XP closet file/print/torrent server was already doing perfectly fine (and stably!) for many a month, I spent many nights fiddling with all kinds of software and configuration issues. And after I was fed up with fiddling with it, it was still not as usable as the XP box was. The torrent clients available at the time were either not quick enough (downloading noticeably slower) or not stable enough (uTorrent in Wine), and having the computer share its printer was another lesson in how things work in a Linux environment, for better or worse.

    I’m not saying XP works better for everyone, but I agree 100% with Geoff’s opinion that spare XP licenses are great for this kind of stuff.

    edit: this was a while ago, around 2008. YMMV

    • Next9
    • 12 years ago

    AFAIK, there is no 2TB limit built into the BIOS. Those limitations are built into the MS-DOS partition table and the MS-DOS bootloader (both in the MBR). Since Windows, even the latest version 7, still boots like MS-DOS in the 21st century, only Microsoft should be blamed for this situation.

    It is possible to boot Linux using GRUB and GPT on a BIOS machine, so large drives should be bootable without problems.
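For anyone wanting to try this, the usual recipe is a GPT label plus a tiny BIOS boot partition for GRUB to embed its core image into. This is a sketch only: /dev/sdX is a placeholder for your disk, and these commands destroy existing data on it.

```shell
# Create a GPT label and a small BIOS boot partition for GRUB's core image
parted /dev/sdX mklabel gpt
parted /dev/sdX mkpart grub 1MiB 2MiB
parted /dev/sdX set 1 bios_grub on

# grub-install detects the bios_grub partition and embeds core.img there,
# so a BIOS machine can boot from a GPT disk with no EFI involved
grub-install /dev/sdX
```

The bios_grub flag exists precisely because GPT leaves no post-MBR gap GRUB can rely on, unlike the MS-DOS scheme.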

    • Dissonance
    • 12 years ago

    Correct. The file copy results are going to be more important for most folks. The FC-Test scores simply highlight weak points in the controller’s ability to handle sequential transfers that don’t take advantage of NCQ and get along with the Microsoft AHCI drivers.

    • MadManOriginal
    • 12 years ago

    I see. So basically, would you say that for SandForce drives the FC-Test results don't mean much, in terms of real-world use versus pseudo-real-world use (FC-Test being the latter)?

    • Dissonance
    • 12 years ago

    Depends on the SATA-to-USB bridge used in the enclosure. According to WD, ones that have "Large Drive" support will be able to use the full 3TB across all operating systems, including XP. I expect newer bridge chips do or will support this, but I haven't seen any bare enclosures that explicitly claim large-drive support.

    FWIW, the 3TB external drives from WD and Seagate haven’t carried any OS restrictions. I’m not aware if either of them is bootable, though.
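One way bridge chips reportedly get around the 2TiB ceiling is by presenting the drive to the host with 4KB logical sectors: the MBR stores 32-bit sector counts, not byte counts, so the byte limit scales with sector size. The arithmetic below is just that observation, not a claim about any specific bridge's firmware:

```python
# MBR partition entries hold 32-bit sector counts,
# so the addressable capacity scales with the logical sector size
for sector_bytes in (512, 4096):
    limit_tib = sector_bytes * 2**32 // 2**40
    print(f"{sector_bytes}-byte sectors -> {limit_tib} TiB MBR limit")
```

With 4KB sectors the same 32-bit field covers 16TiB, which would comfortably let XP see all 3TB through the bridge.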

    • Dissonance
    • 12 years ago

    The SandForce drives have issues with FC-Test for a couple of reasons we delve into a bit more in a review of those SSDs:


    • MadManOriginal
    • 12 years ago

    The most interesting thing I saw in the review data was not about the reviewed drive or any mechanical drive at all; rather, it was the huge difference between the FC-Test and TR DriveBench 'copy' results for SandForce-based SSDs. In the former they are at the bottom, but in the latter they are much nearer the top. Is either of those two tests inaccurate in that it doesn't reflect real-world use very well for those SSDs? Is there some technical reason why the results are so different, Geoff? (I'll go read those pages more carefully now…)

    • albundy
    • 12 years ago

    i would think that the boot timings would place the drive at the bottom of the boot graph, due to the time it takes for the controller to boot up.

    • albundy
    • 12 years ago

    he never mentioned that the lab was in a school. did you just assume that? and what makes you think that a lab worker has any say in letting such purchases go beyond budgeting restrictions?

    • potatochobit
    • 12 years ago

    ‘We’ or ‘Your’

    if you expect one hard drive to hold enough data for your entire school, it is easy to see who the moron is

    stop being a cheap-o and buy a server or raid setup

    • eloj
    • 12 years ago

    “Old XP licenses are great for closet file servers”

    No. No they’re really not that great for closet file servers. In fact, I can’t think of a single positive thing about XP as the OS for a file server, old licenses or not.

    A Debian GNU/Linux netboot image? /That’s/ useful for a closet file server.

    Sad to see the “512e” shenanigans persist.

    You should have tested the drive on a Mac too to verify there are no problems.

    • kilkennycat
    • 12 years ago

    Geoff, a technical question or 3:-

    If this 3TB drive is packaged by WD (or a third party) in a USB 2.0 (or USB 3.0) external enclosure, how are the XP sector issues and the current default Win7/Vista/BIOS 2.19TB limitations going to be rendered invisible to present-generation motherboard hardware and OSes, and….. particularly in the case of XP, without any obvious performance hit…..?

    Maybe WD could carve the 3TB into two sub-2.19TB “disks” in the USB interface hardware in their external enclosure? These disks could be further partitioned down with current partitioning software, if desired by the user.

    Also, are we still going to be able to boot from the USB disk, should we desire to do so, if the motherboard physically supports the boot function for the current generation of sub-2.19TB USB disks?

    A little enlightenment/speculation on the potential technical approaches to plug-in backward compatibility of future large-capacity USB drives without losing performance would make a very interesting follow-up to this article.

    • glynor
    • 12 years ago

    I hate it when people say crap like this. It always ends up being claim chowder later on, so it isn’t a good idea in the first place, but for now… You absolutely have no idea what you’re talking about.

    We have a single microscope at my lab that generates around 1TB of new data per week. When we get our new high-throughput sequencers, that department will probably be generating closer to 1 TB per day in data.

    There are PLENTY of uses for that amount of data storage space that you aren’t thinking about. What you …

    • srg86
    • 12 years ago

    This reminds me most of all of the 528MB and 8.4GB limits that called for the move from CHS and ECHS to LBA.

    Now we have the need to move from MBR completely to GPT, and possibly UEFI as well.
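For reference, those old ceilings fall straight out of the CHS geometry fields: 528MB is the intersection of the ATA and BIOS cylinder/head/sector limits, and 8.4GB is the BIOS INT 13h translated-geometry limit. A quick check:

```python
SECTOR = 512  # bytes per sector

# ATA/BIOS CHS intersection: 1024 cylinders x 16 heads x 63 sectors
print(1024 * 16 * 63 * SECTOR)    # 528,482,304 bytes (~504 MiB)

# Translated ECHS geometry: 1024 cylinders x 255 heads x 63 sectors
print(1024 * 255 * 63 * SECTOR)   # 8,422,686,720 bytes (~8.4 GB)
```

Same story each time: a fixed-width addressing scheme outlived the capacities it was designed for, and the fix was a translation layer until a wider scheme took over.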

    • potatochobit
    • 12 years ago

    wouldn’t a lot of small platters be better to increase seek speed?

    no one needs 3TB unless your computer is mainly a DVR

    • cygnus1
    • 12 years ago

    That might be a while considering they’re only at 750GB platters now…

    • swaaye
    • 12 years ago

    The last time I remember controller cards shipping with drives was back when UltraDMA 66 drives were coming in and almost all mobos had UDMA33.

    The Promise Ultra66 came with some drives, I recall. It also came in some systems from Dell – 440BX mobos that only did UDMA33.

    • Duck
    • 12 years ago

    I will wait for a 2TB single platter for my next drive. It will avoid these issues 😉

    • indeego
    • 12 years ago

    I wonder if people in the year 2467 will be pissy with us because they are hitting the 64-bit barrier and why couldn’t we think of the limitations!!!! If so, hello, my great-great-grandbots!

    • Krogoth
    • 12 years ago

    Ah, the pains of transition.

    • bthylafh
    • 12 years ago

    All hardware sucks. All software sucks.

    Your favorite program? It sucks.

    • indeego
    • 12 years ago

    “DriveBench doesn’t produce reliable results with Microsoft’s AHCI driver, forcing us to obtain the following performance results with Intel’s RST drivers. We couldn’t get DriveBench to play nicely with our X25-V RAID config, either, which is why it’s not listed in the graphs below. The app will only run on unpartitioned drives, so we tested drives after they’d completed the rest of the suite. As we’ve mentioned, Intel’s current RST drivers don’t properly support 3TB hard drives.” What a mess.

    • Krogoth
    • 12 years ago

    I find it strange that Vista/7's built-in disk management is unable to create a secondary partition out of the remaining capacity of the 3TB HDD. There's no technical reason for it.

    The 2TiB limit only applies to the "protective MBR" layer of the GPT table. This is only an issue if you want to create a bootable partition on a BIOS-equipped motherboard.

    Otherwise, great article Geoff. It pretty much confirms what the drive’s role is. It is an excellent archiving drive that will help start the push for GPT compatibility/workarounds. It is like FAT16 all over again. 😉
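For anyone curious, the protective MBR is just an ordinary MBR whose first partition entry has type 0xEE, telling legacy tools the whole disk is spoken for. A minimal sketch of what that looks like on disk, using a synthetic sector rather than reading a real drive:

```python
def is_protective_mbr(sector0: bytes) -> bool:
    """Return True if sector 0 looks like a GPT protective MBR."""
    # Valid MBRs end with the 0x55AA boot signature
    if len(sector0) < 512 or sector0[510:512] != b"\x55\xaa":
        return False
    # The first partition entry starts at offset 0x1BE; its type byte is at +4.
    # Type 0xEE marks the disk as GPT-protected.
    return sector0[0x1BE + 4] == 0xEE

# Synthetic example: a zeroed sector with a boot signature and a 0xEE entry
fake = bytearray(512)
fake[510:512] = b"\x55\xaa"
fake[0x1BE + 4] = 0xEE
print(is_protective_mbr(bytes(fake)))  # True
```

Because that entry's size field is only 32 bits wide, it caps out at 2TiB even when the GPT behind it describes the full 3TB, which is exactly the FAT16-style growing pain described above.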
