Hitachi’s Deskstar 7K1000 hard drive

Manufacturer Hitachi
Model Deskstar 7K1000
Price (Street)
Availability Now
IMAGINE ONE THOUSAND thousand thousand thousand bytes. A terabyte, if you will. But more than just that—a milestone in storage capacity that hard drive manufacturers have been chasing for years. After more than a decade of living in a world of gigabytes, the bar has finally been raised by Hitachi’s terabyte-capacity Deskstar 7K1000.

Being first to the terabyte mark gives Hitachi bragging rights, and more importantly, the ability to offer single-drive storage capacity 33% greater than that of its competitors. Hitachi isn’t banking on capacity alone, though. The 7K1000 is also outfitted with a whopping 32MB of cache—double what you get with other 3.5″ hard drives. Couple that extra cache with 200GB platters that have the highest areal density of any drive on the market, and the 7K1000’s performance could impress as much as its capacity.

Has Hitachi achieved a perfect balance of speed and storage with its Deskstar 7K1000? We’ve tested it against nearly 20 competitors—including its closest 750GB rivals from Seagate and Western Digital—to find out.

When a terabyte isn’t
By now I’ve no doubt been heckled by someone insisting that the 7K1000 doesn’t actually offer a full terabyte of storage capacity. This person probably sounds like the comic book guy from The Simpsons, but don’t dismiss him. He has a point, sort of.

According to the International System of Units (SI), a terabyte consists of 1,000,000,000,000 bytes—1,000^4, or 10^12. Windows confirms that the 7K1000 delivers 1,000,202,240,000 bytes, which is more than it needs, so what’s the comic book guy on about?

Look a little closer, and you’ll see that while the 7K1000 does indeed offer over a trillion bytes, that capacity only translates to 931 gigabytes. For an explanation of why, we have to delve into the always exciting world of numerical systems. SI units are built on the same base 10 decimal system we’ve been using since grade school. Computers, however, use a binary base 2 system. So, while a kilobyte in decimal is 1,000 bytes, a kilobyte in binary translates to 1,024 bytes. A binary terabyte, then, is not 1,000^4 bytes, but 1,024^4, or 2^40.

Multiplying that out, a binary terabyte yields 1,099,511,627,776 bytes, which is why the 7K1000 falls short of a thousand gigabytes. The drive would actually need 1,024 gigabytes to achieve terabyte status in the binary world. This translation problem isn’t unique to the 7K1000, either. Virtually all hard drives advertise their capacities in SI units, so their actual capacities fall short of binary expectations.
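If you want to check the math yourself, the conversion is easy enough to script. Here’s a minimal sketch in Python using the byte count Windows reports for the 7K1000:

```python
# Decimal (SI) capacity versus the binary units Windows actually displays
reported_bytes = 1000202240000        # what Windows reports for the 7K1000

decimal_gb = 1000**3                  # SI gigabyte
binary_gb  = 1024**3                  # binary gigabyte (gibibyte)
binary_tb  = 1024**4                  # binary terabyte (tebibyte)

print(reported_bytes / decimal_gb)    # ~1000.2 -- over a trillion bytes, as advertised
print(reported_bytes / binary_gb)     # ~931.5  -- the figure Windows shows
print(reported_bytes / binary_tb)     # ~0.91   -- short of a true binary terabyte
```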

Back in the day, the gap between decimal and binary capacity wasn’t big enough to ruffle feathers. Gigabyte drives were only “missing” 24 megabytes, and that was easy to swallow. However, higher capacities widen the disconnect between decimal and binary, leading the terabyte 7K1000 to pull up 69GB short. If you take out multimedia files, 69GB is probably more than enough capacity for what most of us have on our hard drives, so it’s hardly a drop in the bucket.

To help ease the confusion surrounding the PC’s base 2 binary system, various standards bodies are pushing a set of alternative binary prefixes. A terabyte would remain one trillion bytes, while “tebibyte” would denote 1,099,511,627,776 bytes. Needless to say, that hasn’t caught on yet. However, as growing hard drive capacities increase the amount of space “lost” in binary to decimal conversion, the tebibyte’s time may come.

The drive
Now that we have the math sorted out, it’s time to take a look at the Deskstar. Not that there’s much to see.

The 7K1000 looks like just about any other desktop drive. Only a couple of characters on the label serve as evidence of its monstrous capacity. Hard drives don’t need to score high on artistic impression, of course, but I’m continually surprised to see manufacturers wrapping their flagship products in the same generic skin as budget models. You’d think the reigning capacity king would have a little more flair, but there’s nothing to visually set the 7K1000 apart from other Deskstar models or even competitor drives.

Maximum external transfer rate 300MB/s
Buffer to disk transfer rate 1070Mbps
Read seek time 8.5ms
Write seek time 9.2ms
Average rotational latency 4.17ms
Spindle speed 7,200RPM
Available capacities 750GB, 1TB
Cache size 32MB
Platter size 200GB
Idle acoustics 2.9 bels
Seek acoustics 3.0-3.2 bels
Idle power consumption 8.1-9.0W
Read/write power consumption 12.8-13.6W
Native Command Queuing Yes
Recording technology Perpendicular
Warranty length Three years

To see what makes the 7K1000 special, you have to dig into the drive’s spec sheet. Terabyte capacity is obviously what makes this drive unique, but how it gets there is also important. The 7K1000 uses five platters to achieve its industry-leading capacity, perpendicularly packing an impressive 200GB onto each disk. These 200GB platters give the 7K1000 a higher areal density than competing drives that typically feature 188GB platters, and since higher areal densities can lead to better performance by allowing the drive head to access more data across the same physical area, the Deskstar is nicely set up for speed.

Copious amounts of cache should also help the 7K1000 in the performance department. Serial ATA hard drives typically come with either 8MB or 16MB of onboard cache, but the Deskstar packs a whopping 32MB thanks to a single Hynix memory chip. Hitachi didn’t add capacity to the Deskstar by turning the drive into a clumsy minivan, then; they built the hard drive equivalent of an Audi RS4 Avant.

With the 7K1000 breaking new ground in cache size and capacity, it’s almost amusing to see the drive hanging onto an old-school molex power connector. The legacy plug does no harm, of course, and the added flexibility may actually come in handy for those looking to deploy the drive in extremely large storage arrays, since power supplies typically only come with a handful of SATA power connectors.

A RAID array might be a good idea if you actually have a terabyte’s worth of data you’d like to store on the 7K1000. That’s a lot to lose, and despite the fact that Hitachi covers the drive with a three-year warranty, that warranty will only get you a replacement drive if yours fails—it won’t restore your data.

Before diving into testing, we should take a moment to give the folks at NCIX a shout out for hooking us up with the 7K1000 we used for testing. We’ve been dealing with NCIX for a long time, and you can now sample their wares stateside at NCIXUS.

Test notes
We’ll be comparing the performance of the Deskstar 7K1000 1TB with that of a slew of competitors, including some of the latest and greatest Serial ATA drives from Hitachi, Maxtor, Samsung, Seagate, and Western Digital. These drives differ when it comes to external transfer rates, spindle speeds, cache sizes, platter densities, NCQ support, and capacity, all of which can have an impact on performance. Keep in mind the following differences as we move through our benchmarks:

Max external transfer rate Spindle speed Cache size Platter size Capacity Native Command Queuing?
Barracuda 7200.7 NCQ 150MB/s 7,200RPM 8MB 80GB 160GB Yes
Barracuda 7200.8 150MB/s 7,200RPM 8MB 133GB 400GB Yes
Barracuda 7200.9 (160GB) 300MB/s 7,200RPM 8MB 160GB 160GB Yes
Barracuda 7200.9 (500GB) 300MB/s 7,200RPM 16MB 125GB 500GB Yes
Barracuda 7200.10 300MB/s 7,200RPM 16MB 188GB 750GB Yes
Barracuda ES 300MB/s 7,200RPM 16MB 188GB 750GB Yes
Caviar SE16 300MB/s 7,200RPM 16MB 83GB 250GB No
Caviar SE16 (500GB) 300MB/s 7,200RPM 16MB 125GB 500GB Yes
Caviar SE16 (750GB) 300MB/s 7,200RPM 16MB 188GB 750GB Yes
Caviar RE2 150MB/s 7,200RPM 16MB 100GB 400GB Yes
Caviar RE2 (500GB) 300MB/s 7,200RPM 16MB 125GB 500GB Yes
Deskstar 7K500 300MB/s 7,200RPM 16MB 100GB 500GB Yes
Deskstar 7K1000 300MB/s 7,200RPM 32MB 200GB 1TB Yes
DiamondMax 10 150MB/s 7,200RPM 16MB 100GB 300GB Yes
DiamondMax 11 300MB/s 7,200RPM 16MB 125GB 500GB Yes
Raptor WD740GD 150MB/s 10,000RPM 8MB 37GB 74GB No*
Raptor X 150MB/s 10,000RPM 16MB 75GB 150GB Yes
Raptor WD1500ADFD 150MB/s 10,000RPM 16MB 75GB 150GB Yes
SpinPoint T 300MB/s 7,200RPM 16MB 133GB 400GB Yes

Note that the 250GB Caviar SE16 and the Raptor WD740GD lack support for Native Command Queuing. The WD740GD does support a form of command queuing known as Tagged Command Queuing (TCQ), but host controller and chipset support for TCQ is pretty thin. Our Intel 955X-based test platform doesn’t support TCQ.

We have test results from several versions of Western Digital’s Caviar SE16 and RE2. To avoid confusion, we’ll be listing their capacities in parentheses in each of our graphs.

Since Seagate makes versions of the 7200.7 both with and without NCQ support, the 7200.7 in our tests appears as the “Barracuda 7200.7 NCQ” to clarify that it’s the NCQ version of the drive. The other drives aren’t explicitly labeled as NCQ drives because they’re not available without NCQ support.

Finally, we should note that our WD1500ADFD has a slightly newer firmware revision than the Raptor X sample we’ve had since February, 2006. The drives still share identical internals, but firmware optimizations could give our newer Raptor an edge over the X in some tests.

Performance data from such a daunting collection of drives can make our graphs a little hard to read, so I’ve highlighted the 7K1000 in bright yellow and its high-capacity competitors—the Barracuda 7200.10 and ES, and the Caviar SE16 750GB—in pale yellow to set them apart from the others. We also have two sets of IOMeter graphs: one with all the drives, and another with just the Deskstar and its 750GB rivals. Most of our analysis will be limited to how the 7K1000 compares with its direct rivals, so it should be easy to follow along.

Our testing methods
All tests were run three times, and their results were averaged, using the following test system.

Processor Pentium 4 Extreme Edition 3.4GHz
System bus 800MHz (200MHz quad-pumped)
Motherboard Asus P5WD2 Premium
Bios revision 0422
North bridge Intel 955X MCH
South bridge Intel ICH7R
Chipset drivers Chipset 7.2.1.1003
AHCI/RAID 5.1.0.1022
Memory size 1GB (2 DIMMs)
Memory type Micron DDR2 SDRAM at 533MHz
CAS latency (CL) 3
RAS to CAS delay (tRCD) 3
RAS precharge (tRP) 3
Cycle time (tRAS) 8
Audio codec ALC882D
Graphics Radeon X700 Pro 256MB with CATALYST 5.7 drivers
Hard drives Hitachi 7K500 500GB SATA
Western Digital Caviar SE16 750GB SATA
Maxtor DiamondMax 10 300GB SATA
Seagate Barracuda 7200.7 NCQ 160GB SATA
Seagate Barracuda 7200.8 400GB SATA
Seagate Barracuda 7200.9 160GB SATA
Seagate Barracuda 7200.9 500GB SATA
Seagate Barracuda 7200.10 750GB SATA
Western Digital Caviar SE16 250GB SATA
Western Digital Caviar RE2 400GB SATA
Western Digital Raptor WD740GD 74GB SATA
Western Digital Raptor X 150GB SATA
Western Digital Raptor WD1500ADFD 150GB SATA
Western Digital Caviar RE2 500GB SATA
Western Digital Caviar SE16 500GB SATA
Seagate Barracuda ES 750GB SATA
Samsung SpinPoint T 400GB SATA
Maxtor DiamondMax 11 500GB SATA
Hitachi Deskstar 7K1000 1TB SATA
OS Windows XP Professional
OS updates Service Pack 2

Thanks to the folks at Newegg for hooking us up with the DiamondMax 11 we used for testing. Also, thanks to NCIX for getting us the Deskstar 7K1000.

Our test system was powered by an OCZ PowerStream power supply unit. The PowerStream was one of our Editor’s Choice winners in our last PSU round-up.

We used the following versions of our test applications:

The test systems’ Windows desktop was set at 1280×1024 in 32-bit color at an 85Hz screen refresh rate. Vertical refresh sync (vsync) was disabled for all tests.

All the tests and methods we employed are publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

WorldBench overall performance
WorldBench uses scripting to step through a series of tasks in common Windows applications. It then produces an overall score. WorldBench also spits out individual results for its component application tests, allowing us to compare performance in each. We’ll look at the overall score, and then we’ll show individual application results.

The Deskstar doesn’t quite manage to top the field in WorldBench, but it’s only one point off the lead. Note that the drive’s closest competitors are all within a point of the 7K1000, as well.

Multimedia editing and encoding

MusicMatch Jukebox

Windows Media Encoder

Adobe Premiere

VideoWave Movie Creator

Premiere is the only one of WorldBench’s multimedia editing and encoding tests to tax the storage subsystem, and again, the Deskstar finds itself a little off the lead. The 7K1000 is seven seconds slower than Western Digital’s 750GB Caviar SE16 in this test, but that still makes it the second fastest 7,200-RPM drive.

Image processing

Adobe Photoshop

ACDSee PowerPack

Photoshop is a wash, but ACDSee manages to spread the field a little. There, the Deskstar again finds itself sitting between Western Digital and Seagate’s 750GB drives, with the former taking the lead once again.

Multitasking and office applications

Microsoft Office

Mozilla

Mozilla and Windows Media Encoder

There isn’t much to see in WorldBench’s multitasking and office application tests. The 7K1000 does take top honors with the suite’s Office XP and Mozilla media encode workloads, but it doesn’t really distance itself from the other drives.

Other applications

WinZip

Nero

WinZip and Nero give the Deskstar a chance to really stretch its legs. Unfortunately, those legs are a little short, as the 7K1000 finds itself behind not only the 750GB Caviar, but also Seagate’s Barracuda ES and 7200.10.

Boot and load times
To test system boot and game level load times, we busted out our trusty stopwatch.

Our system boot time test proves disastrous for the Deskstar. The drive is nearly six seconds slower than its closest competitor and more than 20 seconds slower than its Caviar-based competition.

Things improve for Hitachi when we move to game load times, where the Deskstar manages to trump its 750GB rivals.

File Copy Test
File Copy Test is a pseudo-real-world benchmark that times how long it takes to create, read, and copy files in various test patterns. File copying is tested twice: once with the source and target on the same partition, and once with the target on a separate partition. Scores are presented in MB/s.
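FC-Test handles all of this automatically, but the underlying measurement is simple: write (or copy) a known set of files, time the operation, and divide the bytes moved by the seconds elapsed. Here’s a rough sketch of the idea in Python, with made-up file counts, sizes, and paths rather than FC-Test’s actual patterns:

```python
import os, time

def timed_file_creation(directory, count, size_bytes):
    """Create `count` files of `size_bytes` each; return throughput in MB/s."""
    os.makedirs(directory, exist_ok=True)
    payload = os.urandom(size_bytes)
    start = time.time()
    for i in range(count):
        path = os.path.join(directory, "file_%04d.bin" % i)
        with open(path, "wb") as f:
            f.write(payload)
            f.flush()
            os.fsync(f.fileno())      # make sure the data actually hits the drive
    elapsed = time.time() - start
    return (count * size_bytes) / elapsed / 1e6

# Hypothetical "MP3-like" pattern: 270 files of 4MB apiece
print(timed_file_creation(r"D:\fctest", 270, 4 * 1024 * 1024))
```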

To make things easier to read, we’ve busted out our FC-Test results into individual graphs for each test pattern. We’ll tackle file creation performance first.

With the highest areal density of the lot and a beefy 32MB cache, the Deskstar should have the edge here. Except it doesn’t. At best, the drive manages a third-place performance behind the 750GB Caviar SE16 with the Install test pattern, but it lags farther behind with other test patterns. At least the 7K1000 is consistently faster than the Barracuda ES and 7200.10.

Now that we’ve created the files dictated by all these test patterns, let’s see how fast the drives can read them.

Finally, the 7K1000 starts to flex its muscles. The drive manages to top the field with the Windows and Programs test patterns, which feature a large number of small files, and it’s not far off the lead with the others. To be fair, though, the Deskstar’s wins come by the slimmest of margins; the 750GB Caviar SE16 is the fastest high-capacity drive overall.

FC-Test – continued
Next, File Copy Test combines read and write tasks with some, er, copy tests.

Results are mixed here, with the Deskstar jumping from a pack-leading performance with the Windows test pattern to dragging its feet behind even the 750GB ‘cudas with the ISO and MP3 test patterns. The 7K1000 appears to favor test patterns with larger numbers of small files, which is why it does reasonably well with the Install and Programs test patterns.

FC-Test’s second wave of copy tests involves copying files from one partition to another on the same drive.

The Deskstar struggles with this batch of partition copy tests, finding itself behind its closest rivals in three of four test patterns—and by a large margin in two of those tests. Things look better with the Programs and Windows test patterns, but even then, the Deskstar still trails the 750GB Caviar SE16.

iPEAK multitasking
We’ve developed a series of disk-intensive multitasking tests to highlight the impact of command queuing on hard drive performance. You can get the low-down on these iPEAK-based tests here. The mean service time of each drive is reported in milliseconds, with lower values representing better performance.
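In other words, the figure iPEAK reports is just the average time from when each I/O in the trace is issued to when the drive completes it. A trivial sketch of that calculation, assuming you already have per-request timestamps from a trace replay:

```python
def mean_service_time_ms(requests):
    """requests: iterable of (issue_time_s, complete_time_s) pairs from a trace replay."""
    service_times = [(done - issued) * 1000.0 for issued, done in requests]
    return sum(service_times) / len(service_times)

# Hypothetical replay results: three requests taking 8ms, 12ms, and 5ms
print(mean_service_time_ms([(0.000, 0.008), (0.010, 0.022), (0.025, 0.030)]))  # ~8.33
```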

The Deskstar exacts revenge on the Caviar SE16 in our first batch of iPEAK tests, managing to trump all takers with two of five test patterns. The Barracudas aren’t even close in these tests. Although the Caviar fares better with compressed file extraction, the Deskstar is faster at compressed file creation.

iPEAK multitasking – continued

Our second batch of iPEAK workloads finds the Deskstar holding its own rather nicely. The drive blazes to victory in workloads that combine Outlook import and export tasks with file copy operations, easily beating its high-capacity competitors. Western Digital’s 750GB Caviar does nose ahead when we combine an Outlook export with a VirtualDub import, but it’s well behind when that Outlook export becomes an import.

IOMeter – Transaction rate
IOMeter presents a good test case for command queuing, so the NCQ-less Western Digital Caviar SE16 250GB and Raptor WD740GD should have a slight disadvantage here under higher loads. To keep things easy to read, we’ve busted out two sets of graphs here. The first includes the Deskstar 7K1000 and its closest competitors, while the second has results for all the drives we’ve tested. With close to 20 drives, those latter graphs are a little difficult to read, so we’ll focus our attention on the first set and the Deskstar’s direct rivals.

The 7K1000’s IOMeter transaction rates are interesting to say the least. With lower numbers of concurrent I/O requests, the drive’s throughput places it at the back of the class. However, when we crank up to higher I/O levels, the Deskstar’s throughput ramps more aggressively, as if we’d just woken the drive up. So, the Deskstar is the slowest drive up to between 32 and 64 outstanding I/O requests, but it’s actually the fastest—or nearly there—when we peak at 256 I/O requests.

Good luck with these, folks.

IOMeter – Response time

Turning our attention to IOMeter response times, we see a similar pattern emerge. The Deskstar trails its rivals with fewer I/O requests, but seems to warm up and surge into contention as the load approaches a peak of 256 outstanding I/Os. Of course, you’ll need a busy multi-user environment to slam a drive with that many concurrent I/O requests; typical desktop workloads don’t even come close.

IOMeter – CPU utilization

CPU utilization is below half a percent for all four drives.

HD Tach
We tested HD Tach with the benchmark’s full variable zone size setting.

The Deskstar’s best-in-class areal density should give it an edge in HD Tach’s sustained read and write speed tests, but the drive only manages a fourth-place performance in each. To be fair, the 7K1000 does lose to a couple of 10K-RPM Raptors. However, even if you take those out of the equation, the 7,200RPM Caviar SE16 750GB is still faster by about 5MB/s.

Burst performance is a little disappointing on the Deskstar. The drive falls 16MB/s short of the Caviar SE16 here and more than 37MB/s short of the ‘cudas. The 7K1000’s 32MB cache may be capacious, but it doesn’t look that fast in this test.

At least the Deskstar’s access times are quick. In fact, they’re nearly the fastest of any 7,200-RPM drive we’ve tested and about a millisecond faster than the 7K1000’s direct rivals.

CPU utilization is within HD Tach’s +/- 2% margin of error in this test. Move along.

Noise levels
Noise levels were measured with an Extech 407727 Digital Sound Level meter 1″ from the side of the drives at idle and under an HD Tach seek load. Drives were run with the PCB facing up.

The Deskstar 7K1000 isn’t a loud drive by any means, but it’s not a particularly quiet one, either. This fact is especially apparent under seek loads, where the Deskstar is close to three decibels louder than the 750GB Caviar SE16. The Caviar is actually louder at idle, though, with the 750GB Barracudas claiming the lowest noise levels of any high-capacity drives at idle.

Power consumption
For our power consumption tests, we measured the voltage drop across a 0.1-ohm resistor placed in line with the 5V and 12V lines connected to each drive. Through the magic of Ohm’s Law, we were able to calculate the power draw from each voltage rail and add them together for the total power draw of the drive.
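The arithmetic works out like this: divide the measured drop by the 0.1-ohm sense resistor to get the current on each rail, multiply by the rail voltage to get watts, and sum the two rails. A quick sketch with hypothetical voltage-drop readings:

```python
SENSE_RESISTOR = 0.1  # ohms, placed in line with each supply rail

def drive_power_watts(v_drop_5v, v_drop_12v):
    """Total drive power draw given the voltage drops (in volts) across each sense resistor."""
    i_5v = v_drop_5v / SENSE_RESISTOR      # Ohm's law: I = V / R
    i_12v = v_drop_12v / SENSE_RESISTOR
    return i_5v * 5.0 + i_12v * 12.0       # P = I * V, summed across both rails

# Hypothetical readings: 60mV across the 5V sense resistor, 90mV across the 12V one
print(drive_power_watts(0.06, 0.09))       # 3.0W + 10.8W = 13.8W
```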

For a five-platter drive, the 7K1000’s power consumption is impressively low at idle. However, when you kick the drive into action, power draw almost doubles, pushing the Deskstar to the back of the pack. Note the nearly 5W gap between the 750GB Caviar SE16 and the 7K1000 under load.

Conclusions
Everyone remembers their first, be it a kiss, a car, or the clumsy back-seat combination of the two. As the first hard drive to reach the terabyte mark, Hitachi’s Deskstar 7K1000 will be remembered, too. Squeezing a trillion bytes into a 3.5″ hard drive form factor is a monumental engineering achievement—one that rival hard drive manufacturers have yet to replicate and bring to market. The question, of course, is whether the 7K1000 has value beyond its unique status as the first terabyte drive you can buy.

I’m not so sure.

On the performance front, the Deskstar may be faster overall than Seagate’s 750GB Barracudas, but those drives are more than a year old. Against Western Digital’s latest 750GB Caviar SE16, the Deskstar most often finds itself trailing. That’s a disappointing result considering the 7K1000’s higher areal density and massive 32MB cache. Frankly, we expected the drive to be faster.

Of course, the Deskstar’s rivals fall 250GB short on the capacity front. Their cost per gigabyte is also substantially lower, though. The 7K1000 sells for as low as $345, so you’re paying about $0.35 per gigabyte. Meanwhile, the 750GB Caviar SE16 can be had for as little as $189, which works out to just over $0.25 per gigabyte.
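A quick sanity check on those numbers (the math, not a quote from either vendor):

```python
# Cost per gigabyte at the street prices quoted above
print(345.0 / 1000)   # Deskstar 7K1000:   ~$0.35/GB
print(189.0 / 750)    # Caviar SE16 750GB: ~$0.25/GB
```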

One expects to pay a premium for the highest-capacity drive in a given range, but $0.10/gigabyte is quite a gap to span for a drive that offers few performance highlights and unremarkable noise levels. You’re paying for the milestone, I guess, or the exclusivity that comes with being the only terabyte drive on the market. In the end, we’d only recommend the Deskstar 7K1000 if you absolutely must have a terabyte of capacity in a single 3.5″ drive. It’s fitting, then, that the 7K1000’s defining feature turns out to be the best reason to buy it.

Comments closed
    • fenacv
    • 12 years ago
    • WaltC
    • 12 years ago

    I thought this was a pretty good review in that it covered the highlights of interest to most people. Of course, the main attraction to this drive is the storage capacity, and the people who buy it will buy it for that and be satisfied with everything else as it lays.

    One thing I’d like to see in TR review bar charts from now on is some imagination in the color of the bar representing the product under review. It’s fine to alternate the compared products in shades of yellow, but how about picking a much different color for the bar representing the review product–you know, like green–or red–or blue, so that at a glance the position of the reviewed product in the chart stands out like a sore thumb…? This would make the bar charts much more enjoyable to study, imo.

    As well, I did catch the opening snipe at RAID configurations–RAID in general seems to be somewhat of a common editorial voodoo doll that people cannot resist sticking a pin or two in whenever the subject is hard drives–for some reason that I have never been able to divine…;) The inference here, of course, is that if a drive fails in RAID then the only thing Hitachi replaces is the drive and you lose the data. I think it would have been nice for TR just to also mention that if a single drive fails running in IDE that the same exact scenario applies…;) (Hopefully, TR did not mean to imply that if you’re a good boy and eschew RAID then Hitachi will reward you by replacing not only your drive, but also your data, in the event you should lose your single drive…;))

    • Nitrodist
    • 12 years ago

    I think the real question here is where is the overclocking results page?

    • Nele
    • 12 years ago

    Hi, How do you test system boot times? Do you restart the system two times before you test? Do you test a hard drive three or more times and calculate the average time? I guess something in this benchmark goes wrong…can’t believe those results…

    • moose17145
    • 12 years ago

    I believe there to be a Typo on page 11


    • Geatian
    • 12 years ago

Holy cow, look at all of the diggs! Up to 646 as of this writing… Has TR ever gotten that many before? Putting the Digg link on the front page: good choice. 😉

    • tygrus
    • 12 years ago

    Re: Super capacitor for power storage to keep drive working at shutdown.
1F 12V Super capacitor is the size of a fly spray can and is able to supply 1A for 1s or about 1s of HD use if you’re lucky (voltage drops, resistance etc means you can’t use all of its capacity).

    How small can you make a battery for 12v@1A ? like having 2 PocketPC PDA batteries or 4xphone batteries). About 1/3 size of 3.5″ drive. A bit bulky.

Could they use the spindown of the disk platters to supply enough power to copy data from cache to FLASH ? problem being limited <8MB/s write (small/narrow FLASH is slow, larger drives use wider controller and larger chips for more MB/s).

    AAM (Automatic Acoustic Management) would be slower. Some drive models don’t allow you to fiddle with the AAM setting.

Noticed mixed results with the drives. Usage pattern and drive optimisation (priorities of designers/algorithms) greatly affect some results.

    PC CPU/system benchmarks at start don’t stress the HD so don’t need all to be in every review (comment on skipped).

    Would like to see more RAID controllers with more RAM (& battery backup). FLASH is not good for lots of random small writes.

      • just brew it!
      • 12 years ago


        • evermore
        • 12 years ago

        Power down the read/write amplifiers? How is it going to write? Writing already takes more power than reading.

        Take a lot of redesign of the controller chip, to enable it to shut down portions of itself like CPUs and GPUs are starting to do. Disk platters also spin down pretty quickly, might not be able to actually generate enough energy to hit the voltage/amperage needed.

        Plus, as the platters spin down, the drive would need more and more power in order to complete the write, since it’s taking longer and longer for the heads to reach the necessary areas. (Though they could provide a track along the edge dedicated to emergency writes so no seeking would be needed.) It might be like trying to get to a gas station before your tank is empty by driving faster, therefore emptying the tank sooner.

          • just brew it!
          • 12 years ago

No, you misunderstood my post. Use the energy of the spinning platters (turn the motor into a generator) and write the cache to flash.

      • evermore
      • 12 years ago

      Most drives use at least a couple of amps minimum during seeks, so even that 1F 12V capacitor wouldn’t cut it.

      Many drives now already use the spin of the platters to physically maneuver the read heads off the platter to a safe zone. I don’t know if they’d generate all that much electricity as they spin down anyway. They spin down pretty quick, and although ideally they’d produce as much electricity as it would take to make them spin at that speed, losses from the conversion would be significant.

    • alpha754293
    • 12 years ago

    Overall, it’s a good review. Very thorough I think.

    I would like to mention that sometimes, absolute performance isn’t the best.

Hitachi drives have never been stellar for performance. They’re usually in the middle of the pack, ever since IBM sold off its storage division, and Hitachi Global Storage Technologies was formed from the purchase, to which they solved the ailing “Deathstar” problem by dumping the consumer product line technologies in favor of implementing the Enterprise market technologies. And it has worked well for them.

    I, myself, own a total of at least 50 IBM/Hitachi drives. At least 20 or so of them are Hitachi 500 GB. In the last 4 years, of the 50 (total), only 6 have died. Two due to thermal, sitting behind the heating vent in the middle of winter, the rest from actual usage (progressive erosion).

    What is hard for you to measure is reliability in the field. And that is something that Hitachi drives have ALWAYS excelled in, to the point that large OEMs have started using Hitachi drives exclusively for their storage solutions. (e.g. Sun Fire X4500 features 48 Hitachi 500 GB hard drives).

    You don’t find that with any other “consumer” level drive being used in the Enterprise market as mass storage medium.

    That is a true testament of the Enterprise heritage in Hitachi hard drives.

      • continuum
      • 12 years ago


    • grenadier
    • 12 years ago

    It appears that you forgot to turn the AAM (Automatic Acoustic Management) on for any of the noise tests in this article resulting in much worse scores than other reviews already floating around the web.

    If you in fact did use AAM and still ended up with these results could you make a note of that in the article and if not could you add AAM results to your findings? I think it’s a bit misleading not to at least try it for the sake of the article, especially since Anand, etc. have found that it results in such a dramatic noise reduction.

      • GreatGooglyMoogly
      • 12 years ago

      Yeah I’ve been running all my Hitachi (and prior to that, IBM) drives with AAM on. There’s just no point in NOT doing it (on a desktop) since it’s well worth the slight performance drop.

    • FireGryphon
    • 12 years ago

    Excellent review. It’s probably not worth buying the drive unless you need tons of space and have only a few bays to put them in. Otherwise, smaller drives would be a better buy. WD is fastest, Seagate is quietest. Hitachi is just freakin’ big!

    Some corrections:

    On page 14, second to last paragraph: “…to the back of the back.” Second ‘back’ should be ‘pack’.

    In the conclusion, “…but a $0.10/gigabyte is quite a gap to broach…” In this sentence, the word “difference” or “disparity” should come after “$0.10/gigabyte”, and the word ‘broach’ is misused. At least, your usage doesn’t match any meaning I know or can look up.

      • Damage
      • 12 years ago

      It’s a broach, dude, like the jewelry. Geoff is into that stuff.

        • eitje
        • 12 years ago

        yeah – it’s like he was saying “to pin down”.

        except SLIGHTLY more girlie.

          • Anomymous Gerbil
          • 12 years ago

          To pin down? No. The closest meaning might be “raise a topic for discussion”.

          For example “After reading FireGryphon’s post, Damage and Geoff broached the topic of correct word usage, and found they had made a mistake in this case.”

          Maybe they should have written “…but a $0.10/gigabyte disparity is quite a gap to close”.

      • Nullvoid
      • 12 years ago

      If you look at the noise figures the Seagate is very much not the quietest. Unless you value idle noise more than seek?

    • Luminair
    • 12 years ago

    A late and not great review. I also wasn’t impressed to see the NCIX retail sticker still on the drive — that looks unprofessional because most professionals don’t buy their own hardware from a store to review!

    Also, the author skipped over the real issue when he talked about the size. Windows displays size in GiB, but it SAYS it is in “GB”. Windows is simply wrong and causes confusion. The gibibyte vs gigabyte explanation is what he should have discussed.

      • Damage
      • 12 years ago

      A late and not great comments post. I also wasn’t impressed to see the complaint about the retail sticker on the drive — that looks unprofessional since most professionals complain loudly about the sourcing of hardware directly from manufacturers!

      Also, the poster skipped over the real issue when he talked about Windows being wrong. The real issue is that Linux is the One True Religion, which is what he should have discussed.

      • evermore
      • 12 years ago

      Good reviewers DO buy their own hardware to review when possible, as this ensures they haven’t been provided with a hand-picked unit that the manufacturer knows is going to perform slightly better than average.

      Failing that, like when a product costs a thousand bucks, having a retailer provide a unit for testing is okay too, and it’s commonplace to give credit to the retailer with a link to buy it.

      Reviewers who solely rely on manufacturers providing review units are more likely to be writing shill pieces, paid reviews with a foregone positive conclusion.

      • ew
      • 12 years ago

      professional

      You keep using that word. I do not think it means what you think it means.

        • FireGryphon
        • 12 years ago

/Inigo Montoya 🙂

          • Anomymous Gerbil
          • 12 years ago

🙂

      • eitje
      • 12 years ago

      seems buying your own hardware is the right way to go, if you want to test what everyone else will get from the store.

    • albundy
    • 12 years ago

    wow, it was like drawing a line down a page. only i/o and file copy tests showed better variations.

    • Bensam123
    • 12 years ago

    You know, I can’t help but wonder why they still only slap 8, 16, and now 32mb of memory on HDs. That was top of the line ages ago.

If they concentrated on better prefetching to, let’s say, fill up a 512mb module all the time you would really have something to go off of. I guess that would be a lot like a hybrid HD, only it wouldn’t be as it wouldn’t use flash memory which IMO would be better.

Then there is the whole aspect of SI vs non-SI ways of measuring bits and bytes. I’m just waiting for a company to come out with a “true” 1TB model or whatever. Not so dissimilar to how PSU makers are and were touting “true power” vs “peak power”.

    I’m sure a good graphic like was shown with this review on the box will make people realize what they mean.

    I dunno, but these just seem rather no brainers to me for improving a HD.

      • evermore
      • 12 years ago

      There is a risk to increasing cache sizes (and 32MB is certainly not old hat as far as hard drives are concerned). Imagine having a power failure on a system without a UPS, or just a power supply or other component failure, while you’re transferring a huge file which the disk has put into a 512MB cache before final writing. Half a gig of data lost in a moment (though possibly with the chance of software recovery).

      Operating systems already provide some level of write caching by default, you could modify that to something huge.

      Adding multiple chips of course would also result in higher costs due to the PCB space needed and trace routing. Drive control chips would also require modification so they could access multiple chips.

        • Peldor
        • 12 years ago

        The write cache and read cache wouldn’t have to be the same size though. You could write cache (if you even have it on) say a maximum of 16MB, but use the full 512MB for read cache. I would think the read cache is probably more significant for most users. Aren’t reads far more common than writes?

          • evermore
          • 12 years ago

          Read cache being larger makes a lot less of a difference to performance. You can read data from the cache to memory far faster than the drive can read from the platter to the cache. Current drives do a lot of read-ahead and other things to try to fill the cache with data during times the OS is not reading, but expecting it to be able to gather up half a gig of “might be needed soon” data is a bit much.

          Write cache on the other hand makes a big difference, as it lets the OS and CPU dump the data to cache as fast as the interface can handle, then go back to doing other things. With older OSes that didn’t have any sort of write caching built-in, I expect that made a HUGE difference, particularly when the CPU had to be involved more with older interfaces. A very large write cache would make a big performance difference, but then goes back to the issue of data security.

          As is also mentioned by others, modern OSes do a heck of a lot of caching type things. Write caches that are gigs in size are possible, but since the memory is dynamically used for other things as well, that can have drawbacks. I think 4MB is the minimum that Windows (2000 and up) will use, if you don’t disable it. But if you have applications that regularly use a lot of memory, you end up having less available for the cache (and if you’re running an app like that, then you might be doing a lot of writing to disk of large files, so needing that cache even more). Windows also does some level of read caching, however it’s not part of the disk drive configuration directly. Prefetching of application files is one type, but those are loaded directly into memory, not into any reserved cache.

          Cache on the drive is dedicated, always doing nothing but caching for the drive, and doesn’t have to worry about memory usage by the OS. But, it also can’t increase in size to take advantage of available memory.

          I hadn’t considered OS-level caching before all the replies to this article. With current operating systems, it certainly seems like OS-level write caching at least should be more effective and cost-effective than huge caches on-disk. That does of course depend on the OS caching being well-written, but drive firmware could be crappy at caching too.

          It would still be nice to have a significant cache available anyway, considering how cheap DRAM is. I have to wonder if they couldn’t get 64MB chips cheaper than they’re paying for 32MB or 16MB chips. DRAM makers might be happy to stop making the small sizes so they can consolidate their product lines. A 16MB chip is about the smallest SDRAM made these days for system memory modules, and 128MB modules aren’t a huge market anymore.

      • ew
      • 12 years ago

Unless you’re running DOS without smartdrv, the built-in disk cache isn’t all that useful.

        • Bensam123
        • 12 years ago

        Why not make it more useful then?

        Simple logic. If a user loads a program, why not cache more of the program that is used often? There are tons of ways to make cache useful.

The idea is to write and read from the cache instead of the drive and keep the drive always working at maximum efficiency, rather than going all willy-nilly at the whim of a program. Memory can be re-organized on the fly too, so if lots of things are done to the cache in a short amount of time they can be reorganized and written to the drive in the most efficient manner. Similar to the way NCQ works now, only expanded on and smarter.

          • ew
          • 12 years ago

The problem with disc cache is that it is redundant. Operating systems have been managing disk cache in system memory very well for a long time. My PC is currently using almost 1GB of system memory for disk cache.

          NCQ is also kind of redundant because the OS will reorder commands. I’m not sure if Windows does this though.

            • Bensam123
            • 12 years ago

Hmmm… I wonder what those yahoos were thinking that put cache on a HD or made NCQ. 😛

            • just brew it!
            • 12 years ago

            Cache on the drive is there mainly to match the physical data transfer speed of the drive with the speed at which the OS can read/write disk blocks during sequential transfers. Historically, it has also helped compensate for OSes which had no (or brain-dead) disk caching algorithms. So yes, /[

            • Bensam123
            • 12 years ago

            You’re saying conflicting things.

            Why not just give the OS low level knowledge of the drive or just make the drive a bit smarter? Either of them would drastically increases the efficiency of what they do, but splitting them up yields current results.

            It’s like having two stupid people trying to think up a solution to a problem instead of one smart one.

Even if a drive has no knowledge of the filesystem or files that are getting accessed, it should be able to tell certain areas of the disk are getting accessed more than others and cache them.

I see it as being no different than how pre-fetching and caching works on a processor, only quite a bit more simplistic.

            • just brew it!
            • 12 years ago

            I don’t see the conflict.

            The drive knows about the physical attributes of the hardware, the OS knows how the filesystem data structures are laid out. It’s a sensible partitioning of responsibilities IMO.

            When you’re dealing with cached data which is to be written to the disk, just figuring out which blocks are accessed most frequently isn’t good enough if you want to maintain data integrity. File system data structures should be updated in a specific order if you don’t want to potentially leave the filesystem in a corrupted state.

            The crucial difference versus pre-fetching and caching in a processor is that with a processor’s cache, you don’t care about writing the contents of the cache to non-volatile media in the event of an unexpected system shutdown.

            • Bensam123
            • 12 years ago

I wasn’t talking about maintaining data integrity by knowing what areas of the disk are accessed more often, I was stating that it could be used to improve performance. A drive doesn’t need full understanding of the file system on it or the data beyond the bit level in order to improve performance. It could base all the algorithms off of behavioral patterns. Many things in science are based off of approximations rather than pure fact.

            There are many ways around power down corruption as was described in the other tangent. Once again I was talking about it pertaining to performance, not data security as there are ways to handle that. The original post was about improving HD performance with a bigger cache and making the drive ‘smarter’.

            • just brew it!
            • 12 years ago

            It still makes more sense to have most of the cache on the system side (or in an intelligent disk controller) instead of in the drive. If you cache in the drive, then every time the data is accessed you need to transfer it over the disk interface (300MB/sec assuming both your drive and controller support SATA-2). Bandwidth of system memory is at least an order of magnitude better. The whole point of cache is to put the data where it can be accessed quickly…

            • Bensam123
            • 12 years ago

            It is…

            but as it looks like the major OS manufacturer isn’t doing a great job of trying to fully utilize a HD that just leaves it up to the people plugging them out. Really who do you think would be better at making enhancements to a product, the people who make the product or the people that utilize the product?

I’d put my money on a HD manufacturer like Seagate over Microsoft. If they cared enough about it then they would’ve already done it. As someone like Seagate churns out the actual product you would assume they would want to better their product. Not just to make a better product that the consumer can benefit from, but it would also be a way to differentiate your product in a very stagnant world. In other words they would make more money. Microsoft has no incentive or reason to increase the performance of HDs. They’ll sell their OS regardless of how fast it accesses a disk as long as it’s reasonable.

            Besides who do you think is more likely to read the comments on a HD review, Microsoft or one of the HD manufacturers? I care not which one it is that improve performance or implements ways to make drives faster, but there are three major HD manufacturers and only one Microsoft.

            Plus this was a review of a HD not a OS. I’m sure faster HD access and better data pre-fetching would be on the list for a OS, but this isn’t about the OS. Hell, if both try and improve things in their own way it may yield results that stack.

            • evermore
            • 12 years ago

            Does Windows (or any other OS) treat the disk cache as a virtual extension of the drive, reading and writing to it as if it were the actual disk? I thought it was only a write cache in Windows at least. Even if it does read caching, I’d have assumed it is only caching ahead data from the drive. I would think if it needed data that was in the write cache, it would have to wait until it was written to the disk first.

            • just brew it!
            • 12 years ago

            Windows caches data from recently accessed (not necessarily written) files. The cache competes with the VM working set of any running processes for available physical RAM. There’s a complicated algorithm that tries to balance these competing needs for physical RAM… it gets it mostly right most of the time, but there are pathological cases where it can mess up quite badly. E.g., a program which quickly reads through several large files on disk can cause a lot of code/data associated with running processes to get paged out to make room for disk cache, causing the entire system to feel sluggish for a while until stuff gets paged back in.

            • bhtooefr
            • 12 years ago


    • moose103
    • 12 years ago

    Great review.

    All I get out of it is the WD drive rocks.

    • swaaye
    • 12 years ago

    It does look like they’ve hit the point of diminishing returns on cache size.

You know, I have to say that 1 TB is really big. I’m still happy in the 150-200 GB realm. And I’m not particularly low on the multimedia factor here. I don’t tend to keep stuff on the HDD tho, I burn stuff to DVD because I don’t really trust HDDs. Just imagine though if one of these 1 TB monsters puked out and you lost all that data! OMG! 🙂

    And my HDDphobia exists even though I’ve never had one totally go out on me. A few have become unstable though, with read errors popping up on big transfers. WD’s website-based quick RMA ship’n’swap is really great for times like that. One doesn’t even need to give them a call; just fill in your serial number and address and here comes a refurb!

    What I really would rather have is a solid state beast without any access time. I am happy enough with a couple hundred gigs of storage. The speed of HDDs is just so disappointing. We have these 3.0 Gb/s interfaces but drives that can barely maintain 60 MB/s (and that’s only when seeking is very minimal).

      • sigher
      • 12 years ago

      Yes indeed, mechanical is so old fashioned, I secretly feel the same about cars, explosion engine basically the same as 100 years ago, pfft, can’t we think up something better already.

      As for burning to DVD, even if you do that it’s still more convenient to have a copy on your HD, point and click rather than searching that damn DVD, “where can it be, oh I forgot I gave it to my pal to watch and he didn’t return it” fun.

      • droopy1592
      • 12 years ago

      When it’s 120 bucks, I’ll bite

    • Krogoth
    • 12 years ago

    Excellent HDD review as always Geoff.

I have to say that I am not that all impressed by any of the current batch of mainstream HDDs. They are about as fast as each other with some marginal differences here and there. The only three factors that I consider these days are warranty, price and capacity.

    The increasing on-board cache trick was always a band-aid to overcome some of the physical limitations in HDD performance. The 32MBs on 7K1000 only proved useful in IOpeak suite where it could fit 32-64 outstanding IO request onto its cache unlike its competitors.

The reason the 7K1000 falls short in other areas is that 200GB platter technology isn’t mature, plus the additional latency from having to drive five platters. I suspect firmware updates could bridge the gap.

    • Vaughn
    • 12 years ago

    The only thing I was looking at in those graphs were the WD 750GB results.

      • sigher
      • 12 years ago

      hehe, yeah it comes out a winner eh that WD drive.

    • Fighterpilot
    • 12 years ago

    Nicely put there Damage,did someone diss that guy when he joined up or perhaps his G/F ran off with a TR regular?
    Nice review as always and extra points for the cool,controlled response to the unjustified whine.
    ps..that was a good one ssidbroadcast LOL

    • skm
    • 12 years ago

    I wish this compared the drives I buy, even for reference: Cheetah u320’s and Cheetah SAS drives.

Then I would know how this drive stands up to the ones I am currently using, instead of standing up against drives that I passed over for the Cheetahs.

    I really want to know.

    • slasktratten
    • 12 years ago

    Where on earth are the real tests? Where are the random seek tests? The small file burst tests? The large file tests? The data transfer continuity tests? Worst test of a hard drive I’ve ever read.

    Here’s a snippet from /. that says everything about the tech value presented in this article:

    “OS loading times”? “Game loading times”? Wow. What incompentet, irrelevant, generalizing, illogical, no-knowledge nonsense. They should never put a humanitarian to do the job of a technician. If the same guy were to do a review of furnite he’d compare comfort of a sofa in terms of what color cloth was used, or the brand of after-shave used by the salesperson flogging the furniture.”

      • Damage
      • 12 years ago

Let me start by saying thanks for asking so nicely about our testing methods.

      I suppose you’re unfamiliar with some of the tools we use for testing, such as the various test patterns in FC-Test (which tend to stress different file sizes, depending on the pattern) or HD Tach (which generated the random seek time and sustained/burst transfer rate numbers we, ahem, reported). We probably could have explained what some of those tests do in more detail, although much of it does seem obvious given the data being reported (like, err, random seek times and sustained transfer rates). You also seem to be unfamiliar with tools like IOMeter, which is odd since it’s so commonly used to test scenarios where drives must service many outstanding I/Os simultaneously.

      We don’t emphasize purely synthetic drive benchmarks too greatly for good reasons, though, including the fact that drives with 8MB or better (this one has 32MB) of onboard cache tend to ace most traditional synthetic tests. We’ve even seen the same basic drive mech produce much different scores based on firmware mods. And, of course, at the end of the day, what you want from a drive is strong real-world performance, which doesn’t always correspond to good scores on synthetic tests. Seek times and such are important, but today’s drives are too complex for a simple set of metrics to tell you everything you need to know about a drive’s performance. That’s why we use a mix of synthetic and application tests. We even developed our iPeak tests to check a drive’s ability to handle multiple outstanding I/Os in a single-user type scenario.

      I believe we’ve presented enough synthetic data for those who wish to lean primarily on it, but if you have suggestions for other tools we can use in testing, do feel free to offer them. Otherwise, I’m not sure I see the point of your drive-by hit from the /. comments.

        • Dposcorp
        • 12 years ago

Nice reply Scott. I probably would have been less polite.

        However, I agree with #20, Fighterpilot.

His hard drive reviews were probably the world’s best, but TR showed up and knocked him down to #2 on the web, and now Newegg won’t advertise with him.

Man, it’s like the tech bubble bursting all over again.

        Time to sell all my shares of slasktrattenDriveReviews.com.

        On another note, good review as always.
Thanks for the final bit about price per Gig. I am one of those guys who has yet to buy something larger than 500Gigs, as that and 320Gig seem to be the sweet spot at this time.

          • continuum
          • 12 years ago

          heh. I’m not sure how polite I would have been, either.

        • PenGun
        • 12 years ago

        I would like to see a large file write test. Video capture at HD resolution needs very fast and sustained writes of gigabyte files. Arrays for this purpose are becoming more common.

          • titan
          • 12 years ago

          That’s what the FC Tests are for. The files may not be as large as an HD video file, but they’re large enough that you’ll get the same results with or without an HD file.

        • sigher
        • 12 years ago

        You obviously have a point in your reply but I do notice you call his negative comments ‘drive-by hit’ and suggest they are ‘pointless’ whereas the many meaningless ‘GREAT TEST, KISS KISS ON YOUR ASS’ comments don’t get an “Oh please stop brownnosing” reply.
        As if those comments are not more pointless.

          • flip-mode
          • 12 years ago

          If you’re going to criticize someone, it’s appropriate to have a basis for it. The same goes if you’re giving critical acclaim. If you’re just showing appreciation, which is what most of the “great review” comments are about, then there’s no need. IMO.

            • sigher
            • 12 years ago

            Well this fellow thought he had a legit complaint, even felt he was bolstered by others.

      • indeego
      • 12 years ago

      Nice. Not only did you not justify your comment with specifics about what was lacking, but the post you quoted lacked the same. The analogy presented within the /. post had no relation to what Damage presented as well.

<.<

      • swaaye
      • 12 years ago

      Yeah and apparently you read the review really closely. Heh.

      This is why I don’t bother with slashdot. The place is just jam packed full of people who “know better” about everything. Lots of philosophers and prophets over there. Uh huh.

        • titan
        • 12 years ago

You beat me to that. Reading through the comments on /. I was surprised at the sheer volume of people who proudly tooted their horn about how there are so many flaws in this review and how this statement or that is way wrong. Someone made an accusation that it was a paid review I think. Or maybe it’s just the quote that slas posted. The guys here at TR are very good at making sure they provide accurate reviews, and there are quite a few minutegerbils ready for war. 🙂

      • 5150
      • 12 years ago

      slasktratten, what you’ve just said is one of the most insanely idiotic things I have ever heard. At no point in your rambling, incoherent comment were you even close to anything that could be considered a rational thought. Everyone in this thread is now dumber for having read it. I award you no points, and may God have mercy on your soul.

      Heh, thought of that instantly.

        • Fighterpilot
        • 12 years ago

“Everyone in this thread is now dumber for having read it.” heheh ....funny one 🙂

      • titan
      • 12 years ago

      When did Geoff say he is a humanitarian and not a tech?

      • Krogoth
      • 12 years ago

      Congrats, you got a C- at trolling. Try better next time around.

    • Unanimous Hamster
    • 12 years ago

    Pentium 4 Extreme Edition 3.4GHz??? This hard drive was tested on a P4? I mean, I know the CPU isn’t really a major factor in hard drive tests, but isn’t the P4 a bit old to be used for benchmarking?

      • Vaughn
      • 12 years ago

      I think you answered your own question with that post.

      • indeego
      • 12 years ago

      Better to keep a legacy CPU around for consistency in test results than to upgrade the CPU needlessly and invalidate all the previous drive results, or have to retest every drive one by one on the new CPU.

      In fact, testing on a mature platform is IMO preferred. Fixes for things like driver interactions or motherboard BIOS/firmware bugs have likely been applied, so you are more likely to get consistent results. <.<

    • toxent
    • 12 years ago


    • Spotpuff
    • 12 years ago

    Bets on how long until someone complains that you lose 7% of the space in formatting.

      • evermore
      • 12 years ago

      Technically they just lose 7% of the space, period. Formatting isn’t what does it; using the drive in a PC that calculates capacity in binary does. It’s all been hashed out, lawsuits done; drive makers keep using their numbers but have to stick a little note on it indicating that 1GB equals 1 billion bytes.
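      For the curious, the “loss” is just arithmetic: the same bytes get divided by a bigger unit. A minimal Python sketch with nominal figures (nothing here is measured from the actual drive):

```python
# Back-of-the-envelope version of the "7% loss" mentioned above.
# Drive makers count in decimal (1 GB = 10^9 bytes); the OS reports
# binary units (1 GiB = 2^30 bytes), so nothing is actually removed --
# the same bytes are simply divided by a larger unit.

advertised_bytes = 1_000_000_000_000          # a nominal "1TB" drive

decimal_gb = advertised_bytes / 10**9         # 1000 GB on the box
binary_gib = advertised_bytes / 2**30         # ~931 "GB" as the OS reports it

print(f"On the box   : {decimal_gb:.0f} GB")
print(f"In the OS    : {binary_gib:.2f} GiB")
print(f"Apparent loss: {(1 - binary_gib / decimal_gb) * 100:.1f}%")
```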

    • evermore
    • 12 years ago

    Heh. I monitor clients’ servers. We got an event last night alerting that a disk drive’s free space was below 5% (the standard critical threshold); it was around 4%. I called their on-call tech and he said it was no big deal, since it’s a 10 terabyte drive array. Even 1% free space is still a hundred gigabytes.
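    As an aside, that’s a decent argument for alerting on absolute free bytes as well as a percentage once arrays get this big. A hypothetical sketch of such a check (the function name and threshold values are made up for illustration):

```python
# Illustration of why a flat "5% free" alert misfires on big arrays:
# 4% free on a 10 TB volume is still roughly 400 GB of headroom.

def low_space_alert(total_bytes: int, free_bytes: int,
                    pct_threshold: float = 0.05,
                    abs_threshold: int = 50 * 10**9) -> bool:
    """Alert only when free space is low as a percentage AND in absolute bytes."""
    return (free_bytes / total_bytes < pct_threshold
            and free_bytes < abs_threshold)

ten_tb = 10 * 10**12
print(low_space_alert(ten_tb, int(ten_tb * 0.04)))   # False: ~400 GB free, no panic
print(low_space_alert(500 * 10**9, 10 * 10**9))      # True: 10 GB left on a 500 GB drive
```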

    What’s with the triple-layer steel plating on the underside of the drive?

    What does that thing weigh? Could we build a moderately strong bunker wall out of them?

    Drive makers ought to focus on really fine-tuning single-platter drives for noise and power without losing performance. I know they’re GOOD, better than the old days, but still. I want a new-model SE16 single platter, with perpendicular recording. I don’t even consider drives without that at this point, for my primary drive.

    Highest-density platters? Seagate’s got single-platter 250GB drives, the ST3250310AS and ST3250410AS for instance, according to the spec sheets anyway. Seagate says 164Gb per square inch; Hitachi says 149 for the 7K1000.

      • sigher
      • 12 years ago

      I’ve wondered about that plate (or those plates) in the past too; what’s that all about? They’ve been on drives for ages.

        • evermore
        • 12 years ago

        Well, I figure it’s for physical strength. Drive casings are nothing but crappy soft metal (either pot metal or aluminum), so the motor might be able to torque the things out of shape. What I wondered was why there are three of them; I’ve only seen one on other drives. But I may have answered my own question: the motor in this drive must be a beast, and three plates may be there to ensure there’s no deformation.

    • Thebolt
    • 12 years ago

    Seems like a step in the right direction (increased capacity), but I don’t see this product as being worthwhile. I’d much rather buy two 500GB drives and perhaps have one as the boot drive and one for additional storage/other applications. Same capacity, lower price, potentially better performance.

      • shalmon
      • 12 years ago

      -Same capacity, less price, potentially better performance.

      and twice the drive electronics…i’d wager better performance fo sho

      -edit: and twice the cache πŸ˜‰

      • DASQ
      • 12 years ago

      It’s better for people who like to keep a lot of things of the same type all on one drive, like me.

        • evermore
        • 12 years ago

        Options:
        JBOD.
        NTFS drive mapping to folder (or similar ability on Linux and presumably Mac).
        RAID0.
        Stop pirating so much. πŸ™‚

          • shank15217
          • 12 years ago

          You forgot dynamic disk spanning through Windows. In fact, software RAID can be as fast as hardware RAID depending on the type.

            • evermore
            • 12 years ago

            That’s kind of covered by JBOD, but yeah. It’s also not really a RAID level, and it wouldn’t require much in the way of processing, since it’s not attempting to break data into pieces to spread over the disks or doing any parity. I imagine JBOD on a hybrid RAID controller, a hardware controller, or OS-level disk spanning would all perform exactly the same.
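            Purely to illustrate that last point, here is a toy sketch (not how any real controller is implemented, and the function names are invented): spanning only has to decide which whole disk a block lives on, while striping interleaves fixed-size chunks across drives, and parity schemes add arithmetic on top of that.

```python
# Toy model of the addressing difference between spanning and striping.

def span_disk(lba: int, disk_blocks: int) -> int:
    """Disk index when drives are simply concatenated (spanning/JBOD)."""
    return lba // disk_blocks

def stripe_disk(lba: int, chunk_blocks: int, n_disks: int) -> int:
    """Disk index when fixed-size chunks rotate across drives (RAID0-style)."""
    return (lba // chunk_blocks) % n_disks

# Two 1,000,000-block disks, 128-block chunks.
for lba in (0, 129, 1_500_000):
    print(lba, span_disk(lba, 1_000_000), stripe_disk(lba, 128, 2))
```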

    • gratuitous
    • 12 years ago
      • imtheunknown176
      • 12 years ago

      This is one of the first times I’ve read one of your comments, lol. Simulating use would be hard, because it’s the things you don’t think of to put in the simulation that would kill the drive.

      • evermore
      • 12 years ago

      I just remember it because it’s a funny name. I had a T7K250 as my primary drive for about a year, then migrated it to my PVR as the video capture drive and added a second to RAID them. They’re still going fine, been about 8 months with them in RAID. Of course they could both die simultaneously tomorrow.

      • ssidbroadcast
      • 12 years ago


        • evermore
        • 12 years ago

        That is freaking hilarious.

      • Mithent
      • 12 years ago

      That was only the 60GXP/75GXP though – I would expect that not only have IBM/Hitachi learned from that, but pretty much everything has been changed since 7 or so years ago?

        • Delphis
        • 12 years ago

        They really should have not carried over the name. I still don’t understand why Hitachi wouldn’t want to come up with something better that doesn’t remind people of a statistically high number of drive failures.

        • sigher
        • 12 years ago

        The Deathstar was from the time it was just IBM, before they got to be Hitachi/IBM.
        Just for clarity, because you mention the old model from the IBM days and Hitachi in one breath.

      • Nullvoid
      • 12 years ago

      I’ve still got an old 31GB IBM Deskstar 75GXP and an 82GB Hitachi Deskstar 7K250, in each case bought not long after they first became available (so they are roughly 6.5 years and 4 years old, respectively). They’ve never been out of service since being bought, and neither has shown any signs of dying. Lucky me?

        • sigher
        • 12 years ago

        I had two; they lasted a few years, but one died in a RAID setup (a bit vexing) and the remaining one died in an external casing.
        Not sure they were THAT bad for me, but certainly not the best drives I ever had; I have drives still running that I got long before those IBMs.

      • just brew it!
      • 12 years ago

      IIRC the “Deathstar” fiasco was pretty much isolated to one model of drive, manufactured in one factory (IBM’s Hungarian plant). I’ve used quite a few Hitachi Deskstar drives since then, and have been quite happy with them.

      I do have to agree with #21’s sentiment though — why in the world didn’t Hitachi change the name of the product line when they bought IBM’s hard drive business?

        • Shinare
        • 12 years ago

        Quite possibly because the positives of the already established product branding outweighed the one negative. Couple that with the fact that most people probably aren’t ignorant enough to believe that one bad apple spoils the whole bunch. Or maybe because people already know that a rose by any other name smells just as sweet?

        But I have no idea why, really.

      • bhtooefr
      • 12 years ago

