Super Talent’s 2.5″ IDE Flash hard drive

Manufacturer Super Talent
Model 2.5″ IDE Flash
Price (street)
Availability Now

IN AN ATTEMPT TO improve performance and extend battery life, mobile hard drive manufacturers are working on hybrid designs that combine flash memory with traditional platters. Flash memory’s fast access times, low weight and power consumption, and lack of moving parts make it ideal for mobile environments, which is perhaps why Microsoft has made hybrid drives a requirement for Windows Vista Premium certification starting in June of 2007.

Of course, the first hybrid hard drives aren’t even expected to become available until early next year. However, you don’t have to wait six months to get flash memory in a 2.5″ notebook hard drive; Super Talent’s 2.5″ IDE Flash drives are available today in sizes up to 16 GB. The prospect of a silent, lightweight notebook hard drive with frugal power consumption is certainly tantalizing, but what about performance? Join us as we run Super Talent’s IDE Flash drive through the wringer to determine whether it’s a worthy notebook upgrade.

Why flash?
Before we dive into Super Talent’s IDE Flash drive, it’s worth taking a moment to explore why hard drive manufacturers are bothering with flash memory at all. Traditional hard drives store data on spinning platters that are accessed by a drive head that darts back and forth across the platter’s surface. This design has stood the test of time, but because it relies on the physical movement of the platters and drive head, it’s bound by mechanical latencies. Cranking up the spindle speed of the platter, increasing cache sizes, and using more intelligent command queuing logic can reduce the impact of those mechanical latencies, but not by as much as ditching the mechanics completely.

Switching from spinning platters to memory chips takes drive mechanics out of the equation, enabling significantly lower disk access latencies. If data needs to be read from or written to a given address, there’s no need to wait for the drive head and platter to move into position—the address can be accessed instantly.

Moving to a memory-based storage solution like Gigabyte’s i-RAM can have a profound impact on disk performance. However, the i-RAM relies on volatile DRAM memory for storage, so it needs a steady stream of power to hold data. That simply won’t do in the mobile space, where users can’t afford to lose data just because their notebook battery has run dry.

Enter flash memory. Made famous by countless USB thumb drives, flash memory enjoys the low access latencies of chip-based storage without the volatility of DRAM. It’s not as fast as DRAM, but it doesn’t require power to retain data, making it ideal for mobile applications.

The drive
Rather than combine flash memory with mechanical platters, Super Talent is betting the farm on flash with its IDE Flash drive. That should ensure low access latencies, but interestingly, Super Talent doesn’t make any bold performance claims about the drive. That’s particularly notable because while flash memory has excellent access latencies, transfer rates aren’t always as impressive. We’ll see for ourselves when we get to the benchmarks.

Super Talent’s IDE Flash drive isn’t much to look at, but then, few hard drives are.

At first glance, the IDE Flash drive actually looks quite similar to a traditional 2.5″ mobile hard drive. Picking up the IDE Flash immediately reveals that it’s significantly lighter than its platter-based counterparts, though. The IDE Flash weighs a scant 40 grams—much less than traditional mobile ATA drives, which typically weigh around 100 grams.

Replacing a traditional hard drive’s mechanical internals and magnetic platters with memory chips allows the IDE Flash to drop a lot of weight. It also lets Super Talent build a slimmer drive than traditional 2.5″ ATA designs. The IDE Flash is just 6 mm thick, while most notebook drives are 9.5 mm thick. Still, the IDE Flash uses the same 44-pin interface and mounting screws as other 2.5″ ATA drives, making it a drop-in drive replacement for ATA-equipped notebooks.

Because it has no moving parts, the IDE Flash is considerably more shock-resistant than traditional ATA drives. That's particularly important for notebooks that tend to get moved around a lot. The IDE Flash drive also includes internal ECC logic to ensure data integrity, and Super Talent claims that the drive will retain data for at least 10 years. According to Flash IDE vendors, the drive is rated for more than 1 million write/erase cycles, as well. However, Super Talent doesn't publish write/erase cycle specs on its website.

Super Talent currently offers the IDE Flash in 4, 8, and 16 GB capacities, and only with an ATA interface. Those capacities aren’t particularly jaw-dropping in an era where notebook drives with perpendicular recording are pushing the 200 GB mark, but the 8 and 16 GB models offer enough storage for Windows and a few applications. Road warriors more concerned with battery life and weight may find the capacity trade-off easier to swallow, although the lack of a Serial ATA model will keep the IDE Flash out of most newer notebook designs.

 

Test notes
Today we’ll be comparing the IDE Flash drive’s performance to a slew of drives from our recent 2.5″ ATA hard drive round-up. We’ve thrown in 2.5″ mobile and 3.5″ desktop Serial ATA hard drives for good measure, as well. The IDE Flash is a unique product, so that’s about as close to direct competition as we’re going to get.

All testing was conducted on the same platform as our 3.5″ hard drive reviews, so you can compare the performance of the IDE Flash with a wider range of 3.5″ drives by flipping back to our Western Digital Raptor WD1500ADFD review. Our test system is also identical to the one used in our 2.5″ Serial ATA hard drive comparo, so those results are comparable, too. However, we have changed the way we conduct power consumption tests, so you won’t be able to compare power consumption scores from this review with those in our 2.5″ Serial ATA hard drive round-up.

Our testing methods
All tests were run three times, and their results were averaged, using the following test system.

Processor Pentium 4 Extreme Edition 3.4GHz
System bus 800MHz (200MHz quad-pumped)
Motherboard Asus P5WD2 Premium
BIOS revision 0422
North bridge Intel 955X MCH
South bridge Intel ICH7R
Chipset drivers Chipset 7.2.1.1003
AHCI/RAID 5.1.0.1022
Memory size 1GB (2 DIMMs)
Memory type Micron DDR2 SDRAM at 533MHz
CAS latency (CL) 3
RAS to CAS delay (tRCD) 3
RAS precharge (tRP) 3
Cycle time (tRAS) 8
Audio codec ALC882D
Graphics Radeon X700 Pro 256MB with CATALYST 5.7 drivers
Hard drives Seagate Momentus 7200.1 100 GB SATA
Seagate Momentus 5400.2 120 GB
Seagate Momentus 5400.3 160 GB
Seagate Momentus 7200.1 100 GB
Western Digital Scorpio WD1200VE 120 GB
Hitachi Travelstar 5K100 120 GB
Hitachi Travelstar 7K100 100 GB
Fujitsu MHV2040AT 40 GB
Super Talent IDE Flash 8 GB
OS Windows XP Professional
OS updates Service Pack 2

Our test system was powered by an OCZ PowerStream power supply unit. The PowerStream was one of our Editor’s Choice winners in our last PSU round-up.

We used the following versions of our test applications:

The test system’s Windows desktop was set at 1280×1024 in 32-bit color at an 85Hz screen refresh rate. Vertical refresh sync (vsync) was disabled for all tests.

All the tests and methods we employed are publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

 

WorldBench overall performance
WorldBench uses scripting to step through a series of tasks in common Windows applications. It then produces an overall score. WorldBench also spits out individual results for its component application tests, allowing us to compare performance in each. We’ll look at the overall score, and then we’ll show individual application results alongside the results from some of our own application tests.

Well that’s not encouraging. The IDE Flash scores lower than even the 4,200-RPM Fujitsu MHV2040AT in WorldBench. Let’s break down the overall score into individual application tests to see what’s holding back the flash drive’s performance.

Multimedia editing and encoding

MusicMatch Jukebox

Windows Media Encoder

Adobe Premiere

VideoWave Movie Creator

Although it’s competitive in the MusicMatch Jukebox and Windows Media Encoder tests, the IDE Flash drive is way off the pace in Premiere, where it takes nearly twice as long as our traditional 2.5″ ATA drives. The IDE Flash is also a little slow in Movie Creator, although not by as massive a margin.

 

Image processing

Adobe Photoshop

ACDSee PowerPack

Photoshop doesn’t present any problems for the IDE Flash drive, but ACDSee is a disaster. The drive takes three times longer than even our slowest 4,200-RPM drive to complete the test.

Multitasking and office applications

Microsoft Office

Mozilla

Mozilla and Windows Media Encoder

Despite its poor showing in the ACDSee test, the IDE Flash drive takes top honors in two of WorldBench’s multitasking and office application tests.

Other applications

WinZip

Nero

But the drive falls to the back of the pack in Nero and WinZip. Performance is poor in both tests, with the IDE Flash drive proving to be almost a third the speed of our fastest 2.5″ ATA drives.

 

Boot and load times
To test system boot and game level load times, we busted out our trusty stopwatch.

The IDE Flash drive doesn’t fare particularly well in our system boot time test, but it manages quick level load times in both DOOM 3 and Far Cry.

 

File Copy Test
File Copy Test is a pseudo-real-world benchmark that times how long it takes to create, read, and copy files in various test patterns. File copying is tested twice: once with the source and target on the same partition, and once with the target on a separate partition. Scores are presented in MB/s.
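To make the metric concrete, here's a rough sketch of how an FC-Test-style file copy measurement works, expressed in Python. This is an illustration of the idea only, not FC-Test's actual code; the function name and structure are our own.

```python
import os
import shutil
import time

def time_copy(src_dir, dst_dir):
    """Copy every file in src_dir to dst_dir and report MB/s.
    A sketch of an FC-Test-style copy measurement, not FC-Test itself."""
    total_bytes = 0
    start = time.perf_counter()
    for name in os.listdir(src_dir):
        src = os.path.join(src_dir, name)
        if os.path.isfile(src):
            shutil.copy(src, os.path.join(dst_dir, name))
            total_bytes += os.path.getsize(src)
    elapsed = time.perf_counter() - start
    return (total_bytes / (1024 * 1024)) / elapsed
```

Running the same routine with `dst_dir` on the same partition as `src_dir`, and again with it on a separate partition, gives the two copy scenarios FC-Test reports.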

Ouch. Seriously. Ouch.

Super Talent’s IDE Flash drive is painfully slow in FC-Test’s file creation tests, turning in transfer rates much slower than even our lowly 4,200-RPM mobile ATA drive.

The IDE Flash drive’s performance is much improved when we look at reads, but it’s still significantly slower than the 2.5″ hard drives.

FC-Test’s copy and partition copy tests combine read and write operations, so it’s no surprise to see the IDE Flash drive languishing at the back of the field. It’s no wonder Super Talent doesn’t make any outlandish claims regarding the drive’s performance.

 

iPEAK multitasking
We’ve developed a series of disk-intensive multitasking tests to highlight the impact of command queuing on hard drive performance. You can get the low-down on these iPEAK-based tests here. The mean service time of each drive is reported in milliseconds, with lower values representing better performance.

Although iPEAK would run on our 8 GB IDE Flash drive, the app did warn us that tests would have to be wrapped due to the drive’s small size. This means that any I/O requests that would have referenced areas of the drive beyond 8 GB would be wrapped around to the beginning of the drive.
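The wrapping behavior is simple modulo arithmetic on logical block addresses. Here's a small sketch of the idea, assuming a hypothetical `wrap_request` helper of our own; it is not iPEAK's actual code.

```python
def wrap_request(lba, length, drive_sectors):
    """Wrap an I/O request that falls past the end of a small drive
    back to its beginning, splitting the request if it straddles the
    boundary. A sketch of trace wrapping, not iPEAK's implementation."""
    start = lba % drive_sectors
    if start + length <= drive_sectors:
        return [(start, length)]
    # Straddles the end of the drive: split and continue at sector 0
    first = drive_sectors - start
    return [(start, first), (0, length - first)]
```

So a request recorded at an address beyond the 8 GB mark simply lands at the equivalent offset near the start of the flash drive.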

The IDE Flash drive shows, er, flashes of brilliance in our first round of iPEAK multitasking loads, taking second place in tests that include a VirtualDub import as a secondary task.

 

iPEAK multitasking – cont’d

iPEAK multitasking loads that include a VirtualDub import continue to be fertile ground for Super Talent’s IDE Flash drive, although it’s considerably slower in tests that involve a file copy operation as the secondary task.

 

IOMeter – Transaction rate

Super Talent drops an ace in IOMeter, as the IDE Flash drive’s transaction rates dominate the field in three of the four test patterns. Performance is particularly impressive with the web server test pattern, which is made up exclusively of read operations.

 

IOMeter – Response time

The IDE Flash drive continues its strong performance when we look at IOMeter response times. Again, the drive is quicker than the rest of the field with all but the database test pattern. It continues to own the web server test pattern, too.

 

IOMeter – CPU utilization

CPU utilization is pretty consistent across the board, although the IDE Flash drive consumes more processor cycles with the web server test pattern. Given the drive’s extremely high transaction rates with that test pattern, a little overhead is to be expected.

 

HD Tach
We tested HD Tach with the benchmark’s full variable zone size setting.

No wonder the IDE Flash drive faltered in FC-Test; its sequential transfer rates are incredibly slow. The drive doesn’t even manage to push 14 MB/s when reading, and write performance is closer to 6 MB/s.

Burst performance is poor, as well, with the IDE Flash drive barely eclipsing 14 MB/s.

However, the tables turn when we look at random access times. Here, the IDE Flash drive is faster than the competition by more than an order of magnitude.

CPU utilization scores are well within HD Tach’s +/- 2% margin of error in this test.

 

Noise levels
Noise levels were measured with an Extech 407727 Digital Sound Level meter 1″ from the side of the drives at idle and under an HD Tach seek load. Drives were run with the PCB facing up.

Since the IDE Flash drive has no moving parts, it’s essentially silent. However, we’ve still included noise level results to illustrate the drive’s impact on overall system noise.

The IDE Flash drive’s silent design shaves a few decibels off our system’s noise levels. That’s not a whole lot in the grand scheme of things, but the IDE Flash drive’s utter silence could have a much bigger impact on systems designed specifically for quiet operation.

Power consumption
For our power consumption tests, we measured the voltage drop across a 0.1-ohm resistor placed in-line with the 5 V and 12 V lines connected to each drive. Through the magic of Ohm’s Law, we were able to calculate the power draw from each voltage rail and add them together for the total power draw of the drive.

This is why flash memory is making its way into hybrid mobile hard drive designs. Even with 8 GB of storage, the IDE Flash drive barely sips power when compared with traditional ATA drives.

 

Conclusions
Super Talent warned us that the IDE Flash drive’s transfer rates weren’t anything to write home about, and they were certainly right. Sequential transfers are much slower than even our 4,200-RPM mobile ATA drive, with write performance lagging behind reads by a considerable margin. That proves disastrous for the drive’s performance in FC-Test, and likely also contributes to its sluggish showing in certain WorldBench component tests.

Despite its poor transfer rates, the IDE Flash drive showed its potential in our iPEAK multitasking tests. The drive also dominated IOMeter’s file server, web server, and workstation test patterns. Clearly, there’s some value to flash memory’s blazing-fast access times, thanks to the banishment of the mechanical latency associated with platters and heads. The drive’s nonexistent noise levels and minimal power consumption also have considerable appeal.

Unfortunately, the IDE Flash’s most attractive attributes don’t match up all that well. Silent operation and low power use would make this thing ideal for laptops, but slow transfer rates and a low WorldBench score blunt its appeal considerably. The IDE Flash drive’s real performance potential lies with applications that, like IOMeter’s server-oriented test patterns, take advantage of its quick access times. Those applications seem less likely to benefit significantly from lower noise levels or power consumption, though.

In the end, Super Talent’s IDE Flash drive is an intriguing alternative to 2.5″ ATA hard drives, but probably not one that’s likely to have widespread appeal. Fortunately, the drive isn’t prohibitively expensive—4 GB versions are selling for under $200 online, with 8 GB models at about $320, and 16 GB flavors running closer to $530. Those prices make the IDE Flash drive affordable enough for niche applications like ruggedized notebooks, silent media center systems with remote storage, and even automotive applications. We just wouldn’t drop one into an everyday business notebook. 

Comments closed
    • eguy
    • 13 years ago

    http://www.dvnation.com/nand-flash-ssd.html is on top of the SSD market with new SSDs up to 64GB, sata and ata from PQI and the Samsung line of SSDs any Hybrids coming out soon. I got a PQI 16GB drive from them. It really does boot a PC faster than a mechanical disk, even with a lower throuput rating. My laptop boots 21% faster on an SSD thats only rated for 15MB/s read, 13MB/s write... Probably due to the 1ms seek time. 13X faster than a mechanical drive. They are touting 5,000,000 write cycles, 10 year life expectancy, etc....SuperTalent drives are not very good. Do NOT use them as system drives. Windows installation will last about a day before a blue screen of death and you'll have to completely reload windows. For a data drive, they may be o.k. Stick with PQI and Samsung. They are robust.

    • wierdo
    • 13 years ago

    Sorta half arsed design imho, looks like they just slapped some flash on a board and called it a hard drive… might as well make a 20000rpm ATA drive with zero cache while yer at it 😛

    … someone do the job right. The potential is there, just needs better engineering.

    • stmok
    • 13 years ago

    I wouldn’t use this solution on a desktop, but definitely for something low-powered and silent. (sitting quietly in the corner).

    I use those IDE “Disk-On-Modules” (DOMs) when I build BSD and Linux based firewalls. (I set it such that its Read-Only for everything except when you need to save the settings such as IP filter settings. Logs go to a NAS or an admin PC via Syslog)…They’re wonderfully silent and ridiculously (overkill) powerful for the purpose.

    I hope one day, we’ll reach to the point where we can completely replace these hard disks, with solid state solutions that are very reliable and acceptable in performance, but reasonably affordable. I guess we’re still at the very first baby steps to that path.

    Maybe they should have a write buffer of say, 256MB for constant read/write operations? (I suppose it would be a bit difficult in finetuning an algorthim such that it knows when to write data from buffer to storage only when necessary to prolong life of the drive…Of course, there’s also a problem when there’s a sudden blackout or batteries running out, etc or some power-out scenario).

    I know a potential problem arises with WinXP (and possibly Vista). Even when you turn off the swap file in WinXP, it will override you as a “safety feature” by creating a small one. (In Linux or BSD, you don’t create a swap partition and that’s it…But make sure you have plenty of RAM, because if it ever runs out, it will crash!).

    • TSchniede
    • 13 years ago

    Flash drives seem to be in some kind of dilemma, the easy design like this one work like the classical drive spanning with raid-systems and are quite slow on sequential transfer patterns.
    Interleaving needs a more complex logic (it must have very small blocks) and thus draws more power. Unfortunately because of the simplified clear logic of flash rom even greater data must be accessed on a write access. Read acces would be mostly faster, but write slower.
    I suppose the good old cache on the harddisk would work miracles combined with a very efficient drive logic.
    I’m only afraid the power advantage would disapperar.
    MRAM or any non-volatile memory with significantly faster read/write access is more suitable, but I suppose the old rule of computer industry wins – if the new technology needs several years and millions of dollars tobe better than current high-end parts of the old technology, better improve the old one (less risk involved).

      • Stranger
      • 13 years ago

      as long as its onchip the power requirerment of more logic would be minimal. even if it wasn’t the drive could suck 9 times as much power and still only consume half of the nearest competitor. The problem with more of the chip has to be dedicated to buses and over head stuff the less bits you can packinto any given area of silicon, which I have a feeling the flash companies are more interested in increasing storage copacity then anything else.

    • albundy
    • 13 years ago

    yeah, your gonna need that extra power for dx10 running 24/7.

    oh well, i guess i can chuck my super slow 15k scsi drive and get a speedy 7200rpm monster.

    • Captain Ned
    • 13 years ago

    Looking at the first picture on Page 1, I’d like to know who the pin-bender is. 😉

      • Dirge
      • 13 years ago

      Haha yes I noticed the bent pin as well, just too polite to point a finger 😛

        • Capsaicin
        • 13 years ago

        Better not let me near one of your IDE drives… 😳

      • Dissonance
      • 13 years ago

      Yeah, this mobile ATA pin adapter I have’s a pain to get on and off, and I have little patience. I’ll take the blame 😉

    • Chrispy_
    • 13 years ago

    So:
    Flash disks like this have awful transfer speeds and incredible access times, as well as minimal power consumption.

    Traditional disks have excellent transfer speeds but relatively poor access time and power consumption.

    When will we see a disk that combines both technologies, but specifically – with logic that determines which files go on which media? Horrifically oversimplified, but files over 16KB in size go on the mechanical platter and the smaller files on the flash disk…..

    In other words, when will we see the “performance” hybrid drives rather than the “power-saving” hybrid drives?

      • muyuubyou
      • 13 years ago

      I guess Samsung offering for the desktop are/will be just that.

    • swaaye
    • 13 years ago

    USB 2.0 flash sticks are not what I would call fast either, especially writing. I was sorta wondering about why everyone has been drooling over flash hard drives supposedly being faster. They will save power though. I’m sure that’s their primary reason for existence.

      • Trymor
      • 13 years ago

      Are you sure people have been drooling over Flash drives? Mabey you are thinking about the more general ‘Solid State’ drives, which includes flash drives, but also faster ‘memory’ technologies.

      Give me a large, affordable i-Ram with memory included, and I’m all over it.

        • swaaye
        • 13 years ago

        Nah I’m thinking of the coming hybrid notebook HDDs with both a normal mechanism and lots of flash RAM.

          • Trymor
          • 13 years ago

          I have been wondering the same thing for a while now. At least for enthusiasts, it seems that these hybrids would just slow down performance, unless there is a mechinism to bypass the flash on larger transfers…

    • Stranger
    • 13 years ago

    I think these kind of drives have alot of potential in the future. Right now i don’t need a ton of read/write speed on my laptop that I mostly use for creating documents, email and webbrowsing. While I don’t need a massive ammount of computing power I’m always running out of battery during the day. to bad IBM decided to use a freaking 1.8 in drive over whatever the next size small is in my laptop.

    What I found interesting was how many apps seemed incredibly latency limited rather then Bandwidth limited. While the drive had something like a tenth of the bandwidth it still scored within 20% of the leading score in world bench.

    I’d Imagine that in the future the speed of flash chips will go up drasticly. The way I see it, it doesn’t seem like most flash chips are designed for high speed. while it might not be feasible to drasticly increase the the rate at which a bit is read and writen to, it seems relativly easy to increase the number of bits being read/writen to at the cost of more over wiring and suport overhead.

    IANAEE

    Edit: I love the article. This is the cool off the wall kind of technology that Tech Report should be covering all the time.

    I have one request though. Record some boot times for windows XP starting up. I wonder what kind of results you would get.

    added random thoughts.

      • babybalrog
      • 13 years ago

      the 10 MB/s transferrates just sound too fishy to me. Like the whole thing is running single channel. I know you can have multiple channels with flash RAM just like DRAM so why dont they use 2 or 4 channels internally to get the transfer speeds up?

      Or maybe theres a problem with there ATA to Flash memory transition logic?

      half thought product personally… good concept

      • Dissonance
      • 13 years ago

      Our system boot time test includes Windows XP startup.

        • Stranger
        • 13 years ago

        Man I’m really ADDing out today. Thanks for pointing that out. I think its interesting that booting windows is not very bandwidth intensive.

        I have one more quick question. did you have a chance to pop the drive open and check out the circuit board inside? It would be interesting to see how the chips are layed out and what types they used.

          • Dissonance
          • 13 years ago

          Unfortunately, it doesn’t look possible to pop the drive open without busting the casing. Not sure I want to maul it just yet.

            • Stranger
            • 13 years ago

            Understandable. Any idea what kind of chips they used?

            Edit: or know of any documentation you could point me to?

    • Beomagi
    • 13 years ago

    that would complement the panasonic r5 nicely – 11hours already.
    http://dynamism.com/r5/specs.shtml when you're pulling that little, even small changes make a difference.

    • ripfire
    • 13 years ago

    I would think this would be a good candidate as a UMPC drive.

    • UberGerbil
    • 13 years ago

    Hybrid drives: http://arstechnica.com/journals/microsoft.ars/2006/6/14/4328

    • Crayon Shin Chan
    • 13 years ago

    Flash drives’ limited read/write capacities have me worried, even more so if it’s in a hard drive. People may say it’s not a big deal, but once you’ve got a system installed and running, the leftover space is going to be written to and read from a lot.

    • SpotTheCat
    • 13 years ago

    imagine 8 of these in raid-0, might turn around the transfer rate problem

      • Logan[TeamX]
      • 13 years ago

      Assuming 100% throughput… it’s still not feasible. Not worth doing, anyways aside from the pure power benefit of it.

        • Vhalidictes
        • 13 years ago

        128GB for a web/database server? With none of the reliability questions that normally occur on a RAID-0 stripe set? Seriously, sign me up.

        No power use, no latency, and the boosted transfer rate can cover up the Flash’s speed problems.

        Add in the fact that most Web/application server programs are read-heavy, write-light, and it’s a natural fit. Heck, it’s not even that expensive.

        The only problem here is that I don’t have any servers in my home lab that require this kind of access-speed or the power savings.

    • wierdo
    • 13 years ago

    Doesn’t look like it’s a good design. these guys trying to just put something on the market in a hurry or something? 😛

      • Saribro
      • 13 years ago

      What other design would you suggest ? A new type of flash that doesn’t suck at bandwidth ?

        • wierdo
        • 13 years ago

        there are many performance variations between different flash chips, but that aside, they could do better with perhaps a better caching design akin to what regular hard drives do to hide the weaknesses of common hard drive technology.

    • IntelMole
    • 13 years ago

    I wonder if this would find use in a server. TR has reviewed a server hard drive array as I recall that only used 2.5″ drives.

    Provided you don’t need huge amounts of storage, a few of these in some form of redundant array (write cycle limitations and all that), and you’ve got a wicked fast array for small transfers.
    -Mole

    Edit: Not a drive array, I was thinking of this:
    https://techreport.com/reviews/2004q4/seagate-savvio/index.x?pg=1 I wonder how it would have done in comparison with things like IOMeter...

    • exitstageleft
    • 13 years ago

    Lets not forget these guys… http://www.bitmicro.com/products_storage_devices.php

    • indeego
    • 13 years ago

    Test the laptop’s uptime with this thing!

    • Logan[TeamX]
    • 13 years ago

    Power consumption: thumbs up

    Everything else: thumbs WAY down. Diss had the application areas right too – for those who would likely never notice the loss in speed. Most IT guys would rather chew nails than suffer that level of performance.

    This is the letdown of the year.

      • indeego
      • 13 years ago

    I’m an IT guy and my users demand light, small gadget’s with long uptime and rarely never complain about performance. Most are running W2K with 933 mhz proc’s just fine. It’s not groundbreaking but those power and noise numbers don’t lie.

        • Logan[TeamX]
        • 13 years ago

        Oh it is nice… but the guys with Latitude D810s complain about low battery times, D510s complain about video card, D410s that complain about media bays being needed and X1s complain about slow performance due to ULV processors. Oy.

          • indeego
          • 13 years ago

          I have attorneys with 4 pound laptops compaining about the weight………..

    yeah, OK. Give me a break, you weak ass mofos.

            • Jigar
            • 13 years ago

            Sure u r very smart … cool * fool*

    • Trymor
    • 13 years ago

    Edit: ment as a reply to Dirge.

    Put 2 Gig of system memory in, and turn the friggin pagefile off. All of my machines with 1Gig or better have the pagefile off. There are a few programs that complain without one tho (none that I use).

      • _Shorty
      • 13 years ago

      and every single one of those machines would perform better, get work done quicker, if you turned the pagefile back on. Do some research on memory management. It is never a good idea to turn the pagefile off.

        • Stranger
        • 13 years ago

        Thats a strong statement to make. Theres plenty of computing devices that get away without a pagefile on a disk. Theoreticly as long as your working set never exceeds the size of your ram you should be fine, although i’ve heard Windows is bad at handling large ammounts of memory but that could change if its not allowed to write pages to the drive. Do you have any data that backs up your claim?

          • _Shorty
          • 13 years ago

          ram is used for much more than just a given app’s working set. The fact that pretty much any OS in use today goes by the same general guidelines for memory management should be more than enough proof for anyone. Do you honestly think all those engineers writing the memory management code for all those OSes are doing things similarly because they’re stupid? Honestly? Anyway, you seem to have a much firmer grasp of things than Trymor does. It’s clear he doesn’t really understand computers and what’s involved with memory management much at all. Sounds like he’s read a lot of windows tweaking sites written by the ‘experts’ though, heh. Sorry, Trymor, but there’s more to memory management than what you think you know. Perhaps you’ll feel the urge to do some real research on the subject, rather than believing what some ‘expert’ who tells you to turn on ‘DisablePagingExec’ because that disables the pagefile…

            • Trymor
            • 13 years ago

            Sorry _Shorty, I understand more about memory management then you think. I also know that concessions have to be made to work in multiple computer configurations. When XP was designed, 256-512 Meg system memory was the ‘norm’. That was the target for the most efficient use of memory. XP is known to have less efficient memory management with the pagefile on and large amounts of system memory.

            Did you use Windows 98? Have you heard of [386Enh] ConservativeSwapfileUsage=1? Oh my god, did that ever improve system performance! Put 256-512 meg of memory in the machine with that enabled, and you have a LOT better machine performance. Even with only 128 meg of memory, performance was much improved while only running smaller programs.

            That was one example of how memory management worked well for low amounts of RAM (32-128 meg) but poorly for larger amounts. Hmm, were the engineers stupid? Or did they have a target machine configuration they optimized for?

            I have already stated in a different post that disabling the pagefile isn’t for everyone, and that in my original post I was making a bit of a smart-ass reply to another post regarding flash cycle life, not just telling everyone to turn off their pagefiles.

            I did not read any articles written by ‘experts’. I read articles written by ENTHUSIASTS (as I have stated before), who like to get better performance out of their computers, and have done so, and like to share their knowledge with others (and no, they are not ALL right).

            By the way, I hope you were kidding about ‘DisablePagingExecutive’, because that isn’t the way to turn off the pagefile… and I don’t see how anyone could even think that it is.

            You are welcome to dwell on theory and design, but many of us ‘enthusiasts’ are enjoying better performing systems right now.

            (edited for spelling)

            • _Shorty
            • 13 years ago

            Why you mention Win9x at all is beyond me; it isn’t relevant in any manner. And I mentioned DisablePagingExecutive because that’s exactly the advice your ‘enthusiast’ tweaking sites give. Tons of them honestly think it turns off paging. And I’m sorry to tell you, but you don’t know all you think you do about memory management. Your point of view proves it. Turning off the pagefile is not advantageous.

            • Trymor
            • 13 years ago

            I mention Windows 98 as an example of poorly working memory management. Memory management isn’t automatically perfect.

            DisablePagingExecutive is not mentioned for turning off the PAGEFILE. It is talked about as a setting relating to kernel paging, and I haven’t changed that setting; it can have detrimental effects on the operating system. I don’t know what sites you visit.

            My point of view is actually experiencing a performance increase after turning off the pagefile. And it is advantageous.

        • Trymor
        • 13 years ago

        Shorty, research isn’t everything. I have run every single one of my machines with a page file, and every one without. THERE IS A VERY NOTICEABLE INCREASE IN PERFORMANCE (snappiness) WITHOUT A PAGEFILE!!!!

        There is no more hard drive thrashing. There is lower power usage.

        Microsoft brainwashed people into thinking XP had great memory management in the first year of its existence. If you did any ‘enthusiast research’, you should find a bunch of information saying to ‘dump a bunch of RAM into your box and turn off the pagefile’. Most of these articles also noted that there are circumstances and certain programs where you are better off with a pagefile, if only a 2 meg one for the system dump on a crash. They also warned to make sure you have a minimum of 256 meg free after loading your programs, to make sure there is enough left for the disk cache.

        The proof is in the pudding – reading and theory are fine, but until you try something yourself, you won’t know for sure.

        Edited to a little nicer discussion tone 😉

          • Usacomp2k3
          • 13 years ago

          More power usage? 🙄

          I always recommend leaving the paging file on. Always. I didn’t always think that way, but then I realized that if I stopped tinkering and trying to outsmart the computer, the system ran much more stably. My advice: just put a Windows-managed paging file on a different spindle than the OS and call it a day.

            • Trymor
            • 13 years ago

            Whatever works for you. I have zero stability problems, but the only tweaks I have done to the OS (that I can think of) are turning off the pagefile, putting the OS, the temp directory, and the programs directory all on their own physical hard drives, and turning off some of the interface ‘eye candy’. Makes for a very ‘snappy’ computer.

            Yeah, the power usage line was a stretch…even if it is true. 😉 LOL

            • Trymor
            • 13 years ago

            And by the way, the whole pagefile thing was supposed to be in response to the concern of having the pagefile on a flash based drive, and the longevity of the drive, not just blatantly telling everyone to turn their pagefile off…

    • tygrus
    • 13 years ago

    If the read (<17MB/s) and write (<5MB/s) speeds are this slow … I don’t want it. I need read >35MB/s and write >15MB/s, and about 32GB bare minimum. Make the internal interface wider and faster (more power consumption, but still much less than a HD). It might be OK for really small devices, but doubling the time required to complete a task means I get half as much done per charge even if the battery runtime is longer.
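
tygrus’s half-as-much-done-per-charge point is easy to sketch with back-of-the-envelope arithmetic. All the numbers below are made-up illustrative assumptions, not figures from the review:

```python
# Rough sketch of the tradeoff tygrus describes: a frugal drive extends
# battery runtime, but if tasks take twice as long you can still get
# less done per charge. Every number here is an illustrative assumption.

def tasks_per_charge(battery_wh, drive_watts, other_watts, task_seconds):
    """Tasks completed on one battery charge."""
    runtime_s = battery_wh * 3600 / (drive_watts + other_watts)
    return runtime_s / task_seconds

# Hypothetical mechanical 2.5" drive: ~2.5 W, I/O-bound task takes 60 s.
hdd = tasks_per_charge(battery_wh=50, drive_watts=2.5, other_watts=15,
                       task_seconds=60)
# Hypothetical flash drive: ~0.5 W, but the same task takes twice as long.
flash = tasks_per_charge(battery_wh=50, drive_watts=0.5, other_watts=15,
                         task_seconds=120)

print(f"HDD: {hdd:.0f} tasks/charge, flash: {flash:.0f} tasks/charge")
```

With these hypothetical numbers the flash system runs longer on a charge but completes noticeably fewer tasks, which is exactly the complaint.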

    • Perezoso
    • 13 years ago

    Nice review, Geoff. Thanks.

    • ssway
    • 13 years ago

    Wait a couple more years for MRAM based drives….=)

    • Prototyped
    • 13 years ago

    Did you folks check to see how the limited write cycles affected the flash memory? I imagine repeated erase/write cycles would severely limit the longevity of the flash-based drive compared to a magnetic-platter design.

      • Perezoso
      • 13 years ago

      Let me tell you… We have been running around 100 IDE flash drives for more than 3 years and we have yet to see the first drive failure. Before that, HDD failures were very frequent due to vibrations and heat. This was a major maintenance issue. Sustained transfer rates are meaningless for our application. We don’t need more than 60MB so we use smaller 128MB IDE flash drives.

      http://www.transcendusa.com/Products/ModDetail.asp?ModNo=26

        • indeego
        • 13 years ago

        You don’t have a pagefile on there, right?

          • Perezoso
          • 13 years ago

          These industrial computers run DOS. 😀

            • Trymor
            • 13 years ago

            Can’t compare that with running Windows XP. DOS doesn’t access the ‘hard drive’ at all when running many programs, once they are booted up and loaded…

            • Perezoso
            • 13 years ago

            True. But our application writes to the disk every few seconds, 24x7x365.

            • Trymor
            • 13 years ago

            If it is a minuscule amount of data, the flash logic should put the data in a different area after a certain number of writes. I wonder what that works out to for ‘number of writes per block/sector’.

            Even so, sounds pretty reliable so far…
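
What Trymor is guessing at here is wear leveling. A toy sketch of the idea — not the actual controller logic of any real drive — might look like this:

```python
# Toy wear-leveling sketch: each write of a logical block is remapped to
# the least-worn physical block, so hammering one logical address still
# spreads erases across the whole device. Real flash controllers are far
# more sophisticated; this only illustrates the concept.

class WearLeveler:
    def __init__(self, num_blocks):
        self.erase_counts = [0] * num_blocks
        self.mapping = {}  # logical block -> physical block

    def write(self, logical_block):
        # Pick the least-worn physical block not currently mapped.
        used = set(self.mapping.values())
        candidates = [p for p in range(len(self.erase_counts)) if p not in used]
        phys = min(candidates, key=lambda p: self.erase_counts[p])
        if logical_block in self.mapping:
            # Old copy gets erased, costing one program/erase cycle.
            self.erase_counts[self.mapping[logical_block]] += 1
        self.mapping[logical_block] = phys
        return phys

wl = WearLeveler(num_blocks=8)
for _ in range(1000):
    wl.write(0)  # hammer a single logical address

# Wear stays nearly even across all 8 physical blocks.
print(max(wl.erase_counts) - min(wl.erase_counts))
```

So even a tiny log file rewritten every few seconds ends up distributing its writes over every block, which is why ‘number of writes per block/sector’ is much lower than the raw write count.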

            • Stranger
            • 13 years ago

            You’re missing the point. It seems as though the average number of times written per bit in a given time is comparable to standard hard drives.

            One point that was never mentioned in the article is that I believe Linux has special file systems that can be used to enhance the life expectancy of flash-based drives. For example, the OLPC uses flash and, I think, JFFS2.

            http://wiki.laptop.org/go/Hardware_specification

            Edit: got the file system name wrong

            • Trymor
            • 13 years ago

            My responses were aimed specifically at Perezoso, who has been running ‘IDE flash drives’ for 3+ years. I was assuming he is using old technology (possibly compact flash with IDE adaptors). If that is the case, the cycle rating should be much lower, yet they are still working…

            • Perezoso
            • 13 years ago

            Hmm… We tested five or six different drives from different manufacturers during the first year and finally standardised on Transcend’s TS128MDOM40V (link in my first reply) in 2004. It’s simpler than you suggest and nothing new on the market. The only things that changed recently are the endurance rating (from 100,000 program/erase cycles to 1,000,000 now), greater capacity models, and the sticker color (from magenta/blue to white/blue). :p

            Updated datasheet: http://www.transcendusa.com/Support/DLCenter/Datasheet/DOM40V_128MB_2GB.pdf

            • Trymor
            • 13 years ago

            My bad. Didn’t look at the drive form factor you linked, but the lower cycle life was correct. Still wish there was a way to know how many read/writes are on the most used bits. Seems like cycle life won’t be a concern for many uses/users.

            • Perezoso
            • 13 years ago

            I guess you’re thinking JFFS2.

            • Stranger
            • 13 years ago

            That’s it.

            Man, I must be losing it. I could have sworn Trymor’s post read something different this morning.

            Edit: here’s the wikipedia link for anyone interested
            http://en.wikipedia.org/wiki/Jffs2

    • melvz90
    • 13 years ago

    pretty dismal thing!!!

    aren’t flash drives supposed to have a limited number of read/write cycles… like some CFs about a year ago… like a 100K read/write cycle rating… what happens after that?!? will a hybrid’s flash mem have a maximum number of read/write cycles too?!?

      • droopy1592
      • 13 years ago

      read the article… supposedly has 1 meeeeeeeeeeeeeeeellion write cycles.

        • melvz90
        • 13 years ago

        yeah… so they claim?!? but does it deliver?!? don’t see a test proving it can take a million cycles…

          • ripfire
          • 13 years ago

          How long do we really need the drives until we need to upgrade anyway?

            • indeego
            • 13 years ago

            1 meeeeeeelion and one cycles <.<

            • melvz90
            • 13 years ago

            just a hunch.. about a year or 2 maybe… depends on the OS/apps you put on the drive.. say you use it as a boot drive and put a sizeable pagefile/swapfile on it… it might reach that read/write cycle limit pretty fast…
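
That hunch is easy to sanity-check with arithmetic. Assuming ideal wear leveling (a big assumption), the total data you can write is roughly capacity × rated cycles; the workload figures below are illustrative guesses, not measurements:

```python
# Rough endurance estimate for a flash drive used as a boot/pagefile
# disk. All inputs are illustrative assumptions.

def years_until_worn(capacity_gb, rated_cycles, gb_written_per_day,
                     wear_factor=1.0):
    """With ideal wear leveling, total writable data ~= capacity * cycles.
    wear_factor > 1 models write amplification / imperfect leveling."""
    total_gb = capacity_gb * rated_cycles / wear_factor
    return total_gb / gb_written_per_day / 365

# 16 GB drive, 1,000,000-cycle rating, a heavy 20 GB/day of pagefile traffic:
print(f"{years_until_worn(16, 1_000_000, 20):.0f} years")
# Even at the older 100,000-cycle rating it works out to centuries:
print(f"{years_until_worn(16, 100_000, 20):.0f} years")
```

Under these assumptions the cycle limit is nowhere near reachable in a year or two, though heavier write amplification (larger wear_factor) shortens the estimate proportionally.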

            • TSchniede
            • 13 years ago

            pagefile… since the pagefile on one of these disks tends to be VERY slow, and large amounts of RAM seem to be quite common nowadays, using a pagefile is probably somewhat worse than none at all.

            • _Shorty
            • 13 years ago

            no, a pagefile allows for better use of your RAM. It’s never better to disable the pagefile. Disabling the pagefile only accomplishes one thing: making your box perform worse.

            • Stranger
            • 13 years ago

            can you cite your sources for that claim?

            • stmok
            • 13 years ago

            That’s a very bold statement to make.
            Have you actually read what the pagefile or swap partition does?

            I’m not sure what substance you’ve been smoking lately, because in our world, it’s certainly not for making better use of your RAM. It’s for when you don’t have enough RAM on your system to accomplish a task!

            People get more RAM to avoid paging. Paging is a performance killer. Windows starts doing it when you have about 5MB of RAM left.

            • Saribro
            • 13 years ago

            Actually it happens all the time: infrequently used pages get moved to the pagefile regardless of how much RAM is being used. It’s meant to clear RAM for system cache and keep more space available for (new) active working sets.
            Now, if you have, say, 1GB of RAM and your memory use isn’t too high, somewhere in the 300MB range for example, that still leaves 700MB for system cache or new programs, which is more than plenty in all but corner cases. In such situations a pagefile is not a gain. In the same situation with only 512MB of system RAM, you might actually run into slowdowns without a pagefile, because the system cache is too small; in that case, having 100-150MB of rarely used pages moved out of RAM would be an advantage.
            It really depends on the situation and the use.

            • Trymor
            • 13 years ago

            Even with 2 gig of memory and 512 of it free, if you have 4 programs open, Windows can page the memory of the least-used program to the hard drive after a period of time. When you switch to that program, the hard drive will thrash, and there will be a delay before you have full use of that program.

            Now in the same scenario, except with the pagefile off, when you switch to the least-used program, you can use it instantaneously, with no pause and no hard drive access.

            Those situations may change depending on what types of programs are running, but in my situation (1Gig or more ram), there has never been an increase in performance with the pagefile turned on.

            Now, if you only have 512 meg of RAM in Windows XP, leave the pagefile on. Disk cache versus program space becomes important, and I feel sorry for you, as you are losing a lot of performance (if only in the snappiness of the operating system and changing between open programs).

            And like most things, there are no absolutes. Certain situations change certain requirements, but I believe more than 50% of people with 2 gig of RAM will be able to disable the pagefile and gain performance (snappiness being the most noticeable).

            • Saribro
            • 13 years ago

            Yes, thank you for repeating everything I said :p.

            • Trymor
            • 13 years ago

            No problem. People like _Shorty seem to understand better with examples, rather than a direct design description…

            • _Shorty
            • 13 years ago

            what you fail to understand is, the apps that are actively doing work will more than likely get their work done faster because they’ll have more ram to work with. Yes, your switch to the least active app took longer to accomplish. That’s because it wasn’t doing anything worth keeping those pages of memory in RAM for. That RAM was put to better use elsewhere, by your other apps that were active. Anyone with even a basic clue about operating systems and memory management will tell you exactly the same thing. I’ve been part of similar discussions several times over the years during Windows beta testing programs, and every single time someone who deals with, or writes, the memory management code pipes up and gives the exact same explanation I’ve given. And if you look around you’ll see similar discussions among Linux developers concerned with memory management.

            More RAM is always better. But no matter how much you have, it is always a good idea to use whatever amount you have as efficiently as possible. If memory pages aren’t actively being used, no matter how much RAM you have, they are extremely good candidates to take out of RAM and put in the pagefile/swapfile instead, so that the RAM can be used elsewhere. It’s a pretty simple concept. Like I said, the fact that it took some pagefile activity to switch back to one of the apps that had been in the background doesn’t mean that your performance is worse. Quite the contrary. It means that the other apps that were actually doing work were able to do their work better because they had more RAM to work with. The app that was in the background was doing sweet piss all, and so it was shoved off onto the hard drive, so that the apps actually doing work could zoom through it all the faster. If that background app had been active enough and doing any work, its pages would have stayed in RAM and would not have been flagged as candidates for the pagefile and moved there. The app wasn’t busy, and other apps were, so the busy apps got the RAM. And they got their work done faster as a result. That enough repetition to get the ideas through your head yet? Or should I say the same thing over and over a few more times?

            • Saribro
            • 13 years ago

            All your repetition is only valid for cases where your entire RAM is being used by application working sets, which happens rarely on most boxes.
            Perhaps you should take a look at a Linux system: swapping to HD only happens when it is -needed- to clear up RAM for applications, not just because some memory page meets the swap-out criteria.
            You’ll always want some RAM used for file cache, but as long as you have plenty, there’s no point in increasing it by swapping application memory out to the HD. Leaving RAM unused is pretty useless too; you have it, so you might as well put stuff in it, because leaving it empty is just wasting it.
            The swapfile is a useful spillover system for situations where application memory plus useful file cache size outgrows physical RAM, but only if it is actually used as a spillover system, like in Linux, not the way it gets managed in Windows.
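
The ‘spillover’ policy Saribro describes can be sketched in a few lines: evict the least-recently-used page, but only when free RAM is actually getting tight. This is a toy model, not how any real kernel implements paging:

```python
# Toy model of "spillover" paging: pages only get pushed to swap when
# free RAM drops below a low-water mark, and the least-recently-used
# page goes first. Illustrative only.

from collections import OrderedDict

class Memory:
    def __init__(self, ram_pages, low_water):
        self.ram = OrderedDict()   # page id -> data, LRU order (oldest first)
        self.swap = {}
        self.ram_pages = ram_pages
        self.low_water = low_water

    def touch(self, page):
        if page in self.swap:              # page fault: read back from disk
            self.ram[page] = self.swap.pop(page)
        elif page in self.ram:
            self.ram.move_to_end(page)     # mark as most recently used
        else:
            self.ram[page] = f"data-{page}"
        # Spill over only when RAM is actually getting tight.
        while len(self.ram) > self.ram_pages - self.low_water:
            victim, data = self.ram.popitem(last=False)  # evict LRU page
            self.swap[victim] = data

mem = Memory(ram_pages=100, low_water=10)
for p in range(50):
    mem.touch(p)

print(len(mem.swap))  # plenty of free RAM, so nothing was swapped out
```

With only half of RAM in use, nothing gets written to swap; touch enough extra pages to cross the low-water mark and the least-recently-used ones start spilling to disk, which is the behavior being contrasted with eager swap-out.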

            • _Shorty
            • 13 years ago

            nothing you stated contradicts anything I’ve said.

            • Saribro
            • 13 years ago

            Yes it does, but you can’t seem to think beyond the single principal situation where the pagefile is useful, so you don’t understand -what- it is I am contradicting.

            • Trymor
            • 13 years ago

            Oops, Saribro beat me to it and posted before I finished this…

            Maybe I need to repeat myself as well. ‘Snappiness’ is considered performance. Waiting for a program to swap from the pagefile to memory makes a person wait. Waiting means a lack of performance. If a person has 2 gig of memory, and 512 meg (or more) is ALWAYS empty, then there is no good reason to take the time to swap program memory to the hard drive. The RAM doesn’t need to be used elsewhere; there is RAM just sitting there unused. Heavy multitaskers benefit most from turning off the pagefile.

            If there is free memory, why take the time to swap memory to the hard drive? Memory does not need to be freed as you say.

            I have had END USERS with 1 gig of RAM comment on how their machine is faster, and ask what I did (after I turned off their pagefile). I don’t do this for everyone, just for certain ones where I know how they use their machines.

            As an example, end users who have a Word document open, an Excel spreadsheet open, and are browsing the internet for research frequently switch between the 3 programs. Yes, Windows should be designed NOT to swap any of those programs to the pagefile if there is free memory, but it doesn’t always work that way. If a person only has 512 meg of system RAM, then swapping some of the currently unused programs’ memory out to the pagefile will allow the currently used program to run without accessing the pagefile and slowing it down. I know how memory management is SUPPOSED to work. I am not arguing that point, and I have already stated in a previous post that turning off the pagefile is not for everyone. That’s why I have been using specific examples. How things should work and how things do work can be different. Why does Windows swap out the memory pages of a program that hasn’t been used for a while, but will be used again shortly, when there is still plenty of unused free memory available?

            Quote:
            “what you fail to understand is, the apps that are actively doing work will more than likely get their work done faster because they’ll have more ram to work with. Yes, your switch to the least active app took longer to accomplish.”

            Reply:
            What you fail to understand is that the apps actively doing work will not get their work done faster, because they already have all the RAM they need, and there is still free, unused system RAM sitting there doing nothing. I now switch to the least-used app instantly, meaning less time, meaning better performance.

            I realize you only define ‘performance’ as how fast or efficiently a single program runs, but for others, performance is the whole machine and how fast or efficiently it runs. If a person has a program he/she only switches to every 15 minutes, but Windows keeps swapping it to the hard drive, then that’s 4 times in an hour the machine’s performance was slowed.

            Yes, I know how memory management is supposed to work. But you are saying that if a person has 4 gigs of system memory but only a 512 meg working set, they shouldn’t turn off the pagefile because it will degrade performance. Does that even sound correct to you?

            As I have stated, I have tried my PCs with a pagefile and without, and I get better ‘performance’ without a pagefile. I have felt this over and over.

            • _Shorty
            • 13 years ago

            it’s quite clear you think you know more than you do. I won’t try any further to change your mind, because you already have all the answers. As I already stated, an app’s working set is not the only RAM that the app is actually causing to be used. The more you speak, the more apparent it is you do not know very much about OSes. Sorry to tell you, man, but your ‘enthusiast’ sites are not written by people with a clue. They’re written by people who think they have a clue. And like I said, DisablePagingExecutive is a great example of their cluefulness. Anyway, I’m done trying to educate, because you already know everything. Have a good one.

            • Trymor
            • 13 years ago

            Hmm, no response for Saribro, who agrees with me? He already covered the use of memory for cache, so I did not mention it. I have Windows performance counters visible in various programs, so I can monitor program memory usage, cache memory usage, virtual memory usage, pages swapped, etc… A person can clearly see that there is unused physical memory. A properly performing memory management system would not swap memory out to a pagefile, yet Windows XP does. As I type, out of 2 gig of physical memory, I have 1.1 gig of available physical memory, 860 meg used for system cache, and a listed 10% of virtual memory in use. Tell me why turning on my pagefile will increase performance? Clearly your experience is limited. Go ahead and stick to your explanations, and try not to use anything in the real world.

            If you know so much, how come real people have noticed a performance increase after turning the pagefile off in Windows XP? Some of these people weren’t told that anything was done, but noticed better performance. You can’t say it’s all in their heads, because they didn’t know anything was done. These are facts and can’t be argued by you, so I don’t know why you think you need to change my mind.

            And what is your fascination with the DisablePagingExecutive setting? I have not changed that setting. None of the places I read from even mentioned it as a recommended ‘tweak’. These were not ‘tweak sites’. They were case studies on the effect of running with or without a pagefile in Windows XP.

            • Saribro
            • 13 years ago

            q[

      • DStauffer
      • 13 years ago

      I’ve been running a very old dual Pentium Pro system with a CaliforniaPC 2 GB IDE flash drive as the system drive for about a year now. No problems. I only put the Windows installation on the flash drive – program files are on the hard drive. I notice the non-graphical part of the boot is much faster, but I don’t see much difference otherwise. 128 MB RAM and swap file on the flash drive. I have been concerned about limited lifetime issues, but ironically I’ve had a regular hard drive fail in the meantime! This flash drive does have some kind of “load balancing” feature to “spread around the wear” (my paraphrase). How they do that with no moving parts I’m not sure.

      Anyhow, no problems so far, some advantage (I could still boot when the hard drive failed, for example), but I expected more speed advantage.

    • A_Pickle
    • 13 years ago

    Why the frack are these running on fricken IDE?

    Sorry if I seem ungrateful, but whatever happened to the notion that if you’re going to do it, do it right. SATA doesn’t seem like very much to ask, at /[

      • droopy1592
      • 13 years ago

      Does it really matter? Interface speed doesn’t really matter when it struggles to match a 4200rpm drive at most tasks.

        • absinthexl
        • 13 years ago

        Not only that, but if they’re aiming at tiny, niche-market machines, they’re better off with IDE. My five-year-old synth uses SIMMs for sampling memory. That’s right. SIMMs.

    • absinthexl
    • 13 years ago

    I love how the benchmarks it won were completely unsuitable for a laptop. Who runs a webserver on a mobile computer?

    Besides that, what are the advantages of this over 8GB USB flash drives which go for about half the price?

      • Corrado
      • 13 years ago

      You don’t need a machine capable of booting from USB to run off it, and it doesn’t hang out the side like a flash drive. It would be FABULOUS in an automotive environment.

        • dragmor
        • 13 years ago

        A lot of motherboards have internal USB ports, so that handles that part.

        Maybe this has better write cycle management or something.

          • Vrock
          • 13 years ago

          Internal USB ports? Did I miss that? Which boards have them?

            • droopy1592
            • 13 years ago

            I think he meant headers

      • Dirge
      • 13 years ago

      y[

        • cass
        • 13 years ago

        Shoot. Sign me up… one 16GB drive for storage and one 2GB for the page file, or just load up 1GB+ of mem and scrap the page file. I don’t really see what apps I use day to day that are going to be affected by the low transfer rates.

    • coldpower27
    • 13 years ago

    Very nice power consumption numbers… though it’s kinda slow in some things…

    • droopy1592
    • 13 years ago

    Goooood Lawdddd it’s early.

    What’s the purpose of Vista requiring a hybrid drive when the flash portion of the drive may suck worse than the mechanical part?

    I expected at least 30% faster than mechanical drives…. which is what many have been preaching.

    FP

      • sbarash
      • 13 years ago

      That’s a great question. It seems to me that something’s amiss here.

      We know that burst speeds to and from an IDE drive’s buffer/cache are much faster than accessing the platters. But shouldn’t it be slower, based on these results?

      It also makes me wonder to what extent device driver optimisation could be involved. I’m sure this drive’s controller is using standard Windows IDE drivers – which would be optimized for Winchester drives.

      I would love to see benchmarks of this drive vs. a USB drive vs. a RAM disk vs. an IDE drive…

      • wilreichert
      • 13 years ago

      /[

        • packfan_dave
        • 13 years ago

        Given the performance numbers, and the write cycle limitations of Flash, it seems like a few megabytes of DRAM cache (which all the traditional hard drives have) would help a lot.
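
packfan_dave’s suggestion can be illustrated with a toy model: even a small DRAM cache coalesces repeated writes to hot sectors, so far fewer (slow, endurance-limited) flash programs actually happen. Numbers and structure are hypothetical, not this drive’s design:

```python
# Toy write-back cache: absorb writes in DRAM, flush to flash only when
# the cache fills (or on an explicit flush). Repeated writes to the same
# sector collapse into one flash program. Illustrative only.

class WriteCache:
    def __init__(self, capacity_sectors):
        self.cache = {}                 # sector -> pending data (in DRAM)
        self.capacity = capacity_sectors
        self.flash_writes = 0           # actual flash programs performed

    def write(self, sector, data):
        self.cache[sector] = data       # absorb/overwrite in DRAM
        if len(self.cache) > self.capacity:
            self.flush()

    def flush(self):
        self.flash_writes += len(self.cache)  # one flash program per sector
        self.cache.clear()

cached = WriteCache(capacity_sectors=1024)
for _ in range(100):
    for sector in range(64):            # 64 hot sectors rewritten 100 times
        cached.write(sector, b"x")
cached.flush()

print(cached.flash_writes)  # 64 flash writes instead of 6400
```

In this model, 6400 logical writes become 64 flash programs, which would help both the drive’s poor write throughput and its cycle-life budget; traditional hard drives use their DRAM buffers for much the same reason.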
