Gigabyte’s i-RAM storage device

Manufacturer Gigabyte
Model i-RAM
Price (Street) $150
Availability Now
WHILE MICROPROCESSORS HAVE enjoyed rapid performance increases thanks to new chip fabrication technologies, higher clock speeds, and multiple cores, hard drives have struggled to overcome the mechanical latencies and challenges associated with spinning rewritable media at thousands of rotations per minute. Hard drives have picked up a few tricks over the years, growing smarter thanks to command queuing and learning to team up in multi-drive RAID arrays, but they’re still the slowest components in a modern PC.

Those dissatisfied with the performance of mechanical storage solutions can tap solid-state storage devices that substitute silicon for spinning platters. Such devices shed the mechanical shackles that limit hard drive performance, but they’ve hardly been affordable options for most users. Then Gigabyte unveiled the i-RAM, a $150 solid-state storage device that plugs directly into a motherboard’s Serial ATA port, accommodates up to four run-of-the-mill DDR SDRAM modules, and behaves like a normal hard drive without the need for additional drivers or software.

Gigabyte first demoed the i-RAM at Computex last summer, and cards have finally made their way to the North American market. One has also made its way to our labs, where it’s been packed with high-density DIMMs and run through our usual suite of storage tests. Read on for more on how the i-RAM works, what its limitations are, and how its performance compares with a collection of single hard drives and multi-disk arrays.


i-RAM, packed to the gills with 4GB of OCZ Value Series memory

i-RAM revealed
The i-RAM’s greatest asset is easily its simplicity. Just populate the card with memory, plug it into an available PCI slot, attach a Serial ATA cable to your motherboard, and you’ve got yourself a solid-state hard drive. There’s no need for drivers, extra software, or even Windows—the i-RAM is detected by a motherboard BIOS as a standard hard drive, so it should work with any operating system. In fact, because the i-RAM behaves like a standard hard drive, you can even combine multiple i-RAMs in RAID arrays.

Gigabyte equips the i-RAM with four DIMM slots, each of which can accommodate up to 1GB of unbuffered memory. The card is sold without DIMMs, giving users some flexibility in how it’s configured. However, most will probably want to shoot for that 4GB maximum. After all, if you’re going to have a solid-state hard drive, you want it to be as big as possible.

Be careful when adding memory, though. The i-RAM’s DIMM slots are mounted at an angle to ensure that the card doesn’t interfere with adjacent PCI slots, and there isn’t enough room for DIMMs with thicker heat spreaders—at least not if you’re planning on packing the card with four memory modules.

While tight DIMM spacing limits compatibility with thicker heat spreaders, it’s not a major concern, because it’s unlikely you’ll want to waste high-end memory on the i-RAM. You see, the i-RAM’s Serial ATA controller is limited to 150MB/s transfer rates, creating a bottleneck that will constrain performance long before memory speeds or latencies enter the picture. In fact, even DDR200 memory has ample bandwidth to saturate the i-RAM’s Serial ATA interface.
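
The back-of-the-envelope math is easy to check; a minimal sketch (peak theoretical figures, so real-world throughput would be lower):

```python
# Peak theoretical DDR bandwidth vs. first-generation SATA's 150MB/s ceiling.
def ddr_peak_mb_s(transfers_per_sec: float, bus_width_bits: int = 64) -> float:
    return transfers_per_sec * (bus_width_bits / 8) / 1e6

SATA_LIMIT = 150.0  # MB/s
for name, rate in [("DDR200", 200e6), ("DDR400", 400e6)]:
    bw = ddr_peak_mb_s(rate)
    print(f"{name}: {bw:,.0f}MB/s peak, roughly {bw / SATA_LIMIT:.0f}x the SATA interface")
```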

Translating Serial ATA requests for a bank of four DIMM slots is no small task, but Gigabyte gets the job done with a Xilinx Spartan-3 field programmable gate array (FPGA) chip. The Spartan-3 is programmed to act as the i-RAM’s memory controller, Serial ATA controller, and the link between the two, accomplishing three tasks with one piece of silicon. The single-chip solution is elegant, but it’s also the source of the i-RAM’s biggest limitations. For example, the memory controller doesn’t support ECC memory or 2GB DIMMs, both of which would be useful. And then there’s the Serial ATA controller’s lack of support for 300MB/s transfer rates, which will probably be the card’s most serious performance impediment.
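
Gigabyte hasn’t published the FPGA’s internals, but conceptually, the controller’s job is to turn the logical block address in each Serial ATA request into a module number and an offset. A minimal sketch of the idea (the flat, concatenated layout and all names here are our assumptions, not Gigabyte’s actual design):

```python
SECTOR_SIZE = 512    # bytes per SATA sector
DIMM_SIZE = 1 << 30  # four 1GB modules, assumed laid out end-to-end

def lba_to_dimm_address(lba: int) -> tuple[int, int]:
    """Map a SATA logical block address to (dimm_index, byte_offset)."""
    byte_addr = lba * SECTOR_SIZE
    return byte_addr // DIMM_SIZE, byte_addr % DIMM_SIZE

print(lba_to_dimm_address(0))          # (0, 0): the first sector lives on DIMM 0
print(lba_to_dimm_address(3 * 2**21))  # (3, 0): 3GB in, the start of DIMM 3
```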

Since it relies on volatile memory chips for storage, the i-RAM will lose data if the power is cut. Fortunately, the card can draw enough juice from a motherboard’s PCI slot to keep its four DIMM slots powered, even when the system is turned off. The system does have to be plugged in and its power supply turned on, though.

To allow users to unplug their systems for periods of time and to protect against data loss due to a power failure, Gigabyte also equips the i-RAM with a rechargeable lithium-ion battery with a capacity of 1600 milliamp-hours. The battery charges while the system is plugged in, and according to Gigabyte, it can keep four 1GB DIMMs powered for more than ten hours. Battery life will vary depending on the i-RAM’s memory module configuration, though. It’s probably a good idea to back up anything you actually store on the drive, just in case.
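
Gigabyte’s ten-hour claim is easy to sanity-check: a 1600mAh cell at a nominal 3.7V stores about 5.9Wh. The per-module draw below is our assumption for a DIMM in self-refresh, not a published figure:

```python
CAPACITY_WH = 1.6 * 3.7  # 1600mAh lithium-ion cell at a nominal 3.7V, about 5.9Wh
WATTS_PER_DIMM = 0.14    # assumed self-refresh draw per 1GB DDR module

for dimms in (1, 2, 4):
    hours = CAPACITY_WH / (dimms * WATTS_PER_DIMM)
    print(f"{dimms} DIMM(s): roughly {hours:.0f} hours on battery")
```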

 
Test notes
Today we’ll be comparing the i-RAM’s performance with that of a handful of the fastest Serial ATA drives on the market and a couple of SATA RAID configurations ripped from our recent chipset Serial ATA RAID comparison. Even against a four-drive RAID 0 array, we expect the i-RAM to clean up. However, it will be interesting to see by how much.

We’ll be subjecting the i-RAM to our standard gauntlet of storage tests, but we’ve had to cut a few apps to accommodate the i-RAM’s capacity constraints. Even packed with 4GB of memory, the i-RAM doesn’t offer enough storage to complete the WorldBench suite or File Copy Test’s partition-to-partition file copy test. The i-RAM’s limited capacity also impacts our iPEAK multitasking tests, as we’ll explain further in a moment.

Our testing methods
All tests were run three times, and their results were averaged, using the following test systems.

Processor Pentium 4 Extreme Edition 3.4GHz
System bus 800MHz (200MHz quad-pumped)
Motherboard Asus P5WD2 Premium
BIOS revision 0422
North bridge Intel 955X MCH
South bridge Intel ICH7R
Chipset drivers Chipset 7.2.1.1003
AHCI/RAID 5.1.0.1022
Memory size 1GB (2 DIMMs)
Memory type Micron DDR2 SDRAM at 533MHz
CAS latency (CL) 3
RAS to CAS delay (tRCD) 3
RAS precharge (tRP) 3
Cycle time (tRAS) 8
Audio codec ALC882D
Graphics Radeon X700 Pro 256MB with CATALYST 5.7 drivers
Hard drives Maxtor DiamondMax 10 300GB SATA
Western Digital Caviar SE16 250GB SATA
Western Digital Raptor WD740GD 74GB SATA
Hitachi 7K500 500GB SATA
Western Digital Caviar RE2 400GB SATA
Seagate Barracuda 7200.9 160GB SATA
Seagate Barracuda 7200.9 500GB SATA
Gigabyte i-RAM 4GB SATA
OS Windows XP Professional
OS updates Service Pack 2

We packed our i-RAM with four 1GB Value Series DDR400 modules from OCZ. The DIMMs in question are among the least expensive 1GB modules around, making them perfect for the i-RAM.

Our test system was powered by an OCZ PowerStream power supply unit. The PowerStream was one of our Editor’s Choice winners in our last PSU round-up.

We used the following versions of our test applications:

The test systems’ Windows desktop was set at 1280×1024 in 32-bit color at an 85Hz screen refresh rate. Vertical refresh sync (vsync) was disabled for all tests.

All the tests and methods we employed are publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

 

Boot and load times
To test system boot and game level load times, we busted out our trusty stopwatch. The entire system partition was housed on the i-RAM during the system boot test, but there was only room for each game’s install files for our level load tests.

The i-RAM allowed our test system to boot faster than any other configuration, but only by a few seconds. The RAID arrays look a little slow here because of the extra time it takes for the motherboard to initialize them during the boot process.

In-game level load times benefit more from the i-RAM than the Windows XP boot process, but to be honest, we were expecting a little more. Clearly, the storage subsystem isn’t the only bottleneck that constrains game level load times.

 
File Copy Test
File Copy Test is a pseudo-real-world benchmark that times how long it takes to create, read, and copy files in various test patterns. Scores are presented in MB/s.

Now that’s more like it. The i-RAM rips through FC-Test’s file creation tests two to three times faster than any other single drive. Only our four-drive RAID 0 array exceeds the i-RAM’s performance here, and then only with a couple of test patterns.

File read performance isn’t even close; the i-RAM tears through this test, offering better performance across the board.

The i-RAM doesn’t slow down in the copy test, either. It’s more than four times faster than some of our single-drive configs, and easily much faster than any of our multi-drive RAID arrays.

 
iPEAK multitasking
We recently developed a series of disk-intensive multitasking tests to highlight the impact of command queuing on hard drive performance. You can get the low-down on these iPEAK-based tests here. The mean service time of each drive is reported in milliseconds, with lower values representing better performance.

Although iPEAK would run on our 4GB i-RAM partition, the app did warn us that tests would have to be wrapped due to the drive’s small size. This means that any I/O requests that would have referenced areas of the drive beyond 4GB would be wrapped around to the beginning of the drive.
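
In practice, that just means each traced block address is taken modulo the drive’s capacity, something like this (our illustration of the concept, not iPEAK’s actual code):

```python
DRIVE_SECTORS = 4 * 2**30 // 512  # sectors in a 4GB drive

def wrap_request(lba: int) -> int:
    """Wrap a traced I/O request that points beyond the end of the drive."""
    return lba % DRIVE_SECTORS

big_request = 10 * 2**30 // 512                 # a request aimed 10GB into the traced drive
print(wrap_request(big_request) * 512 / 2**30)  # 2.0: it lands 2GB in instead
```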

The i-RAM continues to rip up the field, taking top honors in all but one of our first round of iPEAK multitasking tests.

 
iPEAK multitasking – cont’d

Our second round of iPEAK tests proves just as fruitful for the i-RAM, which has little problem outclassing the rest of the field.

 
IOMeter – Transaction rate

Wow. Seriously.

The i-RAM is in another league in IOMeter, offering transaction rates that are an order of magnitude higher than its closest competition. It doesn’t take long for the i-RAM to get revved up, either. The card hits its peak transaction rate with just two simultaneous I/O requests.

 
IOMeter – Response time

The i-RAM’s IOMeter response times are as low as we’ve ever seen, by a long shot. Not even our four-drive RAID arrays are in the same ballpark.

 
IOMeter – CPU utilization

The i-RAM’s blistering transaction rates and seemingly impossibly low response times do come at a price: CPU utilization is much higher than with our single-drive configs and RAID arrays. The i-RAM still pushes more I/Os per CPU cycle than any other configuration, though.
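
One way to see this is to normalize CPU utilization by transaction rate. The figures below are invented placeholders that show the calculation, not our measured results:

```python
# I/O efficiency = transactions completed per percent of CPU consumed.
configs = {
    "single drive":   {"iops": 150,  "cpu_pct": 1.0},   # placeholder numbers
    "4-drive RAID 0": {"iops": 500,  "cpu_pct": 2.0},
    "i-RAM":          {"iops": 8000, "cpu_pct": 20.0},
}
for name, c in configs.items():
    print(f"{name}: {c['iops'] / c['cpu_pct']:.0f} I/Os per percent CPU")
```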

 
HD Tach
We tested HD Tach with the benchmark’s full variable zone size setting.

The i-RAM doesn’t quite make it to 150MB/s in HD Tach’s sustained read and write speed tests, but it’s faster than any other configuration.

Unfortunately, a lack of support for 300MB/s Serial ATA transfer rates keeps the i-RAM out of the running in HD Tach’s burst speed test. It is the fastest drive among those that only support 150MB/s transfer rates, but that’s not good enough to catch several competitors.

Since the i-RAM uses random access memory, it’s no surprise that its random access time is significantly faster than our hard drives and RAID arrays.

CPU utilization results are just within HD Tach’s +/- 2% margin of error in this test. Given these results, it seems likely that the high CPU utilization we observed in IOMeter was more a factor of the i-RAM’s incredibly high transaction rates than any inherent CPU utilization penalty associated with using the device.

 
Noise levels
The i-RAM is silent, so it doesn’t add any noise to a system—well, at least not unless you have some funky active DIMM cooling.

Power consumption
Power consumption was measured for the entire system, sans monitor, at the outlet. We used the same idle and load environments as the noise level tests.

Given its performance, the i-RAM’s power consumption is certainly reasonable. It doesn’t really consume less power than your average Serial ATA hard drive, but it’s much more frugal than multi-drive RAID arrays, and those are the only configurations with a chance of even coming close to matching its performance.

 
Conclusions
To be honest, we didn’t actually expect Gigabyte to turn the i-RAM into an actual end-user product, much less make it available in North America. But they have, and at $150 online, the i-RAM is actually pretty affordable, all things considered. With the price of 1GB DDR modules hovering around $80, it’s possible to build a 4GB i-RAM drive for under $500. That’s a horrific cost per gigabyte compared with a hard drive or RAID array, but it’s pretty good for a solid-state storage device with this kind of performance.
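
The arithmetic, at approximate street prices:

```python
card, dimm = 150, 80  # i-RAM card, plus one 1GB DDR400 module
total = card + 4 * dimm
print(f"${total} total, about ${total / 4:.0f} per gigabyte")
# A ~$100 250GB hard drive, by comparison, works out to roughly $0.40 per gigabyte.
```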

Of course, the i-RAM isn’t without limitations. Performance is undoubtedly constrained by the 150MB/s Serial ATA interface, and I shudder to think how much faster the i-RAM could be if it supported 300MB/s transfer rates. Size is an issue, as well. With only four DIMM slots and no support for 2GB modules, the i-RAM hits a capacity ceiling at 4GB. That might be enough storage for certain applications, but it leaves us wanting more. We’d gladly accept a double-wide design if it allowed for a greater number of DIMM slots and a larger overall capacity. As it stands, you’ll have to rig up multiple i-RAM drives in RAID to breach the 4GB barrier.

While we’re griping, it’s tempting to suggest that Gigabyte skip Serial ATA altogether and build an i-RAM that taps the bandwidth of multiple PCI Express lanes. Such a card could offer considerably more throughput than even a 300MB/s Serial ATA interface, but it would require drivers, if not additional software, and that would ruin some of the i-RAM’s elegance. As it stands, the i-RAM should work in any system with a Serial ATA port and PCI slot, regardless of the operating system.

Although the i-RAM’s cost and limitations ultimately constrain its appeal, they don’t take away from the fact that it’s significantly faster than any other storage solution we’ve tested. Performance oscillates between impressive and awe-inspiring, and for those niche markets that demand blistering I/O, the i-RAM may be just the ticket. 

Comments
    • Chryx
    • 14 years ago

    I’m mildly interested in the idea of a high(‘ish) end thing based on this sort of concept.

    gimme a 1U rackmount with Fibre Channel or InfiniBand interfacing and a few dozen SO-DIMM slots (ECC of course)

    Would make a useful piece of hardware for some tasks. (be completely useless most of the time, as is this) but… oooh, my nerd sense is tingling.

    • alphaGulp
    • 14 years ago

    Ultimately, I find these results pretty disappointing.

    For starters, the server tests are practically meaningless for desktop use. I read an interesting piece on this on storagereport.com, but you can see it pretty easily when you compare the orders of magnitude improvement in server tests and the 50% at best improvement for desktop tests. Even with ECC memory this thing is useless for a server, IMO.

    Anyhow, at a minimum, as mentioned in other posts, SATA II and ECC RAM are definitely necessary for desktop use. 8GB of space would be a nice-to-have, but buying all that RAM might make it too expensive a product for most.

    I saw a few people saying that there is no point in using this drive, since adding more RAM would give you the same results. The reason the drive is interesting is that the data needs to go from your hard drive to your RAM, and that is very slow – if you think about it, the more RAM you have, the more data needs to be transferred. In addition, from what I hear Windows has a screwy way of caching, which makes it hit the disk more than you would want.

    I can see one dramatic way to improve on their design though:

    the i-RAM is already on the PCI bus (ideally it would be PCI-E), so to get orders of magnitude improvements in throughput I would do the following: use the SATA interface when booting and when drivers aren’t installed. But then, make drivers so that, once installed, the OS is actually routing its I/O calls through the PCI bus – voila!

    If SATA II is used, we may see throughputs that are 2-3 times higher than the fastest hard disk. Maybe that’s enough to warrant buying the thing. Having a disk that is well over 10x faster would be another matter entirely, I believe.

    • Fighterpilot
    • 14 years ago

    Check out this forum link to the i-RAM in action… it’s awesome:
    https://techreport.com/forums/viewtopic.php?t=37744

    • sativa
    • 14 years ago

    this will be a good way to test processor power, eh? You can use it to do game level loads and find the new bottlenecks… (well, it would be more helpful if it were at 300MB/s).

    • Prospero424
    • 14 years ago

    It’s very simple:

    If data becomes corrupted in memory before it is stored on a RAID array, it will be recorded in its corrupted form and the system will henceforth treat it as valid.

    ECC RAM is intended to prevent this.

    Edit: Dammit, this was intended as a response to #74. That’s what I get for forgetting to allow JavaScript on this site.

    • Mr Bill
    • 14 years ago

    What if you installed a shooter game or some game with lots of scenery directly to this device? 4GB is enough to accommodate several game CDs. Back in ancient days (circa 1986) I used to keep my FORTRAN compiler and output files on a couple of 4 Mb JRAM cards. Also Ultima ran pretty slick when copied onto and run from the RAM disk.

    I guess the level load tests suggest that once a level is loaded, the game has no need to hit the storage device. So, no speedup?

    • DrDillyBar
    • 14 years ago

    Worthy addition for Temp file space and a Pagefile.

      • indeego
      • 14 years ago

      Or neither of those. System performance won’t necessarily benefit from using this, depending on use. Managing your system use around a 4 gig limit could quickly become tiresome. For instance, I could care less about the boot times of most systems, as I rarely reboot/power off. That fast access could be put to use for me in areas of the system of frequent use, say a logfile on an Exchange server.

      But without ECC or a tested market, this thing ain’t getting within ten yards of my * servers.

      If I played games: I could see real benefit. But swapping out my games for this just adds setup time and I’d quickly find the $300+ investment a tad silly.

      A valiant idea, one with eventual real applicable use, but a tad too immature for its market.

      In essence: a toy.

        • Crayon Shin Chan
        • 14 years ago

        what about a Photoshop swap file?

          • absinthexl
          • 14 years ago

          You’re better off just getting 4-8GB of RAM. There’s no point in using this as swap space in any way, since the advantage is non-volatility.

            • BobbinThreadbare
            • 14 years ago

            Photoshop is written to use a swap file on a hard drive no matter how much RAM you have.

    • albundy
    • 14 years ago

    you’d better pray that power is always available. Especially if you live on a ranch in the middle of nowhere. That’s a lot of data to be lost. The only thing that will save you is a HDD to back up to. I thought long and hard on this as I was seriously considering it. Way back I heard of N-RAM or somethin like that that holds the info without power. I forget where it was, but it would be really useful in a situation like this. Very fast, very quiet, and very cool (more like cold next to HDDs) solution. Me Likey.

      • GodsMadClown
      • 14 years ago

      If you’re computing on a ranch in far-off Nowheresville, you’re likely to have your box hooked to a decent UPS, especially if you’re the type to be in the market for this sort of doohickey.

    • Buub
    • 14 years ago

    Interesting idea, but I think a 64-bit OS with an extra 4GB of RAM on the motherboard would give much better overall results.

    • mogmios
    • 14 years ago

    This sounds like a great start. I would like to see it compared to a flash-based HDD next. I’ve used compact flash based drives in the past; being solid state, they are less prone to sudden death, use less power, are often faster, and fit in small places. I’d like to know how they compare to this RAM-based drive.

    It might be awesome to have a combination of the two. The major downside of flash-based drives is that they can only be written to a given number of times before wearing out. (100,000 times or some such number.) A hybrid RAM/flash device could do most of its reading and writing from the RAM with occasional syncs and a sync on power loss so that the unit could retain information for long periods of time without power.

    Another option would be a standard hdd combined with this RAM drive to use the RAM as a giant cache which should speed things up a lot for most operations. A 600GB drive probably still only accesses 4GB of data on a regular basis.

    • Bensam123
    • 14 years ago

    Why wasn’t the i-RAM compared to SCSI devices? It’s up there in its price range and its storage capacity is quite low. I would think the best competitor for this product would be SCSI products, not desktop storage devices; even Raptors are a lot cheaper than this…

    And the only people that would care about drivers for a device that is such an important part of your computer are people that run Linux… “It’s not Linux compatible! I can’t run my one game on it!”, oh nos…

      • albundy
      • 14 years ago

      agreed, no SCSI performance graphs.

      • muyuubyou
      • 14 years ago

      Actually, for this kind of stuff, Linux matters. Linux (and other *nixes) own Windows in server market share. This thing aims there too – probably most of their sales will be there, together with research-specific applications.

        • Bensam123
        • 14 years ago

        I don’t know how this would target Linux specifically… Anyone wanting to use it for databases would have more than enough money to invest in either large SCSI RAID arrays or even better solid-state devices.

        I don’t think small niche and databases fit together for some reason…

          • muyuubyou
          • 14 years ago

          Who said small? The share of *nix in the DB server market is huge. There is a correlation between usage and OS.

          It’s like saying ATI’s main target is Windows PCs, then someone says “no, it’s games, and you can play games on Linux” when the correlation is obvious.

            • Bensam123
            • 14 years ago

            That wasn’t what I was implying by saying niche… The device is made for a niche market (not that Linux is a niche market). Since SCSI devices have speed and quite a bit more storage capacity, anyone who is a corporation or conglomerate running databases or anything else that needs speed won’t look at this device.

            4 gigs of space that connects to a SATA port is quite a niche market. You more than likely won’t see giant servers running these instead of SCSI devices or other solid-state devices if they have enough money.

            Maybe if this was 4 gigs riding a SCSI bus then you might have an argument, or if it provided 16/32 gigs of storage space.

            Otherwise the only people I can see using this are enthusiasts, and as far as I can tell enthusiasts are mainly a Windows affair (mainly).

            BTW having drivers doesn’t mean a Linux driver can’t be made…

    • Prospero424
    • 14 years ago

    Very interesting stuff. They definitely need to offer 300MB/s SATA, ECC, and PCI-E support, though.

    Though I was kinda curious what your observations were on the effect of running the Windows paging file from this device. I see that boot times were a bit quicker, but was Windows noticeably snappier, especially after having been running for a while with tasks running in the background?

    To me, a significant performance advantage in this regard would be a major selling point alone.

    • DancesWithLysol
    • 14 years ago

    I think a product like this would be ideal for OLTP databases. In Oracle you could create a tablespace that contains performance-critical objects. Your logs would still run off of a conventional hard drive, so you don’t need to put all of your objects into RAM. I think 2 or 4 of these cards running in a RAID array would be a cheap way to put together a very impressive database server.

    Related link:

    http://www.dbazine.com/oracle/or-articles/ault6

    • Pagey
    • 14 years ago

    Well, I suppose this is in some way a reply to my question on the Back Porch last weekend about hitting a technology plateau. Very interesting read. Good work as usual, TR!

    • firestorm02
    • 14 years ago

    Regardless of whether you think this particular product is good or not, the concept is rather interesting. I am a little confused as to why they would not use flash memory in place of RAM sticks, which require continuous power… I think I read that flash degrades with every read/write, is that the case?
    Anyway, judging by the results, the sheer speed of the device is promising. I think in 5 years we will start to see PC storage solutions go to some sort of solid-state device… or as soon as manufacturing becomes more affordable.

      • absinthexl
      • 14 years ago

      Flash RAM is slower in every way, and yes, it has a limited number of writes. Last I checked it was between 10,000 and 1,000,000 writes. At any rate, it’s easy to run into a barrier like that when you’re using it like a hard drive.

    • drffreeze
    • 14 years ago

    Not in stock at Tigerdirect. Looks like Compumusic has them with a Jan 31st ETA.

    • SpotTheCat
    • 14 years ago

    eh, I’d like to see this move to a 5.25" bay with a larger battery and 8 DIMM slots, while at the same time utilizing cheaper, specialized circuitry and supporting multiple SATA channels.

    This would be really useful to a wider audience if you could buy 2 or 3 of them for that price with support for 8 DIMMs apiece. 512MB DIMMs get pretty damn cheap, and a lot of us have a few laying around or waiting to be upgraded, but with only 4 DIMM slots per card you save money by going with larger DIMMs.

    • Ryu Connor
    • 14 years ago

    Product with not enough forethought.

    Would be an excellent solution for a Linux/Apache based web server, but the lack of ECC support kills it.

    As for general system usage… Just don’t see where adding it would be any benefit over just slapping in more RAM and letting the system cache do its job.

    The 4GB limit is a real pain in the ass too. I can’t even fit my favorite game on that. 😛

    The lack of ECC support also makes it poorly suited for the pagefile or scratch disk. Who wants to suffer a soft error in their pagefile or scratch disk? Hello BSOD.

    I have to agree with Alanzilla on this. A solution looking for a problem.

    • Freon
    • 14 years ago

    Actually not too bad. 4GB is enough for a boot drive and swap file, but I’d want some sort of automated way to back up and recover from a power loss to the device. I/O is, not surprisingly, stupid fast, although I question how much that will really improve your computing experience.

    Good idea slapping a Li-Ion battery on it. Looks like something straight out of my digicam. If it really does consistently last 10 hours without power, that’s enough to take the rig to a LAN party a state away, or survive a power outage.

    So, give me software so that my computer will still boot normally and recover itself in the event of a power and battery failure (i.e. it automatically backs up the data to a magnetic drive, and rebuilds itself), and I’m there. Just need to justify $400-500.

      • Delphis
      • 14 years ago

      I wonder if putting the iRAM in a RAID-1 with a partition on a regular hard drive would do the trick. If the RAID is sensible, then reads will come from the first RAID-1 drive, i.e. the iRAM. Writes have to go to both, so that might limit things a little, but on a boot drive you’re mainly reading, which would be fast. Swap considerations aside though, doing swap on a pair like that would really stuff things up.

      So, if the iRAM fails the RAID-1 mirror pair falls back to the secondary drive and boots the machine, from there RAID reconstruction will rebuild the contents of the iRAM.

      – If it fails due to power failure, is what I mean

        • babybalrog
        • 14 years ago

        That’s actually darn interesting if you *could* get it to auto-backup. The other idea I had was that the i-RAM should work as a cache, similar to that on a controller card. It would need two SATA plugs, one going down to the mobo and the other up to the HDD, reading mainly from the i-RAM and saving to both. Your idea just goes the extra step of incorporating backup.

        Here’s another option: place flash RAM on the back of the card; then, if the device senses a power failure, it uses the battery to move all the data from the RAM to the flash and boom, it’s secure. Costs a lot more though… 4GB flash == $300

    • Usacomp2k3
    • 14 years ago

    Imagine 4 of these bad boys in RAID 0. 16GB should be enough for running off of, I would think. As long as you’re not installing games or anything like that.

      • FireGryphon
      • 14 years ago

      You mean RAID-1. RAID-0 just makes things faster.

        • muyuubyou
        • 14 years ago

        No, he means RAID-0. RAID-1 wouldn’t allow 16GB…. oh, and who wants “faster”, when we can have “slower”? 😉

        • Usacomp2k3
        • 14 years ago

        RAID 1 wouldn’t help you get a larger drive.
        4x4GB in RAID 1 = a 4GB C: mirrored
        4x4GB in RAID 0 = a 16GB C: striped

          • FireGryphon
          • 14 years ago

          Oh, uh, yeah, I uh…. nm 😐

    • FireGryphon
    • 14 years ago

    Awesome review; thorough and entertaining as usual.

    I think Gigabyte is completely aware of the device’s shortcomings, but purposely did not overcome them. This is a test product: it’s easy to set up and install and demonstrates the power of using random access memory as hard drive space. I can totally see this i-RAM as the first in a line of RAM cards using more beefy controllers and multi-slot designs — if and only if this first iteration sells well.

    • Alanzilla
    • 14 years ago

    Bottom line: solution looking for a problem.

      • wierdo
      • 14 years ago

      or rather an expensive solution for a still manageable problem.

    • Shintai
    • 14 years ago

    It’s not hard to tell that it wouldn’t be that fast, really. All it’s got is the better “seek” time.

    PCI is 132MB/s shared over all PCI connections. If it had just been PCIe x1 or x4…

    So 150MB/s or 300MB/s SATA doesn’t really matter.

      • LSDX
      • 14 years ago

      if I am not mistaken, the PCI bus isn’t used at all for data transfer. At least not if the SATA controller is integrated in the chipset

        • Shintai
        • 14 years ago

        It may be me, but I only saw 1 SATA port on that card.

          • Steel
          • 14 years ago

          Since when do hard drives need more than one SATA connector?

            • Shintai
            • 14 years ago

            Ohh yeah, my fault. I was thinking of a pass-through type of card.

        • Sargent Duck
        • 14 years ago

        you’re right. The PCI bus isn’t used at all for transfer, merely to power the device. Hence the SATA.

    • excession
    • 14 years ago

    Is there any reason that this couldn’t have been designed in such a way as to emulate a standard PCI IDE controller? It’s touched on in the article, but surely it would be a much more elegant solution than using SATA? Yes, you *might* have to install extra drivers, but a lot of people do that anyway.

    • Philldoe
    • 14 years ago

    This is a step into a new realm of computing.
    After time passes and the tech is perfected, we will soon see VERY large main drives using this iRAM.

    It won’t be long before hybrid drives arrive.

    Think of it…
    4GB+ on a 300MB/s SATA bus in the same casing as a large HDD to store the other “crap/stuff”

    This is a step in the right direction for the iRAM.

    We now see what it can do, and Gigabyte will improve this tech to the point that a regular HDD, even in a quad RAID config, won’t be able to touch it.

    • LSDX
    • 14 years ago

    BTW, this reminds me about an idea i had sometime ago:

    Instead of buying expensive RAID controllers with cache, or exchanging RAM modules over and over again (as most mobos only have 4 slots), why not create a SATA or IDE interface that plugs between the mainboard and the drive, where you could add an SDRAM or DDR RAM module?

    You could easily boost the cache of your drives to 128MB and more.

      • Klopsik206
      • 14 years ago

      As far as I remember, M$ is looking to add support for HDDs with a large flash RAM cache. It’s mainly targeted at mobile use, so you can save energy on spinning the drive.

      It would have some advantage over the iRAM, as it will have OS support (so hopefully Windows will use it more “wisely”) and no batteries – which is safer.

      (I saw somewhere the samsung presenting working test sample – I’ll try to find the URL).

      Edit:
      Here you go:
      http://news.com.com/Samsung+hybrid+hard+drive+works+while+it+sleeps/2100-1041_3-5683836.html
      and here:
      http://www.pcworld.com/news/article/0,aid,120950,00.asp

      • Flowboy
      • 14 years ago

      This is what the operating system does with main memory. More main memory would let it cache more here, and have more flexibility.

        • LSDX
        • 14 years ago

        Let me explain: at the moment I’m using 3 x 512MB RAM modules on my nForce2 mobo. If I want to expand memory I’ll have to remove 1 module and replace it with a 1GB module. Now I could use the 512MB module to boost my HDD’s cache, without using any system memory. (I know I could resell it…)

        Plus I have a lot of older SDRAM left which is useless right now, but would still be fast enough as HDD cache.

        • UberGerbil
        • 14 years ago

        Yeah, but what MS is talking about is different.

    • UberGerbil
    • 14 years ago

    First thought — hey, here’s a use for all that cast-off PC2100 and/or PC2700 RAM people have left over from their old rigs. Second thought: crap, that’s all 128MB or 256MB sticks, so you’re not going to get a useful size disk out of it. D’oh.

      • eitje
      • 14 years ago

      swap file. 🙂

    • blitzy
    • 14 years ago

    well, since Gigabyte is in the mobo business, hopefully sometime they will release a mobo with onboard capacity for solid-state storage, which would bypass any bottlenecks of the SATA or PCIe interfaces. That’d be sweet.

    • Aphasia
    • 14 years ago

    Well, up this to SATA 300 and I’ll get one as a huge, nice, and fast swap disk. Doing larger panoramas from film scans puts me through the 2GB roof of the RAM in no time at all. Having a dedicated 4GB swap disk would be great at times like that. Sure, it’s slower than true RAM, but it sure is a lot faster than normal drives. But SATA 300 would be the best.

      • yokem55
      • 14 years ago

      If you are thinking of using this for swap, you are better off putting more RAM on the motherboard. Granted, if your system is maxed out on memory (most dual-channel A64 systems can take up to 8 gigs if you buy 2 gig modules), then this would make sense for faster swap access.

    • Klopsik206
    • 14 years ago

    Well, I am surprised actually.

    First, I would expect iRAM to give much higher gains over mechanical HDD (c’mon! RAM is way faster than any drive!)

    Second – if SNM (#16) is right – what is the problem with Windows making much better use of large RAM? Is this that hard to do? Has anyone heard whether Vista will address this issue?

      • UberGerbil
      • 14 years ago

      SNM isn’t exactly right, and you don’t have to wait for Vista. All the server versions of 32bit Windows can take advantage of >4GB of physical memory (up to 64GB depending on the version), provided you have a motherboard that supports it. 32bit apps are nevertheless each restricted to 2GB of virtual address space (though obviously having more physical memory means less paging when you are running several of them at once) but certain 32bit apps can take advantage of more than that (primarily server-oriented apps like SQL Server, though Photoshop CS2 has support for 3GB). Windows x64 can also take advantage of >4GB of memory and 64bit apps are not limited to 2GB of virtual address space; 32bit apps remain limited in the same way under x64 that they are under 32bit Windows (and if they are written to go past 2GB on 32bit, they will do that on x64 too).

      So in general, if you have a motherboard that supports >4GB of memory and the right version of Windows (x64 or 32bit server versions) you’d generally be better off using your RAM to populate the motherboard than putting it in the i-RAM drive. However, Photoshop is a weird app that uses its own virtual memory system so there might be an advantage to using the i-RAM for the PS “scratch disk” — somebody’s bound to try it. The other situation I could see a big benefit is a badly-written app (or chain of apps) that reads and writes a lot of temp files and is gated by that operation: having such an app do its work on the i-RAM would boost it considerably (reads can be cached but writes can’t and neither can subsequent dependent reads).

        • Klopsik206
        • 14 years ago

        Thank you for the informative reply 😉

    • CodeMonkey
    • 14 years ago

    I guess I’m a little confused as to how it could still take 45 seconds to boot if it was all on solid state, even if limited to 150 meg/sec. I would have expected < 30 seconds. Maybe it was allocating a large swapfile on the hard drive and loading a bunch of drivers/apps from hard drive still during bootup?

    My system only takes 43 seconds (15 seconds for hardware initialization, 28 seconds from the Windows loading screen) to boot up to the XP login screen, and I just have a non-RAID hard drive. I don’t use virtual memory, so maybe the difference is in allocating the swapfile?

      • Dissonance
      • 14 years ago

      Different motherboards boot at different speeds depending on the time it takes to initialize various onboard components, and so on. For our boot time tests, the i-RAM was the only hard drive in the system, so it wasn’t held back by a mechanical hard drive.

        • crichards
        • 14 years ago

        Probably not fair, then, to include the hardware detection/initialisation phase in the boot timings?

          • UberGerbil
          • 14 years ago

          But hard to accurately exclude it, perhaps. Besides, nobody excludes it when they’re actually waiting for their machine to boot — you want to know how long it’s going to actually take from the time you hit the power button. This just points up the fact that drive speed isn’t the major factor some people think it is.

          That said, I think boot from hibernation might be a more interesting test for this drive. You still have hardware initialization, but you eliminate some of the other steps and raw transfer rate is more important.

            • RHITee05
            • 14 years ago

            A hibernation test would be amusing – see how long it takes to save the RAM to… other RAM!

            • cheesyking
            • 14 years ago

            That’s the only use I can see for it.

            I suppose the real reasons it’s been made are:
            1) They can
            2) It’s made me read the word “Gigabyte” many times recently
            3) It’s something they can sell at a good margin. I mean, it sells for more than my last mobo cost me and I can only count 5 chips on the thing!

          • Dissonance
          • 14 years ago

          It’s fair because it’s the same motherboard for each hard drive. The only difference is for the RAID configs, which have the SATA controller in RAID rather than AHCI mode.

      • UberGerbil
      • 14 years ago

      If you have a normal (predefined) pagefile in Windows there’s nothing to “allocate” and it should have no effect on boot speed.

      • Saribro
      • 14 years ago

      My parents’ new system (k8 sempron 2600+, msi board with integrated graphics, WD 40gig PATA drive) even manages <30 seconds. It’s quite frustrating, as my own, much faster running system can barely make 45 seconds :).

        • IntelMole
        • 14 years ago

        That’s long enough to brew a cup of tea.

        Instant solution :-P,
        -Mole

    • Saribro
    • 14 years ago

    On the IOMeter CPU-usage numbers:
    Could it be that, *because* the i-RAM is able to reach such enormous request numbers, CPU usage increases? To put it differently: the CPU actually has to do something to keep up with the i-RAM, instead of being constantly stalled waiting for the HDs?

      • UberGerbil
      • 14 years ago

      Yeah, Geoff actually suggests that in the comments on CPU utilization under HDTach. That was my first thought too: it would be interesting to see “normalized” CPU usage numbers for all the drives in IOMeter: (CPU / Transactions)
      I suspect it will look quite good in that light.

    • LSDX
    • 14 years ago

    Nice piece of hardware. Should be updated to 300MB/s.
    But why make it a PCI solution?
    I would prefer a standard 3.5- or 5.25-inch bay solution.
    The PCI solution has some advantage for users with small cases.
    Most important: keep it driverless!

      • UberGerbil
      • 14 years ago

      They made it a PCI card because that gives it a source of power even when the machine is “off” — as long as the power supply itself isn’t off, the PCI bus is kept hot for wake-on-X support even if the machine isn’t actually running. So the battery backup isn’t necessary unless there’s an actual power failure: the computer can be “off” indefinitely without losing the contents of the RAM. They could have made it an actual 3.5" “drive”, but the molex connectors don’t offer standby power, so they’d still need a PCI card that did nothing but feed power to a cable you’d then plug into the drive. Not exactly elegant. Or you’d just have to put up with losing your data whenever your machine was off for more than 10 hours. Not very convenient.

        • Lucky Jack Aubrey
        • 14 years ago

        Anyone have any thoughts on why this hasn’t been tried with NAND memory instead of DRAM? Admittedly, I know very little about NAND, but on the face of it, that seems like a much more plausible solution. It would seem to solve the continuous power problem, for starters.

    • FubbHead
    • 14 years ago

    I can see this might be nice for database and similar disk tasks, and perhaps video editing. But it would certainly need error correction/detection before that. It would also make more sense if memory companies manufactured cheap, large, low-speed memory modules for it. Hell, even DDR200 would be enough.

    • Maph
    • 14 years ago

    Which version of the i-RAM was used in the test? There was at least one updated version that addressed some complaints in the original i-RAM release.

      • Dissonance
      • 14 years ago

      Rev 1.2.

        • Maph
        • 14 years ago

        Thanks Dissonance. That is the one i was thinking of. However I just checked the gigabyte site and apparently there is a newer rev1.3 listed now. Only difference appears to be “Support PCI 3V & 5V Slot” and it made a reference to being usable in Macs.

    • just brew it!
    • 14 years ago

    The lack of ECC support is unforgivable IMO. Soft error rates of current DRAMs are good, but not good enough to be used for long-term storage without some sort of error correction. Eventually you’ll start getting randomly flipped bits here and there, and you’ll be left scratching your head wondering why things have gotten flaky.

      • BobbinThreadbare
      • 14 years ago

      That’s what RAID is for.

        • just brew it!
        • 14 years ago

        RAID won’t help you with randomly flipped bits. RAID helps you when the hardware tells you “I can’t read this sector” or “this disk won’t spin up”.

        If a bit randomly flips in a non-ECC DIMM, the system has no way of telling that this has happened; the memory location will still read successfully, but return the wrong data. If you read different data from the two halves of a RAID-1 mirror, you have no way of telling which one is right.

          • BobbinThreadbare
          • 14 years ago

          Thanks for clearing that up.

          • Kurlon
          • 14 years ago

          RAID 5 – read, then compare; if the checksum mismatches, correct and move on.

            • just brew it!
            • 14 years ago

            RAID-5 still has issues for an application like this. RAID-5 allows you to regenerate missing data when a drive fails outright; it can’t tell you which stripe is wrong when the data and parity merely disagree.

            • Vertigo
            • 14 years ago

            What exactly do you think ECC RAM does? Notice how there’s one extra chip on an ECC stick compared with a regular one? Notice how there’s one extra drive in a RAID 4/5 compared with RAID 0? This is not a coincidence. ECC works by storing parity info in that extra chip, just like RAID stores parity info on an extra disk.

            • Buub
            • 14 years ago

            ECC is not parity, that’s why it’s called “ECC” and not “parity”. ECC uses more than one bit and is actually much stronger than a single parity bit check. That extra “chip” is actually 4 bits wide.

            But that’s beside the point. You need at least three drives to do effective protection. If you only have two, you don’t know which one is correct when a bit gets flipped. Then if you go with RAID5, you get a severe write penalty. All this just to make the thing safe to use?

            I think ECC RAM would be a much simpler solution. Until that appears, this is a novelty item or something useful for holding temporary data.

            • just brew it!
            • 14 years ago

            Even 3-stripe (or more) RAID-5 doesn’t help. You’ve still got only one parity stripe, so you still can’t tell which drive returned the wrong data unless the disk controller tells you which drive reported a read error.

            3+ stripe RAID-5 doesn’t improve your ability to recover from errors versus RAID-1. It decreases the amount of overhead (you lose only 1/N of your capacity instead of 1/2), and improves throughput (assuming you have a true hardware RAID controller).

            I agree, to be truly useful as anything more than (say) a fast swap or Photoshop temp file drive, this thing needs ECC RAM.
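
            A toy sketch makes the distinction concrete (illustrative Python, with single-byte “stripes”): parity can rebuild a stripe you already know is gone, but it can’t point at a silently flipped bit.

            ```python
            def parity(stripes):
                """XOR parity across stripes, as RAID-5 computes it."""
                p = 0
                for s in stripes:
                    p ^= s
                return p

            data = [0b1010, 0b0110, 0b1100]
            p = parity(data)

            # A *known* failure is recoverable: XOR the survivors with the
            # parity to regenerate the missing stripe.
            assert parity([data[1], data[2], p]) == data[0]

            # A *silent* bit flip is only detectable, not locatable: the parity
            # no longer matches, but any stripe (or the parity itself) could be
            # the bad one.
            data[1] ^= 0b0100
            print("mismatch detected:", parity(data) != p)  # True, but which stripe?
            ```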

      • Krogoth
      • 14 years ago

      RAID does not protect data from corruption (malware, the aforementioned bit flipping in the i-RAM, and overclocked buses). It only protects data against hard drive failure.

      Meant for #21

    • Zenith
    • 14 years ago

    It’ll put out enough heat that you won’t want to put it right next to your video card, but it’s definitely one way to move into solid-state storage. It’s a little expensive for enthusiasts, and a little too impractical for professional use… I like it.

    • MaceMan
    • 14 years ago

    The geek factor is high; problem is, my C: drive consumes over 8 gigs with the OS and “standard” apps. So I would need this to MORE than double to become possible, and even then, it’s just replacing C:; I’d still need another drive for the “library” of stuff (250 gigs is now too small).

    Sweet idea. Still a touch short in implementation (and ignoring cost).

    Biggest win, along with performance… lack of heat output and silence. HDs are such huge heat producers.

    This tech has a lot of appeal. Kind of like the first Shuttle cube. Right idea. Needs polish and time.

      • tu2thepoo
      • 14 years ago

      “Biggest win, along with performance… lack of heat output and silent”

      4 DIMMs stacked that close generate a good amount of heat. I’ve got 4x512mb on my motherboard and if I muck about the case after it’s shut down the DIMMs can be almost as hot as hard drives.

      • Convert
      • 14 years ago

      You can slim down an XP Pro install to around 1 gig without shaving needed drivers or apps.

      Moving other apps like Office, among other things, to a HD and just leaving recently used game(s), or an app like PS, and a slimmed-down XP might work out pretty well. Though adding a HD puts noise back into the picture.

    • tu2thepoo
    • 14 years ago

    So basically, unless you’re copying 400MB files from one drive to another every day (or running HDTach), it’s only a few seconds faster in most tasks than a regular drive?

    Nevermind the question of who in their right mind would run a server off it!

    [edit]
    before any smartypants chime in and write “LOOK AT IPEAK NUB” – how often are you Xvid encoding while backing up entire drives? I’ve only hit that kind of peak usage a few times, and even then you’re just as well off queueing things up and letting it run overnight. And IOmeter is a server-level test; unless you’re literally running a server with multiple concurrent users, it’s hardly an indicator of single-user performance.

    I dunno, it’s really a pants-wetting-cool device and I’d like one of my own, but it’s up there with RAID 0+1/5 on my list of “more money than sense” propositions.

    • Convert
    • 14 years ago

    Excellent once again. Thank you for reviewing this!

    I never thought about using it in a raid config, that definitely opens up some more possibilities.

    • brick
    • 14 years ago

    “Since the i-RAM uses random access memory, it’s no surprise that its random access time is significantly faster than our hard drives and RAID arrays.” <- LOL

    • wierdo
    • 14 years ago

    Man I wish I had one… I’m not an early adopter, though, so I hope enough people buy it and encourage Gigabyte to come out with something that uses larger dimm sizes and/or more slots and/or ddr2 perhaps.

    • Tupuli
    • 14 years ago

    I find interest in this sort of technology a little bewildering. Wouldn’t it be better to move that 4GB to main memory (assuming 64-bit here) and have a larger file cache? A file cache with > 10x the bandwidth and significantly less latency?

    The only advantage of this scheme is the battery backup. Using the same memory for a file cache or ramdisk would be better in every other respect.

      • SNM
      • 14 years ago

      Windows still has its stupid memory limit, and putting another four gigs in would make it want a huge swap space. Use this as a swap space, though, and you’re finally drive-free!

        • Flowboy
        • 14 years ago

        See, 64 bit XP doesn’t have that limit. You’re more likely to be limited by finding a mobo and DIMMs that let you get past 4GB at a reasonable cost.

        • Alanzilla
        • 14 years ago

        What limit? Any of the server versions support up to 64GB.

          • UberGerbil
          • 14 years ago

          Actually: http://www.microsoft.com/whdc/system/platform/server/PAE/PAEdrv.mspx

      • muyuubyou
      • 14 years ago

      1 – there are just so many slots in a mobo – prices and availability problems for bigger sticks skyrocket after 1GB
      2 – OS support (depending on… huh… the OS 😉 )
      3 – price. You better have fast+expensive reasonably sized RAM than a whole lot of slow RAM
      4 – what to do with all those spare slower sticks?… I have unused 256/512MB 266 DDR sticks around, so they might have some use after all
      5 – browser cache set to 0%-RAM 100%-iRAM
      6 – swap(*nix)/virtual memory all to iRAM
      7 – temporary files in video editing processes wouldn’t make my HD thrash like there’s no tomorrow
      8 – less HD fragmentation due to frequent creation/deleting of files
      9 – Finally a reason for SATA having actually more bandwidth.
      10 – Geek factor

      Still disappointed with the boot time. At the Uni we had Linux boot in 7 seconds after the memory checks (reading from an EEPROM). This was the mid-90s, where’s progress?

        • Tupuli
        • 14 years ago

        “1 - there are just so many slots in a mobo”

        This is somewhat true, but I’d still rather invest in a server mobo than this device.

          • muyuubyou
          • 14 years ago

          I guess it all comes to this point: applications are not optimized for huge amounts of RAM, because almost nobody has that. This thing emulates a really fast HD, which in many cases renders great improvement. The only “if” is the 150MB/s bottleneck. 300MB/s would be greatly appreciated… otherwise a SCSI drive or Raptor is a better deal.

          OS memory management is less than ideal, let me tell you. Especially under Windows. Memory assignment is a very hard problem, especially combined with high multitasking.

          Ramdisks aren’t trivial to set up (again depends on OS) so you just can’t expect applications to use your spare RAM as a drive. They won’t. In many cases they just can’t for security reasons.

            • Tupuli
            • 14 years ago


            • notfred
            • 14 years ago


            • Ten98
            • 12 years ago

            150MB/s is not a bottleneck.

            You will be hard pushed to find any real-world situation whatsoever that can utilise that for more than incredibly short bursts.

            The only time a SATA-2 drive will reach close to 300MB/s is when transferring data from drive cache to I/O cache. The largest cache size is 32MB, so unless you’re reading the same 32MB of data over and over, this performance will be over in about a tenth of a second.

            This is OK for small operating system files, but when are you only going to need 32MB of data? Most of the time the cache is used instantly, then the drive goes back to reading from its mechanical platters.

            The I-RAM delivers a SUSTAINED 150MB/s across all data, all of the time, which blows SATA-2 hard drives, even 10,000RPM 4-disk arrays, completely away.

            • Buub
            • 14 years ago


            • muyuubyou
            • 14 years ago

            I’ve performed several benchmarks myself. There are per-process limits apparently, making your system thrash like mad far before it should. I’m really too lazy to search for these benchmarks, but both XP and 2000 were blown out of the water by any Linux or BSD I tried. Add to that the fact that most people won’t tweak their virtual memory settings, and you want good defaults because their experience depends on that…

      • Stefan
      • 14 years ago

      100% ++

      And (if we’re speaking servers) instead of the battery we have a UPS running anyway, so no need to be afraid of data-loss.
      Quite frankly I fail to see which market this device should appeal to. (Apart from the Über-Geek with money to burn.)

      • Bensam123
      • 14 years ago

      You know many people talk about just adding more system memory… There is a difference between a cache and a repository.

      Information is pulled from the repository (in this case a HD) into the cache for fast access…

      This product targets the repository for all information, where it is stored. Even if you have a large cache, information WILL ALWAYS be pulled from the repository on startup and shutdown (most of the time in between as well) of whatever you’re doing. It’s trying to alleviate that bottleneck. It’s not trying to improve the speed at which your system has access to its cache (RAM).

      Adding more system memory != Faster or better storage subsystems

        • Tupuli
        • 14 years ago


          • Bensam123
          • 14 years ago

          Alright I don’t think I completely understand…

          I-Ram isn’t meant to be system memory… It doesn’t target it nor does it try to replace it. You’re comparing apples to oranges.

          Adding more system memory will improve how much information you can store in it, but you will STILL load it off a storage subsystem, because system memory doesn’t store that information after shutdown.

          As far as I know, price doesn’t affect what a product physically does. Even if you want to say it’s no different than system memory, it is different from system memory. They’re two different things, and I think you’re confusing them because this uses RAM, which you normally associate with a system cache…

          If you think otherwise, tell me how adding more memory to your system removes the bottleneck of initially loading data off a disk, putting it back, and writing while it’s in use? No matter how much system memory you add, you will always pull data from larger storage subsystems. Adding more doesn’t improve the situation.

            • Tupuli
            • 14 years ago

            Why is this so hard for people to understand? Why take a part with GB of bandwidth and hang it on a bus thats 1/10 that speed with 100x more latency than where it was intended to reside?

            • Bensam123
            • 14 years ago

            “Why is this so hard for people to understand? Why take a part with GB of bandwidth and hang it on a bus thats 1/10 that speed with 100x more latency than where it was intended to reside?”

            Perhaps you’re not the one understanding >_>

            Because it’s made to alleviate the storage subsystem bottleneck, not add more system memory, which is completely different. You just keep thinking they’re the same thing for some reason.

            It doesn’t matter if it’s the same stuff that you use for system memory; you can use it as a storage subsystem too, can’t you?

            d00d if this is how you react to a small device now, your brain is going to explode when solid state storage devices take the market…

            You can’t sell something so small for so much money! It’s made for your system memory only! It’s slow and clunky compared to your system memory! Why?!?!?! *brain explodes*

            • Buub
            • 14 years ago

            No, I think the issue is that this device effectively has the cost of RAM with the speed of a disk. That doesn’t make sense. If you’re going to spend that extra money and go through the extra effort of keeping this thing fed power and backed up, you should get a substantial payback in performance. That’s what solid-state memory will be about. This thing doesn’t deliver.

            I think if they had put a SATA or even PATA controller interface on this thing and made it look like a controller with a drive attached, it would have been much more successful. They could have used the full bandwidth of the bus, and it still would have had the potential to run without drivers.

            But even if that were the case, it doesn’t hold enough RAM and it doesn’t use ECC. If you’re going to have persistent system data on this thing, it needs ECC.

            • Bensam123
            • 14 years ago

            Well, I guess that’s the point where you go from fact to personal opinion…

            I think it does a very good job. Solid-state storage is still in its infancy and it still dwarfs the competition. Unless you can show me a device that gives you that sort of performance at a better price, I would have to disagree with you.

            • Tupuli
            • 14 years ago


            • muyuubyou
            • 14 years ago

            I’ve been using RAMdisk since the Amiga days and then *nix, and I see your point. The problem with getting 16GB of the fastest RAM is price, and if it’s not the fastest, then you’re better off getting a smaller quantity of faster RAM, which is better for most uses. Then you have Windows and many apps not allowing you to use RAM as an HD, and then you have reboots.

            This is probably not for everyone, but it has its uses.

            • Ten98
            • 12 years ago

            Not to mention a RAMdisk uses the CPU quite intensively.

            If you have a lot of load on your CPU, RAMdisks can sometimes be slower than normal hard disks.

            • Bensam123
            • 14 years ago

            Ok, I didn’t realize you were talking about software that partitions off system memory at boot, but that’s bad in itself.

            Have you ever used RAMdisk or programs like it which partition off system memory? I have, and I know a friend who has. IT IS VERY BAD to partition off system memory virtually through software. A green Mr. Yuck sticker should be placed on the application, warning all that might drink its neon green contents just because it looks like it might taste good.

            System hangs, BSODs, data corruption, slow load/shutdown times, program errors, and every other error you can think up spring from that sort of software.

            Yes, theoretically it is a good idea. Yes, it would be faster and better than the i-RAM. In use it is a whole nother story though. Maybe if it was on a hardware level through your motherboard it would be good, but it isn’t, and the i-RAM still remains a better solution… just because it works.

            BTW, this uses RAM too… Isn’t this the same price as buying system memory, besides the device?

            • Buub
            • 14 years ago

            I have also, but I didn’t have any of those issues. The system was quite stable. I used the RAM drive as a temp files drive for an application that wrote and read a lot of temp files. It worked perfectly. It even had an option to read from a hard drive partition on bootup, and to write back to the partition in a lazy write-back fashion.

            However in most cases, more system RAM is going to perform better over a wider array of applications than partitioning off RAM and making a RAM disk.

            It’s possible your friend either had some bad RAM, or he was using buggy RAM drive software. There is no reason RAM drive software HAS to be buggy. Some of it works very well for the specialized tasks that RAM drives excel at.

      • Ten98
      • 12 years ago

      You can’t boot from main memory. The point of the I-Ram is that you can store the entire OS, your swapfile and your core applications on it, which massively reduces the I/O bottleneck for virtually every task and speeds up general system operations.

      In terms of home or desktop use that’s pretty much it, a nice way to speed up Windows.

      The 0.001s seek time, and the fact that latency stays minimal and transfer rates monumentally high (pretty much at the maximum SATA can deliver) whether you’re accessing 2 or 10,000 files at the same time, make it a highly attractive unit for servers. Traditional hard drives tend to grind to a halt in multi-user environments, which leads to things like multiple-server redundancy where all that’s needed is better I/O.

      Finding a server admin brave enough to store mission-critical data on one might be another story, however 😛

    • dragmor
    • 14 years ago

    They need a 2nd revision with the following changes:

    1) Dual 300MB/s SATA ports (for 600MB/s, 2 RAM slots per port)
    2) Use DDR2 for larger size and less power draw
    3) Support 4GB DDR2 DIMMs so you can have a 16GB drive

    • A_Pickle
    • 14 years ago

    Pownface!

    I’ve been waiting for this to be reviewed!!! 😀

    -Pikl

    • emkubed
    • 14 years ago

    I’d boot Windows from it, and Raptor the rest.

    Nice review.

    • leor
    • 14 years ago

    can you use ECC registered RAM, or only the regular stuff?

      • Xylker
      • 14 years ago

      Just the regular stuff
