BSOD bug hits Crucial m4 SSDs

As I said in our last big SSD round-up, the inconvenient truth about solid-state storage is that it still has reliability problems. Every major SSD maker seems to have been affected by one issue or another, and the SandForce BSOD bug proved particularly stubborn through most of last year. The blue screen of death has struck again, but this time, it’s Crucial’s m4 SSDs that are affected. The company’s forums detail numerous reports of a 0x000000F4 BSOD error popping up on users’ systems.

Crucial has confirmed the bug, which apparently rears its head after about 5,000 hours of disk “on time.” According to Crucial, rebooting will return an affected machine to normal operation, but only briefly: “the system then requires subsequent restarts after each additional hour of use.” Ugh.
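
If you’re curious how close your own drive is to that threshold, SMART attribute 9 (Power_On_Hours) tracks exactly this counter. Below is a minimal sketch that shells out to smartctl (from the smartmontools package) to read it. The /dev/sda device path is an assumption you’ll want to adjust, and some drives pack extra data into the raw value, so treat the output as a rough check rather than gospel.

[code]
# Rough check of a drive's SMART power-on hours via smartctl (smartmontools).
# Assumes Linux with smartmontools installed; /dev/sda is a placeholder for
# your m4's device node. Querying the drive typically requires root.
import re
import subprocess

DEVICE = "/dev/sda"  # assumption: point this at your SSD

out = subprocess.run(["smartctl", "-A", DEVICE],
                     capture_output=True, text=True).stdout

# SMART attribute 9 is Power_On_Hours; its raw value is the last column.
for line in out.splitlines():
    if re.match(r"\s*9\s+Power_On_Hours", line):
        hours = int(line.split()[-1])
        print(f"{DEVICE}: {hours} hours of on time (reported trigger is ~5,000)")
        break
[/code]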

Although Crucial claims to have determined the cause of the problem, a firmware update to fix it isn’t due until the week of January 16. The company is adamant that users aren’t at risk of data loss due to the bug, though. I’d recommend that anyone with an m4 make a backup of the drive’s contents just in case.

This news is certainly disappointing, and we’ll be sure to grill Crucial when we meet with them next week at the Consumer Electronics Show. I can’t help but find some irony in the fact that the m4, a drive we’ve been recommending as an alternative for folks turned off by the SandForce BSOD bug, now has a BSOD bug of its own.

Comments closed
    • squeeb
    • 8 years ago

    That sucks 🙁

    Glad my Spinpoint F3 is still chugging along…

      • PrincipalSkinner
      • 8 years ago

      I’ve got F3 and m4. No problems so far.

    • StarBlight
    • 8 years ago

    I personally own three Crucial drives: a C300 and two M4s, all 128GB. I haven’t had a single issue to date. So hopefully, with the firmware update, I won’t ever see one. From my experience, I would still highly recommend their drives.

    • LoneWolf15
    • 8 years ago

    Bought a 128GB m4 for my wife’s ThinkPad a month and a half ago, due to reports of its reliability. Now I get to tell her to bring it home from work in a couple of weeks so I can back it up (just in case) and flash the firmware.

    I think I’ll stick with the 600GB ’Raptor I have for a boot drive in my desktop. As for my ThinkPad, I’m glad the SSD is a boot drive only for now, with a Scorpio Black for storage, although HDDs aren’t 100% reliable either.

    • Ihmemies
    • 8 years ago

    I’m glad for now that I bought the Samsung 830 128GB instead of the Crucial M4. We’ll see if it was the wisest choice in the end. Pity Intel’s drives are at least twice as expensive as the competition’s.

    • HisDivineOrder
    • 8 years ago

    This is why SSDs are severely overpriced. They are unreliable. In most any other industry, if you got an item that’s so often defective or about to be, you’d see prices reflect that uncertainty.

    But computer users just shrug and watch their computer reboot every hour on the hour. Because it’s not unexpected.

      • Meadows
      • 8 years ago

      Not sure if trolling, or genuinely idiotic.

      • Vivaldi
      • 8 years ago

      I want what he’s having!

    • Frith
    • 8 years ago

    I had problems with my Agility 3, so I decided to use it as my virtual machine drive and bought an M4 for my main boot drive. It was about 15% more expensive than the Agility 3 and is also slightly slower, but I thought it was worth it to get a drive that worked. I’m not amused.

    On the upside, this seems to be a more straightforward problem than the SandForce issues, and it sounds like it’ll be resolved long before I reach 5,000 hours. Not really a major issue, then.

    Hopefully TechReport will post another news article when the firmware update is released so I don’t forget about it.

    • ClickClick5
    • 8 years ago

    Mmm, another reason I’m sticking with HDDs for the time being.
    Maybe in eight years…

    • TEAMSWITCHER
    • 8 years ago

    Scratching my head on this one… I have been using an M4 for several weeks, 10 hours each day at work with a heavy workload, and have never had a BSOD. Hmmm… It must be something very specific that I have not been doing.

      • Yeats
      • 8 years ago

      Read what Geoff wrote, then read what you wrote. Then, in 50 words or less, tell us why what you wrote doesn’t apply to what Geoff wrote.

      • Malphas
      • 8 years ago

      I’ll give you a clue: 10 hours a day for several weeks doesn’t equal 5200 hours.

        • Farting Bob
        • 8 years ago

        It does on the moon!

      • dragosmp
      • 8 years ago

      5,000 hours will be burnt through in 500 days if the drive is only used 10h/day; that’s roughly a year and a half, so you’re good.

      For 24/7 use, 5,000 hours gives about 7 months. The M4 didn’t launch that long ago, so in my opinion very few users will actually be impacted by this bug. Stop panicking 🙂

      • TEAMSWITCHER
      • 8 years ago

      That’s kinda why I was scratching my head. No one should have experienced this issue given the time it takes for it to occur. Unless, of course, it can occur in less time in certain situations. I’m just wondering what conditions would cause this.

        • h4x0rpenguin
        • 8 years ago

        What about the people who bought their M4s a while ago (>”several weeks”) and thus have been using their drives for over 5000 hours?

          • ca_steve
          • 8 years ago

          As mentioned, 5,000 hrs = 208 days if running 24/7. The M4s were introduced on April 27, 2011. So, today, this bug only affects SSDs purchased between April 27 and June 12 of last year that have been running 24/7 (call it end of June… if the firmware fix is out in 2 weeks).

        • Malphas
        • 8 years ago

        Yeah, no one really, except, you know, people like me who have been using these drives in 24/7 systems since Q1 last year (a minority of users, but still a significant number). We’re the first wave of users experiencing this issue, which is why it has only just been recognised.

        • dragosmp
        • 8 years ago

        Since there’s no data loss and it only occurs at around 5,000h of uptime, maybe the “bug” is something related to how this SMART counter is interpreted by the drive. It’s like “if h_count > 5000, switch to this.mode”, and this.mode is buggy, or the switch shouldn’t have been there in the first place. They could probably have validated the workings by adding a multiplier or something in the pre-release firmware to change the way this “time” counter is read, so the SSD would have reached the 5,000h mark “X” times faster. /2cents
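
        To make that guess concrete, here’s a toy sketch of the speculated failure mode; the counter name, the 5,200-hour threshold, and the “mode switch” are all hypothetical, not anything from Crucial’s actual firmware:

        [code]
        # Toy illustration of the speculated firmware flaw -- entirely hypothetical,
        # not Crucial's code. The idea: a counter check flips the drive into a
        # buggy, never-tested code path once SMART "on time" passes a threshold.
        TRIGGER_HOURS = 5200  # the on-time mark affected users report

        def on_hour_tick(h_count):
            """Imaginary per-hour firmware housekeeping."""
            if h_count > TRIGGER_HOURS:
                enter_buggy_mode()  # untested path; on a real drive this would
                                    # hang the controller (the 0x000000F4 BSOD)

        def enter_buggy_mode():
            raise RuntimeError("untested code path reached")

        # The validation trick suggested above: multiply the counter in
        # pre-release firmware so the threshold is crossed "X" times faster.
        ACCEL = 100
        for hour in range(60):
            try:
                on_hour_tick(hour * ACCEL)
            except RuntimeError:
                print("bug path hit at simulated hour", hour * ACCEL)
                break
        [/code]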

    • NeelyCam
    • 8 years ago

    So… Intel is the most reliable one again?

      • Kougar
      • 8 years ago

      Good question… Intel’s 500 series uses the same controller, and may have the same bug.

      • Yeats
      • 8 years ago

      Don’t know… in the last 12 months, I’ve had 1 Crucial and 1 Intel SSD fail out of a total of 6 SSDs. Over the same time frame, I’ve had 1 WD and 1 Samsung HDD fail out of a total of 14 HDDs. All were replaced under warranty.

      • Airmantharp
      • 8 years ago

      The 320 series, at least. Their 500 series will probably have the same bug, as mentioned above, and those drives are more expensive than Crucial’s M4.

      Also note that the 256GB M4 (not that I can afford one!) is the highest-rated drive in its class on Newegg.

        • Firestarter
        • 8 years ago

        But the 320 had a bug, and is therefore totally utterly unreliable. Never mind that we probably never hear about 90% of the bugs that are still alive and well, one bad bit of news and I will forever boycott the product!

          • NeelyCam
          • 8 years ago

          Damn, thanks for reminding me – I [i]still[/i] haven’t upgraded the firmware on my 320…

      • ptsant
      • 8 years ago

      The only SSD I would trust with my precious data would be an Intel 320. Then again “trust” means as a boot drive. The really precious data are on 2xWD RE4 2TB in RAID 1 with monthly external HDD copies. I wish Intel would charge a little less…

    • equivicus
    • 8 years ago

    52,345 hours on a WD RE 320GB IDE drive in my DVR box, zero issues so far.
    5,341 hours on an Intel X25-M 80GB with 1.3TB total writes, zero issues.
    5,124 hours on an Intel X25-M 40GB with 1.77TB total writes, zero issues.

    I have personally purchased four other Intel SSDs and have not had any issues.

      • NeelyCam
      • 8 years ago

      I’ve been ridiculously lucky for years; none of my SSDs or HDDs have failed… that includes a 1TB WD drive in my Tivo (running nonstop for a third year now), numerous WD Greens, two X25s, one 320 and …

      Wait, I take that back. I had a G1 X18 that died three months ago… got a G2 X18 as a warranty replacement. Anybody need it? I should sell it, as I put in a 2.5in 320 instead…

        • Deanjo
        • 8 years ago

        Had to vote you up on that one. I see nothing at all that warranted someone voting you down except for the fact that they were an abused child with no friends.

          • NeelyCam
          • 8 years ago

          Child abuse is no laughing matter. I’ll happily take thumbdowns if that makes them feel better about the world.

      • tone21705
      • 8 years ago

      How did you get those numbers? I would love to see how much time I have left on my m4…

    • geekl33tgamer
    • 8 years ago

    I’m not worried – that’s some 208 days of constant up-time. As I only bought an M4 on a good pre-Xmas deal from OcUK last year, I’d say I’m at no risk of having a problem before the new firmware rolls out.

    Now, I will be peeved if the firmware upgrade nukes all the disk data like it usually does on many other SSDs out there… :/

      • stdRaichu
      • 8 years ago

      I’m one of those people who’ve been using M4s since firmware 0001 (as well as C300s since 0001), and none of Crucial’s upgrades have ever nuked the contents of the drive.

        • geekl33tgamer
        • 8 years ago

        That’s pretty cool then. I had some Patriot PS-Series SSDs before the M4, and every firmware update was a back-up-your-data-first exercise, or you were never getting your files back!

    • Malphas
    • 8 years ago

    I’m one of the users that’s been having the BSOD problem for a few days now; my on time is 5,224 hours. It’s actually after 5,200 hours that people have this issue. People have been complaining about this since at least November, but there haven’t been enough of them for the correlation to be noticed. In fact, when people did suggest there was a widespread BSOD error on tech forums, they generally got accused of trolling, because it was “common knowledge” that the M4 drives were very reliable.

      • Derfer
      • 8 years ago

      Doesn’t surprise me, on either end. Trolls are quite numerous, and it took a while for enough people to hit that 5,200-hour mark.

    • Meadows
    • 8 years ago

    Oh, it’s [i]on[/i]. Crucial was the brand from which I least expected this.

      • Krogoth
      • 8 years ago

      [url]http://fc01.deviantart.net/fs71/f/2011/314/4/6/rarity___the_worst_possible_thing_by_nekosrocks-d4fqvit.jpg[/url]

        • yogibbear
        • 8 years ago

        Why are ponies invading TR lately?

      • Airmantharp
      • 8 years ago

      Even though I don’t own one, this still hurts. I’ll keep patting my Intel 320s, which so far have the least problematic bug out of the newest generation of drives.

    • ColeLT1
    • 8 years ago

    I put a 64GB M4 in my boss’s home computer (via Z68 Smart Response, alongside a 1TB WD Black), 3 128GB M4s in work laptops, and one more in a friend’s computer. The 64GB one had stuttering problems until the 0009 firmware; besides that, nothing. I guess we are all below the 5K hours.

    I also have a 120GB Vertex 3 in my home computer (X58) with no issues; it wasn’t even a fresh install, I cloned it from a 96GB SSDNow V+100, then expanded the partition. My roommate has the 120GB HyperX with no BSOD issues either, on an AM2+ Phenom II (790FX?). Finally, my GF has the SSDNow in her MX17r3, no issues either. I am lucky… and SSD crazy.

      • derFunkenstein
      • 8 years ago

        Well, 5,000 hours is 208 days of 24-hour, non-stop powered-up state. Unless you’ve had it that long, you won’t see the issue, but eventually everyone will hit around that point.

        • ColeLT1
        • 8 years ago

          Unfortunately, I have coworkers who are always on call and do not let their laptops sleep. I installed the drives in late September, so they’re about halfway there.

          • derFunkenstein
          • 8 years ago

            Yah, you’ll eventually hit it, but hopefully it won’t be next week. 😀

    • Derfer
    • 8 years ago

    Yes, I too think it wise to “grill them” for acknowledging and promptly fixing said bug.

    Should have gone the OCZ route. You know, pretend it isn’t real for 6 months.

      • evilpaul
      • 8 years ago

      A week and a half is prompt to you?

        • Derfer
        • 8 years ago

        Yes? You could have figured that based on the situation I compared it to.

        • stdRaichu
        • 8 years ago

        A week and a half is *definitely* prompt compared to OCZ, and speedy even compared to Intel; they’ve had the fix for a while and are just working on the usual QA validation.

    • HighTech4US2
    • 8 years ago

    Do any of these SSD companies do any reliability/lifetime testing?

    It seems unlikely, given the constant reports of BSOD/data-corruption problems early in the life cycle of every new SSD released.

    The worst problem a data storage device can have is data corruption. Next in line is random failures causing BSODs and other operational problems.

    I have only purchased one 64GB SSD (a Kingston 64GB SSDNow V100 Series for $63), and because I waited for a sale, the problems with it had been found and fixed by the time I bought it. Hopefully there won’t be any more.

    I will not be a leading-edge purchaser of any newly released SSDs, for they all seem to come with serious bugs.

    Early adopters get s c r e w e d.

      • jpostel
      • 8 years ago

      I work on the software vendor side and used to work on the hardware side, though not with SSDs in particular.

      Most QA testing is based around specific “typical” use cases, plus testing extreme (boundary) use cases. Other than physical torture tests for hardware that simulate extended use, relatively few consumer-grade devices go through 5,000-hour (208-day) burn-in testing.

      The short of it is that the only way to test 5,000 hours of continuous use is to run the device for 5,000 hours, and by the time that test is done, the next firmware version has already been released.

      All that said, I would ABSOLUTELY expect that type of testing for SLC enterprise-type SSDs, similar to what is done for HDDs. Those are designed to be hammered 24/7, and testing should reflect that. In those cases, it is actually of benefit for the hardware company to know when the drives will fail, so they can advise customers about upgrades or preventative maintenance.

        • ermo
        • 8 years ago

        You could well argue that (a small) part of the enterprise testing and validation cycle is collecting feedback from early adopters.

        But I guess that’s hardly a new insight.

        In any case, I personally don’t own any SSDs, and part of the reason is that, so far, reports here and elsewhere haven’t exactly convinced me of their reliability. So yeah, what HighTech4US2 said resonates with me as well.

      • jwilliams
      • 8 years ago

      Since the m4 bug does not bite until the SSD has been on for 5,000 hours, there is no accelerated lifetime testing that would have caught it, so it is not surprising that Crucial missed this in their testing. However, I would have hoped Crucial (or Micron) would have had a number of their SSDs running continuously since release so they could see how the drives hold up. It appears they did not have any m4s running continuously, since it would have taken less than 30 weeks to hit 5,000 hours (sometime in October), and Crucial would have known about the bug before the early adopters reported it.

      So, I give Crucial high marks for quickly recognizing the problem after their customers reported it, and (assuming they get the fixed firmware out soon) high marks for promptly fixing it, but I give them low marks for not having found the problem themselves in October.

    • Tumbleweed
    • 8 years ago

    So are the OCZ problems fixed with that firmware that came out a few weeks ago? I’ve not heard one way or the other. If so, it’s time to stop ragging on OCZ.

      • indeego
      • 8 years ago

      Haven’t had a recurrence since the latest firmware/3rd drive. Doesn’t mean much, however; it took 5 months to resolve.

        • Tumbleweed
        • 8 years ago

        If the problem is fixed, it’s fixed. To me, that means a lot. Granted, how they handled it was ridiculous, but if it’s reliable now, it’s reliable. And the Vertex 3 sure is fast.

          • [TR]
          • 8 years ago

          So, you know those illnesses that mostly show up after you are, say, 50 years old? I’m close to 30 and still no sign of them, so I’m going with “Yup! Medical science fixed that for me, yet again! 100% healthy 4life!”.

      • Sunburn74
      • 8 years ago

      I bought a couple of Vertex 3s with the latest firmware and have had them running in RAID 0 for about 3 weeks now. I have not had a single BSOD or crash.

      • Hawkins
      • 8 years ago

      I was getting hit by it pretty solidly on my Vertex 3 until the fix, and haven’t had an issue since. I’ve seen others say the same thing. There are people who say it’s still not fixed, but that’s not my experience; I suspect that some of the people who say they still have issues may have other faulty components.

      So that’s at least one person saying one way 🙂

      edit: I forgot to add, lest I sound like a fanboy… I had HORRIBLE customer service from OCZ for an unrelated reason (a DOA drive that wasn’t even detected – fine, that happens, but it took them a month to replace it after they received my RMA, while giving me excuses and lying in the meantime). So I won’t be getting OCZ again, for that reason. But the BSOD issue seems resolved for me, and I’m happy with my replacement drive now.

      • MadManOriginal
      • 8 years ago

      It was a SANDFORCE problem and affected literally every SANDFORCE-based drive; it was not an “OCZ problem.” Why are people on tech sites so stupid about this?

        • willmore
        • 8 years ago

        Because the problem wasn’t just the technical issue; it was the customer-service issue of OCZ downplaying it, refusing RMAs, dragging their feet, lying, etc. That’s the “OCZ issue”.

        I’ve not seen the issue on my setup, but then again, I’m using an old P45-chipset motherboard, not anything new that was showing the issue.

      • tootercomputer
      • 8 years ago

      I had a Vertex 3, and despite downloading that firmware “update” and several reinstalls of Win 7 64-bit, I kept getting blue screens. Ended up getting a Kingston HyperX; I’ve had one BSOD, and that was it. It has been working reliably now for two months. I was fortunate in that I was able to return the Vertex 3.

      • technoguru
      • 8 years ago

      I’ve had a Vertex 3 for 4 months now and never had any issues, so to all who complain about OCZ: every SSD maker seems to have problems and issues, so STOP COMPLAINING!

        • just brew it!
        • 8 years ago

        Most people seem to be complaining more about their customer service than the actual product.

      • hansmuff
      • 8 years ago

      Stop ragging on OCZ, a company that has a LONG, LONG history of putting out bad products, not supporting them properly, and not handling RMAs properly?

      Power supplies, memory, SSDs: OCZ has always had huge problems. I don’t understand how anyone can white-knight this anti-consumer outfit.

        • Firestarter
        • 8 years ago

        But, think of the benchmarks! These bars here are this *gestures hands* much longer than those other bars!

        *empties wallet*

    • indeego
    • 8 years ago

    Has Samsung been impacted yet? Not the fastest, but I haven’t heard much.

    Still haven’t been burned by Samsung, Intel, or Crucial, so I’ll stick with them.

      • beck2448
      • 8 years ago

      SandForce SF-2282 controllers, which are a little more expensive, work great on Macs.
      The OWC Mercury Extreme Pro 6G 240GB drives are fast and great. No problems at all since Apple certified them in September.

    • Rakhmaninov3
    • 8 years ago

    “Old Faithful” has been chugging along with its old-school WD Caviar for 4 years. I was thinking about switching to an SSD if I could save up the money, but the problems cropping up with them make me hesitate. The computer is still plenty fast enough for anything I have to do; I was just thinking it’d be a good bang-for-the-buck upgrade. But I need it to work more than I need it to work REALLY fast.

    Sometimes it’s best not to fix what isn’t broken lol

      • Firestarter
      • 8 years ago

      How about a Samsung SSD? They have the resources to test an SSD silly. Although they could also do the mega-corporation jiggy and put their collective heads in the sand if an issue should crop up.

      • Sunburn74
      • 8 years ago

      Isn’t it generally accepted that SSDs are more reliable than hard drives over the long term? I realize some SSDs have bugs, but for the most part these bugs affect a few people, and generally people with unusual or outdated parts. Even with the Vertex SandForce BSODs, you’d be surprised how many weirdo chipsets were being used by a lot of the guys on the forums experiencing issues: “I have some 0.9xx SATA card made in Russia in 1976 and my Vertex is crashing.” Even this 5,000-hour BSOD is very strange, imo. 5,000 hours is a hell of a lot of hours of use, and it honestly seems to hint at some very unusual usage patterns being part of the cause. Not saying Crucial is excused at all from providing a stable, solid product even at that level of use; I just wonder whether this BSOD is affecting your regular Joe Blow user or your extreme power users.

      I personally have owned 2 Intel X25-M SSDs for 3 years. In that three-year span I replaced my conventional storage hard drive 3x. My Intel SSDs have never had a problem.

        • [TR]
        • 8 years ago

        I wouldn’t say 5,000 hours of “on time” is unusual or strange. 5,000 hours of write/read or otherwise constant SSD operation, yeah. A few cases may hit that after the 208 24-hour days it breaks down to, but if they’re getting it before you do, there’s nothing to say you won’t get it just because you’re not using your drive as intensively. It may take you longer to get there, but it would be nice not to have a brick wall waiting for you when you do.

        • Deanjo
        • 8 years ago

        [quote]Isn’t it generally accepted that SSDs are more reliable than hard drives over the long term?[/quote]
        Only by the few that have never looked at reliability data comparing the two. All the reliability data so far points to SSDs and mechanical drives having equal failure rates overall. SSDs being more reliable than mechanical drives has turned out to be a myth so far.

        Sidenote: 5,000 hours actually seems kind of low to me, considering my main boot drives are a couple of Maxtor 6L250S0 DiamondMax 10 drives from late 2005, running in RAID 0 with 49,190 hours on each of them and zero failures.

          • Malphas
          • 8 years ago

          Yeah, I think the reason for that might be in your own post: the part where you said “from late 2005” with regard to your drives’ ages. Contrast that with the fact that the M4 came out last year, and it might explain the difference in runtimes.

          If you meant it seems too low to start having problems, then you’re missing the point; besides the fact that it’s an entirely different technology from hard drives, this isn’t even a case of a use-related age issue. It’s a bug in the firmware where, after the SMART data hits 5,200 hours, it triggers something (probably some rounding error) in the programming, which could happen at any arbitrary usage time if the code were slightly different. It has nothing to do with becoming unreliable from usage.

            • Deanjo
            • 8 years ago

            I still find 5,000 hours a year on the side of “low”; after checking all my systems, they all run about 8,000 hours per year, so back to his original concern about “unusual usage patterns”.

            • Malphas
            • 8 years ago

            What sort of bizarre logic is that? A year only has 8,760 hours in it, so to consider anything below 8,000 to be low is absurd. Not only that, but the M4 hasn’t even been out a year, so everyone hitting this 5,200-hour bug has basically been running their drive 24/7 since they bought it; there’s nothing low-usage about it.

            Like I said, if you meant it’s a bit early to start seeing issues, then you don’t understand the problem, since it has nothing to do with typical drive deterioration.

            • just brew it!
            • 8 years ago

            A lot of computers are left on 24×7. It is entirely reasonable to expect a computer to be able to go more than a year without a major hardware failure. So yes, I’d say 5000 hours is “low” for a device intended to be used in a computer…

            • dpaus
            • 8 years ago

            FWIW, we expect our systems – mostly built with consumer-grade components – to operate in the field 24/7 for 3 years. And they mostly do.

            • just brew it!
            • 8 years ago

            That’s more or less in line with my own expectations as well. If a particular brand of component seems to be consistently failing in under ~3 years, I generally blacklist them until I hear something to indicate that they’ve gotten their act together. That’s why I stopped buying Maxtor DiamondMax 9 drives back in the day, and currently try to avoid Thermaltake HSFs (I’ve owned several where the fans wore out after about a year), Thermaltake PSUs (I’ve had multiple TR2s die on me), and LiteOn optical drives (their QA seems to have slipped these past few years).

            • Malphas
            • 8 years ago

            Did you even read what I said? It’s not wear-related, so the amount of time the drive has been running is irrelevant – it’s a firmware bug. This problem could just as easily have happened at 500 hours instead of 5,200. But you can’t say these M4s have surprisingly low runtimes when they’ve logged almost the maximum number of hours possible.

            • just brew it!
            • 8 years ago

            Did you even read the forum thread at Crucial’s web site that was linked from the news post?
            [quote="Crucial"]This issue occurs after approximately 5,000 hours of actual “on time” use.[/quote]

            • Malphas
            • 8 years ago

            Yes, and I was aware of it before this news post existed. I also actually have a bunch of drives that have been running 24/7 since April, and all hit this error, which further highlights my point, which evidently you can’t seem to grasp.

            For a start, this isn’t something that happens at roughly 5,000 hours of use; it’s something that happens at [i]exactly[/i] 5,200 hours. That’s a clear indication it’s a firmware error, not something happening from general wear. Thus what I meant by it happening just as easily at 500 hours: if that were the particular integer the firmware had problems with, as opposed to 5,200, the bug would show up then instead.

            Deanjo rather stupidly made a comment about how 5,000 hours seemed low to him since he had several drives from 2005 with 10,000 hours. I was pointing out what a ridiculous comparison this is, since the M4s with this issue have practically the maximum number of hours possible, given how long ago they were released. I really don’t know how many times I can repeat myself, or how much simpler I can say it. I’m pretty much at a loss if you still don’t understand.

            • just brew it!
            • 8 years ago

            Ahh, OK. Completely misunderstood the bit about it happening “just as easily at 500 hours”. Mea culpa.

      • ludi
      • 8 years ago

      If you’re worried, pay the premium and get an Intel drive. AFAIK they’ve had one major bug (the 8MB crash), but (a) you [i]do[/i] back up your data, right? And (b) you know Intel will generally use the best parts on the market and support them for a very long time. The upgrade is worth it -- Windows boot and resume times will be cut way back, and any programs and files loading off the SSD will simply pop open the way they do on a good phone. It’s almost unsettling the first time you experience it.

    • yogibbear
    • 8 years ago

    LOL. This was the only SSD I respected, and I was going to use one in my next build…

      • Walkintarget
      • 8 years ago

      Shoulda went with OCZ !!!

      Hahaa, I kid …. seriously … no, don’t hit me with that heavy book !!!

      • slash3
      • 8 years ago

      Wait until the 17th, then pick one up. 🙂

        • [TR]
        • 8 years ago

        If I were him, I’d wait a bit longer to see how the patch turns out.
        While SSDs are cheaper now, they’re still a sizeable investment. I wouldn’t buy one without an expectation of error and lifetime numbers on par with an HDD. (Which is why I haven’t seriously considered buying one, so far.)

      • LoneWolf15
      • 8 years ago

      Check Samsung. Intel should be okay, too.

        • hansmuff
        • 8 years ago

        Samsung SSDs look really good. I went with the Intel 320, love it, but wouldn’t hesitate to try out one of the Samsung drives. They are very big in OEM with their SSDs, which is a good sign.

          • Firestarter
          • 8 years ago

          Between the Crucial M4 and the Samsung 830, I’d buy whichever is cheaper.
