Windows 10 April 2018 Update doesn’t play ball with some Intel SSDs

There's a sizable contingent of buyers (particularly in business environments) that favors Intel hardware. Indeed, people have frequently swapped Intel into IBM's place in the old adage “nobody ever got fired for buying IBM.” Modern hardware and software are so complex that issues can crop up almost anywhere, though. Case in point: machines equipped with Intel's 600p and Pro 6000p SSDs are unable to upgrade to the Windows 10 April 2018 Update due to a known compatibility issue.

Microsoft doesn't explain exactly what the problem is, but it manifests as an inability to boot into Windows. Some systems will reportedly drop into their UEFI setup as if there were no bootable device, while others will crash and restart repeatedly. When the company initially announced the issue two days ago, it was vague about exactly which Intel SSD models were affected. The company has since updated its support article to explain that only 600p and Pro 6000p SSDs are involved. Tom's Hardware confirmed the issue directly with Intel, too.

It's not clear whether the problem lies with Microsoft or Intel. Either way, Microsoft says it's working on a patch that will allow affected users to install the latest Windows feature update. Right now, affected machines are blocked from installing the April update, whether automatically or manually. Presumably the patch will be delivered by Windows Update, so don't be surprised if you sit down one day and find that your Intel SSD-equipped Windows 10 system has been updated to version 1803 without your intervention.
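
If you'd rather not be caught off guard, one quick way to check which feature update a machine is on is the ReleaseId value in the registry. Here's a minimal sketch (Windows-only, standard library; ReleaseId is the stock value on builds of this era):

    import winreg

    # Read the Windows 10 release (e.g. "1709" or "1803") from the
    # standard CurrentVersion registry key.
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
                        r"SOFTWARE\Microsoft\Windows NT\CurrentVersion") as key:
        release, _ = winreg.QueryValueEx(key, "ReleaseId")

    print("Windows 10 release:", release)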

Comments closed
    • HERETIC
    • 1 year ago

    Add Toshiba to the list:
    [url]https://wccftech.com/intel-windows-10-1803-incompatible-toshiba-ssds/[/url]

    • freebird
    • 1 year ago

    Found my main PC inaccessible shortly after midnight this morning… upon rebooting (and seeing updates applying), I could still boot to my NVMe drive, but my RAID 0 array (2x2TB Toshibas) was inaccessible. This has happened before, and booting into safe mode and shutting down properly fixed it then. This time it took about four hours (along with several failed system restores) to get things working again. Then I made the fatal mistake of updating to the latest RAID driver version, thinking that would help… but now it won't even boot into safe mode when I enable RAID in the BIOS (which it did before). Luckily I have a several-months-old copy of everything on the RAID drives on another 5TB drive. I can't believe that with all the system recovery/boot options, RAID is still such a bugaboo for Windows to set up and work with.

    On top of that, every major Windows 10 update seems to really screw up at least one of my four PCs.

    • Chrispy_
    • 1 year ago

    Intel isn't flawless; it has made mistakes and shipped firmware bugs in the past just as often as other brands.

    The difference with Intel is that they'll acknowledge the problem, spend the resources and manpower trying to fix it, and likely come up with a permanent update/fix within a week or two. I wish the same could be said of their competition from Samsung and Crucial.

      • Ninjitsu
      • 1 year ago

      Crucial sells products in India but doesn't honour the warranty… despite its reps claiming in online support forums that it does. That's the last time I'm buying Crucial…

    • Klimax
    • 1 year ago

    Interestingly, I've got a 600p in an HP ZBook (Skylake gen) and no issues. Maybe there's some odd interaction between the UEFI, driver, drive, and OS.

    • Thresher
    • 1 year ago

    I have a 512GB Intel 600p.

    It wouldn't update. I really didn't think much of it; I figured it was something that would be worked out.

      • romo
      • 1 year ago

      I don’t have the SSD and it wouldn’t update.

      I lost the Start button/menu and received a ‘critical failure’ error when attempting to click on it. So I looked around; the most commonly cited cause is a Windows that isn't up to date.

      Okie doke! I installed the Windows update through the desktop updater. The computer won't boot for shit now. I thought my HDD had failed, tbh. There's still a small shot that's it, lol, but it was quite the relief when I saw this article.

      Originally, when I tried to update, it wouldn't work. I figured it was Windows 10 (which I like, not throwing shade). The computer begins to boot up, everything comes to life, and then Windows won't load/update.

    • Shobai
    • 1 year ago

    Well, this might explain why a machine at work, with a 6000p drive, began failing to boot mid-week. It doesn't appear in the drive list of the Asrock mobo's UEFI, but it has two entries in the boot device list (the original entry, which fails to boot, and a seemingly new “Windows UEFI something-or-other” entry, which boots successfully but must be manually selected at each boot).
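
    For anyone untangling a similar mess, the firmware boot-entry list can be dumped from within Windows via bcdedit's firmware namespace. A minimal sketch, assuming Python is handy; the real work is done by the bcdedit /enum firmware command, which needs an elevated prompt:

        import subprocess

        # "bcdedit /enum firmware" lists the UEFI boot-manager entries,
        # including duplicates like the one described above.
        result = subprocess.run(
            ["bcdedit", "/enum", "firmware"],
            capture_output=True, text=True, check=True,
        )

        # Print each entry block; the parsing here is deliberately naive.
        for block in result.stdout.split("\n\n"):
            if block.strip():
                print(block)
                print("-" * 40)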

    • not@home
    • 1 year ago

    My Acer Swift 3 has an Intel 600p in it. Good thing I haven't turned it on in a month. It's too bad I can't turn off auto updates. Microsoft, get your head out of your…

      • odizzido
      • 1 year ago

      Why should they? You're already on 10; they know they can do whatever they want.

      • Klimax
      • 1 year ago

      I am pretty sure you are part of the reason why Microsoft had to resort to forced updates.

      BTW: there is a supported way to have auto-updates disabled. If you knew what you were doing, you'd already know about it…
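
      Klimax doesn't spell out which mechanism he means; the documented “Configure Automatic Updates” Group Policy is one such route, and the sketch below pokes its registry backing store directly (assuming that's the route intended; run it elevated, and note that Home editions may ignore these policy keys):

          import winreg

          # Registry path behind the "Configure Automatic Updates" policy.
          AU_PATH = r"SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU"

          with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, AU_PATH, 0,
                                  winreg.KEY_SET_VALUE) as key:
              # AUOptions = 2 -> "Notify for download and notify for install";
              # NoAutoUpdate = 1 would switch automatic updating off outright.
              winreg.SetValueEx(key, "AUOptions", 0, winreg.REG_DWORD, 2)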

      • UberGerbil
      • 1 year ago

      And it would have made no difference; you would have been fine. Did you even read the article? You would not have received this update, because [quote]Right now, affected machines are blocked from installing the April update, whether automatically or manually.[/quote] In other words: there was a problem, Microsoft caught it in testing, and prevented it from being an issue in the field. But your machine is vulnerable to anything that has shown up in the past few months until it gets caught up on all the other updates, so you have that going for you, yay.

      • dyrdak
      • 1 year ago

      Just set your connections to metered and forget about updates (until you trigger them manually).
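
      For the curious, the per-media-type metered flags live under the NetworkList\DefaultMediaCost registry key (1 = unmetered, 2 = metered). A read-only sketch follows; actually flipping the Ethernet value requires taking ownership of the key from TrustedInstaller, and Wi-Fi networks are normally marked metered per-profile through Settings instead:

          import winreg

          # Where Windows keeps the default metered/unmetered cost per media type.
          PATH = (r"SOFTWARE\Microsoft\Windows NT\CurrentVersion"
                  r"\NetworkList\DefaultMediaCost")

          with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, PATH) as key:
              for name in ("Ethernet", "WiFi", "3G", "4G", "Default"):
                  try:
                      value, _ = winreg.QueryValueEx(key, name)
                      print(name, "metered" if value == 2 else "unmetered")
                  except FileNotFoundError:
                      print(name, "not set")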

    • Wilko
    • 1 year ago

    Unrelated to the SSD issue, but this still cropped up from this month's batch of Windows updates. A Windows 10 laptop at work booted up to the friendly “working on updates” screen. Once at the log-in screen, it refused to accept the normal password. It only worked after rebooting. Checking the version, it's on 1709 now. Does rebooting upon log-in failure after an update roll it back?

    • rnalsation
    • 1 year ago

    This just reaffirms my hatred of both Intel SSDs and Windows 10.

      • nerdrage
      • 1 year ago

      Why do you hate Intel SSDs?

        • curtisb
        • 1 year ago

        They're known for bricking themselves. We ordered 125 PCs a while back with dual SSDs. They all came in with Intel Pro 2500s. We had repeated issues with one drive or the other disconnecting from the controller, which put the RAID1 array into a degraded state. Sometimes you could get it to come back on a reboot, and it would rebuild. Other times you had to destroy the array and recreate it… which meant a reinstall. And still other times the drive just wouldn't respond. It wouldn't even show as an Intel drive… you could only see the Sandforce controller. We went round and round with the manufacturer of those PCs (I'm not naming them because it wasn't their fault).

        Intel finally acknowledged the problem and created a firmware fix… which spent 6+ months in beta. And they wouldn't give it to us to beta test for them. I told them it couldn't get any worse than it already was… still no go. At that point I tried to get the manufacturer to just send out 250 replacements so we could get back to business, but they kept being told “next week” every time they asked Intel. We finally got Intel to agree to give it to us if we signed an NDA. I also got the manufacturer to agree to replace both drives in a PC if only one was bricked… normally they only replace the bricked drive.

        This still meant that we had to go to all 125 of those machines to manually upgrade the firmware… which involved changing the SATA controller from AHCI/RAID to Legacy, applying the firmware, resetting the RAID controller, and then rebooting the PC again. I think in total we still ended up replacing 20-30 drives anyway.

        Fortunately, we haven’t had any problems with those drives since then. Not something I ever want to go through again.

          • Dposcorp
          • 1 year ago

          “We went round and round with the manufacturer of those PC’s (I’m not naming them because it wasn’t their fault).”

          Yeah, I’d stop buying Packard Bell.

          • Hsew
          • 1 year ago

          That NDA… It DID expire, right??

            • curtisb
            • 1 year ago

            The firmware is publicly available now.

          • Chrispy_
          • 1 year ago

          Ugh, that sucks.

          Hard to put too much blame on Intel though: that was a Sandforce drive with an Intel sticker, and Intel got burned by Sandforce just as badly as all the other vendors. 6+ months is way too long, but at least Intel did actually manage to get you a fix in the end.

          AFAIK, no other vendor managed to write a workaround for the bugs in Sandforce controllers, and the fact that Intel even tried has to count for something.

          Out of curiosity, why the dual SSDs? If this was for RAID1, I'd assume the redundancy would be better implemented at the network storage level, and if this was for RAID0 performance, then the Intel Pro 2500 is a very odd choice!

            • curtisb
            • 1 year ago

            Oh I agree with you that it definitely was a Sandforce issue. However, Intel drives disconnecting from the system has been a known issue for several generations. They had ample opportunity to move to another controller. 🙂

            I also agree that Intel did right by ultimately fixing the issue. They could have just as easily told us the drives were no longer supported and told both us and the system manufacturer to get stuffed. Then the system manufacturer would have been on the hook for providing a fix (i.e. new drives) since the systems were (and still are) under a warranty contract. We did A LOT of troubleshooting to prove the drives were the problem.

            [quote]Out of curiosity, why the dual SSDs?[/quote]

            For RAID1. We've been doing that for years. Never underestimate the ability of an end user to put important files somewhere other than on the network storage. 🙂 We do tell them that it's on them if the files are lost, since we provide them with plenty of redundant/backed-up storage on the network, but it really doesn't cost that much more to add another drive to the desktop machines to help the situation. Unless of course you end up with a situation like that one… but those are definitely out of the norm.

            Plus, it's a productivity thing. Hard drives are the components that tend to fail the most. With a RAID1 setup, the end user can still use their machine while we RMA the failed drive.

            [quote]Intel Pro 2500 is a very odd choice![/quote]

            The drives weren't our choice. We never know who the drive manufacturer is until the machines are delivered. The next set of the same PC configuration we ordered came in with SK Hynix drives.

            Years ago we ordered some machines with dual 320GB 7200RPM spinners. Half of the drives were Toshiba and half were Seagate. The really odd thing was that it was one of each in each system. Essentially, the base system was in the warehouse with one drive type, and they added the other drive to match our order… they just grabbed whatever was stocked on the shelf at the time. We matched 'em up after they were delivered.

            • Shobai
            • 1 year ago

            [quote]We matched 'em up after they were delivered.[/quote]

            While you're on a roll, what was the reasoning behind this? I don't have any experience with RAID in production systems, but I recall reading about companies doing the opposite (i.e., using one each of two different drives) to try to avoid getting hit by lightning twice.

            • curtisb
            • 1 year ago

            It may or may not have had something to do with my “mild” OCD. 🙂

            I don't have anywhere near full-on OCD, but certain things do just have to be a certain way. I fully realize that when RMA'ing a drive with an OEM, you are more than likely going to get something different from the rest of the array, even if it's just a different model from the same drive manufacturer. I've even had 2.5″ drives sent to replace a 3.5″ drive in a SAN. I just feel better if I start with them all the same.

            I've always tried to keep drives matched in RAID arrays. I know it's not required, but performance characteristics differ between drive manufacturers, and the one drive with a slower seek time can slow down the entire array.

            I can see a case being made for at least getting drives from different production runs because of failure concerns. I've actually been through that situation before. Our very first iSCSI SAN had 48 Seagate Constellation drives. I had my reservations, but normally the Constellation line isn't as bad as the Barracuda line. After we RMA'ed the 12th or so drive over a three-year period, we got the SAN OEM to replace every drive that hadn't already been replaced. Fortunately, none of the failures resulted in data loss. Each shelf was configured with RAID50 and two hot spares.

            Hell, I'll name the OEM on this one because they did us a solid. It's a Dell EqualLogic SAN consisting of 3 x PS6000Es. They shipped out Hitachi drives for that one. It still has a few Seagate drives that were sent to replace the original failures. That SAN isn't used in production anymore, and hasn't been for quite some time. It's been pretty solid since the rest of the drives were replaced, though.

            • Shobai
            • 1 year ago

            Thanks for the reply!

          • jihadjoe
          • 1 year ago

          When TR reported that self-bricking behaviour back in the SSD endurance experiment, I kinda thought of Intel as having done a lazy port of its server-oriented SSD firmware. Sure, in a server where everything is redundant, making the drive brick itself might be better than potentially continuing to write corrupt data, but it's disastrous in a consumer environment.

      • odizzido
      • 1 year ago

      Yup. Forced updates and drives that brick themselves. Not a lot to like.

    • chuckula
    • 1 year ago

    It’s all part of the extended Meltdown/Spectre security strategy.

    If the system can’t boot… you can’t hack it!

      • crystall
      • 1 year ago

      A broken system is a safe system.

        • JustAnEngineer
        • 1 year ago

        “We made this [s]village[/s] [i]system[/i] safe for democracy!”

      • Srsly_Bro
      • 1 year ago

      Did you have any input on that? I'll assume yes. Nice job, Chuck. Good work, indeed.

      • Welch
      • 1 year ago

      [url]https://imgflip.com/i/2a3p90[/url]

    • JosiahBradley
    • 1 year ago

    Whew, I just upgraded to 1803 on an Intel 5-series SSD and forgot I had an Intel SSD until after the install.

      • ajayka
      • 1 year ago

      I have a pair of Intel 520s (250 GB) running in RAID 0 and upgraded to 1803. I had to downgrade to the previous version, 1709, as I ran into many issues after the upgrade. Some programs (installing Visual Studio 2017, updating games such as Overwatch, and so on) would randomly hang, requiring a forced reboot. A Windows reset failed (got stuck at 23%), and a complete installation of 1803 also failed (got stuck at 21%). I had no option except to reinstall Windows 10 v1709. Hope Microsoft/Intel fixes this soon.
