
Samsung adds ultrabook PCIe SSD with 1400MB/s read speed

The next wave of ultrabooks won’t just get a performance and battery life boost from Intel’s Haswell processors. They may also have substantially quicker storage, thanks to solid-state drives like Samsung’s new XP941 SSD.

The XP941 is a PCI Express solid-state drive. It’s got a peak read speed of 1400 MB/s, more than twice the 600 MB/s ceiling of the 6Gbps Serial ATA interface. And this is no jumbo-sized solution: it has a scant 80 x 22 mm (3.2″ x 0.9″) footprint and weighs just six grams. To put things in perspective, Samsung says the XP941 takes up 1/7th the volume and weighs 1/9th as much as a standard 2.5″ SSD.
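To put those peak figures in concrete terms, here’s a quick back-of-the-envelope comparison. The rates are the quoted sequential maximums; real-world transfers will come in lower.

```python
# Rough comparison of how long a large sequential read takes at each
# interface's peak rate. Figures are the article's quoted maximums;
# real transfers will be slower due to protocol and controller overhead.
SATA3_MBPS = 600    # 6Gbps SATA ceiling, in MB/s
XP941_MBPS = 1400   # Samsung's claimed peak read speed, in MB/s

def seconds_to_read(size_mb, rate_mbps):
    """Time in seconds to read size_mb megabytes at rate_mbps MB/s."""
    return size_mb / rate_mbps

# Reading a 10 GB file (10,240 MB):
sata_time = seconds_to_read(10_240, SATA3_MBPS)   # ~17.1 s
xp941_time = seconds_to_read(10_240, XP941_MBPS)  # ~7.3 s
print(f"SATA3: {sata_time:.1f} s, XP941: {xp941_time:.1f} s")
```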

128GB, 256GB, and 512GB versions are available. The XP941 has entered mass production and has been shipping to “major notebook PC makers” since “earlier this quarter,” Samsung adds. That sounds like a roundabout way of saying shipments began some time within the past 17 days. The company hasn’t revealed pricing, but I assume the XP941 isn’t quite as affordable as SATA (even mSATA) solutions.

This may not be the first speedy PCIe SSD to hit ultra-slim notebooks. As AnandTech reported a few days ago, Apple’s Haswell-powered MacBook Air laptops boast PCIe storage that pulls off peak read and write speeds in the 700-800MB/s range. That’s a fair bit slower than Samsung’s claimed performance figures for the XP941, but it’s still faster than 6Gbps SATA storage.

28 responses to “Samsung adds ultrabook PCIe SSD with 1400MB/s read speed”

  1. I hope SSD manufacturers don’t jump the gun and start rushing PCIe SSDs into production. That could muddle things up when SATAe hits. Cool nonetheless, though.

  2. PCIe flash is bootable. Older PCIe-based flash devices didn’t have the proper BIOS mappings, requiring kernel drivers to make use of them (and were therefore not bootable). Even some newer units on the market don’t bother trying to boot.

    PCIe flash is a complete solution for flash-based storage, completely removing the legacy of SATA along with it. If this stuff weren’t bootable, Apple wouldn’t have it in their upcoming Mac Pros (which use Samsung controllers, fyi).

  3. [quote<]My third thought was 'damn, so now I need to figure out how to RAID eight of them together!'[/quote<] Chicks, or SSDs? Because I'd be fine with either...

  4. CPUs are not the limiting factor in OSes; it’s mostly the resources surrounding them.

    Think of a processor + ram + disk setup like a juggler:
    – Cores = arms to juggle with
    – RAM = amount of things to be juggled
    – Disk = how fast you can turn those arms

    Modern OSes were built to handle all the cores and RAM you can cram into the machine. The largest limitation is *usually* disk and SSDs generally solve this problem.

    There are cases where significantly faster SSDs don’t necessarily yield much better results, yes (Windows boot times can only get so fast). These are usually due to file systems, block sizes, and SOME software assumptions about disk I/O (i.e., don’t read TOO much at a time, or the perf hit is too large). It will take little time for PCIe-based flash to make even the current SSD limitations (namely 4K reads/writes) a thing of the past.
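The 4K random reads mentioned above are easy to time yourself. This is a minimal sketch, assuming a Unix-like OS with `os.pread`; the file path and iteration count are placeholders, and the OS page cache will inflate the numbers unless you bypass it.

```python
# Minimal sketch of a 4K random-read timing loop, the access pattern
# discussed above. Results will be cache-inflated on a warm file.
import os
import random
import time

def time_4k_random_reads(path, count=1000):
    """Return reads/second for `count` 4 KiB reads at random aligned offsets."""
    size = os.path.getsize(path)
    blocks = size // 4096
    fd = os.open(path, os.O_RDONLY)
    try:
        start = time.perf_counter()
        for _ in range(count):
            offset = random.randrange(blocks) * 4096
            os.pread(fd, 4096, offset)
        elapsed = time.perf_counter() - start
    finally:
        os.close(fd)
    return count / elapsed
```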

  5. [quote<]But even so, the real limitation we're running into with fast SSDs on SATA3 is [i<]accessing[/i<] the CPU.[/quote<] FTFY If it were the CPU itself being the bottleneck then the following things, among others, would be pointless: [list<][*<]Faster RAM (RAM is MUCH faster than an SSD) [/*<][*<]10GigE NICs (I have servers with 3 x dual-port 10GigE NICs) [/*<][*<]Multichannel RAID controllers [/*<][*<]Multi-GPU configurations[/*<][/list<] Otherwise, they would be limited to the same speeds we're seeing out of SATA-based SSDs right now.

  6. [quote<]...My second thought was 'damn, why'd you blur out the chick? I like chicks!' My third thought was 'damn, so now I need to figure out how to RAID eight of them together![/quote<] Man, you are bad.

  7. [quote<]since "earlier this quarter," Samsung adds. That sounds like a roundabout way of saying shipments began some time within the past 17 days[/quote<] This quarter started on April 1st - not June 1st. EDIT: Nevermind - the nameless one already pointed this out.

  8. My first thought was ‘damn, cool DoF effect’ /photography geek
    My second thought was ‘damn, why’d you blur out the chick? I like chicks!’
    My third thought was ‘damn, so now I need to figure out how to [b<]RAID eight of them together![/b<]' So we need a PCIe x16 3.0 adapter that can take four of these on each side, giving each one its own set of two PCIe 3.0 lanes. That should actually be fairly cheap to produce if there's any demand, which I could see coming from the server market. I just want >10GB/s storage throughput. Thanks!
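The arithmetic behind that eight-drive idea checks out. The sketch below assumes roughly 985 MB/s of usable bandwidth per PCIe 3.0 lane (8 GT/s with 128b/130b encoding):

```python
# Back-of-the-envelope check on an eight-drive RAID of XP941s, with
# each drive on its own PCIe 3.0 x2 link as suggested above.
PCIE3_MBPS_PER_LANE = 985   # approx. usable MB/s per PCIe 3.0 lane
XP941_PEAK_MBPS = 1400      # claimed peak read speed, MB/s
DRIVES = 8

per_drive_link = 2 * PCIE3_MBPS_PER_LANE   # x2 link: ~1970 MB/s, not a bottleneck
aggregate = DRIVES * min(XP941_PEAK_MBPS, per_drive_link)
print(f"Aggregate peak: {aggregate / 1000:.1f} GB/s")  # ~11.2 GB/s
```

So eight drives at their claimed peak would indeed clear the 10 GB/s mark, with each x2 link leaving headroom over the drive itself.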

  9. It’s a storage device; I’d think it would just need to be accounted for in the BIOS (UEFI), which would be customized for any board/system that has a slot for one of these to plug into.

  10. Boot times are probably more limited by CPU time and device initialization when booting off of an SSD.

  11. Gigabyte has a bunch of motherboards with an mSATA connector. My guess would be that they’re going to be the first to support the M.2 form factor.

  12. Me too.

    I’d like to see if anyone buys one of these and a Mini PCIe to desktop PCIe converter.
    SSDs were already bumping up against the SATA III spec at pretty much the same time as SATA III became commercially available.

  13. That is the whole point, I think!

    Well, most of it. USB isn’t going away anytime soon, nor is Ethernet, and probably a couple of other ‘legacy’ standards, and it seems pretty silly to attach these to a Thunderbolt controller when they could be attached directly to PCIe.

    But further reducing the surface area of the chipsets alongside the number of pins and layers needed is definitely a primary goal.

  14. But even so, the real limitation we’re running into with fast SSDs on SATA3 is the CPU; we’ve reached the point of diminishing returns when increasing SSD speed.

    It seems that the real limitation is actually software based. We need developers to be more storage speed conscious to actually take advantage of these faster storage devices!

  15. Thankfully it doesn’t matter much.

    But it is hilarious that a $300 Nikon D3000/18-55 kit is out of reach for product photography. A tripod ($30), wired remote release ($15), and some lights (scrounged) is all you really need.

    And if you’re using a nicer camera with a ‘macro’ lens with razor-thin depth of field, you can still use focus stacking across any number of shots to get a pristine product shot.

  16. IOPS are limited by both the controller and the interface. SATA has about 10x-100x more latency than PCIe, so there’s a lot more IOPS potential with PCIe-based devices, which may actually drive demand for higher IOPS since SATA isn’t getting in the way.
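The latency/IOPS relationship here follows from simple queueing: at a given queue depth, achievable IOPS is bounded by queue depth divided by per-operation latency. The latency figures below are illustrative placeholders, not measurements of any particular device.

```python
# Simple upper bound relating per-op latency to IOPS:
# IOPS <= queue_depth / latency.
def max_iops(latency_s, queue_depth=1):
    """Upper bound on IOPS for a device with the given per-op latency."""
    return queue_depth / latency_s

sata_path_latency = 100e-6  # ~100 us, illustrative SATA-path latency
pcie_path_latency = 10e-6   # ~10 us, illustrative PCIe-path latency

print(max_iops(sata_path_latency))  # ~10,000 IOPS at QD1
print(max_iops(pcie_path_latency))  # ~100,000 IOPS at QD1
```

A 10x latency reduction translates directly into a 10x higher IOPS ceiling at the same queue depth, which is the comment’s point.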

  17. [quote<]I'm continually annoyed by DoF in product (CPU/GPU/MB/etc.) pictures where the area in focus is, for example, a surface mounted resistor.[/quote<] If you're shooting at macro distances, DoF is pretty shallow even when stopped down. And the photographer may be restricted by available light (esp at trade shows), shutter speed and/or diffraction limit. I do agree with you in principle, and not all tech journalists (or even product photographers) are great photographers, but there are mitigating circumstances.

  18. Latency would decrease to almost nothing with a direct connection to CPU, and overhead decreases significantly as well. That’s gotta do something for boot times.

  19. The area immediately above her index and middle fingers makes me wonder if that wasn’t done with Photoshop rather than a long, fast lens.

  20. What’s interesting is it’s already being put to use in the first Haswell laptops, so by Broadwell, the southbridge could be an optional “legacy” controller.

    The ULV version could have a few USB ports integrated into the CPU like Atom and Jaguar. Any of them could just tack on an external USB controller.

    Storage, wi-fi, Thunderbolt, and graphics cards can all connect directly to the CPU. No mobile or all-in-one computer would need anything else in 2014.

    It’s questionable whether a high end desktop would, either. Sandy Bridge E already had a few extra PCIe lanes reserved specifically for storage.

  21. If the 128GB version also has the performance characteristics mentioned here (often with SSDs the quoted performance is for the top-end, highest-capacity version), then it would make an awesome OS and application drive.

    Shame they’re not making small PCIe cards for desktops with these on, really.

  22. They would probably be almost identical. The only change is peak theoretical throughput, not IOPS (unless those have changed, too, but I don’t see any mention of them).

  23. Maybe it’s also that many of the websites that post hardware photos don’t invest in fast (read expensive) enough lenses to pull off a decently stark depth of field effect. Maybe, with their slower glass, the only way to get any sort of bokeh with the field depth they’re working with is to focus in on a much smaller subject?

  24. So why the switch? Is it power, cost, or size? I have a hard time believing Apple would make the switch to PCIe on an ultraportable based solely on the increased performance.

    Or is the extra performance needed for the aggressive memory management and background application pausing shown off for 10.9?

  25. Couldn’t the quarter be a calendar quarter, and that statement mean sometime since April 1st?

  26. Depth of field used properly in a tech photo.

    I’m continually annoyed by DoF in product (CPU/GPU/MB/etc.) pictures where the area in focus is, for example, a surface mounted resistor.

  27. I’m assuming it’s basically the same SSD that Samsung makes OEM for Apple for the Air and Mac Pro.

  28. My Samsung 840 pro is already blazing fast. I can’t even imagine boot times and application launch times with one of these!