Seagate prepares a monster PCIe SSD for the data center

Seagate has an intriguing new toy to show off at this weekend's Open Compute Project Summit: what it claims is the world's fastest solid-state drive. To back up that bit of chest-pounding, the company says the drive delivers "throughput performance" of 10 GB/s. Seagate also says the drive is ready for production and should launch sometime this summer.

Any "world's fastest" claim is a tough one to verify, especially when Seagate isn't specifying whether that 10 GB/s figure describes reads or writes, nor whether it describes a random or sequential access pattern. Given that sequential figures are usually described in bytes per second, and that reads tend to be faster than writes with many drives, we suspect Seagate is reporting sequential read numbers.

There is an intriguing detail in Seagate's announcement that suggests how it's gaining that performance edge. The company says the drive can take advantage of 16 lanes of PCIe connectivity (albeit from an unspecified generation). Provided that the drive can actually saturate all those lanes, it could outpace many drives on the market. Seagate also has plans for an eight-lane drive, which it says runs at a slower but still impressive 6.7 GB/s.
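The arithmetic behind those lane counts is easy to check. Here's a minimal sketch using theoretical line rates and encoding efficiency only (it ignores packet and protocol overhead, and assumes the unspecified generation is PCIe 2.0 or 3.0):

```python
# Back-of-the-envelope PCIe throughput: raw line rate x encoding
# efficiency x lane count. These are theoretical maxima, before
# protocol overhead (packet headers, flow control, etc.).
GEN = {
    # generation: (line rate in GT/s per lane, encoding efficiency)
    2: (5.0, 8 / 10),     # PCIe 2.0 uses 8b/10b encoding
    3: (8.0, 128 / 130),  # PCIe 3.0 uses 128b/130b encoding
}

def gbytes_per_sec(gen, lanes):
    rate, eff = GEN[gen]
    return rate * eff * lanes / 8  # bits -> bytes

for gen, lanes in [(2, 16), (3, 8), (3, 16)]:
    print(f"PCIe {gen}.0 x{lanes}: {gbytes_per_sec(gen, lanes):.2f} GB/s")
```

By this math, sixteen lanes of PCIe 3.0 top out around 15.75 GB/s, leaving comfortable headroom for a 10 GB/s drive, while eight Gen3 lanes cap out near 7.88 GB/s, consistent with the 6.7 GB/s Seagate quotes for the eight-lane model.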

Seagate has datacenters and other business applications in mind for the new drive, and it touts the monster SSD's compliance with the Open Compute Project specifications. Depending on its price and real-world performance, though, the drive may also attract the interest of the enthusiast community. Maybe it's time to put those extra PCIe lanes on Intel's Skylake platform to work.

Comments closed
    • Krogoth
    • 4 years ago

    Before you people get too excited: the cards in question are geared toward enterprise-tier markets. They make no sense in a workstation or desktop system. You are going to be CPU-bound for any mainstream and most professional applications you can throw at them. Thunderbolt 3 and 10Gbps Ethernet don’t cut it for external transfers, and 40/100Gbps Ethernet is enterprise-tier only (optical).

    • UnfriendlyFire
    • 4 years ago

    How many racks of 15,000 RPM HDDs would it take to match the sequential and the random throughput?

      • derFunkenstein
      • 4 years ago

      Infinite, because the moment that more of what you need is on a single drive performance tanks, relatively speaking.

    • albundy
    • 4 years ago

    my old seagate 240GB ssd is still kicking. i dont think they’ve released anything after that for a few years…i think visiontek might have OEM’ed their drives at some point.

    • phileasfogg
    • 4 years ago

    If Seagate (which acquired SandForce in May 2014) used the same controllers that are used on M.2 SSDs (x4 lanes, Gen3 @ 8Gbits/sec), then they likely used a Gen3 packet switch to distribute the x16 Gen3 PCIe bandwidth across 4 EndPoints (EPs) each configured for x4 lanes.
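A quick sketch of the fan-out this comment describes (the x4 Gen3 controllers and the four-endpoint split are the commenter's guess, not anything Seagate has confirmed):

```python
# Hypothetical topology: one x16 PCIe 3.0 uplink behind a packet
# switch, feeding four x4 PCIe 3.0 endpoints (SSD controllers).
LINE_RATE_GTS = 8.0   # PCIe 3.0 line rate per lane
ENC_EFF = 128 / 130   # 128b/130b encoding efficiency

def lane_gbps(lanes):
    # theoretical GB/s for a Gen3 link of the given width
    return LINE_RATE_GTS * ENC_EFF * lanes / 8

endpoints = 4
per_ep = lane_gbps(4)             # each x4 endpoint, ~3.94 GB/s
aggregate = endpoints * per_ep    # matches the x16 uplink, ~15.75 GB/s
needed_per_ep = 10.0 / endpoints  # sustained rate each controller must
                                  # hit for the drive to reach 10 GB/s
```

Under those assumptions, each controller would only need to sustain about 2.5 GB/s of its ~3.94 GB/s link to hit the headline 10 GB/s figure.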

    Perhaps this Gen3 Broadcom (née PLX) switch?
    http://www2.plxtech.com/products/expresslane/pex8732

    • the
    • 4 years ago

    Speed is impressive but what capacities is this drive going to be available in?

      • chuckula
      • 4 years ago

      720K,
      1.44MB,
      and… wait for it… 20 TB.

        • derFunkenstein
        • 4 years ago

        Oddly enough, it made my floppy…not.

    • cmrcmk
    • 4 years ago

    I’m curious how long they can sustain that throughput. 10GB/s is trivial if it’s writing to a RAM cache like most storage arrays do. But if it is RAM cache, that throughput would only last for a second or two.

      • Visigoth
      • 4 years ago

      Agreed. Sustained performance is where it matters! I would love to see that SSD benched thoroughly to see if it stands up to its claim.

      • Pwnstar
      • 4 years ago

      But a couple of seconds is all you need if that’s what loads your program.

    • derFunkenstein
    • 4 years ago

    At 10 GB/s, isn’t PCI Express 2.0 basically eliminated as a possibility? The max theoretical transfer rate of a PCIe 2.0 lane is 5 GT/s, working out to around 7 or so GB/s (after encoding) for 16 lanes of PCIe 2.0.

      • derFunkenstein
      • 4 years ago

      Ah, maybe that’s why the eight-lane version maxes out at 6.7 GB/s. Or else I’m missing something totally obvious.

      • chuckula
      • 4 years ago

      Yeah, it would have to be PCIe 3.0. Theoretically 10 lanes would be right around 10GB/sec, but given the reality of a little overhead and the fact that PCIe lanes don’t divide out that way, it’s going to require 16 lanes in a real implementation.

      • Srsly_Bro
      • 4 years ago

      Mega Giga transfers? That must be excessively fast.

        • chuckula
        • 4 years ago

        Quadrillions and Quadrillions of transfers [/CarlSagan]

        • derFunkenstein
        • 4 years ago

        Yeah, my bad. I had it at 500 MT/s originally but the math didn’t work out and Wikipedia said I was wrong, so I went for the biggest number I could think of. Then I edited the post TWICE without fixing it.

      • Krogoth
      • 4 years ago

      PCIe SSD cards have already moved to PCIe 3.0. The card in question will most likely use an x16 PCIe 3.0 slot (theoretical throughput of ~16 GB/s). PCIe 4.0 is coming down the pipe.

      • MRFS
      • 4 years ago

      Close:
      bandwidth of one PCIe 2.0 lane is 5 GT/s / 10 bits per byte = 500 megabytes per second.
      16 lanes @ 500 MB/s = 8 GB/s AFTER encoding.
      PCIe 2.0 uses 8b/10b encoding: each data byte takes 10 bits on the wire.
      PCIe 3.0 uses 128b/130b encoding: 130 bits / 16 bytes = 8.125 bits per byte.
      8 GT/s / 8.125 bits per byte = ~984.6 MB/s per x1 PCIe 3.0 lane.
      16 lanes @ 984.6 MB/s = 15.75 GB/s for x16 PCIe 3.0.
