Intel measures up server SSDs with the “ruler” form factor

Enterprise customers looking to assemble storage systems have a number of interfaces available for connecting drives, but the drives themselves generally come in one of four physical form factors, all originally designed for client machines: the familiar 2.5" or 3.5" cuboids, M.2 gumsticks, or PCIe add-in cards. Intel is adding a new option that the company says is tailored to the density and connectivity needs of datacenters: the "ruler" form factor.

At first glance, the ruler form factor looks something like a super-sized M.2 card: a long, slender module measuring 12.8" (32.5 cm) long and 1.5" (3.9 cm) wide, with a slightly narrower edge connector. Each ruler SSD is a self-contained module with room for several times more storage chips than conventional drive form factors, resulting in increased capacity and greater parallelism. Intel says the form factor should let manufacturers cram 1 PB of flash storage into a 1U server chassis.
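
As a rough sanity check of that claim, here's a minimal back-of-the-envelope sketch. The 32-slot count matches the 1U concept chassis Intel has shown, and the 32 TB per-drive capacity is an assumption chosen to illustrate the math, not a published spec.

```python
# Back-of-the-envelope check of the "1 PB in 1U" claim.
# Both figures below are assumptions for illustration, not Intel specs.
SLOTS_PER_1U = 32      # assumed: matches the 32-drive 1U concept Intel showed
TB_PER_RULER = 32      # assumed per-drive capacity needed to reach ~1 PB

total_tb = SLOTS_PER_1U * TB_PER_RULER
print(f"{SLOTS_PER_1U} rulers x {TB_PER_RULER} TB = {total_tb} TB (~1 PB)")
```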

According to Anandtech, ruler drives use an SFF-TA-1002 "Gen-Z" connector. That connector carries four- and eight-lane PCIe 3.1 links for up to 7.88 GB/s of bidirectional bandwidth. It also has more pins than an M.2 connector; the extras handle enterprise features like SMBus management and charging power-loss-protection capacitors. The drives are powered from a +12 V DC supply.
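
For reference, the 7.88 GB/s figure falls out of the PCIe 3.x link math: 8 GT/s per lane with 128b/130b encoding, across eight lanes. A quick sketch:

```python
# Where the 7.88 GB/s figure comes from: PCIe 3.x signals at 8 GT/s per lane
# and uses 128b/130b encoding, so each lane carries about 0.985 GB/s of payload.
RAW_RATE = 8e9           # transfers per second per lane (PCIe 3.x)
ENCODING = 128 / 130     # 128b/130b line-coding efficiency
LANES = 8                # eight-lane link

bytes_per_sec = RAW_RATE * ENCODING / 8 * LANES
print(f"x{LANES} PCIe 3.x link: {bytes_per_sec / 1e9:.2f} GB/s")  # ~7.88 GB/s
```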

The site goes on to say that the form factor has been in service with some of Intel's select partners for the last eight months, and that the company's DC P4500 drives will be the first to reach normal sales channels in the new form factor. Drives based on Intel's 3D XPoint media will follow later.

Comments closed
    • willmore
    • 2 years ago

    So, does this imply that Intel will start making processors with enough PCI-E lanes to actually support that kind of storage? They showed a 32-drive prototype at the presentation. That’s 128 to 256 lanes of PCI-E.

    Sure, they could use switches, but that seems like a poor choice for drives that should be able to saturate their host connections. If an 8 cm M.2 drive can do it, a 33 cm ruler should have plenty of room for enough dies.

      • willmore
      • 2 years ago

      To drive the point home, you need three Xeons (44 × 3 = 132 lanes) to get all the slots running at 4x. You’d need twice that to get them running at a full 8x.

      For an Epyc system, you’d need *one* to get all slots at 4x, though you’d need four to get them at 8x, since multi-socket configs spend half of their lanes on the CPU-to-CPU interconnect.
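
      Taking the thread's figures at face value (44 usable lanes per Xeon; 128 per Epyc socket, halved per socket in multi-socket configs for the interconnect), a small sketch of the tally. These are the commenters' numbers, not verified platform specs.

      ```python
      # Lane tally for a 32-ruler chassis, using the figures quoted in this thread
      # (44 usable PCIe lanes per Xeon; 128 per Epyc socket, with multi-socket
      # configs keeping only 64 per socket for I/O). Not verified platform specs.
      DRIVES = 32
      XEON_LANES = 44

      def ceil_div(a, b):
          return -(-a // b)

      for lanes_per_drive in (4, 8):
          needed = DRIVES * lanes_per_drive
          xeons = ceil_div(needed, XEON_LANES)
          epycs = 1 if needed <= 128 else ceil_div(needed, 64)
          print(f"x{lanes_per_drive}: {needed} lanes -> {xeons} Xeons or {epycs} Epyc(s)")
      ```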

    • Airmantharp
    • 2 years ago

    It’s sleek, like a whole new paradigm in storage pricing…

    • jts888
    • 2 years ago

    You might want to restate the module dimensions. Firstly, 1.5″ is ≈3.85 cm, not 38.5 cm. But beyond that, I’d also describe them as 38.5mm h * 325mm d * 9.5mm w (with a 12.5 mm slot pitch), since that’s the orientation they’d have in a normally mounted server.

    Edit: had to fix my own typo…

      • Markopolo
      • 2 years ago

      I’m confused. You say not 38.5 cm, but then you say 385 mm.

      38.5 cm = 385 mm

        • jts888
        • 2 years ago

        Yeah I transposed those 2s and 8s, whoops!

      • morphine
      • 2 years ago

      Heh, fixed the mention about the 3.85 (3.9) cm.

    • JosiahBradley
    • 2 years ago

    But how on earth do I fit this thing in an existing server? Are there adapters or cabling for this connector that I can easily bridge to a PCIe bus? Will this work on Epyc platforms? How is this better than a long PCIe card using 16x bandwidth?

      • chuckula
      • 2 years ago

      You don’t fit it into an existing server; it’s part of a larger movement toward making NVMe/Optane-type storage solutions more efficient in newer servers that are configured to take these types of drives.

      Older form factors won’t disappear.

        • jts888
        • 2 years ago

        It’s still geared for 1U form factor, hence the 1.5″ (of available 1.75″) height.

        FWIW, I’ve been predicting this SSD format for a few years but was still surprised by the details.
        I expected:
        • naked ~1.5" × ~6" PCBs
        • a 4-PCIe-lane + power connector, no SATA etc. stuff
        • ~1 cm connector pitch = 2 PCIe lanes × 48 drives / 4 PCIe lanes × 24 drives for a 96-PCIe-lane budget

        U.2 is better than PCIe AIBs, but something along these lines was guaranteed to supersede it sooner or later.

          • chuckula
          • 2 years ago

          The box will fit in 1U. You just need a new box that’s setup to take the drives.

      • Anonymous Coward
      • 2 years ago

      Tape. Tape the sucker on. If any metal gets in the way, cut it. Then tape.

        • UberGerbil
        • 2 years ago

        [url=https://pics.onsizzle.com/it-doesnt-matter-if-its-duct-tape-orzip-ties-men-2456063.png]Indeed[/url]!

      • ImSpartacus
      • 2 years ago

      I think this is kinda like Nvidia’s mezzanine form factor in that it’s for purpose-built servers, not general mainstream offerings.

    • drfish
    • 2 years ago

    Is that an SSD in your…

    This is the point where I should post, edit, delete, and say I’m not funny – but I’m going with this instead. *fingers crossed*

      • derFunkenstein
      • 2 years ago

      A bold plan. You should be commended.

      • caconym
      • 2 years ago

      for consistency, always be sure to measure when it’s in its “solid state”

      • Wirko
      • 2 years ago

      There are two possible answers to this question:
      – Sadly, yes
      – Sadly, no

    • UberGerbil
    • 2 years ago

    But where’s the knob so you can hold it up to the blackboard to draw parallel chalk lines for music or cursive instruction?

      • Wirko
      • 2 years ago

      It’s 2017, and you need to do simple things the complicated way. The ruler is controlled by an app.

    • CuttinHobo
    • 2 years ago

    Kudos to Intel for skipping the intermediate proprietary connection before bowing under pressure and reluctantly moving to an industry standard. 😀 Albeit one that I had never heard of until now: http://genzconsortium.org/ Oddly, I don't see Intel listed among the consortium members. AMD is present, as are supporting component manufacturers such as Molex and TE Connectivity, so Intel is conspicuously absent.

    • Growler
    • 2 years ago

    One form to ruler them all,
    And in the servers farm them.

    • odizzido
    • 2 years ago

    Intel drives still bricking themselves intentionally?

      • Waco
      • 2 years ago

      The server-grade parts never did that.

        • jihadjoe
        • 2 years ago

        What I heard in a podcast (not sure which one, but likely PCPer) is that the self-bricking is probably a remnant of a feature from the server-grade parts.

        The reasoning being: 1) in a server, everything is most likely configured to be redundant, so 2) rather than risk silent data corruption by continuing to work after endurance limits or error thresholds are reached, it’s better to have the drive brick itself.

          • Waco
          • 2 years ago

          I have them in production. The consumer parts brick. The enterprise parts go read-only as designed.

            • odizzido
            • 2 years ago

            Good to know, thanks. I will continue to ignore them when looking at SSDs.

            • Anonymous Coward
            • 2 years ago

            Wait, so the consumer drives always brick when they reach their rated limits?

            Or they might brick at some point which is around their rated limit?

            I have a few of those in regular home machines, wouldn’t be happy to hear they go brickish according to some countdown.

            • Waco
            • 2 years ago

            In the past they bricked on a power cycle once reaching 0.0% life remaining.

            • Anonymous Coward
            • 2 years ago

            I wonder if either Win10 or Linux will warn about that impending condition. It’s hard to forgive Intel for making that design choice.

            • Chrispy_
            • 2 years ago

            I believe that in TR’s endurance test, the Intel drive threw out several warnings about its imminent failure and then failed exactly on schedule.

            I’d still rather it became read-only at that point, but it’s equally unlikely that a consumer drive will ever reach that amount of data written in the first place.

            • Waco
            • 2 years ago

            That’s my recollection as well.

            I have a 10-year-old Intel drive as a cache drive in my NAS; it still has 80% life left, and it gets written to every day.

            • stdRaichu
            • 2 years ago

            SSD health check timer destroyed. SSD Health Check Uncertainty Emergency Pre-emption Protocol activated. This SSD will self-destruct in two minutes.

      • psuedonymous
      • 2 years ago

      Nope, at least not with the 600p and later; not sure how long before that they stopped with the read-only-but-only-for-one-power-cycle thing.

        • Waco
        • 2 years ago

        That’s good to hear if it’s now a standard consumer feature to go read-only!
