PCIe 4.0 specification finally out with 16 GT/s on tap

Some things are better late than never. At its yearly developers conference, PCI-SIG announced the PCI Express 4.0 specification to its members. As expected, the updated I/O technology will offer twice the bandwidth of PCIe 3.x while retaining full backwards compatibility.

Although it started as a key I/O component in PCs, PCIe now serves as the interconnect for any number of devices, including those in the server, storage, and mobile markets. The full details of the new specification haven't yet been published on the consortium's website, but we're told that it doubles the per-pin bandwidth of the previous generation, offering 16 GT/s data rates.
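
For a rough sense of what that doubling means at the link level, here's a back-of-the-envelope per-lane calculation (a sketch that assumes 128b/130b encoding, which PCIe 3.0 and later use, and ignores protocol overhead above the encoding layer):

```python
# Approximate payload bandwidth of a single PCIe lane, one direction.
# Assumes 128b/130b line encoding; higher-level protocol overhead is
# ignored, so real-world throughput comes in a bit lower.
def lane_gbytes_per_s(giga_transfers_per_s: float, encoding: float = 128 / 130) -> float:
    return giga_transfers_per_s * encoding / 8  # 1 bit per transfer, 8 bits per byte

print(f"PCIe 3.0 lane (8 GT/s):  {lane_gbytes_per_s(8):.2f} GB/s")   # ~0.98 GB/s
print(f"PCIe 4.0 lane (16 GT/s): {lane_gbytes_per_s(16):.2f} GB/s")  # ~1.97 GB/s
```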

Given PCIe's wide variety of use cases, PCI-SIG also decided to improve the interconnect's flexibility and scalability. Developers will have access to more lane width configurations and speeds suitable for low-power applications. Other enhancements include reductions to system latency, improved scalability for added lanes, and lane margining.

PCI-SIG also teased the upcoming PCIe 5.0 specification. Penciled in for 2019, PCIe 5.0 will push the available bandwidth to 32 GT/s. One application that the consortium has in mind is high-end networking, where the architecture can serve up 128 GB/s of bandwidth operating at full duplex.
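
That 128 GB/s figure appears to be a full-duplex x16 number derived from the raw signaling rate; here's a quick sanity check using the same per-lane arithmetic (a sketch, again ignoring overhead above the encoding layer):

```python
# PCIe 5.0 x16 at 32 GT/s: raw vs. encoding-adjusted throughput.
lanes, gt_per_s = 16, 32
raw_per_direction = gt_per_s * lanes / 8                 # 64 GB/s each way
effective_per_direction = raw_per_direction * 128 / 130  # ~63 GB/s after 128b/130b encoding
print(f"raw, full duplex:       {2 * raw_per_direction:.0f} GB/s")        # 128 GB/s
print(f"effective, full duplex: {2 * effective_per_direction:.0f} GB/s")  # ~126 GB/s
```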

The PCIe 4.0 specification still needs to undergo a final IP review, but the PCI-SIG claims that the interconnect is ready to go. Prior to the publication of the spec, the SIG had already been doing compliance testing with a variety of its members, and it says that a number of 16 GT/s solutions have already been worked out. Perhaps we'll see products using the new spec make their way to shelves sooner rather than later.

Comments closed
    • ronch
    • 2 years ago

    Crazy how we went from ISA to VL-Bus to PCI to AGP and finally to PCIe. What a journey it’s been. I love every part of it.

    • Anomymous Gerbil
    • 2 years ago

    I’m suffering from brain fade here – what’s the difference between the light-blue and purple lines on the PCI-SIG chart?

    • blahsaysblah
    • 2 years ago

    Error in the chart: the total bandwidth column should all be halved.
    edit: sanity check, an ultra NVMe M.2 slot using 4x PCIe 3.0 has a peak of approximately 4 GB/s.
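
    For what it's worth, here is the arithmetic behind that figure, and behind a chart that sums both directions (a quick sketch, assuming 128b/130b encoding and ignoring protocol overhead):

    ```python
    # x4 PCIe 3.0: 8 GT/s per lane, 128b/130b encoding
    one_way = 8 * (128 / 130) / 8 * 4   # ~3.94 GB/s, the familiar "~4 GB/s" NVMe ceiling
    both_ways = 2 * one_way             # ~7.9 GB/s if the two directions are summed
    print(f"{one_way:.2f} GB/s one direction, {both_ways:.2f} GB/s with both directions summed")
    ```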

      • jts888
      • 2 years ago

      It’s a long tradition with interconnect vendors to inflate stats in every way possible, including aggregating both directions for bidirectional traffic.

      Hell, Mellanox used to count 8b10b overhead in their “10”/“20”/“40” Gb SDR/DDR/QDR Infiniband systems, the same way PCIe 1.0/2.0 counted giga-“transfers”…
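
      For reference, the encoding arithmetic behind that kind of inflation (a sketch comparing the line codes mentioned above):

      ```python
      # Fraction of the raw signaling rate that carries actual data.
      encodings = {
          "8b/10b    (PCIe 1.x/2.x, SDR/DDR/QDR InfiniBand)": 8 / 10,
          "128b/130b (PCIe 3.0 and later)": 128 / 130,
      }
      for name, efficiency in encodings.items():
          print(f"{name}: {efficiency:.1%} efficient")

      # e.g. "10 Gb" SDR InfiniBand carries 10 * 0.8 = 8 Gb/s of data, and PCIe 1.0's
      # 2.5 GT/s per lane delivers 2.5 * 0.8 = 2 Gb/s, i.e. 250 MB/s, per direction.
      ```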

        • blahsaysblah
        • 2 years ago

        So desensitized from recent news that I was like, oh, perfect answer, my mistake… So sad.

    • jts888
    • 2 years ago

    While the inevitable upward crawl of PCIe clock rates will continue, I think more companies will do what AMD and Nvidia have done: use high-clocked PHYs designed for PCIe for proprietary interconnect protocols like NVLink (20 Gbps PCIe plus extensions, mostly for remote atomic operations) and Infinity Fabric/xGMI (12.5(?) Gbps interconnect with possibly greater emphasis on low latency, so maybe 8b/10b instead of 128b/130b).

    PCIe is kind of a least-common-denominator at this point, and vendors who want to go bananas will always be tempted to go their own way in these situations.

    I’m already convinced that one of Vega’s release surprises will be that it can talk in native IF over a PCIe slot to Zen-based CPUs for better HBCC texture streaming, given how Zeppelin is already confirmed to mux all its SATA/10GbE/PCIe/xGMI controllers over common PHY banks, and Vega is promised to be IF-based as well.

      • the
      • 2 years ago

      Oh, and AMD is a member of OpenCAPI. It may not be just the Vega GPUs that could benefit from such a flexible PHY on-die.

        • jts888
        • 2 years ago

        AMD is a member of the rather overlapping OpenCAPI, Gen-Z, and CCIX consortia as well as sole owners of their own Infinity Fabric, so they damn well need the muxed PHY design to be able to at least pay lip service to the rest of the leagues of Intel-jilted vendors (Mellanox = not-Omni-Path, Xilinx = not-Altera, Samsung/Micron/Hynix = not-X-point, …).

        I don’t know how much traction Omni-Path will get, nor whether QPI (UPI) will ever even be extended to third-party accelerators, but I definitely know that *everybody* else is panicking about the possibility of Intel going walled garden with their platform.

        • psuedonymous
        • 2 years ago

        As are Nvidia (http://opencapi.org/membership/current-members/). Both are OpenCAPI board-level members.

    • Aquilino
    • 2 years ago

    Nice! Now Intel will only have to put half the PCIe lanes in their processors.

    • tsk
    • 2 years ago

    Slowpokes, PCIe 5.0 in 2019? Sure. More like 2025.
    They need to keep up with the rest of the industry; the pace this is going at is embarrassing.

      • demani
      • 2 years ago

      Who else is doubling performance every 2 years?

        • tsk
        • 2 years ago

        PCIe 3.0 first in products in 2011, PCIe 4.0 first in products in 2018. Seven. Seven years.

    • davidbowser
    • 2 years ago

    Have to admit that I needed to look up what “GT” was (gigatransfers). I haven’t read anything technical about PCIe since 3.0 was put on motherboards.

    https://en.wikipedia.org/wiki/PCI_Express

    • chuckula
    • 2 years ago

    4.0 has clearly taken longer than they would have liked, but at least the press-release talk is that 5.0 is going to be less painful. We’ll see how accurate that graph turns out to be.

      • Shobai
      • 2 years ago

      In that case you’ve gotta wonder whether there’s any point in deploying PCIe 4.0 gear instead of holding off for “another year” to deploy PCIe 5.0…

        • chuckula
        • 2 years ago

        That’s partly true. You don’t want to Osborne your protocol with the next big thing being too close.

        There might be one factor that lets them do it: it’s unclear if PCIe 5.0 is actually using copper for the data paths. There was some speculation that 4.0 was going to have to use fiber optics instead of copper, but they managed to get it done with copper.

        If PCIe 5.0 is relying on fiber optics, then they could still debut the protocol spec relatively quickly, but PCIe 4.0 would still enjoy a long life, since it’s much more practical from an implementation perspective until the fiber-optic components eventually work their way down from the very high end.

          • UberGerbil
          • 2 years ago

          Yeah, in that case we might very well have both specs in use at the same time, one in the mainstream and one in enterprise. Though if that is how it shapes up, and 5.0 is optical-only with all the changes that implies up and down the stack, I’d argue that they really should call it something else. PCI…O? PCIe-O? PC-ei-ei-O

            • Klimax
            • 2 years ago

            OGP = Optical General Port…

          • the
          • 2 years ago

          If they move to fiber, it would require a new slot and, I feel, a new name. How about PCI-F, as in fiber.

      • jts888
      • 2 years ago

      PCIe releases are slowed down mainly by two things:
      • nailing down the technical specification of feature changes
      • waiting for sufficiently fast PHY implementations to be small/cheap enough to use in commercial products

      The consortium can have a final 5.0 spec released quite quickly if it's little more than 4.0 functionality with defined tolerances for higher-clocked data, but it will mean little until CPU and peripheral vendors feel like burning die space on the bigger necessary transceivers. For example, Zeppelin/Ryzen already uses ~10 mm^2 for 32 12.5 Gbps PHYs and would probably need to at least double that for >30 Gbps PHYs.

    • psuedonymous
    • 2 years ago

    Crossing fingers that OCuLink becomes a standardised internal connection. Gotta escape that card-edge ghetto sometime!

      • cmrcmk
      • 2 years ago

      Has OCuLink been deployed in any meaningful way yet?

        • TheEldest
        • 2 years ago

        SuperMicro uses it for NVMe cables in some servers. I’d expect that when Dell & HPE refresh for Skylake-EP they’ll be using OCuLink too.

        • psuedonymous
        • 2 years ago

        No, only a handful of niche applications in some HPC gear. Hence why I’m hoping it becomes a first-class interconnect for PCIe 4.0, which would give it some chance to eventually filter down to consumer gear.

        OCuLink and internal 48V power distribution are the two datacentre technologies I’d really love to see filter down to the consumer space. OCuLink adds a lot of flexibility in component placement over sticking flex risers into card-edge slots, and 48V would make a dramatic improvement in internal cabling over ATX12V, and in a lot of cases might eliminate cabling entirely (e.g. running 48V over even existing PCIe slots would increase slot power, for the same 5.5 A current limit, from 66 W to 264 W).
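
        The slot-power arithmetic behind those numbers, as a quick sketch (the 5.5 A current limit and the hypothetical 48 V slot are the commenter's premises, not anything in the PCIe spec):

        ```python
        # Power available from a slot at a fixed current limit: P = V * I
        current_limit_a = 5.5                 # 12 V rail current limit cited above
        for volts in (12, 48):
            print(f"{volts:>2} V x {current_limit_a} A = {volts * current_limit_a:.0f} W")
        # 12 V -> 66 W, 48 V -> 264 W: same connector current, four times the power
        ```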
