PCI Express 3.0 base spec gets released

Get ready for 16GB/s PCI Express x16 slots! PCI-SIG says it has released the PCI Express 3.0 base specification, allowing members to grab it right now directly from its website. The new spec purportedly doubles per-lane bandwidth over second-generation PCIe and quadruples bandwidth over PCIe Gen1, all the while maintaining backward compatibility.

PCI Express 3.0 uses a couple of tricks to achieve that meaty speed increase. First, the signaling rate has gone up from 5GT/s in PCIe Gen2 to 8GT/s in the new specification. Second, PCIe Gen3 replaces 8b/10b encoding with a more efficient 128b/130b encoding scheme, which increases the amount of actual data pushed per transfer. The end result, PCI-SIG says, is per-lane bandwidth of 1GB/s in each direction. Not bad, huh?
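
To see where that figure comes from, here is a minimal back-of-the-envelope sketch (the per-lane numbers are rounded approximations, not quoted from the spec):

```python
# Rough per-lane PCIe bandwidth math (illustrative only).

def lane_gbit_s(signaling_gt_s, payload_bits, total_bits):
    """Usable bandwidth per lane, per direction, in Gbit/s."""
    return signaling_gt_s * payload_bits / total_bits

gen2 = lane_gbit_s(5.0, 8, 10)      # 5 GT/s with 8b/10b encoding
gen3 = lane_gbit_s(8.0, 128, 130)   # 8 GT/s with 128b/130b encoding

print(f"PCIe 2.0: {gen2:.2f} Gbit/s (~{gen2 / 8:.2f} GB/s) per lane, each way")
print(f"PCIe 3.0: {gen3:.2f} Gbit/s (~{gen3 / 8:.2f} GB/s) per lane, each way")
print(f"x16 PCIe 3.0 slot: ~{gen3 / 8 * 16:.1f} GB/s each way")
```

The Gen3 math works out to roughly 0.98 GB/s per lane, which PCI-SIG rounds to 1 GB/s, and to just under 16 GB/s each way for an x16 slot.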

PCI-SIG’s announcement also mentions miscellaneous enhancements introduced in the new spec:

These enhancements range in scope from data reuse hints, atomic operations, dynamic power adjustment mechanisms, latency tolerance reporting, loose transaction ordering, I/O page faults, BAR resizing and many more extensions in support of platform energy efficiency, software model flexibility and architectural scalability.

Based on what we last heard about PCIe 3.0, products based on the new spec could be out as early as June of next year. Intel might adopt PCIe 3.0 in its server products by the end of next year, as well.

Comments closed
    • ltcommander.data
    • 9 years ago

    What about a coherency protocol? I thought that was what was needed to better take advantage of GPGPU operation.

    • Krogoth
    • 9 years ago

    Just in time for upcoming Sandy Bridge/Haswell and Lightpeak. 😉

    Intel is one of the biggest developers in the PCI-SIG circle……..

      • NeelyCam
      • 9 years ago

      SandyBridge: no.

        • Krogoth
        • 9 years ago

        Sandy Bridge will see it.

        The second iteration of its chipset platform will have it.

          • NeelyCam
          • 9 years ago

          No. SandyBridge PCI Express is integrated on-chip, just like Lynnfield. A chipset upgrade won't bring it to Gen3.

            • OneArmedScissor
            • 9 years ago

            The socket 1356 and 2011 Sandy Bridge chips that come out way down the road are supposed to have it.

            • NeelyCam
            • 9 years ago

            Oh, right; Xeon might have PCIe Gen3 – it’s a different chip. The consumer desktop versions don’t.

            • Krogoth
            • 9 years ago

            It doesn’t stop motherboard manufacturers from using a third-party controller chip. 😉

            Besides, it would make for a nice product refresh, much like when they refreshed the older P55 boards after the USB3/SATA600 standards hit the market. 😉

            • NeelyCam
            • 9 years ago

            I suppose that would be remotely possible, but mobo manufacturers can’t magically create more bandwidth to the chip… they’d have to use two PCIe Gen2 lanes to “make” a single Gen3 lane with some strange third-party chip.

            It doesn’t seem sensible…

    • sschaem
    • 9 years ago

    Nice, anything to reduce the number of traces is great news.

    Some chipsets spend a lot of resources to deliver 40+ PCIe lanes.
    That could be cut in half!

    Most motherboards have 16 lanes for the GPU and a bunch of x4 links for other expansion.

    You could build monster Tesla boxes. Intel seems to see a future in PCIe compute cards, so I'm sure this is exactly what HPC needs.

    For us… needing fewer PCIe lanes is not that big a deal, just cleaner motherboards (fewer pins, easier for SoCs, could reduce pin count by 10%).

    For HPC… double the density!

    • OneArmedScissor
    • 9 years ago

    Dear goofballs,

    Needing nothing longer than a 4x PCIe slot, which can easily fit in small computers, is pretty darn cool. Radeon 4800/5700 equivalent cards have fit in laptops for a long time as it is. You could run a pretty fancy 28nm card off of a 4x PCIe 3.0 slot.

    Now the question is if anyone will actually take advantage of this to make video cards that small instead of keeping them 400 feet long so they can beat us over the head with the amount of bandwidth as a marketing bullet point…

      • sweatshopking
      • 9 years ago

      that’s a good point.

      • NeelyCam
      • 9 years ago

      Visionary.

    • Tumbleweed
    • 9 years ago

    Ah, this means I’d only need an x1 PCIe 3.0 slot for a nice SSD. Sweet.

    • Voldenuit
    • 9 years ago

    When can I get my mobo with PCIE3.0, SATA3, USB3 and Lightpeak?

      • derFunkenstein
      • 9 years ago

      The new 2012 Mac Pro?

    • south side sammy
    • 9 years ago

    Other than providing supersonic speed for the SSD that I can't afford to put in the PCI-E slot, what does it do for me? The same thing that going from 1.0 to 2.0 did?

      • bcronce
      • 9 years ago

      There’s always someone who says something like this about EVERY technological advance with computers.

      Chicken and the Egg. Make the bandwidth and CPU power and someone will find a new and innovative use for it.

      The problem is people not seeing past the present, asking only how it will affect them NOW.

      It’s not a matter of now, it’s a matter of “once this becomes standard, what can we do with it”.

      Why would a person who only browses the web ever need a GPU?
      GPUs became standard, and now we've got hardware-accelerated web rendering coming around the corner. It saves power and makes stuff look better, something that couldn't and wouldn't have been done if GPUs weren't standard.

      • south side sammy
      • 9 years ago

    Tell me #7, what's this going to do for the average user in the short term… between now and the next five years? Not much, I bet. More hype about “new technological advances”… so what!

        • OneArmedScissor
        • 9 years ago

        Shrink OEM desktops, because they rarely use the highest-end hardware, and nothing but multi-GPU video cards will need a 16x 3.0 slot.

    • Flying Fox
    • 9 years ago

    Are they increasing the power supplied to the slots so they can feed more power-hungry video cards?

    • sweatshopking
    • 9 years ago

    The best part is that we haven't come close to the limit of PCIe 2.0. Eight lanes, or 50% of the total bandwidth, is still more than enough. This seems premature to me.

      • Flying Fox
      • 9 years ago

      In server scenarios, there is never enough bandwidth. Who knows how many drives they want to chain together running off one controller card.

        • sjl
        • 9 years ago

        This. I'm a professional backup/recovery sysadmin/consultant. You can get quad-port 8 Gbps FC cards (e.g. the QLE2564) – that's 6800 MB/s of throughput. To fully use such a beast, you need an x16 PCIe v2.0 connection. Or you could get an x8 PCIe v3.0 card (once such cards become available).

        Why would you need four 8 Gbps FC cards? Well, LTO5 pushes 300 MB/s plus (compressed). You can hang three of them off an 8 Gbps port without any trouble, or up to six if you can balance the load so that half are reading while the others are writing. I’ve seen environments where I could easily justify a dozen LTO5 drives, all writing at once (yes, that’s an insane amount of data; I said as much. The company wasn’t willing to spend the money to calm things down, but they were willing to keep shelling out the money for more SAN arrays …)

        Then there’s the need for disks to keep the data flowing to those tape drives (no single host will be able to stream at those rates, as a general rule, so you stage data to disk and then flush it down to tape, unless you’re talking beefy database servers or similar.) That’s another four or more 8 Gbps ports right there. Let’s not get started on the amount of Ethernet bandwidth required to get the data into the server, for that matter. Add all of these up, and if you’re limited on rack space, having four or five quad port cards (with all of the bandwidth implications thereof) instead of ten or twenty dual or single port cards makes a hell of a lot of sense.

        Believe me – if you’re thinking about your desktop system, you aren’t thinking about the environments where PCIe v3.0 makes sense. Backup servers, in particular, can take all the I/O bandwidth you care to throw at them, and still beg for more.
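
A rough sanity check of the lane math in that comment (the per-lane figures are rounded assumptions; the 6800 MB/s number is the card throughput cited above):

```python
# Rough check: how wide a PCIe link a quad-port 8 Gbps FC HBA needs.
import math

HBA_MB_S        = 6800   # quad-port FC card throughput cited in the comment
PCIE2_LANE_MB_S = 500    # per lane, per direction (rounded assumption)
PCIE3_LANE_MB_S = 1000   # per lane, per direction (rounded assumption)

def slot_width(lanes_needed):
    """Round up to the nearest standard PCIe slot width."""
    return next(w for w in (1, 2, 4, 8, 16, 32) if w >= lanes_needed)

for gen, per_lane in (("2.0", PCIE2_LANE_MB_S), ("3.0", PCIE3_LANE_MB_S)):
    lanes = math.ceil(HBA_MB_S / per_lane)
    print(f"PCIe {gen}: ~{lanes} lanes needed -> an x{slot_width(lanes)} slot")
```

That lands on an x16 slot for Gen2 versus an x8 slot for Gen3, which is the trade-off sjl describes.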

      • Farting Bob
      • 9 years ago

      PCIe 1.1 x16 is still enough for anything short of super-high-end SLI/CF in the consumer world. But yeah, I'm glad they are continuing to advance the spec rather than just sitting and waiting like the USB/SATA specs did before getting caught off guard by the speed of SSDs and flash-based storage.

      • Kurotetsu
      • 9 years ago

      Well, one major benefit of this, for consumer motherboards anyway, is that board makers can implement fewer lanes and still keep the same total bandwidth. This is useful for all those third-party chips that provide USB 3.0 or SATA 6Gbps; they can hang off one or two PCI-E 3.0 lanes and still have the same bandwidth as two or four PCI-E 2.0 lanes. Those extra lanes can then be left out to save money, or be re-allocated to something more useful, like implementing a few HSDL ports (the protocol that lets SSDs talk directly over PCI-Express).
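
A minimal sketch of that lane-halving point, using the same rounded per-lane figures as in the article:

```python
# One Gen3 lane carries roughly what two Gen2 lanes do, so a peripheral
# controller needs about half as many lanes (rounded per-lane figures).
PCIE2_LANE_MB_S = 500
PCIE3_LANE_MB_S = 1000

for gen3_lanes in (1, 2):
    bandwidth = gen3_lanes * PCIE3_LANE_MB_S
    gen2_equiv = bandwidth // PCIE2_LANE_MB_S
    print(f"x{gen3_lanes} PCIe 3.0 ≈ {bandwidth} MB/s ≈ x{gen2_equiv} PCIe 2.0")
```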

      • shank15217
      • 9 years ago

      We reached it a while ago; no one uses PCIe x16 in the server world. In practical scenarios, most implementations cannot push more than 20-25 Gbps across an x8 link. Just one x4 SAS 6Gbps multi-lane connector can saturate that.
