PCI Express 3.0 to be backward-compatible with 2.0

Will you be able to slip a GeForce 8800 GT into a PCI Express 3.0 motherboard in three or four years? Yes, according to an ExtremeTech report that quotes PCI Special Interest Group Chairman Al Yanes. Speaking at a conference yesterday, Yanes revealed that PCIe 3.0 will be backward-compatible with the current 2.0 standard and that it will use the same connector designs.

However, where PCIe 2.0 can transfer data five billion times per second per direction per lane, ExtremeTech says PCIe 3.0 will hit 8GT/s. The new spec will also ditch the 8-bit/10-bit encoding scheme current versions use, eliminating most of the roughly 20% encoding overhead they carry (8-bit/10-bit encoding spends 2 of every 10 transmitted bits on maintaining DC balance and clock recovery rather than on payload data). The higher transfer rate and reduced overhead should theoretically translate to 1GB/s of bandwidth per direction for a single PCIe 3.0 lane, double the per-lane bandwidth of PCIe 2.0 and quadruple that of PCIe 1.1. For an x16 slot, maximum uni-directional bandwidth should be a whopping 16GB/s.
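
The math behind those figures is straightforward. Here is a rough back-of-the-envelope sketch in Python; it assumes 8b/10b encoding for PCIe 1.1 and 2.0 and treats the 3.0 encoding overhead as negligible, since the final 3.0 encoding scheme had not been published at the time:

# Rough per-lane, per-direction bandwidth for each PCIe generation.
def lane_bandwidth(transfer_rate_gts, payload_fraction):
    """Payload bandwidth of a single lane, in MB/s."""
    return transfer_rate_gts * 1e9 * payload_fraction / 8 / 1e6

generations = [
    ("PCIe 1.1", 2.5, 8 / 10),   # 8b/10b encoding: 20% overhead
    ("PCIe 2.0", 5.0, 8 / 10),
    ("PCIe 3.0", 8.0, 1.0),      # assumes encoding overhead becomes negligible
]
for name, rate, payload in generations:
    per_lane = lane_bandwidth(rate, payload)
    print(f"{name}: ~{per_lane:.0f} MB/s per lane, ~{per_lane * 16 / 1000:.0f} GB/s for x16")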

PCI-SIG expects to complete the PCIe 3.0 spec in late 2009, and it plans to begin testing in the second half of 2010. ExtremeTech says it doesn’t know when PCIe 3.0-toting motherboards will start to hit stores, though.

Comments closed
    • aleckermit
    • 12 years ago

    WTF, I don’t even have PCI-E 2.0 yet.

    Slow down technology!

      • Scrotos
      • 12 years ago

      You got 2 or 3 years, testing in 2010? Products probably mainstream in 2011. So you got time to catch up to PCIe 2.0 and ditch your old ISA cards!

    • pogsnet
    • 12 years ago
    • UberGerbil
    • 12 years ago

    The total number of lanes in a chipset has a practical upper limit due to the complexity and performance requirements of a switched point-to-point topology, and motherboards have an economic upper limit based on the number of layers required for laying out the traces — and it appears we’re pretty much bumping into those already. If the future really involves two or three cards with parallel fp processors (for graphics, physics, photoshop, or whatever) the cheapest way to do that is with fewer, faster lanes. I’m not sure that’s really where we’re headed, but it certainly would be cheaper and easier for everybody if x4 boards had all the bandwidth anybody needed, so motherboards and peripherals alike could get by with smaller slots and fewer traces. (Of course high frequencies have their own set of challenges.)

      • MadManOriginal
      • 12 years ago

      If mobo makers didn't insist on putting all kinds of excess crap on their high-end mobos to go along with oc'ing-type features, quality basic components, and good design, maybe the physical slot size wouldn't be such an issue.

    • Mystic-G
    • 12 years ago

    Stupid Question Ahead… rear to the left.

    Are pci-e 2.0 slots backwards compatible?

      • d2brothe
      • 12 years ago

      Yes, they are, as you can put a PCIe 1.1 card in a PCIe 2.0 slot no problem.

        • Mystic-G
        • 12 years ago

        One more question…. My PCI-E slot *[

          • titan
          • 12 years ago

          PCI Express is just like USB. Just as a USB 1.1 device works in a USB 2.0 port and a USB 2.0 device works in a USB 1.1 port, a PCIe 1.1 card works in a PCIe 2.0 slot and vice versa.

          • pogsnet
          • 12 years ago
    • Spotpuff
    • 12 years ago

    I’m still trying to figure out why motherboards still come with PCI slots 🙁 I know backwards compatibility is the big driver, but can’t we just like, move forward a bit faster?

      • adisor19
      • 12 years ago

      Get a Mac Pro. The motherboard doesn’t have any old PCI slot 😉

      Adi

        • d2brothe
        • 12 years ago

        Yea,…. but…its a mac….

          • Taddeusz
          • 12 years ago

          No, it’s a PC that looks like a Mac. ;-P

        • MadManOriginal
        • 12 years ago

        Why is that? Couldn’t Apple get an exclusivity clause for PCI slots?!?

      • d2brothe
      • 12 years ago

      Umm…most peripherals are still PCI, or at least the ones I have are, and they aren't exactly outdated. I have a Creative X-Fi sound card and a USB host card (for more ports). I was under the impression PCIe expansion cards were still rather rare.

      • absinthexl
      • 12 years ago

      My Delta 66 / Omni Studio would like to have a word with you.

      Replacing everything with USB / IEEE 1394 is fine – there are converters for just about everything. PCI is still being used, though.

    • Majiir Paktu
    • 12 years ago

    Why are they ditching 8b/10b encoding? My understanding is that it was implemented in the first place because of PCI-E’s high transfer rates at 2.5GT/s; now at 8GT/s they’re going to ditch it?

      • d2brothe
      • 12 years ago

      Well, presumably, they have a better encoding that uses less overhead, and yet provides the same benefit, or have found some other way to provide error correction/detection.

        • moritzgedig
        • 12 years ago

        If they had a parity bit every 4 bits, that would be the simplest way of doing it.
        More likely they went for a more complicated method like a CRC.
        Having redundancy cover longer stretches of bits makes it more efficient.
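
        For a sense of scale, here is a quick Python sketch comparing the payload efficiency of 8b/10b with the 128b/130b scheme PCIe 3.0 ultimately adopted; error detection proper in PCIe is handled by CRCs at the data link layer, separately from the line encoding.

        # Payload efficiency of the two line encodings (just the ratio, nothing fancier).
        def efficiency(payload_bits, encoded_bits):
            return payload_bits / encoded_bits

        for name, payload, encoded in [("8b/10b", 8, 10), ("128b/130b", 128, 130)]:
            eff = efficiency(payload, encoded)
            print(f"{name}: {eff:.1%} payload, {1 - eff:.1%} line-encoding overhead")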

    • Usacomp2k3
    • 12 years ago

    We don't even really need PCIe 2.0 yet.

    That said, if they made a motherboard that just had something like six x4 2.0 slots, that'd probably be enough for most video cards, and the x1 audio cards and such would fit in there as well.

      • Taddeusz
      • 12 years ago

      What we need are motherboards, like the one Apple uses in their Mac Pro, that follow PCI-SIG recommendations and have all physical x16 slots regardless of how many lanes the slots have. That way you can stick an x16 card in any slot.

        • adisor19
        • 12 years ago

        I second that.

        Adi

        • Spotpuff
        • 12 years ago

        It’s more fun to try to figure out whether or not x1 cards will work in x16 slots, and why manufacturers use x1 or x4 slots on their boards.

          • Taddeusz
          • 12 years ago

          True, they do attach the x16 slots directly to the northbridge while the rest of the PCIe slots are attached to the southbridge. Technically though, it /[

            • Forge
            • 12 years ago

            I’m guessing that 6200 didn’t use a 6pin PCIe plug on it. So far, Nvidia has drawn the lion’s share of the card watts from the PCIe connector itself, using the 6pin to top up. ATI on the other hand has drawn most of it from 6pin/8pin connectors and used the PCIe connector only to top up/cover contingencies.

            It may not have needed anything more than 1X for bandwidth, but it needs all 16X worth of power/ground pins.

            That’s why the Mac Pro design is so brilliant. It has full 16X physical slots with power and grounds to match. It routes the data lines more flexibly, but the cards don’t generally care about that.

            • crazybus
            • 12 years ago

            Except that a x16 slot has no more pins dedicated to power than a x1 slot. IIRC the PCIe spec’s 75W power limit is not dependent on the number of lanes a slot has.

            • UberGerbil
            • 12 years ago

            Crazybus is correct. Every PCIe slot (of a given generation) offers exactly the same power regardless of how many data lanes it provides. All the power pins are on the piece before the key; everything after that is data (and reserved and ground and various other things, but not power).

            http://www.interfacebus.com/Design_PCI_Express_1x_PinOut.html
            http://www.interfacebus.com/Design_PCI_Express_16x_PinOut.html

            Also, in the PCIe 1.0 spec (I no longer have access to the SIG, so I haven't read the later specs) there was no requirement that a slot accommodate arbitrary cards, just cards with the same number of lanes as the slot, and x1 cards. So x1 cards are required to work in everything, but there's no requirement that, say, an x4 card has to work in an x16 slot, only that it work in an x4. In practice, though, because of the way the control electronics negotiate the lanes, it generally works (in fact, given a robust negotiation algorithm, it would be strange if it didn't).
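
            As a toy illustration of that negotiation idea (hypothetical logic, not the actual PCIe link-training state machine), each end simply settles on the widest link width both sides support:

            def negotiated_width(card_lanes, slot_lanes, widths=(1, 2, 4, 8, 16)):
                """Toy model: the link trains to the widest mutually supported width."""
                return max(w for w in widths if w <= min(card_lanes, slot_lanes))

            print(negotiated_width(1, 16))   # x1 card in x16 slot -> x1
            print(negotiated_width(4, 16))   # x4 card in x16 slot -> x4
            print(negotiated_width(16, 4))   # x16 card in an open-ended x4 slot -> x4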

            • Taddeusz
            • 12 years ago

            It was actually rather odd. The card was detected and I was able to install the driver just fine. But when I tried to use it, the display being detected reported some absurdly small resolution. It may even have been negative numbers, but I think it was something like 1×1.

            As has been stated by others, the power is carried before the key. It could be that the 6200 either just didn't like running on fewer lanes or didn't like not being attached to all those grounds. Not sure which is more likely, though.

            • Scrotos
            • 12 years ago

            I bought a PCI (not PCIe) GeForce 6200 and had the exact same issue. Whenever it was detected, it showed a display of like 1×1 attached to it. It worked for a short while. I think it was heat or some broken traces. I read comments about that particular card and heat-related failures, so… that’s my guess. It was in a cramped desktop pizzabox-style IBM with no AGP slot and got rather hot. Interesting that the same type of card manifested the same problem. Maybe your card was just gimpy and something overheated? Mine was passively cooled.

            • Taddeusz
            • 12 years ago

            However, my card is working happily in my nearly headless SageTV server.

            • Anomymous Gerbil
            • 12 years ago

            Isn't the difference that by connecting via the southbridge, you introduce a bandwidth capacity constraint, i.e. the link to the northbridge?

        • Usacomp2k3
        • 12 years ago

        I’d be fine with that too.

        • ludi
        • 12 years ago

        I wouldn't expect to see it. x1/x4 slots give motherboard manufacturers something far more valuable: real estate for placing and routing things that aren't x16 PCIe slots, without completely cutting off the user's ability to upgrade with smaller PCIe peripherals.

        Apple isn't constrained in this fashion because they sell boutique products at boutique prices. Ever disassemble a dual-G5 workstation? Thing was a monster. Nobody but Apple could have sold that brick profitably, and the same probably applies here.

      • MadManOriginal
      • 12 years ago

      Bridgeless multi-card configurations maybe. It could also have applications outside of desktop PCs, whether in servers or high-end workstations that crunch numbers.

        • Krogoth
        • 12 years ago

        Niche markets that already have proprietary platforms for that sort of stuff.

    • Krogoth
    • 12 years ago

    AGP 3.0 again. Let me guess, it will be on video cards for a long time before being on other peripheral devices. >_<

      • albundy
      • 12 years ago

      'Splain, please. What peripherals would even use PCIe 1 to its full extent?

        • BobbinThreadbare
        • 12 years ago

        RAID cards use 4x lanes, and I think some even use 8x.

        A dual gigabit ethernet card would use a full 1x lane.
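
        A rough sanity check of that dual-gigabit claim, in Python (per direction, ignoring Ethernet framing and PCIe protocol overhead):

        gbe_mbps = 1e9 / 8 / 1e6                 # one gigabit port: ~125 MB/s of payload
        dual_gbe = 2 * gbe_mbps                  # ~250 MB/s for a dual-port card
        pcie11_x1 = 2.5e9 * (8 / 10) / 8 / 1e6   # PCIe 1.1 x1: ~250 MB/s per direction
        print(f"Dual GbE: ~{dual_gbe:.0f} MB/s vs PCIe 1.1 x1: ~{pcie11_x1:.0f} MB/s")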

          • continuum
          • 12 years ago

          /me pats his Areca ARC-1680 that uses x8 PCI-e lanes…

          *grins*

      • Meadows
      • 12 years ago

      You have an uncanny knack for blasting various technologies with defamatory comments, completely unfounded. AGP 3.0? Oh please.

        • Krogoth
        • 12 years ago

        That is what PCIe 3.0 amounts to.

        It is nothing more than a marketing trip whose potential will almost never be harnessed. Does that ring any bells? AGP 2.0 and 3.0.

          • Taddeusz
          • 12 years ago

          It may not reach its potential in consumer devices. That doesn't mean the server/workstation space doesn't need the bandwidth provided by PCIe 3.0. Funny how some computer geeks are so jaded as to think the computer industry revolves around them. ;-D

          PCI and PCI-X were in dire need of replacement, maybe not so much on the consumer side as in servers and workstations. PCI and PCI-X do not provide enough bandwidth to feed many newer RAID and multi-port LAN cards, stuff that has no use in the consumer world. Just because a technology is introduced doesn't mean the intended recipient was consumers. It trickles down to us, not the other way around.

            • Krogoth
            • 12 years ago

            Here is a question: why is said standard being pushed into the enthusiast ring first, and not where it is truly needed?

            Besides, PCI-X was not exactly that limiting. It still works fine as long as the server isn't managing a huge RAID array or several gigabit NICs at near-100% load.

            • Taddeusz
            • 12 years ago

            ePenis. Plus, businesses don't upgrade as often. The technology may hit servers and workstations first, but businesses won't see any advantages till their next upgrade cycle, whereas consumers, particularly of the computer enthusiast variety, upgrade quite a lot more.

            I'm not sure there is a foreseeable need for as much bandwidth as PCIe 3.0 provides, but it could be that because PCI was such a fixed standard for so long, the PCI-SIG has decided not to sit on its laurels this time and to keep a fairly steady series of bus improvements coming. Not that they didn't do that with PCI (with 66MHz, 64-bit, and finally PCI-X), but there was only so much they could do with a parallel bus. The nature of PCIe allows it to be more extensible as technology advances. I might even argue that PCIe will outlast PCI simply because it is so much more extensible. It might even be the last electrical peripheral bus we see, but that may be a little far-reaching.

            • UberGerbil
            • 12 years ago

            Most server cards don’t need /[

            • AMDisDEC
            • 12 years ago

            Yeah, they do.
            2.0+ and 3.0 are not targeting the frail consumer market but rather the server and telecommunications markets.
            By adding cache coherency to PCIe, blade manufacturers can add cache coherence to the many different network add-in cards used in servers and telecommunications equipment. Adding coherence generates more probe traffic from intelligent I/O, so increasing speeds will compensate for the added traffic.
            These things are not for you, grasshopper, but they will also benefit the HPC, embedded, and cluster crowds.

      • l33t-g4m3r
      • 12 years ago

      With Creative complaining about the current PCIe, maybe they'll do a card that supports 3.0.

        • swaaye
        • 12 years ago

        Their complaints have to do with latency, not bandwidth, I believe…

          • Krogoth
          • 12 years ago

          Nah, it is probably the corporate types that refuse to spend $$$$ on proper R&D.

          Engineers could easily design a product to work under PCIe. Latency my arse; just engineer said design around it. Whether the corporate guys will approve the necessary R&D is another matter.

          • BobbinThreadbare
          • 12 years ago

          If latency stays the same in number of clocks, increased clock speed will lower latency.
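
          Illustrative arithmetic only (the cycle count here is made up): a fixed number of cycles shrinks in wall-clock terms as the transfer rate climbs.

          cycles = 100                        # hypothetical fixed latency in symbol times
          for rate_gts in (2.5, 5.0, 8.0):    # PCIe 1.1 / 2.0 / 3.0 transfer rates
              print(f"{rate_gts} GT/s: {cycles / rate_gts:.1f} ns")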
