PCI Express 2.0 spec almost finalized

The PCI Special Interest Group (PCI-SIG) says it has made available the release candidate of the PCI Express 2.0 specification to its members. According to a story on CNet, this release candidate is subject to a 60-day comment period, after which the spec will be finalized. PCI Express 2.0 doubles the current spec’s data transfer rate from 2.5Gbps to 5.0Gbps, giving an x16 PCIe 2.0 slot up to 16GB/s of bi-directional bandwidth. PCIe 2.0 is also backwards-compatible with PCIe 1.1 and PCIe 1.0, so new slots should accommodate existing PCIe x16 graphics cards. According to leaked roadmaps, both ATI and Intel are prepping chipsets with PCIe 2.0 support for the third quarter of next year.
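
That 16GB/s figure follows from the per-lane signaling rate and the 8b/10b encoding PCIe 1.x/2.0 use on the wire (8 payload bits per 10 transmitted bits). A minimal back-of-the-envelope sketch of the arithmetic, in Python:

```python
# PCIe 1.x/2.0 use 8b/10b encoding: only 8 of every 10 transmitted bits carry payload.
def per_direction_gb_per_s(signal_rate_gbps, lanes=16):
    payload_gbps_per_lane = signal_rate_gbps * 8 / 10    # usable Gbit/s per lane
    return payload_gbps_per_lane * lanes / 8              # total GB/s in one direction

for gen, rate in (("PCIe 1.1", 2.5), ("PCIe 2.0", 5.0)):
    one_way = per_direction_gb_per_s(rate)
    print(f"{gen} x16: {one_way:.0f} GB/s each way, {2 * one_way:.0f} GB/s bi-directional")

# PCIe 1.1 x16: 4 GB/s each way, 8 GB/s bi-directional
# PCIe 2.0 x16: 8 GB/s each way, 16 GB/s bi-directional
```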

Comments closed
    • evermore
    • 13 years ago

    I’m guessing the PCI-SIG people just come up with these specs for fun, and don’t really care if anything ever gets made that uses them, given the dearth of PCIe cards other than graphics.

      • bthylafh
      • 13 years ago

      Look around and you can find PCIe network, SCSI, and Firewire cards at least.

      /me wonders if someone will make a PCIe modem.

    • echo_seven
    • 13 years ago

    I might be naive, but what does it mean to have a Release Candidate for a spec???

      • UberGerbil
      • 13 years ago

      It’s released for comments. Once those are in, any final revisions are made and the spec is frozen. It’s really not unlike fixing last-minute minor bugs in a piece of software. In some ways it is actually harder to “patch” a standard than software after it “ships” — and the consequences of having a broken standard can be worse — so it makes sense to have a last-round sanity check. I’m not wild about software-development terminology expanding into this area — there was perfectly useful terminology before — but it’s not a huge stretch.

    • Krogoth
    • 13 years ago

    See why the AGP-to-PCIe transition happened? It was never about bandwidth. It was about massive increases in power consumption with performance GPUs, and about being able to market CF/SLI too.

    Same story with PCIe 2.0.

      • Shintai
      • 13 years ago

      No, AGP Pro supplies more power than current PCIe (110W vs. 75W). It’s about being easier and simpler, and about adding high-speed bidirectional transfers, which AGP doesn’t have. AGP also causes a lot of support issues due to the double-decker pin slot, and AGP had exhausted its upgrade potential in terms of bandwidth.

        • Krogoth
        • 13 years ago

        AGP Pro was meant for, guess what? Professional-grade video cards, namely 3DLabs stuff. I do not recall ATI or Nvidia ever having to use AGP Pro for their own professional cards.

          • Zenith
          • 13 years ago

          They did. In fact, I have a 9500-based FireGL Z1 that uses AGP 8x Pro. There are more ATI and nVidia cards that used AGP Pro.

      • UberGerbil
      • 13 years ago

      It wasn’t about bandwidth per se, but it was certainly about better tech. We had AGP, PCI, and PCI-X in various flavors of bus width (32-bit and 64-bit) and clock speed, all of which were shared buses. PCI-E is a single standard to replace all of them with a point-to-point switched fabric offering scalable, symmetric bandwidth. It’s a much cleaner solution with a lot of headroom for growth (both in terms of raw performance and in terms of capability, with things like external PCI-E and the Geneseo initiative).

      Regardless of bandwidth, PCI-E is a better technology. The fact that it got dumbed down to “more bandwidth” is irrelevant.

        • Krogoth
        • 13 years ago

        I know PCIe is a better technology; I am just saying this to the people who claim that AGP is still good enough for performance GPUs and that there is no reason to upgrade.

      • Stranger
      • 13 years ago

      Actually, the bandwidth does help on the low-end TurboCache cards.

    • bthylafh
    • 13 years ago

    Soo… how long until video cards require PCI-E 2.0? There would be no good reason for it, since it’s not like they saturate the current x16 or even x8 slots.

      • Shintai
      • 13 years ago

      It’s like AGP 1x/2x/4x/8x again. 8x was hardly, if ever, needed. But this time it’s not only beneficial for the graphics part.

      Anyway, even looking at just the Intel slide, it’s at least a year away, and only the premium chipsets get it.
      The ATI side looks about the same: high-end only at the start.

      So when will we need it? Maybe 3-4 years or more for graphics.

        • wof
        • 13 years ago

        We needed, or at least wanted, more than that yesterday.

        Think SLI without extra connectors or video output cards separated from the 3D accelerator.

        One of the big problems with both AGP and PCI is latency, and a large part of that latency will disappear in Vista if things go as intended.

        More bandwidth also means more dynamic content, which is good in games.

          • Shintai
          • 13 years ago

          Why is latency a problem? 100ns or 10000ns doesn’t matter.

            • wof
            • 13 years ago

            10000ns allows only about 1666 individual transfers per frame at 60Hz, which would be catastrophic if you want to have loads of things moving around (quick arithmetic sketch after this comment).

            Right now devs have to resort to all kinds of tricks to pack multiple objects in a scene together, e.g. patches of grass or vegetation, because it’s so slow to issue draw-primitive calls.

            This is also the main difference between a PC and a games console.
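
            A quick check of that number, assuming (as the comment does) a 60Hz frame rate and roughly 10,000ns per fully serialized transfer:

            ```python
            # How many strictly serialized 10,000ns transfers fit in one 60Hz frame?
            frame_time_ns = 1e9 / 60     # one frame at 60Hz is about 16,666,667 ns
            latency_ns = 10_000          # per-transfer latency figure from the comment above
            print(round(frame_time_ns / latency_ns))   # roughly 1667 transfers per frame
            ```

            Real transfers overlap rather than serialize completely, so this is a worst-case bound, but it illustrates why batching draw calls matters.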

            • Shintai
            • 13 years ago

            What? The CPU sends the GPU info about what to render. That’s all; latency couldn’t have less effect, since it isn’t directly tied to the CPU, apart from the human viewing of it. So no, you can still do fine with 10000ns, or 50000ns for that matter. Game consoles aren’t any better in this case either. They don’t have some “magic ability”.

            • wof
            • 13 years ago

            Oh, you mean it just has to send 1x “info” to the GPU, so latency doesn’t matter?

            Memory-mapped devices must have gone out of fashion like yesterday, and all those Nvidia talks about batching stuff must just have been marketing BS.

            Oh, and the PS2 no longer has the ability to feed the GS from the VU units directly, either.

            I’m so sorry for taking up your time.

          • evermore
          • 13 years ago

          How does Vista eliminate hardware-inherent latency?

      • DrDillyBar
      • 13 years ago

      Agreed.
