PCI Express 3.0 base spec to be ready by November

Now that newer PCs are rocking PCI Express 2.0 slots, it’s time for an upgrade, don’t you think? EE Times reports that PCI-SIG will complete the PCI Express 3.0 base spec by November, having released a 0.9 version of the specification for review about a month ago.

PCIe 3.0 will enable data rates up to 8GT/s. According to PCI-SIG, that will translate into top speeds of 1GB/s per lane per direction, or 32GB/s for a x16 link—essentially twice as fast as PCIe 2.0 and four times as fast as the first-gen standard. EE Times expects PCIe 3.0 to come in handy for high-end graphics cards, upcoming 40Gbps Ethernet adapters, and high-end solid-state drives.
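The per-lane figures check out once you account for line coding: PCIe 1.x/2.0 use 8b/10b encoding (20% overhead), while PCIe 3.0 moves to the more efficient 128b/130b scheme, which is how 8GT/s yields roughly 1GB/s per lane. A quick sanity check of the arithmetic:

```python
# Per-lane PCIe bandwidth from raw transfer rate and line-coding overhead.
# Encodings: PCIe 1.x/2.0 use 8b/10b, PCIe 3.0 uses 128b/130b.

def pcie_lane_gbps(gt_per_s, payload_bits, line_bits):
    """Effective GB/s per lane per direction after line-coding overhead."""
    return gt_per_s * (payload_bits / line_bits) / 8  # bits -> bytes

gen2 = pcie_lane_gbps(5.0, 8, 10)     # 0.5 GB/s per lane per direction
gen3 = pcie_lane_gbps(8.0, 128, 130)  # ~0.985 GB/s, quoted as "1GB/s"

# An x16 link, counting both directions, lands at ~31.5 GB/s,
# which PCI-SIG rounds to the 32GB/s headline figure.
print(round(gen2, 3), round(gen3, 3), round(gen3 * 16 * 2, 1))
```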

As far as real-world products go, EE Times quotes PCI-SIG’s Al Yanes as saying products "typically" come out "about a year after the spec becomes final." However, some firms will reportedly have devices out a little early—like Mellanox Technologies, which will have 40Gbps Infiniband adapters based on the new interface by June. Word is that Intel’s server-oriented Sandy Bridge CPUs will support PCI Express 3.0 "before the end of 2011," as well.

Comments closed
    • cygnus1
    • 10 years ago

    reply fail

    • Farting Bob
    • 10 years ago

    Yeah, this seems useful for things that currently push up against the x1 speed limit, since most boards will have a handy x1 slot, then just x16 slots. I know you can stick a x4 card in a x16 slot, but it would be nice if x1 was enough for everything that isn't a GPU.

    • bcronce
    • 10 years ago

    I’m less thrilled about the x16 slots than about the x1/x4 slots having twice as much bandwidth.

    A RAID card in a x4 slot with a bunch of SSDs could push ~4GB/sec... /drool

      • UberGerbil
      • 10 years ago

      If your video cards could live with an x4 3.0 slot (and current ones certainly could), it would be kind of interesting to have a board with nothing but x4 (and maybe x1) slots, and trim a couple of inches off the fore-aft dimension of the mobo…

    • Flying Fox
    • 10 years ago

    Did Nvidia manage to add a couple hundred watts supplied by the slot with the new spec?

    • Usacomp2k3
    • 10 years ago

    These will be used primarily in servers. Has PCIe made inroads over PCI-X in server farms?

      • Krogoth
      • 10 years ago

      Yes, PCI-X is dying.

      Just look at eBay; there is an abundance of used PCI-X peripherals. Also, take notice of how new server/workstation platforms have no PCI-X support. 😉

      *- Just to get it out of the way and avoid confusion for newcomers: PCI-X is not PCIe.

      PCI-X is a server/workstation-oriented version of PCI. It uses longer “fingers” on cards, which allows them to operate at wider/faster bus speeds (64-bit at 33MHz, 66MHz, 100MHz, or 133MHz). PCI-X slots are compatible with most PCI cards. The only catch is that the PCI-X bus operates at the speed of the slowest device, so sticking in a PCI card will force the PCI-X bus to operate at PCI speed (32-bit, 33MHz).
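The widths and clocks in the comment above translate directly into peak bus bandwidth (peak theoretical numbers for a shared bus; real throughput is lower and split among devices). A quick sketch of the math:

```python
# Peak theoretical PCI/PCI-X bandwidth = bus width (bits) x clock / 8.
# These are shared-bus peaks, not per-device throughput.

def bus_mb_per_s(width_bits, clock_mhz):
    """Peak bus bandwidth in MB/s for a given width and clock."""
    return width_bits * clock_mhz / 8  # bits -> bytes

pci  = bus_mb_per_s(32, 33)   # classic PCI: ~132 MB/s
pcix = bus_mb_per_s(64, 133)  # PCI-X 133: ~1064 MB/s, roughly 1 GB/s
print(pci, pcix)
```

This is why dropping a plain PCI card into a PCI-X bus hurts so much: the whole bus falls from roughly a gigabyte per second back to ~132 MB/s.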

      • cygnus1
      • 10 years ago

      Yes, PCI-X is definitely considered legacy now. Seeing those cards around reminds me of VESA Local Bus cards…

      PCIe is all there is in many servers these days. Fibre HBAs and multiport GigE NICs definitely take advantage of the dedicated bandwidth in PCIe.

    • jdaven
    • 10 years ago

    I have a use. How about 7 of these

    http://www.newegg.com/Product/Product.aspx?Item=N82E16820227517 in a RAID 0 configuration on one of these: http://www.newegg.com/Product/Product.aspx?Item=N82E16813188067

      • cynan
      • 10 years ago

      But where are you going to put the video card? Going to have to settle for only 6 of them I guess…

      • NeelyCam
      • 10 years ago

      Love it. $4.4k and it’s out of stock.

        • Buzzard44
        • 10 years ago

        Theoretically you could fill that…

      • indeego
      • 10 years ago

      That product is a definite avoid until they get better support/documentation/firmware going <.<

    • dpaus
    • 10 years ago

    "and 40Gbps Ethernet adapters." Who does Mellanox think they'll be selling these to in June?

    edit: meant as a reply to #2

      • odizzido
      • 10 years ago

      yeah I was wondering that too. AFAIK ram drives aren’t all that popular.

      • ostiguy
      • 10 years ago

      High Performance Cluster environments.

      Infiniband hasn’t found much use out of that niche

      Some network virtualization players (Xsigo) are pushing infiniband to replace both ethernet and fibre channel

        • dpaus
        • 10 years ago

        Does the current implementation support channel pooling and automatic fail-over?

        • shank15217
        • 10 years ago

        What are you talking about? That’s EXACTLY Infiniband’s niche.

        • shank15217
        • 9 years ago

        Infiniband is all over the place, not just HPC and it competes very well with 40Gbps Ethernet. It can even encapsulate Ethernet and it’s 2x-4x cheaper.

      • cmrcmk
      • 10 years ago

      Not that many people would do it, but more throughput on the NIC enables more scenarios where thin clients become possible. At 40Gbps, graphic designers could work seamlessly in Photoshop on a thin client. Again, I doubt many people would want to do this, but you could potentially save some money with this setup.

    • Game_boy
    • 10 years ago

    Is there anything now or in the near future that would benefit from more bandwidth than PCIe 2.0?

      • jdaven
      • 10 years ago

      Video Cards.

        • khands
        • 10 years ago

        Even that is tough to swallow though; most cards don’t see much benefit moving from 1.1 to 2.0. If the 6000 series releases as PCI-E 2.0 cards, I won’t be disappointed.

          • KikassAssassin
          • 10 years ago

          HardOCP did a test recently showing that two GTX 480s in SLI perform exactly the same at PCI-E 2.0 x4/x4 as they do at x16/x16 at resolutions up to 2560×1600. The only bottleneck they found was with three monitors at 5760×1200, where x8/x8 was just slightly slower than x16/x16.

          http://www.hardocp.com/article/2010/08/25/gtx_480_sli_pcie_bandwidth_perf_x16x16_vs_x4x4/

          More bandwidth is never a bad thing, and it's good that they're pushing out the new spec well before it's necessary, so it'll be well established in the market by the time it does become an issue. For now, though, you don't need to worry about what PCI-E spec your new motherboard or video card uses.

      • OneArmedScissor
      • 10 years ago

      Forget the bandwidth. I think it would be pretty cool to shrink video cards down and have them connect to 4x slots that supply more power and don’t need cables.

        • khands
        • 10 years ago

        Yeah, I guess the biggest thing is they need to start adding more juice through the spec.

          • UberGerbil
          • 10 years ago

          No, video cards need to start demanding less. At the rate we’re going, motherboards will have to be thicker than laptops just to accommodate the metal layer for handling all the juice.

            • khands
            • 10 years ago

            GPUs are slowly but surely turning into add-on PCs.

      • NeelyCam
      • 10 years ago

      Insane SSD RAID arrays

        • NeelyCam
        • 10 years ago

        Or octo-SLI/Crossfire sets using 4x each
