
PCI Express arrives
I wasn't kidding when I said ye olde PCI bus is slow. In its most common implementation in desktop PCs, at 32 bits and 33MHz, the PCI bus has a theoretical peak bandwidth of 133MB/s, which is shared between all devices. To give you some perspective, a single Serial ATA hard drive interface runs at 150MB/s, and Gigabit Ethernet runs at roughly 125MB/s. Asking the PCI bus to host a SATA RAID array and a Gigabit Ethernet controller is like asking Alec Baldwin to read through F.A. Hayek's The Road to Serfdom: you could practically watch his lips move.

Not only does PCI lack bandwidth, but its shared bus architecture requires arbitration between devices that want to transfer data and involves contention between upstream and downstream communications. PCI's limitations have forced chipset makers to integrate ever more functionality into their chipsets, as Intel did when it hung a Gigabit Ethernet interface off the north bridge in its previous-generation 875P chipset.

At 33MHz and 32 bits, ye olde PCI is decidedly slow and wide. PCI Express, meanwhile, is the epitome of the new thinking in internal PC communications links: the "fast and narrow" approach, which transmits data serially with lower pin counts and much higher signaling rates. Despite the dorky name, PCI Express doesn't actually share all that much with PCI, save some memory addressing and device initialization similarities, so drivers and operating systems don't need major plumbing changes to work with the new standard.
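
To put rough numbers on the "slow and wide" versus "fast and narrow" contrast, here's a quick back-of-the-envelope sketch in Python. It assumes the commonly quoted first-generation PCI Express signaling rate of 2.5Gbps per lane with 8b/10b encoding, details that aren't spelled out above but are where the 250MB/s-per-lane figure discussed below comes from.

```python
# Back-of-the-envelope comparison of "slow and wide" PCI versus a single
# "fast and narrow" PCI Express lane. The 2.5Gbps signaling rate and 8b/10b
# encoding are assumed first-gen PCI Express figures, not taken from the article.

# Classic desktop PCI: a 32-bit parallel bus at a nominal 33MHz, shared by all devices.
pci_bus_width_bits = 32
pci_clock_hz = 33.33e6
pci_peak_mb_s = pci_bus_width_bits / 8 * pci_clock_hz / 1e6   # ~133MB/s total, shared

# One PCI Express lane: a dedicated serial link at 2.5Gbps per direction, 8b/10b
# encoded (8 data bits carried in every 10 bits on the wire).
pcie_signal_rate_bps = 2.5e9
encoding_efficiency = 8 / 10
pcie_lane_mb_s = pcie_signal_rate_bps * encoding_efficiency / 8 / 1e6   # 250MB/s per direction

print(f"PCI (shared bus):               {pci_peak_mb_s:.0f} MB/s total")
print(f"PCI Express X1 (per direction): {pcie_lane_mb_s:.0f} MB/s, dedicated")
```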

In fact, PCI Express is downright network-like on several levels. On the lowest, physical layer, PCI Express uses pairs of dedicated, unidirectional links to transfer data between devices. A pair of links in a PCI-E connection is known as a "lane," and each lane offers 250MB/s of bandwidth in each direction, upstream and downstream. Because PCI Express lanes are point-to-point affairs, there are no worries about shared bandwidth, and because the lanes are bidirectional, there's no contention between sending and receiving data.

The slowest possible PCI Express configuration is a PCI Express X1 slot, where a device gets 250MB/s of bandwidth in each direction, or 500MB/s in full duplex. However, like NIC teaming in an Ethernet network, PCI Express lanes can be teamed up to deliver more bandwidth between devices. For graphics, sixteen PCI Express lanes will connect a graphics card to the rest of the system for 4GB/s in each direction, or a total bandwidth of 8GB/s, full duplex. That's nearly four times the peak bandwidth of the current, PCI-derived standard for graphics, AGP 8X, which tops out at 2.1GB/s.
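
The arithmetic for wider links is just multiplication. As a sanity check, the sketch below scales the 250MB/s-per-lane, per-direction figure across a few link widths and compares an X16 link against the 2.1GB/s figure cited above for AGP 8X.

```python
# Scale the per-lane PCI Express figure across a few link widths and compare
# an X16 link to AGP 8X. The 2.1GB/s AGP 8X peak is the figure cited above.
LANE_MB_S = 250            # per lane, per direction (first-gen PCI Express)
AGP_8X_PEAK_MB_S = 2100    # AGP 8X tops out at roughly 2.1GB/s

for lanes in (1, 4, 8, 16):
    per_direction = lanes * LANE_MB_S
    full_duplex = per_direction * 2   # upstream plus downstream
    print(f"X{lanes:<2}: {per_direction:>5} MB/s per direction, {full_duplex:>5} MB/s full duplex")

ratio = (16 * LANE_MB_S * 2) / AGP_8X_PEAK_MB_S
print(f"X16 versus AGP 8X: roughly {ratio:.1f}x the peak bandwidth")
```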

The similarities between PCI-E and a network don't stop at the physical layer, either. PCI Express also employs a packet-based protocol for data transmission, and it uses packet header information to reserve bandwidth for delay-sensitive data streams with eight different traffic classes. These facilities should make PCI-E ideal for more than just dedicated connections between devices. PCI Express should become a standard for internal PC communications, just as AMD's HyperTransport is now.
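
The traffic class idea is easiest to see in miniature. The sketch below is purely illustrative; it is not the actual PCI Express packet format or arbitration scheme, just a toy model in which each packet is tagged with one of eight classes and delay-sensitive traffic gets serviced ahead of bulk transfers.

```python
# Toy illustration of traffic classes: NOT the actual PCI Express packet format or
# virtual-channel arbitration, just the idea of tagging each packet with one of eight
# classes (TC0..TC7) so delay-sensitive traffic can be serviced ahead of bulk transfers.
import heapq

def make_packet(traffic_class: int, payload: str):
    assert 0 <= traffic_class <= 7, "eight traffic classes: TC0 through TC7"
    # heapq is a min-heap, so negate the class to pop the highest class first
    return (-traffic_class, payload)

link_queue = []
heapq.heappush(link_queue, make_packet(0, "bulk disk transfer"))
heapq.heappush(link_queue, make_packet(7, "delay-sensitive audio stream"))
heapq.heappush(link_queue, make_packet(3, "network frame"))

while link_queue:
    neg_tc, payload = heapq.heappop(link_queue)
    print(f"TC{-neg_tc}: {payload}")
```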


The PCI Express X16 (top) and X1 (bottom) slots sandwich a pair of legacy PCI slots on Intel's D915GUX motherboard

The PCI-E physical layer spec allows for X1, X2, X4, X8, X12, X16, and X32 lane widths, but the initial connector specs call only for X1, X4, X8, and X16 slots. X4 and X8 slots may make appearances in servers soon, but for desktop systems, expect to see X1 slots for expansion and X16 slots for graphics.

Obviously, PCI Express will bring more bandwidth to the PC platform, but more importantly, it establishes a new foundation for PC expansion standards. PCI on desktop PCs hasn't changed radically since its inception, but PCI Express has the engineering headroom and a practical set of options to expand when more bandwidth is needed.