Gigabyte X299-WU8 uses dual PEX switches to light up every slot

Intel's X299 platform offers a maximum of 44 PCIe 3.0 lanes from its accompanying CPUs, but even that might not be enough for the most demanding builders. Enter Gigabyte's X299-WU8 motherboard. This mobo uses not one, but two eye-wateringly expensive Broadcom PEX 8747 PCIe switches under its chipset heatsink to provide all four of its main PCIe 3.0 slots with 16 lanes of connectivity. If you just need to run a lot of single-slot cards, the WU8 can oblige with 16 lanes on its first slot and eight lanes to each of the remaining six PCIe slots on the board.
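
How does the lane math shake out? Here's a quick back-of-the-envelope sketch; the assumption that each PEX 8747 hangs 32 downstream lanes off an x16 uplink is ours, inferred from the slot counts above rather than stated by Gigabyte.

    # Rough lane math for the X299-WU8's two slot configurations.
    # Assumption (not confirmed by Gigabyte): each PEX 8747 exposes 32
    # downstream lanes behind an x16 uplink, for 64 switched lanes in total.
    configs = {
        "four x16 slots":            [16, 16, 16, 16],
        "one x16 plus six x8 slots": [16, 8, 8, 8, 8, 8, 8],
    }
    for name, slots in configs.items():
        print(f"{name}: {len(slots)} slots, {sum(slots)} downstream lanes")
    # four x16 slots:            4 slots, 64 downstream lanes
    # one x16 plus six x8 slots: 7 slots, 64 downstream lanes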

To provide extra power to whatever expansion cards ultimately occupy those slots, Gigabyte includes a six-pin PCIe connector southwest of the CPU socket. To keep those switches (and the VRM) cool, Gigabyte uses a blend of heat pipes and finned heatsinks that worm their way through the top half of the board. Eight-phase International Rectifier power circuitry provides juice to whatever chip ends up in the LGA 2066 socket.

Cramming every inch of board space full of PCIe slots and heatsinks does involve some tradeoffs. The WU8 has only a single M.2 2280 slot for next-gen storage gumsticks, although it does offer eight SATA ports. We're sure you can find some extra PCIe lanes to plug in an M.2 riser card if you really need more NVMe storage devices. The WU8's back panel offers two USB 2.0 ports, six USB 3.1 Gen 1 ports, and two USB 3.1 Gen 2 ports: one Type-A and one Type-C. Dual Intel Gigabit Ethernet ports handle wired networking, and Realtek ALC1220VB audio should provide serviceable sound quality for the discerning listener.

Gigabyte didn't provide pricing or availability info for this board, but two PEX switches don't come cheap. For those who need its unique capabilities, however, the X299-WU8 will likely be worth every penny.

Comments closed
    • Chrispy_
    • 1 year ago

    Can someone please fill in the obvious gap in my understanding?

    At best, the CPU can only talk to 44 lanes at once.

    Does this board connect 112 PCIe lanes to the CPU, but then the CPU bottlenecks everything so that despite the expensive switches only 44 of those lanes are used at once?

    Do cards talk to each other without going via the CPU, or something?

      • Krogoth
      • 1 year ago

      I believe the CPU uses one of its UPI channels (basically it is an updated QPI link) to talk to the PCIe controller(s) which are there for that reason. They are normally used for socket to socket talk on multi-socket Xeon platforms.

      • danazar
      • 1 year ago

      The switches on this board offer up to 64 lanes of connectivity. There are two configurations described: either (16×4) or (16×1)+(8×6). Either way that's 64 lanes: 16×4 = 64, and 16+48 = 64. To use all 7 PCIe slots you'll need to use the latter setting.

      PCIe cards can need up to 8x or 16x bandwidth, but not (usually) 100% of the time. With PCIe switches, even if the CPU offers only 44 lanes, each card can connect to the switch at 16x or 8x, and as long as the cards aren’t all trying to use 100% bandwidth to the CPU at the same time, they’ll get full 16x or 8x bandwidth on demand when they need it.
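
      Here's a toy way to picture that sharing; the x16-per-switch uplink width is my assumption, not a published spec:

          # Toy model of switch oversubscription. Figures are illustrative;
          # the x16 CPU-facing uplink per switch is an assumption.
          UPLINK_LANES = 16          # assumed uplink width per switch
          GBPS_PER_LANE = 0.985      # usable PCIe 3.0 bandwidth per lane, GB/s

          def fits_on_uplink(cpu_bound_demands_gbps):
              """True if simultaneous CPU-bound traffic fits on one switch's uplink."""
              return sum(cpu_bound_demands_gbps) <= UPLINK_LANES * GBPS_PER_LANE

          # Two x16 cards behind one switch, both bursting ~8 GB/s to the CPU at once:
          print(fits_on_uplink([8.0, 8.0]))   # False -> they'd have to share the uplink
          # The same cards with staggered traffic:
          print(fits_on_uplink([8.0, 4.0]))   # True  -> each burst gets full speed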

      • jihadjoe
      • 1 year ago

      "Do cards talk to each other without going via the CPU, or something?"

      I believe this is indeed the case. The PCIe switch works much the same way a network switch would: traffic gets routed to its destination, so data from a GPU that only needs to reach another GPU can be handled without involving the CPU. Likewise, only the necessary CPU-to-GPU and GPU-to-CPU data has to make its way across the limited CPU lanes.
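
      A toy sketch of that routing idea; the device layout below is made up purely for illustration:

          # Toy model of switch-style routing: traffic between two endpoints behind
          # the same switch never touches the CPU-facing uplink. Topology is hypothetical.
          behind = {
              "gpu0": "switch_a", "gpu1": "switch_a",
              "gpu2": "switch_b", "nvme0": "switch_b",
              "cpu":  "root",     # the CPU sits above both switches
          }

          def crosses_cpu_uplink(src, dst):
              """A transfer only climbs toward the CPU if the endpoints sit under
              different switches (or one endpoint is the CPU itself)."""
              return behind[src] != behind[dst]

          print(crosses_cpu_uplink("gpu0", "gpu1"))   # False: peer-to-peer inside switch_a
          print(crosses_cpu_uplink("gpu0", "gpu2"))   # True: has to go up toward the CPU
          print(crosses_cpu_uplink("cpu", "gpu0"))    # True: CPU traffic uses the uplink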

    • Waco
    • 1 year ago

    I guess if you can find a way to use this many slots disjointly, it's the board for you. I can't imagine many cases where you need x16 links in this quantity for disparate use cases, though…perhaps GPUs on some, storage on another, and never touch them at the same time? Accelerator cards for when you're not using the GPU or storage in quantity?

    • DPete27
    • 1 year ago

    "We're sure you can find some extra PCIe lanes to plug in an M.2 riser card if you really need more NVMe storage devices."

    This is actually one of the uses I see this board being good for.

    • Neutronbeam
    • 1 year ago

    Would you please provide a bit of numerical context for “eye-wateringly expensive Broadcom PEX 8747 PCIe switches”? I believe you, but when I’m budgeting for hardware and add the phrase “eye-wateringly expensive” as a line item into my spreadsheet, it just completely messes with the autosum function. 🙂

      • phileasfogg
      • 1 year ago

      PCIe Gen3 switch pricing is likely about $0.50-$0.75 per lane, so a 48-lane switch will likely cost somewhere between $24 and $36 per unit depending on volume. The article says there are two PEX 8747s on board.

        • Bauxite
        • 1 year ago

        That's old pricing. When Broadcom sucked up the companies and patents behind this, they massively increased rates. It's intentional and strategic: spinning rust is quickly giving way to big flash (along with SAS/SATA, though not as fast), usually on NVMe for everything but bulk storage, and everything is moving off the old stuff onto PCIe or custom interconnects.

        They also want to sell switched/active cabling, especially for the middle and lower tiers.

        • jihadjoe
        • 1 year ago

        It’s way more expensive now. I looked it up, and even a 16-lane chip goes for ~$90.

      • Grahambo910
      • 1 year ago

      Mouser lists the chip at $111.64 in volumes > 100.

      Edit: DigiKey lists it at $132.59 without volume pricing, so factoring in Gigabyte’s volumes and chips coming straight from Broadcom, the BOM line is probably ~$90/ea…

        • Neutronbeam
        • 1 year ago

        Thank you both!

        • Waco
        • 1 year ago

        Came to say this. On a consumer motherboard like this I bet they add $100 each to the price at minimum.
