Gigabyte X58 mobo has nothing but PCIe x16 slots

With some of those new AMD 890FX mobos sporting outrageous numbers of PCI Express x16 slots, we were bound to see some retaliation from the Intel camp. Gigabyte has now introduced an X58 motherboard with nothing but PCIe x16 connectivity—not even a single 32-bit PCI or PCIe x1 slot graces the blue surface of its expansion area.

The GA-X58A-UD9 has seven physical PCIe x16 slots in total; Gigabyte has used not just one, but two of Nvidia’s NF200 PCIe bridge chips, enabling support for both four-way AMD CrossFire and four-way Nvidia SLI multi-GPU configurations. Other ports, connectors, and slots include 12 Serial ATA ports in total (two of which have 6Gbps connectivity), six DIMM slots (not that you’d want any fewer to feed a quad- or six-core Core i7), two USB 3.0 ports, and FireWire.

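To put that slot count in perspective, here’s a back-of-the-envelope lane budget. The X58 I/O hub supplies 36 PCIe 2.0 lanes, and each NF200 bridge takes an x16 uplink and fans it out to 32 downstream lanes; how Gigabyte actually wires the slots is our assumption, not the company’s published topology:

```python
# Back-of-the-envelope PCIe lane budget for the GA-X58A-UD9.
# Assumes the commonly cited NF200 arrangement (x16 uplink, 32 downstream
# lanes per bridge); the exact slot wiring is an assumption on our part.

X58_IOH_LANES = 36       # PCIe 2.0 lanes provided by the X58 I/O hub
NF200_UPLINK = 16        # each NF200 consumes an x16 uplink...
NF200_DOWNSTREAM = 32    # ...and exposes 32 lanes toward the slots
NUM_BRIDGES = 2

uplink_total = NUM_BRIDGES * NF200_UPLINK      # 32 lanes feed the bridges
leftover = X58_IOH_LANES - uplink_total        # 4 lanes left on the IOH
slot_lanes = NUM_BRIDGES * NF200_DOWNSTREAM    # 64 lanes behind the bridges

print(f"IOH lanes feeding the bridges: {uplink_total}")
print(f"IOH lanes left for everything else: {leftover}")
print(f"Lanes available to the seven x16 slots: {slot_lanes}")
# 64 lanes can't fill 7 x 16 = 112 lane positions, which is why some of
# the physical x16 slots can only be wired as x8.
```
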
That’s not all. The X58A-UD9 features a 24-phase power design with "mutual back-up to each 12 phase," which results in not just a preposterous number of chokes around the CPU socket, but also purportedly better longevity and higher maximum power delivery to the CPU. Gigabyte has things covered on the cooling front, too, with an already-massive Hybrid Silent-Pipe 2 cooling system that supports an even more massive heatsink attachment so wide it pokes out of the I/O area.

See the image gallery below for some glamor shots of this behemoth. Gigabyte’s product page has more details.

Comments closed
    • The Egg
    • 10 years ago

    If you look at the inside of the slots, you’ll see that the back half of every other slot is light blue. That’s because even though they’re physically x16 slots, every other one is only wired as an x8 slot.

    • Welch
    • 10 years ago

    Finally… Seriously, no need for 32-bit PCI; almost all add-in cards are PCIe now. But I could have sworn that I saw a board like this from AMD a few months back… perhaps they don’t count it because it was more than likely a pre-release.

      • Anonymous Hamster
      • 10 years ago

      Both Asus and EVGA have already released Intel X58 motherboards with 7 PCIe x16-size slots. Gigabyte has an AMD-based one with 6 PCIe x16-size slots.

    • Forge
    • 10 years ago

    My Tt Armor+ is on the supported cases list, and all those PCIe slots make me goosh. I think I’d need a new PSU, too.

    Nah, can’t justify it, even to me.

    • Anvil
    • 10 years ago

    Looks like a lot of fun.

    • StuG
    • 10 years ago

    Dual 8-pin CPU connectors. I’ve been seeing a rare few boards release with these, but it seems to be getting more prevalent.

    I never really thought in the past that, as we progressed in technology, additional power connections were going to be this prevalent.

      • rogthewookiee
      • 10 years ago

      AND an extra Molex! How much power does this board use?!

        • UberGerbil
        • 10 years ago

        It’s not how much power the…

    • Voldenuit
    • 10 years ago

    Let me guess… still no BIOS fan controls?

      • pogsnet
      • 10 years ago
    • no51
    • 10 years ago

    Gigabyte is starting to bother me with the non-standard motherboard sizing. What case can this monstrosity fit in?

      • StuG
      • 10 years ago

      Mine (Antec P180), as well as my other Antecs and every NZXT I have worked with. It only requires 8 expansion slots; that’s rather standard.

        • no51
        • 10 years ago

        “A case with 9 or more expansion slots is required. Please refer to the ‘Chassis Support List’ for compatible PC cases.” Sorry bro: http://www.gigabyte.us/FileList/ChassisSupport/ga-x58a-ud9_caselist.pdf. Looks like you missed that it skips slot 0 for the chipset cooling solution, and the standard for ATX/E-ATX is 7.

          • Shining Arcanine
          • 10 years ago

          Odd. I only count 7 PCI Express slots on the board. If we include the skipped slot zero, then we have 8 slots. Why is a 9-slot case required?

            • Anonymous Hamster
            • 10 years ago

            If you study the motherboard closely, you’ll see that the first slot starts near the 2nd screw-hole away from the I/O shield/backplate. Normally, the first slot position is at the 1st screw-hole near the backplate. The space from one screw-hole to the next allows for 3 slots; this means that 2 slots have been skipped. Combined with 7 implemented slots, you’ll need space for 9.

    • Chrispy_
    • 10 years ago

    PHENOMENAL COSMIC POWER

    Floppy drive connector

      • Jambe
      • 10 years ago

      ahahahaha

      I haven’t used PCI for a few years, but I stopped using floppies even farther back, except as a “make finicky machines boot” tool.

      • BooTs
      • 10 years ago

      Floppy Drive = Fail drive imo.

      They really should have lost the IDE port and the PS/2 ports, too.

        • UberGerbil
        • 10 years ago

        I won’t buy a motherboard that doesn’t have at least one PS/2 port.

        Generally, unless the legacy connectors are creating some kind of problem, I don’t see what the big deal is. However, given that Intel removed IDE support from the southbridge quite a while ago, it’s surprising that motherboard makers still see the need to put an auxiliary chip on the board to add it back. They’re not going to be pinching pennies quite so much on a high-end board like this, and it probably isn’t creating any real routing or board real-estate issues, but on a board where eliminating PCI is the headline feature, it’s a little odd to see a legacy “checkbox item” like that.

          • Farting Bob
          • 10 years ago

          Yeah, I expect the people who buy a board like this have absolutely no need for IDE; these are for very high-end e-peen builds, and the last time an IDE drive appeared in an e-peen machine was probably 5 years ago.
          The floppy drive connector is also pretty much redundant on a very high-end build; sure, it doesn’t take much space, but it adds no value either, so why bother? On budget boards, sure, but not on this crazy board.

            • shank15217
            • 10 years ago

            You know, there’s more than graphics cards that can fit into PCIe slots. Nothing really e-peen about it.

          • Captain Ned
          • 10 years ago

          I’ll need PS/2 until someone shows me how to hotwire the Model M for USB. It’s supposedly possible, but I’ve not yet seen a definitive guide.

            • Anonymous Hamster
            • 10 years ago

            There’s a website that focuses on IBM clicky keyboards, and they mention specific brand/models of PS/2 to USB converters that work with the Model M. You can find it easily with a bit of Googling.

        • mcforce0208
        • 10 years ago

        Everybody knows that a mobo that does not have a floppy connector is crap. In this day and age, they are instrumental to a company’s success and growth… Well done, Gigabyte.

        On a serious note: awesome board…

      • Anonymous Hamster
      • 10 years ago

      I think that the serial and parallel ports are feeling left out, given that the floppy connector and PS/2 keyboard and mouse ports are still getting the love.

      • DrDillyBar
      • 10 years ago

      haha

    • kamikaziechameleon
    • 10 years ago

    Who gets 4 cards for SLI?

    Hydra makes a lot more sense.

    • kamikaziechameleon
    • 10 years ago

    Without Hydra, isn’t this a lot of wasted PCIe slots?

      • bcronce
      • 10 years ago

      Depends on how many 10Gb NICs and high-end RAID cards you want. Get into workstation/server-grade stuff and you can find a lot of PCIe x8 parts.

      • mesyn191
      • 10 years ago

      2x dual-slot GPUs + 1 RAID card + a sound card will easily use 6 of those slots up.

      If you start using aftermarket GPU heatsinks that take up 3 slots, then you have even less room to work with.

      It’s nice to have this many slots, but I wish they were electrically x16 PCIe slots and not just physical connectors. You get a nice little boost if you actually use CrossFire with 2x x16 slots instead of 2x x8 slots. That is up to the chipset manufacturers and not the mobo manufacturers, though…

        • UberGerbil
        • 10 years ago

        The switching fabric for 16×7=112 lanes would be a pretty fearsome add to the chipset design right now. And routing that many lanes on the motherboard might make for a lot of layers too.

          • mesyn191
          • 10 years ago

          They already have something like 42 PCIe 2.0 lanes coming off the northbridge as is. If they broke it up over two chips and actually gave the bus between them decent bandwidth/latency (i.e., a 32-bit HT bus) instead of just trying to use more PCIe lanes to link them, it probably wouldn’t be as bad as you might think.

          It’ll probably take a few years for something like that to happen, and it wouldn’t be something you’d see outside of an uber-overclocker or server motherboard, of course ($$$), but it should still be reasonably doable.

            • UberGerbil
            • 10 years ago

            The problem is that PCIe is a point-to-point design, not a shared bus. That means the chipset has to incorporate a crossbar-style switch, and the complexity of that goes up with the…

            • mesyn191
            • 10 years ago

            Yeah, that is why I mentioned HT. The 32-bit version gets something like 50GB/s and is very low latency. AFAIK the issue with PCIe bandwidth, as far as gaming is concerned (or more specifically CrossFire/SLI), is card-to-card bandwidth and not card-to-CPU or memory bandwidth. So you may be able to get away with just leaving the CPU’s FSB alone.

            Obviously (as you note), if you did want to be able to use all 7 x16-lane slots at once while accessing the CPU/memory, then you’d really need to improve the FSB on the AMD chips.
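
            For what it’s worth, that ~50GB/s figure checks out. Here’s a quick sanity check, using HyperTransport 3.1’s published numbers (3.2GHz double-pumped clock on a 32-bit link), nothing board-specific:

            ```python
            # Sanity check of the ~50GB/s HyperTransport figure above.
            # Assumes HT 3.1: 3.2GHz base clock, double-pumped, 32-bit links.

            clock_hz = 3.2e9                # HT 3.1 base clock
            transfers = clock_hz * 2        # DDR signaling -> 6.4 GT/s
            bytes_per_transfer = 32 // 8    # 32-bit link = 4 bytes

            per_direction = transfers * bytes_per_transfer   # one way
            aggregate = per_direction * 2                    # both directions

            print(f"Per direction: {per_direction / 1e9:.1f} GB/s")  # 25.6
            print(f"Aggregate:     {aggregate / 1e9:.1f} GB/s")      # 51.2
            ```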

    • ChangWang
    • 10 years ago

    This reminds me of the power-phase battle we had some months ago. Mostly marketing and partly fluff.

    • mmmmmdonuts21
    • 10 years ago

    Folding Machine anyone?

      • zimpdagreene
      • 10 years ago

      Yep, the thought came to mind. I could use the couple of 4870s sitting in the box!

        • Flying Fox
        • 10 years ago

        Unfortunately for Folding you still need to stick with the green camp.

    • Vasilyfav
    • 10 years ago

    It’s PCI-e slots…all the way down.

      • mcnabney
      • 10 years ago

      There is an armadillo in there somewhere.

    • Mourmain
    • 10 years ago

    You know, when someone buys something like this, they aren’t exactly worried about it not fitting in their case. Why don’t they make the motherboard larger and the slots more widely spaced so those cards can actually *breathe*?

      • Taddeusz
      • 10 years ago

      What case would fit that? ATX is a specification, part of which determines slot spacing.

      • WSmart
      • 10 years ago

      PCIe is a serial bus, so we could cable all the lanes off the mobo and use a much more flexible expansion slot format where you can customize the number of lanes, the type of slot, and the spacing of the cards. You could also move cards between systems by just changing the cables, or possibly digitally, at the push of a button. Then if you had three x1 cards, not a problem. If you wanted to cut out cards you don’t need while you fire all lanes on a four-card SLI setup, good to go. There’s no flexibility with the slots hardwired to a mobo. This would also free up space on the mobo for new features.

      Thanks all. Be real, be sober.

      • WSmart
      • 10 years ago

      Triple-slot coolers would be possible if space were no concern, also. Quiet, please.

    • Kurkotain
    • 10 years ago

    new definition for the word “overkill”

      • Taddeusz
      • 10 years ago

      Not really; it’s a recommendation by the PCI-SIG that all PCIe slots be physically x16 so that any card can go into any slot.

        • Flying Fox
        • 10 years ago

        I would have to say “finally”. Hope this is going to trickle down to cheaper boards.

      • Chrispy_
      • 10 years ago

      Overkill, are you kidding?

      That thing only has physical space for four double-wide graphics cards.

      Where is there space for my Killer DeathRay 9000 network card, my PCIe SoundMonster Sonar-Fi Platinum, and my L33T-tech SSD PERC controller for the 8 SSDs in RAID 9000?
