PCI Express 3.0 slots appear on MSI Z68 motherboard

Computex — Like seemingly every other motherboard maker, MSI is showing off an X79 model loaded with PCI Express 3.0 slots. You don’t need to step up to Intel’s next high-end socket to get a dose of gen-three PCIe connectivity, though. MSI has also brought PCIe 3.0 slots to a Z68 motherboard with an 1155-pin Sandy Bridge socket.

The top two PCI Express x16 slots are still fed by the processor, but MSI has added the necessary electrical components to ensure compatibility with version 3.0 of the standard. Those components can’t bring Sandy Bridge’s PCIe lanes into the next generation—bandwidth can’t be manufactured out of thin air. However, MSI says it has measured an increase in performance using PCIe-based solid-state drives. The company suspects that the beefed up slots offer improved signal quality, enabling better performance even with PCIe 2.0 expansion cards.
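For reference, the raw numbers behind that bandwidth point can be sketched as follows, using the published PCIe transfer rates and encoding schemes (these are spec figures, not measurements from MSI):

```python
# Rough one-way PCIe bandwidth per lane and per x16 slot, from the
# published transfer rates and encoding overheads of each generation.

def lane_bandwidth_gbytes(transfer_rate_gt, payload_bits, total_bits):
    """Effective one-way bandwidth of a single lane, in GB/s."""
    return transfer_rate_gt * payload_bits / total_bits / 8

gen2 = lane_bandwidth_gbytes(5.0, 8, 10)     # PCIe 2.0: 5 GT/s, 8b/10b encoding
gen3 = lane_bandwidth_gbytes(8.0, 128, 130)  # PCIe 3.0: 8 GT/s, 128b/130b encoding

print(f"PCIe 2.0 x16: {gen2 * 16:.1f} GB/s per direction")  # ~8.0 GB/s
print(f"PCIe 3.0 x16: {gen3 * 16:.1f} GB/s per direction")  # ~15.8 GB/s
```

The near-doubling comes from both the higher signaling rate and the much lighter 128b/130b encoding overhead, which is why a Sandy Bridge CPU with gen-two lanes can't be upgraded to gen-three speeds by motherboard components alone.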

If you want to take full advantage of the board’s PCIe 3.0 support, you’ll have to wait for Ivy Bridge… and the requisite UEFI firmware update. MSI claims the board will be compatible with Intel’s upcoming desktop CPU, which will feature gen-three PCIe lanes built in. It’s also working on a new UEFI interface that promises to be much improved over the company’s previous efforts.

On the graphics front, MSI has come up with a handful of interesting cooling tweaks to differentiate its cards from the rest of the pack. The cooling fans on the N580GTX Lightning Xtreme Edition are coated with a special paint that changes color from blue to white depending on the ambient temperature. Hypercolor anyone? Another interesting twist: the card can spin the fans in reverse to prevent dust and other particulate from accumulating on the heatsink. This will only happen when the card is idling with a low enough GPU temperature, and then only for 30 seconds at a time.

MSI’s Afterburner software has become rather popular among folks looking to tweak their graphics cards from Windows. Now, there’s an app for that. Well, an Android one, anyway. An iOS version of Afterburner is on the way, as well.

Comments closed
    • Bensam123
    • 8 years ago

    “Those components can’t bring Sandy Bridge’s PCIe lanes into the next generation—bandwidth can’t be manufactured out of thin air.”

Hrmmm… not to nitpick, but that just doesn’t sound right. Perhaps a semicolon might’ve been in order.

    • flip-mode
    • 8 years ago

    MSI’s customer service apparently sucks pretty hard.

    Is there any mobo maker that has decent customer service?

      • shank15217
      • 8 years ago

      Asus, AsRock, Tyan, Supermicro all have decent customer service as far as I could tell.

    • willmore
    • 8 years ago

Has anyone really looked into this? Are we likely to need the additional bandwidth of a 16x PCI-E 3.0 slot (vs. 2.0) in the three years or so that this board will have as an operational lifetime? The last time I saw an article on this, I think the results were that, unless you’re trying to get that last 5% out of the GPU, a 4x slot was plenty of bandwidth.

    It just seems a waste to stuff all of your 2.0 or 3.0 pci-e lanes on a pair of 16x slots and then leave all of your 1x slots at 1.0–where they can’t even feed a single port USB3.0 or SATA-2 card at full bandwidth, let alone a dual or quad port card.

    Then again, maybe I’m biased as I just used a Dremel to cut down a 16x G210 card to 1x so that it would fit in an HTPC. Works great even at pci-e 1.0.

      • NeelyCam
      • 8 years ago

      [quote<]Then again, maybe I'm biased as I just used a Dremel to cut down a 16x G210 card to 1x so that it would fit in an HTPC. Works great even at pci-e 1.0.[/quote<] That is so cool! Pics?

        • willmore
        • 8 years ago

        Yes, I took a couple of them, but I don’t have a photo hosting service. I can email them to you or if there’s a way to post them here….

          • NeelyCam
          • 8 years ago

Flickr?

      • shank15217
      • 8 years ago

      Why did you cut the card? You could have just cut the slot notch open. Some systems even come with open slots so cards can fit.

        • willmore
        • 8 years ago

The MB it was to fit into had headers, some caps, an xtal, etc. in the way. It was a $9 card I got at Fry’s that was homeless, so the risk was pretty darn low. Now I have a handy 1x video card if I ever need to add a couple of extra heads to a box–once it’s done being a video decoder for my HTPC, that is.

      • willmore
      • 8 years ago

      [url<]https://techreport.com/forums/viewtopic.php?f=3&t=76757[/url<]

        • NeelyCam
        • 8 years ago

        LOL that’s just crazy! +5

        • flip-mode
        • 8 years ago

I agree with Neely, that’s the craziest component mod I’ve seen. Wow, Dremeling away 80% of your video card’s connectors and the thing still works. That amazes me.

          • willmore
          • 8 years ago

          PCI-E was very well designed to scale the links to whatever is there.

You could say it’s a natural fallout of having to use multiple independent serial links–which you have to do when the clock speeds get this high, otherwise timing skew would kill your timing budget–but there are ways they could have done it that wouldn’t provide this benefit. My hat’s off to the PCI-E designers for this one.

            • Bensam123
            • 8 years ago

            That doesn’t always work though. Even if it’s supposed to and that’s the spec, some cards just refuse to work or have issues when they have scaled down bandwidth. Component makers don’t always play by the rules. Be happy yours worked out for you.

            • NeelyCam
            • 8 years ago

            Then those component makers should be singled out as not following the spec.

            Do you have any specific examples?

            • willmore
            • 8 years ago

            The only way to get them to play nice is to get their customers to complain. Out them to the fanbase, let us beat on the integrators and let *them* push back against the bad chip designers.

      • tejas84
      • 8 years ago

Well, my two 6990s in CrossFire disagree strongly with you. They need the extra bandwidth….

        • Arag0n
        • 8 years ago

yeah, because that’s the most common system on earth, owned by 0.001% of people…. get real. It makes sense to keep some high-end mobos with plenty of PCIe lanes, but normal users won’t notice the difference between a 4x and 16x. It’s a waste of materials that we keep manufacturing cards to fit into x16 slots… it makes sense with some but not every GPU.

        • willmore
        • 8 years ago

        Oh, cool, you [b<]tested it[/b<]! Nice. Care to share your results with the rest of us?

    • Farting Bob
    • 8 years ago

    We have PCIe 3!*

*Actually we don’t, but our motherboard uses slightly higher quality components that may be beneficial for PCIe 3 cards when they come out, but nowhere near as useful as a real PCIe 3 slot.

      • Dissonance
      • 8 years ago

      Not quite. According to MSI, the slots are fully PCIe 3.0 compliant. They’re just a conduit for the PCIe connectivity built into the CPU, so that’s the determining factor.

        • Arag0n
        • 8 years ago

As long as your CPU is PCIe 2.0 you’ll hardly notice any difference…. Anyway, as was shown some days ago in the Bulldozer motherboard review, we still don’t take full advantage of the PCIe bandwidth; that’s why they had to test with SSDs and not GPUs.

Oh, and please, I’m pretty sure no one owns a $2,000 PCIe SSD for server purposes! If you’re in the server business, this motherboard isn’t for you…

          • bcronce
          • 8 years ago

PCIe devices can communicate with each other without use of the CPU. I’m not sure how x-fire/SLI works completely, but increasing the bandwidth to the chipset could increase the bandwidth between your SLI’d cards.

            • Forge
            • 8 years ago

            The PCIe host controller, which is 2.0 on all current CPUs, lives in the CPU. It may not need x86-64 supervision as such, but all PCIe transfers definitely do include someone on the CPU die.

            The chipset is being deprecated at a rapid pace. Starting with the venerable Athlon64, more and more of what the northbridge used to do is now CPU work. Soon you’ll see more and more of the southbridge move into the CPU as well, until you’re buying an Intel system (on a chip), and buying a Gigabyte or MSI flat-part-with-connectors. The motherboard won’t do anything but physically connect things, at that point.

          • shank15217
          • 8 years ago

You won’t notice any difference at all; there is no ‘almost’ in PCIe.

          • WillBach
          • 8 years ago

          You’re missing the point that the motherboard will be compatible with PCIe 3.0 CPUs when they come out.

            • Arag0n
            • 8 years ago

No, I’m not. Even if you upgrade your CPU once Ivy Bridge comes out, you won’t notice the difference either. Did you miss last week’s benchmarks, where an SLI setup on x8 lanes performed equal to SLI on x16 lanes? There is still a long margin between what PCIe delivers and what you need. It’s not like USB 2 and 3, man. Only extremely high-end PCIe SSDs will notice the difference, something you pretty surely won’t be using on these motherboards.

            • NeelyCam
            • 8 years ago

            Yeah, but those cards are so last week.

            • Arag0n
            • 8 years ago

True, but don’t expect a change in the bandwidth required till the XBOX 720 or PS4 hits the streets. Maybe 2014, 2015…

            • NeelyCam
            • 8 years ago

I’m thinking a 2x increase per generation (40nm->28nm) is perfectly realistic, and at least on the Intel side these Gen3 links will probably be limited to x8, so we’re pretty much in line with consumer requirements.

      • WillBach
      • 8 years ago

MSI is claiming that the motherboard will be compatible with PCIe 3.0 CPUs when they come out, and that you’ll be able to use the PCIe slots at PCIe 3.0 speeds.
