Report: Haswell chipsets add 6Gbps SATA, USB 3.0 ports

Intel typically releases fresh platform hubs, otherwise known as chipsets, with new CPUs. According to DigiTimes, Haswell and its accompanying 8-series platform hubs are due in April 2013, about a year after Ivy Bridge arrived with its 7-series chipsets. The site says this transition will also include a new LGA1150 socket that won’t be backward compatible with existing CPUs.

Code-named Lynx Point, the 8-series platform hubs will reportedly be much more power-efficient than the current generation. Fudzilla claims the new chips are projected to consume 50% less power on average than 7-series products. They’ll offer more connectivity, too. The site says we can expect four 6Gbps SATA ports and six USB 3.0 ports in addition to the usual mix of other goodies. Intel will stick with second-generation PCI Express lanes, at least for the platform hub. Haswell-based CPUs should have plenty of gen-three PCIe built in.

Intel’s 7-series platform hubs boast dual 6Gbps SATA ports and four USB 3.0 ports, so the 8-series upgrades are relatively modest. Still, the additional Serial ATA ports are long overdue; competing AMD products have offered six 6Gbps SATA ports for years.

To be fair, AMD’s current 6Gbps SATA implementation is slower than Intel’s. Perhaps AMD’s upcoming A85 platform hub will fare better. It adds ports, too; at the Computex trade show in June, we learned that the new chip will sport eight 6Gbps SATA ports in addition to six USB 3.0 ports. FM2 motherboards with the A85 platform hub are expected to hit the market this fall. They won’t be backward compatible with older CPUs, either.

Comments closed
    • Bensam123
    • 7 years ago

    Makes you wonder how AMD maintained compatibility with all their CPUs for roughly six years. 😉

      • Derfer
      • 7 years ago

      Frankly, by being less innovative. There’s a reason AMD chipsets ALWAYS lag behind in performance. They could make bigger technological leaps, but then they might alienate their more budget-conscious fan base.

        • Krogoth
        • 7 years ago

        Nah, it is because of cost.

        It is more expensive to support, QA, and validate a larger range of sockets at one time.

          • Bensam123
          • 7 years ago

          As opposed to the cost of making brand-new chips that support new standards backwards compatible with older sockets?

          They still released newer chipsets AND sockets, the CPUs were just backwards compatible with all their sockets. AM2/AM2+/AM3/AM3+.

      • MadManOriginal
      • 7 years ago

      By sucking for 6 years.

        • rrr
        • 7 years ago

        I concur, MadMan’s mother is very compatible with me.

      • Krogoth
      • 7 years ago

      AMD hasn’t radically changed the core logic since K8.

      They stuck with throwing the memory controller onto the die and nothing else, until Fusion, where they finally placed the GPU onto the CPU die. Fusion chips, of course, aren’t socket compatible with the non-Fusion chips.

      Intel has been changing the core logic on their chips since Nehalem, which is why any significant change required a new socket. Lynnfield => throwing in the PCIe controller, SB/IB => throwing on the GPU, Haswell => new GPU design.

        • Bensam123
        • 7 years ago

        I don’t buy that Intel has been changing things more than AMD. Intel’s ‘changes’ are basically the same as AMD’s. AMD has been releasing new sockets too; their processors were just backwards compatible with older sockets.

          • Krogoth
          • 7 years ago

          Have you been paying any attention to Intel’s CPU development since Nehalem?

          They have been throwing more and more core logic onto the CPU packaging. The extra stuff requires its own pins for power and data transfer. Any significant change in the layout of the core logic is going to require a different socket pin layout.

          The only thing AMD has thrown onto the CPU package has been the GPU (Fusion). The fusion chips have their own socket, while the GPUless chips have just the memory controller on them.

          There’s little difference between DDR2 and DDR3 from an electrical standpoint aside from operating at lower voltages. That’s how AMD was able to pull off the entire AM2 => AM2+ => AM3 => AM3+ transition without sacrificing backwards compatibility. The primary difference between those sockets is actually voltage support for newer AMD chips and faster HyperTransport speeds.

    • bcronce
    • 7 years ago

    “LGA1150 socket that won’t be backward compatible with existing CPUs”

    Say it isn’t so! Not surprising.

      • UberGerbil
      • 7 years ago

      This isn’t really news. What will be is if Broadwell isn’t backward compatible with LGA1150 (and there have been hints that could be the case.)

        • NeelyCam
        • 7 years ago

        Intel’s tock-tick (or is it tick-tock?) CPUs have been compatible for a while now. I’d be very surprised if Broadwell chips couldn’t be used with Haswell mobos

      • stmok
      • 7 years ago

      The change is for engineering reasons. Not for marketing reasons.

      LGA1150 integrates voltage regulation on the microprocessor. This eliminates the voltage variation caused by mobo manufacturers. (Due to the deviations in VRM implementation.) …The goal is to bring voltage consistency, regardless of mobo brand.

      The physical holes for the heatsink will be compatible, as Intel didn’t change the specs too much for the cooling side. So your existing LGA1155/1156 heatsink solution is likely to be compatible until 2015.

      The enterprise versions (Xeon) will use DDR4, while the consumer version will use DDR3.

      Things will change again in 2015/2016 with Skylake and Skymont respectively. (This is when Intel will introduce DDR4 to the consumer. Since prices will have fallen and matured to affordable levels for this market.)

        • Duck
        • 7 years ago

        Since when does Intel care about affordability for the consumer? That’s not the reason DDR4 comes later to the consumers. DDR4 is worse than DDR3. You should put off the ‘upgrade’ for as long as possible until you need so much RAM that you cannot fit it into the 2/4 slots you have on your motherboard.

          • yogibbear
          • 7 years ago

          Obviously newer generations of RAM (DDR4 in this case) start with average timings compared to their more mature forebears (DDR3). As manufacturing improves, the timings come down. Yeah, the only area it will really win is bandwidth. But that’s good enough.

            • NeelyCam
            • 7 years ago

            Power consumption also drops

            • Duck
            • 7 years ago

            There’s no technical reason why you can’t produce DDR3 on a smaller process node with lower voltage and lower power consumption.

            • NeelyCam
            • 7 years ago

            Absolutely true. But DDR4’s bus termination scheme is better from a power consumption point of view.

            • Duck
            • 7 years ago

            How much better? I doubt it’s enough to justify the drop in performance.

            • Krogoth
            • 7 years ago

            Memory performance has been irrelevant in the desktop space for years outside of synthetic epenis benchmarks.

            Any penalties related to latency have been masked by the large caches and advanced pre-fetchers found on modern CPUs. Memory bandwidth hasn’t been an issue for a long, long time.

            The main benefit of DDR4 is cheaper high-density modules due to die shrinks and lower voltage requirements. It means we will eventually have 16GB and 32GB modules on the cheap. Desktops will be able to hit the 192GB memory limit of Windows Vista, 7, and 8 (Business/Ultimate).
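As a quick sanity check on that claim, here is a minimal arithmetic sketch; the module densities are illustrative examples, not figures from the comment:

```python
# Illustrative arithmetic: DIMMs needed to reach the 192GB limit of
# Windows Vista/7/8 (Business/Ultimate) at various module densities.
WINDOWS_LIMIT_GB = 192

for module_gb in (8, 16, 32):
    dimms = WINDOWS_LIMIT_GB // module_gb
    print(f"{module_gb}GB modules: {dimms} DIMMs to reach {WINDOWS_LIMIT_GB}GB")
```

With 32GB modules, only six DIMMs are needed, which is within the slot count of many desktop boards.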

        • Bensam123
        • 7 years ago

        .

    • 5150
    • 7 years ago

    Why do we have any SATA II ports anymore? The switch to SATA III should have killed them all, especially by the time the 7-series chipset came out.

      • jdaven
      • 7 years ago

      For some weird reason, chipsets include SATA and USB from two generations even though the most up-to-date generation is backwards compatible. This is not the case for PCIe: all the PCIe slots are either the current generation or the previous one. I assume there is some sort of power or cost savings. Also, the latest and greatest is not necessary for tech that can never go that fast (e.g. SATA optical drives).

      • Farting Bob
      • 7 years ago

      Because it still saves a few pennies putting SATA 2 (and USB 2) in there, and margins for motherboards are pretty low (especially cheaper ones), so if the chip costs a dollar less, that is something companies like Gigabyte and Asus will love.

      And really, why would you need six SATA 3 ports on a consumer board? HDDs are still struggling to hit SATA 1 limits in real-world scenarios; only SSDs need SATA 3, and I’ve yet to see someone connect six of them to an on-board controller. Anyone who would want that many will clearly be RAIDing them with a dedicated card anyway. There’s more argument for additional USB 3 ports, but it’s still only the top 1 or 2 percent of users who need them all.

      • Duck
      • 7 years ago

      In the case of some interfaces like USB, I can imagine the cost of transitioning 100% of ports to the latest, highest-speed spec will be higher in terms of power, transistor budget, testing, and validation. But the cost of the PCB may be higher, too. Supporting a higher-speed bus takes more care and attention with the PCB layout. Running several of them side by side on the same PCB may either not be possible or take up too much PCB space (or too many layers).

      As an aside, if you only have USB3 ports and they require a driver to be installed in Windows before it becomes recognised and usable, can you install Windows using a keyboard and mouse connected to these USB3 ports?

        • Deanjo
        • 7 years ago

        “As an aside, if you only have USB3 ports and they require a driver to be installed in Windows before it becomes recognised and usable,”

        Not with a proper UEFI implementation. A proper UEFI implementation can offer legacy mode to USB 3-unaware OSes.

      • Bensam123
      • 7 years ago

      I wonder… Intel chipsets still have USB 2 ports… Considering Intel used to immediately jump on new peripheral tech as soon as it came out…

      It may have something to do with ulterior motives involving Thunderbolt, but Intel would never do that.

    • grantmeaname
    • 7 years ago

    Fudzilla, in the shortbread today, noted a 25% drop in TDP and a 50% drop in typical power use. For mobile solutions (QM77, IIRC), that’s 4W->3W for TDP and hopefully from 1W to 0.5W or something substantial like that in typical operation.
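The arithmetic behind those figures can be sketched as follows; note the 1W typical-power baseline is the commenter's estimate, not a confirmed spec:

```python
# Sketch of the claimed reductions: 25% lower TDP, 50% lower typical
# power. Baseline numbers come from the comment above and are estimates.
baseline_tdp_w = 4.0        # mobile platform hub TDP (e.g. QM77)
baseline_typical_w = 1.0    # typical operating power (a guess)

new_tdp_w = baseline_tdp_w * (1 - 0.25)          # 25% drop
new_typical_w = baseline_typical_w * (1 - 0.50)  # 50% drop
print(new_tdp_w, new_typical_w)  # 3.0 0.5
```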
