Intel plans to integrate Thunderbolt into future CPUs

The USB Type-C connector promises universal compatibility and frustration-free connections, but the dizzying array of protocols that can run over that physical interface can mean a tedious trudge through manufacturers' spec sheets just to figure out how fast one's USB device will run from a given Type-C port.

Intel's Thunderbolt 3 controllers have swept away some of this confusion by providing USB 3.1 Gen2 support along with Thunderbolt's swift 40 Gb/s transfer rate, ensuring that a USB Type-C connector hooked up to such a controller will Just Work™ with most any USB Type-C device. Thunderbolt 3 also lets PC makers implement the USB Power Delivery standard so that a USB Type-C port can charge a notebook PC's battery.

Partly because Thunderbolt 3 relies on a discrete controller chip at the moment, however, manufacturers have reserved the feature for premium notebooks and motherboards, if they implement it at all. This morning, Intel announced plans to spur adoption of Thunderbolt by integrating support for the protocol directly into its future CPUs and making the Thunderbolt protocol specification available under a "nonexclusive, royalty-free license" in 2018. That means third-party peripheral manufacturers can start developing their own Thunderbolt-compatible controllers, as well.

Intel's integration of Thunderbolt into the CPU should be an appealing move for notebook and tablet designers who might otherwise have balked at finding room for another discrete component in today's increasingly thin-and-light PCs. The technology could potentially do away with the need for proprietary docking connectors on devices like Microsoft's Surfaces, and it could also allow for blisteringly fast external storage and single-cable VR headsets. There's plenty to like about the idea of Thunderbolt everywhere.

Intel being Intel, however, we'd expect to see significant feature segmentation by price point with Thunderbolt-equipped silicon. Less-expensive PCs might get only the USB 3.1 side of the bargain (or even fewer features), while Core i5 and Core i7 CPUs might be the only ones with every feature of the Thunderbolt controller flipped on. We'll have to wait and see just how the company's more open approach to Thunderbolt is received by the PC industry and peripheral manufacturers as time goes on, but we're still hopeful that the technology will be the one to drive every USB Type-C port from here on out.

Comments closed
    • chuckula
    • 2 years ago

    Thunderbolt….. HO!!!!!!!!

    • tipoo
    • 2 years ago

    Waaaait, I wonder if this is why the current Surfaces don’t have Thunderbolt: Microsoft knew the next chip would have it at no added cost anyway…

    • Arbiter Odie
    • 2 years ago

    This is outstanding news! Thunderbolt as a protocol is so full of potential, and has thus far been shamefully wasted because of the $$$ barrier to entry for motherboard makers. Now some of the regular third-party actors (ASMedia, most probably) can get in on the game and start bringing Thunderbolt (Light Peak, for those who remember) to the masses.

    • NTMBK
    • 2 years ago

    So will Apple be able to integrate their own Thunderbolt controller in their ARM Macbooks?

      • blastdoor
      • 2 years ago

      I think the dream (or at least my dream) of ARM Macs is dead.

      But conditional on false, all is true, so I think the answer to your question is “yes”.

        • Takeshi7
        • 2 years ago

        Apple should go back to PowerPC. Or full blown POWER.

          • sophisticles
          • 2 years ago

          If Apple went to RISC-V I would buy an Apple in a heartbeat, which is something I never thought I would say.

          Same thing with ARM. I have said repeatedly that I wish AMD would ditch the x86 ISA and switch completely to ARM or RISC-V; I would jump up and down for joy.

          • tipoo
          • 2 years ago

          A Mac Pro humming along with a pair of POWER8s/POWER9s would have been insanity. But it would probably also have more instructions per clock available in hardware than the vast majority of consumer code could hope to utilize, though they do have 8-way SMT to address this. Then again, that many threads is also more than most consumer code could hope to utilize.

            • blastdoor
            • 2 years ago

            AnandTech’s review of the POWER8 did not make it look so impressive compared to Xeons. Unless I am misreading their article, POWER8 does OK (but just OK) in performance, but it sucks in performance/watt. If anything, it validates Apple’s decision to switch to Intel.

            • the
            • 2 years ago

            I beg to differ. POWER8 was already two years old at the time of those reviews, and it was compared against freshly released Xeons. I’d expect POWER9 to give Skylake-EP/EX a beating in terms of performance later this year, but Intel will chip away at that with new Xeons in 2019 and 2020 before POWER10 comes along. Basically, IBM needs to pick up the pace of their product launches.

            I also suspect that the performance/watt figures will change radically with POWER9, as there will be a model without the memory buffers. As noted in the AnandTech article, those have a very negative effect on the performance/watt calculation that couldn’t be avoided. (Though it should be pointed out that Xeon E7s also use similar memory buffers, giving them a similar disadvantage.)

            • chuckula
            • 2 years ago

            There’s a strong contingent of people around here who get violently angry when you even mention the idea of using six-year-old AVX instructions to boost the performance of poorly written software on consumer-grade CPUs.

            If you can’t even do that, then there’s no way software is going to magically rewrite itself for a super wide execution core beyond the software that already does well on that type of core in the first place. On top of that, it’s pretty clear that POWER gets its name from POWER consumption these days, while Intel bends over backwards to target efficiency to the point of intentionally leaving performance on the table if the performance to power ratio isn’t good enough.
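
            A minimal sketch of the kind of AVX use being described, assuming an AVX-capable CPU and a compiler flag like -mavx (illustrative only, not from any shipping codebase): it sums an array eight floats at a time instead of one.

                /* Sum floats eight at a time with AVX; a scalar loop handles the tail. */
                #include <immintrin.h>
                #include <stddef.h>
                #include <stdio.h>

                static float sum_avx(const float *a, size_t n)
                {
                    __m256 acc = _mm256_setzero_ps();
                    size_t i = 0;
                    for (; i + 8 <= n; i += 8)              /* 8 floats per iteration */
                        acc = _mm256_add_ps(acc, _mm256_loadu_ps(a + i));

                    float lanes[8], sum = 0.0f;
                    _mm256_storeu_ps(lanes, acc);           /* reduce the 8 lanes */
                    for (int j = 0; j < 8; j++)
                        sum += lanes[j];
                    for (; i < n; i++)                      /* scalar tail */
                        sum += a[i];
                    return sum;
                }

                int main(void)
                {
                    float data[100];
                    for (int i = 0; i < 100; i++)
                        data[i] = 1.0f;
                    printf("%f\n", sum_avx(data, 100));     /* prints 100.000000 */
                    return 0;
                }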

            • blastdoor
            • 2 years ago

            Sure… if IBM introduced better products faster, then they would fare better in these comparisons. Or to put it more succinctly, if their product was better then it would be better.

            • the
            • 2 years ago

            It was better when it was released than the comparable Xeon v2s. Intel just didn’t sit around, though; they worked on erasing that lead.

            So yeah, POWER8 is dated but it is still performance competitive today and POWER9 is arriving later this year.

            • tipoo
            • 2 years ago

            Some key things from the AT part 2 review:
            “Wrapping things up, let’s first look at the POWER8 from a microarchitectural point of view. The midrange POWER8 is able to offer significantly higher performance than the best midrange contemporary Xeon “Haswell-EP” E5s. Even the performance per watt can be quite good, but only in the right conditions (high average CPU load, complex workloads).”

            “According to IBM, MariaDB and Postgres have been more optimized for the POWER8 than MySQL. In those cases, IBM claims up to 40% better performance than the Xeon E5-2690 v4.”

            “The bottom line: the IBM POWER8 LC servers can offer you better performance per dollar than a similar x86 server. But it’s not a guarantee; as a server buyer you have to do your research and check whether your application is among the POWER8 optimized ones, and what kind of CPU load profile your application has. The Intel Xeons, by comparison, require less research, and are much more “general purpose”.”

            This is what I was getting at: POWER8 was designed with an extremely wide execution window for a (might I add, three-year-old) processor, and not all code is able to take advantage of that width. I seem to recall most consumer code hits around 1.4 instructions per clock even on much wider processors, and very CPU-heavy code gets up to around 4; POWER8 can decode up to 8 instructions, issue 8, and execute up to 10 per clock.

            If you can take advantage of the entire wide engine, it can have better perf/watt than x86, but you have to know you can take advantage of it first.
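
            Rough arithmetic on those figures (my numbers, not AnandTech’s): code sustaining 1.4 instructions per clock on a core that can execute 10 per clock keeps only 14% of the execution width busy, and even CPU-heavy code at 4 IPC uses just 40%. That leftover width is what POWER8’s 8-way SMT exists to soak up, by letting up to eight threads share one core’s execution resources.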

            • blastdoor
            • 2 years ago

            So, to sum up: POWER is a niche product that is rarely updated. If it were updated more frequently and if programmers contorted themselves to take advantage of POWER’s unusual design, then POWER would fare better in comparisons against Intel. (And if my grandmother had wheels she’d be a wagon.)

            I think Apple was wise to dump IBM in the Mac.

            I think Apple would be even wiser to dump Intel in favor of their own designs, but as I said — that just doesn’t seem to be in the cards (though not for any technical or economic reason… Apple’s just missing an opportunity here)

            • tipoo
            • 2 years ago

            For sure they made the right choice switching to Intel. I’d just like to see the What-If machine version of the world where they didn’t, that’s all πŸ˜›

            There was also a time when Krazy Ken Kutaragi himself pitched the Cell processor to Steve Jobs, which was rightly rejected. It had a lot of problems and an explosion in code complexity (60 lines on a general purpose processor explode to 1200 on an SPU), but imagine how insane it would have seemed in 2005 to have that many CPU flops in a dual Cell Power Mac.

            • the
            • 2 years ago

            Cell and the Xbox 360’s chip were both pitched. The problem for both was low single-threaded performance on general code. Cell did have the benefit that a lot of its advantages could be hidden away by software libraries to provide quick benefits.

            Freescale’s designs were also considered but they passed on that. Apple did buy Freescale a few years later to get the design team behind their ARM efforts.

            This software work would have been done by Apple, something they were not keen on doing. (Ironically, they are doing some of that now, reportedly putting more functionality into macOS that uses ARM coprocessors.)

            • tipoo
            • 2 years ago

            I didn’t know the Xenon was pitched too. Source? For the Cell, the PS3’s lead architect himself pitched it:

            [url<]http://www.patentlyapple.com/patently-apple/2013/11/steve-jobs-rejected-the-cell-processor-in-2004-sony-finally-dumped-it-for-the-new-ps4.html[/url<]

            Later in the Cell’s life there were better toolkits and debuggers for the SPUs, but at launch all of that complexity was placed on the programmers. In fact, I think a game developer got fed up and made a debugger. Later on, Sony would have ICE create and share technologies to make it easier to use.

            Even with those toolkits, though, developing for the SPEs only gets easier; you still have to specifically target the SPEs, with the explicit nature and programming difficulty that entails.

            [url<]http://www.drdobbs.com/parallel/programming-the-cell-processor/197801624?pgno=3[/url<]

            • blastdoor
            • 2 years ago

            [quote<]Apple did buy Freescale a few years later to get the design team behind their ARM efforts.[/quote<]

            Say what now? Freescale was once part of Motorola, was spun off, and is now part of NXP.

            [url<]https://en.wikipedia.org/wiki/Freescale_Semiconductor[/url<]

            Perhaps you were thinking of P.A. Semi?

            • the
            • 2 years ago

            Yeah, I got P.A. Semi and Freescale backwards. One of those days.

            • the
            • 2 years ago

            Right when Apple was jumping off of PowerPC, they did some serious work in their kernel to boost thread scheduling. Most consumer code doesn’t directly spawn a high number of threads, but the libraries it calls can. OpenCL and Grand Central Dispatch are two key pieces of this picture.
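
            A minimal sketch of the Grand Central Dispatch side of that, assuming macOS and clang (illustrative only): the calling code never creates a thread; dispatch_apply fans the slices out across cores on its own, which is exactly the library-level parallelism described above.

                /* Parallel loop via libdispatch; GCD picks the worker thread count. */
                #include <dispatch/dispatch.h>
                #include <stdio.h>
                #include <stdlib.h>

                int main(void)
                {
                    double *sums = calloc(8, sizeof *sums);
                    dispatch_queue_t q =
                        dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

                    /* Run the 8 slices concurrently; each writes its own slot. */
                    dispatch_apply(8, q, ^(size_t slice) {
                        for (int i = 0; i < 1000000; i++)
                            sums[slice] += 1.0;
                    });

                    double total = 0;
                    for (int i = 0; i < 8; i++)
                        total += sums[i];
                    printf("total = %.0f\n", total);        /* prints 8000000 */
                    free(sums);
                    return 0;
                }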

        • tipoo
        • 2 years ago

        Do you mean because of that Pro user summit, where they said they were sticking with Intel?

        If I were Apple I probably wouldn’t want to let on that I was going to stab Intel in the eye yet, not until I could put out a 1 year disclosure like for Imagination Tech πŸ˜›

        The 12″ Macbook at least is a good inflection point. A9X and A10 are already competitive with Core M, A10X should be able to give it a good firm kick in the nuts, especially in the GPU department.

          • blastdoor
          • 2 years ago

          Yup — the Pro user summit is partly what I had in mind.

          I think going with their own SOC in the Mac makes a great deal of sense, but I have to admit that it’s not at all consistent with how they have been treating the Mac. They seem content to sell replacement Macs to the existing user base without much interest in expanding that base. They also seem to believe that the only reason anyone would buy a new Mac is that their old Mac wore out, so there’s no real need to be in a big hurry to improve the Mac in any way.

          I think that if they were more aggressive they could double their worldwide marketshare for Macs, but they just clearly aren’t thinking that way.

            • tipoo
            • 2 years ago

            “I think that if they were more aggressive they could double their worldwide marketshare for Macs, but they just clearly aren’t thinking that way.”

            They sure aren’t. The jump in Mac revenue directly corresponds with the 14% increase in average selling price, rather than the small 4% boost in unit sales with the redesign.

            They’ve never been about the race to the bottom, and that’s understandable, but I’m now worried they see the Mac as inelastic demand they can just charge ever more for. We’ll see if prices drop post-redesign, as has often happened.

            • blastdoor
            • 2 years ago

            I agree — I wouldn’t want them to go after marketshare by offering cheap products. For example, I think it would be a mistake to offer the Mac analog of the Amazon Fire tablet (low resolution screens, seriously under provisioned RAM and CPU, etc).

            But I do think they could increase their marketshare by offering a *somewhat* greater diversity of models and keeping them up to date. I know that some people will read that and say that I’m proposing a return to the days of a gazillion Performa models that offer no meaningful variation in features, but that’s not what I’m saying. I’m talking about substantive variation in models that allows the Mac to meet the needs of more customers. And it really doesn’t have to be too many more models. I’m wasting my breath, though… it’s not going to happen πŸ™

    • hungarianhc
    • 2 years ago

    Does this mean Thunderbolt 3 on AMD?

      • Jeff Kampman
      • 2 years ago

      It seems like AMD could design its own TB controller under the license terms Intel set out.

        • derFunkenstein
        • 2 years ago

        Could board makers have ever just used the Alpine Ridge controller on AMD boards? It seems like it could have provided differentiation for some upscale AM4 boards.

          • terranup16
          • 2 years ago

          Zero basis to this speculation whatsoever, but I’d suspect at a minimum AMD would need to do for that what it’s been doing with XMP memory modules and the like. With Intel supposedly opening this up now, though, AMD should be able to bake in native support, I’d think anyway.

            • derFunkenstein
            • 2 years ago

            Yeah, that’s what I’m trying to figure out. Is Alpine Ridge more than just a PCIe device that takes up a certain number of lanes? If not, then it seems feasible that Intel controllers could be on AMD boards but aren’t for some reason.

            • Vaughn
            • 2 years ago

            I saw someone post on another site that they don’t want Thunderbolt because it ties directly into the PCIe bus and is not secure.

            Thoughts?

            • DancinJack
            • 2 years ago

            What does that even mean? Not secure? With that logic, that person doesn’t want a single port on their computer.

            • UberGerbil
            • 2 years ago

            Well, PCIe devices are bus masters that are given direct access to the system memory space (DMA) independent of the CPU; this is what makes it possible to have a high-performance GPU on PCIe. Thunderbolt is PCIe exposed to the outside world, so anything connected to it has direct access to the contents of system memory. That’s a lot more, and lower-level, access than a USB device has. Of course, it’s generally true that if you don’t have physical security you don’t have security (if someone can sit down at your machine, you’re pwned regardless), and we generally trust the devices we attach to our systems, whether they’re going into the PCIe slots inside or hanging off a port. But it is true that a malevolent, or just misbehaving, Thunderbolt device has the potential to get up to more shenanigans than the external devices using most other common ports.

            • willmore
            • 2 years ago

            Intel did learn from the mistake of FireWire, whose controllers allowed any attached device to DMA any memory. TB devices can be isolated from the system by an IOMMU, if the system has one. If it doesn’t, there isn’t much you can do to protect yourself.

            I don’t know the details of it, so I cannot speculate on how this affects AMD’s CPUs and chipsets, but it could be a complicating factor.
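
            To make the IOMMU point concrete, here’s a toy model (my own sketch, nothing like real silicon or driver code): the IOMMU translates device-visible DMA addresses through a per-device window, and anything outside that window faults instead of reaching physical memory.

                /* Toy IOMMU: a device only gets the window the OS mapped for it. */
                #include <stdbool.h>
                #include <stdint.h>
                #include <stdio.h>

                struct iommu_mapping {
                    uint64_t dev_base;   /* device-visible (I/O virtual) base */
                    uint64_t phys_base;  /* physical address it maps to */
                    uint64_t length;     /* size of the granted window */
                };

                /* Translate a device DMA address; false means a DMA fault. */
                static bool iommu_translate(const struct iommu_mapping *m,
                                            uint64_t dev_addr, uint64_t *phys_out)
                {
                    if (dev_addr < m->dev_base || dev_addr >= m->dev_base + m->length)
                        return false;    /* outside the window: blocked */
                    *phys_out = m->phys_base + (dev_addr - m->dev_base);
                    return true;
                }

                int main(void)
                {
                    /* The OS grants this device a single 4 KiB buffer. */
                    struct iommu_mapping map = { 0x1000, 0x7f000000, 0x1000 };
                    uint64_t phys;

                    printf("in-window:     %s\n",
                           iommu_translate(&map, 0x1800, &phys) ? "allowed" : "blocked");
                    printf("out-of-window: %s\n",
                           iommu_translate(&map, 0xdeadbeef, &phys) ? "allowed" : "blocked");
                    /* Without an IOMMU there is no such check: a Thunderbolt or
                       FireWire device's DMA goes straight to physical memory. */
                    return 0;
                }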

            • DancinJack
            • 2 years ago

            I understand what you’re saying, but it just doesn’t really rate IMO. It’s definitely not a reason, IMO, to not want TB3 integrated into the CPU (the original statement from Vaughn).

            • willmore
            • 2 years ago

            If they put a TB controller on the CPU, they have to include an IOMMU, and Intel likes to use that as a product-segmentation feature. Since it would be irresponsible to ship a TB controller without an IOMMU, TB would be conditional on the IOMMU. So if they continue to segment by IOMMU presence, TB goes along for the ride.

            It’s a completely reasonable speculation based on Intel’s past behavior. I’m not suggesting it’s going to happen for that reason, but I would be surprised if i3 and lower chips ship with fully functional TB.

            • DancinJack
            • 2 years ago

            Makes sense. Although it’s not like lower end boards include TB as it is right now (thinking about pairing them with lower end CPUs). I still don’t see much downside to putting TB on the CPU.

            • willmore
            • 2 years ago

            I don’t know, but remember the days when Intel would only sell certain chips as part of chipset bundles? You couldn’t put certain chips on some boards because Intel wouldn’t sell you them without selling you a bunch of other parts in the same bundle. Well, you *could*, but you’d waste the other chips.

          • the
          • 2 years ago

          I think such products would wait until Raven Ridge arrives, due to its integrated GPU. Half the versatility of Thunderbolt is that it can encapsulate DisplayPort alongside PCIe data, so it can be used for both simultaneously. Items like LG’s 5K display wouldn’t function with a pure-data Thunderbolt 3 implementation.

          I’d also fathom that Alpine Ridge will be replaced by the end of the year with a new controller that’ll work with DP 1.3. Alpine Ridge has more IO bandwidth than DP 1.3, but the controller accepts dual DP 1.2 inputs on the motherboard side. The result is that items like LG’s 5K display are logically two displays being fed by two DP 1.2 streams muxed over a single cable. Intel’s next controller should simplify this by permitting a single DP 1.3 stream to handle such high resolutions.
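
          The arithmetic behind that (back-of-the-envelope, assuming 24-bit color): 5120×2880 at 60 Hz works out to 5120 × 2880 × 60 × 24 ≈ 21.2 Gb/s of pixel data, while a four-lane DP 1.2 link carries about 17.28 Gb/s after 8b/10b encoding; hence the two muxed streams. A single DP 1.3 link’s roughly 25.9 Gb/s would carry the display on its own.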

    • willmore
    • 2 years ago

    I hope this just means that the controller part of TB will be on the CPU (or on a companion chip on the same module) and not the actual PHY for it. I don’t want an external jack that has a direct electrical link to my CPU.

    Hey, what could go wrong? You think BadUSB was a problem….

    • nico1982
    • 2 years ago

    I’m still waiting for an external GPU docking station free of the early-adopter tax. If this speeds up the adoption of “USB-C” and TB peripherals, it’s a good thing.

    I would still prefer a proper USB-IF spec for a PCIe alternate mode. Maybe USB 4.0 will add that.

    • anotherengineer
    • 2 years ago

    I go to work overseas, and come back and Just Workβ„’ is trademarked………..sigh……….

    I will have to let everyone know back in the Govi next week πŸ˜‰

      • morphine
      • 2 years ago

      You’ll hear from our legal department soon.

      • CScottG
      • 2 years ago

      Yeah, another stupid trademark that will be litigated into the insignificance it deserves (..like “Monster”).

    • JosiahBradley
    • 2 years ago

    Good, once they integrate Thunderbolt into CPUs, they can force you to upgrade to Coffee Lake (or Lock-In Lake) to prevent you from ever running it on unsupported, unlocked SKUs. Viva la openness!

    See: Optane.

    Edit: wow you guys are in a mood this morning.

      • chuckula
      • 2 years ago

      Did you bother to read the article?

        • JosiahBradley
        • 2 years ago

        Sadly I’m blinded by hatred.

          • Shobai
          • 2 years ago

          [you forgot some sort of petulant reference to beating ‘the usual fanboi crowd’ to the punch, or similar]

      • Arbiter Odie
      • 2 years ago

      “This morning, Intel announced plans to spur adoption of Thunderbolt by integrating support for the protocol directly into its future CPUs and making the Thunderbolt protocol specification available under a “nonexclusive, royalty-free license” in 2018.

      That means third-party peripheral manufacturers can start developing their own Thunderbolt-compatible controllers, as well.”

      Probably don’t need to worry, as per the article πŸ˜›

      • DancinJack
      • 2 years ago

      Hi I’m Josiah, I don’t read actual articles, but I speculate that Intel will lock this down because hrmph!

      • Ninjitsu
      • 2 years ago

      it’s funny, i came to the comments section expecting a post like this from someone. wasn’t disappointed πŸ˜›

        • JosiahBradley
        • 2 years ago

        At least I’ll show my true colors. I just don’t buy Intel not playing lock-in. They are masters of product divergence.

    • chuckula
    • 2 years ago

    Not interesting: existing Thunderbolt in a CPU [although the open and royalty-free licensing part is interesting from a market perspective, if not a technical one].

    Interesting: Silicon Photonics integrated in a CPU.

      • tipoo
      • 2 years ago

      That’s how the Photino Birds win.
