Analyst expects Apple to start transition away from Intel by 2017

No question about it, Apple’s homebrewed processors for the iPhone and iPad are getting mighty fast. According to an analyst quoted by AppleInsider, those chips will get fast enough to power Macs “within 1-2 years,” at which point Apple could begin to transition away from Intel.

The analyst, Ming-Chi Kuo of KGI Securities, believes the performance of Apple’s chips will land “somewhere between Intel’s Atom and Core i3 lines” in that time frame. That seems like a conservative estimate, since Apple’s current-gen A8 is already fast enough to beat Bay Trail—and the A8X is even faster. I’m sure Apple will have silicon powerful enough to drive new MacBooks and iMacs by 2017.

Basing those systems on homebrewed silicon would give Apple greater control over “launch timing of the Mac line,” Kuo believes, since the company would no longer be tied to Intel’s product cycles.

Of course, transitioning away from Intel would involve some compatibility headaches. (Remember, Apple’s A-series chips are based on ARM’s instruction sets, not x86.) The move may also make it more difficult to dual-boot Windows and run Windows games, which would be an obstacle for users who want the best of both worlds. Apple is becoming a force to be reckoned with in the semiconductor industry, though, so there could well be some upsides for users on the performance and battery life fronts.

Comments closed
    • sschaem
    • 5 years ago

    Do we have apples-to-apples numbers for the performance delta? (productivity apps)

    I have a feeling that it’s going to take a lot of effort for an A9 to run Xcode decently, for example.
    From what I glanced at, the A8X at 1.5GHz is half the speed of a 1GHz dual-core Core M…
    Not encouraging when your low end has to match 3+GHz chips with 4 cores.
    And by 2017 we’ll have Skylake, a whole new ballgame.

    So ARM running x86 code in emulation would have a hard time vs. native 14nm Skylake. A very hard time.

    This also means a complete move from x86 to ARM on the desktop… and the gain? Slower performance, and no more access to Intel’s state-of-the-art fabs.

    Even if their CPUs were superior (and they are far from it), it doesn’t seem worth it for Apple.
    They can’t use the same SoC from a phone in a Mac Pro; it will always be way too slow.
    Server/workstation-class CPUs are another world…
    So they would need to design a special ‘SoC’ for the Mac Pro, and a special ‘SoC’ for the laptop line, and another (that they have now) for the mobile space.

    And still end up with slower performance and a complete loss of high-performance x86 support.

    • d0g_p00p
    • 5 years ago

    Not surprising at all. I think everyone knew Apple would at some point move their entire lineup to ARM and iOS. One CPU and one OS for all their devices and computer lineup. I know Apple must hate being attached to Intel and not having everything their way (the way they like it).

    However, as much as I dislike Apple, I’ll admit that they know how to handle switching to different architectures pretty damn well. 68K to PowerPC to x86, and they pulled it off pretty impressively.

    It will be interesting to see the RDF on CPU performance of Intel x86 vs Apple ARM. Anyone remember the P3 vs G4 “benchmarks” on Apple’s site that showed the G4 being like 4 times as fast or some ridiculous claim like that?

    Last thing: I’m pretty sure they will have to add native virtualization to the OS or a combo hardware/software-based hypervisor to keep x86 compatibility. I’m actually looking forward to seeing how Apple handles this, if they even bother.

    • Bensam123
    • 5 years ago

    Yeah if Intel doesn’t stop their sidequest of ‘glory to the mobile’ and they move on to speed gains again and AMD stops playing catchup, sure.

      • chuckula
      • 5 years ago

      [quote<]Yup... and kicks a bunch of smaller manufacturers out the door, rings in consumers and forces them into a structure in which they have to purchase pre-rated chips (overclocking headroom disappears) and certain combinations, and allows Intel complete control over the motherboard. I'm sure the socketed models will be limited to socket 2013 or whatever their next ridiculously high end 'enthusiast' product is. [blah blah blah anti-Intel drivel that turned out to be completely wrong and deliciously ironic given that AMD's BGA-only non-upgradeable Carrizo is apparently there to "kick a bunch of smaller manufacturers out the door, ring in consumers and force them into a structure in which they have to purchase pre-rated chips (overclocking headroom disappears) and certain combinations," [/quote<] -- [url=https://techreport.com/news/24191/trusted-source-confirms-soldered-on-broadwell-cpus?post=700884<]Bensam123 in 2013[/url<]

      Your track record at making predictions may need a little improvement.

      I'd be REAL careful when making fact-free insults directed at Intel especially when you are insulting Intel for focusing on mobile chips... and the whole point of the article is that Apple would be using ARM in mobile where Intel has apparently focused its major improvements.

      Wouldn't want another Carrizo vs. Broadwell-K misfire... would we?

    • albundy
    • 5 years ago

    That analyst is as useful as the others. Who in their right mind would believe this nonsense? If it is true, then this is the first analyst to ever get something right!

    • Ninjitsu
    • 5 years ago

    Nah, I don’t think it’ll happen by 2017. Far too soon. It’ll happen when Intel itself decides to transition away from x86 (breaking compatibility anyway), or Apple makes better performing [i<]and[/i<] more efficient chips, which will remain cost effective [i<]even after[/i<] including R&D costs. They also have to subsequently keep pace with Intel. This is assuming other foundries can keep up with Intel in the first place.

    I don't see enough of a motivation for it, except maybe having too much cash. And Intel could still just give them a larger discount and not be bothered. It just seems too large an investment for Apple and a mild inconvenience for Intel, given that Macs are about 10% of the PC market.

    Maybe if Apple buys AMD (and gets the x86 license), they'll pull the plug on Intel-bought chips 2 years after this imaginary acquisition.

    • Milo Burke
    • 5 years ago

    How is the power consumption on Apple chips? If performance is creeping up on Broadwell, is their power consumption as low?

    It would be a bum move for Apple to switch to CPUs that are much less efficient with battery use. Not that I mind personally. Actually, I suppose I would enjoy it…

      • chuckula
      • 5 years ago

      [quote<]If performance is creeping up on Broadwell, is their power consumption as low? [/quote<]

      From my own benchmarks, performance is 'creeping up' on about 50% of the Core-m flavor of Broadwell. That's about a best-case scenario for non-intensive processing, BTW; at number crunching I've benchmarked my Core-m at about 8x faster than the A8X in Linpack -- and the A8X was using all three of its cores.

      Power consumption: the Core-m and the A8X are similar in absolute power consumption numbers where it counts, giving the Core-m a major win at performance per watt.

      As a platform the A8X has some advantage simply because Core-m is a full-bore PC processor with full I/O, runs much more memory (8GB in my case) and includes real I/O including SSD, PCIe, ethernet, etc. Those take up some power that Apple doesn't have to expend simply because Apple doesn't have them. However, when you do an apples to apples comparison (pun intended), the processing cores in the Core-m are massively more efficient at getting a certain amount of work done using a given amount of energy.

    • End User
    • 5 years ago

    It is in Apple’s best interest to replace x86 with their own tech. Over the past 4 quarters Apple has delivered roughly 237 million SoC units. Adding another 20 million desktop parts per year should not be a problem.

    The A8X is a very powerful triple core SoC and it is hamstrung by being a mobile part. I’m sure Apple’s labs are testing ARM tech unfettered by mobile constraints.

    As far as OS X is concerned, we know that its core features are already running on ARM, as iOS was developed from OS X. I would be dumbfounded if Apple were not already running ARM builds of OS X.

    Apple is poised to be the first major computer company to push ARM into the desktop space.

    Edit: Clarified wording

      • chuckula
      • 5 years ago

      [quote<]It is in Apple's best interest to replace x86 with there own tech. Just look at how well it worked for them in mobile.[/quote<] Yeah, could you post the article where Apple announced it was replacing its x86 smartphone chips with ARM chips again? I must have missed it.

        • End User
        • 5 years ago

        Hah. Poorly worded on my part.

      • mganai
      • 5 years ago

      Applying mobile thinking to desktop/laptop thinking. Riiiiiiight.

      Get back to me when ARM catches up to x86.

        • End User
        • 5 years ago

        [quote<]Get back to me when ARM catches up to x86.[/quote<] Don't worry. I will.

    • Shouefref
    • 5 years ago

    Fast chips? Come on, they compare them with the slower type of chips. They aren’t fast at all.

    • fellix
    • 5 years ago

    The main reason Apple moved away from IBM a decade ago was the unwillingness of Big Blue to invest in a more power-efficient architecture and a flexible product range for the needs of the then-booming mobile market, and the general mood at IBM to further disengage itself from the consumer market. This is in sharp contrast to the Intel strategy we see today, and as long as Intel is providing the needed technology at the right price to Apple, there would be no incentive to rock the boat. Unless other incentives come into play.

    • Zizy
    • 5 years ago

    I don’t think Apple is going to migrate from x86 to ARM on OSX. Why would they? How could they, when ARM is slower?
    Nah, they will simply introduce iOS (ARM) products with increasing overlap, dropping x86 versions if ARM sells better.

      • HisDivineOrder
      • 5 years ago

      Apple has proven time and again that what people want is the perception of efficiency rather than outright speed. There is a certain level of speed required in order to allow for the perception of fluidity and then the rest is just gravy with diminishing returns.

      If Apple can get to that point with an ARM-based architecture (and in a couple of years they might well be there with ARM), it’s not hard to imagine they might endure such a transition if it enabled a thinner, lighter, smaller type of computer.

      That said, I think by then Intel will be offering a fantastic deal that’ll make the whole effort moot.

      I do think that–like the way Apple had an x86 port of their OS for a long time before they made the switch–Apple has a version of OSX being tailored to fit ARM already and it might even be ready for primetime in a couple years.

      Or more. Depends on Intel. Every year or so, Intel comes back to the table, reads these rumors, watches the panic from (Intel) investors that Apple might leave, and then they make “the right choice.” And offer Apple amazing prices on CPUs. Again.

      And everyone leaves happy.

    • Vergil
    • 5 years ago

    AMD’s K12 ARM 64-bit SoC is launching sometime during the first half of 2016. Supposedly it’ll have server and high-end desktop variants. Apple could very well use the K12 for their MacBook Pros and iMacs, as well as their iCloud servers, and use their own ARM SoC for the iPhone, iPad and Air.
    It’s going to benefit Apple immensely on all fronts: power consumption, performance, cost efficiency, and a more homogeneous ecosystem across their multiple devices, including their upcoming iWatch.

    Brace yourselves, ARM World Order incoming XDD

      • Beelzebubba9
      • 5 years ago

      …why would Apple use an untested, unreleased, AMD ARM core when by most metrics they already have the best one on the market?

        • _ppi
        • 5 years ago

        Because unlike AMD, Apple (and anyone else in the ARM camp, for that matter) has no experience with high-performance chips whatsoever. I hope you do not really think it is as easy as overclocking an A8X to 3+ GHz and adding 5 additional cores to the die.

        Look how long it took Intel to make a serious contender for tablet SoCs – like 3-4 generations. What makes you think Apple would be able to make the reverse move any faster?

        In addition, Apple can wait and see if AMD succeeds with K12 and then decide accordingly. I am doubtful AMD can do any worse than now, but then they are leaking R&D staff at an unbelievable rate, so tough call.

    • HERETIC
    • 5 years ago

    Guys, guys, guys: over 100 comments here, mostly about CPUs and GPUs.
    You have to think like the evil one…..
    A 12″ tablet is expected sometime this year. With a very well-designed dock it can be the
    testing ground for pushing Air users over to this platform, then making the Air obsolete….
    We can then make more money from the cloud $$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$

    • Klimax
    • 5 years ago

    If they want to see how quickly a company can be gone from certain markets, they can try. Losing performance (forget the tablet version, you need to go a bit higher…), losing apps, and trying to compete with everybody else using better CPUs out there…

    • ronch
    • 5 years ago

    [quote<]launch timing of the Mac line[/quote<]

    Is it really worth it in exchange for:

    1. Software incompatibility between the new machines and current software. Big headache for most Apple users, I would think.

    2. Time to market. Apple may have control over the circuit design process but it doesn't have its own fabs. They rely on TSMC and other foundries, most of which don't meet their deadlines. Giving Apple first dibs on a foundry's new node doesn't help much either when you consider how these foundries miss their dates by about a year or so, with nodes sometimes being canceled altogether. Given these, I don't know how this move will ultimately allow them to put products to market faster. Sounds like a move some AMD eggheads would do. Intel, OTOH, pretty much meets their deadlines time and time again, both for chip designs and process nodes, a feat AFAICT no other foundry has been able to match. With Intel's relentless tick-tock, at least Apple has some sort of reliable schedule to synchronize their own schedules with.

    3. Performance. Intel has one of the industry's highest IPC figures in its high-end cores. To match that performance, Apple needs to create cores with similar IPC and clock speed (not to mention energy efficiency), but seeing as the next nodes from the usual foundries won't be built for high-speed chips and will instead focus on energy efficiency optimized for mobile, Apple may well fail to meet clock speed targets, so they really need to compensate with IPC. Can they do it? Easier said than done. And even if they do, what benefit would it bring the common end user who doesn't give a crap about the ISA used?

    4. Costs. While it's well known that Intel makes a tidy profit from its chips, what makes it possible to price their chips the way they're priced is Intel's economies of scale. Intel serves many other OEMs and DIYers with boxed CPUs through the retail channel. Apple will be the only ones using their own chips. Will it be worth it? The cost of designing the A8 may well be below the cost of designing chips that can rival Intel's, so saying that the A8 is proof that it could be done seems a little hasty.

    Overall, this sounds like Apple shooting themselves in the foot. But of course we'll only be able to confirm that maybe 5 years from now. Everything involving chips and things takes ~5 years to know if you made a bad move, right?

      • the
      • 5 years ago

      1) They’ve done it before…. twice. The PowerPC to Intel transition took 3 years from the first Intel Mac shipping to when they introduced an Intel-only version of OS X.

      2) Time to market compared to what? They would be free of Intel’s roadmap constraints but they’re not pressed to release systems before anyone else in the desktop space. They literally skipped an entire processor generation (Sandy Bridge-e) between the 2010 Mac Pro and the 2013 Mac Pro.

      3) Apple has done some impressive work in mobile in terms of single threaded and energy efficiency. The entire iPad, including the A8X, screen, wireless, etc. consumes 10W under high load.

      4) Cost is the reason why Apple would look at migrating Macs to ARM. They wouldn’t need to rely on extra chips for the little things they do in their systems. Take PCIe storage for example: Apple has had it for years now, while it has only just become integrated into PC chipsets. Apple would have been able to directly integrate the PCIe storage into the SoC, saving space on the motherboard and the cost of an extra chip, reducing power consumption, and it likely would have been faster.

      There is a cost Apple would have to pay: internal development. Doing it yourself is difficult for most companies, as the barriers to entry are quite high for silicon development. However, Apple has already paid that cost and has the engineering talent in-house. So the hurdles are a little smaller for Apple, never mind that they’re also swimming around in cash reserves.

      5) The main thing Apple has done to shoot themselves in the foot has been their insistence on form over functionality. Compare the 2010 Mac Pro to the 2013 model to see exactly what I mean by that. For example, owners of the 2010 model can plug in a new PCIe video card when it arrives, whereas the 2013 models have a nice paperweight if they want to increase their GPU speeds.

      Apple’s race to the thinnest device ever really isn’t doing much to actually improve their devices. Example: recent iMacs. Who really cares if Apple shaved off nearly an inch in the system’s depth on a desktop machine?

        • ronch
        • 5 years ago

        1. How many times do they have to switch ISAs?

        2. So going with Intel’s schedule is worse than going with TSMC’s or GF’s schedules? Isn’t Intel one of the most consistent when it comes to execution?

        3. Expectations from mobile chips are quite different from expectations from workstations and such.

        4. At the prices Apple products are selling for, I think the issue of placing the peripheral controllers on-die vs. on a separate chipset would be the least of their concerns. With Intel, all the validation is done for them. My point is, which is actually cheaper? Paying Intel for chipsets for every computer you sell, or designing all the controllers, links, etc. and validating them yourself for use with only the computers you sell? Intel has economies of scale. Apple does too, but nowhere near Intel’s volume.

          • Deanjo
          • 5 years ago

          One thing that you seem to be forgetting, ronch, is that Apple has no fear of dropping support for older architectures. Backwards compatibility with existing older software is not a big concern for them, and with the majority of software for Macs being sold through the Mac App Store, it is easier for the end user, developer, and Apple to transition the software from one ISA to another.

          As far as Apple buying AMD goes, there is no upside for Apple to do so. Apple is cozy with Intel, Apple is willing to invest in developing their own solutions, and with AMD hurting so badly Apple could simply license any IP needed from AMD for a song (this is why nobody is scrambling to purchase AMD: why buy the cow when the milk is next to free?). If AMD still had their fabs, they would be more enticing to buy out, but they really don’t bring much to the table for anyone.

          • the
          • 5 years ago

          1) On the Mac side, they don’t have to as Intel is providing good competitive products. Rather the migration to ARM would be a cost saving measure that’d also enable better hardware integration.

          2) Intel’s roadmaps have had some terrible delays lately. Broadwell is at least 6 months late. The TSX bug has impacted the high-end server market. Several Atom-based SoCs are also slightly behind schedule.

          There have also been some notorious bugs creeping up that Apple likely would have wanted to avoid. I already mentioned the TSX bug. The Xeon 5500 series chipset has a rare but nasty virtualization bug. The C606 has a bug that prevented the usage of 6 Gbit SAS (but it works at 3 Gbit). Due to this bug, SAS was also removed entirely from the X79 chipset prior to launch.

          As for dealing with foundry schedules, Apple isn’t dependent upon any particular one. The A8 and A8X are being split across two manufacturers: Samsung and TSMC. They could potentially add GF if they felt like it. So whoever gets to a new node first will get Apple’s business. There is also a fourth option: Intel themselves. They’ve been slowly opening themselves up to being a 3rd party foundry and are [url=https://techreport.com/news/25577/altera-partners-with-intel-to-build-64-bit-arm-chips-with-14-nm-transistors<]currently building some ARM based chips.[/url<] Apple can wave enough cash in front of them to fab their ARM-based SoCs.

          The real benefit of making your own timetable is that they can move to 9 or 6 month product refreshes. Currently Intel's tick-tock strategy is a yearly update with a shrink every other year. Apple obviously wouldn't be expecting a shrink every other cycle with such an accelerated timetable, but they can still offer continual improvements to their product lines. It makes them far more agile.

          3) The performance expectation is indeed higher for workstations, but Apple has been slowly working toward a more heterogeneous compute model. Take the Mac Pro for example: one of the GPUs is dedicated to GPGPU software acceleration while the other handles actual display graphics. On the CPU side, Apple has traditionally pushed parallelism, and designing their own SoC would enable them to scale to higher core counts (and at far, far lower cost than Intel's Xeons).

          4) It is not just Intel they're paying for components. Their PCIe-based SSDs needed a proprietary chip to help drive them. Thunderbolt is still a separate component. The Mac Pro needs a PCIe bridge chip as there are not enough lanes for everything. Designing chips yourself enables you to get exactly what you want in a chipset without having to add additional 3rd party components to supplement the design. There is indeed a high initial cost for validation, but much has already been spent in this area validating their mobile SoCs. Long term it would be advantageous, especially if what Apple wants in a chipset is vastly different than what Intel is going to offer.

    • Peter.Parker
    • 5 years ago

    Ok, so they might have CPUs fast enough for the MBA and iMacs. But the MBP and the cute new Mac Pro need a much more powerful CPU. Of course, they can’t (or SHAN’T) have two versions of OS X, one on x86 and the other on ARM. It doesn’t make any sense to split OS X in twain for the sake of controlling the CPU production (assuming they can actually control it, since it depends on suppliers and all their variables).

      • ace24
      • 5 years ago

      It’d hardly be unprecedented… they’ve shifted Macs from 68k -> PPC and PPC -> x86. The first shift was ~20 years ago, the last shift was ~8 years ago. It wouldn’t be hard to believe they’d switch again in 2 years; it would be approximately 1 architecture change per decade.

    • Pwnstar
    • 5 years ago

    I see this declaration every year.

    The news might as well just say “Analyst pulling stuff out of his butt”.

    • Billstevens
    • 5 years ago

    woopied fucking doo. In 1-2 years laptop class chips will be a lot faster too. I wonder which people will prefer for compute power… There is a reason Apple started using Intel Chips in the first place for laptops and desktops….

    • Krogoth
    • 5 years ago

    Not going to happen.

    Apple isn’t willing to commit the capital and time involved with semiconductors, especially when silicon is about to run out of steam. A number of veterans in the semiconductor industry are quietly exiting the market. Only the big boys have the capital to continue against the laws of diminishing returns and physics, but at some point soon the easy ride is going to end.

    Intel is the undisputed champion on the manufacturing side. ARM, AMD and Samsung have only managed to carve out their own niches, but Intel is hot on their trail with its own ULV, embedded and SoC designs.

    Protip: Apple’s own “designs” are just modified ARM chips. Samsung handles the manufacturing.

    What the analyst should be saying is that Apple is going to continue licensing designs from ARM and there’s a good chance that ARM designs will replace Apple’s “Intel based” offerings. I don’t think it will happen, since Intel is catching up in the ULV and embedded world.

      • Pwnstar
      • 5 years ago

      Apple might not be willing, given the diminishing returns, but they could easily afford it. They could throw 10 billion at TSMC and think nothing of it.

        • Krogoth
        • 5 years ago

        It doesn’t make fiscal sense.

        Semiconductor manufacturing tech requires a ton of capital and years to pay off the original investment. You also have to keep committing tons of capital to retooling and upgrading your fabs for the next node.

        Apple is going to stick with custom-designing “licensed” designs and leave the manufacturing to third-parties.

      • adisor19
      • 5 years ago

      “Protip: Apple’s own “designs” are just modified ARM chips. Samsung handles the manufacturing.”

      How about a reality check: the A6, A7 and A8 are custom FROM THE GROUND UP designs that have nothing to do with any of ARM’s licensed cores. Apple invested billions of $ to buy out companies and brains to make this happen. They first dipped their toes into custom hand-optimized CPU mask design with the A4, and due to time constraints they just used an off-the-shelf part for the A5. However, the A6 and more recent CPUs are custom designs with no input from ARM.

      Adi

        • DancinJack
        • 5 years ago

        I’m not sure I’d go so far as to say no input from ARM, but you’re mostly correct as far as I know.

        • Krogoth
        • 5 years ago

        They have components that are derived from ARM designs (for software compatibility) which Apple got permission to do through licensing.

        It is just like how AMD x86 chips share a number of similarities with Intel x86 chips for software compatibility, via a licensing agreement made back in the 1990s.

        Apple didn’t create an architecture and design from ground up.

          • NTMBK
          • 5 years ago

          No, really. Apple created an architecture and design from the ground up, using the talent they acquired from PA Semi and Intrinsity.

          EDIT: Lol, downvoted for speaking the truth? Fun. Apple’s Swift and Cyclone bear as much resemblance to an off the shelf ARM core as Haswell does to Jaguar. It’s a totally different (and much wider) design.

          • Deanjo
          • 5 years ago

          [quote<]They have components that are derived from ARM designs (for software compatibility) which Apple got permission to do through licensing. [/quote<]

          From the A6 on, Apple designed their own CPUs; they have an architectural license, not a core license. Only the A5X and older were using a core license.

        • chuckula
        • 5 years ago

        Are those chips “customised”: Sure.
        Are they “from the ground up?”: LMFAO, not even close.

      • the
      • 5 years ago

      Apple ‘modifies’ ARM chips like AMD has ‘modified’ an Intel design. Apple designed their CPU cores from scratch and only shares the common instruction set with other ARM chips.

      And this is exactly why it’d pay off for Apple to invest more in their own designs. If Moore’s Law hits a brick wall and explodes, the only way to go faster is to invest in accelerators and the software stacks that utilize them. Apple is a very, very vertical company, and any sort of accelerator they’d put into their designs will be used in short order.

      As for Intel’s manufacturing lead, they’re attempting to become a foundry and have opened up to 3rd party designs on a limited basis. Case in point: there are several FPGA products coming out of Intel’s fabs with ARM cores in them.

      Edit: typo fixo, thanks erno.

        • ermo
        • 5 years ago

        “(…) will be used in [b<]sort order[/b<]" I know you meant "s[u<]h[/u<]ort order", but "sort order" does have a nice ring to it too. Not sure what to sort on besides the names of said tech, though...

      • HisDivineOrder
      • 5 years ago

      The advantage for Apple would be to get the GPU patents and engineers, then build themselves their own variant on Imagination’s PowerVR with some ATI sauce built-in.

      This would not be terribly unlike their modifications to the ARM architecture they do routinely, but it would enable them to distance themselves from outright reliance on Imagination. If Apple has proven anything, it’s that they get better pricing when they have alternative sources of parts, which is why them investing in their own CPU development team is likely as much about getting better pricing from Intel on chips for their PC parts or Samsung on fab or Foxconn on device manufacturing as it is about the joys of making their own CPU.

      I can’t imagine they don’t want to do the same with the GPU in the long run.

        • NTMBK
        • 5 years ago

        Apple don’t modify the ARM architecture; they completely replace it with their own CPU design.

          • Ninjitsu
          • 5 years ago

          I don’t know where you’re reading all this. It’s an ARMv8 core. The architecture is ARM’s. The core is theirs. Just like Steamroller is an AMD core based on the x86-64 architecture, or Qualcomm’s Krait is based on ARMv7.

      • End User
      • 5 years ago

      [url=http://www.gsmarena.com/tsmc_reports_strong_q4_earnings_thanks_to_apple_deal-news-10777.php<]TSMC handles Apple A8 and A8X production[/url<].

    • chuckula
    • 5 years ago

    There’s one key ingredient in this coming miracle that nobody — including that ANALyst* has mentioned, and it’s a biggie!

    Three little letters: G. P. U.

    I can’t believe I’m hearing myself say this, but it’s true: While Apple can at least emulate the performance of a lowballed Core 2 these days with their magical “custom cores” Apple has never… NEVER done its own GPU. That’s right kids, Apple needs Intel for its graphics capabilities even more than it does for the CPU (and believe me, Apple needs that CPU too).

    Apple relies on Intel, Nvidia/AMD (for discrete cards), or, in the case of the smartphone chips, Imagination for all of its GPU needs.

    Imagination makes some cute little GPUs for smartphones. Emphasis being “cute” and “little”: Imagination does *NOT* do high-end graphics… and by “high end” I mean last year’s desktop IGPs. Some of us who are fossilized enough to remember something called the “Kyro” have distant memories of Imagination’s first, and brief, incarnation as a high-performance GPU manufacturer… it didn’t go too well.

    For all the purported “stagnation” from Intel (hey, AVX-512 only *doubles* the per-clock vector FP and integer performance starting in 2015), one area where Intel is not standing still is the GPU.

    So it’s 2017: what is Apple going to have to go up against a Cannonlake GPU? Oh, but Apple can make its own GPU too, you say! Well sure, but even going the buyout route, it’s going to take a lot of time and heartburn to get a real GPU integrated into an “A” series part.

    * Obligatory Arrested Development clip: [url<]https://www.youtube.com/watch?v=pz8aYiH_nRg[/url<]

      • HisDivineOrder
      • 5 years ago

      On a side note, it’s worth noting that sometimes Intel uses Imagination GPUs in their lower-end products.

      I presume the talk of CPU performance improvement is probably in reference to the notion that both the CPU and GPU are improving in performance by leaps and bounds each year. This article seems to imply both will be good enough to run a full system in 2017. Personally, I don’t buy it and I see it as a ploy that Apple uses to get better CPU pricing each time they have to negotiate pricing, but hey…

      I think they might be powerful enough by then. Especially since Apple essentially designs its own custom parts built around technology developed by ARM and Imagination. So it’s not hard to imagine Apple scaling a chip up if they REALLY had to.

      I just think they’d rather use Intel. For cheap.

        • Pwnstar
        • 5 years ago

        I don’t consider Intel’s 5% CPU improvements “leaps and bounds”.

        GPUs are, you’re right, but then they’re easy: just add more cores.

          • DancinJack
          • 5 years ago

          He’s talking about Apple’s SoCs when he says leaps and bounds.

            • Pwnstar
            • 5 years ago

            No, he is clearly talking about Intel. Reread HDO’s post.

          • Ninjitsu
          • 5 years ago

          Well then I don’t think you understand CPU design in the slightest.

      • shiznit
      • 5 years ago

      Imagination can scale out beyond mobile SoCs; they don’t now because licensees aren’t asking them to. And nothing is stopping Apple from modifying the design to fit their needs; look what they did with ARM. With ray tracing coming in Wizard and Apple’s Metal API, I think it would be a pretty compelling solution.

        • blastdoor
        • 5 years ago

        I totally agree.

        The Internet is littered with mistaken predictions about what Apple *cannot* do, because they aren’t a “real” computer company.

        Just as Apple slowly and semi-quietly built up a CPU design team, enabling them to do what previously only Qualcomm could do (build a custom ARM core), they are doing the same thing with GPU design. Over the last 15 years they’ve been building up a GPU team. It’s semi-public knowledge, in so far as the Apple rumor sites faithfully report every acquisition and linked-in profile associated with the effort.

        Apple began to deviate slightly from off-the-shelf Imagination designs with the A8X. I bet their customization starts picking up steam in the A9, and by the time the A10 rolls around, Apple’s Imagination-based GPU may have as big of a lead over the off-the-shelf versions as Cyclone has over the A57.

      • the
      • 5 years ago

      You are correct that Apple doesn’t do their own GPUs…. yet, kinda. They’ve been siphoning talent from AMD, Nvidia and a few smaller players. They’ve put that talent to use to get a GPU configuration in the A8X that Imagination Technologies doesn’t natively offer.

      On the other hand, Imagination has been doing a bit of saber rattling about going higher end. Their Rogue 7 series scales far higher than previous incarnations. They wouldn’t be a threat to the GTX 980, but midrange discrete cards like the GTX 750 Ti could have trouble. Then there are Imagination’s real-time ray tracing chips that they’re quietly preparing.

      With regards to stagnation, it is true on the Intel side. The big jumps in performance stem from new extensions like AVX3, which require software to be recompiled. There is also the issue that as Intel continues to extend the x86 architecture, fewer of these extensions make sense to utilize in the general case. Sure, there is a niche to merit extending the ISA, but how much of it can be put to use generically? For example, what good are the SSE4 extensions that help with AES decryption for a graphics application?

      ARM designs take the opposite approach: keep the core instruction set lean while letting coprocessors take care of niche functionality quickly. Thus the parallel niche AVX3 would fill can be handled by GPGPU. Apple has a huge tactical advantage here by being a highly vertical company. Any sort of accelerator that would make sense to include in their SoC will have software support.

        • Ninjitsu
        • 5 years ago

        The only counter-point I have is that by 2017, the 750 Ti would pretty much represent entry level performance, and the 980 would be mid-range.

        But yeah, I guess if Apple’s just aiming for “good enough” performance, it should be more than adequate.

      • derFunkenstein
      • 5 years ago

      I’ll admit that it did not look good on paper.

      • Zizy
      • 5 years ago

      Why would Apple even need to design their own GPU to move away from Intel? Imagination can reach about 500 GFLOPS (FP32). Sounds plenty to put in Air, Mac mini and similar. For anything more, NV and AMD sell GPUs. Or even license. Think X1/PS4 with Apple CPU instead of cats.

      Also, Apple has enough money to buy both AMD and NV if they want some GPU engineers and feel like developing their own GPU. But if they bought both companies, they already have top GPUs 🙂

      • NTMBK
      • 5 years ago

      The second biggest GPU vendor has been drastically downsizing over the past few years… where do you think their ex-employees have been turning up?

      Apple is building up GPU design capability.

    • HisDivineOrder
    • 5 years ago

    It’s a funny thing. I’ve read this rumor many times. Usually toward the beginning of a given year, and usually about 3-6 months after I first (re)read the rumor (again), there’s an announcement about a new line of CPUs for Apple computers, with Intel and Apple proudly advertising the fact that a new line of CPUs has arrived in Macs.

    You’d think after a half-dozen times, Intel would get wise that these rumors are just “leaked” by Apple guys to get better prices on Intel chips by putting the “We might go elsewhere” boogieman out there.

    It used to be AMD they used. When AMD went kaput, Apple went and bought their own CPU design team to always have their fallback/leverage over Intel. I imagine the negotiations for CPU prices are just getting started again…

    • NovusBogus
    • 5 years ago

    So Apple is looking to get rid of the thing that made users actually care about Macs again? Yeah, um, that actually sounds pretty typical for Apple.

      • Chrispy_
      • 5 years ago

      Indeed; PowerPC Apples were a joke, not because PowerPC wasn’t any good, but because they weren’t Intel x86, so software that ran at all was scarce and software that ran well was even rarer.

      I would imagine that Apple is trying to cloud-host processing power anyway; soon the Mac Pro will have almost no processing power or storage. The ludicrously high cost of entry buys a glorified interactive display that connects you to Apple’s Intel-powered cloud network for [i<]x[/i<] years.

        • the
        • 5 years ago

        Apple has solved the software scarcity problem with the App Store. Not that ARM software would inherently be more common but end users wouldn’t have to look around to find it.

        Apple’s development tools also make it trivial to bundle multiple binaries in one package. OS X can represent 32 bit PowerPC, 64 bit PowerPC, 32 bit Intel and 64 bit Intel executables as one single package for the end user to run.
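
        (For the curious, here is a minimal sketch of how such a "fat"/universal package is laid out on disk, assuming the standard Mach-O fat_header/fat_arch structures; it just lists which architecture slices a file carries. Purely illustrative, and not Apple tooling: in practice `lipo -info` reports the same thing.)

        ```python
        # Minimal sketch: list the architecture slices packed into a "universal"
        # (fat) Mach-O file, following the fat_header / fat_arch layout from
        # <mach-o/fat.h>. Illustrative only.
        import struct
        import sys

        FAT_MAGIC = 0xCAFEBABE          # big-endian magic of a fat binary
        CPU_NAMES = {
            7:          "i386",
            0x01000007: "x86_64",
            12:         "arm",
            0x0100000C: "arm64",
            18:         "ppc",
            0x01000012: "ppc64",
        }

        def list_slices(path):
            with open(path, "rb") as f:
                # struct fat_header: magic, nfat_arch (both big-endian uint32)
                magic, nfat_arch = struct.unpack(">II", f.read(8))
                if magic != FAT_MAGIC:
                    print("not a fat binary")
                    return
                for _ in range(nfat_arch):
                    # struct fat_arch: cputype, cpusubtype, offset, size, align
                    cputype, _sub, offset, size, _align = struct.unpack(">5I", f.read(20))
                    print(CPU_NAMES.get(cputype, hex(cputype)), "offset:", offset, "size:", size)

        if __name__ == "__main__":
            list_slices(sys.argv[1])
        ```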

        Conceptually cloud hosting would work until you realize that you’d also have to host your data in the cloud too. For the average person, that’s not so troublesome, but for, say, a video editor, it is a deal breaker. It wouldn’t work for a professional system.

        Edit: typo fix.

          • Chrispy_
          • 5 years ago

          Apple software and development tools are completely irrelevant.

          What made Intel x86 hardware valuable to most users is the fact that you could run [i<]non-Apple[/i<] software on a Mac.

            • blastdoor
            • 5 years ago

            “most users”? I think you’ve got a very skewed perspective. I suspect that a tiny fraction of MacBook Air users ever boot into Windows or use one of the virtualization products. The fraction might be a little higher for iMacs and Pro machines, but even there I’ll bet it’s less than 10%.

            When I switched (back to) the Mac in 2006, it was certainly comforting to know that I could run Windows on a Mac at native speeds. I viewed it as a safety net, in case Apple gave up on the Mac or in case there was some critical piece of software I absolutely had to run that was only available for Windows.

            The state of the Mac in 2015 is very different. There is no worry about the platform going away, and software availability has improved since then. I don’t think the safety net is really needed anymore.

            • the
            • 5 years ago

            Being able to run Windows as a native VM did assist Apple in expanding market share in several professional niches. The average user though sticks with OS X exclusively. Now the preferred method of getting software is via the Mac App Store. This is why it wouldn’t be a major issue to change platforms as the ARM updates can be pulled automatically from there.

            And if all else fails, there is a version of Windows that’ll also run on ARM hardware and Linux runs on everything.

            • Beelzebubba9
            • 5 years ago

            Macs are the most common type of computer among the professional set I work with (web programmers, enterprise IT in NYC) substantially because they have the best type 2 hypervisor support.

            I don’t run Windows all that often on my work Macs, but having that option is important.

            • mganai
            • 5 years ago

            Win RT is already a joke though. Besides, what’s the point of Windows without the ability to natively run x86 stuff to begin with?

            • End User
            • 5 years ago

            [quote<]What made Intel x86 hardware valuable to most users is the fact that you could run non-Apple software on a Mac.[/quote<] That was very important to me back in 2006. Over the past 9 years my reliance on Windows apps has faded away to 0 due to the wide selection of apps now available to OS X, the cloud, and the fact that developers are now savvy enough to develop for multiple platforms.

        • blastdoor
        • 5 years ago

        You’re way off.

        The G4 and G5 were terrible compared to x86 contemporaries. It was the result of years of underinvestment by IBM and Motorola (not that I blame them — the sales volumes didn’t justify the investment).

        The specific motivator for switching to Intel was that it was impossible to get a G5 into a laptop with acceptable performance/watt.

          • Deanjo
          • 5 years ago

          [quote<]The specific motivator for switching to Intel was that it was impossible to get a G5 into a laptop with acceptable performance/watt.[/quote<] Bingo! (although it was less about performance and more about just pure heat). Still, the G4/G5 held their own against x86 until x86 came out with SSE, which negated the Altivec advantage that the PPCs had at the time.

          • the
          • 5 years ago

          The G4 stagnated as a process shrink didn’t result in much higher clock speeds. (450 Mhz to 550 Mhz). This was exactly at the worst time as Intel and AMD were racing to be the first to 1 Ghz. It also didn’t help that one of the lead designers died in a plane crash which made the modified G4 chip (PPC 7450) very late to market. There was also a second FSB scheme announced for the G4 but Apple didn’t want to adopt it so performance was continually bottlenecked there.

          The G5 fixed the FSB problem. Unfortunately, there was a nasty integer design defect that made integer operations take an extra cycle to complete. While the G5 was a big jump in performance even on the integer side, it still lagged behind the Pentium 4s and Athlons of the day for those workloads. Switch the topic to floating point and the G5 was very respectable, due to the inclusion of FMA in the core ISA and the flexibility of Altivec.

          The real problem Apple faced with the G5 was the cost of the chipset. Things like that FSB needed to be initialized by a service processor before boot which required a good chunk of additional logic. For the POWER4 that the G5 was derived from, it made sense as a server but as a consumer desktop these costs added up. The chipset was also a source of contention between Apple and IBM. When the G5 chip was in development, Apple was responsible for the first generation chipset which IBM would license for their servers. Apple’s memory controller was horrible and IBM noticed. The second generation chipset IBM developed themselves but it required a significant amount of external logic (the PCIe controller isn’t integrated into the north bridge for example).

          The inability of the G5 to get into a laptop was a problem for Apple. Hence why they were looking at PA-Semi’s low-power chip as an alternative. When things broke down between Apple and IBM over the desktop roadmap, the PA-Semi deal fell through. Ironically, it was PA-Semi that Apple later bought to be the cornerstone of their CPU development team.

            • blastdoor
            • 5 years ago

            Yup, those are the details. But I contend it all stems from underinvestment. Nobody (Apple, IBM, Moto) had the incentive to make the investments necessary to make PowerPC competitive. The volumes were just too low to support that level of investment. In the early 2000s Apple was selling fewer than 4 million Macs a *year* (now Apple sells more than that in a quarter). And Apple was pretty much the only user of G4s and G5s. Given the economics, it’s amazing the G4 and G5 weren’t even worse.

            Imagine if Apple were selling between 15 and 20 million Macs a year back then (as they are now). The economics of PPC would have been very different from those higher volumes alone. Now imagine that in addition to 15 to 20 million Macs, Apple were also selling over 200 million devices per year using some of the same core components as the Mac. How competitive might PPC have been then? I’d say — pretty darned competitive. Yet that’s exactly where Apple is now with their ARM SOCs.

            I have to chuckle at the people who dismiss Apple’s efforts here, saying that only “big companies” can afford to invest billions of dollars and years in development on their own CPUs. What company is bigger than Apple? Who has more billions to spend? Who is more willing to make long term investments?

            • the
            • 5 years ago

            The catch with your iPad and iPhone parallel is that the SoC is designed in-house. While a member of the PowerPC consortium, Apple never designed a core themselves. Much like with Intel’s chips in Macs today, Apple has to rely on the designs Intel brings to market, not exactly what Apple wants.

            PowerPC did get its investors, though. I’ve already mentioned PA-Semi as a designer, but before that there was Exponential and the [url=http://en.wikipedia.org/wiki/X704<]X704 chip[/url<]. PowerPC was also chosen by the big three players in the console business to be the CPU core of choice for their platforms.

            • sweatshopking
            • 5 years ago

            TNX NERD.

    • PrincipalSkinner
    • 5 years ago

    So if this happens, maybe Intel won’t be able to charge $400 for dual-core CPUs anymore.

    • DancinJack
    • 5 years ago

    [quote<]Basing those systems on homebrewed silicon would give Apple greater control over "launch timing of the Mac line," Kuo believes, since the company would no longer be tied to Intel's product cycles.[/quote<] Being tied to TSMC and Samsung isn't really any better.

      • chuckula
      • 5 years ago

      I’m breeding my own mule for transportation so I don’t have to be tied to the operational schedule of those Pratt & Whitney turbofans on that jet.

    • Laykun
    • 5 years ago

    Yes I’m sure Apple and third-party OSX developers would be more than happy to port their software to ARM. I’m also sure Apple would love to retroactively destroy the progress Macs have made as gaming machines by making all the current titles non-functional, because you know, computers are about the hardware, not the software. /s

    Seriously, what has this analyst been smoking?

      • blastdoor
      • 5 years ago

      Apple has paved the way to make the software transition fairly painless. Carbon is long gone, the only software development tools are controlled by Apple, and Apple has taken its compilers/tools down a path that supports cross-CPU compatibility.

      Compared to the other CPU transitions Apple has made (at times when they were a much smaller, much more “beleaguered” company), this would be easy-peasy.

        • Pwnstar
        • 5 years ago

        Devs are lazy. They aren’t going to port their back catalogue of games to ARM just because of Apple.

    • blastdoor
    • 5 years ago

    I find this prediction credible both because of the track record of the guy making it and the substantive rationale for doing it.

    I’ll guess that an A10X is between 2 and 3 times faster than an A8X for CPU tasks. I base this on these assumptions:

    1. 4 cores instead of 3
    2. 2.5 GHz instead of 1.5 GHz (when used in a Mac)
    3. 40% increase in IPC

    For tasks that benefit from the extra core, that’s a cumulative 3x speed increase. For tasks that can’t use the extra core, it’s a cumulative 2x increase.
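
    (A quick back-of-the-envelope check of that estimate, using the assumed figures above; these are the assumptions stated in the comment, not measurements:)

    ```python
    # Rough sketch: speedup = clock ratio x IPC ratio (x core ratio for threaded work).
    # All figures are the assumptions listed above, not benchmark results.
    cores_new, cores_old = 4, 3
    clock_new, clock_old = 2.5, 1.5    # GHz
    ipc_gain = 1.40                    # assumed 40% IPC increase

    single_thread = (clock_new / clock_old) * ipc_gain        # ~2.3x
    multi_thread = single_thread * (cores_new / cores_old)    # ~3.1x

    print(f"single-threaded: ~{single_thread:.1f}x")
    print(f"multi-threaded:  ~{multi_thread:.1f}x")
    ```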

    For the work that people do on a MBA, I think that’s fast enough, and basically takes general purpose CPU performance off the table as an issue. But certainly this is a level of performance that, in and of itself, is insufficient to justify the switch. So here’s what I think will justify the switch:

    1. Price. Apple will probably save between $50 and $100 in marginal cost for every Mac.

    2. Customized fixed function units. I think this might be the deciding factor. Apple can implement features in hardware that others can only implement in software. I’m not sure what these features will be, but I think it’s noteworthy that Apple bought the company behind the 3d sensors used in Kinect. More obvious fixed function units could focus on video encoding/decoding and of course this will make it easy to include TouchID in an MBA.

    • chuckula
    • 5 years ago

    I believe this to the exact same degree that those analysts predicted the booming future market for Itanium.

      • Takeshi7
      • 5 years ago

      Itanium would have been sweet if it would have been accepted. They should have used it in markets where backwards compatibility isn’t as important. Like an Itanium game console. *Drool*

        • Grigory
        • 5 years ago

        The idea of putting information about the possible parallelism of the code into the code itself at compile time, instead of having the CPU extract it at great cost on every execution, is pretty solid.

          • mesyn191
          • 5 years ago

          Except no compiler will do this well with branchy scalar code. You need lots of programmers to make up for the deficiency of the compilers or else your software will run very slow.

          Which was a common issue with ported Itanium software. It was slow on everything but simplistic FP code but then all VLIW uarchs are great at that sort of thing.

        • bhtooefr
        • 5 years ago

        Backwards compatibility wasn’t Itanium’s biggest problem, it was that Itanium only worked well on specific workloads, even with native code, due to how it was structured.

        • mesyn191
        • 5 years ago

        No, it wouldn’t have. The performance was never there, and the compilers were never able to make up for the shortcomings the way Intel said they would. It was used in markets where backwards compatibility wasn’t that important but reliability was; it failed there too.

        For a CPU that was supposed to be small, cheap, and capable of high clocks it was always large, expensive, and had mediocre clock speed.

        • Krogoth
        • 5 years ago

        You realize that Itanium was meant to go against RISC and other “big iron” CPU architectures back when it was being developed?

        It was never meant to be a mainstream design. It was a HPC chip from the start. The Itanium’s biggest fault was that it was too little, too late. The big iron market was dying by the time it came out. Cluster and distributed computing was taking over the HPC world. It was competing against cheaper and more versatile solutions.

          • the
          • 5 years ago

          No. It was Intel’s and HP’s master plan to make Itanium the 64-bit platform of the PC world. It is true that all the Itanium chips that were released were server/workstation focused, but Intel’s original plan was to have [url=http://www.xbitlabs.com/news/cpu/display/20040512052741.html<]Tejas ship with IA64 functionality[/url<]. Also, Itanium was never really targeted at the HPC space. Sure, it was used there by SGI for a bit as MIPS fell off into the embedded realm, but that was pretty much it. In reality, Itanium has become the platform for HP-UX, OpenVMS and NonStop operating systems.

            • Krogoth
            • 5 years ago

            That was a long-term goal, for after the software ecology for IA64 was mature enough that it would allow the x86 crowd to wean away.

            Itanium (not IA64) was an HPC chip from the start and it was targeted at “Big Irons”. IA64 started development back in 1992, when Big Irons still had demand. It was Intel’s DEC, SUN and SGI “killer”. I suppose it was successful in that regard, since those platforms were on life support by the time Itanium reached commercial channels. Itanium ended up competing against its x86 counterparts and lost. AMD’s revision of the x86 standard, a.k.a. x86-64, was the final nail in Itanium’s coffin.

            Itanium now only serves a small niche: its existing userbase.

            • the
            • 5 years ago

            I’ve always heard that [url=http://en.wikipedia.org/wiki/IA-64<]IA64 was Itanium[/url<] and not a different entity. Perhaps you are thinking of another Intel project like the i860 or i960?

            • Krogoth
            • 5 years ago

            IA-64 was the codename of the architecture that was meant to replace x86 in the long-term. Itanium was the first commercial product that came from it.

            Intel was going to use it to go after the HPC market and then slowly disseminate it to other markets once the software ecology adapted to it and moved away from x86.

    • Takeshi7
    • 5 years ago

    And here I am, still rocking an Apple with an IBM processor.

      • willmore
      • 5 years ago

      *scoff* Motorola, baby.

    • sweatshopking
    • 5 years ago

    YOU SPELLED MY NAME WRONG. I’VE BEEN SAYING THIS FOREVER.

      • Srsly_Bro
      • 5 years ago

      I CAN SPELL IT RIGHT. IT’S SWEET SHOP KING!

        • sweatshopking
        • 5 years ago

        I’M GOING TO HUG YOU UNTIL YOU BEHAVE.

          • Srsly_Bro
          • 5 years ago

          I’LL NEVER BEHAVE!

          • ronch
          • 5 years ago

          I’d love to be the mischievous and rowdy student of a sexy, pretty teacher in school who does that.

        • ronch
        • 5 years ago

        Is it (Sweet) (Shop King) or (Sweet Shop) (King)?

          • Srsly_Bro
          • 5 years ago

          HE HAS A BAKERY AND HE SAYS NICE THINGS TO ME

      • MadManOriginal
      • 5 years ago

      They spelled it half right: ANALyst.

    • tay
    • 5 years ago

    Analysts are dumb as shit, and know nothing. They get to their positions through hard work academically, connections, and good social skills.
    Witness:
    – crude oil price targets that get revised constantly based on actuals
    – stock target prices that get revised up or down after the price is already trending
    – PC and tablet sales predictions made from thin air, just reflecting previous trends

    In other words, I’m not buying this and their guess is as good as yours or mine. I don’t see a reason for it just yet.

      • NeelyCam
      • 5 years ago

      Nobody has ever gotten fired for looking at the last N years and extrapolating to the future.

    • smilingcrow
    • 5 years ago

    Cyril – “I’m sure Apple will have silicon powerful enough to drive new MacBooks and iMacs by 2017.”

    That seems a stretch considering the iMacs use the fastest Core i7 desktop chips rated at 88W. To go from a high end tablet chip to a high end desktop chip in that timeframe would be incredible.
     Apple have no experience of designing, and just as importantly fabricating, such a chip. It’s not easy; just ask AMD and the foundries.
     I can see Apple getting there eventually, but not so soon. They could get the multi-threaded performance earlier, as they aren’t allergic to using large core counts (at least in Mac Pros), but hitting the single-core and power-efficiency targets isn’t so easy.
     I suppose it also comes down to how much it will cost Apple to develop them and how that relates to the volumes they ship.

      • bhtooefr
      • 5 years ago

       It is worth noting that Apple could probably beat damn near anything AMD can throw at them CPU-wise, using today’s A8X design adapted to different situations (overclocking and adding cores). That’s how badly off AMD is…

        • smilingcrow
        • 5 years ago

        That’s a big ‘probably’ as taking a low power design and scaling it to a high performance fabrication process isn’t easy. Look at the problems the fabs are having with high performance nodes below 28nm. Not saying it couldn’t be done but it’s not a given.

    • jjj
    • 5 years ago

     They can already do it for an Air, at a lower price point, on 14/16nm FinFET.
     They go quad, they up the clocks vs. 20nm, and that’s good enough. A $20 SoC and a few more tweaks would let them push the starting price down to $699 (at least) from $899, and that would allow for a significant gain in market share.
    What is unclear is if they care enough anymore about the Mac and if it’s not rather late. Think what impact foldable screens will have on PC sales. When even 10 inch tabs will fit in any pocket, it will be a lot harder to sell laptops. And then glasses would push even further. If Apple chooses to set the Mac line on auto pilot and focus on future categories, it wouldn’t be that surprising.

      • xeridea
      • 5 years ago

      The reason that an Air is $899 instead of $699 is because it is shiny, and Apple fanatics like shiny things. It has little to do with how much it actually cost to make.

        • Pwnstar
        • 5 years ago

        But if Apple reduces the cost to make it, they can lower the price while keeping the same amount of profit.

          • NeelyCam
          • 5 years ago

          Or they can keep the price, make more profit and keep investors happy.

            • Pwnstar
            • 5 years ago

            They could, but remember the lower the price, the more you sell. If they keep the same profit margin, selling more units means more profit. Investors like that.

            • NeelyCam
            • 5 years ago

            Yes. You determine the sales volume as a function of price. Then, you determine what the single-unit profit is as a function of price. Multiply those two, and figure out where the product’s derivative is zero.

            Boom. Maximum profits. Assuming sales volume and profit are both positive at least for some price range.

            And then you hope your competitor doesn’t slash prices or release new chips that would screw up your sales volume formula.
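
             A minimal sketch of that exercise, with an invented linear demand curve and unit cost (none of these numbers are Apple’s; they’re purely illustrative):

             # Toy pricing model: all inputs are made up for illustration.
             def units_sold(price):
                 # simple linear demand curve: fewer units sold as the price rises
                 return max(0.0, 10_000_000 - 8_000 * price)

             def total_profit(price, unit_cost=450.0):
                 return units_sold(price) * (price - unit_cost)

             # brute-force the maximum instead of solving the derivative analytically
             best_price = max(range(450, 1500), key=total_profit)
             print(best_price, total_profit(best_price))   # peaks around $850 for these inputs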

    • WhatMeWorry
    • 5 years ago

     So in two years’ time we’ll get something “…between Intel’s Atom and Core i3 lines”? At least to me, that doesn’t sound very exciting. And Intel is just going to be sitting still in the meantime? Won’t they be at Cannonlake or something by then?

     Nonetheless, exciting times. Maybe if AMD falters, Apple will take up the mantle. Hey, I made a pun. Get it: AMD and Mantle?

      • Pwnstar
      • 5 years ago

      Surely this “analyst” means what Atom and i3s will be in 2 years time.

      • mganai
      • 5 years ago

      Analyst doesn’t know what he’s talking about. i5 has been the default for iMacs for nearly 4 years.

    • Thrashdog
    • 5 years ago

    More likely they just drop OSX like it’s hot and make the ARM-based Macs into giant, glorified iPads with keyboards and (maybe) mice. Other than the five people who still use Mac Pros for creative work, I doubt anyone would even notice the difference.

      • chuckula
      • 5 years ago

      You may have just called it: While we argue technical merits, Apple just goes and declares whatever chip they happen to have in 2017 as “good enuff” and Macs turn into oversized iPads…. and Woz weeps bitter bitter tears.

    • Farting Bob
    • 5 years ago

     I don’t think the Mac line is strong enough to break away from x86. Intel makes the fastest consumer CPUs and will continue to for a long time; losing that would hurt their prosumer market, and losing the option to dual-boot Windows would dissuade some.

    And if they went with their own ARM based CPU just for the low end or smaller macbooks, then you have 2 different versions of MacOS that would be incompatible between some laptops and all their other higher end Intel ones.

     In tablets, people are fine with ARM and iOS. In laptops and desktops, I don’t think it’s worth it.

      • Ethyriel
      • 5 years ago

       The rumored 12″ MBA is actually extremely tablet-like, with a single USB port for connectivity and charging combined. I don’t know that the rumors are true, but maybe they’re priming their product line to move iOS to the MBA while the MBP and Mac Pro stick with OS X. OS X has already been moving towards iOS in interface design for years, and I’m sure iOS can be extended towards OS X for these use cases.

      Or maybe they’re planning to throw more cores at the higher end machines like I suggested in another thread. Maybe stream processors, too.

      Or, more likely, maybe this is BS.

      • Voldenuit
      • 5 years ago

      [quote<]if they went with their own ARM based CPU just for the low end or smaller macbooks, then you have 2 different versions of MacOS that would be incompatible between some laptops and all their other higher end Intel ones.[/quote<] Cough. Windows RT. If Microsoft was dumb enough to make this mistake, what makes you think Apple, in their hubris, would be any wiser?

        • bhtooefr
        • 5 years ago

         Microsoft was also dumb enough to prevent RT devices from running any desktop apps other than what shipped on them (and Office), and to prevent them from joining a domain. Microsoft has never properly pulled off a CPU architecture change either, because they don’t have enough control over their market. Their best attempt was Alpha (and that was only because DEC forced them into it to settle the Mica/NT lawsuit), and even then 99% of software was running in emulation; it only worked because the Alphas of the day were ridiculously fast compared to the x86s of the day.

        Apple, on the other hand, has a lot of experience jumping CPU architectures. They’ve got a lot of control over their ecosystem, and could easily force an architecture change down developers’ throats even against their will. Granted, they’ve done architecture jumps in the past when there was a significant performance improvement, and this wouldn’t be (and I honestly don’t see them doing a straight jump to ARM, but rather moving devices to iOS if they want them to be ARM).

      • the
      • 5 years ago

       Except Apple did the whole hardware transition thing in the past… [i<]twice.[/i<] Macs originally used Motorola 68k chips before migrating to PowerPC. Then OS X came along, which helped the transition to Intel greatly. All the pieces are in place for a rapid transition if they decide to go that route. Case in point: the time from the first Intel Mac shipping to the first Intel-only version of OS X was three years.

        • Klimax
        • 5 years ago

         Always to something better, though. Here, even in the absolute best case (which it won’t be in the real world) it would be a sidestep, and far more likely a steep downgrade with no upside.

          • the
          • 5 years ago

           There are several positive sides. For the MacBook line, it’d mean tighter integration and lower power consumption. On the desktop side, it’d mean lower prices for Apple.

           Then again, this is Apple, so even if you are correct that it’d be a downgrade, Apple has oddly moved in that direction before. Take the 2014 Mac Mini as an example.

      • demani
      • 5 years ago

       But Apple could ship fat binaries again, and it would be like the ’90s all over again (when they switched to PPC). Thing is, if any company can make it happen it’s Apple, but the question is partly what the actual benefit is (chip design isn’t cheap) and whether they want to risk falling behind on performance again like they did with the G5s. This seems completely hypothetical, and not based on anything other than conjecture from looking at performance-improvement curves.

       Of course, irony of ironies: Apple was a co-founder of ARM, and it was only by selling off its stake that ARM was able to become the force that allows this conversation to be had in the first place.

    • Flying Fox
    • 5 years ago

     This still means they will have to stick with Xeons for the Mac Pros? There are a couple of scenarios:
     1. Push iOS onto MacBooks (perhaps MacBook Pros too?) and iMacs, marginalizing OS X.
     2. Do the emulation thing, as others suggested, and keep OS X for the MacBook/MacBook Pro/iMac/Mac Pro lines.

     The professionals may scream about this because they can only do their work on desk-bound Mac Pros with satisfactory performance?

      • Ethyriel
      • 5 years ago

      Or they could throw enough ARM cores at the problem. Scale through parallelism.

        • Duct Tape Dude
        • 5 years ago

        The main thing ARM can’t compete with is single-threaded performance. I don’t think I’ve seen any ARM chip come close to a common laptop x86 CPU for that.

          • the
          • 5 years ago

           They have a lot of ground to cover to catch up to a 4 GHz Haswell, but they’re standing closer than you think. Single-threaded performance for the A8X is around Nehalem level per clock. That’s not terribly old. Of course the catch is [i<]per clock[/i<], as Nehalem desktops exceeded 3 GHz while the A8X sits at 1.5 GHz. Apple would have to target a higher clock speed with their designs, which would likely lower IPC due to additional pipeline stages, etc.
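
           As a rough back-of-the-envelope illustration of that catch (the IPC ratios below are assumptions following the estimate above, not measurements), single-thread throughput scales roughly as IPC × clock:

           # Rough single-thread throughput: perf ~ IPC * clock. Ratios are guesses.
           a8x_ipc, a8x_ghz = 1.0, 1.5            # assume roughly Nehalem-class IPC
           haswell_ipc, haswell_ghz = 1.25, 4.0   # assumed ~25% IPC gain, Nehalem -> Haswell

           gap = (haswell_ipc * haswell_ghz) / (a8x_ipc * a8x_ghz)
           print(f"{gap:.1f}x")   # ~3.3x single-thread gap left to close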

            • Klimax
            • 5 years ago

            “Per clock” is key. It cannot clock any higher, because of design. Always compromises. You cannot have high IPC and high clocks.

             As soon as you include clocks, no ARM chip is anywhere near Haswell in performance. Omitting clocks is a standard mistake, one that invalidates any conclusion on sight.

            • NTMBK
            • 5 years ago

            [quote<]It cannot clock any higher, because of design.[/quote<] [[i<]citation needed[/i<]]

            • Ninjitsu
            • 5 years ago

            Citation being anyone who’s taken a VLSI or related course.

            • blastdoor
            • 5 years ago

            Except when you move from one power envelope to another, the tradeoffs change.

            Just take 2 seconds to think about this — does a 1.7 GHz Haswell have a different IPC than a 4 GHz Haswell?

            • Ninjitsu
            • 5 years ago

            Well, no. Other things do, though. And you’re assuming that Intel hasn’t tested/designed for that dynamic range.

            The circuit has to meet stuff like latency and stability targets too. I’m sure it’s no accident that everything from Sandy Bridge to Haswell has a minimum of 800 MHz, for example.

            • blastdoor
            • 5 years ago

             I think you’re moving the goalposts in this discussion. Klimax made a blanket assertion that the A8X “cannot clock any higher, because of design.” NTMBK asked for a citation on that. You made it sound like the assertion was self-evidently correct.

            It is certainly true that there are high power and low power processes. It is certainly true that the A8X is fabbed on a low power process. It is certainly true that there is a tradeoff between IPC and clock speed.

            But it is not self-evidently true that it is physically impossible for Apple to increase the speed of the A8X, even on the current process.

             It is even further from self-evidently true that the existing Cyclone design, fabbed on a different process, couldn’t achieve even higher clock speeds than what’s possible on the current process. (Yes, I realize designs need to be mated to the process, and that involves work, but that’s a radically different issue from saying that it’s self-evident that the design cannot achieve higher clock speeds because of some fundamental tradeoff between IPC and clock speed.)

            I can’t seem to find any info out there about cyclone’s pipeline depth. I thought Anandtech had figured that out, but I can’t find it now. But I’d think you’d at least want to know that before you start asserting with metaphysical certitude that increasing the clock speed of the current design is impossible.

             The bottom line is, you guys were saying something that is not at all self-evident. You were asserting that the fundamental design of Cyclone would have to be changed for any increase in clock speed. That’s far from clear. Even with no change in process, it’s reasonable to assume some headroom exists on clock speed (people overclock CPUs all the time without first redesigning their cores… there are even some DIY websites out there that focus quite a bit of attention on overclocking… maybe you’ve heard of them). And with a change in process, it could go a lot higher.

            • Klimax
            • 5 years ago

             Just reposting links.

             Just one of many, on L1 latency:
             [url<]http://www.realworldtech.com/forum/?threadid=143924&curpostid=143957[/url<]

             And a nice paper:
             [url<]http://www.cs.utexas.edu/users/skeckler/pubs/isca00.pdf[/url<]

            • the
            • 5 years ago

             All other things being equal, I would actually say [i<]yes[/i<]. The reason is that a cache miss to main memory takes more CPU cycles to resolve at 4 GHz than at 1.7 GHz (remember that the memory latency is the same regardless of clock speed here). Of course, the 4 GHz chip is ultimately faster due to the raw clock speed increase, but their IPC is going to be ever so slightly different. At some point designers have to make the rest of the system proportionally faster as well to negate the diminishing returns.
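
             A back-of-the-envelope model of that effect, with purely illustrative inputs (the miss rate, DRAM latency, and base IPC are all assumptions, not measurements):

             # A fixed DRAM latency costs more core cycles at a higher clock, so IPC dips.
             def effective_ipc(freq_ghz, base_ipc=2.0, dram_ns=80.0, misses_per_kinstr=1.0):
                 stall_cycles = (misses_per_kinstr / 1000.0) * dram_ns * freq_ghz  # per instruction
                 return 1.0 / (1.0 / base_ipc + stall_cycles)

             print(f"{effective_ipc(1.7):.2f}")  # ~1.57 instructions per cycle
             print(f"{effective_ipc(4.0):.2f}")  # ~1.22 instructions per cycle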

            • NTMBK
            • 5 years ago

            Oh so you have access to the VLSI code for Cyclone, and have performed a thorough analysis of its maximum clock speed? Cool! Care to share with the rest of the class?

            Of course there will be clock limits to the design, and I suspect that they will be far lower than Haswell’s. But flat out claiming “it cannot clock any higher” is a bit ridiculous, when none of us know.

            • Klimax
            • 5 years ago

             Just one of many, on L1 latency:
             [url<]http://www.realworldtech.com/forum/?threadid=143924&curpostid=143957[/url<]

             Incidentally, it addresses POWER perfectly too... Still can’t search those forums, so other aspects have to be found manually... 🙁 (Also, many things are spread fairly thinly over that forum.)

             And a nice paper:
             [url<]http://www.cs.utexas.edu/users/skeckler/pubs/isca00.pdf[/url<]

             Should be enough for now.

            • VincentHanna
            • 5 years ago

            Clarke’s Three Laws are three “laws” of prediction formulated by the British science fiction writer Arthur C. Clarke. They are:

             1. [b<]When somebody states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.[/b<]
             2. The only way of discovering the limits of the possible is to venture a little way past them into the impossible.
             3. Any sufficiently advanced technology is indistinguishable from magic, if you don’t know any better.

            • chuckula
            • 5 years ago

            I prefer this adaptation from Clarke:

            [quote<]An ARM based MacBook will come out 50 years after everyone stops laughing[/quote<] So let's assume we all stop laughing this year, I'll start saving to buy my 2065 model Macbook!

            • the
            • 5 years ago

             High IPC and high clocks? I would present the POWER7 as an example, as it goes up to 4.4 GHz and its IPC is similar to that of Haswell. The POWER8 is faster still per clock, but it currently tops out at a ‘mere’ 3.8 GHz. It can be done, but high clocks plus high IPC also means high power consumption.

            We have yet to see Apple design anything that’d need a TDP beyond 5W. There is a lot of room for Apple to grow if they wanted to introduce a 130W part for desktops.

            • blastdoor
            • 5 years ago

            Exactly. It’s not just IPC and Hz. There’s also power. Why is this so hard for some people to understand?

            • Ninjitsu
            • 5 years ago

            Yeah, but you don’t simply throw a larger power envelope at a problem. It’s not THAT simple, is all we’re saying. Look at AMD’s designs, for example.

            Maybe Apple [i<]does[/i<] go ahead and pull this off? If it does, then I can assure you that their low-power and high-performance architectures will be different, just like Bay Trail and Haswell are different.

            • blastdoor
            • 5 years ago

            You are aware that Haswell runs at more than one clock speed, right?

            • Ninjitsu
            • 5 years ago

            I am. You are aware that it’s designed to, aren’t you? It’s not magical. It was engineered with certain constraints in mind. Oh and, the BCLK can’t be changed too much, just the multipliers, because other things depend on it. And those multipliers are additional hardware blocks.

            Apple will have to do the same* with an equivalently sized core of equivalent speed. It’s not simple. Clock speeds are just one part of the puzzle. There’s a ton of stuff that can go wrong, it’s incredibly complex. That’s all I’m saying.

             What Haswell does or doesn’t do has no real bearing on what Apple can or can’t do with Cyclone’s [b<]successors[/b<]. But it's unlikely that Apple can simply crank up the clocks (and voltages) on Cyclone and expect it (and the other systems around it) to work, unless they've already allowed for such a thing. *engineer with a different set of constraints in mind

            • blastdoor
            • 5 years ago

            And you know what Apple’s intent was in designing cyclone? You know that they never intended for it to be clocked higher than 1.5 GHz? Do you know the pipeline depth?

            • the
            • 5 years ago

             There is likely some wiggle room in the design, as it is wiser to be a bit conservative on expectations. Its target speed was almost certainly higher than 1.5 GHz, though how much higher has been a topic of great speculation. I’d fathom 2.5 GHz max, considering the clock speed rush on the Android side and Apple’s periodic reuse of its SoCs in other devices (Apple TV).

             As for the pipeline depth, it is an absolute minimum of 14 cycles by merit of the branch misprediction penalty, with an observed upper bound of 19 cycles. Interestingly enough, those values are also how Haswell is described, with a 14-19 stage pipeline. [url=http://www.anandtech.com/show/8554/the-iphone-6-review/3<]Source[/url<]

            • NeelyCam
            • 5 years ago

            [quote<]But it's unlikely that Apple can simply crank up the clocks (and voltages) on Cyclone and expect it (and other systems around it) to work[/quote<] I was just reading [url=http://www.anandtech.com/show/8864/amd-fx-8320e-cpu-review-the-other-95w-vishera<]Anandtech's review of AMD FX-8320E[/url<]. It shows pretty well how upping clocks/voltages isn't particularly effective... After overclocking it to 4.8GHz (system power consumption goes from some 160W to 340W), it's roughly equivalent to i7-4765T (a 35W TDP part).
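
             The rough scaling behind numbers like that is dynamic power ∝ voltage² × frequency. The stock clock and both voltages below are invented for illustration (they are not taken from the review), so treat this as a ballpark only:

             # Dynamic power scales roughly with C * V^2 * f; ignores leakage and
             # the rest of the system, so this is only a rough sanity check.
             base_f, base_v, base_w = 3.2, 1.20, 160.0   # assumed stock clock, voltage, system watts
             oc_f, oc_v = 4.8, 1.45                      # assumed overclocked clock and voltage

             scale = (oc_f / base_f) * (oc_v / base_v) ** 2
             print(round(base_w * scale))   # ~350 W, roughly in line with the measured jump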

            • Klimax
            • 5 years ago

             Evidence? Also, POWER is a fairly strange beast with a fairly narrow focus and some crazy features (like massive eDRAM).

            • the
            • 5 years ago

             [url=http://www.itjungle.com/tfh/tfh092914-story02.html<]Here[/url<] are some industry-standard server benchmarks comparing POWER8 and Ivy Bridge-EP (Haswell-EP had yet to launch). Per core, POWER8 is roughly twice as fast as Ivy Bridge, but that doesn't factor in clock speeds. The POWER8 was running at 3.5 GHz vs. 2.7 GHz, so per clock the POWER8 is roughly 50% faster than Ivy Bridge-EP. Considering that Haswell moved performance about 10% per clock over Ivy Bridge, I think it is safe to say that POWER8 is faster than Haswell per clock.

             As for massive eDRAM, Intel has been rumored to offer it in future EX systems on a daughter die, similar to the Crystalwell parts in mobile. At the very least, Intel is going to be using some stacked memory as a massive L3 cache for Knights Landing.

             As for other POWER8 oddities, it supports binary-coded decimal in hardware, so it's wickedly fast with that data type. The L4 cache in POWER8 is very, very specialized, as it is part of the memory buffer chips: transactions going through a particular memory channel are cached in that channel's L4. How it handles coherency is a bit different than the on-die caches.

            • Klimax
            • 5 years ago

             It uses PR provided by IBM. Great. You can play with SPEC quite a bit to massage data in interesting ways without violating the guidelines. Interestingly, I didn’t see any link to an official submission… Hmm, why is that…

             That has to be a very careful selection to boot. You know, I did mention that POWER is targeted at very narrow workloads. The link I posted about L1 latency addresses POWER nicely (2W for the L1 alone). Another oddity about POWER is its manufacturing process, which IIRC is specific to IBM and POWER. (The eDRAM found in POWER is, IIRC, unique in the industry and different from the eDRAM used by Intel.)

             Also, you may want to pay attention to more things than just clock speed. (The following links have a nice discussion of some of that.)

             Anyway, a thread about the submitted official SPEC numbers:
             [url<]http://www.realworldtech.com/forum/?threadid=141811&curpostid=141811[/url<]

             SAP benchmark:
             [url<]http://www.realworldtech.com/forum/?threadid=144193&curpostid=14419[/url<]

             In short: IPC is not on par with Haswell. It is a throughput machine aimed at large SMT.

            • the
            • 5 years ago

             Both the SAP and SPEC links you’ve submitted reiterate the claims made in mine: that POWER8 has a higher IPC than Haswell. If you actually look at the data, the results from my source and your source are nearly identical for SPEC.

             For example, the 80-core SAP benchmark breaks down like this:

             POWER8: 79750 score / 80 cores / 4.2 GHz clock = 237
             Xeon E7-8890 v2 (Ivy Bridge-EX): 49000 / 120 cores / 2.8 GHz clock = 145
             That makes POWER8 roughly 60% faster per clock than Ivy Bridge-EX. For Haswell-EX, my prediction of POWER8 being 50% faster per clock wouldn’t be too far off, given the IPC gains we’ve seen from Ivy Bridge -> Haswell in the consumer and low-end server space. For SAP, the IO system has a fair amount of stress put on it, so it is not a pure CPU benchmark, but it still highlights how much faster IBM’s POWER8 box can be.

            SPEC from your links break down like this:
            POWER8 int base: 1750 / 24 core / 3.5 Ghz = 20.8
            POWER8 int peak: 1280 / 24 core / 3.5 Ghz = 15.2
            Xeon E7-4890 v2 int base: 2420 / 60 core / 2.8 Ghz = 14.4
            Xeon E7-4890 v2 int peak: 2340 / 60 core / 2.8 Ghz = 13.9
             Here the POWER8 is faster per clock than the Ivy Bridge-EX systems, and the base score is just under 50% faster. The interesting thing is the gulf between POWER8’s base and peak results, whereas Ivy Bridge-EX barely changes. Still, these figures mesh with the idea that POWER8 is faster per clock than Haswell-EX, given what we know about the Ivy Bridge to Haswell transition in other markets.

             For the SPEC floating point results:
             POWER8 FP base: 1370 / 24 cores / 3.5 GHz = 16.3
             POWER8 FP peak: 1180 / 24 cores / 3.5 GHz = 14.0
             Xeon E7-4890 v2 FP base: 1770 / 60 cores / 2.8 GHz = 10.5
             Xeon E7-4890 v2 FP peak: 1730 / 60 cores / 2.8 GHz = 10.3
             [url=http://www.spec.org/cpu2006/results/res2014q4/cpu2006-20141009-32183.html<]Xeon E5-2699 v3 FP base:[/url<] 947 / 36 cores / 2.3 GHz = 11.4
             [url=http://www.spec.org/cpu2006/results/res2014q4/cpu2006-20141009-32183.html<]Xeon E5-2699 v3 FP peak:[/url<] 915 / 36 cores / 2.3 GHz = 11.0

             Again, here POWER8 is roughly 50% faster per clock than Ivy Bridge-EX. I’ve provided Haswell-EP results for comparison. I also wanted to verify that AVX2 was being used, as I was expecting FMA to provide a better improvement per clock.

             Using your data sources, I think it is safe to say that POWER8 is indeed faster than Haswell per clock.

             PS: One of those links has me active in the discussion under a different pseudonym. 🙂
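
             For what it’s worth, here is the same per-core, per-GHz arithmetic in one place, using only the scores quoted above (the scores, core counts and clocks are as quoted, not independently verified):

             # Reproduces the divisions above; inputs are the figures quoted in this thread.
             results = {
                 "POWER8 SPECint base":      (1750, 24, 3.5),
                 "E7-4890 v2 SPECint base":  (2420, 60, 2.8),
                 "POWER8 SPECfp base":       (1370, 24, 3.5),
                 "E7-4890 v2 SPECfp base":   (1770, 60, 2.8),
                 "E5-2699 v3 SPECfp base":   (947, 36, 2.3),
             }
             for name, (score, cores, ghz) in results.items():
                 print(f"{name}: {score / cores / ghz:.1f} per core per GHz")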

            • Klimax
            • 5 years ago

             Pretty sure you can’t skip the division by threads, otherwise you are likely getting incomparable or otherwise invalid numbers. There is a giant difference, especially for very wide machines like POWER: SMT2 versus SMT4 or outright SMT8…

             BTW: which one are you over there? (I should be very obvious… ;))

            • the
            • 5 years ago

            The reason to go with higher levels of SMT is if you continually see aggregate performance gains. As wide as the POWER8 is, software engineering and compilers have a long way to go before they’re able to put pressure on the design on a per clock basis.

            But no, performance traditionally is not divided by thread count unless explicitly in the context of per thread performance. Even then, both Intel and IBM designs have the ability to disable SMT to set optimal performance. In the case of POWER8, that can be changed dynamically by the OS (ie switching between SMT4 and SMT8 without a reboot).

             In conclusion, POWER8 is faster than Haswell per clock and is a high-clock-speed design. In fact, IBM ships POWER8 at higher clocks than the highest clock speed Intel offers Haswell at.

            As for who I am over there, I’ll just say a hint it was mentioned in a TR podcast. 🙂 (I will admit that I’m not David Kanter.)

            • Klimax
            • 5 years ago

             I am sure we are skipping a step or two there, but whatever; we are already getting into things not so relevant to my point, so I’ll grant you the raw IPC win for POWER8. (I should have done that already and moved on.) Now, what is its TDP, and how does it compare with Haswell?

             All I could find so far is [url<]http://www.extremetech.com/computing/181102-ibm-power8-openpower-x86-server-monopoly[/url<], and there it’s 250W. (I don’t remember any number being mentioned on RWT.) That is the final price of high IPC and high frequency. I would say that Intel might be able to push EP/EX there, but they judged it not a good use of resources. For comparison, the highest-clocked IB part I found (still the current EX) is the E7-8893 v2 at 3.4 GHz, or the Haswell-EP E5-2687W v3 at 3.1 GHz.

             Interestingly, the linked article with the TDP attributes most of the performance increase not to IPC but to the massively increased number of threads, which would make sense, as you yourself noted that current code cannot effectively use such a wide OOO design, so practical IPC is nowhere near theoretical IPC unless you use large SMT. (Making IPC comparisons fun, as we just showed.) Anyway, POWER shows that you can get better than Haswell and co., but your TDP will go through the roof.

             --
             Which episode? I don’t generally listen to any podcasts out there. (I prefer text over audio/video.)
             --
             BTW: Thanks for highlighting my mistake in assuming a single thread for IPC when posing said question.

             ETA: Just gave +1 to your last posts. They were at least a bit helpful… 😉

            • the
            • 5 years ago

             [url=http://www.heise.de/ix/meldung/IBMs-erste-POWER8-Server-L-steht-fuer-Linux-2170989.html<]OpenPOWER documents (German link about them, as they've since moved behind a paywall)[/url<] indicate that it consumes 190W. It wouldn't surprise me if the socket was specced to deliver 250W to make room for future multi-chip modules*, higher-clocked chips, or even a POWER8+ chip down the road. IBM did something similar with the POWER7, provisioning enough power in the socket for potential dual-die POWER7+ modules. Similarly, I believe the design documentation for Ivy Bridge-EX has the socket able to support 165W, even though Intel only shipped parts rated at 155W max. Haswell-EX is going to be a drop-in replacement, so those values are expected to be maintained, though Intel can go slightly higher without dropping support for existing systems. Considering that the EX servers can easily be priced in the six-digit range, protecting these investments for both customers and OEMs is important.

             So from a performance/watt standpoint, POWER8 should come out on top compared to Haswell-EX at the high end. The performance/watt gap will be much closer with lower-end Haswell-EX, and definitely with low-end Haswell-EP chips. If the POWER7 is anything to go by, IBM will also offer several POWER8 chips at different wattages; POWER7 went down to 150W for blade systems, for example. Essentially there is going to be a performance/watt overlap due to how Intel and IBM bin these chips for speed and power consumption.

             One thing I haven't pointed out here yet is that Haswell isn't scaling to very high clock speeds as core counts go up. This isn't too surprising, as adding cores generally gets more performance out of server workloads than blindly increasing clock speeds and voltages for a little more absolute per-core performance. Case in point: the 18-core Haswell-EP chip has a 2.3 GHz base clock, whereas the 8-core goes to 3.2 GHz. I think it is pretty clear that Intel isn't actively pursuing high clock speeds anymore except in the consumer desktop segment.

             *Oddly, the low-end S824 uses two 6-core dies for the 12-core configuration and not a native 12-core part.

             Edit: [url=https://techreport.com/review/26446/the-tr-podcast-154-amd-k12-ocz-future-and-the-z97-invasion-begins<]And the podcast I was on.[/url<]

            • Klimax
            • 5 years ago

             I’ll post links as soon as Real World Tech is up again.

             For now, just a couple of questions, which will suffice: which CPU has higher IPC than Haswell at Haswell’s clocks?
             And why does Intel use an L1 with higher latency in Haswell than Apple does in their CPU, and why is it higher than in past designs, including Willamette?

            ETA: Links posted. Should keep people busy…

          • Ethyriel
          • 5 years ago

           The question isn’t whether it can compete with x86 in single-threaded performance, it’s what the average person needs per thread. I suspect ARM is really close to that, if it’s not there already. Once that’s the case, you can emphasize multi-threaded performance in your kernel and frameworks.

      • the
      • 5 years ago

       An option for desktop systems would simply be to include both an x86 and an ARM chip. Since Apple controls both the hardware (well, minus the Intel parts) and the software, it can feasibly be done. IBM did something similar with their zSeries mainframes and x86-based expansion blades, all while running Linux instances.

      • nafhan
      • 5 years ago

       There are server ARM cores coming out now from AMD and others. I don’t see any reason why Apple couldn’t come up with something similar to replace Xeons on the desktop by 2017; they’ve got some of the best ARM architecture people in the business. It would be application compatibility, not performance, holding things back.

      Also, OS X should be capable of running on ARM with some tweaks. It is a *nix, after all, and one that’s not all that far removed from a major architecture change. Again, the apps/software is where things will break if that happens.

      Finally, to be clear, I’m not sold on this, I think it’s more likely that Apple will stick with Intel for their desktop stuff. I’m just coming up with reasons why they wouldn’t have to.

    • bhtooefr
    • 5 years ago

    My preferred crazy hypothesis is that Apple will buy AMD for their x86 license as well as the ATI IP, and put an x86 front end on their processors, to solve the incompatibility.

      • Ethyriel
      • 5 years ago

      Intel has to approve a transfer of the x86 license.

        • bhtooefr
        • 5 years ago

        Except Intel would want to approve it, due to the antitrust concerns.

         Basically, they’d most likely rather lose Apple as a customer AND get AMD out of general competition than take the restrictions they’d face for shutting a deal like that down (and I think Apple would wait to do it until AMD is in its death throes). Intel desperately needs AMD, whoever owns them, to keep making x86 processors.

          • Ethyriel
          • 5 years ago

          That is logical. Or maybe they’d deny the transfer and finally grant Nvidia the x86 license they’ve wanted for so long.

          • mesyn191
          • 5 years ago

           Antitrust concerns haven’t been an issue since the late ’90s, with the Citigroup merger that was retroactively made legal.

           Given the way BP was allowed to squeak out of paying most of its fines and skimp on the cleanup, as well as the almost total failure to prosecute banks over the robo-signing fraud scandal, it’s fair to say things have gotten worse, not better, since then when it comes to regulating the excesses of major corporations.

            • bhtooefr
            • 5 years ago

            To be fair, there’s also the EU, and they take a dimmer view on that kind of thing, and could move against Intel.

            • mesyn191
            • 5 years ago

            The EU is every bit as bad as the US when it comes to regulating major corps these days + Intel is a US company.

            • bhtooefr
            • 5 years ago

            There’s also some protectionism at play in that particular case, though.

            “Oh, you don’t want to play by our rules? That’s cool, we’ve got our own processor company that has some sufficiently competitive designs in the EU, and we can penalize you or even ban your imports. Your move, Intel.”

            (We’ll ignore that ARM isn’t actually competitive in a lot of markets yet…)

            • mesyn191
            • 5 years ago

             There is no company with sufficiently competitive CPU designs in the EU vs. what Intel has now, much less what it had several years ago.

            Also if they wanted to be protectionist the time to do it would’ve been before Intel got huge back in the 80’s.

        • Theolendras
        • 5 years ago

           Wasn’t it abolished with the AMD antitrust settlement?

          • bhtooefr
          • 5 years ago

          Looks like all that was abolished was the restriction on where AMD could get their chips fabbed?

          But, GlobalFoundries also got a license agreement with Intel, it seems.

        • xeridea
        • 5 years ago

        If they purchased AMD but still kept AMD as its own company, what would happen then?

          • smilingcrow
          • 5 years ago

          That’s not a workaround.

        • shiznit
        • 5 years ago

        I can see Intel approving it in exchange for an exclusive manufacturing agreement. Those expensive fabs will need something to do.

         That said, I don’t think Apple cares about x86 that much anymore, and Imagination’s tech is superior to AMD’s in many ways. Most business apps in 2017 will be web-based, and native Windows/Intel software will be the exception, not the rule.

      • Milo Burke
      • 5 years ago

      I thought the x86 license isn’t transferable?

      And the last thing I want is Apple to own AMD’s graphics division.

        • Takeshi7
        • 5 years ago

        Maybe they could spin off ATI before Apple buys AMD?

          • xeridea
          • 5 years ago

             No. A huge focal point is the APUs and HSA; there is no way they would just abandon them.

            • bhtooefr
            • 5 years ago

            And, let’s face it, with Apple’s work with GPGPU tech (and not CUDA, either), Apple could use ATI.

            It’s crazy, I admit it, and there’s other ways to do it (for instance, pay AMD to slap their name on Apple’s designs), but Apple’s style is typically to go big or go home.

            • the
            • 5 years ago

            Apple has been a big pusher of GPGPU in their systems. They’re the ones who originally developed OpenCL for example.

          • ronch
          • 5 years ago

           Spin off the crown-jewel graphics division and keep the troubled CPU division? That’s like buying a candy bar, throwing away the candy and keeping the wrapper.

        • Jakall
        • 5 years ago

         I think the license is not transferable. But the 8,000 or so patents that AMD holds in the microprocessor and x86 business are, including the crucial x86-64 part: could you imagine Intel running Windows in 32-bit only? And then, who has the better lawyers, Intel or Apple? We all know the answer to that…

          • chuckula
          • 5 years ago

           [quote<] But then the 8000 or so patents that AMD holds in microprocessor and x86 business are, including the crucial x86-64 part [/quote<] Those are already licensed to Intel, and the license would carry over to any new owner in the event of a transfer from AMD… it doesn't matter who gets them, Intel can't be sued. Believe me, lawyers think of about a million different eventualities just like that when they do these licensing agreements.

      • Deanjo
      • 5 years ago

       If Intel wanted Apple to have an x86 license, they would just directly grant them one. It would be easier to get past regulators, and since Apple would only use their chips in their own products, there wouldn’t even be any real additional competition for Intel.

      • cmrcmk
      • 5 years ago

      My preferred crazy scenario is Apple moving to ARM for everything power/thermal constrained (mobile and low end iMac) while switching back to Power for Mac Pros and high end iMacs!

        • mganai
        • 5 years ago

        Power’s been a non-factor outside of the high end server sector for a decade, which is why Apple bailed on them.

      • HisDivineOrder
      • 5 years ago

       Ostensibly, AMD loses the x86 license the very second another company owns AMD, or if AMD merges with another company that has any aspirations for CPUs. That’s why AMD had to buy ATI (or nVidia, as they originally tried).

      Say, Qualcomm buys AMD? No x86 license. Apple buys AMD? No x86 license. nVidia buys AMD? No x86 license. Basically, anyone buys AMD, they lose the single thing that most companies WOULD buy them for.

      So…

      Yeah, no. Now Apple might be able to get an exception if somehow they could do their PR spin magic on Intel and convince them it was in their best interests (ie., contracts, money, something-something) to let the new AMD/Apple keep the x86 license.

      But that seems like a real longshot.

       More likely, the companies that want the rest of AMD and are fine with losing the x86 license realize that buying up the scraps as AMD slowly death-spirals into oblivion is probably cheaper than trying to buy the whole company, which would value itself WITH the x86 license even though it would lose it the very moment the sale was done.

      Hell, I imagine nVidia and Intel dividing up the company with Intel snatching up most of the GPU-related patents and nVidia snatching up most of the CPU-related patents. I also see Qualcomm and Samsung scrambling in there somewhere for some of them, too.

        • bhtooefr
        • 5 years ago

        Actually, the CPU-related patents makes me wonder about something else.

        In the AMD case, the x86 license goes two ways. There may be some way to use Intel’s AMD64 license as leverage.

        • the
        • 5 years ago

        Actually the license can transfer but Intel would have to sign off on such a deal. Due to anti-trust regulators watching, it would be in their best interest to let it transfer. It does however give Intel an avenue to negotiate. Say for example Intel wanted a complete patent cross licensing agreement with the buyer. Such an agreement is something that the regulators would sign off on so Intel would get something out of the deal.

          • HisDivineOrder
          • 5 years ago

          Ehhhh.

          It’s mostly assumed now that Intel doesn’t have to worry about anti-trust implications because of ARM (and to a lesser extent MIPS). They’re so prevalent and so widespread and such an obvious concern to Intel, it’s hard to imagine anyone telling Intel that if AMD goes out of business they’ll have a lock on the PC market.

          And having a lock on only the (reportedly) diminishing PC market built around only x86 is like Samsung having a lock on the Plasma TV market while LCD’s run rampant. Sure, Plasma’s got its advantages, but it’s not exactly a powerhouse in terms of sales.

            • the
            • 5 years ago

            Regulators are looking at Intel in the context of the PC market though. While Windows RT is out there as a foot hold to get ARM into the PC market, its adoption is less than what AMD currently ships in this space.

            In the greater context of computing where it would include tablets and smart phones, yes Intel has plenty of competition. So far regulators haven’t been keen at looking at Intel from that perspective as that wasn’t brought up in the various trials. Intel could appeal to make this argument but so far it hasn’t stuck with regulators.

      • Billstevens
      • 5 years ago

      AMD at its current value and business model is dead weight for a company like Apple : )

        • HisDivineOrder
        • 5 years ago

        Which is why waiting until they’re dead and all someone’d be buying is the patent portfolio is probably the way they’ll go.

        Team Rockstar anyone? Apple jerks out its hand and calls out, “By our powers combined, we are Captain Patent Troll! Rollcall! Apple! Microsoft! Blackberry! …Sony!”

         Google’s watching and shaking their head. Intel sighs and whispers to Google.

        “Guys, you realize that this whole software patent business is not going to end well for you.”

        Captain Patent Troll blinks. “Wut?”

      • meerkt
      • 5 years ago

      That would be a sad day.

      • Peter.Parker
      • 5 years ago

      What ? AMD running OSX? This is crazy! In fact, it’s so crazy, it might just work!

      Sent from my AMD Hackintosh.

        • bhtooefr
        • 5 years ago

        Well, in this insane idea, it would be AMD in name only.

        It’d actually be an Apple x86 CPU using AMD’s x86 license.

      • ronch
      • 5 years ago

      Here’s a crazy prediction: AMD’s Zen is going to propel AMD back to the high end and future Macs are going to be powered by AMD Zen processors.

      Not placing any money on the table though.

        • Deanjo
        • 5 years ago

        I have more faith in SSK forgetting how to use capslock.

        Insanity: doing the same thing over and over again and expecting different results.

        • HisDivineOrder
        • 5 years ago

        AMD’s promise that Zen will save the day reminds me of AMD’s promises for Phenom and Bulldozer, of Blackberry’s promises of their last few OS releases, of Palm’s promises, and of 3dfx’s last few promises (post-disastrous merger btw that left them cash-poor and with little advantage for it in the long run).

        I still remember the day when 3dfx announced they’d sold all the good stuff to nVidia while retaining only the STB-related nitty-gritty of “hardware support.” I remember all the weeping as all those shiny “new” 3dfx boards became shiny, unsupported hardware that would receive no more updates, but SLI and other assorted 3dfx technology (and people) were incorporated into nVidia.

        It taught me that you NEVER buy product from a company on the verge of collapse, no matter how unlikely you think it to be in the short term.

          • Deanjo
          • 5 years ago

           [quote<]AMD's promise that Zen will save the day reminds me of AMD's promises for Phenom and Bulldozer, of Blackberry's promises of their last few OS releases, of Palm's promises, and of 3dfx's last few promises (post-disastrous merger btw that left them cash-poor and with little advantage for it in the long run).[/quote<] Yup, right on the money. I see AMD and VIA battling it out in the not-too-far-off future, fighting over those extremely small markets just to stay alive.
