Rumor: Intel to stop offering socketed desktop CPUs

No better way to start the post-Thanksgiving week than with a strange, eyebrow-raising, and possibly worrying rumor. Yes, according to the folks over at Japanese site PC Watch, Intel is going to abandon socketed processor packages with its 2014 architectural refresh.

PC Watch is talking about Broadwell, the 14-nm successor to Haswell (which will itself supplant Ivy Bridge next year). Reportedly, Broadwell will only be available in BGA, or ball-grid-array, variants. If I’m reading this right, you’ll only be able to buy Broadwell processors soldered onto motherboards—no more retail-boxed, easily interchangeable CPUs.

PC Watch goes on to say Intel will bring the platform controller hub die onto the processor package, as well, which will make Broadwell a sort of proto-SoC. The chipmaker is expected to target power envelopes between 10W and 57W. Right now, Ivy Bridge spans the gamut from 17W (for ultrabook variants) to 77W (for high-end desktop chips). That’s down from a maximum of 130W for six-core, LGA2011 Sandy Bridge CPUs, which still lack Ivy-based successors.

Unless this story is complete bunk, it looks like Intel will double down on low-power, small-form factor systems at the expense of traditional desktops. That would probably be good for computing in general, but sadly, it would mean we enthusiasts get the short end of the stick.

Of course, we already know what it’s like to build a PC with a processor soldered onto the motherboard. Such boards are widely available today—just search Newegg or Amazon for Mini-ITX Intel Atom or AMD E-series offerings. The reduced flexibility is unfortunate, but it doesn’t make building your own PC impractical—not even remotely. (Thanks to X-bit labs for the tip.)

Comments closed
    • kneelie
    • 7 years ago

    This is sad, or at least has the potential to be very sour for PC enthusiasts. Either people are going to have to get good at doing [url<]http://www.youtube.com/watch?v=6Zh46cR6k3s[/url<] or take what they are given, like it or not. The ultra-small, ultra-portable market is not something I am interested in. I hope that they are considering the enthusiast, artist, and workstation markets, but it does not appear that they are. Games, photo and movie editing, and virtual music studios can never have enough processing power and memory. I personally want a box where I can swap parts, add several expansion cards, cram the board full of memory, and hook up several monitors. When are they going to come out with chips with more than 4 cores, or at least a dual-chip board, which may be better anyway? I know there are Xeons, but 2-socket Xeons come in 2 generations behind standard desktop parts. Intel’s new competition is ARM and the ultra-portable market, a market that produces gadgets that are useless for most of what I use a computer for. Sure, they are great for reading a book or checking an email, but that is about it.

    • sweatshopking
    • 7 years ago

    [quote<] low-power, small-form factor systems at the expense of traditional desktops [/quote<] remember that time that ms made their new operating system lean more in the direction of mobile at the expense of the traditional desktop? geez, what the heck were they thinking!? not like the entire market is going that direction!

    • ezrasam
    • 7 years ago

    Given Intel’s NUC news, this doesn’t seem like just a rumour to me…

    Intel is going after Apple’s way: “I am smart, so I choose what my customers use.”

    After all, Intel is in the game with ARM…
    Intel needs serious competition for a good pricing structure; otherwise we all know how they price their products.

    I believe desktop performance will not be the primary focus for a few years to come; it will be thermals/power instead.
    Depending on competition, we might have reasonably priced products as well.

    • ub3r
    • 7 years ago

    I presume there are technical reasons for this…

    Low-power, high-impedance CPU I/O is more susceptible to track-to-track coupling due to the parasitic path inductances.

    Those interconnects are inductors (although very low-inductance ones), and getting rid of them will result in fewer coupling issues and higher slew rates, which means higher frequencies will be attainable at lower power (see the sketch below).

    I can tell you from experience in high-speed design, impedance matching is a pain in the ass, especially around highly inductive connectors.
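    A rough feel for ub3r’s point, as a minimal Python sketch; the per-pin inductance figures below are generic placeholders, not measurements of any Intel package:

```python
# Back-of-the-envelope signal-integrity sketch for socketed vs. soldered pins.
# ASSUMPTION: ~1 nH of parasitic inductance per socket pin path and ~0.2 nH per
# soldered BGA ball are illustrative placeholders, not measured Intel figures.
import math

L_PIN_SOCKET = 1.0e-9   # parasitic inductance of one socket pin path, henries
L_PIN_BGA    = 0.2e-9   # shorter soldered-ball path, henries

for f_hz in (100e6, 1e9, 5e9):
    x_socket = 2 * math.pi * f_hz * L_PIN_SOCKET   # inductive reactance X_L = 2*pi*f*L
    x_bga    = 2 * math.pi * f_hz * L_PIN_BGA
    print(f"{f_hz / 1e9:4.1f} GHz: X_L socket = {x_socket:6.2f} ohm, BGA = {x_bga:5.2f} ohm")

# X_L grows linearly with frequency, so every parasitic nanohenry makes
# impedance matching harder at higher signalling rates, which is one plausible
# technical motive for dropping the socket.
```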

    • Kougar
    • 7 years ago

    So continues the trend in consumer devices: if it breaks, throw it all away and buy a new one? Are people willing to throw away a $300 CPU if their motherboard should happen to break in any way? If some port shorts out, or a power surge kills the board? When the next episode of bad caps comes along, like it did with HDTVs? If a BIOS flash bricks the board?

    It would be hilarious if there were another massive chipset recall like the P67 recall in such a scenario; Intel would have to spend a considerable sum of money salvaging CPUs off every bad board.

    I would suspect there’s a half-truth to such a rumor: Intel will be pushing BGA for low-power and SFF models, but will still be offering higher-end models to serve the performance and flexibility side of the market. Otherwise the only alternative is that Intel is crazy, and anyone who wants a socketed chip will be forced to buy into LGA2011 and whatever replaces it when Haswell-E shows up.

    • halbhh2
    • 7 years ago

    Gotta admit, I find this story plausible and…well, amusing, actually.

    • brucethemoose
    • 7 years ago

    AMD and Nvidia already do this for GPUs… it’s not that big of a deal, really.

    It’ll ease manufacturing a tiny bit if anything (pins can be a PITA to get right), and I’ll bet all the OEMs will still sell/manufacture motherboards. Yes, this’ll give Intel another way to squeeze more money out of customers, but it’s no worse than all those locked SB/IB CPUs.

      • xeridea
      • 7 years ago

      Video cards are a different story. They are tightly paired to what the GPU needs (memory, power, VRMs, cooling, BIOS). People wouldn’t be switching GPU chips to upgrade because it wouldn’t be an upgrade, and it would be very hard to manage. The GPU is by far the most expensive part of the card. Video cards have always been like this. It is the same for tablets and phones: you can’t really just swap the chip easily, and it needs to be in a small package.

      CPUs have been replaceable for 30 years for good reason. Upgrades, plus the complex mobo can die or partially stop working (ports, bricked BIOS, etc.).

        • designerfx
        • 7 years ago

        That was the first thing I thought of. Processor failure for any reason = complete computer failure, if this moves forward.

      • geekl33tgamer
      • 7 years ago

      It is. I can’t see mobo vendors offering 20 different motherboards with 20 different CPUs in all combinations – that will be a PITA for any vendor to manage. They will potentially segment products: i3 CPUs on low-end mobos, and i7s only available on the highest-end mobos.

        • A_Pickle
        • 7 years ago

        Yeah. That sucks. Sometimes I want the performance of a CPU, but don’t need the features of one of those crazy expensive high-end boards.

          • travbrad
          • 7 years ago

          How will you get by with less than 16 USB ports?

    • eitje
    • 7 years ago

    The model number options are endless, once we start mixing dozens of CPU variants with dozens of motherboard variants. 🙂

      • internetsandman
      • 7 years ago

      I dunno, I think a P10-Z88-ITX-i3/5100-Pro is a perfectly reasonable model number

    • wingless
    • 7 years ago

    we all knew Intel would pull some monkey s**t like this if AMD started circling the drain, AND THEY DID!

    • maxxcool
    • 7 years ago

    Awesome. If true, I blame the lack of competition for Intel…

      • Deanjo
      • 7 years ago

      I doubt it. Intel would have gone this way even with competition.

        • maxxcool
        • 7 years ago

        If there were 3 successful x86 vendors with more than a scant slice of market share, this would be a different scenario (*if it is true*).

        If there were fear of bleeding customers because brand X and brand Y allowed socket use vs. a “here is your limit” bonded board, they’d more likely relegate the proposed action to business-class CPUs and lower-end parts, where swappable sockets would be moot because the on-board features would meet computing demands.

        However, that’s not the case. *Again, were this true*, they’re killing the grey market à la Apple, where people can hold onto a socket 1156 CPU because at 3GHz it’s more than fast enough. And in the process they’re also reducing the number of options for end users, thus upping their sales and margins while passing off costs to the OEMs to do the dirty work of mounting a CPU and dealing with the RMA nightmare.

          • derFunkenstein
          • 7 years ago

          WHAT

          Are you saying that AMD CPUs will fit in Intel’s sockets?

    • Tristan
    • 7 years ago

    Good news. Integration is accelerating, thanks to Moore’s Law and also to stacked chips, which will be used very soon. Within 5-7 years, a modern PC chip will include one layer for the CPU, one layer for the GPU, one layer for I/O (sound, SSD, USB, …), one layer for cache (for the CPU and GPU), four layers for system RAM, and ten layers for SSD memory. Very simple and cheap mobos will be required only to deliver power (some 10W only) and a few I/O connectors.

      • xeridea
      • 7 years ago

      Good luck cooling the inside of your 1″ thick SoC.

        • geekl33tgamer
        • 7 years ago

        With non-removable “dustbuster” heatsink.

    • GTVic
    • 7 years ago

    There is probably no reason why the motherboard can’t be divided into two pieces: Board 1 = processor and memory, Board 2 = PCI Express and I/O.

    Roughly 60% of the CPU pins are power or unused, 25% are memory, and 15% other. If you solder the processor onto Board #1, which also contains the memory DIMM slots, 85% of the pin connections are contained on that board. That leaves fewer than 200 connections to Board #2, and if you used a tech like Thunderbolt you could cut that drastically.
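    GTVic’s percentages are easy to sanity-check. A minimal sketch, assuming an LGA1155-class part; the 60/25/15 split is the commenter’s estimate, not an official pinout breakdown:

```python
# Sanity check of the pin-budget argument above.
# ASSUMPTIONS: 1,155 total lands (an LGA1155-class CPU); the 60/25/15 split is
# the commenter's rough estimate, not an official pinout breakdown.
TOTAL_PINS = 1155
SPLIT = {"power/unused": 0.60, "memory": 0.25, "other": 0.15}

for name, frac in SPLIT.items():
    print(f"{name:>12}: {round(TOTAL_PINS * frac):4d} pins")

board1 = TOTAL_PINS * (SPLIT["power/unused"] + SPLIT["memory"])   # stays on Board #1
board2 = TOTAL_PINS * SPLIT["other"]                              # crosses to Board #2
print(f"contained on Board #1: {round(board1)} pins (85%)")
print(f"connections to Board #2: {round(board2)} (fewer than 200, as claimed)")
```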

    • vascaino
    • 7 years ago

    Although this sounds like a big concern for the DIY crowd, I think that it will only actually affect them when they first start a new build. The problem is that there will be far fewer choices. Motherboard models will only “carry” a small variety of CPUs. Most might even be available with only one type of CPU. So we would have much more limited choices.

    As for upgrading CPUs, as most people have noted, a CPU upgrade is a much rarer occurrence nowadays, especially for Intel CPUs. By the time newer and faster processors are out, most would be wise to upgrade the motherboard as well in order to take full advantage of all of the CPU features. That is, if the sockets are compatible at all in the first place.

      • xeridea
      • 7 years ago

      It is a rare occurrence for Intel because they make a new socket every generation. I have upgraded many AMD systems without a new board. Also, if a board fails, now you will be forced to buy a new CPU, which could be costly.

        • Deanjo
        • 7 years ago

        Yup, the other thing is that CPUs themselves rarely go bad. Most of the time it is the motherboard that gives up.

          • vascaino
          • 7 years ago

          I do agree with both of you. Having the freedom to replace your CPU when needed / appropriate is a great thing. It would be better for enthusiasts if different sockets were somewhat compatible. However, this is not a feature most people care about. Most people just buy some junk cleaning / antivirus product when their computer gets slow. If all fails, they back up their data and just buy a new computer, because it is easier. And it’s also better for the hardware manufacturer, who sells more. I don’t have any data to back up what I’m saying, but I think it’s safe to assume that manufacturers make more money selling a whole new computer than just a single CPU. And it would be cheaper to manufacture boards with soldered CPUs. Put one with the other and you see why Intel really doesn’t care about enthusiasts. They’re a small chunk of the market.

            • Jason181
            • 7 years ago

            I think they care some, or there wouldn’t be the “K” series CPUs. The side benefit for Intel is that since you’ll be choosing a CPU for the life of the system, you might err on the side of additional CPU power.

            • vascaino
            • 7 years ago

            I’m not saying that they don’t care, or that they didn’t care. They do, and that is one of the reasons why we still have very high-end series and socketed CPUs. But I assume that this market is diminishing to a point where profits are soon going to become very small or maybe nonexistent. And Intel, predicting this, is shifting its focus to what is more profitable. That’s my 2 cents.

    • theadder
    • 7 years ago

    There was a time when I might have cared about this.

    Now I don’t; I’m content with the reduced power consumption and better performance as more of the device is handed over to Intel.

      • xeridea
      • 7 years ago

      You don’t get reduced power consumption or better performance going to BGA. Moving the hub to the CPU could be done with a socket. You only get a non-replaceable CPU, fewer choices, and more money shelled out to Intel.

        • MadManOriginal
        • 7 years ago

        Not bored of posting the same ‘OMG MORE MONIES TO INTEL’ assumption yet?

          • xeridea
          • 7 years ago

          Nope, just stating the facts. Monies to Intel is the lesser matter; the greater matter is shafting the consumer. Though with the posted update saying the rumor was mistranslated, it may not be an issue anyway.

            • MadManOriginal
            • 7 years ago

            The thing is, it’s pure assumption that Intel would get more money from it. It’s also a poor assumption, because Intel isn’t going to kill their entire OEM channel by not selling them CPUs. Even if this BGA mainstream platform were to happen, the most likely outcome would be…basically the same as it is now. Intel offering branded motherboard+CPU, and selling CPUs to OEMs or motherboard manufacturers to make their own product.

        • NeelyCam
        • 7 years ago

        Sockets cost money. No socket = cheaper

    • izmanq
    • 7 years ago

    If Intel really does this, perhaps it is a good chance for AMD to get back to the top position 😀

    • albundy
    • 7 years ago

    That’s great news! AMD can have a new chance to shine!

    • sschaem
    • 7 years ago

    TR trolls its own website now? I thought that was our job.
    Guys, this will not happen in the coming decade. Relax.

      • NeelyCam
      • 7 years ago

      S|A’s update says:

      [quote<]"two OEMs have confirmed to SemiAccurate that they have now been briefed that Broadwell is BGA only."[/quote<]

      Sounds like it’s happening

        • chuckula
        • 7 years ago

        Or… it sounds like Intel is simply skipping Broadwell on the desktop and keeping it in mobile form factors while the desktop remains Haswell & LGA-1150 for two years. SemiAccurate’s own stories back up this theory (go find the one where they lament the fact that Intel isn’t planning an immediate refresh to the Z87 chipset for Haswell… the reason is that Haswell will be around for 2 years on the desktop and the chipset doesn’t need to change).

          • deinabog
          • 7 years ago

          This makes a lot more sense. I’ve read that the Haswell family of CPUs will be the chips to have on the desktop for a while.

    • chuckula
    • 7 years ago

    [b<]UPDATE[/b<]: This is from a native Japanese speaker posting on Xbit-labs, so take it with a grain of salt (but then again, take everything with a grain of salt):

    [quote<]EDIT: This article is mistranslated. This article says:
    -- Intel will not provide new products for desktop and non-BGA laptop segments in the Broadwell era
    -- Instead, they will provide higher-clocked Haswell for those segments in 2014
    -- Broadwell is "more than a tick", and it will include some technologies that were previously planned for Skylake
    -- This is because Intel needs to be more competitive in the tablet market, and this may mean the end of the tick-tock strategy
    -- It mentions nothing about Skylake and later, or whether they will be LGA or not for the desktop.[/quote<]

    Source: [url<]http://www.xbitlabs.com/news/cpu/display/20121122022244_Intel_s_Haswell_Could_Be_Last_Interchangeable_Desktop_Microprocessors_Report.html[/url<]

    The above post makes a [b<]TON[/b<] of sense: Haswell continues on the desktop in a rev-2 version in 2014 [b<]in the standard LGA format we have come to expect[/b<], while Broadwell is exclusively targeted at mobile solutions, [b<]where BGA packaging is already well established, so nothing really changes[/b<]. Could there be some all-in-one Broadwell solutions that make it into desktops? Sure, but they wouldn’t be any different from the all-in-one solutions using mobile chips that you see now, so once again: nothing changes.

    Another piece of the puzzle: remember how I lampooned SemiAccurate? Well, Charlie himself says that Skylake is showing up in LGA packaging on the desktop! So it makes sense: on the desktop there is still LGA, don’t panic. Intel could simply be skipping [b<]desktop[/b<] Broadwell, which actually makes a lot of sense considering that Broadwell on the desktop compared to Haswell would be similar to Ivy Bridge vs. Sandy Bridge: not a huge step up. In mobile, however, Broadwell should be worth the effort.

    Good news for AMD: Trinity + Trinity 2.0 will still have better IGPs on the desktop. Bad news for AMD: in mobile, where IGPs really count, Haswell is going to take away the Trinity advantage and Broadwell is going to make life tough for Kaveri.

      • Generic
      • 7 years ago

      Thank you for this.

      Hope it pans out as described.

      • MadManOriginal
      • 7 years ago

      Funny: after reading through the translation, and re-reading parts to try to make sense of it, that’s what I thought was likely, and in fact there is a diagram in the article that suggests just that. The other angle I considered is that Intel wouldn’t want to use 14nm for desktop socketed CPUs initially, so they would use 14nm for BGA Broadwell designs of various types and continue with 22nm for Haswell or ‘Haswell 2.0’ for at least a little while. I don’t know if one would call that the end of tick-tock, or just an alteration to it… I’d call it a decoupling of tick-tock, whereby new processes aren’t used for an entire range of CPUs but just for some of them.

      • halbhh2
      • 7 years ago

      Heh, we’re not in Kansas anymore, Toto.

      Tick Tock = speed for competing with AMD.

      Yes, they’d better move stuff forward all right.

      • Sahrin
      • 7 years ago

      >Good news for AMD: Trinity + Trinity 2.0 will still have better IGPs on the desktop.
      >Bad news for AMD: In mobile, where IGPs really count, Haswell is going to take away the Trinity advantage and Broadwell is going to make life tough for Kaveri.

      Intel’s problem has always been drivers, not hardware.

    • slaimus
    • 7 years ago

    The scarier part of this story is that I believe Intel thinks the days of mainstream “tower” type computers are near their end. They will probably be replaced with custom SFF enclosures.

    If you think of SFFs/laptops/all-in-ones, changing to a socketless design makes a lot of sense, since those designs are space-limited.

    Intel probably looked at the numbers, and they show enthusiasts have accepted paying more to buy the higher-end 3570K/2500K/3770K/2600K/SB-E 38xx type processors. They will still sell those types of CPUs with sockets, if I am reading the story right.

    The days of buying a Pentium E2140 for $70 and overclocking it to 3.2GHz to match a $300 Core 2 Duo, or even overclocking a Celeron 300A to match a Pentium II 450, are long in the past already.

    Low-end chips will only be going into the $249 Walmart specials, and going socketless will save money in a low-margin market.

    It is also a good way for them to sell more motherboards as a bundle, as not many people even buy Intel-branded boards these days.

    So it definitely makes sense for Intel, and the losers are us budget enthusiasts.

      • xeridea
      • 7 years ago

      Win for AMD woot woot.

    • ludi
    • 7 years ago

    For better or worse, the socketed CPU was something of a manufacturing oddity, driven by limitations in package design (and later, heat dissipation requirements) that pretty much forced the industry to use elaborate pinning and socket layouts.

    The practical BGA package didn’t become mainstream until the 1990s, and prior to that the only good option for surface mount with lots of pins was TQFP, which placed a lot of limitations on pin layout and routing:

    [url<]http://en.wikipedia.org/wiki/TQFP[/url<]

    IIRC the last x86 CPU designs to use it on a large-scale basis were some variants of the 486/SX, and mostly for embedded applications.

    edit: clarity

    • sircharles32
    • 7 years ago

    What happens when it’s time to troubleshoot?
    You won’t be able to separate the CPU from the motherboard.
    If something doesn’t work, you’ll have to send both components back (if they’re still under warranty).
    If it’s not under warranty, then guess what: you get to pay for a new CPU + motherboard, even if it was just the $100 motherboard that died. Not a good thing for the consumer, if you ask me.

    • WaltC
    • 7 years ago

    This is probably a baseless rumor, but if Intel should actually do something like this–what a boon for AMD. The rumor is Back to the Future–circa the 1980s, the only things you could buy were motherboards with CPUs surface-mounted on them. Ugh…;) To change out a component you had to swap motherboards–that was your only option.

    It was actually Intel standards that created upgradable expansion buses like ISA/PCI/e, and so on, and unless Intel is planning on dumping those, too, this rumor would not seem to make a whole lot of sense. Most likely someone has misunderstood something important.

      • shiznit
      • 7 years ago

      AMD may not be around in 2014

      • Deanjo
      • 7 years ago

      [quote<]It was actually Intel standards that created upgradable expansion buses like ISA/PCI/e, and so on,[/quote<] It was actually IBM that created the ISA standard and the Gang of Nine created EISA.

    • Geistbar
    • 7 years ago

    This reminds me of an earlier idea Intel had to let people purchase some form of activation code to upgrade their processors — unlocking features like Hyper-Threading, upping the multiplier, or both. While that sounded like a bunch of nickel-and-diming for contemporary systems, it could be enough to make a BGA approach not that bad.

    As time has gone on, it seems to me that we’ve had much less to differentiate chips at the binning process other than “x number of these need to be made” — two generations from now, I could see those differences being even more insignificant. If they combined BGA with that idea, then the number of different motherboard SKUs that would need to be made wouldn’t be so bad — only one model per base silicon type, instead of one model per CPU model (see the sketch after this post). You could buy your motherboard with either a quad-core or an octo-core processor, then decide on first install which specific model within that family you wanted.

    It’d be quite a blow for overclocking — maybe the end of it in most cases — but I could see it being potentially fairly tolerable within the enthusiast circle if that’s how it works out. If it isn’t, then I expect a huge contraction in choice, due to the huge number of mobo/CPU combos currently available being reduced for sales-management purposes.
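    The SKU arithmetic behind Geistbar’s idea, as a toy sketch; every count below is invented for illustration, not an Intel roadmap figure:

```python
# Hypothetical SKU counts: soldered CPUs with and without unlock codes.
# All numbers are invented for illustration.
mobo_models = 20   # distinct motherboard designs a vendor might offer
cpu_models  = 12   # distinct retail CPU models binned from the same silicon
base_dies   = 2    # distinct base silicon types (e.g. quad-core, octo-core)

print("one soldered SKU per CPU model:", mobo_models * cpu_models)   # 240 SKUs
print("one soldered SKU per base die: ", mobo_models * base_dies)    # 40 SKUs
# With activation codes picking the final model at first install, inventory
# scales with the number of dies rather than the number of CPU models.
```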

      • MadManOriginal
      • 7 years ago

      I’d forgotten about the software-based feature upgrade. Good and interesting point, it would decrease the physical inventory migraines manufacturers would have otherwise, and might even suit enthusiasts who want certain features but not others if the features were semi-a la carte. Ex: want a low TDP dual core but need AES or full virtualization features? Pay for the base CPU then buy just those upgrades.

      One downside is one would have to buy the upgrade from only one real source and there wouldn’t be a used market or maybe not much price competition.

        • xeridea
        • 7 years ago

        Yeah, or just get an AMD CPU, where they don’t willy-nilly exclude features from random CPUs just to confuse people.

          • maxxcool
          • 7 years ago

          *IF* they’re even in the desktop CPU game at all in a year…

          • MadManOriginal
          • 7 years ago

          If AMD makes a competitive product that meets my needs I will buy their CPUs. An aspect of that competition is feature set, but for me right now there’s nothing compelling enough to make AMD CPUs worthwhile as a whole but that could always change.

          My post is pure speculation, but if Intel delivers price/performance with feature set upgrades then they’ll get my money, if not AMD will.

            • maxxcool
            • 7 years ago

            I am not optimistic 🙁 I am still waiting to see comment from Intel … ;\

        • Geistbar
        • 7 years ago

        Yeah, I feel it could work out OK overall, though I do agree with your worries with respect to putting more control in Intel’s hands.

        I think it’s important to keep in mind how many former motherboard features are being integrated into the CPU with each successive generation. We’re not that far off from SoCs being the norm for x86 — at that point, it won’t really matter to even enthusiast end users whether it’s BGA or LGA or anything else. The “motherboard” would be reduced to being the board that everything connects to, with little unique circuitry of its own.

    • End User
    • 7 years ago

    I want total control over my build. This does not sound good at all.

      • NeelyCam
      • 7 years ago

      I prefer higher integration, better quality and lower cost over “total control”. I’m not a control freak

        • Deanjo
        • 7 years ago

        Ya, but all too often higher integration != better quality. Take for example audio, video, and networking. I still find myself purchasing discrete versions of those items to replace the cheap solutions they offer on most motherboards.

    • mcnabney
    • 7 years ago

    This won’t happen, because despite all of the enthusiast interest in changing processors, what this really does is create an inventory nightmare for every PC manufacturer.

    Right now, Dell or HP or Lenovo might be selling 10 different processor tiers and 10 different board varieties to provide different system tiers. In order to provide the same level of product and performance matching, they would then have to inventory 100 SKUs – insanity!
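    The combinatorics mcnabney describes, as a minimal sketch (the tier names are made up; only the counts matter):

```python
# Illustration of the SKU explosion described above. Tier names are invented.
from itertools import product

cpus   = [f"cpu-tier-{i}"   for i in range(1, 11)]   # 10 processor tiers
boards = [f"board-tier-{i}" for i in range(1, 11)]   # 10 board varieties

# With soldered CPUs, every CPU+board pairing becomes its own inventory item.
skus = [f"{c}+{b}" for c, b in product(cpus, boards)]
print(len(skus))   # 100 SKUs, versus 10 CPUs + 10 boards stocked separately today
```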

      • Deanjo
      • 7 years ago

      They already do this with laptops, so doing it for desktops isn’t going to be any more of a nightmare.

    • Beelzebubba9
    • 7 years ago

    Anyone want to take a bet that this won’t apply to high-end enthusiast or workstation parts, for a while at least? Despite all of the worrying and fretting, between the K-series CPUs and the surprisingly cheap entry-level LGA2011 parts, Intel seems pretty happy to cater to the enthusiast. I’d wager that there will still be room for a higher-end (though still basically affordable) Xeon or workstation-grade part with a socket.

    Plus, I don’t see any game-stopping issues with buying a CPU and a motherboard as a bundle. Sure, it will limit choice, and that’s a bad thing philosophically, but I’d wager the majority of enthusiast purchases fall within a pretty narrow range of CPU, chipset, and motherboard options, and for most people (myself, at least) it won’t change very much.

    Also, anyone else here feel like the next PC they build might be the last? Since CPU performance has been ‘good enough’ for a while now, I’ve noticed that my upgrade cycles are stretching out longer and longer (my 3.6GHz i7 920 is still more than fast enough for anything I currently do), so if my next Haswell/Broadwell upgrade lasts as long, I’d be looking at 2018 before it’s replaced, and who knows what computers will even look like then.

    • Chrispy_
    • 7 years ago

    Rumour or not, this isn’t a disaster.

    Everyone buying a new processor has always been best served by buying a new board at the same time. When the boards cost significantly less than the processor, it’s hardly worth compromising.

    Those of us that buy a great board with the intent to upgrade later are the only ones being affected by this rumoured change. Quite often that tactic fails for them anyway. 1156 became an incompatible 1155. Going back further, New Core2 boards supported DDR3 and my trusty P965 board was stuck with slower DDR2.

    Even when it has been possible, putting a new processor into an old board has hindered the new processor in some way: either memory bandwidth, SATA speeds, chipset features such as SRT, or basic I/O options like Thunderbolt and USB standards. Missing out on UEFI boot managers isn’t ideal, either.

    Buying BGA-soldered motherboard/CPU solutions will no doubt limit choice, but also improve compatibility, eventually reduce costs, and really only hurt the stubborn impoverished “upgraders” and the [i<]niche-overclocker-wanting-insanely-unusual-boards[/i<].

      • Sahrin
      • 7 years ago

      >Everyone buying a new processor has always been best served by buying a new board at the same time. When the boards cost significantly less than the processor, it’s hardly worth compromising.

      The problem arises when you begin to think about *what* processor goes with *what* board.

      I, for example, tend to buy the leanest board I can with the most expensive board I can, on the understanding that for the most part peripherals can either be attached easily via USB or are better served by having a dedicated component. I like MicroATX or smaller boards, because it usually means lower cost and easier installation. Worst case scenario, I could later upgrade my board but keep the same CPU.

      It’s unlikely I’ll be able to score a mid-range or even entry-level board with a top-of-the-line CPU.

      The general idea of “bundling” is OK – the problem comes in the application when you have a price list that covers 10-20 points, all of which have different performance characteristics across multiple workloads. There are too many combinations.

      The other problem is that this will sink everyone but Asus and Gigabyte when it comes to board manufacturers, because the board maker is going to have to buy the CPU, shell out for it, and then mount it on the board, meaning for every board they sell they are out $200+ to Intel. This will absolutely crush the enthusiast board market.

      I don’t really understand what the advantage to Intel is (other than financial); a cheaper package, maybe? Perhaps it’s an engineering issue, but other than the initial problems with the LGA sockets, it’s not like we’ve been hearing nonstop about issues with packaging.

      Just remember: the last time Intel tried to fuck with the socket model it lasted less than a generation (Slot 1).

        • derFunkenstein
        • 7 years ago

        [quote<]I, for example, tend to buy the leanest board I can with the most expensive board I can[/quote<]

        WHAT

          • nanoflower
          • 7 years ago

          I think some editing went awry. Clearly he meant to say that he tends to buy the leanest board he can with the most expensive processor. The idea being he goes with just the board features he needs and gives it a turbo boost with the best processor. Not a bad idea if you plan on keeping the system for a number of years.

            • Chrispy_
            • 7 years ago

            Exactly.

            If using BGA chips that incorporate platform controllers is Intel’s plan, we just have to hope that these new all-in-one BGA SoC processors come with the full-fat versions of the old traditional southbridge – i.e. the Z77 equivalent, and not H61 rubbish.

            • derFunkenstein
            • 7 years ago

            I know, I just had to read it 3-4 times before it dawned on me.

      • no51
      • 7 years ago

      The only instance where I upgraded a processor in an existing socket was when I got a great deal on a Q6600. It replaced an E6600.

        • moose17145
        • 7 years ago

        So far I have upgraded the CPU in every computer build I have ever had for myself. Somehow I always manage to come across some really good deal, or someone getting rid of an old machine that just happens to have a faster CPU than mine on the same socket.

        Either way, IF this rumor turns out to be true… then the build I currently have will likely be my last. I LIKE having a user-replaceable CPU. I LIKE having more motherboard options than I know what to do with, all while being able to pair them with any CPU that I want (not the CPU that Intel or whoever else thinks I want). Maybe it’s just me, but I sincerely enjoy having options like that. Or like when my one friend NEEDED a new computer because his old one crapped out in the middle of a school semester and he couldn’t afford a really good system right away: he bought a nice Abit board, spent 35 dollars on the cheapest Celeron he could get at the time, and then, when he had money later, bought a nice new Core 2 to replace it. In the end he ended up with a pretty nice system, but unlike most it wasn’t built all in one go…

        So from my perspective (if this rumor holds water), this is nothing but bad. Obviously not everyone shares my opinion…

          • Chrispy_
          • 7 years ago

          Don’t get me wrong, I like the choice we have now too.

          I can choose to save $40 on the board by getting the cut-down H77 chipset instead of a Z77, but as long as the BGA board/CPU combinations give us the full-fat versions for less than previously, everyone’s a winner.

          I would be prepared to give up a little bit of flexibility if it brought the cost of the top-end solutions down to entry-level prices. What worries me is if they just remove the entry-level pricing altogether. At that point the enthusiast crowd (myself included) will be shouting pretty loudly.

          What I wonder is whether this will spell the end of the traditional full-ATX, dual-GPU solutions. What’s clear with the move to BGA is a reduction in space required and in cost, but if that is part of Intel’s move towards NUC-like mini-PCs, I doubt we’ll be seeing much scope for quad-SLI in Intel’s vision of the future.

            • xeridea
            • 7 years ago

            There is no space saved, and it won’t be cheaper. This is just Intel being dumb and trying to get everyone to buy their stuff, motherboard included, so they can make more money.

          • Jason181
          • 7 years ago

          Good news for you; when your buddy upgrades his CPU, you’ll get both his CPU and mobo!

        • bthylafh
        • 7 years ago

        The last time I upgraded a CPU on an existing system was a Pentium II-350 to a Pentium III-500, when the latter was used and cheap. I think the only other time I did was in the mid ’90s when I installed a DX2-50 OverDrive processor to replace a 486SX-25.

      • rwburnham
      • 7 years ago

      Only twice in my life have I upgraded the CPU and not the board. So yeah, this isn’t the worst thing in the world.

      • albundy
      • 7 years ago

      “Those of us that buy a great board with the intent to upgrade later are the only ones being affected by this rumoured change. Quite often that tactic fails for them anyway. 1156 became an incompatible 1155. Going back further, New Core2 boards supported DDR3 and my trusty P965 board was stuck with slower DDR2.”

      And Intel had nothing to do with that? How important you were to them vs. how important their profit margin was to their investors is the question you should be asking yourself. Compromise between design and performance should not have involved so many new sockets, yet here we are. At least there is still hope with AMD.

      “Buying BGA-soldered motherboard/CPU solutions will no doubt limit choice, but also improve compatibility, eventually reduce costs, and really only hurt the stubborn impoverished “upgraders” and the niche-overclocker-wanting-insanely-unusual-boards.”

      Those impoverished upgraders and niche overclockers you are talking about are probably everyone who visits this site, and many others similar to this one. All in all, there is always the hope that Haswell’s success will be just as entertaining as the P4 Willamette’s catastrophic lineup.

      • Anonymous Coward
      • 7 years ago

      The problem isn’t upgrades; the problem is that it would no longer be possible to build a machine to exacting specifications. Except to the exacting specifications of big manufacturers, where the objective is market segmentation.

      However, this may not be as large a problem as it would have been in the past, because the usefulness gap between the top end and bottom end of Intel’s range has diminished dramatically, even while the number of confusingly named variants seems to multiply.

      My main complaint is that this represents one more small step towards users controlling no aspect of their computing platform of choice. In a decade, the whole thing will be locked down with glue, encryption, and laws.

    • yogibbear
    • 7 years ago

    So… now if I want to overclock a CPU I am at risk of having to spend $600 replacing it, rather than $300?

    Gee… thanks.

      • Deanjo
      • 7 years ago

      You assume that they would allow overclocking on a soldered-on CPU.

        • yogibbear
        • 7 years ago

        Ok… now I am worried… before I was just being thrifty. :/

    • jjj
    • 7 years ago

    It would offer an opportunity for AMD (if still alive in 2014) and ARM to gain share, but it would kill smaller mobo makers.

    [quote<]it looks like Intel will double down on low-power, small-form factor systems at the expense of traditional desktops. That would probably be good for computing in general, but sadly, it would mean we enthusiasts get the short end of the stick.[/quote<]

    Depends what kind of computing. We don’t get much perf at reasonable prices anymore, and the move won’t help traditional PC sales; if anything it will hurt them. We don’t get more cores, so they can make more $ with smaller dies, and of course more money is also why they would get rid of the socket, maybe add DRAM, and so on.

    • derFunkenstein
    • 7 years ago

    At least you won’t bend any socket pins anymore. #killLGA

    • bcronce
    • 7 years ago

    The new desktop is the tablet/ultrabook. Anything faster is going to be “server” class.

      Natural progression of things. This was expected; it was just a matter of “when”. We won’t be sure until Intel releases official info.

      • Arclight
      • 7 years ago

      When things get slower and weaker, you don’t call it “progress”.

        • nico1982
        • 7 years ago

        It depends. Weaker in absolute terms is not necessarily a regression if the transition is the consequence of a reassessment of actual needs, or a change in the paradigm.

        I’m not supporting bcronce’s view, just pointing out that progress is not one-way.

        • jdaven
        • 7 years ago

        Electric cars are slower with less horsepower, but I call it progress over gas guzzlers. Your comment just sounds right but isn’t. Besides, the power will be in the cloud, through fast clusters of supercomputers that will render our games and run our apps. Clients will become dumb terminals that don’t need much power.

          • Arclight
          • 7 years ago

          Again with the car comparisons; given the context, I am certain you know I was talking about computers, x86 CPUs to be more exact…

          • ludi
          • 7 years ago

          I don’t. I call it “experimental technology” because that’s what it is. Until someone solves the equation that permits pure-EV designs to have the energy storage density of a few gallons of gasoline, they aren’t going anywhere (so to speak).

            • Geistbar
            • 7 years ago

            They won’t need the same energy density if they can solve other problems. Some method for rapid (I’m thinking less than 15 minutes) charging of the battery would solve that problem from another angle. Another solution would be to significantly expand the availability and speed of trains for long distance traveling, while keeping “good enough” range electric cars for daily commutes.

            I think there’s a bright future ahead for electric cars.

            • Deanjo
            • 7 years ago

            Depends on the area of the world that you live in as well. Here in the great white and often cold north, a pure electric vehicle also has to contend with the increased electrical load of heating the vehicle. On top of that, the city here invested heavily in diesel/electric hybrid buses, and they are finding them terribly expensive to maintain, having to replace the packs every year. There is also the concern that a used electric vehicle is more than likely to incur the expensive additional cost of replacing the packs; otherwise it becomes a useless vehicle once the originals have worn down.

            • eofpi
            • 7 years ago

            Rapid charging brings on its own problems, in particular a massive increase in electricity demand for each gas station. It’s certainly doable, but it’ll require huge increases in generating capacity and grid current capacity.

            How much capacity? Let’s take the Tesla Roadster’s 53kWh battery pack and 3.5 hours at 70A@240V charging figures, and assume we can get the same 90% charging efficiency in a 15-minute full charge; current battery chemistries won’t charge anywhere near that fast, but let’s pretend one does. 53kWh/15 minutes is 212kW. Throw in charge efficiency, and we get 235kW. We’ll ignore, for now, the practical issues of running that much power through a single/small number of connection point(s) (either arc-happy very high voltages or very high currents and the accompanying resistance heating), as well as any AC-DC conversion inefficiencies.

            If we replace a humble 4-pump gas station’s pumps with electric charging ports, it suddenly needs a megawatt power supply. Most US gas stations, though, have 8, 12, or 16 pumps, so a complete replacement with electric charging would see demand rise to 2, 3, or 4 MW. Now, 15 minutes is about 3 times as long as it takes to fill a typical gas tank. So, to match car throughput, these charge stations would need 3 times as many charging ports. That gives us an average figure of 6 to 12MW per charge station.

            The US has a bit over 100,000 gas stations. If these were all converted to charge stations, they would need 600GW to 1.2TW of electricity. And that’s just to charge our electric cars. The highest peak US power consumption over the past 10 years was summer 2006, at 789GW, with later years only a few percent off, so that’s not going away. Current peak generating capacity in the US is between 1006GW and 1017GW. This means, even neglecting transmission losses (which are significant), we would need to double US generating capacity just to meet the demand of a fully electric car fleet. And our electric grid is already straining under its existing load; it would need a bigger capacity increase than generation does.
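            eofpi’s chain of figures checks out; here it is re-run as a quick script. Every input is a number quoted in the post above; nothing new is assumed:

```python
# Re-running eofpi's arithmetic. All inputs are the figures quoted above.
pack_kwh   = 53      # Tesla Roadster battery pack, kWh
charge_h   = 0.25    # hypothetical 15-minute full charge, hours
efficiency = 0.90    # assumed charging efficiency

port_kw = pack_kwh / charge_h / efficiency       # ~235 kW drawn per charging port
print(f"per port: {port_kw:.1f} kW")

for pumps in (4, 8, 12, 16):
    station_mw = pumps * port_kw / 1000          # one port per replaced pump
    tripled_mw = station_mw * 3                  # 3x ports to match gas throughput
    print(f"{pumps:2d} ports: {station_mw:4.2f} MW ({tripled_mw:5.2f} MW with 3x ports)")

stations = 100_000                               # rough count of US gas stations
# 6 MW and 12 MW are the tripled 8- and 16-pump figures, rounded as in the post.
print(f"national: {stations * 6 / 1000:.0f} GW to {stations * 12 / 1000:.0f} GW, "
      f"vs. the ~789 GW record US peak load")
```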

            • Geistbar
            • 7 years ago

            This is only if you assume that nothing changes beyond the source of energy for cars.

            If electric vehicles got to the point of having enough range to cover most people’s daily commute, the picture changes. Looking around, I’m seeing numbers of ~15 miles for the average one-way commute. A range of 60 miles would cover double the average round-trip commute. If we double that again — a range of 120 miles — we would probably be at the point of covering 90+% of people’s driving trips. At that point they won’t need to go to a charging station at all; they would just charge their car slowly at home at night (night being off-peak hours, making it an ideal time for the nation to do such).

            This would leave charging stations needing to cover only the types of trips that people make a handful of times a month (if even that often): demand would be significantly lower. It certainly wouldn’t need to be covered by 100,000 stations, and could probably get away with just 2-4 charge stations. We also would most definitely not need to produce enough extra power to cover the national number of gas stations, times the number of charge stations per gas station, times the power consumption of each charge station — it would just need to cover the average peak power consumption of those. That would be a dramatically smaller number.

            I’m not trying to say that EVs will just waltz right into the world with nary a problem to solve — they have quite a bit of progress to make still — but I do think you overstate the hurdles that need to be solved.

            • ludi
            • 7 years ago

            Speaking as someone who actually works in the electrical power industry and would stand to make a fair bit of money from massive EV conversions, I maintain that both the onboard energy-storage density problem and the power supply issues outlined by eofpi are non-negligible, and will require at least two decades’ worth of buildout to become practical.

            Trying to force the technology into the market before the market is ready didn’t work for the EV1 in the late 1990s, and it isn’t working for the Leaf and the MiEV now.

            • Geistbar
            • 7 years ago

            Oh, I certainly didn’t mean to imply that every car, or even a majority, will be switched over to pure electric within any sense of “soon”. Even ignoring all other issues, switching enough of our gasoline energy consumption in cars over to power plants would take a long time. I agree that there are non-negligible hurdles to work past before a majority-EV situation can appear.

            I just mean that I don’t think the hurdles are insurmountable, and I think that we will see the number of individuals with EVs trickle upwards over time. It might not be good enough for everybody, but the number of people it will be good enough for will go up with time. I can’t recall any transition of this scale or scope that happened quickly, and I have no reason to believe that EVs will buck that trend. All the same, the number of cases where they are practical is going to steadily increase with time — just like SSDs have been doing and will continue to do.

            • eofpi
            • 7 years ago

            I did indeed assume that only the energy source for cars changes, and I did so deliberately. For one thing, that gives a worst-case scenario. The vast majority of people’s round-trip commutes will fit within the Tesla’s 200+-mile quoted range, and even in the 120 miles Car & Driver wrung out of it in the EV worst-case usage scenario (aggressive long-distance highway driving). Such cars will, most of the time, slow-charge overnight at home.

            But behavior is human, and humans are slower to change than technology. So a lot of people will insist on commuting, solo, from a nice suburban home to a nice suburban office over decently maintained roads in a 3-ton off-road vehicle that has never gone off-road. Weight, aerodynamics, tire rolling resistance, and wheel moment of inertia all chip away at range until the 53kWh pack that gives the Tesla an extravagant range figure is merely adequate for an Ironclad McE-SUV.

            Moreover, the main benefit to short charging times is that then EVs can be used more like combustion cars. Namely, road trips. That means we need charging stations at all of those middle-of-nowhere exits that currently just have a gas station or two and maybe a stoplight. Commuter routes, though, would be able to lose some stations without significant impact, as long as most people remember to plug in their cars at night (or some automated system is used, like parking over an inductive charge plate). I’ll concede that tripling the charging points per station is excessive (gas stations seem to be limited by land area, not by throughput estimates, and that won’t change with electrics), but it did provide an illustration of what happens with even faster charge times (in particular, 5 minutes instead of 15).

            Finally, what matters for utility planning is not average consumption, nor “average peak” (whatever that is), but peak consumption. Exactly how that relates to number of gas stations times charge stations per gas station times power consumption per charge station varies somewhat with scale. A single gas station with 4 15-minute charge stations absolutely needs to be able to draw 1MW whenever it wants from its substation. If it can’t, blackouts will occur, and the local power company will get some strongly worded letters. City and state scale may be able to get away with underserving slightly. And the major grid level will be able to get away with underserving somewhat more, not least because each of the 3 grids in the US (eastern, western, and Texas) is split between two time zones. Exactly how much underserving is reasonable is a question better left to ludi, since I know nothing of power grid load provisioning. That said, I stand by my conclusion that peak electricity consumption would rise significantly with mass adoption of EVs, and significantly enough to require substantial additional generating capacity.

            • Geistbar
            • 7 years ago

            A lot of disagreeing when you seem to be saying something not all that far off from me: I’ve never said that EVs will take over immediately, and I see a multi-decade switchover process as being fairly reasonable. That said:

            [quote<]Such cars will, most of the time, slow-charge overnight at home.[/quote<]

            That’s why I brought those cars up: to show that your worst-case scenario is implausible as a starting point.

            [quote<]But behavior is human, and humans are slower to change than technology.[/quote<]

            Humans adapt with technology. I’ve [i<]never[/i<] said or implied that everything will change overnight: things will progress and EVs will slowly become more practical. If someone wants a car that is too heavy for the battery in an EV to cover their commute, then they just won’t get an EV at that point -- it’ll get delayed until the technology improves enough. For most cases, EVs are still impractical. That doesn’t mean they will stay there forever. Yes, we’ll need to boost our energy production up significantly to make them work, but we also have many, many years to carry that out.

            [quote<]Commuter routes, though, would be able to lose some stations without significant impact, as long as most people remember to plug in their cars at night (or some automated system is used, like parking over an inductive charge plate).[/quote<]

            Exactly what I was getting at: we won’t need to upgrade all ~100,000 gas stations into charge stations. Yes, we’ll need to ensure that there is sufficient geographic distribution, but we won’t need to consider the power consumption as if there were 12 people charging their car at every single one of those stations simultaneously.

            [quote<]nor "average peak" (whatever that is)[/quote<]

            Average peak was meant to avoid picking a stupid point for your peak -- if you measured peak power consumption in NJ shortly after Sandy, you wouldn’t have at all a realistic number. Likewise, if you measured airplane flights around the holidays for your peak airplane usage, you would have far too much capacity for the rest of the year: paying the costs to temporarily boost capacity around the holidays would be cheaper than making it available year-round. Average is just there to ensure that the peak is chosen with some intelligence behind it.

            [quote<]A single gas station with 4 15-minute charge stations absolutely needs to be able to draw 1MW whenever it wants from its substation.[/quote<]

            I never said otherwise. We just don’t need to have the power production available for every single one of them in the entire nation to be doing that simultaneously. Just like how we don’t need (0.6 L/s * 8 * 100,000) = 480,000 liters/s of gas available constantly. National throughput will be much lower than the theoretical maximum, and while each station needs to be able to meet its maximum at any moment, the entire nation doesn’t need to be able to meet the sum of those maximums at any moment.

            • ludi
            • 7 years ago

            [quote<]I never said otherwise. We just don’t need to have the power production available for every single one of them in the entire nation to be doing that simultaneously. Just like how we don’t need (0.6 L/s * 8 * 100,000) = 480,000 liters/s of gas available constantly. National throughput will be much lower than the theoretical maximum, and while each station needs to be able to meet its maximum at any moment, the entire nation doesn’t need to be able to meet the sum of those maximums at any moment.[/quote<]

            That’s a rather weak comparison, because liquid fuel, unlike electricity, can be readily stored in tanks. The gas station can handle peak simultaneous fueling demands that exceed what an on-demand distribution infrastructure might be able to supply at that place at that point in time, because the fuel is stored in a local tank.

            There are ways an all-EV charging station could do some of that, for example by having an on-site storage bank of large storage batteries buffered by supercapacitors, functioning like a real-time UPS between the utility grid and the charging stations. The storage bank could be drawn down during periods of peak demand and then recharged during slower periods. But this is a range of technologies that doesn’t even exist yet and has its own efficiency limitations… and due to the capital-equipment costs, won’t exist until if and when EVs comprise a sizable minority of all vehicles in service.

            • Diplomacy42
            • 7 years ago

            I don’t know what city you live in, but in my experience a lot more people live in the suburbs than you give credit for. People who live less than 10 miles away from the office probably constitute less than 10% of the population. Personally, I commute more than 15 miles just to go to the mall from time to time.

          • xeridea
          • 7 years ago

          Electric cars are fulfilling the needs of energy-conscious commuters who would be getting fuel-efficient cars anyway. You are suggesting shoehorning power users into slower, hotter laptops. By the way, there are really fast electric cars too, with good range. Running everything in the cloud is stupid, especially for games. The only thing the cloud is good for is storage.

          • clone
          • 7 years ago

          Electric cars are slower, more expensive, have less pony, and are range-limited, capacity-limited, and size-limited; they are only mildly impractical in warm climates and a hopeless cause in cold ones; they are harder on the environment than gas guzzlers; and they have to be heavily subsidized by taxpayers just to be only several thousand dollars per vehicle more expensive… so how on earth are electric cars an example of progress?

          P.S. The current electrics that are snappier require a price-is-no-object mindset.

          Note: if they ever get the capacitor variants to work for a reasonable price, then there is a chance for electric, although it’ll still require a gas engine to charge the cap every few miles.

            • bcronce
            • 7 years ago

            Everything you said could be said about every utility, not to mention nearly every technology we use today.

            • clone
            • 7 years ago

            Try again; your comment has nothing to do with my post.

            • xeridea
            • 7 years ago

            With gas prices what they are, electrics have the potential to be cheaper in the long run. The technology has been advancing; gas cars 100 years ago were terrible, too. How do you figure they are harder on the environment than burning 20,000 gallons of gas? In some areas, like California with its heavy traffic and strict emissions, they are practical.

            • clone
            • 7 years ago

            OK, here goes.

            1. They are slower per dollar, by quite a lot.
            2. They are more expensive, by quite a lot.
            3. They generate considerably less power, by quite a lot.
            4. Their range is severely limited.
            5. Their capacity is limited, because increasing it exaggerates all of the above shortcomings.
            6. They are size-limited, for the same reasons as listed above.
            7. They are only mildly impractical in warm climates, meaning they are climate-dependent.
            8. They are a hopeless option in cold climates, making the car completely worthless in half the world.
            9. They are harder on the environment than fuel-based offerings.

            I’ll respond to this one because you did. First off, have you seen a modern nickel mine? Sudbury, Canada is one of the larger ones, and it’s known for two things: mining nickel and filming moon landings, because that’s how hard nickel mining is on the environment; rivers run red with pollutants. But wait, that’s just to get the nickel. Then we load up a cargo ship (and it’s claimed that just 16 of the largest cargo ships generate more sulfur pollution than all of the cars in the world combined) to carry the nickel to Europe for processing. Then it heads to China by… you guessed it, cargo ship, where the batteries get built, and then it gets shipped back to North America by… you guessed it, cargo ship, generating even more pollution. And then you have to consider what the world plans to do with all of these worn-out batteries, because currently there is no plan.

            So when comparing “20,000 gallons of gas” to the environmental destruction caused by the birth of an electric car, it’s no comparison; gas wins every time, because 99% of that “20,000 gallons” gets recycled by nature into breathable air, versus the increased environmental impact involved in building and disposing of an electric car.

            10. Electric cars have to be heavily subsidized by taxpayers in order to make a business case for them.

            You also mentioned the future potential of electric vehicles… which is not a valid reason to buy one today, and is not a valid business case for why they should be taxpayer-subsidized so that they can be sold on the consumer market… today.

            Regarding your comment about California: first off, the answer to rush-hour traffic is the start-stop technology being introduced on 2013 vehicles, where the engine shuts off while you wait in traffic. Secondly, what Californians haven’t discussed yet is where to put all of the acid dumps that will have to be created, because it’s toxic waste and won’t be going into the landfill anytime soon. Massive amounts of battery acid make for a very long-term disposal problem, akin in popularity to nuclear waste, which currently has no home.

            It’s very easy for people to like electric when they don’t look into the downside; it’s very easy to judge conventional engine tech because you see a tailpipe on a conventional car and assume it’s the problem, when it’s nothing of the sort.

            So, in summation: out of 10 negatives you countered 1, and that one counterpoint failed when examined beyond the surface. So I’ll be blunt, because it’s more fun, and ask: HOW THE HELL are electric cars an example of progress?

            note: edited several times for later fact-checking and some grammar… but only some, hehe.

          • LastQuestion
          • 7 years ago

          Actually, no. Electric cars can go from 0 to 60 faster than almost anything using gasoline. With recent advances in battery tech, they’re soon to offer more miles per dollar than gasoline ever has, while recharging nearly as fast as it takes to fill a tank at the station. The combustion engine is a dinosaur, and it’s only a matter of time before it’s completely obsolete.

        • Theolendras
        • 7 years ago

        Well, this is probably a temporary problem. Give the CPUs in tablets 3 to 4 years and they will probably beat a current high-end processor. Yes, they fall behind right now, but convenience and mobility are also things to take into account. Moore’s Law will let even smaller devices get past our current computing-power threshold; just give it a few years.

        Look at the phone industry. Cell phones at the beginning were bricks that just weren’t convenient to use, with subpar voice quality, etc. Wired phones were more pleasant to use and sounded better back then. But that didn’t hold for long.

        As an added bonus, the IT industry will be much gentler on power as a whole, which is welcome.

        I sure hope AMD, or whatever company might take it over, will still be around with a compelling upgrade path in the cards. But that side of the story might be hopeless…

          • Arclight
          • 7 years ago

          All you see is that in a few years we will get back to where we are now. All I see is lost time, and potentially enormous gains in performance flushed down the drain.

            • Theolendras
            • 7 years ago

            Sure, but where are the killer apps? There are plenty of fields where you can’t have too much computing power, but that’s not the case for far too many users. Also, desktops will hang around during the transition. The desktop won’t be the focus of the industry, but performance will keep improving along the way even so.

            See how far we’ve come since the Pentium 4: Intel is reducing the power envelope at the same time as improving performance, mostly to accommodate laptops. And I’m not complaining at all; we now have relatively silent desktops and overclockable CPUs (the last P4s didn’t have much overclocking headroom, if I recall correctly).

            New development models like OpenCL, and features like transactional memory, might finally take off and actually increase the pace of performance.

            • Arclight
            • 7 years ago

            They’re not apps, they’re called programs, and they’d be happy any day with faster chips or more cores.

            • Jason181
            • 7 years ago

            Apps is short for applications, which is indeed another word for programs.

            • Jason181
            • 7 years ago

            I overclocked my P4 by 25%, which is not bad (from 3.0 to 3.75), and it was a D-variant so it had two physical cores. I used a power calculator and it told me its peak power consumption was 189 watts. Hehe, it was a beastly heater, and I had a massive HSF to cool it. Compared to the core/core 2 cpus though, it wasn’t that impressive.

            According to Intel, Haswell will only improve about 15% or so on legacy code, which will be the majority for probably at least a year, most likely closer to two. I look forward to what transactional memory will bring to the table too, but like hyperthreading I think the real performance increases will take place on servers.

          • xeridea
          • 7 years ago

          Yeah, 3W CPUs will beat 130W CPUs on performance. That’s like saying my lawnmower could win NASCAR.

            • Theolendras
            • 7 years ago

            Lame; try responding to something I actually said. I guess I have to put figures on it: I said a current high-end CPU, let’s say an Ivy 3770K, versus a tablet CPU 3 to 4 years on from Broadwell (the processor implied in this thread). Since Broadwell is to be released in the 2013-2014 timeframe, that extends the window to 2016-2018 if you take what I said literally. Provided Intel delivers on its tick-tock schedule, that gives 2-3 node shrinks plus 2-3 architecture refreshes.

            Let’s say, roughly, that Broadwell in a tablet would run at half the clock speed of a 3770K, with half the cores. So about a quarter of the performance.

            Each iteration gains about 40% more performance over the one before. Figuring that Haswell and Broadwell themselves will be mostly about reducing power usage, that leaves roughly:

            3 additional node shrinks +
            3 additional architecture refreshes

            So here’s some math on those:

            1.4^6 = 7.5

            So, a 7.5-fold increase on the 0.25 performance of today’s high-end Ivy:
            7.5 x 0.25 = 1.875

            Nearly twice the performance of a current high-end Ivy, in a Broadwell power envelope, on the timeframe I mentioned. That envelope will probably be higher than a 3W thermal budget, but that was your figure, not mine.

            OK, I admit the math is optimistic: some node or architecture refreshes might account for less than 40%, but that’s generally the ballpark. Sure, I could be off by a year or two if that’s the case.

            So yeah, it’s probably going to be in the same ballpark by then. You do know a high-end smartphone beats some early-80s mainframes, right?

            Ah, and an analogy with cars might work for a few concepts, but not for comparing the evolution of performance over time. Cars’ core technologies are evolving incrementally at best; processors are evolving on an exponential curve. But I guess it’s the very notion of elapsed time you missed in my original post anyway.
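            His compounding estimate is easy to check. Here is a minimal Python version of the arithmetic above, using only the assumptions he states (a ~1/4 starting point and ~40% per iteration):

                # Theolendras' back-of-envelope scaling math, spelled out.
                START_FRACTION = 0.25  # tablet chip at half the clocks, half the cores of a 3770K
                GAIN_PER_STEP = 1.4    # ~40% per node shrink or architecture refresh (his assumption)
                STEPS = 6              # 3 shrinks + 3 refreshes

                total_gain = GAIN_PER_STEP ** STEPS
                print(f"1.4^{STEPS} = {total_gain:.2f}")                         # ~7.53
                print(f"vs. today's 3770K: {START_FRACTION * total_gain:.2f}x")  # ~1.88x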

        • derFunkenstein
        • 7 years ago

        In today’s mobile computers, we’re actually getting back to the power consumption and heat production of desktop CPUs from 15 years ago. We got interrupted by the race to 1GHz, and by a new reality of how much heat we could dissipate, from the late ’90s through the early part of this century, especially when Netburst… errr… burst onto the scene. So in a way we’re returning to “normal”.

        Here’s an amusing set of tables:

        [url<]http://en.wikipedia.org/wiki/List_of_CPU_power_dissipation_figures[/url<] So in a contrived way, we've progressed considerably. In another way, you're right, this is a step back from the absolute speed available.

          • Arclight
          • 7 years ago

          That’s an interesting list, thanks.

        • bcronce
        • 7 years ago

        Slower and weaker? Most Netbooks have more processing power than workstations of only a decade ago and way more processing power than what most end users need.

        My point is that processing power is scaling faster than our ability to use it. At some point economics kicks in and says “If it’s not being used, it’s wasted and increased costs”.

        Demand for SoC designs is increasing. Most cell phones are closing in on what current desktop users need.

        Until we get some new processing demanding software, hardware is going to keep getting smaller and more integrated to save both money and power-usage.

        edit: forgot to add, even for servers, the limiting issue isn’t processing power anymore, but heat, IO, and memory.

        btw, how many alt accounts do you have down-voting me and up-voting you?

        Trolls and griefers abound on TechReport comments.

          • Dr_b_
          • 7 years ago

          “Give the CPU in tablets 3 to 4 years it will probably beat a current modern high end processor.”

          Are you serious?

            • albundy
            • 7 years ago

            No, no, no: he means all of the CPUs ever made for tablets over 3 to 4 years, combined, will probably beat one current modern high-end processor. That would make much more sense. rofl!

            • Theolendras
            • 7 years ago

            You sound a little like Bill Gates: “640K ought to be enough for anybody.”

            If you compare against the absolute highest-end current CPU, like a 3960X, it might not hold. But against a current quad core, it is definitely possible. It probably won’t be an iPad-thin tablet, but the kind of tablet we’ll see Haswell fit into at a near-10W envelope. Heck, Samsung and others are already selling Core i5-based tablets. They are seriously downclocked, but that’s nothing a few Moore’s Law iterations couldn’t fix. Given that engineering efforts are mostly aimed at reducing the power footprint while still improving performance slightly, this is probably attainable. I wouldn’t bet the house on it, and I might be off by a year or two, but make no mistake: it will happen.

          • xeridea
          • 7 years ago

          Who needs alternate accounts when your post is terrible? Netbooks are terribly slow; that’s why they flopped. You are comparing them to 10-year-old technology. Cell phone CPUs are OK for cell phones, but they suck badly compared to a desktop. Browsing the internet on my phone is a drawn-out waiting game that often ends with me giving up. Forget multiple tabs, any sort of gaming you would do on a desktop, or any other CPU-intensive task people do.

          • Theolendras
          • 7 years ago

          Hey, there is some truth there. This holds for the typical office machine and casual users, which is most of the market, until some new software development comes out and requires every bit of power available.

          I must admit that kind of software, with nearly universal appeal yet demanding enough to bring most systems to a crawl, hasn’t shown up in quite a while. Cloud-based computing is displacing a lot of that local software, I would think.

          Bring ever more intelligent voice recognition or OCR into an OS or an office suite like Microsoft’s, and now we’d be talking about a shift in people’s needs.

          For enthusiasts, graphics artists, science, finance, prospecting, and whatever else, every bit is welcome.

      • sschaem
      • 7 years ago

      When tablets get faster, they will just become laptops with detachable keyboards.
      Not the other way around.

      And when will people stop wanting faster ‘tablets’? Because right now we have a long way to go if tablets are to take over the laptop market.

    • Arclight
    • 7 years ago

    Wait, so who will do the soldering? Intel? If so, will that kill all the other mobo manufacturers?

    This is stupid; what was wrong with having options? Sigh. For a few years now I’ve felt like we are moving backwards.

      • flip-mode
      • 7 years ago

      The same people who do it for Intel Atom and AMD E-series, I guess.

        • Arclight
        • 7 years ago

        Bad things I see coming from this:
        – we will probably have to pay a higher price for the CPU+mobo combo than we do now (it would be stupid for mobo manufacturers not to mark up the CPU on the package relative to Intel’s price)
        – mobo offerings within certain segments will be very limited compared to now, since manufacturers will likely not want to take risks on models they fear are too extreme and will sit in stock (now with the added cost of the CPU as well)

          • ludi
          • 7 years ago

          I don’t see why. The integration of everything into the CPU, leaving the motherboard as little more than a power supply and expansion backplane, has been progressing for quite some time. And the future of the enthusiast market may look a lot like that: a “carded” CPU with critical I/O onboard, and a “backplane” motherboard that basically offers expanded I/O.

            • designerfx
            • 7 years ago

            Except that Intel’s integrate-everything parts tend to perform incredibly poorly.

            Have you ever matched up Intel integrated against the AMD integrated processors? Both of them perform like shit in comparison to everything else, and it’s already ridiculous to pay a surcharge for an integrated GPU you aren’t going to use.

            Now we have to pay for the motherboard and processor together?

            If this is actually legitimate, it is going to piss off a lot of people.

            I don’t mind if they cut costs, but don’t cut out an entire market to do it. Sheesh.

            • MadManOriginal
            • 7 years ago

            You do realize that they are talking about a lot more than just integrated [i<]graphics[/i<], right?

            • Jason181
            • 7 years ago

            Intel’s integrated NICs, USB, and storage controllers are the very best you can get. Intel got into the GPU business just a few years ago and is improving by leaps and bounds. By the time BGA CPUs are here, it’s not hard to believe they’ll be at parity with AMD, especially if AMD continues on its current path with desktop CPU performance, or lack thereof.

          • flip-mode
          • 7 years ago

          I think you’re worrying too much, to be honest.

      • derFunkenstein
      • 7 years ago

      Manufacturers do their own soldering in notebooks, I don’t think this would be much different.

      • Deanjo
      • 7 years ago

      Why would you think Intel would be doing the soldering? MB manufacturers are perfectly capable of handling BGA. Do you think Intel solders on every processor in laptops?

        • derFunkenstein
        • 7 years ago

        not to mention their chipsets onto motherboards.

      • ludi
      • 7 years ago

      The average electronic device contains several BGA-type chips already; adding one more won’t be a problem. In fact it will be much cheaper to manufacture and install than a conventional CPU socket, although heatsinking on higher-power models could be interesting.

      When BGA (ball-grid array) parts are assembled, the chip is manufactured with solder balls in place of pins, and these are then cooked in a reflow oven:

      [url<]http://en.wikipedia.org/wiki/Ball_grid_array[/url<]
      [url<]http://en.wikipedia.org/wiki/Reflow_oven[/url<]

      For other types of bulk soldering, including through-hole and surface mount, wave soldering is common:

      [url<]http://en.wikipedia.org/wiki/Wave_soldering[/url<]

      If you’re unfamiliar with automated PCB assembly, you can probably find some decent videos of it on YouTube (I can’t point you to any right now, because the work proxy blocks it).

        • xeridea
        • 7 years ago

        I don’t see how it would be cheaper. They are replacing pins with solder balls, which still need to be connected. They may be saving 50 cents on the plastic for the socket.

          • maxxcool
          • 7 years ago

          50c is still added margin per board… however, it is more about controlling the market. Fewer players = more profit from sheeple buying Intel over ECS or other really crappy third-party OEM vendors.

            • MadManOriginal
            • 7 years ago

            Why are people extrapolating this to ‘Intel won’t sell CPUs to any motherboard makers’?

            • maxxcool
            • 7 years ago

            I’m thinking purely of the really low-margin OEMs like ECS, Zotac, Biostar, and a few others that will get squeezed. I am sure the RMA process will be a horrendous nightmare for both the OEM and the end consumer. So while it’s not a direct action to “kill” competition and raise revenue by reducing options, it will still have that effect if Intel truly follows this path.

            • MadManOriginal
            • 7 years ago

            I suppose we’d need to know the financial details of those companies to know for sure, but it wouldn’t necessarily be bad for them. Say the motherboard vendors start getting quantity CPU pricing like Dell, HP, or Lenovo do. That might actually *increase* their margins if they mark up the CPU over their cost, or at least increase their revenue if they sell the CPU with no markup (taking manufacturing costs and whatnot into account). That might be offset by increased inventory or RMA costs, but I’d suspect Intel would treat them like any other OEM purchaser when it comes to warranty.

            • maxxcool
            • 7 years ago

            We’ll have to wait and see… the part I hate 😛

          • ludi
          • 7 years ago

          You really think the cost of manufacturing something like LGA 1366, and the PCB design considerations to accommodate its installation, is comprised principally of “50 cents [of] plastic”?

            • xeridea
            • 7 years ago

            There is still a cost to manufacture the BGA setup; making a socket doesn’t cost that much more, if anything. You can get a whole motherboard for $30, with tons of ports, slots, controllers, and even a crappy IGP.

            They both connect to the motherboard in essentially the same way; one is soldered, one is not. The cost differences are negligible.

            • ludi
            • 7 years ago

            You don’t understand what a “BGA setup” is, do you?

            • derFunkenstein
            • 7 years ago

            Shh, shh. It’s fun to watch.

    • obarthelemy
    • 7 years ago

    First, it’s not a revolution: many CPUs, be they ARM or mobile x86 parts, already come in BGA packages that can’t really be user-installed. Intel is probably saying that some more CPUs, in the desktop market this time, will be transitioned to that model. Not all of them; some.

    Second, I’m OK with it. I’m the most tech-inclined in my family, and I haven’t upgraded a CPU in ages. By the time a CPU is obsolete for me (2-3 yrs), the accompanying mobo is too, and I change them both… or rather, I buy a new system and pass along the old one. Extremely few people ever upgrade CPUs while keeping the mobo. Furthermore, sockets seem to change every year or two nowadays, so CPU upgrades are out anyway.

    The only inconvenience will be for OEMs: if Intel keeps offering CPUs with clocks only 5% faster than the model one step down, it will mean having to stock lots of almost-identical SKUs, or having to bet on customer preference.

      • Veerappan
      • 7 years ago

      My current Gigabyte AM2+ board has been through the following:
      Athlon x2 5000+ Black Edition, 4GB DDR2, Radeon 4850, then 4770
      Phenom II x3 720, 4GB DDR2, Radeon 4770
      Phenom II x6 1055T (upgraded to 8GB DDR2), Radeon 6850.

      CPU upgrades are still feasible, and in each case, I got a noticeably better CPU between each upgrade.

      Next, I wait for Steamroller or Haswell, and finally make the jump to DDR3 and GCN/Kepler

        • Chrispy_
        • 7 years ago

        And this is why it’s a shame AMD is leaving the CPU scene.

        At the same time, their APUs (Brazos, Llano, and Trinity) are all on incompatible sockets. Perhaps sockets are a PITA to design chips for, after all…..

    • chuckula
    • 7 years ago

    LMAO.. so Mostly-Inaccurate has picked up the story with these gems:
    1. Apparently Intel already killed the desktop enthusiast market with… Nehalem… so there’s nothing left to destroy anymore.
    2. So the SOCKET IS DEAD…. except for the fact that Charlie then goes on to say that Skylake will have a socket, BUT NO, REALLY THE SOCKET IS DEAD. So basically, the socket is dead, except for all of these Intel products that still have sockets. Check.
    3. Kool-aid TIME: ARM is now the new enthusiast platform! Yup, some guy posted a (probably faked) screenshot of a cell-phone overclocked to 3 GHz and that is proof that enthusiasts only care about ARM now.

    So here’s the Charlie logic:
    1. Intel has sockets.
    2. Intel may abandon sockets in the future, except for all of these products where it won’t.
    3. ARM has never been socketed (at least in any device that consumers buy).
    4. Therefore: All enthusiasts will completely abandon x86 and go to ARM because Intel is supposedly completely abandoning the socket (except for a bunch of products where it won’t abandon the socket).
    5. In conclusion: SOCKETS ARE REQUIRED FOR ENTHUSIASTS EXCEPT FOR ARM WHERE SOCKETS ARE EVIL AND BAD!

      • bhtooefr
      • 7 years ago

      Heh, I’ve got a socketed ARM device, intended for consumers.

      (It’s a desktop computer, with a StrongARM on a card. Not overclocked, and it needs a motherboard.)

        • chuckula
        • 7 years ago

        OK! You are the one & only enthusiast left in existence according to S/A!

        • derFunkenstein
        • 7 years ago

        I have two socketed ARM devices attached to my shoulders.

          • Chrispy_
          • 7 years ago

          Inevitable, but I suppose jokes like this are mostly ‘ARMless.

            • Deanjo
            • 7 years ago

            True, but with it being 2012 and the Mayan calendar expiring, I expect full-on ARMageddon.

            • derFunkenstein
            • 7 years ago

            If it’s so inevitable why was it that I had to do it? 😉

      • MadManOriginal
      • 7 years ago

      The socket is dead, long live the socket.

    • HisDivineOrder
    • 7 years ago

    What did you people expect? I read a similar article to this one, but it stated that the high-end desktop would be excluded from this decision. I took that to mean the current K products would continue to be sold to enthusiasts; it’s just that the current non-K parts would go SOC and probably not really be sold to end users.

    Even that, I see as an interim solution until people get used to the new state of affairs, and then they’ll take that away, too. Do you think the Intel CEO retiring is a coincidence with these new rumors? The board clearly doesn’t believe in the way things are; they are dissatisfied with his keeping things consistent with the status quo. They want things to change in a big way. Switching to an SOC state of affairs (like the NUC test bed) is probably a lot more profitable. Plus, it makes it easier to deprive nVidia of the discrete-GPU market in the long run. If the rumors of nVidia putting an ARM CPU on its next-gen Maxwell GPU or its successor are true, then that’s them preparing for a future where Intel doesn’t allow them to build external GPUs for computers that are mostly SOCs. Imagine what nVidia will charge for a discrete GPU then. For that price, I imagine they’ll just offer you their own SOC with a more advanced GPU and an ARM-based CPU, running a version of Windows RT built for end users or perhaps bundled with it.

    All that said, again I ask: what did you expect? I guess you aren’t thinking back through your history. Do you remember the radio? Consumers built radios from parts. They’d upgrade them, they’d mess with them, etc. Does anyone do that now, beyond a side project? A history lesson?

    Or TVs? People used to build those up from the box to the CRT to the whole shebang. Enthusiasts don’t do that, either. HDTVs are miniaturized; imagine trying to build an HDTV from scratch, buying the display, the housing, the computing device that goes into it, etc. Those hobbyists just don’t exist beyond a few very niche people doing crazy-advanced things.

    PCs will be the same. I think anyone who looks back historically knew this was going to happen eventually; I just hoped it wouldn’t be so soon. I guess I’m not surprised, though. I hope that whoever takes Intel’s reins continues to offer enthusiasts the K-line as a way to keep doing what they’re doing, but the days of enthusiast PC building will eventually end. This is very bad for the “art” (or experience) of building PCs, but for PC gaming it might wind up being better.

    Why better? Because though we’ll lose the high end, more and more NUC-like devices (on the cheap) will get better and better integrated GPUs, expanding the pool of PCs that can run games equal to the rumored console specs of the PS4 and the neXtBox, and certainly greater than the Wii U. That means those ports will look and perform better on NUC-like devices based on Haswell. Much like with the HDTV, PCs may not be enthusiast-built nearly as easily, BUT it won’t matter as much, because gaming will require less hardware to play. And a larger audience will mean more titles for those PC gamers. Eventually, that’ll kill consoles, because they just won’t be profitable enough to warrant making.

    In the end, PC gaming and tablet/smartphone gaming will kill consoles or drive them into extremely niche areas. That, at least, is a swell consolation prize. The real laugh is that, in the end, Steve Jobs’ post-PC world is all the more real as Intel and Microsoft both show real signs of giving up on the traditional PC market in favor of just what Jobs wanted for computing. But you see the joke, right? It isn’t Steve Jobs who did them in. It’s their own impatience and willingness to capitulate that’s going to kill the traditional PC.

    Tablets are already losing momentum. Smartphones will peak. Based on Microsoft’s and Intel’s moves of late, PCs will apparently transition into smaller devices that function in similar ways but stay tied to your HDTV and your home in general. That said, Intel is still light-years ahead in fabrication, and that should keep them insanely profitable for decades to come, so long as they don’t stumble during their CEO transition.

    It’ll be sad, but I suspect one day soon “building” a computer will involve buying a barebones NUC, deciding whether you want Intel or nVidia, getting an included OS, and perhaps slapping in an SSD/HDD if you’re particularly lucky. That said, PC gaming will rise from the ashes of consoles once more. It’ll be barely recognizable compared to the way PC gaming started, but it’ll still get its Pyrrhic victory.

    • Deanjo
    • 7 years ago

    For a guy like me this is bad news. I pretty much do at least one processor upgrade per motherboard (up to 4 times on some: 6400+ –> 9850 –> 955 –> 1090T). What hurts even more is that I will probably have to give up premium motherboards with a ton of connectivity paired with low-cost CPUs. I imagine they are only going to pair premium motherboards with insanely expensive high-end CPUs.

    • Krogoth
    • 7 years ago

    This move makes sense in the era of SOC silicon.

    Part of the price of moving core logic into the CPU package is that any time the architects want to make a significant change, they have to redo the signal and electrical pinout. That is why the current generation of CPUs comes in several different sockets despite sharing the same basic design, and it doesn’t make fiscal sense to keep developing several different sockets for different markets.

    The only crowd who will dread this are the motherboard manufacturers. They are going to have to play ball with Intel some more to secure CPU shipments. This could get ugly if AMD somehow makes a competitive CPU……

      • jdaven
      • 7 years ago

      Too much logic (pun intended) this early in the morning.

      Good post!

    • chuckula
    • 7 years ago

    The story has one or two semi-official looking slides in it, neither one of which shows sockets being dropped. Outside of the semi-official slides, all I see is unconfirmed rumors.

    I could certainly see some new SKUs of Broadwell coming to market as BGA chips, simply to make it easier to fit Broadwell into very small form factors (read: tablets). However, I highly doubt that Intel is going to jettison the socket entirely when it is already taking the trouble to put out socket 1150 for Haswell, and it would be easy (and cheap) to simply make socketed Broadwell chips for the existing platform.

    • LSDX
    • 7 years ago

    A few years ago, when we had sockets that remained compatible for several years (Socket A, 775), this would have bothered me. But in recent years you need to change the mobo anyway when you want to upgrade to a newer CPU generation, so it doesn’t really matter.

      • rrr
      • 7 years ago

      Except that even having the same socket was rendered moot in reality. Ever tried popping a C2D onto, say, an 865-based mobo with a 775 socket? Well, then you know the problem.

      • Kaleid
      • 7 years ago

      The “change mobo anyway” situation is probably just another business decision: change the CPU enough that people have to buy new motherboards with new chipsets.

    • boing
    • 7 years ago

    If this is true, my first thought was that it would be a total disaster. Then I realized it really doesn’t matter much, since every new generation of CPUs seems to require its own new motherboard anyway; the only difference will be not having to fit the CPU onto the motherboard myself anymore.

      • Geistbar
      • 7 years ago

      The issue wouldn’t be so much CPU replacement as it would be the restriction of choice. You’d have to get your mobo and CPU as a combo, except it would only be combos that others had decided to offer. Think of how many different models of CPU Intel offers; now think of how many different mobos you could put them in. There’s no way that they would put up with the number of SKUs needed to support every possible combination. It would become a lot harder to pair a low price CPU with a high end mobo, or vice-versa.
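      The arithmetic behind that worry, as a small Python sketch; both model counts below are made up for illustration:

        # Socketed: vendors stock CPUs and boards separately (costs add).
        # Soldered: every CPU/board pairing is its own SKU (costs multiply).
        CPU_MODELS = 30     # hypothetical count of desktop CPU models
        BOARD_MODELS = 40   # hypothetical count of boards per generation

        print(f"Socketed world: {CPU_MODELS + BOARD_MODELS} SKUs to stock")    # 70
        print(f"Soldered world: {CPU_MODELS * BOARD_MODELS} possible combos")  # 1200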

      • just brew it!
      • 7 years ago

      It would likely mean less choice as well. Motherboard vendors aren’t going to want to stock 10 variants of the same motherboard with different CPUs on it.

    • Bensam123
    • 7 years ago

    And it starts… When this appears I suggest people start going exclusively AMD if you value having a desktop you can service yourself. No good will come of this for consumers.

      • lycium
      • 7 years ago

      Exactly right.

      • chuckula
      • 7 years ago

      Yeah so how upgradeable will all those Jaguar systems be? Considering that Read & Co think that Jaguar is going to be their #1 seller, the only real difference I see between AMD & Intel is that AMD gives you lower performance and a lower price with little to no difference in upgradeability (assuming these unsubstantiated rumors are true).

      EDIT: Yeah, go and check Newegg’s selection of individual AMD CPU/APUs, and you’ll notice that there are exactly [b<]ZERO[/b<] individual Brazos chips that you can buy & slap into a Brazos motherboard (which, BTW, Newegg does not sell either unless there is already a chip in the motherboard). Don't worry, you can still get your fanboy fix for the day by completely changing your argument (again) and claiming that AMD invented the idea of non-upgradeable CPUs and that Intel is just stealing from AMD's "amazing innovation". [url<]http://www.newegg.com/Product/ProductList.aspx?Submit=ENE&N=100007671+50001028&QksAutoSuggestion=&ShowDeactivatedMark=False&Configurator=&IsNodeId=1&Subcategory=343&description=&hisInDesc=&Ntk=&CFG=&SpeTabStoreType=&AdvancedSearch=1&srchInDesc=[/url<]

        • Bensam123
        • 7 years ago

        I really don’t understand how you make some of these connections. So because AMD has low-end embedded chips along the lines of Atom and ARM, that’s representative of their entire desktop lineup?

        I don’t even understand what you’re arguing in your edit; it makes no sense. Do you even read my posts before you reply to them? There were all of two sentences to read and comprehend in my original post, and you didn’t address anything I said or attempt to refute it.

          • chuckula
          • 7 years ago

          I really don’t understand your selective filter: When there are rumors that AMD is dumping its high-end line (you know, the ones that actually use sockets) you are the first in line to say that they are complete FUD.

          When there are rumors that Intel is somehow abandoning all socketed systems, you are the first in line to say that we all need to immediately jump on the AMD bandwagon for some undefined reason (like having a slower and power-hungry chip simply because AMD makes it instead of Intel).

          Oh… and you have spent the last few days making an endless dribble of posts about how the Raspberry Pi (which you have never used) is a full desktop replacement. Well, as an actual owner of a Raspberry Pi, let me tell you a secret: it has a soldered-on chip with zero upgradeability! So why is the Raspberry Pi the greatest thing since sliced bread while Intel is the Great Satan?
          Don’t give me some crud about “oh, it’s different markets!” either; you flat-out said that the Raspberry Pi is superior in every way to the NUC, which is clearly aimed at the lower-end desktop.

          Unlike you, I’m not a complete shill: I have repeatedly said that the AMD rumors are overstated and that at a minimum the mainstream APU line like Trinity will continue, with Kaveri being delayed to 2014 but not cancelled. I’m [b<]also[/b<] saying that while Intel will likely introduce some new packaging due to a desire to get Broadwell into tablets, that is not the same thing as saying that all Intel chips will abandon sockets and the end-times are here. Your inability to think rationally, which goes back *years* BTW, has been and continues to be annoying and unimpressive.

      • Deanjo
      • 7 years ago

        AMD is heading down the same path, focusing its efforts on APUs.

        • Bensam123
        • 7 years ago

          And the proof of AMD moving their entire desktop lineup to embedded chips? If there were proof of it, I wouldn’t suggest AMD as a ‘voice your concern’ alternative.

          • Deanjo
          • 7 years ago

          AMD has already made it clear that they are abandoning the enthusiast market. Their bread and butter nowadays lies in the APU market, in which AMD already has quite a few soldered-to-the-board offerings.

          Abandoning the socket means that with every AMD motherboard sold, an AMD processor is sold along with it, instead of raw MB purchases and people reusing old processors. They no longer have to worry about servicing the end user with warranty coverage either, so there are more jobs they can eliminate to reduce expenses. The migration of more and more functionality onto the CPU is making this possible.

          With a CPU soldered to the board, they only have to provide warranty coverage to their OEMs, not the end user. It also frees them from having to adhere to a socket/pin layout, and lets them add an extra pin or two whenever they need to, without worrying about socket restrictions, or about end users bending pins and not installing their HSFs properly.

          Abandoning the socket frees them from layout restrictions and warranty overhead, and guarantees that each motherboard sale is accompanied by a new CPU. That all means more revenue for AMD, an area of their business that badly needs improvement.

            • Bensam123
            • 7 years ago

            I get the whole idea that it’s cheaper for them to do this, but I was never arguing about why this benefits them; I was originally arguing that this will inevitably screw consumers, and that consumers should switch to AMD to discourage the behavior. If AMD does the same thing, then of course that changes, but they haven’t announced it. You’re only assuming they will based on loose speculation (including their ultra-low-power integrated options, which Intel and ARM vendors also have).

      • NeelyCam
      • 7 years ago

      [quote<]When this appears I suggest people start going exclusively AMD if you value having a desktop you can service yourself.[/quote<] Do you [i<]really[/i<] think it's worth the performance, power efficiency, form factor etc. sacrifice? Honestly, like many have already said, even for enthusiasts, CPU upgrade often comes with an upgrade of the whole system - especially now when most consider CPUs from 2-3 years ago perfectly adequate for their needs. The upgrade cycle has gotten longer, so once you pull the upgrade trigger, you upgrade sort of everything. Do you know anyone who's taken out their Sandy Bridge CPU to put in an Ivy Bridge one? I don't. It's just not worth the hassle/cost. I'm guessing it'd be the same thing with Haswell->Broadwell.

        • Deanjo
        • 7 years ago

          [quote<]Do you know anyone who's taken out their Sandy Bridge CPU to put in an Ivy Bridge one? I don't.[/quote<]

          With Intel that has always been a bit of a problem, given their never-ending replacement of sockets with every other generation of processor. On the AMD side, however, quite a few people upgrade their processors because of the longevity of AMD’s socket compatibility. This is personal testimony, so take it for what it is worth, but I know several AMD users whose upgrade cycle consists of: upgrade the CPU… wait a few months for the new boards to come out, purchase a new board and reuse the same processor, wait for the next generation of processors, and replace the processor when prices come down. A soldered-on solution eliminates this leapfrog upgrade path, which is easier to manage from a cost perspective, since you are not always purchasing the latest and greatest all at once.

          • NeelyCam
          • 7 years ago

          Ah I see. I’m sorry – I forgot about my AMD friends here who actually need CPU upgrades… Intel CPUs have been fast enough since Core2Duo

          • derFunkenstein
          • 7 years ago

          Most of the big board vendors allow it. In fact, my Z68 board allows it and if I decide I need a quad, I could yank my Sandy Bridge i3 and put in an Ivy i5. I’m a fringe case, to be sure.

          edit: should have been a reply to Neely, not Deanjo. My bad.

            • NeelyCam
            • 7 years ago

            [quote<]I could yank my Sandy Bridge i3 and put in an Ivy i5. I'm a fringe case, to be sure.[/quote<] I know - I could do the same thing with my board. But I haven't, and sounds like neither have you. Probably for the same reason: it's just not worth the hassle.. Because of that, I don't see this no-socket Broadwell thing as such a big issue

        • Bensam123
        • 7 years ago

        I get the power-efficiency standpoint, but performance per dollar is on par, and form factor is no different for AMD than for Intel.

        It’s not about upgrading down the road; it’s about the original purchase. If CPUs are soldered to the board, that limits your options, and as a whole it will probably make some motherboard manufacturers dry up immediately. Others will simply slim down their lineups and then only make what Intel wants them to.

        It’s an overall loss for consumers. A removal of choice allows Intel to more closely control the prices of the end products. Once they have control over motherboards, they can start to control all the other components of the system instead of just the processor. They’ve already started doing that, and have announced similar plans for Haswell.

          • NeelyCam
          • 7 years ago

          Form factor is very different. I haven’t seen an AMD NUC… or AMD cell phone chips. The gap will grow bigger with Haswell.

            • Bensam123
            • 7 years ago

            AMD makes processors that fit in mini-ITX systems, and there are AMD Brazos systems that are tiny. The NUC isn’t something new; custom-built POS terminals and other systems have been doing this for ages. I even slipped you a link to them on Newegg in that other thread.

            Cellphones are a completely different ball game.

            • NeelyCam
            • 7 years ago

            [quote<]I even slipped you a link to them on Newegg in that other thread.[/quote<] Yes you did, and I responded by saying that everything in that link was either quite a bit bigger or significantly less powerful. NUC may not be "revolutionary", but as an evolutionary step it's big.

            • Bensam123
            • 7 years ago

            We are talking about form factor here, not speed.

            That aside, just because Intel makes one system that’s the size of other low-powered systems, and which consequently overheats and explodes under load, doesn’t mean they own that sector. If you can’t put a system like that under load without it crashing, it’s not worth using in the first place.

            • NeelyCam
            • 7 years ago

            Actually you brought up “performance/price”, so speed does matter to you.

            Sounds like Intel screwed up with the wifi, but I haven’t heard of the CPU overheating, and NUCs most certainly haven’t been exploding. I know you’re desperately trying to suggest that AMD is somehow competitive in the tiny-form-factor segment, but this sort of FUD doesn’t help you make your point.

      • nafhan
      • 7 years ago

      You’ll still be able to service your desktop yourself. The only difference is that the CPU and MB will be a single unit. As more and more functionality gets integrated onto the CPU, and the MB becomes correspondingly simpler and simpler, this kind of makes sense.

      Another point: added functionality may also mean more pins on the CPU. It may be that Broadwell has so many pins that building a traditional socketed CPU without an unreasonably large package is infeasible.

    • srg86
    • 7 years ago

    As I commented in the shortbread, especially for entry-level systems this takes us back to the ’80s and early ’90s. Many, many Am386DX-40 systems had soldered-on CPUs. It appears we’re going back to that.

    As BlackStar stated, though, upgrading a CPU is rare, although I did take that route once (Opteron 148 to Athlon64 X2 on S939).

    • bjm
    • 7 years ago

    I can’t see this being a good thing for builds like the EconoBox. One of the greatest things about the mobo/CPU split is that you can pair a low-cost motherboard with a mid-range CPU to get a rather inexpensive system for the speed. If this story is true, I’d imagine Intel and friends would take it as an opportunity to pair low-end with low-end and mid-range with mid-range.

    You know those nice $45 mobo deals you can find sometimes? Imagine if they were limited to Celerons, rather than being capable of running i5s and the full range of 1155 CPUs. Worse, they could pair them not with just any Celeron, but with the lowest-end G530 Celeron. And if you want virtualization support, you’d have to get a higher-end CPU that only comes on a higher-end motherboard, with features you don’t care about but have to pay for.

    *shiver* I hope I’m wrong…

    • ronch
    • 7 years ago

    This probably doesn’t mean much for most of the folks out there, but for computer hardware enthusiasts this means nothing less than apocalypse.

      • NeelyCam
      • 7 years ago

      The last time I upgraded a CPU on a system was, what, six years ago? E6600 -> Q6700. Cut my hand on the metal case.

      Since then all my upgrades have been full-system upgrades, and handing over my old systems to family/friends

        • ronch
        • 7 years ago

        Still, it’s good to have a replaceable CPU. We’ve all been used to that for decades. Also, if something goes wrong with the board (not a rare occurrence) it’s good to be able to keep the CPU when you ditch the board. Having the CPU soldered on the board would most likely mean higher board prices as well, so if your board conks out you have no choice but to spend more on a CPU+mobo combo.

        I strongly hope this rumor is just that. It’s not like putting a socket on a board is so hard, and Intel has been selling PIBs for a long time. I was expecting smaller systems (e.g. Intel’s NUC) to adopt the same flexibility as today’s PCs, but instead we get this rumor.

        In any case, I don’t think this is a step forward.

          • Jason181
          • 7 years ago

          Actually, I think the CPU + mobo price will go down, all things being equal. I’d suspect that soldering on a BGA CPU is less expensive than manufacturing a socketed mobo plus a pinned/padded CPU.

          I don’t think this is a great thing, but it’s not necessarily an apocalypse either. As Neely pointed out, how many of us keep our systems long enough to upgrade the CPU? I sure don’t. With the integrated PCH, and the already-integrated MCH and GPU, there’s less and less on the mobo. I could see sound integration coming soon too (I could be wrong).

          In some ways, it would make troubleshooting easier: if it’s a CPU or mobo problem, you RMA the combined unit rather than trying to figure out which one failed. But I suppose it will mean a smaller selection of mobos, since they’ll be much more expensive to inventory.

          I could easily see memory soldered to the mobo too for the budget crowd; essentially an all-in-one solution.

    • Aveon
    • 7 years ago

    Does that mean Asus, Gigabyte, MSI, and the rest will pare down their lineups to accommodate more generic CPU-mobo combos?

    If so, that’s really bad, considering my i7 is planted on a B75 mobo!

      • FuturePastNow
      • 7 years ago

      And I generally pair mid-range processors with pretty mobos, which generally have better layouts, more slots, and so on. I’ve never really needed a high-end CPU, but I enjoy those “quality of life” things.

      I have a feeling my ability to do this may be coming to an end soon.

    • BlackStar
    • 7 years ago

    My first reaction is that I hate the reduced flexibility of this approach. But when I think about this more rationally, in the 15 years I’ve been building computers I’ve never actually upgraded a CPU on its own. The only time I considered upgrading a CPU was during the socket 939 era (Athlon64 to Athlon X2), but in the end I opted to wait for the Core 2 architecture (which turned out a good decision).

    My main concern is the _possibility_ of the cooler being soldered onto the CPU and motherboard. I shudder at the thought of 4000-RPM dustblowers slapped onto $0.50 worth of aluminum fins becoming the norm (as is the case with most of the Atom and E-series CPU-mobo combos).

    • Mourmain
    • 7 years ago

    As well as reducing manufacturing costs, this probably cuts into losses incurred due to people extending the life of their systems by upgrading with second-hand processors.

    Just like in game rentals, those second-hand sales take away from the manufacturer’s profit…

      • chuckula
      • 7 years ago

      You do know that Intel originally went to ZIF sockets to encourage people to upgrade CPUs in order to make more money right? If you think that there is some huge market for second-hand CPUs.. well, all of those CPUs need motherboards to run in, so why wouldn’t there now just be a market for second-hand CPU + motherboards? If Intel is abandoning sockets, it has nothing to do with the second-hand CPU market.

    • Narishma
    • 7 years ago

    The X-bit labs article you link to says it’s only for mainstream CPUs, and that the high-end ones will still be offered with sockets as usual. Don’t know what exactly they mean by mainstream and high-end.

      • BlackStar
      • 7 years ago

      Were I to wager a guess, I’d say high-end refers to Xeon and the likes.

        • bhtooefr
        • 7 years ago

        As I’ve been predicting… the consumer desktop and laptop space slowly gets merged with the tablet space.

        The business space splits between consumer machines for low computing demand workers, and workstation-class machines (which will still be around, in their present form) for high computing demand workers.

        I’m not sure what operating system(s) the workstations will be running, though. I’d guess Windows and Linux.

      • kureshii
      • 7 years ago

      Probably high-end means X58-, X79-based systems and similar.
