Life after Moore’s Law

Recently, I’ve been turning the pages of Michio Kaku’s new book, Physics of the Future. While much of its content echoes what was said in his earlier work, Physics of the Impossible, one section in particular grabbed my attention: Kaku’s discussion regarding the end of Moore’s Law.

This topic has received no shortage of attention. Pundits have predicted the end of Moore’s Law ever since its inception some 46 years ago. Even so, few people today seem to agree on a precise day of reckoning. Kaku believes that by 2020, or shortly thereafter, transistors will run up against their atomic size limits and Moore’s Law will break down. Years ago, Intel predicted this event would occur at a 16-nm process node with 5-nm gates, yet it has plans on the table for 15-nm and 11-nm process nodes going forward. When wires and gates get too small (about five atoms thick), electrons begin to stray from their dictated paths and short circuit the chip. This issue makes shrinking the transistors further a futile endeavor.
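To put rough numbers on that timeline, here is a back-of-envelope sketch (an illustration only, not Kaku’s math): assume the traditional ~0.7x linear shrink every two years, start from the 22-nm node now ramping into production, and treat roughly five silicon atoms (about 1 nm) as the point where the wheels come off.

```python
# Back-of-envelope projection (illustrative assumptions, not from the book):
# each two-year node shrinks linear dimensions by ~0.7x, and ~5 silicon atoms
# (roughly 0.2 nm apiece, so about 1 nm) is taken as the breakdown point.
feature_nm = 22.0          # node ramping into production as of this writing
year = 2011
atomic_floor_nm = 5 * 0.2  # assumed: five atoms at ~0.2 nm each

while feature_nm > atomic_floor_nm:
    feature_nm *= 0.7
    year += 2
    print("{}: ~{:.1f} nm".format(year, feature_nm))
```

Crude as it is, the exercise puts the wall somewhere in the 2020s, which is broadly the window Kaku and the chipmakers point to.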

When transistors can no longer be made smaller, the only way to continue doubling the transistor count every two years is to build upward or outward. Stacking dies poses challenging heat dissipation and interconnect problems, while making larger dies and linking them together in a single package is only sustainable up to a point. Indeed, with 22-nm production ramping up, it seems that the zenith of silicon-based IC design is finally a legitimate point on the horizon.

The paranoid among us may see the recent netbook mantra of “good enough computing” as a ploy by CPU manufacturers attempting to acclimate users to an impending period of diminishing performance returns. More interesting to me than when we break the law, though, is what the consequences will be as advancements in cheap computational power begin to level off. Will research and advancement in other areas, such as genomics, decline at the same rate as commodity computing power? Will longer silicon refresh cycles pour salt in the wound of an already ailing world economy? Will people even notice or care?

I believe those of us in the enthusiast realm will indeed notice when transistor counts begin to level off, but for the vast majority of everyday users, it probably won’t matter. As we near the end of the decade, the bulk of our daily computing burden will likely be removed from our desks and placed on the backs of “cloud” companies that handle the heavy lifting. This arrangement will provide a comfortable layer of insulation between the masses and the semiconductor manufacturers, as scaling issues will be dealt with quietly in the background, giving users an end-product that “just works.” You’ll still be able to find laptop-toting hipsters at your local coffee spot, and locally installed software will still be commonplace. But by the end of the decade, the size of your Internet pipe may be more important than the speed of your processor.

As transistor counts stagnate, a combination of clever parallel programming techniques and engineering tricks at the silicon level will become even more important than they are today. These tweaks will be required to keep the industry moving forward until a post-silicon computing era can take root. There are several prospects on the radar to replace traditional silicon chips, including graphene, light-based logic circuits, and quantum processors. The next big thing beyond silicon is still anybody’s guess, though.
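To make the “clever parallel programming” part a bit more concrete, here is a minimal sketch in Python with a made-up workload: once single-core speed stops improving, the only way to finish a job faster is to split it across the cores you already have.

```python
# Minimal sketch of the parallel-programming angle (hypothetical workload):
# when single-core speed stalls, throughput has to come from spreading
# independent chunks of work across the cores a chip already has.
from concurrent.futures import ProcessPoolExecutor
import math
import os

def heavy_task(n):
    """Stand-in for a CPU-bound chunk of work."""
    return sum(math.sqrt(i) for i in range(n))

if __name__ == "__main__":
    jobs = [2_000_000] * 8  # eight independent chunks of work

    # Serial: limited by the speed of a single core.
    serial_results = [heavy_task(n) for n in jobs]

    # Parallel: limited by how many cores you can throw at the problem.
    with ProcessPoolExecutor(max_workers=os.cpu_count()) as pool:
        parallel_results = list(pool.map(heavy_task, jobs))

    assert serial_results == parallel_results
```

The catch, of course, is that not every workload splits this cleanly, which is why the programming side of the problem is considered at least as hard as the silicon side.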

The biggest questions in my mind, however, revolve around the world economy and its reaction to silicon scaling issues of the future. Will the mantra of “good enough,” coupled with incremental improvements over time, be sufficient to stave off a meltdown in the tech sector? Will device upgrade cycles lengthen, or will users continue to purchase new toys at the same rate, even though they aren’t much faster than their predecessors? Will software begin to outpace hardware, creating enough computational scarcity that market forces drive efforts to advance computing to the next level?

There are a lot of unknown variables at play here, and the capital required to research and develop silicon’s successor is staggering. Further damage to the financial sector could potentially slow down progress toward the post-silicon era if R&D funding dries up. Similarly, the fallout from various government debt crises could limit future investments in technology, despite immense interest in quantum computing for cryptography.

Before this starts sounding too much like a doom-and-gloom, fear-mongering editorial, it should be understood that the end of Moore’s Law does not spell out the end of all advancement. It merely suggests that the gains we’re used to seeing come online every two years will be on a slower schedule. Progress will march on, and there will be ample time to develop new technologies to pick up where silicon leaves off. Of all the items on the post-silicon wish list, the technology that makes me feel the most warm and fuzzy inside is superconductors—specifically, the potential discovery of a room-temperature superconductor.

A superconductor is a material that loses all electrical resistance when cooled below a certain temperature. In theory, assuming an absence of outside forces, electrons could zip around a superconducting ring forever with no loss of energy. Materials have already been discovered that lose all resistance at temperatures easily attainable using cheap liquid nitrogen. However, finding a material that does so at room temperature would represent the holy grail for scientists and electrical engineers. Imagine a processor whose interconnects and transistors were crafted from a superconducting material. Such a beast would be able to operate with next to no electrical leakage and minimal heat generation despite running at insane clock speeds. There are many other novel and mind-boggling uses for superconductors, particularly in the transportation sector, but such a discovery would be a huge boon to the technology world. Nothing out there proves that room-temperature superconductors actually exist, but the end of Moore’s Law could serve as incentive to ramp up the search efforts.
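To see why zero resistance is so attractive, here’s a toy calculation; the current, resistance, and wire count below are made-up figures for illustration only, and the superconducting case simply zeroes out the I²R term.

```python
# Toy comparison with made-up numbers (not real chip figures): resistive power
# lost as heat in ordinary interconnects versus superconducting ones.
current_a = 0.001        # assume 1 mA flowing through a single interconnect
resistance_ohm = 100.0   # assume 100 ohms for a long, thin copper trace
wire_count = 1_000_000   # assume a million such wires on a die

p_normal = current_a ** 2 * resistance_ohm * wire_count  # P = I^2 * R, summed
p_superconducting = 0.0                                  # R = 0: no resistive loss

print("Ordinary interconnects:       ~{:.0f} W lost as heat".format(p_normal))
print("Superconducting interconnects: {:.0f} W resistive loss".format(p_superconducting))
```

Real chips lose power in other ways besides resistive interconnect losses (a commenter below makes exactly this point), but being able to delete that term entirely, at full clock speed, is why the idea is so tantalizing.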

Looking ahead to a time when chip makers are no longer able to significantly shrink transistors, how do you think the world will cope? Personally, I predict things will be business as usual in a post-Moore world. Worst case scenario, compute power will expand to meet demand inside large cloud-based server farms, where performance can grow by adding more CPUs to the mix and use is metered and billed like a utility. Best case scenario, we’ll all eventually be playing holographic Crysis on personal desktop quantum computers or shattering clock speed records with cool-running, superconducting CPUs. For those interested in computing and the physics behind it, the next couple of decades should provide quite a show. Who else wants some popcorn?

Comments closed
    • Draphius
    • 8 years ago

    I hope the post-Moore’s Law chipmakers begin to actually try to optimize their chips instead of just using the brute-force method. Who knows, maybe a better-optimized chip will run much faster every generation as they find new and inventive ways to squeeze whatever they can out of that tech. From the looks of it, though, graphene seems to be the next logical step up, and I think we might see it arrive before Moore’s Law expires.

      • travbrad
      • 8 years ago

      What do you mean by optimize? The per-core and per-clock performance is still improving all the time, although perhaps not as quickly as in the past. A lot of focus has been shifted to reducing power consumption too, rather than sheer performance.

      Just look at the P4 if you don’t think any optimizations have been made lately. It offers a fraction of the performance of current CPUs and uses more power while doing it.

    • Kougar
    • 8 years ago

    I’ll take an extra-large popcorn, thanks!

    It should prove interesting indeed. And they very well may find a bag full of tricks to stave off the end of Moore’s Law, eking out a few extra years should nothing radically new come along first.

    I still remember the news of IBM developing a 3D silicon processor with watercooling channels built right into the silicon, several years ago now. Plenty of cooling alternatives already exist. Tri-gate transistors aren’t anything radical either, but they further shrank die size and cut power leakage. I’m sure there are more tricks up Intel’s sleeve even before calling upon exotic cooling solutions… as another poster mentioned, even carbon nanotubes are a possibility, as those exhibit superconducting properties in some situations.

    • Antimatter
    • 8 years ago

    While the end of Moore’s Law might mean the end of silicon chips, it will not end the exponential growth of computing performance. New technologies like carbon nanotubes or quantum computing will continue to drive that exponential growth.

    http://www.kurzweilai.net/the-law-of-accelerating-returns

      • Krogoth
      • 8 years ago

      The whole singularity thing is just a technological version of the Rapture.

      It painfully ignores the realities of physics, economics and, for lack of a better term, human nature.

      New computing models will have their own set of limitations and challenges to overcome. It is rather optimistic to say that the exponential growth of computing power will continue indefinitely.

        • Antimatter
        • 8 years ago

        While the singularity and the indefinite exponential growth of computing are questionable, I don’t think the end of Moore’s Law is an end to the growth we have experienced in computing; there are other computing paradigms that will pick up where Moore’s Law leaves off.

      • APWNH
      • 8 years ago

      I don’t understand this obsession with saying that silicon chips are “going to end”. No, they are not going to end. What craziness is this? They will simply no longer obsolete each other at quite the same rate they have been. Chips, and software, will both still get faster and faster, just no longer at an exponential rate.

      Last time I checked there wasn’t anything (other than the phenomenon of bloatware) which necessitated keeping up with that exponential rate anyway. Has the world come to accept that the ONLY solution to an optimization problem is to throw more clock cycles at it? It’s madness.

      The only types of things I see this affecting are extremely compute-bound tasks like computer graphics. But if we’re talking about graphics, the leap to parallel hardware started decades ago. And I wouldn’t say Moore’s Law even really factors in, because you can always, always hook up more CPUs and more GPUs to a render farm.

        • Antimatter
        • 8 years ago

        Unless you know a way of building a processor with transistors smaller than silicon atoms using silicon atoms, the era of silicon chips will end.

    • etymxris
    • 8 years ago

    It doesn’t take a genius to figure out what will happen. Moore’s Law has been on the decline for the past decade now anyway. So…

    1) Multiple cores — already passe.
    2) Specialized circuits for things like AES — already happening.
    3) Renewed focus on application performance — see the web browser wars.
    4) Applications will slowly start making use of multiple cores — this has been happening for the past few years.

    Honestly by 2020 it’s just going to be more of the same. Thermal and power efficiency is much more important than raw performance, especially in the smart phone generation.

      • cynan
      • 8 years ago

      Except that lowering thermals and increasing power efficiency is the same issue as increasing performance – just approaching it from the other end.

        • etymxris
        • 8 years ago

        I wasn’t implying anything different. The ending note about smartphones was just to show that we’ll be doing (1)-(4) to improve low power efficiency more than we will to improve absolute performance.

    • jstern
    • 8 years ago

    Obviously there’s a reason, but I’ve always wondered what it is that holds CPU makers back from going from, let’s say, a 220nm CPU straight to 32nm. Since I don’t know much about these things, I’ve always wondered why they can’t just do some calculations, look ahead at what is needed, and shrink things dramatically. Obviously there’s a reason, but can someone try to explain it? It’s something that interests me.

      • DavidC1
      • 8 years ago

      It’s very simple: next-generation products are the fruits of learning from past generations. You can’t expect an 8-year-old to do as good a job as a 30-year-old on a project.

      There’s obviously a varying degree, but they are just as bound by real-world limits as we are. Employees have limits on how much they can do in their time, there are budgets to be met, and customers have expectations (shipping millions of products means a lot of verification to make sure they all work properly).

      Before the Wright brothers, there was no one who would believe in the concept of powered flight. A few hundred years ago, there was no concept of flight AT ALL. Now we take technology for granted because there are very hard-working, very smart people who keep advancing things that no one else can.

      • travbrad
      • 8 years ago

      New issues/problems show up with each silicon shrink, so they need to solve those before they can move on to the next shrink.

      That happens with all technology. It’s always a gradual effort and a culmination of past research. It’s the same reason the first cars had low horsepower engines and got terrible mileage. They would barely even be usable as cars nowadays, but we had to start somewhere. 🙂

      Or in the words of people who said it much better than I ever could “If I have seen further it is by standing on the shoulders of giants” and “Rome wasn’t built in a day”

      • UberGerbil
      • 8 years ago

      After the Wright Brothers demonstrated powered flight, what held them off from building a 747 as the follow-up? They just needed to do some calculations and figure out what was needed, right?

    • Krogoth
    • 8 years ago

    Moore’s “observation” has already been invalidated by physics and economics. IMO, it is really a corollary of exponential growth.

    Prescott was the first platform to break it. It didn’t take much longer for the other semiconductor giants to have their own issues.

    The new focus has been trying to squeeze more performance out of a given power and transistor count envelope. That’s why CPU performance on a per core basis hasn’t improved that much since Core 2 and Athlon 64. The only difference is that modern CPUs have more cores and have greater power efficiency.

    The days of rapid performance and transistor growth are over; expect more gradual and less frequent jumps until we move on to a new computing model.

      • EtherealN
      • 8 years ago

      While this is only one test and one application (no time to do anything exhaustive), I’d say it calls your statement about per-core performance into question. Look at the single-thread numbers:

      https://techreport.com/articles.x/20486/12

      0.85 for the C2Q9400
      0.61 for the C2D6400
      1.54 for the 2600K

      The Q9400 got 3.23 with all four cores working; the 2600K got 6.92.

      Not exactly Moore’s Law type stuff, but we are still talking almost double the performance on a single active core in 29 months, and more than double when running multithreaded on the same number of cores. Still quite considerable.

      So yeah... “only difference” is a bit more than power efficiency and core count. 😉
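      (For the curious, a quick bit of arithmetic on the single-thread numbers quoted above, assuming they are representative:)

```python
# Quick arithmetic on the single-threaded scores quoted above.
old_score, new_score = 0.85, 1.54   # C2Q9400 vs. 2600K, single thread
months = 29

speedup = new_score / old_score
annualized = speedup ** (12 / months)
print("Total speedup:   {:.2f}x".format(speedup))             # ~1.8x
print("Annualized rate: {:.2f}x per year".format(annualized)) # ~1.3x per year
```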

    • sluggo
    • 8 years ago

    I think once “good enough” performance has been established, designers can turn their attention to minimizing leakage and other current losses. Power efficiency may well be on the top of the list when it comes to product differentiation and customer wants. If I can get last-year’s laptop performance with 50 percent longer battery life, I’ll be happy.

    Tiny feature size is good from the standpoint of clocking and die size, but better power performance may come from less than state-of-the-art processes.

      • smilingcrow
      • 8 years ago

      They are already focussing heavily on power efficiency.

        • UberGerbil
        • 8 years ago

        In fact as they’ve passed below 100nm they’ve only been able to move to each successive node by first making huge strides in “minimizing leakage and other current losses.”

    • Wirko
    • 8 years ago

    Transistor shrinking may hit the limit imposed by economics earlier than that imposed by physics. If Intel, IBM and others don’t have the resources to develop, say, 2-nanometer technology, then there will be none and the world will be stuck at 3 nm (and with no fusion reactor and no tourist trips to Jupiter’s moons, either).

    • ShadowTiger
    • 8 years ago

    This article inspired me to read up on Optical Computing… basically using photons instead of electrons.

    This technology hasn’t really made any breakthroughs, but this is likely because we are used to thinking in terms of electrons and the current technology already has so much progress/investment behind it.

    I think that if we ever reach a point where we simply can’t make transistors faster anymore, we can finally start exploring a complete overhaul of computing using photons instead.

      • EtherealN
      • 8 years ago

      There’s more to it than just over-reliance on an old paradigm though. Getting the breakthrough on anything that is pretty much completely new is extremely difficult; for example, compare the advances made in the 50 years before and after the Wright Flyer. There was a very very slow search for the right way to do it with lots of failure and little progress, but once the key problem was solved things took off REALLY fast.

      I expect to see something similar happen with the next big thing in computing technology; one of these days someone gets it working properly and in a scalable fashion after plugging away at it with not much apparent success for decades, and things will go VERY fast after that. The only real questions are when this will happen, and what it is that “gets it done”.

        • Grigory
        • 8 years ago

        “things took off REALLY fast”

        Very punny, Sir. 🙂

          • EtherealN
          • 8 years ago

          Oh god… I didn’t even realise I had done that. Holy heck, I deserve punishment.

    • kvndoom
    • 8 years ago

    Fewer people are going to care than we, an enthusiast site, might think. There are people who need faster and more powerful computers, and there are people who do nothing besides surf the internet and play flash games. Computers are not becoming obsolete as quickly as they used to.

      • APWNH
      • 8 years ago

      The best thing we can hope for is that the dying of Moore’s law might actually get Adobe to improve the performance of Flash. There is just so much computational power being wasted there (and elsewhere) that once hardware speed improvements level off, software speed improvements will keep us busy for years to come.

    • anotherengineer
    • 8 years ago

    So the real question now, is will Moore’s Law get downgraded to Moore’s theory?

      • UberGerbil
      • 8 years ago

      Despite the popular formulation, it always would have been better expressed as an Economic Imperative rather than as a “Law”

      And the imperative definitely remains, even if (to paraphrase von Clausewitz) it must be continued by other means.

        • paulWTAMU
        • 8 years ago

        applying Clausewitz to PCs. That is impressive.

      • FakeAlGore
      • 8 years ago

      Not sure if serious

      <insert that one picture of Fry from Futurama here>

      or fails to understand distinction between theory and law

    • stan4
    • 8 years ago

    I think long before silicon reaches its limit, there will be a replacement for the PC standard, which was created by IBM a loooong time ago and desperately needs an update.

    • tejas84
    • 8 years ago

    Firstly, you all need to understand that America is not the world. Secondly, India, China and Russia have a hunger for silicon transistors and Moore’s Law. Intel and AMD will be fine thanks to these countries. Graphene, photonics or quantum qubit computing will solve the problems of silicon.

    They also happen to have close to 3 billion combined population. Plenty of money and opportunities for the PC to flourish. They also are churning out the population of the US in bright science graduates every 5 years. I am looking to Eastern Europe and Asia because that is where the world will be focused and where US companies will want revenue from as you decline.

    Cloud Computing may become big in China eventually since the govt can easily control people via intercepting their data.

    America has had its time (upsets me to say this), but every great nation has to decline (see the Roman Empire and the British Empire). That does not mean that Moore’s Law and your great tech have to fall with your nation.

    I do love America and have a close cousin who is a proud and serving US Marine Sergeant veteran of Iraq and Afghanistan but you have sown your own problems.

      • UberGerbil
      • 8 years ago

      A big chunk of those countries are skipping PCs entirely and going straight to smartphones. Combined with docking stations and the cloud, they may have no real use for PCs as we know them.

      BTW, I remember the ’70s, when everyone was convinced the US was doomed, bankrupt (morally and financially), and already over. And the ’80s, when Japan Inc. was buying up American companies and real estate and was clearly going to dominate the world economy. China and India have unleashed a lot of dormant potential, but they also have to deal with some very real, and very large, internal inequities and tensions. The future, as always, is uncertain.

      • cynan
      • 8 years ago

        [i]They also happen to have close to 3 billion combined population. Plenty of money and opportunities for the PC to flourish. They also are churning out the population of the US in bright science graduates every 5 years. I am looking to Eastern Europe and Asia because that is where the world will be focused and where US companies will want revenue from as you decline.[/i]

        I agree with you overall, but the above paragraph is a gross oversimplification and full of conjecture.

        Sure, the way things are headed, China is the next “superpower”. The question is when and how this will run its course. And how do you come to the conclusion that Eastern Europe is so promising? It’s still bogged down by what’s left of the USSR. Eastern Europe (with maybe a couple of exceptions for countries that are close to the West - the Czech Republic, maybe?) is in a lot of trouble for a while to come.

        You can’t just look at the number of people graduating from university. And unless you are a government official in China, how would you ever even get hold of that information anyway? A lot of the infrastructure (government-funded research and tech companies) needed to make use of highly educated, specialized graduates simply isn’t quite there yet in China, and especially not in India. Because of this lack of infrastructure, many of these graduates are more poorly trained than their North American counterparts...

        The fact remains that the US still owns the most “intellectual property”, simply because that is where most of the companies that subsist on science and technology are situated. And that’s not going to change any time soon (though it may eventually, and probably will the way things are going). Many of the people who graduate with advanced degrees in science, math and engineering end up coming to North America. Though the reasons for this are changing as middle classes grow in countries like China and India, this is still where you have to go (and will be for a while) if you want to work for companies like Intel, Google, etc.

        Again, I agree with your sentiment, but I think you’ve jumped the gun a bit. Maybe in another decade or two...?

        • EtherealN
        • 8 years ago

        What, you have to go to the US to work for Google? Wow… I guess the guys at the Google facilities in Stockholm never noticed that. Shame on them for developing Google products in a place where it can’t be done! 😛

        While a lot of your critique is correct, I feel you made the same mistake the guy you are critiquing did. 😉

          • cynan
          • 8 years ago

          OK, I’ll admit my second last sentence was a generalization: Of course Google and many other large global companies have multiple sites around the word. The fact remains that most of these jobs are in the USA because this is where the headquarters of most of these companies are based. (Besides, the previous discussion was largely ignoring Western Europe specifically, focusing mostly on China and India vs the “west” as a whole. And yes, Google even has a few offices in these locations as well…)

          To continue with the Google example, you can just go to Google’s website and search job postings that require high degree of technical skills (ie, Software Engineer). There are less than a dozen such jobs posted for Sweden (only 2 for software engineers) while there are hundreds of postings for the US.

          After small businesses, which provide most of the local jobs (service industry, etc) and other than the oil companies, these “tech” companies are largely what is keeping the US economy afloat, for unlike India and China, there is no more manufacturing base in the US to speak of.

            • ludi
            • 8 years ago

            [quote]there is no more manufacturing base in the US to speak of.[/quote]

            Incorrect -- US manufacturing output is at historical highs. However, it relies heavily on automation and employs far fewer people per unit output than in the past, and is utterly dwarfed by the size of the service economy. This is very typical of a "mature" economy.

            • cynan
            • 8 years ago

            [i]Incorrect -- US manufacturing output is at historical highs.[/i]

            This is irrelevant. Jobs per capita in manufacturing in the US are a small fraction of what they were only a few decades ago, when the American automotive and steel industries were still "solvent". You contradict my post, only to largely agree with it after making this first statement...

            • EtherealN
            • 8 years ago

            Well, Sweden is quite a few times smaller, too. There are individual urban areas in the US that double Sweden’s population. So yes, overall fewer jobs isn’t a weird thing. My point is that Sweden isn’t exactly unique in this, and companies will (obviously) end up employing more people in their home country than in other countries. There are major chipmakers, for example, all over the planet: UK, Germany, Israel, Taiwan, Korea, China (and no - not only “manufacturers” for China, actual R&D of independent processors) and so on. Typically, the Korean company will employ more people in Korea than in the US. (I guess one exception is Foxconn as far as manufacturing goes, since they have way more of their stuff in the People’s Republic rather than Taiwan, but they’re mainly manufacturing and not development, as far as I know.)

            There was an interesting article someplace – I think it was linked in a news item on this site, but I’m not sure – that dealt with the fact that the new Kindle couldn’t be made in the US even if they wanted to, because the competence is all abroad. Harder to get proper numbers on the proper R&D of course, but even if we were to assume that no R&D happens in those countries (which is extremely false, but for argument) the US itself would still not be helped. It doesn’t have the infrastructure.

            Not necessarily terrible of course – my country doesn’t have the “infrastructure” to cover its petroleum needs (I think there’s some oil shale a little here and there, but afaik they’ve barely even surveyed it, let alone done proper prospecting). We can still live through other exports (and actually have a net surplus on trade). The problem would come if other countries stopped needing our exports – and if you end up where the majority of your export is knowledge and research rather than raw resources and/or the products made from them (like wood products and steel in our case), it is easier for other countries to just “get their own”. And we actually see a lot of the brain-growth in the developing world here – we are a popular destination for Pakistani, Indian and Chinese students; they come here, get their degrees, work for a couple of years, then bring their knowledge and experience back home. Not a bad deal for us since the unis get paid and they work, so all fair – but they’ll only have to do that for so long. After a while they’ll be able to cover their teaching and a lot of their research on their own, and for whatever they can’t do on their own they’ll just collaborate across borders the same way Europe does in particle physics and astronomy and so on and so forth.

            I don’t think it’ll end in disaster for the US, it’ll just end up “different”. My country’s economy isn’t wrecked just because it lost the huge shipbuilding business it used to have – it’s just different and we’re doing other things.

            • cynan
            • 8 years ago

            [i]Well, Sweden is quite a few times smaller, too. There are individual urban areas in the US that double Sweden’s population. So yes, overall fewer jobs isn’t a weird thing. My point is that Sweden isn’t exactly unique in this, and companies will (obviously) end up employing more people in their home country than in other countries. There are major chipmakers, for example, all over the planet: UK, Germany, Israel, Taiwan, Korea, China (and no - not only “manufacturers” for China, actual R&D of independent processors) and so on. Typically, the Korean company will employ more people in Korea than in the US. (I guess one exception is Foxconn as far as manufacturing goes, since they have way more of their stuff in the People’s Republic rather than Taiwan, but they’re mainly manufacturing and not development, as far as I know.)[/i]

            I did not say there was no development of technological intellectual property anywhere else in the world. Korea (notably, Samsung) is a good example... I said that American companies still make up the majority. And yes, this is likely to change eventually, just not quite yet.

            And of course the Kindle couldn’t be made in the US. But this has everything to do with competitive manufacturing costs and nothing to do with how feasible it would otherwise be to set up such manufacturing facilities on US soil. I’m not aware of any large-scale manufacturing conducted in the US that could be done for less cost elsewhere - this is a large contributing factor to the wealth inequality in the US that is slowly strangling it. Apple doesn’t manufacture anything in the US (as far as I know) either, and yet they are the largest American consumer tech company and employ hundreds of engineers working in software and hardware. I can’t remember when any computers or computer components (other than processors at some fabs) were last made in North America...

            [i]The problem would come if other countries stopped needing our exports – and if you end up where the majority of your export is knowledge and research rather than raw resources and/or the products made from them (like wood products and steel in our case), it is easier for other countries to just “get their own”. And we actually see a lot of the brain-growth in the developing world here – we are a popular destination for Pakistani, Indian and Chinese students; they come here, get their degrees, work for a couple of years, then bring their knowledge and experience back home.[/i]

            I half agree with you here. However, a product is a product, whether intellectual or a tangible good or service. And no, you can’t just “get it somewhere else” and still make a globally competitive business of it, because someone else will have had “it” first. However, I’m not saying that companies that focus on intellectual property rather than hard goods are better... they’re not - it’s just that this is the one sector where the US still leads the world. For now.

            One of the reasons the US’s future looks grim is that it is not sustaining the quality of, and even more so the accessibility to, the education required to perpetuate its lead in “intellectual superiority”. As a result, US-based companies need to hire more and more foreign graduates - not just because they are better than their American counterparts, but because there simply aren’t enough American counterparts. (I see the same influx of foreign graduate students, etc., in Canada as well.)

            The cost of higher education is increasing in the US, while the average American has less and less wealth. And countries like China and India are just beginning to acclimatize to the existence of an educated middle class (where most of these grads come from). However, for every one of those, there are hundreds more poor who work as farmers or migrant workers, etc. For now, this new middle class is a novelty, but I can’t see there not coming a time when there will have to be some sort of reckoning due to the growing rift between these classes. Ironically, the US is likely headed there as well due to growing wealth inequality, but from the other direction.

            • EtherealN
            • 8 years ago

            “I did not say there was no development of technological intellectual property anywhere else in the world. Korea (notably, Samsung) is a good example… I said that American companies still make up the majority.”

            Where? Not here. Apple has an acceptable market share, sure (with tech sourced from the UK and Korea, and manufacturing sourced in China), but what about the other big ones? Ericsson? Japan/Sweden. Nokia? Finland. HTC? LG? Samsung? For PCs - well, a couple percent for Apple, currently cornered on CPU tech developed in Israel. Same for most Intel-platform stuff sold by Dell and HP. Asus is LARGE here, as is Acer. Then of course we have MSI and all the other guys who are, conspicuously, not American.

            To be honest, the American presence is fairly minor. Financially - yes, there’s a lot of driving American money behind a lot of it. But the tech, the development, the manufacturing… I don’t see much of the USA there anymore. At least not here. I haven’t checked for other countries, but with the EU being a common market the same way the US is, I suspect the differences will be relatively small. As for manufacturing in the US - there still is some there, and there used to be a LOT more. In the ’70s and ’80s the British were proud of stealing manufacturing market share from the US. (Then Taiwan, Korea and Japan came around and everyone got owned.)

            “For now, this new middle class is a novelty”

            A novelty with a population the size of the entire North America. Ten more years, and it’ll be a novelty the size of North America + Europe.

            When that happens, the monetary benefit of living and working in the US compared to their home country will be nil. (This of course also applies, to an extent, to the EU as well)

            American “wealth inequality” and average Americans getting poorer would be something I’d like numbers on, though of course it’s not really the topic of this forum. It just smells of ideology.

            • cynan
            • 8 years ago

            First of all, I do not consider some of those companies you mentioned to have a large intellectual property core. Acer, Asus, MSI, etc, develop very little intellectual property to speak of (though, yes, they do somewhat, and perhaps this is growing). They largely “assemble” intellectual property designed by others into consumer products. And yes, low cost assembly lines in most industries has been an area where Asia has, in the last couple of decades, “owned everyone”.

            Ironically, HP used to be heavily into tech R&D, but recently it is becoming more and more like Acer or Dell. Many of the cell phone makers are similar, though there is more of an emphasis here on software design, which involves more intellectual property development. LG and Samsung subsist to a large degree on this too, but they also are heavily involved in intellectual property development, which is why I thought Korea was a better example of non-“Western-based” intellectual property development than, say, China or India.

            Yes, the middle classes in these countries are growing - but the “educated” middle class (those with university or equivalent degrees who might work in fields of intellectual property development) is still not so large, and I’m skeptical that it is anywhere near as large as you say. Yet.

            When that does eventually happen, you may be right about there being no advantage for the educated to emigrate. Though that depends on other factors as well. I think it will most largely depend on what happens to the lower classes in these populations as the middle classes grow – whether or not they will “demand” more of a share of the growing wealth of the middle classes (though this also depends on future political climates).

            Wealth is relative and finite (more or less). The reason manufacturing in the West got “owned” by China, etc., is simply that a large enough number of people there were “suddenly discovered” by the West who were willing to work for less than their Western counterparts, and that Western countries were short-sighted enough to exploit it. Before this, there was no alternative, and as a result, more of the wealth was shared with the “lower” working classes.

            This was a major influence on the current downward spiral of wealth inequality in the West, and especially in the US. Other factors include the seemingly increased greed of the wealthiest to accumulate even more wealth, regardless of the consequences (look what the 2008/2009 mortgage crisis alone did to the economy of the US and other Western countries), and the increasing cost of energy (oil), which so many Americans depend on (a dependency almost unlike any other country’s - developed partly because oil was so relatively cheap for them for so long).

            As for asking for references (“numbers”) - we’ve both been throwing around numbers for this entire topic… If you are interested, you can look at how the personal debt of Americans (credit cards, mortgages) is higher, adjusted for inflation, than it has ever been. This is a direct result of this growing income/wealth disparity. Other signs include increases in social and health problems (anything from teenage birth rates to obesity): among developed countries across Europe and North America, it has been shown that many of these increase in proportion to income inequality relative to other countries, the US now being one of the worst.

    • ronch
    • 8 years ago

    Today’s x86 CPUs, which are the most popular, are not making the most out of available silicon. Way too many instructions, lots of decoding work (i.e., CISC to RISC and back), mostly serial processing, etc., are the reasons. Going forward, I would think the industry will really have to crack the problem of parallelization. There’s already much talk about GPGPU computing. I suppose it may offer a temporary answer, as it offers massive computational performance per square millimeter of silicon, but it’s still pretty much in its infancy. This doesn’t sound like it’ll totally solve the problem of how to make transistors endlessly smaller, and sooner or later we’ll run up against a wall again, but GPGPU computing should make the most out of the silicon and should at least give us more time to ponder how we should go forward.

    • Rakhmaninov3
    • 8 years ago

    I’m developing my fine dexterity.

    I’ll be the last one laughing when I’m able to beat a Gulftown using my 1957 abacus.

    • thesmileman
    • 8 years ago

    Graphene will help out for a while.

    • Theolendras
    • 8 years ago

    Well, maybe the new approach might revolve around abandoning silicon for something with faster switching capability, like graphene, and going back to the clock-speed wars of old while ever pushing cooling techniques forward, like intra-chip cooling.

    I still feel storage I/O will be the bottleneck for most usage if we do manage to keep Moore’s Law going for the next decade.

    • AGerbilWithAFootInTheGrav
    • 8 years ago

    Cheap liquid-nitrogen-cooled superconducting CPUs… that will pay off, as it does not require that much energy to give interesting enough returns… I am not sure, though, that those materials will be conducive to being economical themselves… but once we reach those temps, temps themselves should not be the issue…

    Cloud is overhyped… good for some uses but not a desktop replacement, especially for “power users”… due to the latencies involved it is like a return to the 1990s…

      • Theolendras
      • 8 years ago

      Most of the latency issue is attributable to the telcos’ network improvement rate, which is traditionally much slower than computing’s.

    • TurtlePerson2
    • 8 years ago

    The real problem in integrated circuits these days is actually power dissipation and not Moore’s Law ending. If you simply Google image search “power density moore’s law”, then you can see that as transistor scaling has increased the power dissipation per unit area has increased dramatically.

    This wasn’t always the case. It used to be that whenever a process was shrunk, the voltage used was shrunk proportionally. This kept the power density the same as the previous process. This worked well until we got to a little bit above 1V. Nowadays we can’t shrink the voltage without having disastrous performance consequences. We now do what is called “fixed voltage scaling” which increases the power density with every die shrink.
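    For anyone who wants the arithmetic spelled out, here is an idealized sketch of the difference described above (dynamic power only, leakage ignored; the ~0.7x-per-node factor is the usual textbook assumption):

```python
# Idealized scaling arithmetic (dynamic power only, leakage ignored):
# each node shrinks linear dimensions by k ~ 0.7. Classical (Dennard)
# scaling also lowers voltage by k; "fixed voltage" scaling does not.
K = 0.7

def relative_power_density(generations, scale_voltage):
    area = cap = volt = freq = 1.0
    for _ in range(generations):
        area *= K * K      # area per transistor shrinks as k^2
        cap *= K           # capacitance shrinks with dimensions
        freq /= K          # gates get shorter, so frequency rises
        if scale_voltage:
            volt *= K      # Dennard scaling: supply voltage drops too
    dynamic_power = cap * volt ** 2 * freq   # P ~ C * V^2 * f per transistor
    return dynamic_power / area              # power per unit area

print("Dennard scaling, 4 nodes:      ", round(relative_power_density(4, True), 2))   # ~1.0
print("Fixed-voltage scaling, 4 nodes:", round(relative_power_density(4, False), 2))  # ~17x
```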

    An even bigger problem is something called leakage current. When a transistor is turned off, a small amount of current leaks through which dissipates a little bit of power. It used to be that this was negligible (less than 1% of TDP), but now it is approaching 50%. Unfortunately, this gets worse and worse with each die shrink. If you do a Google image search for “leakage current moore’s law” you can see some graphs that illustrate what I am saying. Unfortunately, superconducting wires (or interconnects as we VLSI designers call them) won’t solve most of our leakage problems. While wires do add resistance and capacitance, it’s not the majority of power dissipated in a chip.

    I heard Prof. James Meindl talk about life after Moore’s law and he seemed very bullish about changing the data token. In other words, instead of using electron charge to carry information you would use magnetic spin or some other smaller and more fundamental quantity. Obviously this is a good way off from now, but we are already seeing quantum computers being made that function using similar ideas.

    • yogibbear
    • 8 years ago

    Once we view a blackhole (well, the ring of light around it), then a lot of these quantum physics vs. macro physics wishy washy that is at the issue of going super small will be materialised and then once we have a universal law that explains the differences between macro vs. quantum physics better it will all be resolved and you’ll have your tera hertz clock speeds if you so choose.

    I’m pretty sure this should happen in the next 5 years or so. But I don’t know how long it takes astronomers to process their results… so say they view it by 2015 and then take another 5 yrs to understand it and find someone smart enough to rewrite physics then we’ll be on the right track.

      • UberGerbil
      • 8 years ago

      None of this is true, or even makes much sense

        • yogibbear
        • 8 years ago

        Exactly which bits are not true? As far as I know, everything I wrote is correct. Maybe I got some of the terminology wrong, but this isn’t my field of expertise and I will happily take the criticism. I’m pretty sure, though, that the concept of viewing the ring of light around a black hole and looking at its shape is where we work out if the theory of relativity is correct or not. Considering this is the one place where both quantum physics and gravity are at play, it makes perfect sense that this will be the starting point for trying to make something like “quantum gravity”, or whatever you choose to name it, work.

        It doesn’t take much to then see that you’d be able to use quantum gravity to better understand why we have issues going smaller and smaller in chip size. Once we understand the why, then the how to fix it / use it to our advantage should be the next thing that follows.

          • Peldor
          • 8 years ago

          The main bit that’s not true is the notion that gravity, quantum or otherwise, has any significant effect on the operation of an integrated circuit. The secondary bit that’s not true is that we are going to turn that immeasurably small effect into a panacea for our future difficulties with integrated circuits. The bit about astronomers using the light emitted from near the event horizon to suss out a GUT is neither here nor there.

          • FakeAlGore
          • 8 years ago

          We have already worked out whether the Theory of Relativity is correct or not. It is one of the most tested theories ever put forth, and it has always tested “true.” Viewing the event horizon of a black hole is irrelevant. I’m not even sure why you bring that up. We have sophisticated mathematical models of black holes that fit observation remarkably well. We pretty much know what happens at the event horizon.

          As for unifying classical mechanics and quantum mechanics, we are likely a great deal more than five years away from that. Some of the brightest minds in the world are up against a brick wall thinking about it, and very few experiments currently running or on the table will answer any of the remaining questions. The term you’re looking for is “unified field theory,” by the way.

          Also, what does any of that have to do with producing better chips? We already know for a fact that quantum effects will, without fail, fatalistically interfere with the operation of chips once we reach a certain size threshold. This cannot be avoided regardless of any knowledge of quantum gravity.

          In short, please learn more physics and less crazy.

      • Game_boy
      • 8 years ago

      You read a science article in a tabloid as the complete truth of physics?

      If we’ve learnt anything in the last 50 years it’s that a universal law explaining all scales is not going to be simple or even deterministic.

        • cynan
        • 8 years ago

        Lol. That pesky nuisance of particles taking on wave-like properties when they become sufficiently small of mass…

      • ImSpartacus
      • 8 years ago

      Did you get that out of your local newspaper?

        • dpaus
        • 8 years ago

        Yeah, “The Onion”

    • OneArmedScissor
    • 8 years ago

    “There are several prospects on the radar to replace traditional silicon chips, including graphene, light-based logic circuits, and quantum processors. The next big thing beyond silicon is still anybody’s guess, though.”

    That’s too dumbed down. It doesn’t have to be one or the other. This is like when people want X new CPU with the “highest IPC” or “highest single threaded performance,” despite the fact that nobody is designing them for that anymore because there’s no longer a blanket “better” metric. You go with what’s best for the particular job.

    Many limits have already been hit, which is why new CPUs have progressively been built to be more specialized, and new methods beyond traditional silicon CPUs will only cause that path to diverge further.

    We’re not even at 2D silicon CPUs that offload floating point calculations to the GPU yet. That’s going to take a few more years, and then a few more years yet for software to adapt to truly seamless 3D SoCs that can finally shed all the deprecated circuits.

    And [i]then[/i] they can start on which new replacement for silicon is best for what. For example, maybe the chip in your phone will have a silicon “CPU” with a graphene co-processor and integrated optical interlinks. Servers might be part traditional processor and part quantum processor, with each handling particular tasks. The likelihood that someone can immediately just jump from a silicon 3D SoC to a graphene one is slim to none, and the possibilities for mixing and matching things are just as endless as the variations between semi- and fully-integrated circuits are now.

      • Theolendras
      • 8 years ago

      Yeah, why not use specialized ASIC to address specific jobs.

    • ShadowEyez
    • 8 years ago

    There are few uses for raw CPU power that average people need, so with cloud/hosting services we should be OK for a while. This same sort of argument came up a few years ago when Intel basically said they could not clock higher than about 3-4 GHz, and then went to multicore designs.
    Once the transistors get so small, they will likely try different materials in parts of the process (silicon on insulator?) to increase performance, or different optimizations of the process.

    This will give the time needed to find the ultimate next tech (quantum, maybe with optical circuits) that is cheap/reliable enough for the mainstream.

    CPU tech could mirror hard drive tech. For the past 40 years HDDs have used magnets (and still do), and in the last 3 years flash chips have arrived. A combination of magnetic limits (slow, loud, power hungry, not shock resistant), research on flash chips, and market need is leading to SSDs replacing hard drives. When those factors hit silicon, it will likely give way to the next tech.

    • Anomymous Gerbil
    • 8 years ago

    [quote]Imagine a processor whose interconnects and transistors were crafted from a superconducting material. Such a beast would be able to operate with next to no electrical leakage[/quote]

    Just because something superconducts doesn’t mean it won’t leak - true or false?

      • David_Morgan
      • 8 years ago

      There will always be outside forces and cosmic radiation that might cause electrons to leak out, but similar to running water, electrons follow the path of least resistance. In the case of superconductors there is no resistance, which makes it kind of hard for an electron to ignore that path.

      I think the problem would lie in the semiconductor material in the transistor. The electrons would still have to pass through that at some point, which would cause leakage and heat, but the rest of the journey should be relatively ‘free-flowing’.

        • mako
        • 8 years ago

        In a superconductor, the particles that move aren’t individual electrons. Superconductivity happens because electrons pair up (in conventional superconductors, they’re called Cooper pairs) and become a quasiparticle that obeys different physical laws. Right now no one has fully explained the pairing mechanism for high-temperature superconductivity (HTSC).

        There are superconducting chips out there, and I think most of them don’t use transistors. When you have zero resistance, you have virtually no voltage (V=IR), so conventional voltage-based logic doesn’t work. Instead they use varying amounts of current.

    • shess
    • 8 years ago

    Over the past decades we’ve been in the strip-mining portion of computing. Huge amounts of speed accessible with relatively little work (building a fab is a huge effort, but amortized across millions of devices, not so bad). At some point it gets harder and harder to keep scaling along the same dimension, but even if we stopped being able to scale tomorrow and found nothing to drive performance forward, it would take decades for computing devices to flow into all the available spaces.

    Fortunately, I think we’ll also have a few decades of alternative approaches. There has been a half-century of work on resource-constrained computing, often enough stillborn because by the time things really get somewhere, hardware obsoletes it. But that info still exists, and is always being rediscovered and re-purposed. We’re in the early years of figuring out what the assembly-line is for computers, both in terms of computing devices working together to an end, and in terms of software engineers building systems.

    One last point – upgrading hardware every 20 minutes isn’t healthy. Having the useful lifetime of your devices increase is a GOOD thing.

      • Game_boy
      • 8 years ago

      Yes; it will progress with new materials at a slower and variable pace instead of just expecting a shrink every two years.

    • dpaus
    • 8 years ago

    [quote]the technology that makes me feel the most warm and fuzzy inside is.... the potential discovery of a room-temperature superconductor[/quote]

    For me, it’s the potential discovery of faster-than-light travel. Or maybe pixie dust. Or best of all, the potential discovery of world-peace-and-an-end-to-hunger, if only because that might finally cause Bono to Shut Up Already.

    Seriously, though, if the wall is indeed to be hit about 2020, imagine that you’re a junior school student right now, considering a career in science/engineering. How to decide what area of physics to study in the hope of being in the right field at the right time?

      • shess
      • 8 years ago

      You shouldn’t be making such a decision based on which field will be most lucrative.

        • dpaus
        • 8 years ago

        I urged all of my kids to study whatever interested them most, which included philosophy and marine biology – neither noted for their job prospects (as the one who took philosophy noted: “What possible use to an employer is a proven ability to perform rational, critical in-depth analysis on any topic?”). But many parents will push their kids into an area they believe will give them the best future job prospects, regardless of how miserable the poor kid ends up.

          • UberGerbil
          • 8 years ago

          Most of the happiest, most productive people I know are working in areas that have nothing whatsoever to do with their college major.

          • NeelyCam
          • 8 years ago

          It shouldn’t be decided [i]only[/i] on what is interesting. Future career prospects should also be taken into account to some degree (but should not dominate the decision making). The most interesting job in the world won’t make you happy if you can’t put any food on the table or a roof over your head. These are just facts of life.

      • Game_boy
      • 8 years ago

      For me it is nuclear fusion. Cheap, clean, renewable, as reliable and scalable as current coal/oil, and no risk of meltdown / long-term nuclear waste like fission.

      And completely within the reach of current technology if only they put some money into it.

        • David_Morgan
        • 8 years ago

        Kaku discusses fusion rather in-depth in this book. May be worth a read if you can source a copy of it, and you’re interested in such things.

          • Game_boy
          • 8 years ago

          So it does. Thanks.

          • dpaus
          • 8 years ago

          Yeah, I’ve added it to my list of ‘Books to Add to my Pile of Unread Books’ too; thanks!

        • anotherengineer
        • 8 years ago

        Burn wood it’s cheaper and it grows back 😀

          • Game_boy
          • 8 years ago

          Doesn’t scale to power Western nations or cars.

            • ImSpartacus
            • 8 years ago

            And neither do solar, geothermal, or wind power, but hippies still promote them.

      • ImSpartacus
      • 8 years ago

      World Peace? World Hunger?

      If we lose those, we lose our humanity. The same human irrationality that made Beethoven’s 5th also made the Rwandan Genocide.

        • Peldor
        • 8 years ago

        Oh, no, I’m pretty sure we’d still have profoundly idiotic statements like that to remind us of our human irrationality. It’d be nice to keep them in check though.

    • kamikaziechameleon
    • 8 years ago

    I still feel like I can’t challenge my CPU most of the time because my HDD is such a bottleneck. I hope SSD tech becomes cheaper so I can build a quad RAID array out of 250 GB SSDs to quench my machine’s need for fast and beefy storage.

      • Johnny5
      • 8 years ago

      Extreme!

      • NeelyCam
      • 8 years ago

      What is it that you do with your computer that requires such superfast storage?

        • Vasilyfav
        • 8 years ago

        Show off in the comment section for e-peen points most likely.

        • dpaus
        • 8 years ago

        FWIW, we’re investigating RAID 5 arrays with SSDs for storing multi-channel live video feeds (mind you, our interest is at least 50% in the increased reliability/survivability)

          • NeelyCam
          • 8 years ago

          Makes sense.

      • codedivine
      • 8 years ago

      But for SSDs to become cheaper, we need to go to smaller processes so we are back to square one.

      • anotherengineer
      • 8 years ago

      Exactly.

      I think today’s CPUs far exceed the requirements of most everyday tasks. However, there are other major bottlenecks that should be addressed.

      Accessing data from the HDD or SSD is a big one.

      I mean, if the bandwidth of DDR3-1600 is XX GB/s, even the fastest consumer SSDs fall short.

      I think once there is a RAM drive or an SSD that can be accessed through PCIe with a read/write bandwidth of at least 1 GB/s, and preferably 6 GB/s, and that is “affordable”, then there would be a real need for a nice CPU again (besides supercomputers).
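      For reference, the peak theoretical numbers behind that comparison work out roughly as follows (a quick sketch; real-world throughput is lower):

```python
# Peak theoretical bandwidth of DDR3-1600 versus a fast consumer SSD of the era.
transfers_per_sec = 1600e6        # DDR3-1600: 1600 MT/s
channel_width_bytes = 64 // 8     # 64-bit memory channel

per_channel_gbs = transfers_per_sec * channel_width_bytes / 1e9
print("DDR3-1600, single channel: ~{:.1f} GB/s".format(per_channel_gbs))      # ~12.8
print("DDR3-1600, dual channel:   ~{:.1f} GB/s".format(per_channel_gbs * 2))  # ~25.6

sata_ssd_gbs = 0.5  # roughly what a fast SATA 6Gb/s consumer SSD manages sequentially
print("Fast consumer SSD:         ~{:.1f} GB/s".format(sata_ssd_gbs))
```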

      But that is just my opinion/view.

      I just look at today’s consumer quad- and hexa-core CPUs like a big wood chipper that could handle a log 12″ in diameter, and then the HDD transfer rate is like having a chute on the wood chipper that is only big enough to feed little sticks through.

        • UberGerbil
        • 8 years ago

        What are you doing that is causing you to be storage-bandwidth limited? You’re continuously streaming data at the maximum speed of your HD or SSD for minutes to hours at a time? (I would suggest a latency issue, which would be something else entirely, but you seem to be focused on bandwidth)

      • TurtlePerson2
      • 8 years ago

      Hard drive storage calls have been minimized after several decades of computer architecture work. You’re almost always better off spending your money on your core components than on your hard drive. If your hard drive is making a bunch of random loads/stores, then you’re probably using a poorly written program.

      • Chrispy_
      • 8 years ago

      I already have a triple-SSD RAID array and it’s fast (3 x 120 GB Indilinx), but not a lot faster than one.

      Some things are slightly faster, but only by a few percent. I suspect once you move to any decent SSD, the next bottleneck is, as always, [i]piss-poor software design[/i].

      • hiro_pro
      • 8 years ago

      I would like to see the day when my computer throws most of my programs and regularly accessed files into my 2 TB of RAM (or otherwise addresses the various bottlenecks on my computer).

      I suspect more and more of our processor power will go to the user interface, such as higher-resolution graphics, 3D interfaces, more voice commands, or eye-tracking type stuff. Creating intuitive responses from a computer (i.e., something that feels like AI) will also take more processing power than most realize.

      And maybe once we finish miniaturizing everything and run out of new innovations under Moore’s Law, we can work on reliability. I would love to have a computer go two years without crashing.

    • kamikaziechameleon
    • 8 years ago

    It will be a cold day in hell when I don’t upgrade every 24 months… I might see that processor design is hitting a wall in one direction, but we have yet to really push thread counts, memory bandwidth, and storage speed/size, and cloud computing will change the way a lot of computing tasks are accomplished/processed. I think we keep seeing some very encouraging developments in mobile processors as well as graphics processors. Maybe hitting one wall will be good in that it will force us to change the way tech has been evolving.

      • Theolendras
      • 8 years ago

      Well, as mentioned, it will surely put a lot more pressure on programming models if hardware alone can’t cope with our ever-increasing need for speed. GPGPU or transactional memory might be a great way to increase speed of execution without necessarily increasing the transistor budget all that much.

    • dashbarron
    • 8 years ago

    Maybe that is just it. The end of Moore’s Law ushers in the use of cloud computing and the likes of services such as OnLive. Higher bandwidths are available to make this transition where before it was impractical (I’m not saying things like OnLive are the best or most practical option).

    The other path I see is… well, an unforeseen area of CPU exploration. Whether they increase the processor footprint on the motherboard, stack more chips with extreme cooling options, discover some new amazing transistor/CPU architecture, add CPU add-on cards, or turn to some form of quantum computing, who knows; there are always new technologies or ideas around the corner. Maybe Apple will wave the magic wand and solve everything, cough.

    I don’t think we’re screwed, we’ll survive the transition into whatever is next.

    • OneArmedScissor
    • 8 years ago

    Skynet.

      • UberGerbil
      • 8 years ago

      Not if John Conner has anything to say about it — and we know he does, because we’re now on some alternate timeline where SkyNet has already been thwarted a couple of times.

        • dpaus
        • 8 years ago

        Not ‘thwarted’, just slightly delayed… (OMG, Skynet runs on Bulldozer CPUs!!)

        EDIT: Or, maybe, Sandy Bridge E with PCI 3.0 interconnect. It’s all so confusing. Perhaps we should just kill both products to protect humanity.

          • Saribro
          • 8 years ago

            http://www.smbc-comics.com/index.php?db=comics&id=2362#comic
