AMD’s FX-8150 further overclocked


You probably know this by now, since it seems pretty much everyone has read our initial review of AMD’s FX-8150 processors, but the “Bulldozer” architecture on which the FX chips are based is a “speed demon”—a CPU designed to run naturally at high clock frequencies. The concept is to do relatively less work per clock cycle and to enable higher speeds to make up the difference. You may also know that the FX-8150’s performance hasn’t entirely lived up to the “FX” product name—or to the expectations of many AMD fans. The reasons for that fact are many, but one of them seems to be fairly clear: Bulldozer-based chips probably haven’t reached the clock speeds AMD’s engineers originally intended.

We attempted to rectify that fact during our first round of FX CPU testing by cranking up the clock speeds, an effort made easier because all FX-series processors have unlocked multipliers. We were a bit frustrated to find out that we couldn’t nudge our chip past 4.4GHz without it turning flaky and crashing on us. However, we also had quite a bit of trepidation about pushing our brand-new, pre-release, 32-nm processor beyond about 1.46V.

Yeah, we were basically being wusses.

We were using a pretty formidable tower cooler at the time, but pumping that much voltage into a chip means it will be thinking deep thoughts about its own mortality before dropping off into C1E sleep. If you’re going to push that hard, it’s a good idea to have really effective cooling—maybe not just a biggish air cooler, but a truly large water cooling unit, with a radiator the size of Charlie Sheen’s liver and a pair of fans to match.

Fortunately, AMD seems to have been thinking along those same lines, because it has been making arrangements to include a beefy FX-branded water cooling unit (originally made by Asetek) in the box with certain FX processors. AMD says it will begin by bundling this cooler “with the AMD FX CPUs in select regions,” starting with Japan and “then rolling out into other regions.” We don’t yet have final word on exactly when this cooler might make it into North America, but we expect the bundle to add about $100 to the price of an FX processor alone.

The cooler is completely self-contained and pre-filled with coolant, so users won’t have to mess with filling or maintaining the fluid in the unit. Installing it is as simple as twisting in the four thumbscrews around the CPU socket, making a fan-sandwich out of the radiator, and plugging in a couple of headers on the motherboard. One of those headers powers the pump and fans, and the other is a USB connection for control and monitoring of the cooler. AMD supplies a software CD with a relatively simple utility that monitors the liquid temperature and fan speeds, along with allowing the user to choose one of the pre-existing fan control policies or to define his own.

Our mission was to see how far this fancy bit of kit would allow us to push an FX-8150 processor. At AMD’s recommendation, we chose the “Extreme” fan speed preset. Fluid and fan temperatures plummeted as Damage Labs was filled with a loud, Dyson-esque whine…

The results

We took our prior plateau of 4.4GHz at 1.465V as a starting point for our renewed overclocking attempts. We were soon able to reach 4.5GHz and then 4.6GHz at 1.525V, which was encouraging. However, getting the chip stable at 4.7GHz required more voltage, forcing us to ratchet things up to 1.55V, the peak value exposed in AMD’s Overdrive utility—and a heckuva lotta juice for a 32-nm processor. Once at that voltage limit, we tried for 4.8GHz, but we quickly saw errors in Overdrive’s stability test.

So, 4.7GHz was it. That’s 1.1GHz above the FX-8150’s base clock, but only 500MHz beyond its peak Turbo Core frequency. Still, it’s not far from the projection in AMD’s press literature, which says AMD’s internal attempts with water cooling topped out at 4.9GHz (presumably with more than one chip on hand). 4.7GHz is also a couple of hundred megahertz higher than what we’ve seen from Intel’s Sandy Bridge and Gulftown, in our limited, air-cooled overclocking exploits with those chips.

Even at 1.55V, we weren’t pushing the FX cooler past its limits. During stability testing, CPU temperatures topped out at around 54.5° C, with an ambient room temperature of about 74° F/23° C. Yeah, the thing was incredibly loud—I’d give you a decibel number, but we’re currently having the roof replaced, and I don’t want to harm the workers’ ears. Er, I mean, all of the hammering would throw off the measurements. Still, the cooler itself could have taken more heat, had we needed it.

At those overclocked settings, our FX-8150’s power draw rose considerably from its stock levels. We did a quick measurement, in fact, and it came back like so:

That, my friends, is why AMD didn’t push any higher than it did on FX clock speeds. Cranking up the CPU voltage does bad things for power consumption. Although our motherboard, PSU, and cooler could apparently handle it reasonably well, an FX processor at these speeds goes well beyond the top established PC power envelope of 125W. Even resurrecting the old 140W power window probably wouldn’t have bought AMD much more in terms of frequency. There is headroom in this chip, but you pay for it dearly in wattage.
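The voltage penalty follows directly from the first-order dynamic power relation for CMOS logic, P ≈ C·V²·f. Here's a rough sketch of what that relation predicts for our settings (the ~1.4V stock voltage below is an illustrative assumption on our part, not an AMD specification):

```python
# First-order CMOS dynamic power model: P is proportional to V^2 * f.
# Leakage (itself strongly voltage-dependent) is ignored, so treat this
# as a floor on the real increase, not an exact figure.
def relative_dynamic_power(v, f, v_ref, f_ref):
    """Dynamic power relative to a reference operating point."""
    return (v / v_ref) ** 2 * (f / f_ref)

# FX-8150 base clock is 3.6GHz; ~1.40V stock voltage is an assumed figure.
ratio = relative_dynamic_power(v=1.55, f=4.7, v_ref=1.40, f_ref=3.6)
print(f"~{ratio:.2f}x the stock dynamic power at 4.7GHz/1.55V")
```

Even this optimistic model predicts roughly 60% more power at our overclocked settings than at stock, which squares with the sizable jump we measured at the wall.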

Oddly enough, the benchmarks we selected months ago for our overclocking performance tests seem to be pretty well suited to the Bulldozer architecture. Thus, turning up the clock frequency allows the FX-8150 to put up some really nice numbers, tying or beating a Core i7-2600K overclocked to 4.5GHz in several cases. There are some pain points here, such as the difference in single-threaded Cinebench performance between the FX-8150 at 4.7GHz and the Core i5-2500K at stock (scores of 1.16 vs. 1.48, respectively). Still, had Bulldozer landed at frequencies north of 4.5GHz within conventional power envelopes, the competitive landscape might look rather different. Indeed, if GlobalFoundries can manage to refine its 32-nm fabrication process to allow such speeds in the coming months, who knows?
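To put that single-threaded gap in per-clock terms, here's a rough normalization. The clock figures are our assumptions: 4.7GHz for the overclocked FX-8150, and the i5-2500K's 3.7GHz single-core Turbo Boost speed for a single-threaded load at "stock."

```python
# Normalize Cinebench single-threaded scores by clock speed to estimate
# per-clock throughput. Clock figures are assumptions, not measured values.
fx_score, fx_ghz = 1.16, 4.7   # FX-8150, overclocked, fixed multiplier
i5_score, i5_ghz = 1.48, 3.7   # Core i5-2500K, assumed single-core Turbo

fx_per_clock = fx_score / fx_ghz
i5_per_clock = i5_score / i5_ghz
print(f"Sandy Bridge per-clock advantage: ~{i5_per_clock / fx_per_clock:.2f}x")
```

By that rough measure, Sandy Bridge does on the order of 60% more single-threaded work per cycle, which is exactly the deficit a speed-demon design has to make up in frequency.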

For now, thanks to a formidable bundled cooler, those folks who bleed AMD green (or is it red now?) will have an option for achieving bragging-rights-type performance in some cases, so long as they’re willing to pay for it in the form of added heat, noise, and power draw.

Comments closed
    • Draphius
    • 8 years ago

    BTW, should this really be considered watercooled? At best that closed loop is slightly worse than a high-end air cooler. I think TR needs a custom loop setup for bench tests, with a variable-speed pump and at least a 360mm rad.

    • Arclight
    • 8 years ago

    For now, Intel’s approach is called Brainiac
    [url<]https://techreport.com/image.x/2011_7_4_Talent/comic-20110705-big.jpg[/url<] While AMD is going for the Speed Demon [url<]https://techreport.com/discussions.x/21884[/url<]

    • Fighterpilot
    • 8 years ago

    <Neelycam> “It may be shocking to a lot of people, but I’m actually not biased towards Intel.”
    That has to be sig worthy.
    Perhaps a few of your “not biased towards Intel” posts from some other sites will refresh your memory…

      • NeelyCam
      • 8 years ago

      You must be talking about SemiAccurate, right? There I play a role of an Intel troll because >80% of the site visitors (and Charlie himself, of course) are pro-AMD and anti-Intel/NVidia to the point of insanity. SemiAccurate article comment section is nothing but trolling ground to me (curiously, their forums are actually a great source of useful info.. and they wield some serious banhammers for that section). In contrast, I respect the folks here at TR – the article comments are more civilized and informative; I try to keep my comments less insulting here, and even aim to contribute from time to time.

        • dpaus
        • 8 years ago

        “I’m not a doctor, I just play one on TV”

          • NeelyCam
          • 8 years ago

          Something like that. This was from Friends, right?

    • HisDivineOrder
    • 8 years ago

    Man, there just has to be something WRONG with the design or fabrication for it to use that much power at your highest overclock, no matter its clockspeed. I just don’t see why a CPU should be soaking up that much power.

    And it’s funny because the whole reason Intel dumped Netburst was precisely because they wanted to create mobile chips that had great performance with little heat. Why in the world would AMD want to make a chip that is tuned to go fast but make MORE heat than the competition?

    I just feel like either there’s a serious manufacturing flaw, design flaw, or some aspect of the Bulldozer tech is not yet being discussed, like this design better benefits when in combination with GPU functions (like Trinity will be with its Piledriver cores).

    At least, I hope so. Otherwise, AMD is screwed. Intel may have had the money and clout to hold AMD off despite their serious mistakes there, but AMD absolutely does not have the money or clout to match that.

      • sschaem
      • 8 years ago

      Simple answer, the current FX are overvolted in their default operation.

      AMD never had the intention to use 1.4 volts on 32nm for their target clock speed.
      (When you think about it, 1.4 volts on a 32-nm, 2-billion-transistor chip is massive.)

      Some sites took the investigation the other way, and undervolted the FX. And found some astounding results.
      Well, not that surprising, but the 32-nm FX is ~2x more power efficient than the X6 Thuban (and in effect Llano).

      Even with all the other problems the FX faces (OS scheduler, ‘broken’ memory controller, buggy cache coherency), the current B2 stepping shows some potential.

      Speculation: a future stepping that addresses the cache/memory and voltage could make Zambezi a viable product for AMD.

      I think Intel knows Zambezi is a big ‘bug’ and if AMD can get their head around those problems, Ivy Bridge will have some real competition.

        • Waco
        • 8 years ago

        Yup. Evidence of this is obvious if you look at their server chips. TWO Orochi dies, at > 2 GHz, in a 95 watt power envelope…

    • RtFusion
    • 8 years ago

    Eh, you can’t make the pig (Bulldozer) prettier by applying lipstick (FX-branded water cooler + OCing) to its face; it’s still a pig.

    This setup might be nice for some of the AMD fans that may still be around, but seeing those power consumption numbers, ouch.

    • Aussienerd
    • 8 years ago

    If only this was 9 months ago and $50 cheaper, the world would be a different place.

    Love the line

    “with a radiator the size of Charlie Sheen’s liver “

      • dpaus
      • 8 years ago

      I was mulling over the other parallels between Charlie Sheen and the 8150. They both can be brilliant performers under the right circumstances, but tend to be very hot-headed in doing so. They both have some tiger blood, and they’re both their own worst enemy, but somehow, ya gotta love ’em….

      EDIT: Oh, and everyone wants to see them defeat their demon and get on with the potential we see in both of them.

        • yogibbear
        • 8 years ago

        One is a hot-headed, porn star f*******, dickhead that leaves the air conditioning on when you’re not at home and then laughs in your face about it while getting paid too much for the performance he delivers; the other is Charlie Sheen. 🙂

    • Mr Bill
    • 8 years ago

    At this point, I feel that perhaps AMD should have scrapped the Bulldozer and used the process shrink to come out with a Phenom II X8.

      • Anonymous Coward
      • 8 years ago

      Do you base that opinion on AMD’s 32nm Llano performance? At this point no one knows how much of AMD’s problems are in the 32nm process and how much are in the processor, however AMD has historically milked a lot of improvement out of each process. I expect that AMD will [i<]eventually[/i<] release a 32nm bulldozer at the sort of speeds tested here (more or less, I'm not clear on what effect turbo mode has had).

        • Mr Bill
        • 8 years ago

        I’m just thinking that with two more cores, a Phenom II X8 would have the same number of integer units as Bulldozer and twice the FP units. That would probably put it over the top for multi-core jobs compared to the I7 2600.

          • OneArmedScissor
          • 8 years ago

          The problem would be the same. The more cores you have, the more you’re not using, and the more cumbersome the memory system will be at all times, which is a net loss.

          Why do you think Intel doesn’t sell their 10 core CPUs and AMD doesn’t sell their 12 (and soon to be 16) core CPUs outside of servers, not even for workstations? For that matter, why don’t even the highest end server CPUs actually double the core count every shrink? There are diminishing returns in every case and it’s a never ending balancing act.

          Clock speed is still king for desktops. The problem is that [b<]this one iteration of Bulldozer[/b<] cranks it up on the cores and totally ignores it for the L3 cache. It's best to just keep everything going as fast as possible, as even what people consider to be "multi-threaded" tasks for their PC are not very parallel in nature.

      • clone
      • 8 years ago

      I’d have preferred a smaller Phenom II X6 for now……. leave X8 to Bulldozer where it belongs. Scrapping Bulldozer would be a horribly superficial / colossal mistake.

      Bulldozer is the future, and after listening to the podcast my opinion has changed. I believe Intel is going to stretch its CPU pipeline in the next gen in response to AMD. That’s not to say that Bulldozer is the best, but that it’s got a very promising future.

      Here we are, it’s almost 2012, and still no CPUs pushing 5GHz…. instead they sit lazily at sub-4GHz, leaving Intel and AMD to struggle with boosting IPC, which certainly served and will serve for a little longer, but the end is near.

      Bulldozer will do well where the real money is: server. On the desktop front it’s pointing the way; this CPU isn’t a disaster, it’s just not exciting and nothing near what was promised from a performance standpoint. I was thinking Thuban after seeing this CPU, but if the price “adjusts” enough then I’ll forget all about Thuban.

    • tfp
    • 8 years ago

    The one thing this does show is how well the FX-8150 scales in multi-threaded apps. Because the scaling on heavily multi-threaded apps is better than Intel chips with HT, there is a point where, even at the same clock speed, Bulldozer will pull ahead. Of course the issue is the power draw required to get there. The design does show great scaling, but AMD needs to improve IPC in a single thread. If they can do so by 10-15% on single-threaded apps, the chip should really shine in the benchmarks in the writeup because of the great scaling on multithreaded apps.

    If Intel stands still for AMD things should work out great.

      • forumics
      • 8 years ago

      Let’s not forget that the Phenom II also scaled extremely well against the Core and might do well against SB too, but AMD will never be able to shrink the processor small enough to get the high frequencies into an acceptable power envelope.

        • khands
        • 8 years ago

        There’s some design flaw that seems to be holding it back at acceptable voltages; if they can figure that out, the rest will follow.

    • rhysl
    • 8 years ago

    It’s plain to see: Intel has the brainier brainiacs. AMD still has a long horizon to reach.

    • WaltC
    • 8 years ago

    I very much doubt the BD story has been completely told. Perhaps–and this is an interesting possibility–AMD will actually succeed where Intel failed with the P4. Wouldn’t that just be one hell of an ironic twist?…;) Incredible, actually. Then again…

    My first thought on seeing the 2,000,000,000 transistor number was that it was a misprint, quickly followed by “WTF are they doing with 2B transistors, exactly?”

    This all reminds me so much of the P4 Northwood/Athlon action so long ago as captured so well [url=https://techreport.com/articles.x/3289/11<]here[/url<]. (Except that some elements are nearly reversed!) Intel was whipping Northwood into the best representative of the Netburst architecture yet produced at .13 microns, while AMD was struggling with its manufacturing process to move the Athlon down to .13 microns from .18 microns--and the P4, with its much longer pipeline was clocking at 2.2GHz+ while the Athlon was pretty much stuck at ~1.6-1.7GHz. As the splendid TR review I've linked indicates, this was the first time in more than a year that the P4 was able to leap out ahead of the Athlon--and it was all due to a very good process implementation by Intel with Northwood.

    I remember some guy in a forum on some site somewhere (here?--Can't recall) who kept insisting that the reason the Athlon "would never hit 2GHz" was because the "Athlon architecture lacked the headroom" to operate at that speed. We argued about that for a long time, as I recall, and my position was that his position was "nonsense" because this was clearly merely a temporary problem relative to AMD ramping up its .13 micron manufacturing processes. And sure enough, although it put AMD back about six months from where the company wanted to be at the time, when AMD had mastered metal layering techniques within its processes, the company had no problems moving Athlon to .13 and to 2GHz+ to once again whizz right past the P4 performance-wise. AMD would retain this performance lead, more or less, right up until Core 2, IIRC.

    [i<]But I've often imagined[/i<] what would have happened had Intel actually been able to pull off an ever-shrinking, relatively simplistic, long-pipeline P4 that actually *did hit* 10GHz eventually (which was Intel's unofficial goal for Netburst, at least as "candidly" shared with the world in several Intel employee interviews I read at the time.) Would there have been anything AMD might've done at the time to counter it or compete? I very much doubt it. Even people like John Carmack at the time were enamored of the long-pipeline, high MHz approach, with Carmack going so far as to say that AMD's significant IPC performance advantage was a "hack" that would only temporarily keep AMD out front because Carmack, like others, was expecting the P4 to more than make up the deficit via a continuing stream of progressive die shrinks accompanied by ever-upwards MHz increases. That didn't happen because Intel was unable to create a Netburst cpu that worked in silicon like it did on paper. Instead, Intel scrapped the Netburst architecture entirely, adopted AMD's x86-64, and successfully moved into the IPC arena (relative to P4) beginning with Core 2 and beyond. AMD has been unable to reclaim the overall performance crown since that time, more or less.

    So now, here we have AMD reaching back into time to grab a "speed demon" concept and attempt to implement it--implement a strategy that Intel was completely unable to successfully adopt roughly a decade ago! There is some mention--I believe--in the TR review, that AMD is saying that the architecture actually performs better per clock the *faster* it is clocked-?-or so I recall-- pardon me if I misunderstood. If so, then the role of those 2B transistors might become exceedingly evident in the coming weeks as AMD refines and tweaks its 32nm processes. This limited overview of what happens with the cpu when it is clocked higher (overclocked) is certainly indicative of that sort of thing--though, if so, it's going to be unique among the cpu architectures I've known. Basically, I'm... Still scratching my head over Bulldozer.

      • Arclight
      • 8 years ago

      I’m giving you +1 just for being around for so long. You just earned some internet cred. How do you feel?

        • Zoomer
        • 8 years ago

        Are you kidding? He didn’t even pull out the 300A @ 450 or K6-2 yet.

          • WaltC
          • 8 years ago

          Heh…;) I didn’t dump Intel until 1999, when I moved to the Athlon, where I still am. Prior to K7 there wasn’t much AMD (or Cyrix, etc.) was doing that interested me–as I had just moved to Intel after 9 years of Motorola-family cpus used in the C= Amiga models. Went back to Intel ~1995, moved to AMD in ~1999. Prior to picking up the Amiga in ~’87, my only other x86 experience was with an 8086/88 circa 1985/6.

      • DrCR
      • 8 years ago

      Am I the only one looking across their room, lovingly, towards a t-bred or barton-core machine? Mine’s a A7N8X, 2600+, still going strong.

      Edit: That was a fun era. And I still use [url<]https://techreport.com/news.x[/url<] with the blue theme when I'm here at TR... I like the old layout. 🙂

        • NeelyCam
        • 8 years ago

        My t-bird melted six years ago because it was unreliable AMD silicon. My Prescott from six years ago is still running fine and hasn’t melted despite the, um, heat generation issues..

          • Krogoth
          • 8 years ago

          No, the problem is that pre-Palomino Athlons had no thermal protection. They will fry themselves if you have a HSF failure. Pentium IIs and Slot 1 Pentium IIIs did the same thing.

          Intel started to implement thermal protection with Coppermine Pentium IIIs, which would just lock up the chip.

          Pentium 4s started the current thermal throttling method.

          • wierdo
          • 8 years ago

          Thermal protection was just starting to become a feature from both companies at the time. If CPU cooling was half-assed or not installed well in those days, you would overheat/fry your CPU. My friend learned that the hard way once: he thought it was OK to turn on the PC without a fan on, and poof, white smoke.

          Those were still the days of hardcore PC assembly; mistakes/oversights in terms of cooling/wiring were pretty dangerous. Things have gotten really easy nowadays, and almost anyone can put together a PC with a little fiddling with the mobo manual. If anything goes wrong, the thermal protection more or less has yer back.

          • NeelyCam
          • 8 years ago

          +1 to both Krogoth and wierdo – good job defusing a weak troll.

        • WaltC
        • 8 years ago

        One thing I have never forgotten from that time period is the fact that prior to picking up an AMD system in 1999, out of pure curiosity, I used to frequent the Intel processor forums quite a bit–and the sheer amount of negative propaganda written about all things AMD at the time was astounding…;) Literally, it was difficult for me to see how an AMD system would be able to function after you turned it on.

        I felt like nothing could be *that bad*--else no one would buy it and AMD would not exist--and so went out and bought one. I think it was a T-bird (didn’t Barton come later?)-based system. I was suitably amazed that not only did it have no problem booting up, but it was also a much better buy than what Intel was pushing at the time, and I really had no more or fewer “problems” with the system than I had had with the Intel CPUs and core logic I used for a few years. That was when I first discovered that listening to Intel talking about AMD was not a very profitable experience! I left the Intel forums after that and never returned…;) I believe it is still pretty much “shill’sville” over there.

      • travbrad
      • 8 years ago

      [quote<]and this is an interesting possibility--AMD will actually succeed where Intel failed with the P4. [/quote<] I doubt it. Intel's CPUs can reach higher clock speeds than AMD's right now, so if AMD is going to rely on clock speed, this is a bad place to start. Especially since ramping up the clock speed seems to increase power consumption exponentially.

        • Yeats
        • 8 years ago

        They can?

      • Peldor
      • 8 years ago

      [quote<] There is some mention--I believe--in the TR review, that AMD is saying that the architecture actually performs better per clock the *faster* it is clocked-?-or so I recall-- pardon me if I misunderstood. If so, then the role of those 2B transistors might become exceedingly evident in the coming weeks as AMD refines and tweaks its 32nm processes.[/quote<] Water-cooled to a 30% overclock here, there's performance scaling of 86-99%. Edit: 86-99% clock-for-clock, or 26-30% faster performance. I hope that was obvious, but one cannot be too careful. Fool me once, shame on you; fool me twice, I'm going to find someone who can sell me a Bridge.

      • bjm
      • 8 years ago

      I thought this was going to be another one of your long-winded posts that don’t really say anything. But I must say, nicely said and very interesting viewpoint. +1!

    • EV42TMAN
    • 8 years ago

    Lol AMD swings and misses again hahah

    • r00t61
    • 8 years ago

    If, if, if…shoulda, coulda, woulda.

    And that power consumption is frankly absurd.

    Now we’ll have to wait a year and see if the next iteration irons out all these massive kinks.

      • kamikaziechameleon
      • 8 years ago

      It’s ironic to see how brave they were with so many major changes, yet how they botched so many of the smaller things, rendering many of those gains moot. I expect the next gen of this architecture will do very well, but until then Intel has no competition at all.

    • ronch
    • 8 years ago

    If BD is this power-hungry, I’m concerned about how low they have to clock the chips to get them down to laptop-friendly TDPs. Unless they really improve IPC a lot, imagine how slow a low-clock, low-IPC chip will be.

      • OneArmedScissor
      • 8 years ago

      Breaking news: laptop CPUs don’t come with 8 cores at 1.4v and 16MB of cache.

      Ever seen a laptop with an overvolted Westmere EX? That wouldn’t work very well, either. And yet, astoundingly, Intel found an alternative!

      While power increases disproportionately with clock speed, it’s an inverse relationship, so I don’t know what you’re getting at to begin with. A 2 GHz Bulldozer would use very little power – which is exactly the reason for this particular chip to exist in the first place.

        • khands
        • 8 years ago

        Yeah, we’ll see a two module chip of this at best on laptops and they’ll be cherry picked for power consumption.

        • chuckula
        • 8 years ago

        All the things that you just quoted that will help Trinity’s power consumption also drastically hurt its CPU performance. There’s a good chance that a 4-“core” Trinity system will have less CPU performance than current 4-core Llanos, especially if they are being clocked down to only 2 GHz. Trinity is all about GPU at this point and trying to get into a 17-watt power envelope so that AMD can participate in Ultrabooks.

          • khands
          • 8 years ago

          Don’t forget that Trinity will be using Piledriver cores instead of Zambezi; while that won’t fix everything, it should help.

        • ronch
        • 8 years ago

        I know. But 2GHz? And AMD will probably have to pull out 2 modules and lots of cache. How do you think that will stack up against today’s mobile Phenom and Llano offerings?

          • OneArmedScissor
          • 8 years ago

          [quote<]I know. But 2GHz?[/quote<] Llano already runs faster than that in laptops. [quote<]And AMD will probably have to pull out 2 modules and lots of cache.[/quote<] Exactly. And then it will be fixed. PCs don't get anything out of the slow-as-molasses L3. It does more harm than good. The L2 is large enough and much faster. You don't even have to take my word for it - AMD even said so themselves. And I think we can all agree that even on a desktop, four "modules" is a bit silly. In a laptop, it would just be outright waste. [quote<]How do you think that will stack up against today's mobile Phenom and Llano offerings?[/quote<] I didn't say that. I don't know anything except that its impact on laptop battery life has quite literally nothing to do with this, and that Llano did not exactly set the bar very high.

            • ronch
            • 8 years ago

            I pretty much think along the same lines as you. To clarify what I meant in my original post, AMD would have to do a lot of slimming down of BD if it wants to put it in mobile devices. But that will likely make performance worse than what we’ve already seen in full-featured desktop variants, which is already disappointing as it is. Given that, how would a stripped-down FX fare against current mobile CPUs based on K10 (which are also down-clocked/stripped-down counterparts)? An FX-8150 can be around 2x faster than 4 K10 cores, suggesting per-core performance to be [u<]more or less equal[/u<] to K10 at current clocks. Extrapolating from that, a stripped-down mobile FX with just 2 BD modules (containing no improvements in clocking or IPC) may, at best, deliver the same performance as today's mobile AMD quad cores unless Trinity contains some major improvements. If that's the case, it'll be the same story as with today's desktop FX variants being compared to current K10-based chips.

    • kristi_johnny
    • 8 years ago

    Let’s remember that the 2600K is a “middle class” CPU; Sandy Bridge-E is yet to arrive, not to mention Ivy Bridge.
    Yes, in some benchies, the 8150 equals the 2600K, but the OCed 8150 consumes 348 watts, a lot more. It’s really AMD’s Netburst.

      • Arclight
      • 8 years ago

      I’m sick and tired of people saying this. No, the 990X is not much better than the i7-2600K in most benchmarks, and especially not in gaming. On performance/dollar/power consumption, the i7-2600K is the best Intel has. Read the review again and see for yourself.
      [url<]https://techreport.com/articles.x/20188/1[/url<] SB-E and Ivy Bridge will totally destroy AMD though, if it hasn't been destroyed already.

        • chuckula
        • 8 years ago

        [quote<]I'm sick and tired of people saying this. No, 990x is not much better than i7 2600k in moust benchmarks and especially in gaming.[/quote<] You are right to say that the 990x is not better than a 2600K (or even 2500K) in benchmarks that are not heavily threaded and in games. The 990x, however, *is* a whole lot better than the 2600K in massively multi-threaded benchmarks, which are also typically the same benchmarks where the FX series chips have their best showings. The plain fact is that relatively few applications used on ordinary PCs really are that massively multithreaded to take advantage of either the FX-8150 or the 990x. The big difference (other than price) is that the 990x is still relatively competitive with the 2600K at most applications including single-threaded applications whereas Bulldozer goes from sort-of-good to abysmal when the thread count drops.

        • kamikaziechameleon
        • 8 years ago

        “SB-E though and Ivy Brdige will totally destroy AMD though, if it hasn’t been destroyed already.”

        To say that is to negate the potential of this design. All indications we’ve seen are that while first-gen products from this design are near total crap, the potential for it to offer an extremely compelling asymmetrical product to Intel is more apparent than ever. If only AMD can realize this, lol.

        • OneArmedScissor
        • 8 years ago

        Sandy Bridge E has no destroying to do. It’s just leftover scraps from the Xeons, thrown to a dying niche market.

        Ivy Bridge can’t make things any worse than Sandy Bridge. It basically [i<]is[/i<] Sandy Bridge. What are you people expecting it to be?!? You've set a made up bar too high and it's just going to lead to disappointment.

          • Arclight
          • 8 years ago

          I’d expect SB-E CPUs to be to SB like the 980x was to a 920.

          As for Ivy, 77W on top model (which might sport an IGP included in that TDP) sounds pretty damn good to me, plus the few % more performance from tweaks compared to SB.

          • CB5000
          • 8 years ago

          Except with Ivy Bridge, power consumption will be lowered with the smaller die, along with performance improvements of about 10-20%. That will make things worse for Bulldozer if it can’t gain more performance per watt. It’s sad that it has to use nearly twice the power to beat the performance of the 2600K.

            • OneArmedScissor
            • 8 years ago

            Intel themselves said 5% better CPU performance for Ivy Bridge, so expect more like 2% that you can’t notice in most cases. It’s a shrink. Again, what did you expect? Where are these inflated numbers coming from? The closest thing to a tangible benefit is marginally reduced cache latency. There was no reason to expect anything else.

            Die size ≠ power use. Transistor count ≠ power use. What the transistors are doing = power use.

            While the CPU cores should use less, what they make of the GPU (new design), memory controller (higher speed), and PCIe controller (double the bandwidth) remains to be seen. These things aren’t free, and they have to measure up not to Bulldozer, which is a grossly inappropriate comparison, nor to Llano, which will be obsolete, but to Trinity. Their attempt is apparent, but won’t be a known quantity until release of [b<]both[/b<]. You'd be saying the same thing about Sandy Bridge's power use if Intel presented it as a server chip with 8 cores and oodles of cache. Exactly that is coming - with a 150W TDP. Obviously, there are other, and better, ways to put a new CPU core to use in PCs, and we have yet to see them.

            • maroon1
            • 8 years ago

            If you are talking about IPC, then yeah, it is 5%.

            Ivy Bridge will probably have a higher stock clock speed as well, so the performance gains are not limited to IPC.

            • Draphius
            • 8 years ago

            The part that worries me about that die shrink is the inherent problems with the lithography process, and I wonder how many months after launch it’s going to be until they get all the manufacturing kinks worked out. The first PC I built had a Pentium 133 and my next had a Pentium 2, and from that point on I’ve owned AMD chips because my pocketbook couldn’t justify paying Intel crazy money for chips that weren’t a whole lot better than AMD’s. But since Sandy Bridge came out, I don’t know why anyone would buy an AMD chip. If it’s money you’re worried about, Sandy Bridge will save you that in power consumption over its life, and the performance is better in most categories. I don’t get how AMD or Intel fanbois can be happy about anything with this situation, though, because in the end we need more chip makers and more competition, otherwise we will be the ones who pay for it. Whether it be our bank accounts or the performance of a chip, competition helps us consumers more than anything and pushes companies to try harder.

          • chuckula
          • 8 years ago

          Nobody is expecting Ivy Bridge to be miraculously faster than Sandy Bridge, BUT: 1. there will be real IPC increases in IB on the order of about 5% or so (maybe more for specialized floating-point workloads with lots of divisions). 2. Ivy on the desktop will likely have *excellent* overclocking ability: instead of 5GHz as a “lucky” SB overclock, you’ll have 5GHz as a “normal” OC for Ivy Bridge, with decent thermals to boot.

          When you consider that Piledriver is going to be considered a “huge success” if it gets a 10% performance boost, and that Piledriver is coming out after IB has launched, you get the idea. It’s very amusing that when Intel launches a platform with a 10% improvement and better thermals, OneArmedScissor calls it useless, while AMD launching a platform that mostly corrects the glaring shortcomings of its last platform is a huge advancement…

      • maxxcool
      • 8 years ago

      990x? I did not know we were comparing chipsets to CPUs these days…

        • Arclight
        • 8 years ago

        Behold
        [url<]http://ark.intel.com/products/52585/Intel-Core-i7-990X-Processor-Extreme-Edition-%2812M-Cache-3_46-GHz-6_40-GTs-Intel-QPI%29[/url<]

        • Vaughn
        • 8 years ago

        lol you just made yourself look like the biggest noob!

        Think before speak! or in this case type!

      • sluggo
      • 8 years ago

      The 8150, even overclocked, does not peak at 348 watts. Those numbers are system power usage. Show me an Atom that throws off 52 watts, good grief.

        • sircharles32
        • 8 years ago

        I was waiting for someone to pick up on that one.

        +1

          • NeelyCam
          • 8 years ago

          Seriously? CPU and System power numbers are close enough in this case that it doesn’t really matter much… his point is valid nonetheless.

            • sircharles32
            • 8 years ago

            His point is valid, but you marked me down… thanks, buddy.

            • NeelyCam
            • 8 years ago

            When I said “his” I meant kristi_johnny’s. And I marked both of you down – sluggo because he started whining about the CPU power not being 342W since some of it is “system power”, [i<]completely[/i<] missing the point about the FX-8150 CPU power being atrociously high (probably around 250-300W) compared to 2600K... and you because you were supporting his pointless argument. Sorry about that.

            • sluggo
            • 8 years ago

            Whining, thanks. How about we talk about missing the point? Did I ever say anything about the 2600k or any implied comparison? I did not. I pointed out the poster’s misreading of the data and his posting of that misunderstanding as fact. I had nothing at all to say about Intel or his original point, so I didn’t say anything about it. I just like people not to quote data incorrectly, as it tends to get passed along as “fact” sometime later. And by the way, my correction actually makes the 2600k look better, and I’m shocked as hell that you left that particular teat unsucked.

            You cannot let any potential slight to an Intel product pass by. You even choose to defend a position even when no one is attacking it, as I did not.

            Calling out mistakes in data interpretation is not pointless.

            • NeelyCam
            • 8 years ago

            [quote<]And by the way, my correction actually makes the 2600k look better, and I'm shocked as hell that you left that particular teat unsucked.[/quote<] It may be shocking to a lot of people, but I'm actually not biased towards Intel, and I'm not sucking every teat that implies Intel is somehow better. Eliminating non-CPU power obviously makes the 2600K look even better than BD, but that's unrelated to my comment. kristy_johnny's post was a bit incoherent, but [i<]I believe[/i<] the point was that the 2600K is not the highest-end Intel CPU (that is SB-E/IB... the implication being that those two will pummel BD in performance comparisons even more), and that the 8150 can meet the 2600K's performance overall (not just in a couple of benchies) only if it's overclocked, in which case it will eat a ton of power. The number quoted was the system power, but pointing that out doesn't change the overall point. I was surprised that sircharles thought your post was particularly commendable. That said, I want to apologize for calling your post 'whining'. It really wasn't. I've had a rough week... I was in a pissy mood, and was unjustifiably offensive. I'm sorry about that.

    • luisnhamue
    • 8 years ago

    I just can’t understand it; the FX-8150 stock-speed results seem a lot different from the ones we got last week.

    I don’t know, but last week the picture was too bad.

      • NeelyCam
      • 8 years ago

      These are the ones where FX-8150 was somewhat competitive.

        • travbrad
        • 8 years ago

        Yeah, it would be interesting to see how it did in the other tests. Surely a big part of the reason for overclocking would be to overcome its weak areas?

          • NeelyCam
          • 8 years ago

          Maybe, but then again one could just as well overclock the competition (2500/2600K)… BD can only close the gap if it overclocks [i<]more[/i<] than the Intel CPUs at the same price range.

    • UberGerbil
    • 8 years ago

    [quote<]it will be thinking deep thoughts about its own mortality before dropping off into C1E sleep[/quote<]That gave me a very Douglas Adams vibe, which is high praise. Though nothing matches the bark of a laugh that power graph elicited from me.

    • Forge
    • 8 years ago

    As a long time AMD fan, this article, much like the main Bulldozer review, makes me wish I could afford an i7-2600K.

      • Synchromesh
      • 8 years ago

      Yup, last review is the reason I’m pondering going Intel after 9 continuous years of AMD.

        • Chun¢
        • 8 years ago

        That’s why I’m happy that I don’t need a CPU upgrade anytime this generation. Who knows; look at what Intel did after Prescott, and what AMD had after Barcelona.
        Hopefully ‘Bulldozer II’ will be better?

    • kamikaziechameleon
    • 8 years ago

    It’s funny: the longer this sits, the more promise we see in future releases. Too bad this release is such an unpolished turd.

    • Arclight
    • 8 years ago

    Excellent, I know how I will heat my room this winter: FX-8150 @ 4.5GHz, 2 x GTX 480s, and I’m good to go.

      • khands
      • 8 years ago

      Make sure the CPU and GPUs are on different PSUs.

      • NeelyCam
      • 8 years ago

      Add an Onkyo HT receiver, and you’ll need an all-year-round air conditioner…

        • Kaleid
        • 8 years ago

        It’ll create jobs. For the fire department.

      • FuturePastNow
      • 8 years ago

      Where’s your AMD loyalty, man? You want two Radeon HD 2900XTX’s.

      Make sure you upgrade your home electric service to 200 Amps.

        • Arclight
        • 8 years ago

        I don’t actually care for brand names, I only care for the best wattage/dollar, and I believe the GTX 480 beats the 2900XTX in wattage… if not, I’m getting 2 GTX 590s; that ought to beat it.

          • FuturePastNow
          • 8 years ago

          Oh, no, I don’t think you can beat the 2900XTX on wattage/dollar. Its load power consumption is almost exactly the same as a GTX 480’s, but idle is much higher, and you can get them “buy it now” on eBay for as low as $69.

          Now that’s affordable heat.

            • Arclight
            • 8 years ago

            Oh thanks buddy. Mo’ graphics heat fo’ me!

            • Waco
            • 8 years ago

            I had a 2900XT with the full 1 GB of GDDR4…it ate power like nobody’s business. It was decently fast as long as you didn’t use antialiasing…

    • maxxcool
    • 8 years ago

    Terrible 🙁 I get a 7.10 in Cinebench on a lowly 4.0GHz X6…

    • odizzido
    • 8 years ago

    I am glad AMD can see the future of home heating, this is truly a next generation fusion chip.

      • Meadows
      • 8 years ago

      +1, and yet, that’s what I do to save a little on “conventional” heating bills.

      • atryus28
      • 8 years ago

        AMD’s version of the Preshott chip. 🙁

        • NeelyCam
        • 8 years ago

        Curious how they started designing the BD architecture five years ago – the marketplace crucified Prescott around the same time. Were they not paying attention..?

      • swaaye
      • 8 years ago

      I was thinking I needed a ceramic heater at home, but silicon works too!!!

      • Wirko
      • 8 years ago

      A chip for a “home theater PC” (if you take one “t” away).

      Now try to calculate the current through the CPU… it could be close to two hundred amps. (1.55 V x 170 A would be 263.5 W for the CPU alone, an estimate that’s quite plausible given that the whole system consumed 76 W at idle when not overclocked.)
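Wirko’s back-of-the-envelope current figure checks out; a minimal sketch, using the wattage and voltage from the comment above (rough estimates from the thread, not measured values):

```python
# With power P = V * I, the current through the core voltage rails is
# I = P / V. The 263.5 W and 1.55 V figures come from the comment and
# are back-of-the-envelope estimates, not measurements.

def rail_current_amps(power_watts: float, core_voltage: float) -> float:
    """Estimate rail current at a given CPU power and core voltage."""
    return power_watts / core_voltage

print(round(rail_current_amps(263.5, 1.55), 1))  # -> 170.0, i.e. "close to two hundred amps"
```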

    • ModernPrimitive
    • 8 years ago

    Stick a fork in me Jerry, I’m done !

    • NeelyCam
    • 8 years ago

    Hey, would you be able to do an “iso-power” comparison between SB and BD? I.e., overclock them both to the same load power consumption level (or, alternatively, overclock/volt SB to BD’s stock power consumption level or underclock/volt BD to SB’s stock levels )?

    It would be interesting to see just how the two compare in apples/apples power efficiency tests… so far, 2600k has been faster while consuming less power.

      • Chrispy_
      • 8 years ago

      Look at the “task energy” graphs in TR’s CPU reviews.

      You’re asking for graphs that show task performance against fixed energy.
      Scott provides graphs showing energy performance against fixed task.

      Surely you can extrapolate?

        • NeelyCam
        • 8 years ago

        Those are merely indicators. If you have to hike the voltage and overclock to get to the same performance level or task time, the power consumption goes up nonlinearly – linear extrapolation doesn’t apply, and there aren’t enough data points to do a non-linear one. There have already been reports saying that undervolting BD works great, and gets you a pretty efficient CPU – it’s just that trying to push SB-level performance out of it requires too much power.

        What I was going for is 1) how much power does the BD consume to get to the same performance level as SB, 2) how much energy does the BD consume to complete a task [i<]as quickly[/i<] as SB, 3) how much faster does SB complete the task if it's overvolted/overclocked to the same task power level with BD, and 4) which one completes a task faster if both are severely undervolted/clocked to very low but equal task power levels. The first two relate to power/heat/noise sacrifice to get to a 'desired' performance level, and the last one relates to a performance sacrifice to get to a 'desired' power/heat/noise level. It's all to get a better estimate of where the 2015 AMD architecture will be with the 50% power efficiency improvement.
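The nonlinearity described above follows from the standard dynamic-power relation, roughly P ≈ C·V²·f. A small illustrative sketch; the effective-capacitance constant and the voltage/frequency operating points below are made-up placeholders, not measured Bulldozer or Sandy Bridge values:

```python
# Switching power scales roughly as C * V^2 * f, so raising frequency
# together with the voltage needed to sustain it compounds quickly:
# here a ~31% clock bump taken with a voltage hike nearly doubles power.
# All constants are illustrative placeholders, not measured data.

def dynamic_power(c_eff: float, volts: float, freq_ghz: float) -> float:
    """Approximate switching power: P = C_eff * V^2 * f."""
    return c_eff * volts ** 2 * freq_ghz

stock = dynamic_power(30.0, 1.20, 3.6)  # hypothetical stock operating point
oc = dynamic_power(30.0, 1.45, 4.7)     # hypothetical overvolted OC point

print(f"clock up {4.7 / 3.6 - 1:.0%}, power up {oc / stock - 1:.0%}")
# -> clock up 31%, power up 91%
```

This is why a handful of task-energy data points taken at stock settings can’t simply be extrapolated linearly to overclocked ones.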

          • NeelyCam
          • 8 years ago

          I really don’t know why I keep getting downthumbed for this… I really thought this was a pretty decent, analytical suggestion for comparing these chips.

          Maybe I should quit trying to suggest logic to AMD fanbois.

            • Fighterpilot
            • 8 years ago

            Perhaps not accusing others of being AMD fanbois, whilst denying you do the same over Intel products, would give you a better success rate with your “analytical suggestions”?

            • NeelyCam
            • 8 years ago

            Personally I don’t think past comments should matter for the current discussion, but I understand some people have trouble understanding logic, and that leaves them somewhat reliant on “feelings”. But all too often that seems to mean things get downthumbed based on the messenger, and not the message.

            • OneArmedScissor
            • 8 years ago

            Says the guy who follows people around, claiming they said things in the past that they never did. Very cute.

            For the record, I don’t down vote you. But stop and think about that and maybe you’ll realize why other people might, regardless of what you say. It’s a taste of your own medicine.

        • dpaus
        • 8 years ago

        He may be a bit of a fanboi, but don’t call him a Shirley.

          • NeelyCam
          • 8 years ago

          don’t call me a Fanboi

      • chuckula
      • 8 years ago

      OK, I know that NeelyCam gets modded down just because he says something, but this is actually a really good idea. If performance-per-watt really is that important, it would be interesting to see how these systems compare at various power levels. You could even underclock them to try lower-power envelope modes. Maybe Bulldozer is actually more competitive at lower clock speeds, but had to be overclocked by AMD to keep up with Sandy Bridge.

        • Waco
        • 8 years ago

        The server parts definitely follow that trend. They have two Orochi dies and fit into the standard power envelopes at > 2 GHz speeds.

      • PrecambrianRabbit
      • 8 years ago

      I’d like to see a performance comparison with each processor at its respective Vmin… thus far signs point to SB being more efficient, but I’m never sure until I see data.

        • NeelyCam
        • 8 years ago

        Now [b<]that[/b<] would be interesting! If I recall from ISSCC, AMD chose to use lower-density 8T SRAM to enable a lower Vmin, while Intel was using some read/write assist schemes to bring down their Vmin... although I think that's more for idle than load.

          • NeelyCam
          • 8 years ago

          Checked out the SPCR article on BD – it does idle at higher voltage than SB… So, I guess I’d say Intel wins with the idle power.

          [url<]http://www.silentpcreview.com/article1222-page4.html[/url<]

      • NeelyCam
      • 8 years ago

      C’mon guys – we’re not even at double digits yet!!!

      (Translation: my [i<]god[/i<] amd fanbois are [i<]freaks[/i<]... in what universe does this really deserve a -9?)

    • OneArmedScissor
    • 8 years ago

    No northbridge test? You’re running the base clock more than twice as fast. While I doubt the L3 cache is the sort of transistors that can be pushed as far as Sandy Bridge’s, it’s certainly going to be able to move more than the 300 MHz that last voltage bump accomplished.

      • pikaporeon
      • 8 years ago

      Good observations (and amazing username)

      • Meadows
      • 8 years ago

      It’s unlocked. Nobody’s putting strain on the northbridge.

        • OneArmedScissor
        • 8 years ago

        Northbridge = L3 cache speed, which they have left stuck at 2.2 GHz, while Sandy Bridge’s went right up with the core clock to 4.4 GHz. I’m not referring to the system clock.

          • Damage
          • 8 years ago

          Sandy Bridge’s uncore doesn’t run at the same speed as the cores. The uncore has a separate clock domain. I didn’t overclock the uncore/L3 when I overclocked the Core i7-2600K.

            • OneArmedScissor
            • 8 years ago

            I don’t own one and don’t know what control you have when you’re overclocking, but wouldn’t it at least be 3.4 GHz? I was under the impression it would also increase along with the turbo boost.

            Also, I thought the L3 isn’t part of the “uncore” anymore, which is now a very small part of the chip that’s more or less just some management circuits?

            • Damage
            • 8 years ago

            Ah, wasn’t thinking right. With Sandy, the L3 partitions are associated and clocked with cores. Sorry.

            • kamikaziechameleon
            • 8 years ago

            So was the L3 at the processor clock on the Sandy Bridge? Have you been able to push the FX L3 cache any further?

          • Martian
          • 8 years ago

          Most testers have done it that way ever since K10; it’s an easy way to make AMD look worse.

            • Yeats
            • 8 years ago

            Right now the easiest “way to make AMD look worse” is to run benchmarks.

            Unfortunately.

            • Martian
            • 8 years ago

            No, some benchmarks make AMD look bad, improper overclocking makes it worse. This is a difference.

            • Yeats
            • 8 years ago

            I agree w/you, but I think the testers that don’t OC the NB do so out of laziness rather than malice. My unlocked/OC’d X3 710 has had its NB @ 2.5GHz for nearly 3 years now.

            • Martian
            • 8 years ago

            Like I said, this is the easy way. 😉

            • Palek
            • 8 years ago

            I don’t think I like what you’re insinuating. TR has an impeccable reputation of fairness and unbiased reporting, and to suggest otherwise is bordering on slander.

            Besides, you are talking about overclocking here. How well FX may or may not overclock or how it should be done “right” has absolutely no bearing on evaluating the real performance of shipping products. It gives us a glimpse of what [u<]may[/u<] be possible with the new architecture and what additional speed boost a fringe group of hard-core PC users may be able to squeeze out, but that's all it does.

            • Martian
            • 8 years ago

            I’m not insinuating anything, I simply pointed out a fact. For some reason, AMD is barely ever overclocked properly. This specific review is about overclocking, not about stock product performance.

            • Palek
            • 8 years ago

            Maybe I misunderstood your comment, then. In that case, my apologies. “Easy way to make AMD look worse” sounds a lot like you’re talking about deliberate action, though. Anyway…

            The article is indeed about overclocking. My point is, how well a chip overclocks does not add to or take away much from the reputation of either CPU manufacturer outside the very narrow circle of “enthusiasts” like us. Even there, I’d venture a guess that the majority of self-confessed PC enthusiasts use their CPUs at stock speeds, so the value of overclocking headroom is limited. I personally find it interesting for the reason I mentioned above: to give me an idea of the potential of the Bulldozer architecture.

            • Martian
            • 8 years ago

            I don’t argue with you; these can be accepted as facts too. Still, it bothers me: if you overclock any hardware to squeeze out the last drop of performance simply out of curiosity, then you squeeze out the last damn drop, otherwise the whole effort is absolutely pointless and leaves questions.

      • maxxcool
      • 8 years ago

      I asked about this last time too. NB overclocking makes a really good DENT in benchmarks. Running my X6 with the HT and NB at 2.75GHz benefited nearly 3x as much as cranking the memory from 1333 to 1866.

      Hmmm, I will drop him an email and see what he says about tossing one more set of benchies at it with an upped NB…

      • maxxcool
      • 8 years ago

      Emailed him… hope we’ll see an upped NB/HTT/FSB bench or two…

    • Jigar
    • 8 years ago

    Gaming benchmarks are missing, Damage. I personally wanted to see, if this baby is overclocked crazy, what will it do in games? Will it match the 2600K?

    EDIT: Funny, once this chip becomes a speed demon (4.7GHz), it only lost in one benchmark against the OCed 2600K… that’s nice to see, but those power consumption results are horrible…

      • sweatshopking
      • 8 years ago

      It is rather surprising… makes you wonder what could have been ;( Oh well, maybe the next version, Piledriver, will run a bit better. Here’s hoping. Higher speeds and lower power consumption, and this chip would be great!

      • Xenolith
      • 8 years ago

      High end games are GPU-bound. They don’t give you a good benchmark of the CPU.

        • Jigar
        • 8 years ago

        I know that, but that doesn’t kill my curiosity 😉

          • UberGerbil
          • 8 years ago

          Then the results, if they were provided, wouldn’t have much effect on your curiosity either.

            • StuG
            • 8 years ago

            I disagree. Though the average or top FPS would be similar, I see huge differences in CPU choice and minimum frame rate.

            • Waco
            • 8 years ago

            This. Minimum framerates are extremely interesting when looking at CPUs.

    • TheEmrys
    • 8 years ago

    Startlingly good results overclocking. I’d be tempted if those power numbers weren’t so terrifying.

    However, it makes me mildly optimistic about where AMD will take this chip in the next couple (6?) years before the next architecture is released. If this can be hammered down to 95W, and maybe a die shrink after that… it could be a pretty good lifecycle.

    • pikaporeon
    • 8 years ago

    Two mostly unrelated questions
    1: Because the cooler had ‘a lot more room’, did you later try scaling back the fan speeds/profile and sustaining the same overclock? Seems like it might have been possible to save your hearing in that regard.
    2: Did you, or do you plan on doing, any Folding performance testing? I remember reading an old F@H benchmark ages ago here, but I’m not sure if that’s kept up. While Bulldozer evidently has a lot of shortcomings, the nature of folding may be suited to it, and considering the folding community here, it might be worth looking at.

      • Damage
      • 8 years ago

      The cooler has two fan profiles: Extreme and Silent. I tried running at 4.7GHz on Silent, and the system wasn’t quite stable. I imagine one could either dial back the clock slightly and/or create a custom fan profile (the app allows for that) in order to hit a nice, happy medium. The cooler is quite effective, even when it’s not at top fan speeds.

      notfred hasn’t updated his F@H benchmark for over a year. Dunno if it still works. Don’t have any plans right now to test that, although I might be persuaded eventually. 🙂

        • pikaporeon
        • 8 years ago

        No problem. I’ve got an FX-6100 on the way (today) and I’ll post in the forums what kind of PPD I’m getting at least.

        • demani
        • 8 years ago

        How quiet is Silent (I assume very, but marketing has been known to gloss over things before)? Not that I am currently looking (my HTPC case is pretty small)

          • Damage
          • 8 years ago

          Silent runs the fans very slowly at idle, and the fans are pretty quiet at that speed. I don’t notice the difference from the tower air cooler with one fan we’d used before. One of the fans had a bit of friction in our unit for a while, which caused some rotational noise, but that seems to have gone away with use. Under load, the speeds can crank up a little once the fluid temps rise, but how much will depend on the degree of overclocking and the load. For normal use without OCing, the fan speeds don’t seem to rise at all from idle. You’ll hear them start to whine some if you’re OCing and running something intensive.

          Pump noise is a small buzz that’s inaudible more than 12″ away from the pump/socket area. Seriously. Inside a case, should be totally shrouded.

          The overall noise levels at idle are comparable to my Corsair H60, I’d say, although this thing is obviously bigger and capable of moving more heat.

    • AGerbilWithAFootInTheGrav
    • 8 years ago

    Looks like if this thing was made in Intel fabs, we would be looking at reaching 6GHz with such cooling, and not 5… ehhh… what if, what if…

      • FuturePastNow
      • 8 years ago

      And if this thing was designed by Intel engineers, it would have 25% higher IPC, 25% lower power consumption, and 25% fewer transistors.

        • derFunkenstein
        • 8 years ago

        even at 25% fewer transistors it’d still weigh in at really honkin’ huge.

      • pikaporeon
      • 8 years ago

      and if it was a better chip, it’d have better results!

    • 5150
    • 8 years ago

    Looks at first graph: “Wow, overclocking really helped!”
    Looks at first graph’s header: “D’oh!”

      • Ringofett
      • 8 years ago

      Yeah… for someone doing a lot of heavy work, or someone running a DC project 24/7, getting a 2600K level of performance draws about twice the watts, which would run me $17 a month at my local utility rates. That’s not accounting for running the AC extra due to 300+ watts being dumped into my room.

      In practice, I wouldn’t run it 24/7, but the money adds up. The money saved going 2500K would pay the small premium on the motherboard, and then stack up in the bank.
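The monthly figure above is easy to reproduce; a sketch, where the 300 W delta and the roughly 8¢/kWh rate are assumptions chosen to match the comment’s numbers (actual utility rates vary widely):

```python
# Rough cost of an extra chunk of load running around the clock:
# energy per month (kWh) = watts * hours * days / 1000, cost = kWh * rate.
# The wattage delta and electricity rate are assumed figures.

def monthly_cost_usd(extra_watts: float, usd_per_kwh: float,
                     hours_per_day: float = 24.0, days: int = 30) -> float:
    kwh = extra_watts * hours_per_day * days / 1000.0  # energy per month
    return kwh * usd_per_kwh

print(round(monthly_cost_usd(300, 0.08), 2))  # -> 17.28, about $17/month
```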

      Hard to even justify it out of sympathy, except for folks that heavily use apps that really shine on it.

      I’d be curious to see how an 8120 did on the OC bench, but not too curious — just waiting to see what AMD managed to do with Piledriver now.
