AMD 90nm power consumption measured

For ages, moving to a smaller fab process has been the key to achieving lower power consumption and higher clock speeds, but the 90nm process hasn’t worked out that way, at least for Intel. After watching Intel’s struggles with its rather hot and power-hungry 90nm Pentium 4 “Prescott” processors, we’ve been waiting with some trepidation to see whether AMD’s new 90nm chips would have similar problems.

We were finally able to get our hands on a 90nm Athlon 64 3500+ this past weekend, and we’ve been testing it to see how it compares to the 130nm version. Since this 90nm Athlon 64 3500+ runs at the same clock speed as the 130nm Athlon 64 3500+, we were able to do a direct comparison between the chips running at 2.2GHz.

Let me give you the test setup briefly. We compared power consumption by measuring power usage for our entire testbed system, sans monitor, at the wall outlet. The testbed system included an Asus A8V Deluxe motherboard, 1GB of Corsair XMS 3200XL DDR400 memory, an NVIDIA GeForce 6800 GT graphics card, an Asus DVD-ROM drive, a Maxtor MaxLine III 250GB SATA hard drive, and an OCZ PowerStream 470W power supply. (The 130nm chip we used was actually an Athlon 64 3800+ underclocked to 2.2GHz, for what it’s worth.) We also tested power consumption on a similarly-configured Pentium 4 system based on an Abit AA8 DuraMax motherboard with 1GB of OCZ DDR2 533MHz memory and the PCI-E version of the GeForce 6800 GT (all other components were the same as the Athlon 64 rig). The Pentium 4 was a Prescott 90nm core running at 3.4GHz. Update: Cool’n’Quiet was not enabled on the Socket 939 motherboard.

To stress the processors in several different ways, I ran a handful of our regular CPU benchmarks and took power readings with each of them running. We've used these tests many times in reviews like this one. I included the Sphinx speech recognition benchmark, Sciencemark's "moldyn" molecular dynamics computation, and Xmpeg video encoding with the DivX codec. I also measured power consumption with the system sitting idle at the Windows XP desktop. Here are the results:

As you can see, our die-shrunk Athlon 64 came out looking pretty darned good. Of course, every individual chip is a little different, but the differences here are substantial enough to suggest that AMD's 90nm Athlon 64s should generally consume less power than their 130nm counterparts.

I also took some quick temperature readings, and I'll give them to you, although I wouldn't recommend taking them as gospel. The ambient temperature in my office was about 85F/29C, and it probably varied a bit over the course of the tests. The test rig was on an open bench and was equipped with a stock AMD CPU cooler. I recorded temps at idle and under load via the Asus Probe utility, with the Folding@Home client providing the CPU loads. The 130nm Athlon 64 idled at about 49C and ran up to 61C after ten minutes of number crunching. The 90nm version's idle temp was 41C, and it peaked at 55C under load. All in all, a good showing from the new 90nm AMD chip.

Incidentally, there have been rumors that the new 90nm Athlon 64s incorporate some planned enhancements to the K8 core, including SSE3, better data prefetch, additional write combining buffers, and a tweaked memory controller. We haven't yet been able to confirm with AMD whether the new 90nm chips include these changes, but the preliminary indications seem to be negative. CPU-Z identifies this chip as a Winchester core and doesn't list SSE3 among the supported extensions. More tellingly, I've run a handful of synthetic memory benchmarks on the 90nm 3500+, and the scores didn't differ significantly from the 130nm chip's in my preliminary tests. For now, the new 90nm chips appear to be a successful die shrink of the current Athlon 64: cooler, with less appetite for power, and otherwise largely unchanged.

Comments closed
    • ivanwolf
    • 15 years ago

    One thing to note: in the past, Intel has used a copy-exact-and-shrink approach when moving to new, smaller processes, while with the last few revisions AMD has added features with its die shrinks. This time it is the exact opposite. Congrats to AMD for pursuing a process that is proven to work. It is a measure of AMD's recent success that Intel has felt the need to go outside its normal bounds to compete. 65nm should be really interesting; electrical leakage has become a definite issue with smaller processes, so comparing the older P3 to the Prescott core on total watts consumed is a definite apples-to-oranges comparison. If you somehow managed to more than double the transistors of the old P3 and pushed the speeds to even A64 levels, you would see a huge difference in power draw.

    • aap
    • 15 years ago

    This report stirred quite a controversy in the PC enthusiast world.
    However, many people raised objections to the accuracy of the at-the-wall measurement technique. They cite AC-DC converter inefficiency, on-board DC-DC converter inefficiency, possibly different power consumption of mismatched components, etc.

    To eliminate these objections, I am proposing an improvement to "the wall measurement technique". It is possible to measure the conversion factor of each individual system. To do this, you need to calibrate the consumption right at the processor socket by measuring the difference in overall power consumption between the normal operation mode (whatever the test is) and the same mode but with a known resistor connected to the CPU core voltage rails.

    If the "calibration" resistor is, say, 1 Ohm, it should cause a 1.5W increase in "raw processor power" (I assume a 1.5V core Vcc). Due to the limited efficiency of the DC-DC converter and the ATX PSU, the wall power increase is expected to be higher, about 1.5/0.9/0.8 = 2.08W.
    The difference between the two actual measurements will give you the compound efficiency of the test system.

    The "calibration" will require soldering two thick wires to each motherboard, preferably right to the legs of the core voltage electrolytic caps. The whole "at the wall" measurement technique should be calibrated as well, by checking it with a big-ass resistor of the proper value (in place of the PC).

    Best regards and good luck,

    – Alexei Predtetchenski, aka “aap”

      • aap
      • 15 years ago

      Sorry, small correction: a 1 Ohm resistor at 1.5V will consume 2.25W, with
      the power at the wall side correspondingly higher, ~3W (estimated).
      But the idea is correct.
      An even better way is to use the 12V side of the on-board DC-DC converter, and use the same resistor to calibrate its efficiency.

      – aap
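
      To make the arithmetic above concrete, here is a minimal sketch of the proposed calibration math, taking the 1.5 V core rail, 1 Ohm resistor, and the 90%/80% VRM and PSU efficiencies from the posts above as assumptions:

          # Back-of-envelope check of the proposed resistor calibration
          v_core = 1.5     # assumed CPU core voltage, volts
          r_cal = 1.0      # calibration resistor, ohms
          eff_vrm = 0.90   # assumed on-board DC-DC (VRM) efficiency
          eff_psu = 0.80   # assumed ATX power supply efficiency

          p_resistor = v_core ** 2 / r_cal                 # dissipated in the resistor: 2.25 W
          p_wall_delta = p_resistor / (eff_vrm * eff_psu)  # extra draw seen at the wall: ~3.1 W
          compound_eff = p_resistor / p_wall_delta         # ~0.72, the figure the calibration recovers

          print(f"resistor: {p_resistor:.2f} W, wall delta: {p_wall_delta:.2f} W, compound efficiency: {compound_eff:.2f}")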

    • Buub
    • 15 years ago

    Too many posts, jumping in late here, so I won’t respond to any individual one. Just a couple observations:

    1) The 90nm processor runs at a lower voltage than the 130nm part. That alone is probably responsible for almost the entire power savings. It would be interesting to over-volt the 90nm chip, or under-volt the 130nm chip (or maybe meet in the middle), to see how the power usage compares, and whether you could get most of the benefit just by under-volting existing 130nm processors.

    2) Even though AMD’s process appears fine for making slower chips, hence starting with lower-end chips, that says nothing about how good it is at ramping. The troubling clue here is that they have introduced /[

      • indeego
      • 15 years ago

      "3) The days of easy speed gains are over... [snip]"

      Not quite sure what you mean. No increase in speed was ever said to be "easy," yet Intel continues along Moore's law: http://www.intel.com/research/silicon/mooreslaw.htm One might say that speed gains have increased. ATI and Nvidia's latest have absolutely blown everything else out of the water. Increases of 100-150% in raw pixel-pushing power were seen, far more than previous generations.

        • HiggsBoson
        • 15 years ago

        The efficiencies that apply to massively parallel GPUs are not at all applicable to general-purpose CPUs. It's not apples and oranges you're talking about here; it's more like grapes and carrots. :)

        • Buub
        • 15 years ago

        Uh huh, let’s look at (essentially) an Intel press release to see if physics is slowing them down.

        Or, we could look at statements by lots of other engineers in the industry, such as a head dude at IBM who says that we're hitting a very hard wall: http://eetimes.com/semi/news/showArticle.jhtml?articleId=19502091 If you don't believe him, take a look at the industry. Look at the rate of change we had up to the point where 0.13 sorta maxed out, around the 2.8~3.2GHz timeframe. Look at the rate of change since that time. Notice a difference?

          • indeego
          • 15 years ago

          Physics has nothing to do with explaining the expansion to more cores or the increase in transistors. They are building out horizontally, not up. This doesn't change that Moore's law stands for the foreseeable future.

            • HiggsBoson
            • 15 years ago

            You're conflating two different things. Speed in the sense the original poster meant was primarily clock speed, from what I gather. That is certainly slowing down; it's been slowing down for a while.

            Moore’s Curves with regard to transistor density are something different altogether. AFAIK it doesn’t have anything to do with the switching speed, i.e. clock.

            And once again, you cannot compare the performance of GPUs and general purpose CPUs. The task they are used for, and the way their performance is measured is completely different.

    • crose
    • 15 years ago

    The 90nm A64 is a very hot topic (he he) amongst hardware enthusiasts, but why are there so few reviews among the top hardware sites? Are Anand & Co under NDA or are these chips just very hard to get one’s hands on?

      • R2P2
      • 15 years ago

      Performance is exactly the same as the 130nm chips, so maybe they don’t think it’s worth posting a whole article just for heat and power consumption comparisons?

    • Xspringe
    • 15 years ago

    I second this! All modern A64 motherboards support Cool'n'Quiet, so it'd be logical to include it in the testing, imho.

    • Wintermane
    • 15 years ago

    Oh, I forgot AMD's greatest problem… they are in debt and would have to make a gargantuan profit to really profit from any of this any time soon. Anything less just pays the banks, not AMD.

    • Wintermane
    • 15 years ago

    First off, what causes AMD the most trouble is that their chips tend to get slapped into really stupid puters.

    I have seen $1000 AMD systems sold with 64MB integrated vid AND single-channel PC2700 memory.

    The same money with Intel comes with dual-channel PC3200 memory and a PCIe ATI card.

    We won't even get into the fact that AMD doesn't have enough chips plopping down into the $400-600 range in OEM channels now. It's now dominated by Celeron Ds.

    As for the Celeron D and Prescott, remember that for the mainstream buyer, who still isn't gonna do much all that often, the chip runs cool. It's only when you push it to the max that it really gets hot.

    As for the Duron, they are popping up from wacky oddball OEMs in 300-buck systems sold with integrated vid and 128MB of slow RAM… let's guess how peppy they are, shall we? :)

      • Convert
      • 15 years ago

      What the hell does that have to do with anything? Furthermore, where the hell are you getting your information?

        • flip-mode
        • 15 years ago

        y[

        • Wintermane
        • 15 years ago

        I'm getting my info from the most obvious source imaginable: I got off my ass and looked at what they're selling now at the big stores that sell most of the puters people buy these days.

        It doesn't do AMD any good to have a better CPU if the puters they get stuffed into bring 'em to a crawl, and that's exactly what is going on right this second.

          • Convert
          • 15 years ago

          Try looking somewhere else, for cryin' out loud. I know a store that sells underpowered systems with a P4 stuck in them. Crazy, isn't it?

          The system specs are entirely up to whoever is selling them; that changes from vendor to vendor.

          This doesn't cause AMD any trouble at all. People who buy these systems usually don't know enough about computers. They will buy any piece of crap the salesman pushes. A computer is just a computer to most consumers. The only thing a cheap computer featuring an AMD processor can do is help AMD's sales.

          You mention an extremely screwed-up comparison. The only one who should be worried is the place selling those systems, if they think they can charge $1000 for that.

          The point of this article, however, had nothing to do with what you posted about. The article was not about some store you know of that is screwing over its customers.

          What you said doesn't even make sense. AMD is generally cheaper in the XP line of chips. Therefore a comparably matched AMD system should cost less than an Intel one. If the store wanted to price gouge, they would simply match the price of the comparable Intel. AMD is not a name that sells. If you worked in retail you would know the retards with money only know they want an Intel Pentium 4 because the commercials told them so. If the store was looking to price gouge in the manner you described, it would be with the brand-name Intel systems.

          None of my local computer stores (big and small) practice what you describe. They don't charge you large amounts of cash for nothing like you describe, and the Intel systems are going to be more expensive or the same and have comparable options.

            • Wintermane
            • 15 years ago

            Um, you're talking about one bit of a post I made about something someone else said, not about the article that started this thread in the first place.

            To wit: why do people still buy Celeron systems? It's because Intel did its homework and made sure a ton of cheap puters-in-a-box would come out that would, in fact, play the games mainstream players play WELL.

            • Convert
            • 15 years ago

            Lol, your gibberish still doesn’t make sense.

            • Wintermane
            • 15 years ago

            Normally I'd try a few more times to explain such simple things to people like you, but in your case I'll make an exception.

            Now on to more interesting news. Consider the fact that AMD made a profit of only a few dozen million last time, but they have a debt of over a billion… Can we hope AMD will make enough profit these next 2 quarters to pay off any significant part of that debt, and if not, what will happen to AMD when they are both in debt and fighting with an Intel that isn't being goofy and dumb?

            I don't want to lose AMD even though I have yet to own a system with one; without 'em I doubt very much I would have gotten the puter I managed to get for the price I did.

            What the hell is gonna happen when, with all the screwups and out-and-out cluster muffs Intel has done for soo long, AMD still just made a few million? What happens when Intel gets it right? Sooner or later they have to; after all, they can't screw up all the time.

            • Convert
            • 15 years ago

            You are dense. You're gibbering on about "Um, you're talking about one bit of a post I made about something someone else said, not about the article that started this thread in the first place."

            As if somehow I am the one at fault for spewing gibberish not even related to the topic. Sorry bud, but you started it; I was just there to correct your total BS. If you can't handle being corrected then I don't know what to tell you.

            Your original post was COMPLETE BS. Oops, I meant: ALL of your posts were COMPLETE BS. Excluding the debt one, but *GASP* it still has nothing to do with the topic at hand.

            As for the debt thing, time will tell. AMD has managed to stay in the game this long; an infusion of profits can only help them stay a little longer.

    • Athlonman
    • 15 years ago

    Also of note: he said this is the whole system measured at the WALL (110-120V AC), assuming he's in the US, or (210-220V AC) if overseas. It would be helpful to know the voltage he's running at the wall. If he's on 110, then you guys overseas will show significantly less power usage at 220V. The other numbers for max power usage from AMD's specs, I believe, are at core voltage, which is DC.

      • liquidsquid
      • 15 years ago

      Uh, time to review your electronics class. I have a feeling you are confusing it with current consumption.

      Power == power, no matter the source voltage. There are only slight differences due to power supply efficiencies at different source voltages. If my house could use less power, and thus get a lower bill, just by jacking up the voltage, I'd be in!!!
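
      A tiny sketch of the point, with the system draw as an assumed constant: the same wattage just means different current at different mains voltages.

          # Same power, different current: I = P / V
          p_system = 150.0                # assumed constant system draw, watts
          for v_mains in (115.0, 230.0):  # roughly US vs. European mains
              print(f"{v_mains:.0f} V mains -> {p_system / v_mains:.2f} A for the same {p_system:.0f} W")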

        • Athlonman
        • 15 years ago

        liquidsquid,

        Yup, you're right, brain fart…

    • indeego
    • 15 years ago

    edit: never mind, I see below that you DO work in 85-degree temps. Just wow. :)

    • Logan[TeamX]
    • 15 years ago

    Excellent. Winchester S939 3200+ here we come! :D

    Actually, a Winchester 3500+ at 2.2GHz sounds enterprising. And dual cores... think of the dual cores on this sweet process.

    • Hattig
    • 15 years ago

    Uh oh, Slashdot :p

      • derFunkenstein
      • 15 years ago

      quick! put up a folding update!

    • stmok
    • 15 years ago

    I don't think (for Intel) it's the actual 90nm process causing the heat issues for the Prescotts/Noconas… it's actually the design of the P4 itself that's the problem.

    If you look at the Athlon 64 and Pentium M (full load with power saving features OFF), both 90nm versions of these CPUs run FAR cooler than any P4/Celeron/Xeon could (they're also cooler than their older 130nm brothers)… So it's not just about the manufacturing process; it's architecture too. Something to think about.

    And BTW, the P-III's highest is just over 30W.
    (P-III-S 1.4GHz and Celeron "Tualatin" 1.4GHz)

    Underclock them and run them at 1.054V like I do and you've got yourself a fanless (passively cooled) 1GHz CPU that craps on VIA's C3. Great for a BSD/Linux PC or for typical office stuff. As if I need 3GHz+ to surf the web! :)

    • Dposcorp
    • 15 years ago

    Can we say "real men can perform successful die shrinks"?

    • Gholam
    • 15 years ago

    I have a strong suspicion that early reports of 90nm Athlon 64s running hot were due to old BIOSes not recognizing the new CPU and feeding it 1.5V instead of 1.4V. It might not account for the whole difference, but likely a good part of it.

      • Proesterchen
      • 15 years ago

      /[

      • IntelMole
      • 15 years ago

      That would account for roughly 15% of the difference…

    • Chrispy_
    • 15 years ago

    Ouch :( 60W extra between idle and load?

    Folding 24/7 is equivalent to leaving a lightbulb on all the time.

    That costs me …*calculator tapping noises*

    Holy fkucing hsit, it’s 63 quid a year!!! ($115)

      • Koly
      • 15 years ago

      Yeah, I hear the devil, he says something like: “Stop folding and I’ll give you a new motherboard every year…” ;o)

      • Hattig
      • 15 years ago

      You really should change your electricity supplier if 60W 24/7 costs you £63 a year! Maybe buy some energy saving lightbulbs to compensate!

        • rmstow
        • 15 years ago

        Where I am, the math works out to about $43 (Canadian) to run a 60-watt bulb continuously for one year. Call it 20 British pounds.

        Still a very significant cost – it amounts to foregoing 24 cans of most brands of beer. Run "folding" or buy beer – not a hard decision, is it? ;)
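
        For reference, a minimal sketch of the cost arithmetic behind these figures, with the electricity rates as rough assumptions (not quoted prices):

            # Annual cost of a constant 60 W draw; the per-kWh rates are assumed, circa-2004 ballpark figures
            watts = 60.0
            kwh_per_year = watts / 1000.0 * 24 * 365   # ~525.6 kWh
            for label, rate in (("UK, ~0.12 GBP/kWh", 0.12), ("Canada, ~0.08 CAD/kWh", 0.08)):
                print(f"{label}: about {kwh_per_year * rate:.0f} per year for {kwh_per_year:.0f} kWh")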

          • muyuubyou
          • 15 years ago

          I can't think of any harder decision right now ;)

    • muyuubyou
    • 15 years ago

    y[

    • derFunkenstein
    • 15 years ago

    yes, but how does it overclock?

      • Fearless
      • 15 years ago

      I'm curious about this as well… somewhere I think I read something claiming it didn't overclock very well at all… I'm wondering if that turns out to be as true as the whole "runs hotter" FUD that came out earlier.

    • ElderDruid
    • 15 years ago

    I could not work in an office with an ambient temp of 85F. I’d be sweating my butt off.

      • Damage
      • 15 years ago

      Indeed. I have very little butt left.

    • slymaster
    • 15 years ago

    The power delta under full load is shocking. There is a difference of more than 80 W between the 90nm A64 system and the Prescott system at full load.

    Assuming the power supply runs at 75% efficiency, that is still a 60 W difference in actual consumption, although the full 80 W shows up on the power bill.

    That will cost some significant money for a folding farm running 24/7.

      • Hattig
      • 15 years ago

      Quite clearly the P4 is well over 100W consumption under full load. Looking at the idle graphs, if the A64 is 20W at idle, and the Intel chipset uses up 10W more than the AMD one (just trying to be accommodating about Intel and motherboard differences, and doesn't DDR2 use /less/ power than DDR?), the P4 idles at between 40W and 50W of consumption/leakage. Add the 60W that full load adds on, and you're over 100W.

      Which is why there are lots of reports of issues with the 90nm Nocona at 3.6GHz throttling so badly under load that you'd be better off with a 2.4GHz Northwood.

      Heat is what is stopping Intel from shipping higher-speed processors at the moment. They had better hope that their latest stepping has adjusted things so that they can get a couple more speed grades out the door to match AMD's upcoming releases.

    • Xspringe
    • 15 years ago

    Does anyone know when we’ll be seeing these in large numbers in the retail channels? And what about the opterons?

    • slymaster
    • 15 years ago

    This would seem to be good news for AMD. I am glad someone finally measured the power consumption, and not just the temperature of the CPU.

    I am surprised that AMD is not hyping this to the press. It has been 7 or 8 days now that people have been talking about how hot the 90nm A64s run, but with no real measurement of consumption (see the news on TR from Sept. 24).

    The fact that there seems to be a significant power reduction is very positive.

      • muyuubyou
      • 15 years ago

      They don’t want to swallow those 130nm 3500+ they have in the market already. They would have to price them differently (making it confusing for the home user) or keep the 90nm units in stock (and they can produce cheaper 90nm chips, assuming they have decent yields).

    • Hattig
    • 15 years ago

    Looking at that data, would I be mad in assuming that a 90nm 3500+ uses around 23W in idle mode?

    Assuming power supply is 75% efficient:

    112W * 0.75 = 84W getting to system
    179W * 0.75 = 134W (130nm under load, near TDP of 89W, let’s assume 84W)

    134W – 84W = 58W Mobo, Gfx, IDE, etc power consumption

    84W – 58W = 26W
    26W * 0.9 (motherboard VRM efficiency) = 23W

    I suppose that the rest of the system's power usage also drops in idle mode as well.

    Yes, these figures are extremely dodgy and vague and aren’t worth much more than the speculation they are.
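
    The same chain of estimates, written out as a quick sketch so the arithmetic can be checked (the wall readings, efficiencies, and the 84W CPU figure are the assumptions from the post above, not official specs):

        # Rough idle-power estimate for the 90nm 3500+, using the post's assumptions
        psu_eff = 0.75            # assumed power supply efficiency
        vrm_eff = 0.90            # assumed motherboard VRM efficiency
        wall_idle_90nm = 112.0    # wall reading, 90nm system at idle (W)
        wall_load_130nm = 179.0   # wall reading, 130nm system under load (W)
        cpu_load_130nm = 84.0     # assumed 130nm CPU power under load (W)

        dc_idle_90nm = wall_idle_90nm * psu_eff     # power actually reaching the system at idle
        dc_load_130nm = wall_load_130nm * psu_eff   # power reaching the system under load
        rest_of_system = dc_load_130nm - cpu_load_130nm     # mobo, gfx, drives, etc.
        cpu_idle_90nm = (dc_idle_90nm - rest_of_system) * vrm_eff

        print(f"rest of system: ~{rest_of_system:.0f} W, 90nm CPU at idle: ~{cpu_idle_90nm:.0f} W")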

      • Kylep
      • 15 years ago

      Since when does 134 minus 84 equal 58???
      Other than that, I pretty much agree with the method, and with the parameters (i.e. the efficiency rates).

      Impressive result if that can be generalized to other 90nm Athlon 64s (the question being: is this sample really representative of the others?)

        • Hattig
        • 15 years ago

        err, yeah, oops!

        Well, maybe because the 3500+ tested was in fact an underclocked 3800+, it was using less power, say 76W under load…

        * hehe for fixing figures to justify the result *

          • Koly
          • 15 years ago

          Heh, I can easily justify it for you :o) The 3500+ will use nowhere near 89W; this is a number for the whole family, the upcoming FX-55 included. In reality it will be more like 70W, see my post #24. And if you click on the silentpcreview link in my post #28, you'll see that a 3400+ is about the same in power consumption as a Barton 3200+ (i.e. <75W), and the 3500+ has half the cache.

            • muyuubyou
            • 15 years ago

            Yes, that makes sense, but the fact is they rate them at 89W, one by one, in their specs.

            http://www.amd.com/us-en/assets/content_type/white_papers_and_tech_docs/30430.pdf, pages 10 to 13 (tables)

            • Hattig
            • 15 years ago

            89W is a family TDP though, it has no relation to the actual max power consumed by the processors.

            I was assuming that it was getting close to the TDP because the family has been stuck at 2.4GHz for a while now (3800+) and 2.6GHz is only just coming. Of course, that could be a design issue, not a process/heat issue.

            • Koly
            • 15 years ago

            Yes, I know that sheet very well, extremely informative. A CPU family TDP makes good sense for AMD; it makes MB manufacturers happy. They can design the MB and be sure that it is compatible with any CPU from the family, upcoming parts included. When you read the article at the silentpcreview link in my post #28, you'll see tons of arguments and a power consumption measurement with an A64 2800+, 3400+, FX-53, a couple of Bartons and several P4s. The A64 2800+ is by faaaar the most efficient.

            BTW, I am very curious to see the update of the sheet with the new 90nm D0 stepping included. I don't expect the 89W TDP at maximum speed to change, but it would be very interesting to see what the TDPs are at lower speeds and voltages. If they were lower, it would be an official confirmation that the 90nm process is more efficient.

    • Hattig
    • 15 years ago

    Was that at stock voltage (1.4V)?

    How does the processor do running undervolted at stock clock, and how does that affect power consumption?

    Was Cool ‘n’ Quiet enabled for the Idle power consumption tests?

    Most work PCs are left on overnight, i.e., they are on 24/7. A 40W difference even at idle speed translates into around 350kWh a year in operating costs, plus however much A/C power is needed to extract that extra heat as well.
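
    A quick check of that figure (the 40 W delta is from the post above; the rest is plain arithmetic):

        # Energy difference from a constant 40 W gap, running 24/7 for a year
        delta_watts = 40.0
        hours_per_year = 24 * 365
        kwh_per_year = delta_watts * hours_per_year / 1000.0   # ~350 kWh
        print(f"~{kwh_per_year:.0f} kWh per year, per machine")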

    • Gholam
    • 15 years ago

    #24, you’re quite optimistic if you consider 80% PSU efficiency “typical”. Manufacturers might write a lot of different figures, but the fact remains that only the very best PSUs pass 80% efficiency, and then only in a narrow load range they are optimized for – above or below that range, efficiency drops off. For example, Antec Phantom fanless PSU pushes 88% efficiency at 300W load, but only 76% at 90W and 81% at 150W. Keep in mind that this is the most efficient PSU that money can buy – a lot of money, as it retails for over $200. A more “typical” unit, such as Vantec Stealth 420W PSU maintains 69% efficiency at 370W load, and 72% at 90W load.

      • HiggsBoson
      • 15 years ago

      Agreed. The efficiency ratings for PSUs, while generally accurate for a reputable brand, only hold over a certain operating range; IIRC it's about 80% of the rating. Which means everybody overspec-ing 500W PSUs for their computers is probably running at really horrible efficiencies. Whether or not you care about this is another story; it's certainly possible that there are legitimate reasons why you might want to overspec a PSU, of course.

      • Koly
      • 15 years ago

      You are right, in fact I calculated with 70% efficiency.

    • Samlind
    • 15 years ago

    What does this say about AMD's .09 micron process vs. Intel's? Just as good? Better? Any process gurus out there care to comment?

      • muyuubyou
      • 15 years ago

      So far it looks like AMD's 90nm process runs circles around Intel's.

        • Hattig
        • 15 years ago

        No. Intel has a fine 90nm process. They sadly decided that instead of shrinking a proven core (Northwood) down to 90nm, they would create a behemoth of a processor with tens of millions more transistors. The per-transistor power usage dropped for Intel like it has for AMD, and Dothan shows that the 90nm process isn't terrible, although maybe more leaky than AMD's (I haven't seen the figures, though). I expect that if Intel had dumped Prescott like it eventually dumped Tejas, and gone with a simple die shrink, they'd be able to boast 4GHz processors on the shelves *now*.

        OTOH Intel were 6 months late with their 90nm, instead of being a year ahead of AMD, they were merely 6 months ahead in the end.

          • muyuubyou
          • 15 years ago

          Well, I can only judge the most apples-to-apples comparison we have now, the P-4 3.6GHz 90nm against the A64 3500+ 90nm as in the chart. Whether the folks at Intel shot themselves in the foot or not, I don't know, and desktop chip buyers don't care.

          But you're right, we don't know exactly how the *processes* themselves compare.

            • DaveJB
            • 15 years ago

            I’d say Intel’s is a BIT better, owing to the fact that Dothan was able to double its L2 cache while still having largely the same power requirements as Banias. Then again, Intel supposedly improved Dothan’s power-saving tech, so take it with a grain of salt.

            • Lucky Jack Aubrey
            • 15 years ago

            Just at a guess, I’d say there’s nothing wrong with Intel’s 90nm process. The problem seems to me to reside in the Prescott architecture. If true, can you imagine the heat problems Prescott would have if it were based on a 130nm process? Yikes!

            • random gerbil
            • 15 years ago

            I would think it would have more of an issue with power consumption, not heat. Take the chart above, for example: we see power consumption decline with the 90nm shrink; however, as is indicated by Damage, heat increased with the 90nm shrink. Considering that, I would think the Prescott would have fewer heat problems; however, the power consumption and leakage would probably be out the wazoo.

            • muyuubyou
            • 15 years ago

            Where did Damage say the heat increased with the 90nm shrink?

            y[

            • random gerbil
            • 15 years ago

            Oops, read that one wrong… coulda sworn it said the reverse.

            • fisd123
            • 15 years ago

            If you want to know where the extra heat comes from with Prescott, have a gander at its transistor count vs. the A64's.

            It's the cache that causes the heartache…! (I work for Intel, used to be in Device too :) )

            • Chryx
            • 15 years ago

            You’re saying it’s an extra 512KB of L2 cache over Northwood that causes the heat issue?

            Not the massively redesigned logic?

            • robg1701
            • 15 years ago

            Cache uses comparatively little power compared to logic, despite using many more transistors… go check the power consumption of the 130nm large-cache Xeons versus the Northwood P4; not a huge difference for a massive increase in transistor count.

            That said, there is more logic in Prescott than in Northwood also.

            • fisd123
            • 15 years ago

            Xeon vs. Prescott is premium vs. desktop; they are two separate product lines, and dies that would pass as Prescotts will fail as Xeons. That's why there's a price difference :)

            The expectation is that Xeons will run 24×7, so the quality test screens are brutal; hence you tend to get very low-leakage parts.

            • indeego
            • 15 years ago

            I know some people at Intel Hillsboro. They want out bad. :)

            • DaveJB
            • 15 years ago

            If the cache was the problem, Dothan should be a flamethrower as well, and Intel wouldn’t even be thinking of releasing Irwindale.

            • fisd123
            • 15 years ago

            Wellllll, this is like any interesting problem. There isn't really any single root cause; rather, there is an overlap between several different things:

            – A cache increase, more transistors
            – A long pipeline
            – Changes to branch prediction
            – A different transistor revision
            – A much higher density cache; off the top of my head, I think we doubled the number of devices per mm^2 in the cache between 130nm and 90nm

            Probably the biggest factor is still the relative immaturity of 90nm when Prescott was released; I'm sure leakage will improve as 90nm matures.

            • HiggsBoson
            • 15 years ago

            Leakage current is more a symptom of the die shrink than anything else.

    • kvndoom
    • 15 years ago

    Damage, did AMD send you this particular CPU, or was it picked up at retail? I’d hate to think you were given a hand-picked CPU to review that wasn’t really indicative of actual power consumption. Kinda like reviewers always get the best overclockers, [sarcasm] for reasons unknown [/sarcasm].

    • Koly
    • 15 years ago

    Wow, this is fantastic news. The numbers are pretty unbelievable. If I take the typical 70-80% efficiency of the power supply into account and attribute all of the difference in power consumption to the CPUs, which is a good guess IMHO, it looks like this:

    Prescott – Newcastle diff. – cca 40W under load
    Newcastle – Winchester diff. – cca 20W under load

    The power consumption under load then is something like this:

    Prescott cca 110W
    Newcastle cca 70W
    Winchester cca 50W

    50W is unbelievable. As you are guaranteed (because of Cool and Quite) to be able to run the chip at lower speeds with lower voltage (probably 2.0GHz at 1.3V and 1.8GHz at 1.2V), it is certain that it would run passively cooled at 1.8GHz, maybe at 2.0GHz too.
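
    To see how a wall-side gap maps to a CPU-side gap at a given supply efficiency, a minimal sketch (the wall deltas here are illustrative placeholders, not measured values from the chart):

        # Convert a wall-power delta into a rough DC-side (CPU) delta
        def dc_delta(wall_delta_w, psu_eff=0.70):
            return wall_delta_w * psu_eff

        print(dc_delta(57))  # ~40 W, roughly the Prescott vs. Newcastle gap estimated above
        print(dc_delta(28))  # ~20 W, roughly the Newcastle vs. Winchester gap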

    • ifOow
    • 15 years ago

    Indeed, sounds like a terrible waste to me: 112 watts idle, WTF, and 151 for Intel. If you calculate it, you would spend about 15 euros a month (in the Netherlands) on the power needed just to run your processor at idle. Does anyone know of a benchmark comparing power consumption across all Intel and AMD processors?

    • Gholam
    • 15 years ago

    #19, these tests stressed CPU and RAM only, not the video card, which is running pretty much at idle (2D only) the whole time. Running 3dmark05 or something equally graphics-intensive would probably add another 50-60 watts to the figure.

    • crose
    • 15 years ago

    Did Intel produce a 3.4GHz Northwood? Wonder how it would compare to these three chips…

      • Proesterchen
      • 15 years ago

      Yes, they did (and do), though for S478 only. But there’s Northwood’s big brother Gallatin (P4EE) available for both Sockets, which might be interesting.

    • muyuubyou
    • 15 years ago

    Wow. I guess throttling should be enabled on desktop chips too. 112 watts at idle is a pretty ridiculous waste if you ask me.

    112 watts at idle?!?! A Pentium III 800 at full blast uses 20.8 watts, and a Low Voltage P-III 933 uses 11.61 watts maximum. For the home user it's a pretty bad idea to upgrade to the new generation of CPUs. It would add significantly to the electricity bill.

      • Dr. Fred
      • 15 years ago

      That’s 112W for the whole system, not just the processor.

      • atryus28
      • 15 years ago

      I think you're forgetting this is the entire system. New video cards, HDs and CPUs use more juice than before.

        • muyuubyou
        • 15 years ago

        I think you’re forgetting this is at idle. How much do new video cards and HDs suck at idle?

        Check the graph and see the difference between the three systems.

          • Anomymous Gerbil
          • 15 years ago

          Idle, not iddle.

          • muyuubyou
          • 15 years ago

          In response to #21 and following this discussion (please use "reply" next time).

          http://users.erols.com/chare/elec.htm
          http://www.pcsilent.de/en/tips/cpu.asp
          http://www.amd.com/us-en/Processors/TechnicalResources/0,,30_182_739_7203,00.html (third PDF)

          Athlon 64s 2800+ and up are rated at 89W. A Pentium 4 3.4GHz at idle uses

          • Koly
          • 15 years ago

          The video cards suck quite a bit of power at idle too, the 6800GT about 25W:

          http://www.xbitlabs.com/articles/video/display/ati-vs-nv-power_3.html

          The A64 2800+'s power consumption is much less than 89W; that number is the maximum rating for the whole family, future processors included. The FX-53 can come closer, but not really; there will still be an FX-55 at 2.6GHz. Check out this very informative article:

          http://www.silentpcreview.com/article169-page1.html

          As you can see, the A64 2800+ sucks less power than a Barton 2500+. So the big differences don't mean that the CPUs suck all the power; they mean that the 90nm A64 looks to be very efficient.

            • muyuubyou
            • 15 years ago

            Yes, I'm not disputing that. That 89W is a max (see the document from AMD themselves that I linked above). Still, while it's great for me, since I can't live with a P-III when I'm compiling (without wasting precious work hours) and the A64 is hands down a better choice than the P4, I don't see how a home user can justify the difference against a P-III 800 using around 20W maximum (or an LV P-III 933 using 11W).

            Mind you, most users outside the enthusiast world are stuck between P-II and P-III right now for this very reason: they don’t need more power except for memory-whoring operating systems.

            • HiggsBoson
            • 15 years ago

            I can say from very recent personal experience that a Slot1 PII-400 with only 128MB of memory running Win2K is still surprisingly usable. (And damn near silent except for the HDDs.)

            • Koly
            • 15 years ago

            I’ve got a solution for you, I am sure you have heard about “Cool and Quite”. If it is turned on, the A64 runs at 1GHz 1.1V with 22W maximum TDP (in case of the CG stepping). It will run circles around any P3. It is enough power for almost any general task. When you really need more it jumps back to full speed (or something between according to load). IMHO this is worth more than any “Hyperthreading” or “64bits” or whatever.

            • muyuubyou
            • 15 years ago

            Thanks, but I’m already running that, mate. Care to read my posts? I can tell you my gf wouldn’t care less about the extra speed the A64 has to offer when it requires a full upgrade and sucks more power.

            Also, it isn’t quite “cool’n’quite”, but “Cool’n’Quiet.”

            • Koly
            • 15 years ago

            Sorry, but where did you mention it? And if you use it, then why do you say:

            y[

            • muyuubyou
            • 15 years ago

            QuiET, not QuiTE.

            The P-III is 20W under full load, and a lot lower at idle, while providing enough processing power for browsing the web and reading email. I was just pointing that out. For those who need the extra power, the Athlon 64 is indeed the way to go.

            • Koly
            • 15 years ago

            Uhh, where? ;o) Yeah, sorry for the stupid mistake, I do it all the time.

            And of course the 22W TDP for A64 is meant under full load at 1GHz 1.1V. You are right, if you have a fast PIII and all you do is browsing, there is no reason to upgrade the CPU. Buying a faster HD is a much better choice.

            • hmmm
            • 15 years ago

            Did you forget to take your pills this morning? Pointing out a typo once is fine, but harping on it is a little juvenile.

            • muyuubyou
            • 15 years ago

            There were other issues under discussion in the meanwhile. I had a good weekend and it’s my 28th birthday. What can I say?

            • vortigern_red
            • 15 years ago

            "I had a good weekend and it's my 28th birthday. What can I say?"

            "Oh, f*ck, I can't believe I'm nearly thirty." At least that's what I thought on my 28th :)

            • RoninGyrbill
            • 15 years ago

            Just a little friendly EU rivalry, I think.

      • IntelMole
      • 15 years ago

      I’m gonna assume this is possible:

      Take one A64. Lower the multiplier to 1GHz. Lower the voltage as far as is possible.

      Watch it cream the PIII in *everything*…

      End of Discussion,
      -Mole

        • muyuubyou
        • 15 years ago

        Don't think it would beat an LV P-III 933 (11.6 watts MAX), but anyway that wasn't my point. For all the extra cash, you get little to

          • dragmor
          • 15 years ago

          http://www.amd.com/us-en/Processors/SellAMDProducts/0,,30_177_863_10861,00.html

          The AMD Geode NX 1500@6W processor operates at 1GHz and comes in Socket A.

            • muyuubyou
            • 15 years ago

            Interesting. Someone should benchmark that, and if it's better than an LV P3 or an Efficeon (not sure about Nehemiah), make it available to people who won't ever need any better. They should make some new NEC Green PC, but without the horrible price/performance ratio ;)

          • IntelMole
          • 15 years ago

          oops, double post

          • IntelMole
          • 15 years ago

          Which is why I know I need the extra juice :P

          You could probably clock it down to a speed grade below the P3 and it would still beat it. Hell, the older Athlons certainly did, and this is an improved version of those…

          It might not beat an LV P3 for heat output, but at that low a wattage, who cares?
          -Mole

          • flip-mode
          • 15 years ago

          y[

    • Division 3
    • 15 years ago

    SOI

    • amphibem
    • 15 years ago

    Also, this is just another reason to buy AMD over Intel. It's not something that people think about often, but saving almost 100W on a similar system would add up to a large power saving over the life of the PC.

    • HiggsBoson
    • 15 years ago

    Out of curiosity what are the core voltages on these parts?

      • Proesterchen
      • 15 years ago

      1.4V

      And btw, I don't think it's very surprising to see Winchester using less energy than Clawhammer, given it's a simple die shrink (and far from the transistor monster they call Prescott).

    • amphibem
    • 15 years ago

    So does this mean they will be transferring all their A64 chips (including S754) to 90nm, as well as releasing the new CPUs with the new die size?

      • Convert
      • 15 years ago

      If yields permit.

    • wagsbags
    • 15 years ago

    Wow if this keeps up I think it will really help AMD.

    • Ryu Connor
    • 15 years ago

    q[

      • Convert
      • 15 years ago

      There wasn’t an increase in heat/power consumption, so things are going well for them it seems in that regard. At least that’s what I think he was trying to say.

      • Decelerate
      • 15 years ago

      y[

    • kvndoom
    • 15 years ago

    Congrats!

    >10 now…

    • eckslax
    • 15 years ago

    Yeah, I heard that they were going to run hotter too. This is definitely good news. :)

    Wow, I never realized that the Prescott consumed more than 200 watts under load. O_o

      • HiggsBoson
      • 15 years ago

      It’s not Prescott… Prescott’s max TDP is round about 100W only. The other 100W is from the rest of the system.

        • Ardrid
        • 15 years ago

        That’s actually wrong. Prescott’s TDP is 103W. Keep in mind that’s typical, not max.

          • HiggsBoson
          • 15 years ago

          I think you misunderstood… I meant the spec'd TDP is round about 100W max. Although, come to think of it, there's another speed grade up in the low teens, I think. I was definitely not trying to get into the can of worms that is: "What do Intel's 'TDP' ratings mean vs. AMD's?"

          Edit: Regardless the main point was that it’s not just the power consumed by the CPU that’s being measured here.

        • eckslax
        • 15 years ago

        LOL, thanks. I knew something was up. ;)

    • Spotpuff
    • 15 years ago

    Er, weren’t these chips supposed to run hotter than their 130nm counterparts?

    If this info is correct it looks like AMD isn’t having any problems with their transition to 90nm parts, which is definitely a good thing.

    Looks good.

      • Chrispy_
      • 15 years ago

      They run hotter than the 130nm counterparts because although they use less power, they have a smaller die with which to dissipate that heat to the heat-spreader on top of the core.

      Less power-hungry but more difficult to cool, in a nutshell.

        • Koly
        • 15 years ago

        The big news is that in this test they ran cooler.
