Last week, we heard launch clock speeds for AMD’s upcoming quad-core processors would range from 2.2GHz to 2.6GHz, and that Socket AM2 quad-core chips would only run as fast as 2.4GHz. The Inquirer now reports AMD has had some nice surprises with its latest B0 stepping of the quad-core silicon. In fact, folks at AMD’s campus in Austin were so pleasantly surprised they were reportedly “dancing in the aisles” from the results of the latest stepping.
The Inq says the new silicon allows for a “massive gain in frequency” of as much as a half gigahertz, and AMD fans can look forward to five “Phenom” variants at launch: quad-core models clocked at 2.6, 2.7, and 2.9GHz, as well as one dual-core model clocked at 2.7GHz and a Quad FX-bound Socket F chip with a stock speed of 2.8GHz. The memory controller in B0 chips is also said to have come in “way better than expected,” although The Inq doesn’t elaborate on the subject. AMD’s new microarchitecture is expected to deliver much higher performance per clock than its current CPUs, so the higher clock frequencies could pay big performance dividends.

$535 is not an insignificant amount of money by any means. Whether it's worth it to you is a matter of your priorities. Or in some cases, your family's priorities.
Nice, you had to reach back 7-8 years to find a memorable problem with an Intel chipset. Some brief reading tells me it was mainly a problem with 3 RIMMs, and then with trying to jury-rig in SDRAM via a translator chip. Not very relevant to those in the know, though, because anyone with a brain building their own PC at the time was using 440BX-based boards, since RDRAM was stupidly expensive. Compare that to how many problems VIA and NV chipsets have had over the last 7-8 years. Of course there is always the ‘one example,’ but if that’s all there is compared to the others, on a chipset which wasn’t all that important at the time anyway, well… it’s ok if you prefer one company over another, but at least keep things in perspective 🙂
$130 on an Athlon XP 2500 is the most I’ve ever spent on a CPU; $80 for the mobile version of the same chip, $120 for the A64-3000 after that, $75 for a wonderful little Sempron64, and finally $105 for the X2 3600 I’m using right now. Actually, now that I think about it, a great many years ago I paid $220 for an Athlon 800, and I forget what I paid for the Athlon 1400 that replaced it.
I’d rather spend the savings from skipping the premium on that extra 5% in CPU performance on a bigger hard drive, a home mortgage or a new car personally 😛
Well ok, I’ll confess….I wanna waste my money on a dune buggy next year, I love cruising by the beachside, flying recklessly over sand dunes at 30mph.
I don’t even go past $150 on CPUs 😛 I mean who cares for the next 5-10% performance gain on top of the already solid 100% baseline, even for “only” an extra $50 😀 Don’t wanna even imagine the performance/buck ratio on a $500 one, at some point you feel like they’re taking advantage of your geekiness, so I have to draw the line somewhere 😛
But then again, to each his own.
$535 ain’t that much for a CPU when you’re an adult with a decent job.
Process variations are the main enemy of high-clocked designs, and even Intel can’t do much about it. When your transistor parameters vary significantly across the chip, it is hard to hit a high clock rate and contain leakage currents at the same time. At smaller manufacturing process nodes the problem only gets worse, so I expect clock rates to remain roughly the same for the foreseeable future.
Anyway, I think the debate about high clocks vs. parallelism is over. Now the question is what kind of parallelism you need for which applications (simple vs. complex cores, more cores vs. multithreading, cache hierarchy vs. local memory, symmetric vs. asymmetric designs, and so on).
You do know Dell sells AMD based systems right?
Even if K10 in all its sizes is quite a bit faster than a Conroe or Kentsfield, it’s going to take years before PC makers adopt AMD in bulk for better performance.
The bulk market is Intel’s bread and butter, still fat from the days when every school PC had “Intel Inside” P4s.
Even now, the average Joe barely knows Intel’s C2D chips are the current thing to get, yet they’ve been out 8-9 months already.
AMD will need to produce a good chip then batten down the hatches because Intel will no doubt take the next swing.
With Penryn and then Nehalem coming next year AMD had better be working hard on K11.
The point is that if Intel’s product is not hopelessly bad in some respect (as was the case with NetBurst power dissipation), Intel can always adjust prices and compete on price-performance in the range below $300. If Barcelona is really very good, AMD may indeed charge a premium for the top-of-the-range CPUs, but typically those halo products do not contribute much to overall sales figures (and given low yields on such high-end parts, they may not even be profitable to make).
In the unlikely case that AMD CPUs prove to be superior in all three market segments (mobile, desktop, and server), Intel indeed may not be able to win the price war. But for now we are talking only about AMD regaining competitiveness in the server space (plus high-end desktops and workstations).
It is nevertheless true. AMD started as a second supplier because chip buyers asked for it, and AMD’s competitive position was always to a very large extent determined by legal factors.
Intel always had much higher margins and if necessary it always could drive AMD out of business. The problem is that it would cost Intel billions of dollars in lost profits and would inevitably result in a major court battle.
And even when Intel complained about tough competition it still remained much more profitable than AMD. Intel just chose to keep its margins rather than its market share.
3GHz * 4 cores * 2 SSE ops per cycle + fast mem controller == ray tracing heaven!
please jah, let it be so…
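A quick back-of-the-envelope sketch of that peak-throughput math, assuming the figures from the comment above (3GHz, 4 cores, 2 SSE ops per cycle per core) plus the usual 4 single-precision floats per 128-bit SSE op; these are rough assumptions, not AMD-confirmed specs:

```python
# Hypothetical peak throughput for a 3 GHz quad-core, assuming each core
# retires 2 SSE ops per cycle and each 128-bit SSE op covers 4 single-
# precision floats. All figures are assumptions for illustration only.
clock_hz = 3.0e9
cores = 4
sse_ops_per_cycle = 2
floats_per_sse_op = 4

sse_ops_per_sec = clock_hz * cores * sse_ops_per_cycle   # total SSE ops/s
peak_gflops = sse_ops_per_sec * floats_per_sse_op / 1e9  # single-precision peak

print(f"{sse_ops_per_sec / 1e9:.0f} billion SSE ops/s, "
      f"{peak_gflops:.0f} GFLOPS peak")  # prints "24 billion SSE ops/s, 96 GFLOPS peak"
```

Real ray-tracing workloads would land well below that theoretical peak, of course, since memory latency and branch-heavy traversal keep the SIMD units far from fully fed.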
Intel? Rock solid? *cough*i820*cough*
AMD bought ATi for a reason… and it wasn’t graphics…
He actually got himself banned? I was wondering why he was on dailytech
Very interesting, if true. I just recently said elsewhere that IPC and architectural ‘sexiness’ is only half the equation; max clock speed matters too for the OC’ing enthusiast. If these improvements are real, then things are really going to heat up in a good way. Now if AMD would only come out with an in-house chipset. The ones available for AMD don’t give me the same peace of mind as Intel’s rock-solid and problem-free chipsets.
We don’t know, because there are no quad-core Athlons right now, just a cobbled-together 2 x 2 core setup. Maybe there’s an 8-10% gain in going to a TRUE quad-core setup. Maybe there’s a 6-7% increase from the memory controller. Maybe the 800MHz clock advantage, onboard memory controller, revised IPC, etc. combined can in fact equate to a 40% increase in performance. Obviously, it’s gonna be tough to know, especially since in all reality a faster processor doesn’t help us with much in general: games are likely GPU-bound, and the OS isn’t gonna get much faster from a CPU upgrade (more from an HD upgrade). Scientific stuff and software renderers are gonna show the improvements, as will folding, IFF the clients are optimized for the new setups and quad core. Otherwise, there won’t be a significant increase for most people.
20% faster than Penryn would have to be 40-50% faster than the A64, and I am not sure the IPC is 50% higher per clock.
It’s not as good as you think. The chip sucked before because it was running very slowly compared to the dual core; they just managed to fix that. Very likely Intel will soon have a quad-core chip faster than it, and to keep from going totally belly-up AMD needed many months of strong leads and stronger margins. Instead, it’s very likely their margins will fall even more.
Um, I just want to see the chips actually hit the review benches and the shelves. I built my first Intel-based system in years to replace my mother’s aging machine. Putting a C2D in it was massive overkill for her, but it will probably be fine for her for the next 5 years.
My Windows box is primarily a gaming machine. With not much use for a dual-core system (yet), I have stuck by my trusty A64 3200+ and 6800GT. Sure, it is choking a little on Oblivion, but it plays just about everything else at pretty high settings and good frame rates. I figure that by the time games catch up to multi-core use, a quad core will be the way to go.
I have to say that a Q6600 at $535 is pretty damned tempting. But I might as well wait a couple of months and see what AMD has to offer as an alternative.
welcome back Shintai
Yeah, but you can always charge more for a superior product. If Barcelona is 20% faster than Penryn at a given power level, AMD can definitely charge more for Barcelona. And that price premium need not be just 20%: many customers may be willing to pay a very big price premium (in percentage terms) for a 20% gain in performance. Heck, the fact that AMD and Intel have always been able to sell “Extreme” and “FX” chips for twice as much as chips with 80-90% of the performance shows that. Some people’s time is worth a lot more money than others’.
The difference being, NetBurst’s performance was abysmal and it ran super hot. C2D isn’t a slouch by any means and runs decently cool.
If these chips are as good as AMD has been claiming, the “Intel will win with a price war” argument is questionable. The last time this happened, how many NetBurst chips were slashed to the bone, loaded with cache, and still hung on Intel’s bottom line like the burning-hot dingleberries they were? Intel keeps upping clock speed and enlarging cache as its superior engineering allows. All that cache takes up real estate that costs cash, so the next-gen Core stuff had better be ready, or it may end up being wall-to-wall benchmarks of Intel’s native 45nm processor. Interesting times.
Umm… AFAIK AM3 still has 940 pins. I don’t think there are enough pins left to do quad-channel RAM.
Even if the pins were available, it would probably be a board layout nightmare.
Excellent news, if it is in fact true.
Time will tell.
I think the memory controllers on dedicated parallel GPUs are night-and-day different from memory controllers on multi-component, multi-processor serialized buses. The brainpower couldn’t hurt, but I seriously doubt any of ATI’s memory experience applies to how the memory controller operates in a CPU.
I heard the R600 doesn’t have anything to do with AMD; it’s still an ATI card. But the next gen after R600 will be truly AMD, because R600 was still being made when AMD didn’t own ATI.
Also, when are these CPUs coming out?
Yeah, I like this whole “real competition” thing that’s been happening lately.
I hope it’s true, ’cause AMD really needs some extra legs to compete with current C2Ds and the upcoming Penryn.
I have to give AMD credit for one thing. The leaks have been few and far between on K10 and R600. I would expect to have seen far more unauthorized tests at this point.
In a sense, AMD is doomed no matter what. Theoretically, Intel can always start a price war and win it. The only difference is that if AMD’s products are competitive this price war will cost Intel more money.
There are probably two main reasons why intel is not trying to outright bankrupt AMD:
1) It may be considered illegal by courts (predatory pricing)
2) Chip buyers may want an alternative supplier
So Intel has to be careful: it only tries to marginalize AMD, not destroy it. Even now, AMD may in theory argue in its antitrust case that Intel’s pricing of Core Duo is predatory (since Intel’s gross margin in Q1 2007 is significantly lower than in Q1 2006).
So let’s not forget that there is a very complex game going on: AMD’s fate depends on CPU architecture, manufacturing, marketing, finances, legal issues, and a lot of other stuff. It is not as simple as who has the best enthusiast CPU.
I call bullsh1t!
OK, maybe too harshly put.. I’ll believe it when I see it. As always.
And yet, AMD said nothing in this article about performance.
Same company and same website making the same performance claims about said company.
I like AMD but man they need to push their products out the door. Stop talking about performance and start performing.
Oops, wrong reply!
Well, until we know how efficient these cores are, who cares about their clock rates?
Ditto; waiting to upgrade to quad-core myself. 2.9GHz is nice; I wonder how much OC’ing headroom AMD will have. Getting 3.5GHz out of an OC’ed C2D keeps me pretty happy; if AMD can match that… that’d be sweet.
If this is true it will be a huge deal.
I was definitely one of those running around saying that AMD was doomed, but I may have to reconsider.
Let it be so. I’m running a DS3 with a 6600 C2D which I love, my first Intel system since the 486-DX66 days. I’ve built quite a few AMD systems. The C2D was just too good to pass up, but I’d build another AMD system in a flash if the product was right. Without AMD, Intel would still totally suck IMHO.
Imagine the K10 with an internal 512-bit memory controller…<homer>mmmm….bandwidth…</homer>
Would be nice if it were true; waiting to upgrade to a quad-core box myself. Would like to have the choice of platforms.
There is a very interesting connection to ATI, however. The INQ article did say that the problem with the K8s was the memory controller, and that the memory-controller problem had been solved in the K10s. I wonder if some of ATI’s expertise in memory controllers hasn’t found its way into the K10s. Very interesting.
Uhhh… we’re talking CPUs here. What do graphics cards have to do with anything?
That being said, the employees were probably dancing in the aisles because they weren’t going to be losing their jobs. I really hope Barcelona puts a smack-down on Penryn, if only to keep AMD competitive.
R600 is gonna cream the competition too! Woohoo!