The days of casual overclocking are numbered

The days of overclocking for the casual PC enthusiast are numbered, at least if we define “casual enthusiast” as “a person who just wants to put together a PC and crank everything to 11.”

We've become more and more fenced in regarding the chips we can and can't tweak on the Intel side as the years go by, but efforts at product segmentation aside, the continued race to get more and more performance out of next-gen silicon may put the final nail in the coffin of casual overclocking's wizened form regardless of whose chip you choose. The practice might not die this year or even next year, but come back three to five years from now and it'd surprise me if we dirty casuals are still seriously tweaking multipliers and voltages in our motherboard firmware for anything but DRAM.

The horsemen of this particular apocalypse are already riding. Just look at leading-edge AMD Ryzen CPUs, Intel's first Core i9 mobile CPU (sorta), and Nvidia's Pascal graphics cards. The seals that have burst to herald their arrival come from the dwindling reserves of performance that microarchitectural improvements and modern lithography processes have left chip makers to tap.

As per-core performance improvements at the microarchitectural level have largely dried up, clock speeds have become a last resort for gaining demonstrable improvements from generation to generation for today's desktop CPUs. It's no longer going to be possible for companies to leave clock-speed margins on the table through imprecise or conservative characterization and binning practices—margins that give casual overclockers reason to tweak to begin with. Tomorrow's chips are going to get smarter and smarter about their own capabilities and exploit the vast majority of their potential through awareness of their own electrical and thermal limits, too.

AMD's Ryzen 7 2700X

AMD has long talked about improving the intelligence of its chips' on-die monitoring to lift unnecessarily coarse electrical and thermal restrictions on the dynamic-voltage-and-frequency-scaling curve of a particular piece of silicon. Its Precision Boost 2 and XFR 2 algorithms are the most advanced fruits of those efforts so far.

Put a sufficiently large liquid cooler on a Ryzen 7 2700X, for example, and that chip may boost all the way to 4 GHz under an all-core load. Even if you manage to eke out another 200 MHz or so of clock speed from such a chip in all-core workloads, you're only overclocking the chip 5% past what its own monitoring facilities allow for. That performance comes at the cost of higher voltages, higher power consumption, extra heat, and potentially dicier system stability, not to mention that the 2700X is designed to boost to 4.35 GHz on its own in single-core workloads. Giving up any of that single-core oomph hurts.
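
If you want to put numbers on that trade-off, here's a minimal back-of-the-envelope sketch in Python. The clock speeds are simply the figures quoted above, and the gain_pct helper is purely illustrative; any individual chip will land somewhere different.

```python
# Rough headroom math for a Ryzen 7 2700X, using the clock speeds quoted
# in the article as illustrative inputs only; real samples will vary.

def gain_pct(baseline_ghz: float, new_ghz: float) -> float:
    """Percentage change when moving from baseline_ghz to new_ghz."""
    return (new_ghz / baseline_ghz - 1) * 100

stock_all_core = 4.0      # ~4 GHz all-core boost under a big liquid cooler
manual_all_core = 4.2     # an optimistic +200 MHz manual all-core overclock
stock_single_core = 4.35  # rated single-core boost the manual OC gives up

print(f"All-core gain from manual OC: {gain_pct(stock_all_core, manual_all_core):+.1f}%")
print(f"Single-core change vs. stock: {gain_pct(stock_single_core, manual_all_core):+.1f}%")
# Roughly +5% all-core, and about -3.4% of single-core boost given up.
```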

When the difference between a Ryzen 5 2600 and a Ryzen 5 2600X is just $20 today, and a Ryzen 7 2700 sells for just $30 less than its X-marked counterpart, I have to wonder whether the tweaking is really worth the time. If one can throw $100 or so of coolant and copper at the problem to extract 95% of a chip's performance potential versus hours of poking, prodding, and testing for stability, well, I know what I'd rather be doing, to be honest. As I get older, I have less and less free time, and if it's down to gaming or not gaming, I'm going to do the thing that lets me game more.

The slim pickings of overclocking headroom for casual tweakers these days don't stop with CPUs, either. Nvidia's Pascal graphics cards enjoy a deadly effective dynamic-voltage-and-frequency-scaling algorithm of their own in GPU Boost 3.0. Grab a GeForce GTX 1080 Ti equipped with any massive air cooler or hybrid arrangement, for just one example, and you're already within single digits of the GP102 GPU's potential.

Corsair's Hydro GFX GTX 1080 Ti

We got just 6% higher clock speeds versus stock out of a massive air-cooled GTX 1080 Ti and about 7% higher clocks out of a liquid-cooled version of that card, all at the cost of substantially higher system power draw. I don't feel like the extra heat and noise generated that way is worth it unless you just enjoy chasing the highest possible benchmark numbers. That's a fine hobby in its own right, but single digits just aren't going to make me pursue them for their own sake these days.

The Intel Celeron 300A. Image: Querren via Wikimedia Commons; CC-BY-SA 3.0

Lest you think I'm being fatalistic here, there was a time—almost 20 years ago, to be exact—when ye olde Intel Celeron 300A with 128 KB of L2 cache could famously be tapped for a whopping 68% higher clock over its stock specifications with a good sample and the attentions of a casual enthusiast. The 300A sold for much lower prices than the chips it proceeded to outpace at those speeds, too. When we talk about casual overclocking, the Celeron 300A is perhaps the high-water mark for what made that kind of tweaking worth it.

Intel's Core i7-8700K

Sure, you might take a Core i7-8700K from its 4.3 GHz all-core speed to 5 GHz under non-AVX workloads, but that 16% of extra speed comes with roaring CPU fans and an exceedingly hot chip without some kind of thermal-interface-material-related surgery. You can bet that double-digit margin will rapidly shrink as soon as Intel releases a next-generation architecture with more intelligent Turbo Boost behavior that's not just tied to the number of active cores.

Turbo Boost 2.0 was introduced with Sandy Bridge chips all the way back in 2011, and the technology has only received a couple notable tweaks since then, like Turbo Boost Max 3.0 on Intel's high-end desktop platforms and the XFR-like Thermal Velocity Boost on the Core i9-8950HK. Like I've said, Precision Boost 2 and XFR 2 both show that there's more dynamic intelligence to be applied to CPU voltages and frequencies.

AMD, to its credit, is at least not working against casual overclockers' chances with TIM under its high-end chips' heat spreaders or by segmenting its product lines through locked and unlocked multipliers, but that regime may only last until better microarchitectures and process technology once again expose large amounts of clock-speed headroom. The company's lower-end APUs already feature TIM under the heat spreader, as well, limiting overclocking potential somewhat. More capable Precision Boost and XFR algorithms may ultimately become the primary means of setting AMD CPUs apart from one another on top of the TDP differences we've already come to expect.

 

As we run harder and harder into the limits of silicon, today's newly competitive CPU market will require all chip makers to squeeze every drop of performance they can out of their products at the factory to set apart their high-end products and motivate upgraders. We'll likely see similar sophistication from future graphics cards, too. Leaving hundreds of megahertz on the table doesn't make dollars or sense for chip makers, and casual overclockers likely will be left with thinner and thinner pickings to extract through manual tweaking. If the behavior of today's cutting-edge chips is any indication, however, we'll have more time to game and create. Perhaps the end of casual overclocking won't be entirely sad as a result.

Feature image: Querren via Wikimedia Commons; CC-BY-SA 3.0

Comments closed
    • DevilsCanyonSoul
    • 1 year ago

    This is a great article built on solid common sense.

    • gh32zT
    • 1 year ago

    I couldn’t agree more. People forget to factor in the value of their time when overclocking these days. I haven’t overclocked the PCs I’ve built since around 2005. It’s funny that you mention the Celeron 300A — that’s one of the two I always think of. I also overclocked my Pentium 233 to 292 MHz, a nice 25% overclock that was low-effort and instantly stable.

    Since then, I haven’t seen anything like those chips.

    • BorgOvermind
    • 1 year ago

    So basically the big ones will give us CPUs and GPUs already clocked to the limit. ‘Great’. They should know this is not a solution, but a delay of the inevitable. They will have to switch technology entirely at a point.

    • notinuse
    • 1 year ago

    My first overclock attempt was taking a Motorola 68000 from 7.16 MHz to 14 MHz. I grabbed a 28 MHz clock signal from the Agnus chip, and set up a D flip-flop as a binary divider to cut the frequency in half to provide 14 MHz. I didn’t have much success with that. My first successful overclock was taking a 25 MHz 68030 to 33 MHz, by desoldering the crystal oscillator, installing a socket, and plugging a new 33 MHz oscillator into the socket. That setup worked great.

    • slushpuppy007
    • 1 year ago

    Even after paying the extra cash for the top-model CPU or GPU that can auto-overclock itself, I would still dig around the internet to see if there’s a way to max out the auto-overclock and at least ensure your clock rate remains stable.

    Having decent cooling is a no-brainer, then open up the power limits on your CPU / GPU and see where it gets you. GPU BIOS flashes possibly needed to open it up properly.

    Like in the case of Ryzen 2, XFR can be “unlocked” (Power Limit Increase) and adjusting the Ref Clock Frequency by a few MHz can give some additional boost speeds under single and multi threaded workloads.

    https://www.youtube.com/watch?v=S0mR4IoNWkQ

    So yeah, maybe the Casual Overclocker will be downgraded to the Casual Tweaker in a few years, but I do think one should investigate your particular CPU or GPU to find its best possible "auto settings", which may just make you feel like the enthusiast you think you are 😉

    • End User
    • 1 year ago

    I’ve kept my 1800X and my 1950X at stock clocks.

    As a PC gamer my attention lies with the GPU. My RX 580 is at stock clocks but my GTX 1080 is OC’ed as far as I can take it.

    If Nvidia lets me overclock its next gen GPU, I will.

    • ronch
    • 1 year ago

    I never was an avid overclocker because I prefer maximum stability over a few more clock ticks and I understand Turbo and power saving mechanisms like Cool & Quiet don’t work when you OC, but my finest hour of getting ‘free’ performance was with my Phenom II X3 720. Back in 2009 I had an Athlon 64 X2 and my target was to get at least Core 2 Duo E8400-level performance on my next upgrade. Back then the X3 720 was cheaper than the E8400 but delivered 20% less single thread performance. AMD targeted the E8400 with the 720, arming it with one extra core to compensate and pricing it lower. As it is, it was a fine alternative, and being able to plug it directly onto my AM2+ board was a nice money saver. But the kicker was when I upgraded later on to an AM3 board and was able to easily unlock the 4th core. So with each core delivering 80% the performance of the E8400 but with twice as many cores, it has theoretically 60% more aggregate performance than the E8400 while costing less. Such a great bang for the buckazoid.

      • derFunkenstein
      • 1 year ago

      Core unlocking was the height of awesome CPU gains (IMO, though I guess the Celery 300A was close). I had a Phenom II X2 550 Black Edition that also unlocked its dormant resources. Couple that with an overclock to 3.4GHz and I got something like 120% extra performance for free.

      edit #65535: my math sucks.

        • Redocbew
        • 1 year ago

        Behold! The 16 bit edit.

          • ronch
          • 1 year ago

          Technically it’s just 2-bit.

          Just my 2 cents.

            • Redocbew
            • 1 year ago

            Only if you start counting from 1.

            There are 10 types of people in this world: those who understand binary, and those who don’t.

    • TurtlePerson2
    • 1 year ago

    I haven’t kept up with this stuff as well as I used to, but I don’t remember AMD chips ever being great overclockers compared to the Intel chips. I had a 2.4 GHz C2Q back in the day, which hit 3.2 GHz with stock voltages. That made my $200-300 chip effectively the same as what was being sold for $1000.

    What I don’t understand is how this article justifies its claim. AMD chips were never the best overclockers, yet the article uses them as the example. Graphics cards were never great overclockers either, but they’re again the example. Then a high-end Intel chip is held up to show that you can only get a 10-20% overclock out of it without thermal and power concerns.

    Is there something I’m missing? Do mid-range and low-end Intel parts no longer overclock to 50% higher clock speeds with inexpensive cooling?

      • mightymightyme
      • 1 year ago

      AMD had some sure hits in the overclocking department. I remember getting a Duron 1GHz (remember those?) and changing the FSB to 133 MHz for a 300MHz overclock. Not to mention the old Athlon pencil trick. I remember looking at serial numbers for Barton processors to get the special model 2500 that overclocked best. I even had one of the Phenom IIs you could unlock the extra 2 cores on. I don’t think they ever had anything as legendary as the Celeron, but AMD certainly had some very cheap processors that could overclock very well. There were a lot of $100-$150 chips that could be overclocked to be similar to $300-$500 processors.

      • TheRazorsEdge
      • 1 year ago

      “Do mid-range and low-end Intel parts no longer overclock to 50% higher clock speeds with inexpensive cooling?”

      The entire point of the article is: No, not anymore. There are still some choices for overclocking, but there is nothing close to what was available in the late-90s and early-2000s.

      • zmx
      • 1 year ago

      One system I built back in the day used a Sempron 2600 @ 1.6GHz. It was $50. I overclocked it to 2.6GHz, a 63% increase. That was also the same speed as the $700 AMD FX-55.

      It didn’t have the same cache as the FX-55, but outperformed it in some benchmarks and did great in games.

      The old Opterons were pretty good too. My Opteron 165 @ 1.8GHz overclocked to 2.7GHz. 50%, yo

    • DeadOfKnight
    • 1 year ago

    Personally, I think it’s about time. Artificial segmentation has been a bad thing for consumers in pretty much every other regard. At least if chips are being pushed to their limits and binned accordingly, then you get what you pay for, and warranties needn’t be voided any longer.

    I, for one, welcome a new paradigm where overclocking your CPU really means adding extra cooling and letting the CPU overclock itself. Of course, I would hope that deceptive marketing would go away along with it, ie. a $300 motherboard for overclocking a chip ~10% higher.

      • Anonymous Coward
      • 1 year ago

      Well, they’re not going to let lower-priced chips compete with the higher-priced ones, they’ll set clock limits to prevent anything from happening that will cost them money.

    • cybot_x1024
    • 1 year ago

    I remember pushing a Celeron 420 from 1.6GHz to a blistering 3.2GHz on my Gigabyte P45 DS3 motherboard. That’s a 100% OC! Those were the days.

      • strangerguy
      • 1 year ago

      Nice, but your chip also existed in a time when CPU market segmentation was getting a lot tighter not just with clocks and cache sizes, but also core counts. And your single-core CPU also ran out of steam quickly into the Core 2 era.

    • Zorb
    • 1 year ago

    I have a 300A still in the box and wrapped just because I knew one day it would be remembered as the king of the OC… I believe it cost me $68 so I bought 2

      • Anonymous Coward
      • 1 year ago

      The first king perhaps, but it’s debatable if it is the king. (Also had a 300A, good times except for the rate it became obsolete...)

      • Krogoth
      • 1 year ago

      It is because Intel was so dang ultra-conservative with the binning on the Mendocino. They gave it a 66MHz FSB despite the fact that the architecture and platform it rode on could effortlessly handle a 100MHz FSB.

      They also sold the SKUs at a fraction of the cost of the Pentium II. If anything, Mendocino Celerons were the “K6 killer”.

    • Flapdrol
    • 1 year ago

    Techpowerup clocked their 2700X to 4.2 on all cores. It made games run slower.

    https://www.techpowerup.com/reviews/AMD/Ryzen_7_2700X/12.html

      • strangerguy
      • 1 year ago

      The funnier part is that not only was the manual 4.2GHz 2700X overall slower than the stock, idiotproof Fisher-Price XFR2 mode, it also used 27W more power during stress testing.

      For all intents and purposes it is a chip with no real OCing headroom whatsoever because the stock mode is already too good at boosting itself.

      As for the Celeron A, it existed because Intel needed something cheap and fast enough to fight off the AMD K6s (not as good as the P2s but a lot cheaper and certainly good enough for MS Office) in the then <$1000 mass market PCs, and the L2-cache-less Celerons were a disaster. I really doubt they cared about the tiny OCing scene eating into their profits when there was a much bigger fish to fry.

        • bhtooefr
        • 1 year ago

        Intel actually cared a lot about overclocking back in the day, especially after what happened with unlocked-multiplier Pentiums. What they were fighting back then wasn't enthusiasts doing it, though (although they certainly didn't have a problem with enthusiasts having to buy a higher-end part). They were fighting counterfeiters rebranding lower-spec CPUs as higher-spec CPUs, and whiteboxers overclocking lower-spec CPUs and not disclosing it to their customers.

        In the 486 era, you saw a little bit of counterfeiter rebranding, but it was mostly rebranding a 486DX 25 to a 486DX 33. The 486DX2 50 wasn't common enough to rebrand as a 66, 40 MHz FSBs were unstable in VLB systems so nobody wanted a 486DX2 80, and multiplier tweaks weren't really practical even when they were possible (486s were all top-locked AFAIK, and FSB was the limiting factor). Note that multipliers were a new concept with the 486, and the model referred to the multiplier (well, 486DX4 was a 3x multiplier).

        With the Pentium, though, Intel didn't want to have to have different silicon for each multiplier configuration, and made things configurable and unlocked. You'd set the jumpers based on the rated multiplier and FSB. So, you'd see things like a Pentium 90 (1.5 * 60) rebranded as a 120 (2 * 60). You'd see things like a Pentium 120 rebranded as a 166 (2.5 * 66). Or, you'd even see off-the-wall things like a 75 (1.5 * 50) or 90 rebranded as a 166. (You'd also see straight FSB overclocks, like a 75 to a 90 or 100, a 120 to a 133, a 166 to a 200, that kind of thing.)

        So, to fight this, top-locking the multiplier became a thing on the later Pentium Classics (especially the 200), as well as most Pentium MMXes - as these tended to be 66 MHz FSB already, you could get gains by dropping the multiplier and increasing the FSB, but Intel's chipsets didn't tend to like it. Then, with the Pentium II, the multiplier was fully locked, and there were autodetection mechanisms put in place (motherboards could override them for FSB, but when you installed the CPU, it'd default to what Intel shipped it as AFAIK, unless the counterfeiter jumpered some pins), which mostly killed counterfeiting... but it also hurt overclocking, especially on the high end (where you might actually want to drop the multiplier to get a FSB overclock, for better performance).

        It actually meant that sometimes, you wanted a lower-end part to get a better multiplier for your target OC (this is why Celeron 300As and 333s were more desirable than 366s in many cases), although mobile parts tended to keep the bottom unlocked to handle dynamic underclocking schemes (but Intel was good about keeping them incompatible with desktop sockets), and eventually those schemes came to the desktop.

    • ET3D
    • 1 year ago

    It was never the case that all CPUs were good overclockers. The Ryzen 3 1200 is a pretty good overclocker. The Ryzen 3 2200G can have the GPU overclocked by quite a bit.

    I do think that the gist of the argument is correct: it’s harder to get a big overclock trivially than it used to be. But there are still a lot of cases where overclocks are worth it, and I believe that will continue to be the case. Software and BIOSes also make it quite easy these days to play around with overclocking. I also think that chip makers do realise that overclocking is a feature for enthusiasts, so they might not be so quick to simply cut it out.

    So even though overclocking headroom is being eroded, for now I think the fear is premature.

      • Anonymous Coward
      • 1 year ago

      Of course they know that OC is a feature, but clocks are hard to come by these days; they’re not leaving them lying around for people to play with.

    • crazybus
    • 1 year ago

    I’ve recently found that undervolting can be quite effective at boosting performance in TDP or thermally constrained platforms. Tweaking voltage offsets on recent Intel chips with their Extreme Tuning Utility can open up some higher Turbo Boost levels than would be available stock.

    With quad-core 15W chips the new mainstream, and these chips being severely TDP limited, voltage tweaking could go a long way.

    • TEAMSWITCHER
    • 1 year ago

    Six months ago, I performed a BIOS update and forgot to restore the 4.2GHz overclock on the X99 system I built in late 2014. I never once noticed the over-400MHz performance loss. More cores, all SSD storage, and dual graphics cards driving a 4K display, all collaborated to provide an experience that to this day is still … Buttery Smooth.

    Component selection is more important than overclocking.

    • hansmuff
    • 1 year ago

    Besides the chip implementations making overclocking almost pointless, we’re also living in a time of renewed competition thanks to AMD’s engineering efforts, and a 6-core/12-thread Intel chip is $350.

    So it’s at a point where you get absolutely top notch consumer performance for a price that is very much acceptable.

    At this point, overclocking is mostly bragging rights and “looking like an enthusiast.” I’m doing the same thing, getting 10% more out of my CPU and GPU respectively, because I can. But I feel more and more foolish for doing it and mostly just wasting electricity.

    Overclocking was multiple things. Chasing performance, chasing bang/buck, chasing cred. The first 2 are just gone, the last one is there but I’m too old to compare OC notes with friends, we talk about the kids and our new mowers. It was a fun time while it lasted.

    It feels a lot like when PCI-E made the DMA, I/O and IRQ jumpers irrelevant. It all just works now, who jockeys around jumpers anymore to get the most optimal configuration? It’s poof gone, overclocking is going the exact same route.

    • deruberhanyok
    • 1 year ago

    I completely agree with this – I’ve found myself wondering recently if the extra premium on Z chipsets and K processors even makes sense for most users. Doubly so with the AMD parts – even if you want to overclock them, it’s not like you NEED the “X” processor to do it. So why pay more?

    I feel like overclocking is more useful as a spectacle sport now – a thing that people who go by internet handles, who have sponsors, do for tradeshows and competitions using liquid nitrogen and other stuff.

    Anyways, I wanted to mention the only other chip I’ve ever seen hit the same highs as the Celeron 300A: The Northwood Pentium 4 1.6A.

    I had a couple of these and all of them could reliably run at 2.4GHz. I didn’t even bother running them stock – put the system together and changed the FSB first thing. Maybe you could get 50% overclocks out of other chips, but I’ve never seen it go as easily as it did with this and the 300A.

    Good times man. I kinda miss those days. But also, I like just being able to put a system together and not having to think about it anymore.

    • Ummagumma
    • 1 year ago

    Do we OC “because we can” or “because we run software that SUX”?

    I admire people that have the time and money to pour into the effort of squeezing the last tenth of a percent of performance out of a computer. I have neither the time nor the money to pursue such adventures.

    As a former programmer (among other trades), I can’t stand “bloatware” software. I wonder how much overhead is added by all of the various unremoveable, undeleteable, and/or “always on features” found in modern operating systems?

    I have no experience OC for gaming purposes. Perhaps some that game have to OC to get better performance due to the “software security” schemes that are sometimes liberally used on gaming software. I wonder about the programmers that write games and what types of computer hardware they use to develop games that demand super-mega CPUs, super-mega GPUs, and super-mega-whatever else in order to get the last bit of performance out of that game.

    • WaltC
    • 1 year ago

    I thought pretty much the same thing when cpus hit 1GHz +…;) I was wrong. The limiting factor today is the performance-per-watt envelope–constantly shooting for the most performance at the lowest power consumption. Of course, smaller dies and electromigration (hand in hand with power considerations) are putting a damper on things, too. Remember when over-volts @ 130nm would turn certain Intel cpus into inert key-chains? ([H] had several on their front page, IIRC.) But today we have commercial water-cooling unheard of in those days. So all-in-all, I don’t see the trend for us DIY’ers to give up a little OC here and there as being much stronger now than before. Where there is a will there is a way, etc…;)

    • brucethemoose
    • 1 year ago

    There’s still hope in 2 areas: mid-level chips and big HEDT/server chips.

    Even with no voltage headroom, there’s still room for exploitation in chips with lots of cores squeezed into a low TDP, as long as they remain unlocked. You just gotta nudge them up the voltage/frequency curve, at the cost of some efficiency.

    • Lazier_Said
    • 1 year ago

    It isn’t that the hardware is overclocking itself so that we don’t have to. It’s that the hardware has gotten so far ahead of the software that even the lowest binned i3, not overclocking itself, will already run through most consumer workloads literally instantaneously and the few that it won’t are I/O limited and an i9 would do no better.

    Rose colored hindsight of the golden age of tweaking leaves out that the reason we were doing it was because those golden age computers weren’t mature products and they needed that tweaking to get your tasks done smoothly, or even done at all.

      • ozzuneoj
      • 1 year ago

      Yeah, people are spoiled by imperceptible delays on computers these days. Between the super fast multi-core processors, huge amounts of RAM, and solid state storage, few people have any reason to push their computers beyond their normal limits.

      It used to be that a brand new computer could have a CPU that provided 20fps in a new game and was considered “slow but at least it runs!”… and you could tweak a few BIOS settings to overclock the CPU substantially to break 30fps.

      These days, that would be like buying a pre-built desktop with a dual core Kaby Lake Celeron, flipping a couple BIOS switches and unlocking two more cores at the expense of maybe some additional cooling. This would offer a similar night and day difference in many situations.

      I’ve overclocked every system I’ve owned since 2000, except for one or two (early Athlons with sub-par cooling). I’m still rocking a 2500K running at 4.2GHz after 7 years, and the thought of upgrading my system and only getting stock performance (or maybe 5% more with lots of added heat) seems pretty unappealing. The fact that CPUs offer such minor improvements and the relatively high cost of an upgrade (partially due to the price of DDR4) just kills the idea altogether.

    • synthtel2
    • 1 year ago

    It looks particularly dead right now because Ryzen’s / GloFo 14nm’s voltage/frequency curve gets so cliff-like at the end and because OCing tends to leave a lot of Intel and AMD’s more advanced clocking tech behind, but I don’t expect there to be a shortage of people looking to push their chips harder than the factory will anytime soon.

    Intel/AMD/Nvidia are getting very good at squeezing everything out of the chips given particular limitations, but they’ve got to worry more about things like “what are the chances this chip will still be operational after a decade of hard work?” If that target is reduced to something more like 3 years at 20% duty cycle, the gains are often non-trivial. Using that doesn’t have to mean leaving all the advanced clocking tech behind, either; it could work more like it does for Nvidia, or would for Nvidia if Nvidia didn’t tend to keep power targets locked down so aggressively. The way OCing CPUs now means setting voltages and clocks like it’s 2008 is just an implementation detail.

    While the gains from OCing are smaller, the proliferation of HFR monitors means CPU performance matters more than it did a couple years ago. When the CPU-side of most games on most decent CPUs can reach well above 60 and 60 is the main target, it seems a bit of a waste of effort. When 144 (or 165, or 240) is the target and the GPU is the problem, at least you can just turn settings down. If you’re trying to hit 144 or beyond on most games CPU-side, what are you going to do except overclock? If everything’s linear (it isn’t, in both directions), 5.0 versus 4.3 could mean 144 fps instead of 124, and there may literally be no other way to get to 144.

    • SnowboardingTobi
    • 1 year ago

    My best overclock was taking an AMD Duron 600 up to 1.1GHz. I could hit 1.2 but it wasn’t very stable. Almost double!

      • JustAnEngineer
      • 1 year ago

      I used the pencil trick to overclock several Durons and Athlons of that generation.

      • derFunkenstein
      • 1 year ago

      Same, and it was insane. It could run the Quake 3 benchmark as fast as my poor GeForce DDR could carry it. 100+ FPS at 1024×768, and the bad news is that I still sucked at the game so it didn’t matter.

      • Mr Bill
      • 1 year ago

      My first overclock was running my 80MHz AMD 80486 at 100MHz on a Vesa Local Bus (VLB) board with a 50MHz bus speed.

      Later, the PA-2007 motherboard based on the VIA VP2 chipset was superior to the Intel TX and HX chipset motherboards. The PA-2007 overclocked the P133 to 150MHz just fine with a 75MHz bus speed. The 233MHz AMD K6 CPU overclocked at 266 MHz on the PA-2007 very nicely. But these are small overclocks compared to the Celeron and Duron overclocks you guys got.

        • srg86
        • 1 year ago

        I think your real feat was getting that Vesa Local Bus motherboard to 50MHz, they were notorious for being unreliable above 33MHz!

        I can see that with the PA-2007, the VIA VP2 I think was factory certified to 75MHz, as that was the native bus speed of the Cyrix 6×86 PR200+.

        My first overclock was running my AMD K6 166MHz at 200. Later I ran an Athlon XP 2500+ as a 3200+ (2.2GHz) with just an FSB jump.

        I’ve not bothered in years though.

    • Ninjitsu
    • 1 year ago

    Mhm. At the same time, I think we now have tools like the Rockit88 that make de-lidding more accessible to people. I currently have an i5-3570K and a 4690K, and I’ll probably end up de-lidding and overclocking them eventually (especially the Ivy Bridge chip, since that’s what I have current access to – the Haswell is back home).

      • Krogoth
      • 1 year ago

      Delidding only makes sense if you want to reduce loaded temperatures with an overclocked chip.

      Haswell/Ivy Bridge ~4.5Ghz and beyond are held back by DDR3 under the majority of applications.

        • designerfx
        • 1 year ago

        AnandTech had an article about delidding taking about 20°C off a few specific processors. I’d not say at all that overclocking is the required area of focus.

        • Ninjitsu
        • 1 year ago

        Reduced temps help a bit with how far you can push voltages without the chip throttling.

        And yeah, I’d like a cool and quiet 4.5/4.6 GHz please.

    • setaG_lliB
    • 1 year ago

    My favourite overclockers:

    Pentium M 745 on a desktop board: 1.8GHz –> 2.72GHz
    Athlon 64 3000+ Venice core: 1.8GHz –>2.75GHz
    Core 2 Quad Q6700 (G0 stepping): 2.66GHz–>4GHz

    My current CPU is also a fairly decent overclocker. It’s a 4930K (Ivy-E) clocked at 4.6GHz with only 1.24v going into it.

      • Anonymous Coward
      • 1 year ago

      The 300A is legend, but it was a flash in the pan compared to OCing from the C2D and up… the later chips had staying power.

    • dragosmp
    • 1 year ago

    Let me rephrase: days of zero-skill overclocking are numbered.

    For many years, with no competition, Intel restricted OCing to its highest end. The 300A was just the first of a series of low-end performers, followed by CPUs like the 550 K7, 2500+ K7, A64 2800+, or the Q6600. Since then you had just one option – buy the most expensive Intel for OCing. What was the point? It was the fastest anyway.

    Now, OK Turbo takes care of the frequency gain. However, I bet there are ways to improve, and they will apply to more than just the highest end, at least as long as AMD competes:
    -improve cooling for more headroom – CPU and VRM
    -delid to improve thermal contact
    -tune offset voltage to spend less TDP on useless generic Vcore, thus increasing your thermal headroom

    It’s as you say, gone are the days of casual OCing; but this was never a thing unless you had $600 to spend on mobo/CPU/RAM. If you had $250, you were stuck. We’ll get into an age where a lowly CPU can be tweaked to be as good as one double the price. That is surely more interesting, isn’t it?

    • EndlessWaves
    • 1 year ago

    Casual overclocking has been dead for years. When you have to spend 25% of the cost of the CPU on extras just to enable overclocking, it can no longer be called casual.

    Casual overclocking was when you came across this interesting article on the internet on this thing called overclocking and could try it on your perfectly normal PC. That’s been dead for a while.

    It does look like the current non-technical overclocking for enthusiasts will die too, but I’ll be welcoming the shifting of interest to other areas.

    • jihadjoe
    • 1 year ago

    They’re often overlooked because we look back on the Mendocino days with nostalgia, but I think some of the best overclocking chips were the budget 4000-series Core 2 Duos. Anandtech got their E4300 to 3.375GHz (https://www.anandtech.com/show/2149/2), which was almost a 100% overclock.

      • Krogoth
      • 1 year ago

      Conroe, Wolfdale and Sandy Bridge overclocked like a dream. The motherboard was usually the bottleneck long before the CPU.

        • Srsly_Bro
        • 1 year ago

        That and the CPU clock multiplier. The Q6600 was 9x IIRC, which allowed 3.6GHz with an easy-to-achieve 400MHz FSB speed.

      • Kougar
      • 1 year ago

      So were the E6300 chips, and I was able to run at 100% on mine (3.8GHz) after swapping a 965P chipset for a P35 board that could handle the crazy FSB required. Ran it stable 24/7 for years until the Q6600’s got real cheap. Still used as an email/web desktop for my father these days.

      • puppetworx
      • 1 year ago

      Let’s not forget the Pentium E2140 which was essentially a Core 2 Duo E4200 with half the L2 cache and at half the price. A few BIOS tweaks and you had an 80% overclock on a $60 chip.

    • JosiahBradley
    • 1 year ago

    Don’t tell me this right when I’m about to watercool everything!

      • Kougar
      • 1 year ago

      Watercooling is mostly for lower system noise and potentially better longevity through lower temps. Don’t count on it to help with boosting OCing much, though it will take the heat off hot chips.

        • JustAnEngineer
        • 1 year ago

        To the contrary: If you’ve got great cooling, today’s CPUs overclock themselves automatically.

    • maxxcool
    • 1 year ago

    For me, CPU speeds are nice to bump... but I am way more interested in uncore speeds, memory controller speeds, memory speeds, chipset hacks and tweaks, and latency hacks.

    It seems these days there is as much or more gains to be had by making the cpu-chunks talk faster to each other than the cpu moving bits around.

    • Takeshi7
    • 1 year ago

    My best overclock was still all the way back in 2005, when I got my Celeron D from 2.53GHz to 4.13GHz, and it still kept cool on the stock Intel heatsink. Every generation since, my achievable overclock has been small enough to not really make it worth it.

    I really like the idea behind the 2700X. Just let the chip auto-overclock to whatever the thermal limit allows, and it doesn’t void your warranty. I’m OK with overclocking being dead as long as every chip automatically performs the best it can.

    • chuckula
    • 1 year ago

    Overclocking is dead (https://www.extremetech.com/computing/109821-is-overclocking-over)! Long live overclocking!

    I would say that overclocking isn't going away, but the turbo-boost implementations on modern chips are trying to give you overclocking without the headache of manually stabilizing everything right at the edge of what the chip can do.

      • Krogoth
      • 1 year ago

      It will remain a niche for those willing to put in the time and effort to squeeze out every ounce of performance, or for simple bragging rights.

      • RickyTick
      • 1 year ago

      Overclocking is over-rated.

      • frenchy2k1
      • 1 year ago

      I’m not sure why people are complaining about it, though.
      Turbo gives most of the benefits with none of the drawbacks, compared to overclocking.
      GPUs now max out your clock within your power curve, so why even bother changing it yourself?
      CPUs will soon do the same. AMD has XFR, inspired by their GPUs in the same way.

      Overclocking is not dead, it has been democratized.
      Everyone is enjoying maximized performance for no effort.

      Of course, once you safely maximize performance, the margin left out is rather small…

    • mat9v
    • 1 year ago

    While I agree with the premise of the article and its conclusions, there are “pearls” still in hardware land 🙂 Take for example the Ryzen 1700, which has an all-core turbo of 3.2GHz and can be OC’d to 4GHz – that is still 25% of free performance. The Ryzen 2700 is marginally worse, since it has an all-core boost of 3.4GHz and you can OC it to 4.2GHz.
    The 4.3GHz all-core boost on the 8700K can in a lot of cases be OC’d to 5GHz – that is still 16%, nothing compared to the 60% of the old days, but … not bad 🙂

    • Srsly_Bro
    • 1 year ago

    However this is the Golden age of casual PC users. One only needs to look at computer forums.

    Sup?

    • fellix
    • 1 year ago

    The moment Intel and AMD began incorporating advanced resource and power management from their mobile designs into the desktop architectures, the race to tighter OC margins was on. Precise IC binning and model segmentation became another profit opportunity to prop up Moore’s Law for several more generations. The result is steeper power walls and short suicidal runs on LN2, well outside the reach of the casual 24/7 overclocker looking for some free performance.
    Now you have to pay for that as well.

    • ptsant
    • 1 year ago

    Overclocking at the high end is finished, because there is an obvious incentive to squeeze a maximum of performance (and price) before selling.

    Where hope remains is in the mid-tier products, where you can find a chip that is clocked below its capabilities. The Ryzen 1700 was quite popular for that reason. Some people got it close to 1800X, although it can be argued that the cooling and the effort is probably not worth the price difference…

      • strangerguy
      • 1 year ago

      IMO, the 1700 before discounts and even the 2700 non-X are already made redundant by the 2700X. Paying slightly more for the latter – which has better IPC, a better IMC, a much better freq/V curve than 1st-gen Zen, a presumably better-binned chip, a better HSF, and better resale value than the 2700 non-X – is already a no-brainer decision. Saving $100 with the 2600X for 2C/4T less is not worth it either, since the useful lives of CPUs are now so long that $100 is a pittance in the grand scheme of things.

      Choosing a new CPU now is very boring to be honest, since there’s no more free lunch to speak of.

        • msroadkill612
        • 1 year ago

        There is a free lunch, but it is open to all, even newbs.

    • derFunkenstein
    • 1 year ago

    Basically just +1 to the whole article, including the final sentence. You used to have to work to get every last drop of performance out of a CPU but nowadays you don’t because they’re already on the ragged edge of the performance/power consumption curve.

    I imagine that AMD’s refusal to lock down the CPUs that aren’t the top end of each family has been part of what drives its average selling price down. Enthusiasts who want to tweak (like me at launch) bought the cheapest model of each desired core configuration and then cranked the clocks as best they could. A 3.7GHz all-core overclock on the Ryzen 7 1700 works out to be just over 20% free performance that didn’t even take a non-stock cooler to achieve.

    Without a huge corporate presence, it might be in AMD’s best interest to lock down non-X CPUs when Zen 2 rolls around. And that would truly be the end of casual OCing.

    • leor
    • 1 year ago

    Intel may keep the trend going with their crappy heat paste. I got my i9-7900x delidded and have it all core overclocked from 3.3 to 4.6. I’m not sure if delidding falls into the category of casual anymore, but it wasn’t a big deal for me to just buy a chip from silicon lottery.

    • Ifalna
    • 1 year ago

    Well, that is to be expected when CPUs already go past 4GHz on their own.

    While I never OC’d aggressively myself, I will miss the gentle overclocks that still run on air, like my current 3570K’s 4.6GHz. It was fun to toy around with the system at that level.

      • Waco
      • 1 year ago

      Yep. I used to do hardcore overclocking and these days it’s just as easy to crank up the all-core clocks and call it a day. I miss fiddling with cutting traces, socket modding, etc…but it definitely is easier to get everything out of a chip these days.

    • tipoo
    • 1 year ago

    It seems to me Intel would also benefit from upgrading the TIM or going with a soldered IHS like high-end AMD parts, with Ryzen increasingly nipping at their heels and, as mentioned, Turbo Boost and Thermal Velocity Boost trying to use all the thermal overhead they can. That eliminates some of the benefit of overclocking: increased efficiency in heat removal = increased potential performance for their boost technologies.

    • Shouefref
    • 1 year ago

    Reminds me of the days of writing macros.
    I don’t do it anymore, because I don’t need it anymore.

    • Krogoth
    • 1 year ago

    It is because we were so spoiled by how easy it was to overclock low- to mid-tier chips for over a decade.

    Physics has caught up and Intel/AMD are no longer super-conservative with their binning. Their highest-tier SKUs are clocked close to their clockspeed/thermal wall and they have locked down their lower-end SKUs hard. Turbo-clocking pretty much makes arm-chair overclocking obsolete. You just need to get a good HSF solution and power supply. The CPU takes care of the rest.

    The new focus for gaming is now trying to reduce frame-times as much as possible. Overclocking is only a portion of the equation. Power users just try to find the sweet spot for stability, power consumption, clockspeed across the cores on their HEDT-tier chips.

      • Walkintarget
      • 1 year ago

        “The new focus for gaming is now trying to reduce frame-rates as much as possible.”

        Wait ... what ?? Kids these days ... I like my FPS at higher framerates, thank you.

        • tipoo
        • 1 year ago

        Frame times maybe?

          • cygnus1
          • 1 year ago

          Yeah, agreed, pretty sure he meant frame times

    • Kretschmer
    • 1 year ago

    I am quite content to let my CPU seek its highest frequencies on its own. If anything, that enthusiast zeal seems to be moving to gaming laptops, where undervolting and tweaking can mean the difference between throttling and sustained max clocks.

    • blastdoor
    • 1 year ago

    From my perspective it’s great that CPUs can figure out how to safely overclock themselves so that I don’t have to bother.

    It seems what this means is that cooling becomes the primary focus for enthusiasts. You provide the cooling and the chip will provide the overclocking.

      • SuperSpy
      • 1 year ago

      IMO that’s a fine trade. I’d much rather spend my time engineering a way to better cool the machine than have to screw around with overclocking manually and the stability dance that follows.

      • Chrispy_
      • 1 year ago

      The granular 25MHz dynamic increment changes of the 2000-series Ryzens* are incredibly impressive. I’ll repeat my sentiment that you’re better off putting your overclocking efforts and funding into RAM these days, since it’s not worth losing the dynamic clock changes for the sake of overclocking.

      Even on the Intel platform, few of us are hardcore enough that we want to give up dynamic clocking since the power and noise savings of speedstep when idle are well worth having, and several of the more serious overclocking methods require you to disable those features.

      *As long as they’re not TDP-limited like the 2700, which loses a lot of clockspeed beyond 4T and tanks hard at 16T.

    • YukaKun
    • 1 year ago

    I realized this with Sandy Bridge: it was way better to sacrifice “max OC” capability for “second best OC + power saving”.

    Now with my 2700X, I just did away with “OC” on my own and slapped a Noctua NH-D15 on it. It boosts itself to the max with no help at all, and I just need to play with the RAM.

    Cheers!

      • msroadkill612
      • 1 year ago

      It’s a neglected selling point imo, if folks could get it.

      Factory kosher, automatic optimal clocking, with the option of more clocks via a cooling improvement.

      And as you say, to put all that aside, and focus entirely on tweaking vital memory, should yield a better overall result.

    • Mentawl
    • 1 year ago

    Sounds about right, to be honest. The most “overclocking” I’ve done to my newish 8700K is to let it boost up to 4.7GHz on all 6 cores instead of just 1 – that creates plenty of heat as it is. Plus I don’t really need more single-threaded performance.

    • Chrispy_
    • 1 year ago

    The Celeron 300A was a bit of a unicorn though, because the way it was made was actually a mistake. The original Celeron was a bit of a dog because zero cache wasn't enough to make it competitive with AMD and Cyrix. In attempting to reduce manufacturing costs of the Pentium II, Intel removed the on-package L2 cache and used a much smaller amount of on-die cache instead. Conventional wisdom at the time said that cache quantity was what mattered, but at least the 128kb of cache moved on-die for the Celeron 300A would be able to run faster, and at lower access latency.

    What nobody at Intel realised at the time was that cache latency was just as important, if not more important, than the size of the cache for most consumer workloads, and thus many of us purchased the rich pickings of Intel's accidental discovery of this new fact.

    My trusty Asus 440BX board pushed my 300A to the limits of my RAM (125MHz) and probably had more headroom, if my eardrums could stand the punishment. FYI that was a $149 chip running at 564MHz against 1998's range-topping, $670 PII 450. Not only was the Celeron A nearly identical in IPC to the PII, it also seemed to have much more overclocking headroom, once freed from the burdens of off-die cache that required longer traces and extra clock control.

      • K-L-Waster
      • 1 year ago

      “What nobody at Intel realised at the time was that cache latency was just as important, if not more important, than the size of the cache for most consumer workloads, and thus many of us purchased the rich pickings of Intel's accidental discovery of this new fact.”

      That's definitely part of it, but the other part was that many 300A's were actually Pentium IIs that had been pulled to the Celeron line just to ensure they had enough parts to sell and keep market share.

      The PII had a base clock of 100 MHz, whereas the Celeron had 66MHz. So the simplest "overclock" in the world was to just put your mobo back to 100 MHz and let the chip run at the silicon's actual speed of 450MHz. Instant 0-risk 50% clock speed gain.

        • blastdoor
        • 1 year ago

        I’ll add that what nobody at Intel — at least not in the decision-making management offices — realized was that some of their customers were pretty sophisticated and that the Internet provided those customers with the means to share their ideas with a broader range of people eager to learn.

        If Intel had been trying that ham handed market segmentation strategy in 1990 it probably would have worked just fine, since there was no world wide web at that time.

        • Chrispy_
        • 1 year ago

        You’re getting confused with the Celeron 300 (vanilla, original). It was the same silicon as the PII 450, and the only differences between the PII 450 and Celeron 300 were the external L2 cache on the riser card either side of the CPU, and the default FSB clock (66MHz on the Celeron, 100MHz on the PII). They both shared the same 4.5 multiplier lock, and I don’t doubt that Intel simply disabled the cache on PII 450s to meet Celeron 300 demand – it would still be better than letting the sale go to AMD or Cyrix.

        The 300A was actually different silicon, more advanced and newer than the PII line. It had 128kb of L2 on-die that the PII silicon lacked, and as an added bonus of it being so physically close (on-die), they didn't have to run the L2 cache at half the clock rate of the CPU; the Celeron As ran their cache at the full CPU clock. The result of Intel's low-budget experiment was that an overclocked 300A's 128kb of 450MHz cache was in some cases significantly faster than the Pentium II 450's 512kb of 225MHz cache.

        • Coyote_ar
        • 1 year ago

        No, that’s the thing. The idea of the Celeron was to use Pentium IIs without the external L2 cache chip. It had a tiny L1 and no L2 … so it sucked big time.

        Then, when they tried to fix it by adding a smaller L2 on the die … boom, the thing was faster than the fastest P II when overclocked.

        Even then, the fastest P II wouldn’t overclock past 450 due to the L2 cache chips. But the Celerons would go past 550 with ease, especially if cooled with a TEC.

        Good times 🙂

      • Krogoth
      • 1 year ago

      Nah, Mendocino was Intel testing out on-die cache with the P6 architecture. They were faster than Pentium IIs at the time in mainstream applications, because those applications didn’t really take advantage of the Pentium II’s extra cache size. Pentium IIs were only faster at professional-tier stuff at the time. Marketing people thought that cache size and clockspeed were all that mattered.

      I tend to think of the Mendocino as the “proto” Coppermine of sorts.

        • Chrispy_
        • 1 year ago

        Maybe they expected it to be faster and it was a prototype. I’m not saying you’re wrong.

        If that was the case though, why didn’t they produce a Pentium II “A” like they did for the Celeron?

        I believe (and there’s no evidence for this, it’s just my opinion) that Intel were just plain stubborn and refused to shift their thinking away from “more is better”, even when the results were staring them in the face. My reasoning for this is that the stubbornness allowed AMD to overtake them, and it’s the same stubbornness that created the awful “more is better” Netburst architecture, despite AMD’s 2-3 years in the market proving that IPC, floating point, and branch prediction improvements were more valuable than clockspeed.

        Maybe I’m wrong, but I firmly believe Intel was not smart about its business back in the 90’s and that’s the sort of complacency that caused them to lose a lot of business to AMD, both in desktops and also in the server rooms and datacenters. Only their corrupt antitrust, anticompetitive behaviour kept them afloat during this time, since all of their eggs were in one basket – the CPU business. AMD won 1.25bn off them, but the damage was done and Intel were now back in the game; A poked bear but with all the ruthless, illegal practices running as usual behind the scenes. The Athlon, followed by Intel’s Netburst blunder, should have ended Intel, but even further lawsuits from the FTC after AMD walked away from AMD vs Intel weren’t enough to undo the stranglehold of Intel’s business practices.

        None of that last paragraph is opinion or conjecture – go look it up, there are full-on, hour-long documentaries and numerous articles worth reading on the issue, and yet so many people are still in denial.

        /dismount soapbox.

          • willg
          • 1 year ago

          As others have said, the original Celeron 300 had no cache at all, it was a Pentium II die without the off-chip cache essentially. It performed pretty poorly against the competition.

          The Celeron 300A was a different chip, with 128kb of on-die cache running at full clock speed. The Pentium IIs of the era mostly had 512kb of off-die cache running at 1/2 or 1/3 of clock speed, as I recall.

          As Krogoth has said, the Celeron line was likely chosen because 128kb of cache was feasible in the silicon manufacturing process of the time, and reduced the BoM for assembly. No off-chip cache package also allowed Intel to put it into a socketed design (Socket 370) to further reduce platform costs.

          I imagine Intel engineers knew ahead of time what the performance would be like, but product schedules and marketing probably complicate things more than we think.

            • Chrispy_
            • 1 year ago

            Ugh, I was just annoyed that I had to use a slocket adapter, and that the adapter didn’t have the plastic housing of the Pentium II, so it wouldn’t clip in properly to the slot 1 support bars.

          • Coyote_ar
          • 1 year ago

          The problem with a Pentium II “A” … or Pentium III Katmai for that matter, was cache quantity.

          With the 0.25-micron process, a 512k cache was out of the question, and even a 256k one was a long shot.

          128k of L2 was good enough for games, but for other applications it wasn’t. But no one was using a Celeron for heavy workloads, so that was OK. Now the P IIs and P IIIs needed to respond to the demands for high-end parts, and that required more L2 cache.

          Only when they got the P IIIs to the 0.18-micron process on the Coppermines did they manage to fit the 256k L2 on die.

          As for AMD … remember the 1st-gen Athlons (Argon) were using off-die L2 as well. They had the same kind of issues as Intel had.

            • setaG_lliB
            • 1 year ago

            Everyone seems to be forgetting the K6-III. A 250nm part with 256K of on chip L2 that existed before the Athlon and P3 Coppermine.

            On a decent motherboard, the K6-IIIs offered up some beastly integer performance.

            • Anonymous Coward
            • 1 year ago

            If they hadn’t made a K6-III it would have been easy to write off the whole K6 core as garbage, but turns out there was really something there. Too bad about the floating point performance, and fabrication problems.

            • Coyote_ar
            • 1 year ago

            its not forgetting … but i do recall AMD lost a shit ton of money on those. Cause they werent very profitable. Thats why they didnt sell that much, and why they moved on to the Athlon. They went from ~9 millon transistors on the K6-2 to ~21 millon on the K6-III

            K6 III+ was a stopgap between K6-2 and the athlon. Performance of the K6-2 was barely competing with celerons, they needed something to get closer to P2 performance. remember they were still using a really small FSB from the socket 7 architecture. and adding more cache on die was the only option for K6-III+ (and turning the on board L2 into L3).

            You can tell AMD was bleeding money with those K6-IIIs: the last K6-2 parts were actually K6-III chips with half the L2 disabled.

            You can have a nice read on the matter over here:
            [url]http://www.amd-k6.com/history/[/url]

            • Anonymous Coward
            • 1 year ago

            I don't see how you conclude that disabling half the L2 was evidence of bleeding money; similar things have been standard industry practice at various times without causing concern. Also note that the 128 KB L2 K6-2+ was fabbed on 180 nm; no 180 nm K6 was made without L2 on die.

            • Coyote_ar
            • 1 year ago

            They didn't disable half the L2 to save money; they did it to be able to sell parts that would otherwise have been defective. Of course it's a usual practice, and every company does it, especially when they are having yield issues.

            The 250 nm K6-III was a monster of a chip for that process, and yields weren't that good. When they moved to 180 nm, they started using the defective parts for the K6-2+.

            The K6-III wasn't a cost-effective chip. The 450 MHz part launched at the same price as a P3 450 Katmai, never mind the small detail that the Katmai was a 9.5M-transistor part versus the 21M-transistor K6-III.

            Even when compared to the Coppermine P3s, the 180 nm K6s weren't that profitable.
            By the time they launched (April 2000) they were mostly obsolete, and the only way AMD could sell them was by slashing the price to attract people who wanted to boost performance on an otherwise old Socket 7/Super Socket 7 platform.
            Not only were they facing competition from Intel's P3 Coppermine, but also from the K7. In the end, the 180 nm K6 ended up competing with Celerons… not the intended target for a costly chip.

            • Anonymous Coward
            • 1 year ago

            Disabling parts of the cache as needed is a time-honored way of boosting yields, apparently still used today. I don't see why you're fixated on it when it applies to a K6.

            The K6+ chips were aimed at mobile, I gather. Of course they were of no use against K7, but at the same time K7 was of no use in mobile (especially considering the packaging). This all seems fine.

            • Coyote_ar
            • 1 year ago

            A common practice today wasn't a common practice back then; that's why it was special.
            Even more so when you consider that a chip needs to be designed in a way that allows you to disable part of the cache to salvage a defective part. The initial 250 nm K6-III didn't have that option, which is why there weren't any 250 nm K6-IIIs with part of the L2 disabled (and yields for those 250 nm parts were even worse than for the 180 nm parts).

            And no, the K6+ chips weren't aimed at mobile. There were mobile CPUs, and there were embedded CPUs; the K6-2+ and K6-III+ parts were just the commercial names for the 180 nm CPUs.

            They were mostly popular in mobile applications, thanks to low cost (especially for the K6-2+) and low power while keeping the plain Socket 7 format, but they were also very popular with people still stuck on low-end platforms (Super Socket 7) and looking for a cheap performance upgrade.

            • bhtooefr
            • 1 year ago

            K6-2E+ and K6-IIIE+ were the embedded SKUs (although AFAIK they were basically identical to the non-E SKUs) – the entire point of the non-E + SKUs was mobile.

            However, because they fit a desktop socket, and they had as much or more clock-speed headroom than their desktop counterparts with lower power consumption and heat output, that's where enthusiasts ended up using them. Motherboard support for them wasn't fantastic, though. IIRC, a K6-III+ 400 at 560 was an extremely common overclock.

          • bhtooefr
          • 1 year ago

          They actually did produce a Pentium II with on-die L2 cache, the Dixon core, with 256 kiB; this replaced Tonga (the mobile Deschutes). In fact, Dixon is basically Mendocino with more cache, and it identifies with the same CPUID family/model as Mendocino.

          (And then the last Dixons were actually die-shrunk to 180 nm, before Coppermine. [i]That[/i] is the proto-Coppermine, although the die layout is completely different.)

      • Bomber
      • 1 year ago

      This… exactly this. My 300A was able to run similarly, kept cold under the old Golden Orb. Those were the days. It was insane how much you could eke out of that thing.

        • Chrispy_
        • 1 year ago

        Abit BE6? That board was a true champ.

        RIP Abit, I miss you.

          • Bomber
          • 1 year ago

          That’s the one. Yes…poor Abit.

      • PBCrunch
      • 1 year ago

      My personal Celeron experience was a pair of Socket 370 Celeron 366 chips in an Abit BP6. Those chips ran at 550 MHz on stock voltage from day one until the system was pulled out of service five years later. I've never looked forward to a new Windows release as much as when I was waiting for Microsoft to bridge the enormous chasm between Windows 98 and Windows NT 4 with Windows 2000. Every pre-Win2K bootup was a decision between game support (98) and getting to use the second CPU (NT).

      The gains from overclocking modern Intel CPUs are much weaker, but that is mostly because the company won’t let anyone overclock the chips that actually have some headroom. The K chips are already fast. Trying to overclock a Pentium III 550 wouldn’t get you much back when the 550 was the top-dog chip.

      Graphics chip overclocking died when Nvidia (and to a lesser extent AMD) went from using one or two chips to address all market segments via clock speed adjustments and feature cuts to having a whole family of dies within a generation. There isn’t much use in overclocking a GeForce GTX 1050 Ti when it will never come close to catching a bone-stock GTX 1060 because of massive bandwidth and shader count differences.

      Intel seems to have put more resources into product segmentation lately than it has into improving its technology. In the old days, the company couldn’t even keep users from using Celerons in 2P systems. Now Intel’s engineers seem to be able to keep clock speeds completely locked down and disable fine-grained features on a SKU-by-SKU basis. Imagine if those engineers had their eyes on keeping hackers out of CPUs’ caches instead of preventing those awful enthusiast buyers from having some fun.

        • Coyote_ar
        • 1 year ago

        Dual 366s @ 550 on a BP6 was THE setup back then, and the successor to that combo was a pair of P3 700Es @ 933 on a VP6. Good times; overclocking was fun and a good deal.

          • Anonymous Coward
          • 1 year ago

          My 366s were never stable in Linux. I was so annoyed that I threw big money at a pair of P3 Xeon 700s, and later on dual Opterons, after which point the serious stuff migrated over to a line of Thinkpads.

          • bhtooefr
          • 1 year ago

          …annoyingly, my dual 366s can't play with the big dogs up at 550 MHz, although the system is stable at 517.

          (I need to try a better power supply, this one’s suspect. And maybe get better cooling, these are reference coolers, although the motherboard’s thermistors are indicating 43 C tops.)

      • iatacs19
      • 1 year ago

      I still have a Celeron 300A brand new in box, never opened, for nostalgic moments such as these… 🙂

    • psuedonymous
    • 1 year ago

    tl;dr: Everything already ‘turbo boosts’ (AKA overclocking from the factory) as hard as it can, generally more effectively than anyone could hope to manually overclock it while not compromising stability.

      • tipoo
      • 1 year ago

      I wonder how much of our performance gain in the last several years came from just using more thermal headroom, rather than architectural gains.

        • kurazarrh
        • 1 year ago

        It seems like quite a bit, at least on Intel's side of things. I still run a Core i5-2500K clocked at 4.5 GHz, and I have yet to find a situation where the processor is the bottleneck (I mainly use that PC for gaming).

        Every 6 months or so, I re-evaluate and check out benchmarks to see whether it’s time to upgrade. Sure, in synthetic benchmarks the 2500K shows lower numbers, but real-world use hasn’t suffered in my experience.

          • drfish
          • 1 year ago

          Depends on [url=https://techreport.com/review/31410/a-bridge-too-far-migrating-from-sandy-to-kaby-lake]what you're playing[/url]...

          • derFunkenstein
          • 1 year ago

          Your “yet to find (yourself) in a situation where the processor is the bottleneck” has nothing to do with Intel’s gains in later architectures. It’s a weird argument you’re making there.

          OTOH, ever-rising TDPs at least seem to indicate that Intel is willing to make that trade-off.

    • strangerguy
    • 1 year ago

    I had the same thoughts with the 4790K back in 2014: it was already 4.2 to 4.4 GHz out of the box with little headroom, so what's the point of OCing?

    CPUs these days are already so heavily market-segmented that there is little reason not to buy the highest-clocked, highest-core-count mainstream $300 SKU and be done with it without any OCing.

    Gone are the days of easy 50% OCs on sub-$100 chips that matched or even exceeded the performance of the stock $1,000 vanity editions.
