The days of overclocking for the casual PC enthusiast are numbered, at least if we define "casual enthusiast" as "a person who just wants to put together a PC and crank everything to 11."
Intel's product segmentation has fenced us in more and more about which chips we can and can't tweak as the years go by, but segmentation aside, the continued race to wring more performance out of next-gen silicon may put the final nail in casual overclocking's coffin regardless of whose chip you choose. The practice might not die this year or even next, but come back three to five years from now and it'd surprise me if we dirty casuals are still seriously tweaking multipliers and voltages in our motherboard firmware for anything but DRAM.
The horsemen of this particular apocalypse are already riding. Just look at leading-edge AMD Ryzen CPUs, Intel's first Core i9 mobile CPU (sorta), and Nvidia's Pascal graphics cards. The seals that have burst to herald their arrival come from the dwindling reserves of performance that microarchitectural improvements and modern lithography processes have left chip makers to tap.
As per-core performance improvements at the microarchitectural level have largely dried up, clock speeds have become a last resort for gaining demonstrable improvements from generation to generation for today's desktop CPUs. It's no longer going to be possible for companies to leave clock-speed margins on the table through imprecise or conservative characterization and binning practices—margins that give casual overclockers reason to tweak to begin with. Tomorrow's chips are going to get smarter and smarter about their own capabilities and exploit the vast majority of their potential through awareness of their own electrical and thermal limits, too.
AMD has long talked about improving the intelligence of its chips' on-die monitoring to lift unnecessarily coarse electrical and thermal restrictions on the dynamic-voltage-and-frequency-scaling curve of a particular piece of silicon. Its Precision Boost 2 and XFR 2 algorithms are the most advanced fruits of those efforts so far.
Put a sufficiently large liquid cooler on a Ryzen 7 2700X, for example, and that chip may boost all the way to 4 GHz under an all-core load. Even if you manage to eke out another 200 MHz or so of clock speed from such a chip in all-core workloads, you're only overclocking the chip 5% past what its own monitoring facilities allow for. That performance comes at the cost of higher voltages, higher power consumption, extra heat, and potentially dicier system stability, not to mention that the 2700X is designed to boost to 4.35 GHz on its own in single-core workloads. Giving up any of that single-core oomph hurts.
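The arithmetic behind that 5% figure (and the similar percentages later in this piece) is simple, but for the curious, here's a quick sketch using the clock figures quoted above. The helper function name is my own, not anything from a vendor tool:

```python
def oc_headroom(stock_mhz: float, oc_mhz: float) -> float:
    """Manual-overclock gain as a percentage over the stock clock."""
    return (oc_mhz - stock_mhz) / stock_mhz * 100

# Ryzen 7 2700X all-core: 4.0 GHz from Precision Boost 2, ~4.2 GHz by hand
print(f"{oc_headroom(4000, 4200):.1f}%")  # 5.0%
```

The same math puts the Core i7-8700K's 4.3-to-5.0 GHz jump discussed later at about 16%.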
When the difference between a Ryzen 5 2600 and a Ryzen 5 2600X is just $20 today, and a Ryzen 7 2700 sells for just $30 less than its X-marked counterpart, I have to wonder whether the tweaking is really worth the time. If one can throw $100 or so of coolant and copper at the problem to extract 95% of a chip's performance potential, versus hours of poking, prodding, and testing for stability, well, I know what I'd rather be doing. As I get older, I have less and less free time, and if the choice comes down to gaming or not gaming, I'm going to do the thing that lets me game more.
The slim pickings of overclocking headroom for casual tweakers these days don't stop with CPUs, either. Nvidia's Pascal graphics cards enjoy a deadly effective dynamic-voltage-and-frequency-scaling algorithm of their own in GPU Boost 3.0. Grab a GeForce GTX 1080 Ti equipped with a massive air cooler or hybrid cooling arrangement, for just one example, and you're already within single digits of the GP102 GPU's potential.
We got just 6% higher clock speeds versus stock out of a massive air-cooled GTX 1080 Ti and about 7% higher clocks out of a liquid-cooled version of that card, all at the cost of substantially higher system power draw. I don't feel like the extra heat and noise generated that way are worth it unless you simply enjoy chasing the highest possible benchmark numbers. That's a fine hobby in its own right, but single-digit gains just aren't going to make me pursue them for their own sake these days.
Lest you think I'm being fatalistic here, there was a time, almost 20 years ago now, when ye olde Intel Celeron 300A with 128 KB of L2 cache could famously be pushed to a whopping 68% higher clock than its stock specification with a good sample and the attentions of a casual enthusiast. The 300A sold for much lower prices than the chips it proceeded to outpace at those speeds, too. When we talk about casual overclocking, the Celeron 300A is perhaps the high-water mark for what made that kind of tweaking worth it.
Sure, you might take a Core i7-8700K from its 4.3 GHz all-core speed to 5 GHz under non-AVX workloads, but that 16% of extra speed comes with roaring CPU fans and an exceedingly hot chip unless you perform delidding surgery to replace Intel's thermal interface material. You can bet that double-digit margin will rapidly shrink as soon as Intel releases a next-generation architecture with more intelligent Turbo Boost behavior that's not just tied to the number of active cores.
Turbo Boost 2.0 was introduced with Sandy Bridge chips all the way back in 2011, and the technology has only received a couple of notable tweaks since then, like Turbo Boost Max 3.0 on Intel's high-end desktop platforms and the XFR-like Thermal Velocity Boost on the Core i9-8950HK. As I've said, Precision Boost 2 and XFR 2 both show that there's more dynamic intelligence to be applied to CPU voltages and frequencies.
AMD, to its credit, is at least not working against casual overclockers with TIM under its high-end chips' heat spreaders or with product lines segmented by locked and unlocked multipliers, but that regime may only last as long as better microarchitectures and process technology continue to expose large amounts of clock-speed headroom. The company's lower-end APUs already use TIM under the heat spreader, though, limiting overclocking potential somewhat. More capable Precision Boost and XFR algorithms may ultimately become the primary means of setting AMD CPUs apart from one another on top of the TDP differences we've already come to expect.
As we run harder and harder into the limits of silicon, today's newly competitive CPU market will require all chip makers to squeeze every drop of performance they can out of their products at the factory to set apart their high-end products and motivate upgraders. We'll likely see similar sophistication from future graphics cards, too. Leaving hundreds of megahertz on the table doesn't make dollars or sense for chip makers, and casual overclockers will likely be left with thinner and thinner pickings to extract through manual tweaking. If the behavior of today's cutting-edge chips is any indication, however, we'll have more time to game and create. Perhaps the end of casual overclocking won't be entirely sad as a result.