Just had a thought cross my mind. These days, manufacturers of CPUs and GPUs alike are pushing their chips to the ragged edge from the factory with dynamic frequency-boost algorithms. After TR introduced inside-the-second analysis, the industry quickly became more attuned to frame-time consistency and largely solved the "problem." But with dynamic clock rates becoming increasingly common, perhaps we're effectively taking a step backward in smoothness in order to consistently squeeze out every last ounce of performance a system can muster. And your experience may vary greatly depending on how much you've invested in cooling.
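To put it another way: two systems can post the same average FPS while feeling very different, which is exactly what inside-the-second metrics expose. Here's a rough sketch of that idea; the function names and all the frame-time numbers below are made up for illustration, not real benchmark data.

```python
# Hypothetical sketch of inside-the-second frame-time analysis:
# instead of averaging FPS, summarize the distribution of individual
# frame times. All numbers here are invented illustrative data.

def frame_time_metrics(frame_times_ms, budget_ms=16.7):
    """Summarize a list of per-frame render times (in milliseconds)."""
    ordered = sorted(frame_times_ms)
    n = len(ordered)
    # 99th-percentile frame time: 99% of frames complete at least this fast.
    p99 = ordered[min(n - 1, int(n * 0.99))]
    # "Time spent beyond X": total ms by which slow frames overran the budget.
    beyond = sum(t - budget_ms for t in frame_times_ms if t > budget_ms)
    avg_fps = 1000.0 * n / sum(frame_times_ms)
    return {"avg_fps": round(avg_fps, 1),
            "p99_ms": round(p99, 1),
            "ms_beyond_budget": round(beyond, 1)}

# Two hypothetical runs with identical average FPS but very
# different consistency (e.g. clock-throttling hiccups in the second):
steady = [16.0] * 60
spiky = [12.0] * 55 + [60.0] * 5

print(frame_time_metrics(steady))  # smooth: low p99, nothing past budget
print(frame_time_metrics(spiky))   # same avg FPS, ugly p99 and overruns
```

Both runs average 62.5 FPS, but the spiky one's 99th-percentile frame time and time-beyond-budget numbers give away the stutter, which is the kind of regression a thermally-limited boost algorithm could reintroduce.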
Main: i5-3570K, ASRock Z77 Pro4-M, MSI RX480 8G, 500GB Crucial BX100, 2 TB Samsung EcoGreen F4, 16GB 1600MHz G.Skill @1.25V, EVGA 550-G2, Silverstone PS07B
HTPC: A8-5600K, MSI FM2-A75IA-E53, 4TB Seagate SSHD, 8GB 1866MHz G.Skill, Crosley D-25 Case Mod