LOL, there you go; Proof that an algorithm can recognise the problem. Why do we let humans do rubbish work when we know a machine will do it perfectly?
It's even dumber than that. There isn't even a need for fancy algorithms to guess where things may be clipped if you do it ahead of time. The check is simply: "Will this audio file, when mastered at this level of gain, cause the level to exceed the range of a 16-bit (or 24-bit, in the case of HD audio) number?" Literally "is A greater than B or less than -B". A modern desktop computer could check an entire CD's worth of audio for this in a couple of seconds.
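That "is A greater than B or less than -B" check really is all there is to it. Here's a minimal sketch in Python (the function names and the ramp test signal are mine, just for illustration) that flags samples pinned at 16-bit full scale:

```python
def count_clipped(samples, bit_depth=16):
    """Count samples pinned at full scale for the given bit depth."""
    hi = 2 ** (bit_depth - 1) - 1   # 32767 for 16-bit
    lo = -(2 ** (bit_depth - 1))    # -32768 for 16-bit
    return sum(1 for s in samples if s >= hi or s <= lo)

def quantise(samples, bit_depth=16):
    """Hypothetical mastering step: round to integers and hard-limit."""
    hi = 2 ** (bit_depth - 1) - 1
    lo = -(2 ** (bit_depth - 1))
    return [max(lo, min(hi, int(round(s)))) for s in samples]

# a ramp whose endpoints exceed 16-bit full scale before quantisation
mastered = quantise(range(-40000, 40001, 8000))
print(count_clipped(mastered))  # -> 2 (both endpoints got squared off)
```

One comparison per sample; that's why scanning a whole CD takes seconds, not minutes.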
Sure, you don't have that luxury in a live recording situation, but that's not what we're dealing with here. These are studio recordings, which have been carefully recorded and mixed. Then some dumbass decides to crank it to 11 during mastering, and squares all the peaks off.
I get really annoyed that people encoding things can't use the whole volume range. Nothing's worse than having to crank my sound system up twice as far as it should be, just because the audio of what I'm watching only utilises 0-30% of the available channel range. Not only does that amplify the interference from other sources, it also means that when I get a notification or other noise at "normal" volume it startles/deafens me and probably doesn't do my surround system any good either
Time for a digression/rant...
Real-life audio is very "peaky". The loudest transients, like drum hits, can have an instantaneous peak many times higher than the surrounding audio; it doesn't sound that way, though, because the peak is so brief. To keep the peaks from overloading things (or, in the case of vinyl LPs back in the day, from cutting over into the next groove), they need to be kept at a safe level. But this means the average level (which is what people mostly perceive as the volume level) is much lower.
A few decades ago, some genius figured out that if your song sounded louder on the radio than everyone else's, people were more likely to notice and remember it. So how do we make a song sound louder overall, without making the peaks too big? Well, we raise the overall level, but reduce the gain during the peaks to keep things in a safe range. And this is how the "loudness wars" were born. Originally this gain manipulation was done manually ("gain riding") by the recording engineer. Nowadays it is done via various automated "dynamic range compression" algorithms.
In a nutshell, DR compression makes everything sound louder. But if overused (as it too often is), it squeezes the life out of the music. The transient peaks are part of what makes music sung and played on real instruments by real people sound, well, real. DR compression is effectively clipping's less evil twin. It tries to preserve the overall shape of the waveforms, but varies the gain to maintain near-constant volume. With clipping, OTOH, you just crank the gain, and throw away the tops and bottoms of any waveforms that no longer fit.
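The contrast between the two is easy to show in code. This is a deliberately naive sketch (real compressors smooth the gain with attack/release times; the function names and the sample signal are mine), but it captures the difference: the clipper flattens the peak, the compressor shrinks it while keeping its shape.

```python
import math

def hard_clip(x, ceiling=1.0):
    """Clipping: crank the gain and throw away whatever no longer fits."""
    return [max(-ceiling, min(ceiling, s)) for s in x]

def compress(x, threshold=0.5, ratio=4.0):
    """Naive instantaneous compressor: above the threshold, the signal
    grows only 1/ratio as fast, so peaks are tamed, not flat-topped."""
    out = []
    for s in x:
        mag = abs(s)
        if mag > threshold:
            mag = threshold + (mag - threshold) / ratio
        out.append(math.copysign(mag, s))
    return out

peaky = [0.1, 0.2, 0.9, -1.4, 0.15]
print(hard_clip(peaky))   # the -1.4 peak becomes a flat -1.0
print(compress(peaky))    # the peak shrinks to ~-0.73 but keeps its shape
```

Note that the quiet samples pass through the compressor untouched; only the peaks get squeezed, which is exactly what lets the average level be pushed up afterwards.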
Ever notice how commercials often sound louder than the program they appear during? That's intentional too, and it's DR compression -- advertisers are trying to get your attention by being louder.
So where am I leading with all this? Well, the things that you are complaining are "too soft" probably haven't been DR compressed. It is quite possible that they are actually of higher fidelity, in the sense that they preserve more of the original dynamic range of the material.
Which brings us to something called "replaygain". This is an algorithm that analyzes an audio track (or album), using a model of human hearing to estimate how loud it will sound. Then a "gain factor" is placed in the meta-data of the audio tracks. Replaygain-aware playback software and devices can then automatically turn the gain up or down at the start of a track. This eliminates the annoyance factor of having to turn the gain up or down manually, while preserving the dynamic range of the content (since the gain is set just once per track, not instantaneously as the intensity varies within a track). There's also an "album mode", which applies a constant gain setting across multiple related tracks; this preserves the relationship between loud and soft movements in classical pieces, and prevents jumps in gain for albums where multiple tracks run one into another (think the medley at the end of Abbey Road).
Replaygain doesn't alter the original content at all; it's just a hint to the playback equipment telling it where the gain should be set to preserve a constant perceived volume level for a given track/album relative to other content. In a perfect world, all audio content would have replaygain meta-data.
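The playback side of this is trivial, which is part of the appeal. A sketch (function name is mine; the dB-to-linear formula is the standard one ReplayGain tags use):

```python
def apply_replaygain(samples, gain_db):
    """Apply a ReplayGain adjustment (in dB, read from the track's
    metadata) to float samples. One constant gain for the whole
    track or album, so the dynamics inside it are untouched."""
    factor = 10 ** (gain_db / 20.0)
    return [s * factor for s in samples]

# a track tagged at -6.02 dB plays back at roughly half amplitude
print(apply_replaygain([1.0, -0.5], -6.02))
```

The source file never changes; throw away the tag and you're back to the original samples bit-for-bit.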
Don't get me wrong; DR compression has its place. For spoken word, it can improve intelligibility by compensating for moments where the person leans away from the mic, for example. And used sparingly in music, it is unobtrusive and can improve the overall listening experience in noisy listening environments, and by keeping the peaks from exceeding the limitations of typical consumer audio gear. But it is just way over-used. The loudness wars need to DIAF.