Is there something at a conceptual level that would help me better understand modern video? Or is it just the mudpile it sometimes seems to be? Here is roughly how I conceptualize what goes on in the computer to put a picture on my screen or on that new TV. Any comments on how I might see this more clearly, on what is actually there, and on where things might go would be welcome.
It seems digital output would be just like digitizing VGA, but then that is just the RGB channels presented as for a CRT, complete with scan lines, blanking intervals and such. I would have guessed digital video might instead work something like a camera sensor, with a shift-register-style data pump, but that doesn't seem to be the case.
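To make that CRT inheritance concrete, here is a rough sketch (Python, using the commonly published 1080p60 figures; treat the numbers as illustrative rather than authoritative) of how a digital mode is still described as active pixels plus front porch, sync and back porch, exactly as for an analog monitor:

# A rough sketch of how a "digital VGA" link still carries CRT-style timing.
# Figures are the usual CEA-861 1080p60 values; purely illustrative.

from dataclasses import dataclass

@dataclass
class VideoTiming:
    h_active: int
    h_front_porch: int
    h_sync: int
    h_back_porch: int
    v_active: int
    v_front_porch: int
    v_sync: int
    v_back_porch: int
    refresh_hz: float

    @property
    def h_total(self) -> int:
        return self.h_active + self.h_front_porch + self.h_sync + self.h_back_porch

    @property
    def v_total(self) -> int:
        return self.v_active + self.v_front_porch + self.v_sync + self.v_back_porch

    @property
    def pixel_clock_hz(self) -> float:
        # Every pixel slot, visible or blanked, still gets a clock tick.
        return self.h_total * self.v_total * self.refresh_hz

# 1080p60 as usually quoted: 2200 x 1125 total slots, 148.5 MHz pixel clock.
timing_1080p60 = VideoTiming(1920, 88, 44, 148, 1080, 4, 5, 36, 60.0)
print(timing_1080p60.h_total, timing_1080p60.v_total)   # 2200 1125
print(timing_1080p60.pixel_clock_hz / 1e6, "MHz")       # 148.5 MHz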
As I understand it, DVI/HDMI uses two methods of data compression while maintaining timing characteristics that go back to CRT days, including the blanking intervals. One method is reduced resolution for the color channels relative to brightness (luminance vs. chrominance), harking back to old analog color TV technology. The other is JPEG-like, where blocks of screen real estate are analyzed as a whole. How that JPEG-like compression and the scan-line model work together is a conundrum, since the display device would seem to need several scan lines in hand before it could show any of them; I guess modern displays assemble an entire frame before putting it up for display. But if they do that, why keep the timing at all instead of some more direct pixel-addressing scheme (like in image files)?
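The luminance/chrominance part, at least, I can sketch concretely. As I understand it, the usual move is to convert RGB into a luma signal plus two color-difference signals and then carry the color at reduced resolution. Here is a rough Python illustration using the BT.709 luma weights (simplified full-range math, plain averaging for the downsample, no rounding or clipping care):

# A minimal sketch of the "less resolution for color than for brightness" idea:
# convert RGB to luma/chroma (BT.709 weights) and keep the chroma planes at
# half resolution in each direction (4:2:0-style).

import numpy as np

def rgb_to_ycbcr(rgb: np.ndarray) -> tuple[np.ndarray, np.ndarray, np.ndarray]:
    """rgb: H x W x 3 float array in [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  = 0.2126 * r + 0.7152 * g + 0.0722 * b   # luma: kept at full resolution
    cb = (b - y) / 1.8556                       # blue-difference chroma
    cr = (r - y) / 1.5748                       # red-difference chroma
    return y, cb, cr

def subsample_420(chroma: np.ndarray) -> np.ndarray:
    """Average 2x2 blocks: one chroma sample per four luma samples."""
    h, w = chroma.shape
    return chroma[:h//2*2, :w//2*2].reshape(h//2, 2, w//2, 2).mean(axis=(1, 3))

rgb = np.random.rand(8, 8, 3)
y, cb, cr = rgb_to_ycbcr(rgb)
cb_420, cr_420 = subsample_420(cb), subsample_420(cr)
# 64 luma samples but only 16 of each chroma plane: half the data of raw RGB.
print(y.size, cb_420.size, cr_420.size)   # 64 16 16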
The blanking intervals, as I understand it, are being used for audio and control signals. That is some testimony to bandwidth, if the ten to fifteen percent or so of signal time spent in blanking is sufficient for 7.1 channels of audio plus some other stuff. It also raises questions about how that data is reassembled for continuous output at the proper rate.
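A back-of-the-envelope check (again a rough Python sketch, using the 1080p60 timing above and ignoring whatever packet overhead the real control and data structures impose) suggests the blanked pixel slots offer far more raw capacity than 7.1 PCM audio needs:

# Scale check only: raw capacity of the blanked pixel slots at 1080p60
# versus the bit rate of 8 channels of PCM audio.

h_active, h_total = 1920, 2200
v_active, v_total = 1080, 1125
refresh = 60
bits_per_pixel_slot = 24          # assume 3 channels x 8 bits per slot

total_slots   = h_total * v_total * refresh
active_slots  = h_active * v_active * refresh
blanked_slots = total_slots - active_slots

blanking_fraction   = blanked_slots / total_slots
blanking_bits_per_s = blanked_slots * bits_per_pixel_slot

audio_bits_per_s = 8 * 48_000 * 24      # 8 ch x 48 kHz x 24-bit PCM

print(f"blanking fraction: {blanking_fraction:.1%}")               # ~16%
print(f"raw blanking capacity: {blanking_bits_per_s/1e6:.0f} Mbit/s")  # ~578
print(f"7.1 PCM audio needs:   {audio_bits_per_s/1e6:.1f} Mbit/s")     # ~9.2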
The old analog mechanisms were highly synchronous. It seems the newer digital methods keep some of this as apparently unnecessary artifacts, and I wonder why. But that gets into the issues of streaming, how codecs are defined for live display, and their particular problems. Then there is the modern codec model, its processing needs, how those are being met in hardware, and how its place in the video reception-and-display chain has evolved.
The reliance of modern digital video on concepts developed in the very early days of TV says something about the quality of that early research, but I wonder whether there is any indication we will get something other than incremental improvements on it.
Then there is video over FireWire and USB: do they follow the DVI scheme or something different?
Is this view somewhat in line with reality? When might the future break with the past, and if it won't, why not? If I'm lost at sea, which way should I point the life raft?