just brew it! wrote: Parallax wrote: just brew it! wrote: We're basically trading off blur for flicker; with fast enough frame rates the flicker will not be perceptible. Sounds like we're nearly there.
Please be careful with how much flicker you introduce though. Perceptible flicker frequency varies greatly from person-to-person, and often reaches several hundred Hz. Imperceptible flicker (i.e. faster than perceptible flicker) can continue to have neurological effects even into the multiple kHz range.
Which is why I said "we're nearly there" instead of "we're there".
Correct, sir. Flicker is a tradeoff versus motion blur.
I'll use scientific terminology here:
Sample-and-hold effect can only be eliminated by shortening frame sample lengths.
Sample length (the length of time a stationary refresh stays illuminated) can only be reduced by two methods:
(1)
More samples. Higher "fps=Hz"! Extra frames and extra refreshes, either native or interpolated (e.g. LCD motion interpolation)
and/or
(2)
Black period between samples. Regardless of how it's done (e.g. CRT phosphor decay, plasma pixel modulation, DLP pixel modulation, backlight control, black frame insertion, SED pulses, OLED pulses), it's _all_ exactly the same perception trick from the human eye's perspective: the black period between refreshes. For example, you need at least a 10:1 ratio (dark 90% of the time) for 90% motion blur elimination.
Example:
High Speed 1000fps YouTube Video of CRT Scanning -- A 'point' (pixel) on CRT is dark roughly 90% of the time.
All CRTs do that. Every single television tube ever made. Every single monitor tube. Any tube that does a raster. Can you believe our human eyes turn that scanning mess into a perfectly clear image? It's an amazing perception trick of the human visual system.
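The dark-period arithmetic above can be sketched in a few lines of Python. The numbers here are illustrative assumptions of mine (a 60 Hz tube with roughly 1.7 ms of phosphor illumination per refresh), not measurements from this post; the point is just that the blur reduction tracks the fraction of each refresh spent dark.

```python
# Illustrative (assumed) numbers: a 60 Hz CRT whose phosphor spot is
# lit for ~1.7 ms out of each ~16.7 ms refresh period.
refresh_period_ms = 1000 / 60      # one refresh lasts ~16.7 ms
lit_time_ms = 1.7                  # assumed phosphor illuminate-and-decay time

duty_cycle = lit_time_ms / refresh_period_ms   # fraction of time the pixel is lit
blur_reduction = 1 - duty_cycle                # fraction of sample-and-hold blur removed

# With these numbers the pixel is dark ~90% of the time,
# giving roughly 90% motion blur elimination versus full sample-and-hold.
print(f"dark {blur_reduction:.0%} of the time -> ~{blur_reduction:.0%} less motion blur")
```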
Yes, CRT was a perception trick. The real world does not flicker like a CRT.
With an oscilloscope and a photodiode, I have measured the sample length of a LightBoost LCD to be 2.4 milliseconds (roughly 1/400 sec). It was also confirmed to have 3 times less motion blur than a 144 Hz sample-and-hold display. On a CRT, the sample length of a pixel is approximately 1 to 2 milliseconds (the phosphor illuminate-and-decay cycle).
Excluding other variables (eye-tracking inaccuracies, pixel persistence, which is now bypassable, and source-generated motion blur), motion blur is already scientifically proven to be directly proportional to sample length. For example, a sample length of 6 ms always has twice as much motion blur as a sample length of 3 ms.
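That proportionality is simple enough to sketch. In the model below, the blur trail length is the eye-tracking speed times the sample length; the panning speed of 1000 px/sec is an arbitrary number I've chosen for illustration, and the 144 Hz figure assumes full-persistence sample-and-hold (the frame stays lit for the entire 1/144 sec refresh).

```python
def motion_blur_px(speed_px_per_sec, sample_length_ms):
    """Perceived blur trail length in pixels: during eye tracking, a
    static frame smears across the retina for as long as it stays lit
    (the sample-and-hold effect)."""
    return speed_px_per_sec * sample_length_ms / 1000

speed = 1000  # px/sec -- arbitrary illustrative panning speed

# 6 ms sample length has exactly twice the blur of 3 ms:
assert motion_blur_px(speed, 6) == 2 * motion_blur_px(speed, 3)

# 144 Hz full-persistence sample-and-hold (~6.9 ms per frame)
# versus the measured 2.4 ms LightBoost strobe:
ratio = (1000 / 144) / 2.4
print(f"{ratio:.1f}x less motion blur")   # ~2.9x, consistent with "3 times less"
```

This is why the "3 times less motion blur" figure for LightBoost versus a 144 Hz sample-and-hold display falls straight out of the sample lengths alone.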
Also, here are my measurements from my oscilloscope, with a photodiode pressed against the LCD in LightBoost mode: [oscilloscope screenshot]
PixPerAn (a motion test) also readily shows motion blur elimination directly proportional to the reduction in sample length -- exactly the same motion blur behavior as CRT flicker. To the human eye, it is exactly the same 'perception trick' regardless of how the flicker is accomplished. Even the university and TV manufacturer research agrees (scroll down halfway for the relevant links).
In other words, it is all scientifically proven to be exactly the same perception trick. CRT flicker, plasma flicker, precisely controlled backlight strobing, SED pixel strobing, OLED pixel strobing (for impulse-driven OLEDs) -- all the same. Shortened sample lengths are what eliminate motion blur. The phrase "sample-and-hold" is common in display research / display science papers.
Just as with all technologies, CRT is an artificial invention, because the real world is not scanned/flickered in a sequential raster scan. It is all a perception trick. Human vision is essentially an analog system that does not have a "frames per second". Dividing moving objects into a flipbook of discrete images is an artificial invention made necessary by recording and playback: there is no way to record motion images without dividing them into separate, discrete frames played back in sequence (e.g. 24fps, 60fps, etc).

If we call LCD with LightBoost a perception trick, then it is total nonsense not to call CRT a perception trick as well. That said, it's a great and wonderful perception trick -- an amazing electronic implementation of the zoetrope principle -- and when you truly think about it, it is impressive (from a vision research perspective) that our human eyes can turn a flickery, scanny CRT into a continuous motion image with blur-free results. We're just used to not calling it an amazing parlour trick or perception trick, because CRT raster scanning is the way television has been done for more than 60 years.
Feel free to ask me any questions!