I think I understand what is happening. The frequency at which the monitor switches on and off to regulate brightness is nearly in phase with the camera, but not exactly. If it were perfectly in phase, I would get a steady image. But, as it is nearly there, it shows smooth transitions which are not real but perceived. I think aliasing, the phenomenon Nyquist's theorem is about, was at work here.
Yes. You'd get a steady image, or black, depending on whether they were exactly in or out of phase (assuming the duration of the backlight's "off" pulse exactly matched the camera's shutter speed). But because they're close but not identical, they drift into and out of phase slowly, and you get a "signal" at the difference of the two frequencies, produced by the product of the two waveforms. Superheterodyne receivers
take advantage of this effect to bring a high-frequency input down to a much lower frequency where it is easier to manipulate, by mixing it with a second signal that is close to, but not exactly at, the frequency of the target (this is, among other things, what made cheap radar detectors possible).
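A quick numpy sketch of that mixing step (the frequencies here are made up for illustration, not from any real receiver): multiplying a "signal" tone by a nearby "local oscillator" tone produces components at the difference and sum frequencies, which an FFT shows directly.

```python
import numpy as np

# A "signal" at 1000 Hz mixed with a "local oscillator" at 950 Hz
# (illustrative values, not any real radio's frequencies).
fs = 8000                  # sample rate, Hz
t = np.arange(fs) / fs     # one second of samples
f_sig, f_lo = 1000.0, 950.0

# Mixing is multiplication. The identity
#   sin(a) * sin(b) = 0.5 * (cos(a - b) - cos(a + b))
# says the product contains the difference (50 Hz) and sum (1950 Hz).
mixed = np.sin(2*np.pi*f_sig*t) * np.sin(2*np.pi*f_lo*t)

spectrum = np.abs(np.fft.rfft(mixed))
# One second of samples means the bin spacing is 1 Hz,
# so a bin index is a frequency in Hz.
peaks = sorted(np.argsort(spectrum)[-2:])
print(peaks)   # [50, 1950]
```

A receiver then low-pass filters away the 1950 Hz sum component and works with the easy-to-handle 50 Hz difference, which is exactly the slow flicker the camera is recording.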
You actually used to see this in videos of CRTs: if the camera and CRT were very close in frequency, instead of a luminance variation you'd see the blanking interval slowly roll up (or down) the screen. This is also related to the old wagon wheel effect.
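The wagon wheel version has a tidy formula: the apparent rate is the true rate folded (aliased) into plus or minus half the frame rate. A minimal sketch, with `apparent_rate` being a helper name I'm inventing here:

```python
# Apparent rotation rate of a wheel spinning at f_true rev/s when
# filmed at fs frames/s: the true rate aliases into [-fs/2, fs/2).
# A negative result means the wheel appears to spin backwards.
def apparent_rate(f_true, fs):
    return (f_true + fs/2) % fs - fs/2

print(apparent_rate(24.5, 24.0))   # 0.5  -> slow forward roll
print(apparent_rate(23.5, 24.0))   # -0.5 -> slow backward roll
```

The same arithmetic explains the rolling blanking bar: the CRT's refresh is a fraction of a hertz off the camera's frame rate, so the bar drifts at that small difference.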
And as JAE points out, anybody who has tuned a guitar using harmonics has taken advantage of this effect, whether they understood what was happening or not.
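The guitar case is the same math in audio. Two strings at 440 Hz and 442 Hz (values picked for illustration) are numerically identical to a single 441 Hz tone whose loudness swells and fades, which is the beating you tune out:

```python
import numpy as np

fs = 44100
t = np.arange(fs) / fs            # one second
f1, f2 = 440.0, 442.0             # one string slightly sharp

two_tones = np.sin(2*np.pi*f1*t) + np.sin(2*np.pi*f2*t)

# Sum-to-product identity: the pair equals a tone at the average
# frequency (441 Hz) under a slow cosine envelope. The ear hears
# |f2 - f1| = 2 beats per second, since loudness peaks twice per
# envelope cycle.
envelope_model = 2*np.cos(np.pi*(f2 - f1)*t) * np.sin(np.pi*(f1 + f2)*t)

print(np.allclose(two_tones, envelope_model))   # True
```

Tuning by harmonics just slows the beat rate toward zero: when the beats stop, the two frequencies match.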