The guy giving the presentation wouldn't say what the underlying mechanism in the real-time display was. I can give a little more detail on the static ones though (since he was willing to talk about those and there were several examples on display), and can speculate on what they might be doing in the real-time one.
The static ones weren't projected "off of" anything. They looked like sheets of flat plastic a few mm thick, which were (as I noted before) lit from above with a normal halogen incandescent bulb. The illusion of objects floating in the air above the sheet (and receding below its surface as well) was rather eerie. It was a full 360-degree view; you could walk around it or spin it (it was mounted on a turntable) to see it from all sides. One of the examples showed two different things depending on which hemisphere it was viewed from.
There is a viewing-angle limitation -- your eyes need to be somewhere in the 90-degree cone coming out of the center of the "image"; if you look at it from an oblique angle (more than 45 degrees off-axis), it just looks like a sheet of flat black plastic.
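To put the geometry in concrete terms: a 90-degree cone centered on the sheet's surface normal just means your line of sight has to be within 45 degrees of straight-on. A quick Python sketch (my own illustration, not anything from the talk):

    import numpy as np

    def within_viewing_cone(view_dir, normal=(0.0, 0.0, 1.0), half_angle_deg=45.0):
        # view_dir: vector from the sheet toward your eye.
        # The 90-degree full cone from the demo corresponds to a
        # 45-degree half-angle around the surface normal.
        v = np.asarray(view_dir, dtype=float)
        n = np.asarray(normal, dtype=float)
        cos_a = np.dot(v, n) / (np.linalg.norm(v) * np.linalg.norm(n))
        return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))) <= half_angle_deg

    print(within_viewing_cone((0.3, 0.0, 1.0)))  # ~17 deg off-axis -> True
    print(within_viewing_cone((2.0, 0.0, 1.0)))  # ~63 deg off-axis -> False: flat black plastic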
They have the ability to generate these things from almost any input source -- CAD models, 3D scans of physical objects, or even virtual 3D reconstructions created from a series of still shots or video of a physical object taken from multiple angles. They've apparently been using these things to provide 3D maps of "areas of interest" in Iraq and Afghanistan to the military, but the price is coming down to the point where commercial/consumer applications are now becoming feasible as well.
It was actually a little creepy -- if you try to touch the floating image, you get this vague sense of "wrongness" because your brain says there should be something there, but (obviously) your hand goes right through it.
I'll try to summarize the tech as I understand it; the slides from the presentation haven't been made available for download yet, so this is from memory.
Like any hologram, the goal is to create virtual "wavefronts" of light, which arrive at your eye as if they had emanated from a particular point in space. "Classic" holograms required a physical object, which was illuminated with a laser; the interference pattern between the light reflected off the object and a reference beam was recorded in photographic film. When later illuminated with a laser and viewed, these interference patterns re-create the virtual wavefronts, tricking your eye into seeing the original 3D object. Unlike the binocular 3D tech used in 3D movies, etc., you're not just using binocular parallax to apply the illusion of depth to a 2D image; you're simulating the light being reflected off of a physical object, in all directions (well, in this case at least, all directions within a 90-degree cone...) at once.
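If you want to see the recording/playback trick in miniature, here's a toy 1-D simulation (again, just my own illustration of the standard holography math, nothing specific to their process). The film records only intensity, but the interference fringes encode the object wave's phase relative to the reference beam, and re-illuminating with the reference beam regenerates a term proportional to the original object wave:

    import numpy as np

    x = np.linspace(0.0, 1.0, 2048)   # position across the "film", arbitrary units
    k = 2 * np.pi * 200               # spatial frequency of the laser light

    # Object wave: a point source off to one side (curved wavefront);
    # reference wave: a plane wave hitting the film at an angle.
    obj = 0.5 * np.exp(1j * k * np.sqrt((x - 0.3)**2 + 0.2**2))
    ref = np.exp(1j * k * 0.4 * x)

    # Recording: the film stores only intensity, but |obj + ref|^2
    # contains cross terms that capture the object wave's phase.
    fringes = np.abs(obj + ref)**2

    # Playback: illuminate the recorded fringes with the reference beam.
    playback = fringes * ref
    # Subtract the uniform "DC" terms; what's left includes
    # obj * |ref|^2 -- a scaled copy of the original object wavefront.
    cross_terms = playback - (np.abs(obj)**2 + np.abs(ref)**2) * ref
    print(np.allclose(cross_terms, obj * np.abs(ref)**2 + np.conj(obj) * ref**2))  # True

That obj * |ref|^2 term is the virtual wavefront your eye sees; it's indistinguishable from light that actually bounced off the object.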
The "magic sauce" these guys seem to have is the ability to generate a color hologram from pretty much *any* 3D representation of an object, real or virtual. If I followed the talk correctly, what they're basically doing is creating over a million tiny (0.7 mm by 0.7 mm) color holograms in the plastic sheet (they call these "hogels", for "holographic pixels"). Each hogel, when viewed from any angle within the 90-degree viewing cone, uses the above-mentioned optical interference effects to reflect precisely the color and intensity of light that you would see if viewing the object from that angle. Creating these holograms is an extremely computationally intensive task, since each hogel must be rendered from all possible viewing angles; I have no idea what the angular resolution is, but whatever it was, there was no visible jerkiness when the hologram was rotated -- it was very smooth. But the *really* tricky part is exposing the film -- IIRC it was stated that the distance between the head that does the "writing" of the hogels and the film must be controlled to sub-micrometer precision, or the interference patterns are destroyed.
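Some back-of-the-envelope arithmetic on the scale of that rendering job (the 0.7 mm hogel pitch is from the talk; the sheet size and the 1-degree angular step are purely my guesses):

    # Hypothetical numbers: 0.7 mm hogel pitch (from the talk), plus an
    # assumed 0.7 m square sheet and ~1-degree angular sampling.
    hogel_pitch_mm = 0.7
    sheet_mm = 700.0

    hogels = (sheet_mm / hogel_pitch_mm) ** 2   # 1,000,000 hogels
    views_per_hogel = 90 * 90                   # 1-degree steps across a 90-degree cone

    print(f"hogels:        {hogels:,.0f}")
    print(f"views/hogel:   {views_per_hogel:,}")
    print(f"total samples: {hogels * views_per_hogel:,.0f}")  # ~8.1 billion

Even if the renderer shares work across hogels, you're still looking at billions of direction samples per print, which squares with "extremely computationally intensive".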
For the real-time display, each hogel would need to be able to emit a 90-degree cone of light, with the color and intensity modulated based on the direction of emission. Off the top of my head, the only way I can think of doing this would be via some combination of next-gen DLP imaging elements (micromirrors) allowing fine control over the direction of reflection in two dimensions, lasers, and rotating mirrors. IIRC it was stated that each *frame* of the real-time display represents approximately 1.5 TB of data, and that they've achieved frame rates of 15 Hz... that's a frikkin' *lot* of data!
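The arithmetic on those quoted figures is easy to check (the per-hogel split assumes the real-time display has a hogel count similar to the static prints, which is my assumption, not something they stated):

    frame_bytes = 1.5e12   # 1.5 TB per frame (quoted)
    fps = 15               # achieved frame rate (quoted)

    print(f"throughput: {frame_bytes * fps / 1e12:.1f} TB/s")  # 22.5 TB/s

    hogels = 1_000_000     # assumed, matching the static prints
    print(f"per hogel:  {frame_bytes / hogels / 1e6:.1f} MB/frame")  # 1.5 MB

22.5 TB/s of sustained throughput is far beyond any single commodity link or bus, which may be part of why they wouldn't talk about the mechanism.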
Here's the web site of the company that gave the presentation:
http://www.zebraimaging.com/