Fun with raytracing

Traditional GPUs render a scene by taking the scene geometry (expressed as 3D meshes of triangles) and “pasting” 2D bitmap images (textures) onto those meshes. Pixel and vertex shader programs running on the GPU may also be used to apply additional visual effects to the scene. The advantage of this approach is that it is easy to build highly parallel hardware that can execute texturing operations and shader programs very quickly.  All current consumer GPUs rely on this technique.

3D images can also be rendered via ray tracing, which attempts to model how light actually behaves in the real world.  Properly applied, it can yield very realistic images. Ray tracing is particularly effective at rendering reflective and refractive objects. Unfortunately, ray tracing is computationally very intensive. Affordable hardware that can do ray tracing in real-time at a level of detail comparable to that of rasterizing GPUs doesn’t exist, and is probably still a few years off. Since framerates measured in seconds-per-frame tend to have a negative effect on playability, I don’t think we are going to see mainstream ray traced games for a while yet.

Interestingly, most ray tracing algorithms actually work in reverse—instead of tracing the rays from the light sources, reflecting them off the objects in the scene, and then into the camera, they “shoot” rays out from each pixel of the image into the scene, tracing the paths of the rays backwards until they hit a light source. The optical properties of the surfaces each ray reflects off (or passes through), as well as the color of the light source determine the color of the corresponding pixel of the image. The reason for doing things backwards like this is efficiency—very few of the rays of light coming from a light source ever reach the camera, so doing things backwards reduces the amount of calculation required by several orders of magnitude.

Although real-time ray tracing for the masses is still a pipe dream, you can play around with ray tracing algorithms on your PC today (you just have to wait a while for each frame to render). POV-Ray is a popular ray tracing package; it has been around for years, and is available as a free download for Windows, Mac, and Linux. On Windows or Mac, download and run the appropriate installer from the POV-Ray site. For fans of the mighty penguin, your best bet is to install POV-Ray directly from your Linux distro’s repository. On Ubuntu, the packages you want to install are povray and povray-includes; you probably also want to install povray-doc (local copy of the POV-Ray manuals) and povray-examples (sample POV-Ray images). Complete documentation for POV-Ray is also available on their web site, here.

POV-Ray images are created using POV-Ray SDL (Scene Description Language), a programming language that has some superficial similarities to C. In POV-Ray SDL, complex objects are built up out of simpler ones, much as complex data structures in C are built out of the basic data types. Objects are constructed according to the principles of CSG (Constructive Solid Geometry), which allows new objects to be created as the 3-dimensional union, difference, or intersection of simpler objects. The properties of the material each object is composed of are also defined, so that the ray tracer knows how the light rays are affected when they reflect off or pass through the object.
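
As a rough illustration of how CSG composition looks in SDL (the shape and all dimensions below are made up for illustration, not taken from the actual scene file), a simple pawn-like object can be declared as the union of a few primitives:

```pov
// A chess-pawn-like shape built with CSG: the union of a base,
// a tapered body, and a head. A difference block could likewise
// subtract one shape from another to carve out material.
#declare Pawn =
  union {
    cylinder { <0, 0, 0>, <0, 0.2, 0>, 0.5 }       // base
    cone     { <0, 0.2, 0>, 0.4, <0, 1, 0>, 0.15 } // tapered body
    sphere   { <0, 1.1, 0>, 0.25 }                 // head
  }

object { Pawn pigment { color rgb <0.8, 0.1, 0.1> } }
```

Once declared, the object can be instantiated, scaled, rotated, and translated as many times as you like, which is how a whole chessboard’s worth of pieces can be built from a couple of declarations.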

Many options are available for defining the properties of a surface or material. Simple surface pigmentation (as a red/green/blue color triple) can be specified, as well as how reflective or rough the surface is. Complex surface patterns can also be created using procedural textures. POV-Ray comes with a number of pre-defined textures for metal, wood, and stone surfaces; these can be used as-is, or modified to suit your whims. Translucent or transparent materials can also be defined, including a material’s refractive index (which determines how the light rays bend as they pass through the object).
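
A hedged sketch of what a hand-rolled material might look like (the texture name and all the parameter values here are illustrative guesses, not from the actual scene):

```pov
// A glassy red material: mostly transparent pigment, a bit of
// mirror-like reflection, and a glass-like index of refraction.
#declare RedGlass =
  texture {
    pigment { color rgbf <0.9, 0.1, 0.1, 0.9> } // f = filter (transparency)
    finish  { reflection 0.1 phong 0.6 }        // slight mirror + highlight
  }

sphere { <0, 1, 0>, 1
  texture { RedGlass }
  interior { ior 1.5 }   // bends rays passing through the object
}

// The stock include files also ship ready-made textures, e.g.:
// #include "stones.inc"
// ... texture { T_Stone10 } ...
```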

Simple bitmap images can also be applied to a surface of a CSG object, much like the texture mapping of a rasterizing GPU.
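
In SDL this is done with an image_map pigment; something along these lines (the filename is a placeholder, and the scaling assumes the default unit-square mapping):

```pov
// Paste a bitmap onto a box face, much like GPU texture mapping.
// By default the image is mapped onto the unit square in the x-y
// plane, so we scale the pigment to cover the whole face.
box { <0, 0, 0>, <4, 3, 0.1>
  pigment {
    image_map { png "screenshot.png" once interpolate 2 }
    scale <4, 3, 1>
  }
}
```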

I’ve created a sample POV-Ray scene that illustrates many of the principles of CSG and demonstrates the use of POV-Ray’s predefined stone, wood, and metal textures. This scene also shows how POV-Ray handles transparent refractive objects. You can download the POV-Ray file for the sample scene here.

The first four images were generated by the sample POV-Ray program linked above. They are all of the same scene; the only thing that has been changed is the camera location. Click each picture for a higher resolution version.

Still life with chessboard and LCD monitor

Closeup of the red glass pawn (nifty refractive effects!)

Another view, from off to the left and a little lower down than the first view

Closeup of the brass rook (you can see multiple images of the monitor reflected in the rook, and in the pawn off to the right)

All things considered, this is still a fairly primitive POV-Ray scene. The lighting and object models are simplistic (I chose to use only pawns and rooks because they are relatively easy to model in CSG), and I haven’t enabled any of the more sophisticated effects (focal blur, radiosity, etc.), which would result in more photo-realistic images (along with much longer rendering times). But even without these effects, it is possible to produce some interesting images.
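
For the curious, enabling those effects is mostly a matter of adding a few extra blocks to the scene file. A hedged sketch (the parameter values below are untuned guesses; expect to experiment):

```pov
// Focal blur: give the camera a finite aperture and sample it.
camera {
  location <0, 2, -6>
  look_at  <0, 1, 0>
  aperture 0.4          // larger = more blur
  blur_samples 50       // more samples = smoother blur, slower render
  focal_point <0, 1, 0> // objects at this depth stay sharp
}

// Radiosity: approximate diffuse light bouncing between objects.
global_settings {
  radiosity {
    count 200        // rays shot per sample point
    error_bound 0.5  // lower = higher quality, slower
  }
}
```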

POV-Ray isn’t limited to creating images that mimic everyday objects—its capabilities are only limited by your imagination and willingness to experiment. You can download another sample SDL file here that’s actually quite a bit simpler than the first one. It creates a 3D grid of reflective metallic spheres and places the camera and a single light source at the same point inside the grid. The resulting image (consisting solely of repeated reflections of the light source between the spheres in the grid) is surreal:
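
The actual file isn’t reproduced here, but the basic construction can be sketched with SDL’s #while loops (grid size, coordinates, and finish values below are illustrative):

```pov
// Camera and light share the same point inside a lattice of
// mirrored spheres, so the image is nothing but reflections.
camera { location <2.5, 2.5, 2.5> look_at <4, 4, 4> }
light_source { <2.5, 2.5, 2.5> color rgb 1 }

#declare X = 0;
#while (X < 5)
  #declare Y = 0;
  #while (Y < 5)
    #declare Z = 0;
    #while (Z < 5)
      sphere { <X, Y, Z>, 0.4
        pigment { color rgb 0.2 }
        finish  { reflection 0.9 }  // nearly mirror-finish metal
      }
      #declare Z = Z + 1;
    #end
    #declare Y = Y + 1;
  #end
  #declare X = X + 1;
#end
```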

I hope you’ll decide to download a copy of POV-Ray and play around with it. Once you grasp the basic concepts behind CSG and get the hang of working in SDL, ray-tracing can be a lot of fun… and rather addictive!

Comments closed
    • indeego
    • 10 years ago
      • jobodaho
      • 10 years ago

      That might be the nerdiest video that I both understood and enjoyed…good find.

      • Meadows
      • 10 years ago

      His speech had far too much “um” and “uuuh” for my liking.

        • Mourmain
        • 10 years ago

        Completely… that was an almost perfect example of how to disadvantage a good product with bad presentation.

          • Meadows
          • 10 years ago

          It looked quite “set-up”, so I just don’t see why they didn’t rehearse it.

    • lycium
    • 10 years ago

    povray’s reflection model is absolutely hideous :S it is to cg as comic sans is to typography.

    (i specialise in coding physically-based ray tracers fwiw)

      • just brew it!
      • 10 years ago

      I know little about the specifics of the algorithm. But to put on my speculation hat for a moment, I imagine the algorithm being used by POV-Ray was developed a number of years ago, when compute power was much more expensive. Perhaps the problem is that they take short cuts to reduce processing time? Or has state-of-the-art simply advanced way beyond what was known when POV-Ray was developed?

        • zqw
        • 10 years ago

        It appears to be mixing phong highlights with perfectly sharp reflections. That, plus the CSG-only geometry, and no texture/displacement makes for some very 90s (if detailed) pictures. Adding the radiosity mentioned might help a lot. Multiple secondary (reflected) rays are a must for the glossy (semi-sharp) reflections, also some less-perfect refraction.

        I’m not trying to bag on some cool free software, just answering a question. I think it’s great that it’s out there.

          • just brew it!
          • 10 years ago

          Some (all?) of the issues that you mention are probably limitations of the (admittedly crude) CSG models I used, not POV-Ray in general. The mix of Phong highlights and sharp reflections in the image are likely just a consequence of using the default metal textures; this could probably be tweaked quite a bit to get better looking results. I know that surface bump/displacement maps are supported as well; I just didn’t code those (because I wasn’t sure how, and didn’t have time to do the digging to figure it out).

          That said, yes POV-Ray is showing its age. They still don’t have proper SMP support in the current released version (though the current beta version supports it). There are probably other areas in which technology (and state-of-the-art for ray tracing in general) have overtaken it as well. But it is still a fun package to play around with, and I like the fact that it is completely cross-platform. I’ve certainly learned quite a bit about the basics of ray tracing just from messing around with it.

          If lycium is still watching this thread, I’m also genuinely curious about what he thinks is broken with POV-Ray’s reflection algorithm.

            • lycium
            • 10 years ago

            still watching 😉 there’s nothing wrong with their computation of reflection (if we’re talking about just the reflection vector), it’s their /[

        • lycium
        • 10 years ago

        absolutely correct; in fact, the newer non-diffuse models also have their roots in the phong model, though some key improvements have been made (based on the half-vector not reflection vector, energy conservation and reciprocity etc.)

    • Inane_Dork
    • 10 years ago

    If you want a mental challenge, look up how monte carlo ray tracing, photon mapping or bidirectional path tracing work.

    “Ray tracing” is, anymore, a (large) group of related algorithms rather than a single idea.

      • lycium
      • 10 years ago

      it’s not so bad 😉

    • indeego
    • 10 years ago

    Thank you for doing this with the blue TR theme; as far as I can tell, it’s the only proper way to view the site <.<

      • Dagwood
      • 10 years ago

      White on blue, ugggh. Didn’t you get enough of that with WordPerfect 5.1?

      • flip-mode
      • 10 years ago

      I am so used to the new default theme that the blue theme would be very unpleasant for me.

    • Dirge
    • 10 years ago

    Very cool read, and I also enjoyed the article about Python.

    • Mourmain
    • 10 years ago

    Nice! The Python article actually made me wonder about how hard it would be to learn to use POV-ray. So this next article is right on point! 🙂

    • SuperSpy
    • 10 years ago

    We’re missing trshot3.png

      • crazybus
      • 10 years ago

      Read the comments. You need to replace the image with one of your own choosing.

      • just brew it!
      • 10 years ago

      You can just substitute a screenshot of your choice. You may also need to copy arialbi.ttf (Arial Bold Italic font file) into the current directory to get the render to work properly. The comments at the top of the file explain this.

    • The Dark One
    • 10 years ago

    I thought radiosity was actually cheat to /[

      • HurgyMcGurgyGurg
      • 10 years ago

      All of ray tracing at this time is a cheat to cut down rendering times, since all the algorithms used (to my knowledge) are only approximations of the calculations required to truly simulate all of light’s properties.

      Radiosity is just one of many of those approximations within the rendering equations.

      I think photon mapping is more accurate and does a much better job at photorealism than ray tracing. But it takes a whole lot longer.

        • Meadows
        • 10 years ago

        You may want to go for photon mapping with spectral rendering added in for good measure.

      • just brew it!
      • 10 years ago

      You’re thinking of what POV-Ray refers to as “ambient lighting”. Radiosity is where you attempt to calculate the actual amount of light reflected off of all the other nearby objects in the scene when determining how a particular object is illuminated.

        • Scrotos
        • 10 years ago

        I have read that raytracing excels at specific lighting but doesn’t have a good model for overall ambient light. Unfortunately, I cannot re-find the article I read on that which went into specifics as to why. That’s probably why POVRay hacked in some type of global illumination.

          • just brew it!
          • 10 years ago

          POV-Ray also includes support for radiosity, which is a more realistic lighting model. Unfortunately, radiosity also increases the computational complexity quite a bit (a moderately complex scene with quality settings turned up can literally take hours to render).

    • HurgyMcGurgyGurg
    • 10 years ago

    While different, Blender is another nice free program for 3D creation. I’ve never used povray so I don’t know how it compares but Blender can still do interesting things with only basic knowledge. Yeah, learning it all is a pita but I’ve never felt the need to take courses or anything just for playing around like this.

    The Blender internal ray tracer probably can’t compare to povray though. Although Blender projects can be exported to programs like Indigo and Yafa-Ray.


    • ssidbroadcast
    • 10 years ago

    Thanks for the neato compare/contrast lesson in RayTrace and Raster, jbi.

    Just don’t quit your day job.

    • skitzo_zac
    • 10 years ago

    “Since framerates measured in *[

      • barleyguy
      • 10 years ago

      Nope. Seconds-per-frame is correct. Which means it’s slow.

        • DancingWind
        • 10 years ago

        Well 😀 FPS is still possible 😀 0.02 Fps – 1 frame every 50 seconds 🙂

          • Meadows
          • 10 years ago

          Possible, but that doesn’t accentuate the problem. 😉

      • just brew it!
      • 10 years ago

      Nope, I really meant seconds-per-frame — as in it takes many seconds to render. The images of the chessboard scene took several /[

      • UberGerbil
      • 10 years ago

      …in the same way the fuel efficiency of the M1 Abrams tank is measured in gallons per mile.

    • BoBzeBuilder
    • 10 years ago

    I want to see the GPU manufacturers at least attempt to make dedicated raytracing hardware in the near future.

    Also I don’t get how the rays are traced backwards. If raytracing starts at the camera, how would it know where to concentrate to form a light source? I have a hard time imagining it, it’s like you have the answer(image) and you have to calculate the problem(source)?

    • Fighterpilot
    • 10 years ago

    Nice article… that second pic, the red pawn, is cool.
