New video shows more of Euclideon’s voxel rendering mojo

You may have seen our writeup about Euclideon yesterday. The company’s technology couples voxels with 3D-scanned data from the real world, and the results are pretty compelling.

The firm has now posted a new video that shows its voxel-infused graphics in greater detail. Take a look:

Yep. Still very impressive. The church model looks particularly lifelike—although I’d expect no less, given the size of the compressed point-cloud data: 3.8GB, according to Euclideon CEO Bruce Dell. The outdoor environment has some rough edges here and there, but I’d expect 3D-scanned scenes like these to get smoothed out and prettied up by artists once Euclideon’s technology makes it into games next year.

Check out the image gallery from yesterday’s article for some screenshots of these environments.

Comments closed
    • jihadjoe
    • 5 years ago

    As impressive as this demo is, I can’t help but feel their tech is outdated from day one.

    Nvidia’s VXGI has none of the “static only” drawbacks, and as shown in their keynote demo is already working with an engine that developers are familiar with (Unreal).

    • kamikaziechameleon
    • 5 years ago

    I’m interested to see what happens when this makes it into a product I use.

    🙂

    • sschaem
    • 5 years ago

    Same comment from me.

    Everything is static and there is no ‘real’ lighting. By no lighting, I mean it’s all baked.
    This works for ambient and diffuse materials, but not beyond.

    I also believe the room data is in the terabytes,
    and it might be a lot more if they have to include material properties for dynamic lighting.
    Rendering might also be even slower if they add lighting, especially specular, as that can require at least one scene bounce.
    I also don’t believe dynamic objects would work too well without more computation to re-light the surrounding scene…

    But it’s a neat tech demo. What it shows most is that they do have fast streaming database traversal.

    • moose17145
    • 5 years ago

    If this is what they are capable of doing with just a six-core CPU… is anyone else curious what these scenes could look like if they would just give in and use a dang high-end video card already?

    I’m just saying… we have these exquisitely powerful video cards as it is… we may as well use them.

      • sschaem
      • 5 years ago

      There is a reason why they use the CPU…

      Because it’s faster than a GPU for this type of tech demo. There is actually very little rendering going on.

      What the demo does is not render a 3D scene, but parse a point cloud of pre-lit data.

      It’s a streaming database traversal.
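
A minimal sketch in Python (toy data, hypothetical format — not Euclideon’s actual pipeline) of why that amounts to “very little rendering”: each point already carries its baked color, so drawing is just a project-and-depth-test loop with no shading math at all.

```python
# Splatting a pre-lit point cloud: the "renderer" only projects points
# and keeps the nearest one per pixel. No lighting is computed.

W, H = 8, 8

# Each point: (x, y, z, color), with color baked in at capture time.
points = [
    (0.25, 0.25, 2.0, (200, 180, 150)),  # near point
    (0.25, 0.25, 5.0, (30, 30, 30)),     # farther point behind it
    (0.75, 0.50, 1.0, (90, 120, 60)),
]

framebuffer = [[(0, 0, 0)] * W for _ in range(H)]
depth = [[float("inf")] * W for _ in range(H)]

for x, y, z, color in points:
    # Trivial orthographic projection into pixel coordinates.
    px, py = int(x * W), int(y * H)
    if 0 <= px < W and 0 <= py < H and z < depth[py][px]:
        depth[py][px] = z            # nearest point wins
        framebuffer[py][px] = color  # copy the baked color; no shading

print(framebuffer[2][2])  # → (200, 180, 150): the nearer point's baked color
```

The real work, as noted above, is in streaming and traversing the point database fast enough, not in this per-pixel step.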

    • ColdMist
    • 5 years ago

    Welcome to Toy Story 1. Static elements.

    This has potential, but not until it is upgraded to version 20.1 or something.

    When they get swaying trees, leaf fronds that move, etc, then come back. Until then, nice for interiors of buildings, spaceship models, etc. Not so good for any outdoor shots.

    • f0d
    • 5 years ago

    IMO, videos like this show off the Euclideon tech better than most videos they have shown:
    [url<]http://youtu.be/BehxAWT8Gs8[/url<]

    This tech is pretty amazing, but I’m not quite sure how well a game would run on it.

    • SnowboardingTobi
    • 5 years ago

    I like it 100x more than before, just because it doesn’t have that annoying guy narrating it.

    • sweatshopking
    • 5 years ago

    You guys are whining, but there’s nothing on the planet that looks close.

      • chuckula
      • 5 years ago

      Yeah there is!

      [url<]http://www.peterskirche.at/home/[/url<]

      • Waco
      • 5 years ago

      UE4 is pretty damn close with some care…

        • moose17145
        • 5 years ago

        Cryengine looks crazy good too with some care…

        • sweatshopking
        • 5 years ago

        Ue4 does not look even close to this. You’re dreaming. If you’ve got a link where it looks close to this good, please share it. I haven’t seen anything even near as good.

          • jihadjoe
          • 5 years ago

          Nvidia’s VXGI demos used UE4:
          [url<]https://www.youtube.com/watch?v=O9y_AVYMEUs[/url<]

          They haven’t committed their lighting stuff yet, but Nvidia says VXGI should be in the main UE4 tree by Q4 this year, so that’s very soon.

      • Hattig
      • 5 years ago

      Rubbish – this looks pretty damn amazing for computer graphics of a religious building, and probably with a dataset more along the lines of 16KB in size:
      [url<]http://www.linehollis.com/wp-content/uploads/2013/02/dosbox-2013-02-08-21-35-03-55.png[/url<]

      • derFunkenstein
      • 5 years ago

      Honestly I’d rather have the more repetitive outdoor scenes than this. Sure everything is distinct, but it’s all distinctly blurry.

        • Meadows
        • 5 years ago

        The farther away something is, the blurrier it will get.

        This is not a full-on city simulation or a rendering of a whole forest. It’s just a tech demo showing the detail potential up close.

          • derFunkenstein
          • 5 years ago

          In that regard it failed. Up close, the trees are just pixelly junk.

            • Meadows
            • 5 years ago

            Like I pointed out in an earlier comment, there’s nothing stopping designers and artists from creating better assets for both near and far. Euclideon doesn’t seem to have any of those types of people, so laser scanners were a shortcut that let them present their tech to investors in the shortest time possible.

            • derFunkenstein
            • 5 years ago

            Yeah, they’d certainly have to, but I wonder how much time that actually saves an artist then? One of the selling points was that there are too many artists working on games, but if you’re texturing so many different distinct models anyway, are you saving people-hours?

            • sschaem
            • 5 years ago

            Yes there is. Data size.

    • The Egg
    • 5 years ago

    Some of the interior shots look amazing, but it looks as if you took a flat picture and wrapped it around 3D objects. Viewed from the right angle it looks fantastic, but (in this demo at least) the lighting and shadows appear completely static. This gives the objects in the scene a funky “cardboard cutout” feel when viewed from anything other than the ideal angle. It’s a cool tech demo, but they need to solve the lighting and shadows.

      • MadManOriginal
      • 5 years ago

      ooo I think you’ve hit the nail on the head about why this looks awesome but weird at the same time. I wonder how much of that has to do with capturing the environment from a static position? Even if they used multiple positions for the laser capture, it would still have the same effect, I’d think. (In one video they posted on YouTube, the douchey-sounding Australian bro was going on about how their technology looks real and photographic and you’ll see why shortly…then he ‘surprises’ you by saying the whole last two minutes was their tech. It was pretty obvious that it wasn’t real the whole time, imo.)

      For games, it seems that capturing real-world objects and then positioning them in an environment, or making the environment, would work better. Games can’t seem like a video tour.

      • sschaem
      • 5 years ago

      Yep, there is actually no lighting going on. You are indeed looking at a static image, but instead of 2D pixels, it’s made of ‘volumetric pixels’.

      Anything that is remotely reflective will look wrong when viewed from any angle different from the one the ‘pixel’ was captured at.

      But this could be addressed…
      Still, I don’t see the gaming industry using this as a core technology; it would be ‘incompatible’ with all the consoles on the market.

    • Laykun
    • 5 years ago

    This is both impressive and a naive approach to surface capture. It’s impressive because this is generally quite expensive to do. The problem is it doesn’t capture surface properties, only the output. What I mean by this is surface + light = output. Ideally you want the surface part of the equation, not the output. Otherwise you can’t re-light the scene and when you move your perspective around you don’t get dynamic effects produced by light ray bouncing, like specular highlights, reflective surfaces or transparency/refraction.

    Until it can actually capture the properties of a surface, like roughness, colour, reflectivity, and subsurface scattering, this is a dead-end technology that won’t go very far in the gaming industry. It’s great for industrial purposes, though.
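
A minimal sketch of that “surface + light = output” point, using a simple Lambertian diffuse model with made-up values: if the scanner stores only the final `output`, the relighting step below becomes impossible, because `albedo` and `normal` are gone.

```python
# "surface + light = output" for a single diffuse point.
# Storing the surface terms (albedo, normal) lets you re-run the equation
# under any new light; storing only the baked output does not.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def shade_lambert(albedo, normal, light_dir, light_color):
    """Combine surface properties with a light to produce the output color."""
    n_dot_l = max(0.0, dot(normal, light_dir))
    return tuple(a * c * n_dot_l for a, c in zip(albedo, light_color))

albedo = (0.8, 0.6, 0.4)  # surface property: base color
normal = (0.0, 0.0, 1.0)  # surface property: orientation

# Capture-time light shining straight down the normal:
baked = shade_lambert(albedo, normal, (0.0, 0.0, 1.0), (1.0, 1.0, 1.0))

# With surface terms stored, a new grazing-angle light just re-runs the
# equation. With only `baked` stored, this step cannot be performed.
relit = shade_lambert(albedo, normal, (0.0, 0.707, 0.707), (1.0, 1.0, 1.0))

print(baked)  # full intensity under the capture light
print(relit)  # dimmer: same surface, different light
```

And this is only diffuse; specular, reflection, and refraction depend on the view direction too, which is exactly why baked output looks wrong from new angles.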

      • derFunkenstein
      • 5 years ago

      TR’s writeup yesterday indicated that you could relight it however you wanted, though I have no idea how it’d look.

        • Laykun
        • 5 years ago

        Did you also notice the lack of any details? I know I did.

          • derFunkenstein
          • 5 years ago

          Yes.

          [url<]https://techreport.com/news/27111/new-video-shows-more-of-euclideon-voxel-rendering-mojo/?post=851644[/url<]

    • danny e.
    • 5 years ago

    meh.

    • odizzido
    • 5 years ago

    Looks pretty funky in a lot of ways, though it has good detail. If this were back around 1995-2000, I bet it could have made some nice backgrounds for games like The Longest Journey.

    • derFunkenstein
    • 5 years ago

    Super cool for sure, but once you get up close those trees look awful. They did a pretty good job of not getting too close to other things, but I have a feeling the problem isn’t limited to the outdoors.

    • SecretMaster
    • 5 years ago

    Pretty neat. From an academic standpoint this stuff is really awesome. I’ve done some work with TLS (Terrestrial LiDAR, what they are using for scanning). A common topic of discussion was turning our point clouds into more real-world environments (perhaps even some sort of Minecraft conversion).

    If people are curious about exploring point clouds, there is a great online viewing app called Potree with some datasets up there (including one I submitted):
    [url<]http://potree.org/[/url<]

    I do think this might be a bit clunky from a true gaming perspective, but there could be some super useful forestry/ecology applications.

      • sschaem
      • 5 years ago

      I think as presented, no game would go this way.

      Point cloud / volumetric datasets do have value that games could leverage,
      but yeah, they have a better chance promoting their streaming database for pro use than for gaming.

        • derFunkenstein
        • 5 years ago

        For gaming I look at this as more of an intermediary step. Scan an area to get started, and then render the area with more traditional methods.

          • SecretMaster
          • 5 years ago

          I know there are methods to turn point clouds into 3D meshes. Photoscan (http://www.agisoft.ru/products/photoscan) has that capability built in. I’ve generated 3D models from very sparse SfM point clouds, though never from a TLS cloud. It would be computationally intense, but maybe something cool would come out of it.

          Then again, what you do from there I have no idea. I’m beyond my area of expertise, but I imagine someone clever enough (or with enough time/money) could develop techniques to turn this into something game usable. It’d be an interesting approach, whether or not it’d be better I have no idea.

    • geekl33tgamer
    • 5 years ago

    Got to hand it to them, that looks impressive. It may not be ready for games yet (if ever), but that has huge potential elsewhere.

    • chuckula
    • 5 years ago

    There’s something unnatural about the relative movement of the objects in those scenes. You know how a lot of side-scrollers have that parallax effect where two or more segments of the background move at different rates to give you the illusion of depth? I kind of get that feeling looking at these videos, especially the outdoor scenes.

      • geekl33tgamer
      • 5 years ago

      I think that’s because nothing is actually moving. The scenes are fixed renders that you can move around in only. It needs more than that to ever make it into a game…

      • Meadows
      • 5 years ago

      It looked perfectly fine to me. Granted, the FOV is not the same as you find in claustrophobic console FPS games, and there’s very, very little movement.

      What few inaccuracies I could find were all about the trees. I imagine no matter how good your laser scanner is, you still can’t capture a whole tree before it’s wiggled around by the breeze, and foliage might start looking weird once you move towards its sides.

      This is why artists need to create handmade assets from all sides for this technology, like how it is with current games.

      • green
      • 5 years ago

      I can understand.

      The forest was the most obvious case, where the scene appeared quite dead. Now, technically the engine could produce movement by adding an additional step that adjusts the effective ‘points’ of the various assets over time. Calculations to adjust the point data could get complicated, but with any luck their ‘filler’ algorithms can keep things going okay.

      That, and, as someone else mentioned, the FOV / perspective is off. Going through the scene, everything straight looks straight. While that sounds odd: when you follow a large straight line or edge, it appears straight, but when you try to take it all in in one view, where it extends beyond your field of view, the line is curved. As a result, when the camera went through the church scene, it felt like you were in some kind of weird scale model.

    • Kharnellius
    • 5 years ago

    Holy Voxels, Batman!
