Euclideon preps voxel rendering tech for use in games

We first heard about Euclideon back in 2011, when the company posted a video of a voxel-based rendering engine designed to enable environments with unlimited detail. This month, the firm made headlines again with a new video—and even bolder promises. Take a look:

Euclideon’s core technology is a scene renderer that uses voxels—basically 3D pixels—in place of the polygons that are fundamental to virtually all conventional 3D graphics. Voxels can be a useful way to represent 3D space, as evidenced by Nvidia’s use of them in the VXGI lighting scheme introduced with its Maxwell GPUs.

The folks at Euclideon have figured out how to use 3D scanners to capture real-world environments as point-cloud data. They can then feed the results into their voxel renderer and create a real-time rendered facsimile of that environment. As you can see in the video above, some of the results are quite striking.

The Australian start-up already offers this technology to businesses for things like geospatial analysis, but next year, it plans to make its entry into gaming. We spoke to Euclideon CEO Bruce Dell to find out more about those plans—and about the software that underpins them.

In a nutshell, Euclideon’s Unlimited Detail technology relies on a search algorithm that “efficiently grabs only one point for every screen pixel.” Scene data is stored in point-cloud form, and it’s streamed dynamically from mechanical storage with apparently negligible load times. Dell demoed this aspect of the technology at the SPAR laser-scanning conference earlier this year, where he told us a $600 laptop was able to load a 3TB model of the city of Vienna in just 0.8 seconds.
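
To give a sense of what a “one point per screen pixel” lookup might involve, here is a minimal sketch of a sparse-octree descent that stops as soon as a node becomes smaller than the pixel it covers. Euclideon hasn’t published its algorithm, so the data layout and every name below are our own illustrative assumptions:

```cpp
#include <array>
#include <cstdint>
#include <memory>

struct Color { std::uint8_t r, g, b; };

struct Node {
    Color avgColor;                              // pre-averaged color of the whole subtree
    std::array<std::unique_ptr<Node>, 8> child;  // null = empty octant (sparse storage)
};

// Return one color for the pixel whose view ray passes through point (px, py, pz).
// (cx, cy, cz) is the node's center, halfSize its half-extent, and pixelWorldSize
// is how much world space one screen pixel spans at this distance. The walk stops
// as soon as the node fits inside a pixel, so each pixel costs one short descent.
Color samplePoint(const Node& n, float cx, float cy, float cz, float halfSize,
                  float px, float py, float pz, float pixelWorldSize) {
    if (2.0f * halfSize <= pixelWorldSize)
        return n.avgColor;                       // node is pixel-sized: one point, done
    int octant = (px > cx) | ((py > cy) << 1) | ((pz > cz) << 2);
    const auto& c = n.child[octant];
    if (!c)
        return n.avgColor;                       // sparse region: fall back to the average
    float h = halfSize * 0.5f;
    return samplePoint(*c,
                       cx + (px > cx ? h : -h),
                       cy + (py > cy ? h : -h),
                       cz + (pz > cz ? h : -h),
                       h, px, py, pz, pixelWorldSize);
}
```

A hierarchy like this could also help explain the short load times: only the tree levels coarse enough to matter for the current viewpoint would need to be paged in from disk.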

To create such expansive models, Euclideon uses laser-scanned data along with some special sauce that fills in the gaps between scanned points. The special sauce involves “basically reblending and remodulating all the colors with a little bit of artificial intelligence,” and it also handles other missing information. The result is a claimed thousand-fold increase in detail over the raw data from the laser scanner.
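
Euclideon hasn’t revealed what the special sauce actually is, but filling holes in point clouds is commonly done with distance-weighted blending of nearby samples. The sketch below is plain Shepard (inverse-distance) interpolation, shown only to illustrate the general idea, not Euclideon’s method:

```cpp
#include <vector>

struct Point { float x, y, z; float r, g, b; };

// Synthesize a point at (qx, qy, qz) by blending nearby scanned points,
// weighted by inverse squared distance so closer samples dominate.
// Assumes `nearby` is non-empty.
Point fillGap(float qx, float qy, float qz, const std::vector<Point>& nearby) {
    float wSum = 0.0f, r = 0.0f, g = 0.0f, b = 0.0f;
    for (const Point& p : nearby) {
        float dx = p.x - qx, dy = p.y - qy, dz = p.z - qz;
        float w = 1.0f / (dx * dx + dy * dy + dz * dz + 1e-6f);
        wSum += w; r += w * p.r; g += w * p.g; b += w * p.b;
    }
    return {qx, qy, qz, r / wSum, g / wSum, b / wSum};
}
```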

This scene from a church includes a lot of complex shapes. Source: Euclideon.

Perhaps the most surprising aspect of Euclideon’s technology is that, for now, it runs only on the CPU. Dell said the current implementation produces 2000×1000 frames at around 32 FPS on a six-core processor. He claims there’s “no reason” the technology can’t be accelerated using, for instance, OpenCL on a GPU, but there are still “lots of software ways” to improve performance first. Jumping straight to GPU optimization would be “admitting defeat,” in his view.

Two games based on Euclideon’s technology are currently in development. Dell said he hopes they will be out in May of next year, though he conceded that the schedule could slip. Interestingly, he also said that “people will be very surprised when they find out which hardware platform [the games] are on.” Then again, when we asked about hardware requirements, Dell suggested that a six-core CPU will be the “median” requirement, with detail scaling down on lower-end processors. Hmm.

Both of the upcoming Euclideon-powered games will feature “directly imported graphics from the real world,” and they’ll be entirely voxel-based, with no polygons even for animated models. Dell told us that animating voxels is “not the hardest thing in the world,” but Euclideon’s implementation is only about 80% done. That’s why we haven’t seen it demoed yet. “If I were to put that up today, I think people would look at the 20% that was missing,” Dell explained.
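
Dell didn’t describe how Euclideon animates voxels, but one standard trick for moving a static dataset is to leave the voxels untouched and apply the inverse of the object’s motion to each query instead. The sketch below assumes that approach purely for illustration:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Map a world-space query point into an object's local space by undoing the
// object's translation and yaw rotation. The renderer can then sample the
// object's static voxel tree at the returned position, making the model
// appear to move without rewriting any voxel data.
Vec3 toObjectSpace(Vec3 p, Vec3 objPos, float objYaw) {
    float dx = p.x - objPos.x, dz = p.z - objPos.z;
    float c = std::cos(-objYaw), s = std::sin(-objYaw);   // inverse rotation
    return { dx * c - dz * s, p.y - objPos.y, dx * s + dz * c };
}
```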

Speaking of models, Euclideon offers an interesting alternative to traditional modeling apps for asset creation. While artists can still use 3ds Max and Maya, they also have the option to make physical models using clay or putty. They can then scan in those models with laser scanners. Dell suggested that creating a physical model could be faster in some cases than modeling one in software.

Euclideon’s technology can also reproduce natural environments. Source: Euclideon.

Delving a little deeper, we asked Dell how Euclideon’s technology handles dynamic lighting and antialiasing—two important features in real-time 3D rendering today.

Euclideon’s technology does support dynamic lighting, and Dell claimed the results are better than those from polygon-based games. However, he added that he prefers to preserve original photographic lighting from real-world scans whenever possible. Real-world lighting is “so much higher [quality] than what computers can generate.” The same goes for pre-baked lighting from offline renderers. “We’ve been . . . setting [3ds Max] and Maya to really high lighting settings and then running that through our converter to turn it back into XYZ voxels,” Dell noted. The decision to forgo dynamic lighting apparently isn’t tied to performance constraints, though. “If we had any performance limitations,” Dell told us, “we’d probably go to something like CUDA and actually start using the graphics card.”
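
As a rough illustration of what “turning a lit render back into XYZ voxels” could mean, consider baking: each voxel simply stores the final, fully lit color computed offline, so the runtime never shades anything. The hash-map layout here is purely our assumption:

```cpp
#include <cstdint>
#include <unordered_map>

struct Color { std::uint8_t r, g, b; };

// Pack integer voxel coordinates into one 64-bit key (21 bits per axis).
std::uint64_t voxelKey(std::uint32_t x, std::uint32_t y, std::uint32_t z) {
    return (std::uint64_t(x) & 0x1FFFFF)
         | ((std::uint64_t(y) & 0x1FFFFF) << 21)
         | ((std::uint64_t(z) & 0x1FFFFF) << 42);
}

using BakedScene = std::unordered_map<std::uint64_t, Color>;

// "Baking": the voxel stores the fully lit color produced offline, whether by
// a scan's photography or a high-setting Max/Maya render. Drawing it later is
// a pure lookup, with no shading math at runtime.
void bakeSample(BakedScene& scene, std::uint32_t x, std::uint32_t y,
                std::uint32_t z, Color litColor) {
    scene[voxelKey(x, y, z)] = litColor;   // lighting is frozen into the data
}
```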

What about antialiasing? Euclideon has been “experimenting” with a new AA technique, but that technique is being kept under wraps because of “patent issues.” All Dell would say is that the “one voxel per pixel” formula doesn’t mean voxels have to fall “exactly on the pixel grid,” and the AA scheme may “make some decisions about where it does want to grab just a few voxels extra in an extremely efficient way and blend them together.” Dell also suggested that antialiasing will play a part in improving image quality on lower-end systems that may render scenes at lower resolutions. In any case, high-res screenshots from Euclideon’s latest demo still show hard, jagged edges between objects in some places. (See the gallery at the bottom of this page.)
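
For reference, here is what a generic “grab a few voxels extra and blend them together” scheme can look like: plain rotated-grid supersampling. This is almost certainly not Euclideon’s patent-pending approach, but it shows the basic shape of the idea:

```cpp
#include <array>

struct Color { float r, g, b; };

// Average a handful of extra voxel fetches per pixel at jittered sub-pixel
// positions. SampleFn stands in for whatever lookup returns one voxel color
// for a screen position (e.g., the octree descent sketched earlier).
template <typename SampleFn>  // SampleFn: Color(float sx, float sy)
Color antialiasedPixel(float px, float py, SampleFn sample) {
    constexpr std::array<std::array<float, 2>, 4> offsets{{
        {0.125f, 0.375f}, {0.875f, 0.125f}, {0.375f, 0.875f}, {0.625f, 0.625f}}};
    Color acc{0.0f, 0.0f, 0.0f};
    for (const auto& o : offsets) {
        Color c = sample(px + o[0], py + o[1]);   // one extra fetch per offset
        acc.r += c.r; acc.g += c.g; acc.b += c.b;
    }
    return {acc.r / 4.0f, acc.g / 4.0f, acc.b / 4.0f};
}
```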

Finally, we asked if Euclideon plans to license its technology à la Crytek and Epic. Here’s what Dell replied:

We’re moving forward with our own game engine. We’re moving forward doing a lot of very interesting things in a lot of different industries with our engine. We might—I’m not saying it’s a total no, as in there’s a few big names who we talked to about this, and I’m not allowed to talk about what’s going on in some of those areas—but in general, are we suddenly going to come out with this game engine and say, “Everyone please license our game engine?” I’m not sure that that’s an area we want to be competing in, when I’m not sure that there’s enough money in it even to survive with the way that the price of these things is dropping.
We do have a game engine. I suppose we call it a game engine. It extends also to simulation and training and other areas. . . . We’re trying to get to the point where it’s an engine that does not require any programming at all yet still can do anything anyone can think of.

We tried to do a little more prodding to find out if Euclideon is working with major studios or middleware vendors, but all Dell would say is this: “If we were, we’re probably under an NDA that says we’re not allowed to say that.”

There were many doubts about Euclideon’s credibility when the company first showed its technology a few years ago. Today, as a player in the geospatial analysis market with nascent partnerships in the game industry, Euclideon has more clout.

Still, one can’t help but be dubious about the near-term prospects for voxel-based rendering when real-time polygonal 3D graphics are so close to photorealism as it is. Shifting to an entirely new fundamental representation of world data would be a wrenching change for game developers, graphics APIs, and chipmakers—essentially, the entire industry. Euclideon’s data sets appear to be rather large, too, which could become a constraint if entire game worlds reach into the multi-terabyte range.

That said, Euclideon’s technology has tremendous promise, especially over the longer term. The fact that they’re able to produce images with such fidelity using only a multi-core CPU, without a single flop contributed by fixed-function graphics hardware, is remarkable. The approach of capturing light and other environmental information from the real world makes a lot of sense, too. Already, game developers are scanning the real world to create highly realistic polygonal graphics. And the hottest thing in real-time graphics is “physically-based” lighting and other simulations, for which polygonal models are often poorly suited. Having a point cloud or voxel grid of the world space on hand could allow much better lighting, shadowing, physics interactions, and more.

Despite its potential, we expect Euclideon may struggle to get its foot in the door in gaming initially. Then again, if two games based on this technology are in the works, then at least some studios must share Euclideon’s enthusiasm. This technology could be intriguing in the gaming space if it can enable new levels of realism on hardware that lacks the GPU horsepower to produce comparable images via polygon-based rendering. We’ll be very curious to see how these first Euclideon-based games look.

Update 9/26: Euclideon has posted a new video of its technology in action. Here it is:

As I said in the corresponding news post, the church scene is particularly impressive—though Dell says it uses 3.8GB of compressed point cloud data, which is relatively hefty.

Comments closed
    • FireGryphon
    • 5 years ago

    This, only 15 years after NovaLogic developed voxel-based 3D graphics ahead of the polygon folks. NovaLogic was apparently way ahead of its time.

    • TheMonkeyKing
    • 5 years ago

    So another polished static walkthrough (well, oooh, waving preprogrammed fern undulations).
    No mention of object deconstruction (explosions, bullet holes, physics, etc.)

    Wake me when they can do permanent chaos.

      • Meadows
      • 5 years ago

      Theoretically, destruction should be doable and to much greater detail than is traditionally possible. Just give it enough processing power.

      Today’s physics demos and games already use big-ass “atoms” in a way, so this wouldn’t be stepping out of programmers’ comfort zones much.

    • hansmuff
    • 5 years ago

    “using only a multi-core CPU, without a single flop contributed by fixed-function graphics hardware”

    That’s pretty interesting. NVIDIA just announced VXGI (Voxel Global Illumination) technology supported by their GPUs; not sure if only the 970/980 can handle it or if it’s backward compatible. In any event, here’s something that works with voxels on the CPU exclusively, and NVIDIA is going that route at least with some new technology in their GPU development.

    I suppose this could become an interesting technology. I wish we had a little more information about the amount of data needed for the scenes. If and when compression comes into play, we can all take a look at Rage and Wolfenstein to see what happens when Megatexturing’s glorious theory and implementation hit PC limitations. Those games are large, yet their megatextures are heavily compressed and in some parts very ugly because of it.

      • sschaem
      • 5 years ago

      Different technology, even though both have “voxel” in their name.

      From what I know, Euclideon never released a demo of any sort, only YouTube videos.
      I’m looking for a link, but I read at some stage that the room demo used about a terabyte of data. I also recall that parsing 1TB of data takes about 20 hours on their server.

      Remember, all this is static and the database is pre-sorted.
      It seems like an asymmetrical compression type of system: super slow to ‘compress’, very efficient to ‘decompress’. Here, writing voxel data is a non-realtime operation, but reading the data is realtime.

      Other voxel games don’t have those limitations, but they also don’t work with TB databases.

      I don’t think Euclideon’s system makes sense; people will continue to use voxels on smaller datasets.

      Who would be interested? The big boys: Google, Apple, MS, for their map services.

      Google is already doing pretty well with their automatic tessellation; it would be nice if they could add point clouds of interiors (and some exteriors).

      Maybe when we all have Google Fiber…

    • Flapdrol
    • 5 years ago

    They should remake Delta Force 2 or Comanche 3, I’d buy that.

    • Meadows
    • 5 years ago

    Both Euclideon and commenters here are pretty hung up on the fact that laser scanning can be used to model real environments to a reasonable amount of zoom-in precision.

    But that has nothing to do with gaming, per se.

    What everyone’s forgetting is that theoretically, this can be used to model imaginary and fantasy scenarios just as well, if you have the proprietary CAD tools this needs.

    Indeed, their last demo used nothing but handmade assets and looked as detailed as any other videogame at the time, or since. Yes, it was ugly, but it was detailed and it wasn’t slow. That’s the kind of technology they need to put in the hands of actual artists and designers.

    This is just a tech demo for museum site shareholders.

    • Buzzard44
    • 5 years ago

    Maybe we can get some virtual house tours on zillow that aren’t just slideshows.

      • TheMonkeyKing
      • 5 years ago

      It’s actually cheaper just to walk through the house with several cameras and then join perspectives using motion control or extrapolation.

    • GrimDanfango
    • 5 years ago

    It’s that time of year again, is it?
    Yes, it’s very impressive being able to visualise baked point-cloud data in realtime.

    No, it’s not going to revolutionise games.

    You can’t light it any more effectively than you can light a photograph. The lighting is baked in. You can’t animate anything… any more than you can animate a photograph.
    You can’t store it in anything approaching a sensible amount of space.

    This might actually be an interesting contribution to the ongoing R&D surrounding laser scanning and 3D scene reconstruction, if only this Bruce Dell guy could stop desperately selling snake oil every chance he gets, and actually admit that he’s *only* doing some meaningful R&D, not revolutionising the world as we know it.

    Seriously… this guy is like refined, highly weaponized Molyneux. I really wish everyone would stop getting taken in by his pitches year after year after year, and finally realise what a monumental bulls**ter he is.

      • Laykun
      • 5 years ago

      Exactly what I was going to say. I knew this sounded familiar; it’s the infinite detail guy again. Nobody took up his infinite detail engine back then FOR GOOD REASONS (technical), and adding laser scanning to it doesn’t make it any better.

        • Meadows
        • 5 years ago

        Of course not, because it wasn’t finished and it still isn’t, at least for some purposes like animation. For other, more “static” purposes, Dell claims they already have several active customers.

          • Zizy
          • 5 years ago

          Will it ever be? I doubt it. We have not seen anything to suggest otherwise.

      • Meadows
      • 5 years ago

      They’re working on the animation part and they already support dynamic lighting. The only problem is the amount of space a scene will take up – if this becomes widespread, then I will demand that games bundle their own external HDD at the very least.

      • Johnny Gatt
      • 5 years ago

      You can animate and light in this engine. I got to see a video of it today. Yes, it was scanned parts that made up the room, but the walls looked like walls, the tiger-like model moved through the light, and the shadows and lighting worked on all objects. I posted a screenshot on gamingface.com. I wish people who know nothing about this tech would stop pulling crap out of thin air about what they think it cannot do.

    • divide_by_zero
    • 5 years ago

    I understand that this tech may or may not end up being viable for gaming as many other commenters have pointed out.

    And yes, this could end up being vaporware, or be something that requires vastly more computing horsepower than stated by Euclideon.

    And no, it’s not perfect – it was fairly obvious that the footage was computer based and not video of the real world.

    All that aside, I’m taking off my cynical/skeptical hat for a minute to say wow, that was really, really flippin’ impressive.

    • south side sammy
    • 5 years ago

    Back when this first came out, people would scoff and say how phony it was. Seems as if the pretenders have been working hard at something.
    But as gaming sits now, you already don’t own anything because it’s very web-based. Not a lot is on your computer; it’s stored in magic land. This guy says, “We’ll stream it to you.” So I guess we will own less of nothing as this thing progresses.
    All in for greater computer graphics to go along with the “prematurely” released high-def monitors that not much of anything can yet take advantage of.
    But we ourselves can take a high-res photo (textures) and incorporate it into a game. I’ve done it. So I guess I’ll sit back and keep watching.

    • LordVTP
    • 5 years ago

    Am I the only one thinking you could bridge this tech with an implementation of NVidia’s global voxel illumination tech?

    • Freon
    • 5 years ago

    I was not fooled for a split second by his “[this is the real world!]” subterfuge. The one brief shot of the broken-up asphalt was impressive; not sure about the rest. It looked like a median filter was run over everything, which may literally be what they do to fill in gaps. It definitely doesn’t look quite “right.” Good, but I’m not sure it’s any better than the latest Crytek or Unreal engine.

    This video is a lot more interesting and informative:
    https://www.youtube.com/watch?v=Irf-HJ4fBls

    Translation to game tech might be a rough road. It seems kinda like the Mega Geometry that The Carmack alluded to in contrast to his MegaTexture tech. Let artists create stuff at whatever detail they want, then have the precompute and realtime engines deal with grabbing data as needed.

    • Bensam123
    • 5 years ago

    Neat, but I could tell it was computer generated before they told us. I also don’t know about the application of such technology in the video game industry. It’s nice to theorize and everything, but it turns a game from something fantastical into something more like a movie.

    There is no way to do futuristic sci-fi shooters, for instance, as those settings don’t already exist, or medieval ones; you’d be limited by what you could turn into props and then into set pieces, then back into virtual reality. You’d spend all your time scanning locations, just like shooting a movie (as long as the location exists).

    I can see this being very good for virtual tours (like the cathedral was great) and the porn industry, but beyond that I don’t think this will really go anywhere.

    • Wildchild
    • 5 years ago

    I personally don’t care for overly “realistic” games whether it be in game play or graphics. Part of the reason I play games is to get away from reality!

    • Milo Burke
    • 5 years ago

    By a six core processor, does he mean more like an FX-6300 or an i7-5930k? I’m guessing the latter.

    The man-made environments look fantastic, particularly the inside of that church with the rich shapes, colors, and lighting.

    However, the low-res but unique textures bother me more than high-res recycled textures found in most modern games. Also, as soon as they show nature, like grass, ferns, and trees, it looks completely fake. It’s supposed to look revolutionary, but Tomb Raider looks better.

    • ozzuneoj
    • 5 years ago

    It looks incredibly realistic from a distance, but what is making it look good is the fact that it is pixel-for-pixel copied from the real world. I don’t see how you could add higher resolution textures so that it doesn’t look so awful up close. Similarly, they said themselves that they prefer to use the original photograph’s lighting, even though they said they can support dynamic lighting.

    It just doesn’t sound very flexible to me.

    The amount of data, the lack of resolution, and the static nature of it make me doubtful that we’ll see this in games on any significant scale. I’m sure someone will make one, but it’s probably going to be missing a lot of the things we have in modern polygon-based games in order to have low-resolution, photorealistic static scenes. For example, anything that could be moved or destroyed would have to be made separately, and it’d stand out, like in old games where you could always see the difference between the static scenery and the objects that could be interacted with.

    Don’t get me wrong, it looks incredible for real-time graphics, especially considering the fact that it runs entirely on the CPU, but its limitations seem like ones that make it quite difficult to use for games that would be taken seriously these days. Reminds me of the game Outcast from 1999. It had some incredible graphics effects for its time (it used voxels for expansive outdoor scenes, had bump-mapped textures on characters, and used some amazing procedural animation techniques), but it was software-rendered, so the framerates and resolutions were awful and things were blocky looking up close. The trade-offs wouldn’t be that severe now (since the average smartphone is 5-10 times faster than PCs of that time), but people also expect far more from modern graphics (per-pixel real-time lighting, day/night cycles, shader effects on just about everything).

    Now, mapping a museum or a natural park (outdoors) and all its exhibits/attractions and letting people take virtual tours of them from their home computer (or eventually a tablet/phone)… that seems like a far more realistic use for the technology. In that case it would just be a matter of delivering that huge amount of data to the user, which they say they have worked out… so, I imagine we’ll see real results of all this fairly soon.

    • Kaleid
    • 5 years ago

    Getting Bitboys OY vibes.

    • Arclight
    • 5 years ago

    Very interesting. Loading maps is still a problem for large-scale games; it would be cool if that problem were solved, but at the same time, maybe it could be solved in a different way with existing technology. Since there are no gaming cards designed for this, how complex could the first tech-demo games actually be?

    • Anovoca
    • 5 years ago

    Euclideon + Oculus VR + Porn.

    That is all.

      • Vaughn
      • 5 years ago

      Get a woman and leave your hand alone.

        • Scrotos
        • 5 years ago

        Will she have her own Oculus as well?!?

          • Anovoca
          • 5 years ago

          Demolition man

      • Meadows
      • 5 years ago

      Only if said porn is otherwise illegal to perform.

    • Pancake
    • 5 years ago

    2000×1000 @ 30fps? That works out to 60 million pixels per second, or 10 million pixels per core, or about 300 clock cycles per pixel, assuming a nominal 3GHz core. That’s not a whole lot of processing that can be done in that time, especially if there are non-cache memory accesses, which is most likely if it uses some sort of ray-casting technique through multiple TB of data stored in some sort of tree. My background is in graphics and assembly-language programming.

    So there can’t be that much special going on. The demo doesn’t particularly impress me, as the scene is static. Reminds me of those “real time ray tracing” demos. I’ve got a feeling they’re not going to get dynamic lighting or animation working well any time soon, despite what the CEO says. Much less explosions or other effects we expect in games.

    • UnfriendlyFire
    • 5 years ago

    “the fact that they’re able to produce images with such fidelity using only a multi-core CPU, without a single flop contributed by fixed-function graphics hardware, is remarkable.”

    So that means people who are using anything less than 6-core CPUs are SOL, unless they’re planning on bringing the computations to GPUs.

    On that note… http://www.istartedsomething.com/20081126/direct3d-warp10-to-enable-you-to-play-dx10-crysis-using-software-renderer-only-albeit-slowly/

    Yeah, you can run Crysis in DX10 using Windows 7’s software rendering mode without needing a GPU. The Core 2 Duo running at 2.1GHz logged 2.1 average FPS, at 800×600 resolution and the lowest graphics settings.

    • Mad_Dane
    • 5 years ago

    A 6-core is the median? So he expects gamers to buy 12-core Xeons to play these games, or what? Your special sauce needs an update; it still looks fake. I’ll bet you 99.99% of the viewers of this homepage saw it in milliseconds. Hope you are not so condescending in your next video.

    • adisor19
    • 5 years ago

    Perhaps Nvidia will now make the NV2 😀

    A bit of history for the younglings: Nvidia’s first 3D chip, the NV1, was voxel-based, and because of that, they almost went bankrupt. 🙂

    Adi

      • Damage
      • 5 years ago

      NV1 used NURBS and splines, did it not?

        • chuckula
        • 5 years ago

        In your interview, were you able to wheedle them enough to figure out why they insist on calling voxels “atoms”?

        Additionally, they make a big deal about streaming data from a 3TB point cloud, but that just proves they can extract a subset of data in an efficient manner. How about compression so that you don’t actually need 3TB of storage for the whole point cloud to begin with?

          • Ninjitsu
          • 5 years ago

          From what I understood from the video, you’d need 3TB *after* compression.

        • Milo Burke
        • 5 years ago

        Reticulating splines?

      • Meadows
      • 5 years ago

      I wasn’t able to find anything supporting that claim. What I found is that the NV1 used “quadratic texture mapping”, but unfortunately I’m not sure what the hell that even means and I wasn’t able to find any screenshots.

      (The first accelerator I owned was a Voodoo around that time, I’ve never seen the NV1 in action.)

      • Rza79
      • 5 years ago

      No, NV1 wasn’t voxel-based. It wasn’t triangle-based but quadrilateral-based (just like the Sega Saturn).
      Games from NovaLogic like Comanche and Delta Force used a voxel engine, and so did the unique game Outcast.
      I played the first Comanche game when I was 14 and was blown away (and so was my poor 486).

      You can play an HTML5 version of the engine here:
      http://simulationcorner.net/index.php?page=comanche

        • Klimax
        • 5 years ago

        IIRC, Delta Force 1 and 2 allowed you to hide in grass. (When NovaLogic switched to standard rendering to get GPU acceleration for 3, this was no longer possible.)

    • The Dark One
    • 5 years ago

    It seems like they’ve done some really impressive work, but I still don’t buy the line that this is useful for gaming as a whole. It’s great for modelling real-world locations, and could probably be great for previsualization and virtual sets for movies, but games are often intentionally set in places that *don’t* mimic reality. Even games set in ‘realistic’ settings futz with scale and architecture to facilitate actual gameplay. Everything about Euclideon’s approach seems backwards to that.

      • SoberAddiction
      • 5 years ago

      Basically, what this guy ^ just said.

      • jihadjoe
      • 5 years ago

      It’s a tool, and can be used just like any other. No reason someone creating a red giant stage solar system can’t use voxel-based lighting to create a red sun.

      There’s already a lot of lighting tech that was originally made to help simulate realism that is currently being used to help build surreal game worlds.

        • The Dark One
        • 5 years ago

        “No reason someone creating a red giant stage solar system can’t use voxel-based lighting to create a red sun.”

        But the whole point of using laser scanners and getting your textures from reference photos is to sidestep the traditional content creation pipeline. Once you have to create the stuff yourself, the workload is going to balloon.

        • SoberAddiction
        • 5 years ago

        Ray Tracing + Euclideon Voxel tech FTW!

      • floodo1
      • 5 years ago

      He also talks about preserving the natural lighting instead of creating their own, which is great if you set your game at the precise time of day when you laser-scanned real life, I guess 🙂

        • Meadows
        • 5 years ago

        Theoretically, they could scan six different times of day and interpolate between them to create natural day-night cycles, as opposed to the washed out fullscreen lightbulb-orange haze that was GTA 4 (to mention one egregious counter-example).

    • WhatMeWorry
    • 5 years ago

    Really, what are the odds of the CEO having the last name Dell?

      • lilbuddhaman
      • 5 years ago

      I would dare say he initially got funding for the company simply because of his name.

    • cygnus1
    • 5 years ago

    I wonder if that multi-TB dataset is compressed, and if not, how compressible it is.

      • DPete27
      • 5 years ago

      I was just going to say this. We may be in the age of having more HDD space than any normal user would ever need in their lifetime, but games that take up multiple TB… that’s going to severely limit adoption.

        • cygnus1
        • 5 years ago

        I was also thinking about it from a distribution standpoint. The only way they could sell a multi TB game today would be to distribute it on hard disks. It will have to be compressible down to the 100GB range in order to distribute it on the BDXL format. Online distribution would also be severely hampered by pitiful speeds and data caps, even if they get it to the 100GB range.

          • godforsaken
          • 5 years ago

          I’ve never been a fan of the idea of streaming games, due to the added lag, but this may end up being a… quasi-streamed game. Think of it like Google Maps: you don’t need to download the whole world to see what you want to see, just what is on your monitor.

          I still have yet to convince myself this would be workable, but the idea is starting to fester, so this is mostly brainstorming:

          Now, there is the obvious problem that you have to wait for the map to load when you get to a certain point, but all they would need to do is load what could be within your viewpoint shortly, taking into account which way you are moving to load more in that direction than any other. Let it fill your memory, and offload where you are furthest from, to keep a set maximum amount of information in RAM (8GB is normal for gaming rigs now; it may be 64GB by the time this is realized).

          Also, they said that they take an image that’s down to the mm range and then extrapolate it to fill in the gaps. They would just need to upload the lattice, and the computer can do the extrapolating (at what point average ‘built for gaming’ desktops will have the rendering power for that, who knows). That would drastically lower the required data throughput to stream the visuals for the game/gallery/movie for the Rift/real-estate showcase.

          All you need is the engine, the AI, and some extremely common visuals (like in FPS games, it would have all your guns on file).

          Right now data speeds aren’t high enough, and the computer’s ability to render the missing parts of the lattice isn’t there (though that may be what the GPU gets used for if it becomes obsolete for this). No, I don’t really think what I’m saying is feasible, but it is an idea.

    • chuckula
    • 5 years ago

    Will continue to tentatively call it a scam until I can run it on my own system… but I will give them credit for at least having slick demos (the frame rate could use some work though).

      • Billstevens
      • 5 years ago

      What they are saying no longer sounds like a scam. It sounds like it isn’t able to take on AAA game graphics engines yet. It also sounds like it will present new and different problems than current methods do.

      It will be interesting to see what this technology is better and worse at in the gaming world. It’s obviously pretty good at large, complex static scenes.

        • ferdinandh
        • 5 years ago

        “What they are saying no longer sounds like a scam.”

        So they must be in talks with Intel, who would be the biggest supporter of a CPU-based engine that can compete with GPUs. Intel must be demoing all their 60+ core CPUs with this engine. Also, they must be lying that they don’t want to support GPUs yet, because it is really stupid not to want to use the most powerful chip in a computer. And we should forget that Intel tried to make a CPU-based GPU and failed with all their resources. On top of that, forget that pure software games have died and all games use the GPU.

      • Meadows
      • 5 years ago

      The framerate might triple or quadruple once their promised optimisations are completed and they actually start using the GPU as well. At least that’s my tentative guess.

      • Farting Bob
      • 5 years ago

      What they showed in that video and talked about were basically 3D scans of real-world areas. Other than in old-school point-and-click adventure games, that isn’t going to do much good in gaming when you cannot interact with the environment without a very jarring switch between the “background” scan you see in the video and the traditional effects used to manipulate objects in games.
      Also, as they showed, you cannot zoom in past the original scan without it looking shitty. So if the laser scanner thingy was 20 feet away from an object, getting any closer than 20 feet in-game would look bad (getting worse the more you zoom in) unless they had another scan of everything from closer up, and soon you start getting impractical with hundreds of terabytes of data for your incredibly lifelike static world.

      This technology is pretty amazing from what I’ve seen, but not suitable for gaming.

    • killadark
    • 5 years ago

    Hmm, if only the CPU is used, this would seriously hurt GPU sales.
    I mean, who would want a GPU if games ran so well on just a high-end CPU?

    This would also mean that AAA devs won’t have to have huge amounts of staff; only a few might be enough, and perhaps games could get cheaper 😛

      • SoberAddiction
      • 5 years ago

      While the voxel graphics that they showed might run well on six CPU cores, they have already stated that the resolution is 2000×1000 @ 32 FPS. (That seems like a weird resolution to me, but a quick search led me to digital camera forums.) If it is only getting 32 FPS on a six-core processor without animation, audio, AI, graphical ambiance, and whatever else goes on in the background, then we’re going to need more cores or make the jump to the GPU. This is just a demo, and I think that what they’re achieving is great. But don’t tell me that it’s OK to game with this at lower framerates than I’m used to. Of course, this might be why they came out with G-Sync/FreeSync.

        • chuckula
        • 5 years ago

        With highly reprogrammable modern GPUs, I don’t see why it’s not possible for the GPU to accelerate voxels as well as traditional polygons*. Having said that, the software stack to use the GPU in such a manner is not widespread and standardized à la D3D or OpenGL.

        * Yes, I saw NVidia’s Voxel demo too.

          • SoberAddiction
          • 5 years ago

          I’m sure that in its current state it’s easier/faster to run the math on a CPU than a GPU. Anything else I can think of to say is just agreeing w/ you at this point.

    • Anonymus_notthetroll
    • 5 years ago

    You could tell that those were NOT real-life images. Impressive, tho. Indeed.

    Here’s to another future “myface” buyout…

      • Forge
      • 5 years ago

      Same. I was looking at the pan as he said “Let’s look at the real world” and thinking that the lighting in the doorway was horribly wrong. There was extra light there, but no light source and no easy route for an off-camera source to be casting it.

      I think one of the most subtle things that 3D isn’t really doing yet is the very faint but universal haze of atmosphere. It’s hard to do right and hard to describe, but something our eyes are very used to.

      Still, cool tech. Not quite ‘real time’ though, when he’s chunking around that church interior at way under 30fps, and without any actors or action, no scripting outside the immediate area, etc, etc.

        • Liron
        • 5 years ago

        Maybe Neil Armstrong is also holding the camera here so he’s the extra light source.

          • chuckula
          • 5 years ago

          That punk Neil Armstrong just loves to dress up in his EVA suit and photobomb us!

      • Ninjitsu
      • 5 years ago

      Yeah, the moment he said “oh look real life buildings”, I was like “what? doesn’t seem right…”

        • Meadows
        • 5 years ago

        The straight movement of the camera gave it away before anything else.

          • Ninjitsu
          • 5 years ago

          For me, it was mainly just a weird uncanny-valley feel about the buildings that troubled my brain.

          Then of course, the camera.
