
 
just brew it!
Gold subscriber
Administrator
Topic Author
Posts: 52313
Joined: Tue Aug 20, 2002 10:51 pm
Location: Somewhere, having a beer

Re: Supercomputer GPU rendering

Thu Jan 24, 2019 7:45 am

I'm not familiar with V-Ray, but unless you've got 300GB+ of system RAM, it isn't just going to spill over into system RAM; it's going to spill out to disk as well.
Nostalgia isn't what it used to be.
 
Aranarth
Graphmaster Gerbil
Posts: 1188
Joined: Tue Jan 17, 2006 6:56 am
Location: Big Rapids, Mich. (Est Time Zone)
Contact:

Re: Supercomputer GPU rendering

Thu Jan 24, 2019 7:58 am

Considering that 8K resolution in a video game needs about 16 to 32 GB of video memory, I can't think of a scene that would require 300 GB+ of video memory to produce.

Now if you wanted to know about multi-terabyte data arrays crunched by video cards, I'd look at how the computers that do weather forecasting handle memory usage across hundreds of video cards at once.

Now if I DID have a scene that large, what I would do is process it at a lower resolution that fits within my memory limits and then upscale to the required resolution. There are already ways to do that, thought up by people 10x smarter than me...
Main machine: Core I7 -2600K @ 4.0Ghz / 16 gig ram / Radeon RX 580 8gb / 500gb toshiba ssd / 5tb hd
Old machine: Core 2 quad Q6600 @ 3ghz / 8 gig ram / Radeon 7870 / 240 gb PNY ssd / 1tb HD
 
dragontamer5788
Gerbil Team Leader
Posts: 220
Joined: Mon May 06, 2013 8:39 am

Re: Supercomputer GPU rendering

Thu Jan 24, 2019 10:48 am

Aranarth wrote:
Considering that 8K resolution in a video game needs about 16 to 32 GB of video memory, I can't think of a scene that would require 300 GB+ of video memory to produce.


Video games are low-poly compared to cinema. A single Blender Benchmark scene at 1080p uses 12GB of RAM ("Cosmos Laundromat"), and that benchmark is trying to be representative of scenes from these movies.

Consider the CGI work that goes into modern movies: The Lion King (remake), Dumbo, Avengers: Infinity War. These movies look almost lifelike, and the CGI puppets stand next to real-life humans on camera. That requires more polys, more textures, and far deeper raytracing calculations than ever before.

I've never seen cinema's actual numbers, though. I'd be surprised if it were as big as 300GB, but I wouldn't be surprised to see it in the 50GB+ range (larger than the GPU's VRAM). In that case, it's pretty simple to handle: GPUs would simply perform the rendering in steps: first raytrace (which only needs vertex data), and then shade (which only needs texture data plus the information from raytracing). By alternating back and forth, less RAM is used at any one time, so each step can fit inside of VRAM.
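A toy sketch of why splitting the work helps (the gigabyte figures below are hypothetical, chosen only to illustrate the idea): if the trace pass needs only geometry and the shade pass needs only textures, peak VRAM use is the larger of the two rather than their sum.

```python
# Hypothetical sizes: a scene whose geometry + textures together exceed
# one card's VRAM, but whose passes individually fit.
GEOMETRY_GB = 20   # BVH + vertex data (assumed)
TEXTURES_GB = 22   # texture data (assumed)
VRAM_GB = 24       # e.g. a single Titan RTX

single_pass_peak = GEOMETRY_GB + TEXTURES_GB   # everything resident at once
two_pass_peak = max(GEOMETRY_GB, TEXTURES_GB)  # passes alternate residency

print(single_pass_peak > VRAM_GB)   # True: won't fit in one pass
print(two_pass_peak <= VRAM_GB)     # True: fits when split into steps
```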

Note that GPUs have a relatively slow link between them. Two Titans with 24+24GB of RAM aren't effectively 48GB, because the two video cards need to copy data back and forth between each other. It's 24GB of RAM per GPU node. So the BVH tree of the raytracer should never go above 24GB if you want good performance.
 
UberGerbil
Grand Admiral Gerbil
Posts: 10354
Joined: Thu Jun 19, 2003 3:11 pm

Re: Supercomputer GPU rendering

Thu Jan 24, 2019 1:58 pm

Back when "Cars" was the latest thing from Pixar (over a decade ago now), their engineers released a paper describing their techniques; towards the end (section 8) they mention a single ray-traced scene taking 414MB. Techniques have improved, but so have scene complexity and effects.
 
Aranarth
Graphmaster Gerbil
Posts: 1188
Joined: Tue Jan 17, 2006 6:56 am
Location: Big Rapids, Mich. (Est Time Zone)
Contact:

Re: Supercomputer GPU rendering

Thu Jan 24, 2019 3:14 pm

I find it cool that more and more TV and movie special effects are being done in game engines like Unreal.

There was a recent article on the immersive weather videos The Weather Channel is doing.
Main machine: Core I7 -2600K @ 4.0Ghz / 16 gig ram / Radeon RX 580 8gb / 500gb toshiba ssd / 5tb hd
Old machine: Core 2 quad Q6600 @ 3ghz / 8 gig ram / Radeon 7870 / 240 gb PNY ssd / 1tb HD
 
dragontamer5788
Gerbil Team Leader
Posts: 220
Joined: Mon May 06, 2013 8:39 am

Re: Supercomputer GPU rendering

Thu Jan 24, 2019 3:38 pm

Aranarth wrote:
I find it cool that more and more TV and movie special effects are being done in game engines like Unreal.

There was a recent article on the immersive weather videos The Weather Channel is doing.


If you want realtime graphics rendering, video game engines are basically the cream of the crop right now.

However, realtime graphics has major downsides. Shadows are often calculated ahead of time and are actually non-interactive (aka "baked shadows"). There are a whole bunch of other tricks that reduce the amount of work and processing involved. And even if you turn off all of those shortcuts, rasterization doesn't really follow reflections very well. (Even a normal wall reflects light: stand with a blue shirt close to a white wall, and the white wall takes on a slight shade of blue.)

Modeling those interactions between objects can only be done with Raytracing. And even with RTX cards, raytracing is a rough estimate pushed through an absurdly blurry denoising filter. To have any real accuracy, you need ~500+ samples per pixel at a minimum (maybe 5000+ samples per pixel for Hollywood-level effects). The 1-sample-per-pixel methodology pushed by RTX cards is impressive, but still far behind the movie studios.
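The sample counts matter because Monte Carlo noise only falls off as 1/sqrt(N). A quick sketch of that scaling (each "sample" below is just a stand-in random radiance value, not a real path trace):

```python
import random
import statistics

random.seed(0)

def pixel_estimate(n_samples):
    """Average n random 'radiance' samples, like one pixel of a path tracer."""
    return statistics.mean(random.random() for _ in range(n_samples))

def noise(n_samples, trials=200):
    """Estimate per-pixel noise as the spread across many re-renders."""
    return statistics.stdev(pixel_estimate(n_samples) for _ in range(trials))

n1, n500 = noise(1), noise(500)
# Going from 1 spp to 500 spp should cut noise by roughly sqrt(500) ~ 22x,
# which is why 1 spp output needs such an aggressive denoiser.
print(n1 / n500)
```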

UberGerbil wrote:
Back when "Cars" was the latest thing from Pixar (over a decade ago now), their engineers released a paper describing their techniques; towards the end (section 8) they mention a single ray-traced scene taking 414MB. Techniques have improved, but so have scene complexity and effects.


Thanks for the link, but the 414MB scene wasn't a realistic test. It was 15 clones of the same 1960s-style Cadillac, modeled with only 2155 NURBS patches, on a blank background. In fact, I doubt most assets are modeled with NURBS anymore; I'd expect most modern movies to be "sculpted" into a typical triangle mesh... it uses more RAM, but it has benefits for the artist doing the modeling.

In any case, a modern scene today would use a 20000x10000 HDRI environment map, which is 800MB alone (just for the skymap!). And that's just one element of a modern 3D scene.
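The 800MB figure checks out if you assume roughly 4 bytes per pixel (e.g. an RGBE-encoded .hdr file); uncompressed float32 RGB would be three times larger. The per-pixel sizes here are assumptions for illustration:

```python
# Back-of-envelope size of a 20000x10000 HDRI environment map.
width, height = 20_000, 10_000

bytes_rgbe = width * height * 4          # 4 bytes/pixel (RGBE-style encoding)
bytes_f32_rgb = width * height * 3 * 4   # 3 channels x 4-byte floats

print(bytes_rgbe / 1e6)     # 800.0 (MB)
print(bytes_f32_rgb / 1e9)  # 2.4 (GB)
```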

Either way, the PDF is a really cool insight into the computational requirements of Cars.
 
synthtel2
Gold subscriber
Gerbil Elite
Posts: 879
Joined: Mon Nov 16, 2015 10:30 am

Re: Supercomputer GPU rendering

Thu Jan 24, 2019 4:44 pm

dragontamer5788 wrote:
Modeling those interactions between objects can only be done with Raytracing.

Raytracing isn't the only way to do that, and for realtime rendering on current hardware it isn't even a good way to do it. Plenty of other realtime GI algorithms exist.
 
caconym
Gerbil
Posts: 28
Joined: Tue May 17, 2016 3:28 pm
Location: Reno, NV
Contact:

Re: Supercomputer GPU rendering

Thu Jan 24, 2019 5:27 pm

Production 3D assets eat a lot of RAM. I've worked with some of the models from Avatar. An average character model from that film has over 200 GB of textures applied to it. Renderers are generally pretty smart about using mipmaps to load only as much resolution as needed for the current frame, but you're still talking about scenes that need 48-64 GB to render, at a minimum. And that's a ten-year-old movie. Hero characters and environments for modern big-budget shows are 200-300% heavier. I'm talking about 1,000 or more 4096x4096, 16- or 32-bit image files applied to each of the Jaegers in Pacific Rim 2.
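Those numbers line up with simple arithmetic. Assuming (for illustration; the channel count and bit depth are my guesses, not from the post) four channels per texture, a thousand 4096x4096 maps land in exactly this range:

```python
# Rough texture budget for one hero asset: 1000 maps of 4096x4096 pixels.
files = 1000
pixels = 4096 * 4096
channels = 4  # assumed RGBA

gb_16bit = files * pixels * channels * 2 / 1e9  # 16-bit (2-byte) channels
gb_32bit = files * pixels * channels * 4 / 1e9  # 32-bit (4-byte) channels

print(round(gb_16bit))  # ~134 GB
print(round(gb_32bit))  # ~268 GB
```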

Within the past few years, GPU renderers like Redshift and V-Ray have started supporting "out-of-core" memory when rendering on GPUs, which just means that they're now capable of swapping data from system RAM to the GPU as needed. It's not as quick as rendering with everything contained in VRAM, but at least it doesn't result in a hard crash, like what typically used to happen when video RAM filled up. Bridging system RAM and VRAM in a way that's stable and fast enough to keep a GPU fed is apparently a pretty tough problem. Up until recently, you really had to either keep your data small enough to fit on a video card, or render with a CPU instead.
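The core of the out-of-core idea can be sketched in a few lines: stream the scene's assets through a fixed VRAM budget in batches instead of failing when the whole set doesn't fit. Everything here (asset sizes, budget, the batching policy) is a made-up illustration, not how Redshift or V-Ray actually schedule transfers:

```python
def render_out_of_core(asset_sizes_gb, vram_gb):
    """Count how many host-to-GPU upload rounds a scene needs under a budget."""
    batch, used, transfers = [], 0.0, 0
    for size in asset_sizes_gb:
        if used + size > vram_gb:   # batch full: "render" it, then evict
            batch, used = [], 0.0
            transfers += 1
        batch.append(size)
        used += size
    if batch:
        transfers += 1              # final partial batch
    return transfers

scene = [10, 8, 12, 6, 9, 11]       # 56 GB of assets (hypothetical)
print(render_out_of_core(scene, vram_gb=24))  # 3 upload rounds
```

The swapping cost is exactly those extra rounds: a scene that fits entirely in VRAM needs one upload, while an oversized one pays repeated transfers over the PCIe bus, which is why out-of-core rendering is slower but no longer a hard crash.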
3D art desktop: 6600k@4.2 - 32gb@2400 - 500gb m.2 850 EVO - GTX 1070 + 750ti (for PhysX) - Noctua NH-U9B - Phanteks P400
2D art laptop: Thinkpad Yoga - i7 4500u - 8gb RAM - 128gb SSD - way too many different Wacom EMR styluses
