morphine wrote:I don't want to Krogoth this presentation, but I want numbers. Cold, hard data. They used Mantle. How much faster is it? Did it get more scalable? What other problems did you find? What about sound?
Etc, etc, etc.
Remember when AMD released a 64-bit patch for Far Cry that didn't offer anything significant either? Yeah, at the moment I feel the same. Saying it's better without quantifying that just makes me ignore any news item about it.
Waco wrote:Saying it's better without quantifying that just makes me ignore any news item about it.
Savyg wrote:They kinda did, since they said it could handle nine times the draw calls of DirectX 11, and roughly 18 times as many as DX9.
As to actual performance benefit I suppose it depends how much is being drawn in the first place, but we'll have to see.
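For rough scale, Savyg's ratios can be run through some quick arithmetic. This is a sketch, not a benchmark: the ~3,000-call DX11 baseline is an assumption taken from the Huddy interview quoted further down the thread.

```python
# Quick arithmetic on the quoted ratios. The DX11 per-frame ceiling of
# ~3,000 draw calls is an assumed baseline (Huddy's figure), not a measurement.
dx11_calls = 3_000
mantle_calls = dx11_calls * 9   # "nine times the draw calls of DirectX 11"
dx9_calls = mantle_calls // 18  # "roughly 18 times as many as DX9"

print(mantle_calls)  # 27000
print(dx9_calls)     # 1500
```

If those ratios hold, Mantle's claimed ceiling lands well past the 10,000-20,000 calls per frame attributed to consoles elsewhere in the thread.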
sschaem wrote:But here is the kicker. PCs are draw-call limited.
Aphasia wrote:So, before they have final silicon out, you want to have hard numbers?
Meadows wrote:sschaem wrote:But here is the kicker. PCs are draw-call limited.
[citation needed]
'It can vary from almost nothing at all to a huge overhead,' says Huddy. 'If you're just rendering a screen full of pixels which are not terribly complicated, then typically a PC will do just as good a job as a console. These days we have so much horsepower on PCs that on high resolutions you see some pretty extraordinary-looking PC games, but one of the things that you don't see in PC gaming inside the software architecture is the kind of stuff that we see on consoles all the time.
On consoles, you can draw maybe 10,000 or 20,000 chunks of geometry in a frame, and you can do that at 30-60fps. On a PC, you can't typically draw more than 2-3,000 without getting into trouble with performance, and that's quite surprising - the PC can actually show you only a tenth of the performance if you need a separate batch for each draw call.
Now the PC software architecture – DirectX – has been kind of bent into shape to try to accommodate more and more of the batch calls in a sneaky kind of way. There are the multi-threaded display lists, which come up in DirectX 11 – that helps, but unsurprisingly it only gives you a factor of two at the very best, from what we've seen. And we also support instancing, which means that if you're going to draw a crate, you can actually draw ten crates just as fast as far as DirectX is concerned.
But it's still very hard to throw tremendous variety into a PC game. If you want each of your draw calls to be a bit different, then you can't get over about 2-3,000 draw calls typically - and certainly a maximum amount of 5,000. Games developers definitely have a need for that. Console games often use 10-20,000 draw calls per frame, and that's an easier way to let the artist's vision shine through.'
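Huddy's counts translate into a per-call CPU budget. Here is a minimal sketch, assuming (unrealistically) that draw-call submission alone fills the whole frame; the call counts and the 60 fps target come straight from the quote.

```python
# Implied per-draw-call CPU budget if submission alone consumed a frame.
# Figures (3,000 PC calls, 20,000 console calls, 60 fps) are from the quote.
FRAME_MS = 1000 / 60  # ~16.7 ms per frame at 60 fps

def max_us_per_call(draw_calls, frame_ms=FRAME_MS):
    """Microseconds each call may cost before calls alone blow the frame."""
    return frame_ms * 1000 / draw_calls

print(f"PC (3,000 calls):       {max_us_per_call(3_000):.1f} us/call")
print(f"Console (20,000 calls): {max_us_per_call(20_000):.1f} us/call")
```

Read the other way around: the quoted ceilings imply roughly 5-6 µs of overhead per call on the DX11 path versus under 1 µs on console, which is the gap Huddy is describing.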
jihadjoe wrote:Off of an interview with AMD's Richard Huddy
Sauce: http://www.bit-tech.net/hardware/graphi ... -directx/2
Waco wrote:jihadjoe wrote:Off of an interview with AMD's Richard Huddy
Sauce: http://www.bit-tech.net/hardware/graphi ... -directx/2
So the guy with a clear motive to bend the facts his way is the best source? Forgive me for being skeptical.
And, I would assume, not all calls are equal (even "draw calls" since I'm assuming there isn't just one DirectX11::Draw() function). Surely if this were such a huge issue with today's games in terms of development we'd have heard about it before (from someone not on the AMD payroll) right?
Waco wrote:I found this: http://forums.tripwireinteractive.com/s ... 260&page=1
A nice rant from someone at CodeMasters.
I have to wonder if DX11 has a properly threaded dispatcher these days -- do the same arguments hold up with modern drivers and DirectX / OpenGL?
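One way to see why multithreaded display lists top out around Huddy's "factor of two at the very best" is Amdahl's law. Under my assumption here, command recording parallelizes across threads but final submission stays serial; the 50% serial share below is illustrative, not a measured figure.

```python
# Amdahl's-law sketch: only the parallelizable share of per-frame API work
# speeds up with more recording threads; the serial share caps the gain.
# The 50% serial fraction is an illustrative assumption.

def speedup(serial_fraction, threads):
    """Overall speedup when (1 - serial_fraction) of the work uses `threads`."""
    return 1 / (serial_fraction + (1 - serial_fraction) / threads)

for t in (2, 4, 8, 100):
    print(f"{t:3d} threads -> {speedup(0.5, t):.2f}x")
```

With half the cost serial, even 100 recording threads only reach ~1.98x, so a "factor of two at the very best" is exactly what this shape of workload predicts.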
sschaem wrote:Even if DICE and AMD are lying about half of the benefit, it's an industry paradigm shift that we are witnessing.
So this is some seriously potent PR.
maxxcool wrote:sschaem wrote:Even if DICE and AMD are lying about half of the benefit, it's an industry paradigm shift that we are witnessing.
So this is some seriously potent PR.
Hardly. Glide died a horrid death; this will as well, within the same 4-5 years.
Airmantharp wrote:So does that process even need to be improved, given that DX11 is radically more efficient than its predecessors in the number of draw calls needed to render a particular scene?
ish718 wrote:I can't wait until BF4 and AMD R9 290x hit the market and the DX11 vs Mantle Benchmarks show up...
Ryan Shrout: Focusing back on the hardware side of things, in previous years' Quakecons we've had debates about what GPU was better for certain game engines, certain titles, and what features AMD and NVIDIA do better. You've said previously that with CPUs now, you don't worry about what features they have, as they do what you want them to do. Are we at that point with GPUs? Is the hardware race over (or almost over)?
John Carmack: I don't worry about the GPU hardware at all. I worry about the drivers a lot because there is a huge difference between what the hardware can do and what we can actually get out of it if we have to control it at a fine grain level. That's really been driven home by this past project by working at a very low level of the hardware on consoles and comparing that to these PCs that are true orders of magnitude more powerful than the PS3 or something, but struggle in many cases to keep up the same minimum latency. They have tons of bandwidth, they can render at many more multi-samples, multiple megapixels per screen, but to be able to go through the cycle and get feedback... “fence here, update this here, and draw them there...” it struggles to get that done in 16ms, and that is frustrating.
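Carmack's "fence here, update this here, draw them there" complaint is about losing CPU/GPU overlap. A toy model of that (all numbers are illustrative assumptions, not measurements):

```python
# When CPU and GPU work overlap, a frame costs roughly max(cpu, gpu); a
# mid-frame fence wait serializes them, so in the worst case it costs
# cpu + gpu. The 9 ms figures below are made up for illustration.
FRAME_BUDGET_MS = 16.7  # 60 fps

def frame_time_ms(cpu_ms, gpu_ms, mid_frame_sync):
    if not mid_frame_sync:
        return max(cpu_ms, gpu_ms)  # pipelined: work overlaps
    return cpu_ms + gpu_ms          # worst case: full pipeline drain

print(frame_time_ms(9, 9, False))  # inside the 16.7 ms budget
print(frame_time_ms(9, 9, True))   # misses the 60 fps budget
```

Even when each side is comfortably under budget on its own, one round trip that drains the pipeline can push the frame past 16 ms, which is the latency struggle Carmack describes.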