I don't know how easy this is for developers to use, but if you tried to release a game with this graphics stuff right now, people would (a) complain loudly about all the aliasing and (b) say it's terribly slow because extreme exists. W.r.t. (a), just throw some SMAA at it and don't reduce DoF res when going to 4K/8K optimized and all would be fine (it would look massively better for very little performance cost - yes, I checked, the DoF they're using isn't as heavy as all that). W.r.t. (b), 99.99% of gamers should not be using extreme. 4K high is barely any slower than 1080p extreme, fer crying out loud (and that's without resorting to 4K optimized). In this case, 4K downsampled to 1080p doesn't look as good as it should because they're downsampling wrong, but is anyone really going to say that 1080p extreme looks better than properly downsampled 4K high would?
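On the "properly downsampled" bit: the usual way a 4K-to-1080p resolve goes wrong is averaging the raw sRGB values (or doing a plain nearest/box resolve) instead of filtering in linear light. I don't know which of those this demo is actually doing, so treat this as a generic numpy sketch of what "proper" looks like, not a description of their pipeline:

```python
import numpy as np

def srgb_to_linear(c):
    # Inverse sRGB transfer function, c in [0, 1]
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

def linear_to_srgb(c):
    return np.where(c <= 0.0031308, c * 12.92, 1.055 * c ** (1 / 2.4) - 0.055)

def downsample_2x(img_srgb):
    """2x2 box downsample done in linear light rather than in sRGB.

    img_srgb: float array of shape (H, W, 3), H and W even, values in [0, 1].
    Averaging the raw sRGB values instead darkens and dulls bright detail,
    which is the classic "downsampled wrong" look.
    """
    lin = srgb_to_linear(img_srgb)
    h, w, ch = lin.shape
    lin = lin.reshape(h // 2, 2, w // 2, 2, ch).mean(axis=(1, 3))
    return linear_to_srgb(lin)
```

Swap the 2x2 box for a nicer filter (Mitchell, Lanczos) and it looks better still, but the linear-light part is what matters most.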
It looks like if they optimized for 4K more properly, it could look fantastic while getting 10+ fps on my 960. That means GP102 still might not be able to maintain vsynced 60, and the PS4 Pro would need a decent amount of customization in addition to the checkerboarding trickery to hit 30. IOW, it'd be about the maximum weight that anyone would want to run with the hardware on the market right now. Given that weight as a constraint, I think that 4K with settings somewhere between "4K optimized" and high is about as good as it's going to look, and about where most people with appropriate graphics power and framerate preferences would want to run it (regardless of screen resolution).
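Back-of-envelope version of that, assuming performance scales roughly linearly with FP32 throughput (it won't exactly, but it's fine as a sanity check) and plugging in the usual published TFLOPS figures:

```python
# Rough linear-with-TFLOPS extrapolation from a 10 fps GTX 960 baseline.
# TFLOPS numbers are approximate boost-clock figures; real scaling won't
# be this clean, so this is a sanity check, not a measurement.
baseline_fps = 10.0
baseline_tflops = 2.4      # GTX 960, roughly

gpus = {
    "GTX 1080 Ti (GP102)": 11.3,
    "PS4 Pro": 4.2,
}

for name, tflops in gpus.items():
    est = baseline_fps * tflops / baseline_tflops
    print(f"{name}: ~{est:.0f} fps at native 4K")

# GP102 lands in the ~45-50 fps range, i.e. short of a locked 60.
# PS4 Pro lands near ~18 fps native, so checkerboarding (roughly half the
# shaded pixels) plus some per-platform cuts is about what it takes to hit 30.
```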
Anyway, that kind of 4K target is a very different workload from 1080p extreme, and probably loads a system in substantially different ways. For instance, I very much doubt that 1080p extreme is spending much time limited by rasterization (geom throughput / ROPs). I don't want a benchmark for SSGI (which appears to be where extreme is burning most of that work); I want a benchmark for how this would behave in a proper game.
Someone's going to try to excuse extreme as a good test for future-proofing, and I have two responses to that. First, the future in question is just too damn far out. We've got some pretty powerful hardware in this thread and *not one of us* can maintain 1080p30 vsync even in this puny little test room. If I've got a 1080 Ti, am I going to be content with 1080p30 and too much DoF/MB to cover up the low pixel count? Hell ----ing no! If 1440p60 is as far as anyone wants to go before cranking up the settings instead (which seems low; I'd rather go at least 1440p120), that still means the future we're thinking of is cards in the 30+ TFLOPS ballpark (rough math at the bottom). We're gonna be waiting a while for those, and 9th gen consoles probably won't be able to wrangle that anyway.

Second, games don't blow their whole render budgets on SSGI now, and they aren't going to five years from now. There are reasons nobody does SSGI, and all indications are that you're looking at the second biggest one right now in all these abysmal framerate results. It still won't be the norm when we've got 30 TFLOPS cards, because at that point we'll be able to do what we want in world space (think VXGI), probably filling in just some fine details in screen space (low radius means it's massively faster). That'll both look better than SSGI can and be easier for artists to work with (arguably even more important).
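For the record, here's the crude scaling behind that 30+ TFLOPS ballpark: anchor on an ~11 TFLOPS card just barely holding 1080p30 in this scene (which, per this thread, is already optimistic) and assume cost scales with pixels pushed per second:

```python
# Where "30+ TFLOPS" comes from: scale the pixel rate and assume cost
# scales with it. Crude, but it pins down the order of magnitude.
anchor_tflops = 11.3                    # GTX 1080 Ti, roughly
anchor_rate = 1920 * 1080 * 30          # pixels per second at 1080p30

targets = {
    "1440p60":  2560 * 1440 * 60,
    "1440p120": 2560 * 1440 * 120,
}

for name, rate in targets.items():
    print(f"{name}: ~{anchor_tflops * rate / anchor_rate:.0f} TFLOPS needed")
# -> roughly 40 TFLOPS for 1440p60 and 80 for 1440p120, so "30+" is,
#    if anything, being generous toward extreme.
```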