Xbox Project Scorpio is ready for 4K at 60 FPS

Here at TR we like to think we're pretty authoritative when it comes to PC hardware, but game consoles—even as close to PCs as they are these days—are a bit out of our wheelhouse. Thankfully, the guys over at Eurogamer's Digital Foundry are doing solid work on that front. The site got an exclusive look at Microsoft's upcoming Xbox, codenamed Project Scorpio, and it seems to be a pretty serious piece of kit.

First, the numbers that most gerbils will care about. The Scorpio SoC has eight custom x86 CPU cores running at 2.3 GHz. Those cores share a die with a customized Radeon GPU boasting 40 GCN compute units clocked at 1172 MHz. The CPU and GPU share 12 GB of GDDR5 memory running at 6.8 GT/s on a 384-bit bus, giving them 326 GB/s of shared memory bandwidth to play with. Primary storage consists of a 1TB 2.5" hard drive, same as the Xbox One S.
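
That bandwidth figure falls straight out of the bus width and transfer rate, for anyone who wants to check the math. Here's a quick sketch of the standard GDDR5 arithmetic:

# Peak bandwidth = bus width in bytes x transfer rate.
bus_width_bits = 384
transfer_rate_gts = 6.8  # giga-transfers per second

bandwidth_gbs = (bus_width_bits / 8) * transfer_rate_gts
print(f"{bandwidth_gbs:.1f} GB/s")  # 326.4 GB/s, the quoted figure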

Microsoft says "a lot of really specific custom work" went into Project Scorpio's SoC. Much of that work seems to have gone into the machine's unique x86-64 CPU cores. The cores are still derived from AMD's low-power Jaguar design (just like the Xbox One's), but they have apparently seen "extensive customization" to reduce latency and improve CPU-to-GPU coherency. Digital Foundry says the new cores ought to be 31% faster than those in the Xbox One, although that number happens to be the same as the clock-rate uplift from the old machine to the new one.

The Xbox One's GPU was often compared to the Bonaire chip aboard the Radeon R7 260. The two are similar in core configuration and potency, although the unit inside the Xbox One was hampered somewhat by the machine's use of relatively slow DDR3 memory. The GPU in Project Scorpio appears to be similar to the Polaris 10 chip in the Radeon RX 480, except even larger. It has 40 GCN compute units versus Polaris 10's 36 CUs, and its memory bus—now connected to GDDR5 memory—is half-again as wide. Both of those changes should help the chip accommodate Microsoft's aim of running games at native 4K resolution.
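
For a rough sense of scale, GCN's peak FP32 throughput is just shader count times two FLOPS per clock (one fused multiply-add) times clock speed. A back-of-the-envelope sketch, assuming the standard 64 shaders per GCN CU and the RX 480's 1266-MHz boost clock:

def gcn_peak_tflops(cus, clock_mhz, shaders_per_cu=64):
    # One FMA per shader per clock counts as two FLOPS.
    return cus * shaders_per_cu * 2 * clock_mhz * 1e6 / 1e12

print(f"Scorpio: {gcn_peak_tflops(40, 1172):.2f} TFLOPS")  # ~6.0
print(f"RX 480:  {gcn_peak_tflops(36, 1266):.2f} TFLOPS")  # ~5.8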

Digital Foundry got to play a tech demo based on Microsoft's lauded Forza Motorsport series running in 4K resolution and at 60 FPS, and came away reasonably impressed. Check out the whole story over at Eurogamer if you're curious what Microsoft's been up to with the Xbox.

Comments closed
    • B166ER
    • 3 years ago

    “clock rate uplift”

    Yeah. That.

    • tipoo
    • 3 years ago

    Hm, it sounds like draw calls were already just a few commands and cycles on the PS4 to start with:

    “Oles Shishkovstov: Let’s put it that way – we have seen scenarios where a single CPU core was fully loaded just by issuing draw-calls on Xbox One (and that’s surely on the ‘mono’ driver with several fast-path calls utilised). Then, the same scenario on PS4, it was actually difficult to find those draw-calls in the profile graphs, because they are using almost no time and are barely visible as a result. On PS4, most GPU commands are just a few DWORDs written into the command buffer, let’s say just a few CPU clock cycles.”

    This was before the XBO got DX12, but still interesting in that a few dwords and cycles did it on the base PS4.
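
    To make "a few DWORDs" concrete, here's a toy sketch of appending a draw packet to a command buffer. The packet layout and opcode are invented for illustration; real console command processors have their own formats, but the scale of the work is the same: a handful of 32-bit writes and a pointer bump.

import struct

command_buffer = bytearray()

def draw(vertex_count, instance_count=1, first_vertex=0):
    OP_DRAW = 0x1001  # hypothetical opcode, not a real GPU packet ID
    # Four DWORDs: the opcode plus three arguments.
    command_buffer.extend(struct.pack("<4I", OP_DRAW, vertex_count,
                                      instance_count, first_vertex))

draw(36)                    # one cube's worth of vertices
print(len(command_buffer))  # 16 bytes -- four DWORDs per draw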

    • Krogoth
    • 3 years ago

    4K is pretty much limited to media playback and legacy titles. It is better to say that Scorpio can handle 1920×1080 and 2560×1440 at 60Hz on current titles.

      • Airmantharp
      • 3 years ago

      They should certainly be able to hit 4K30 if games are tuned for it, but I agree that just being able to do 1080p60 outright is a blessing.

    • Ninjitsu
    • 3 years ago

    I can believe it’ll do 1080p at 60 fps and hold it; it might even manage that at 1440p under some circumstances.

    But I highly doubt it can do what even a 1080 Ti cannot. There’s a very high chance it’ll be scaled up from 1080p to 4K. I’ll be shocked if this is not the case.

    [quote<] He and his team did a really deep analysis across a breadth of titles with the goal that any 900p or better title would be able to easily run at frame-rate at 4K on Scorpio. ... Traditionally, games creators have to work to the characteristics of the console platform, but because Scorpio's design brief was to scale up existing titles to 4K, the hardware team could profile actual, shipping games and customise the design to fit common characteristics. ... By hand we went through them and then extrapolated what the work involved would be for that game to support a 4K render resolution [/quote<] It's hard to tell exactly what they're doing, but it looks like they're taking existing titles and scaling them up while actually rendering at 4K with appropriate textures. I'm still skeptical.

      • RAGEPRO
      • 3 years ago

      Appropriate textures? You still gain a hell of a lot of IQ by quadrupling the resolution, even if you change literally nothing else.

        • Ninjitsu
        • 3 years ago

        I kinda disagree; low-quality textures become painfully apparent at higher resolutions (play CoD: MW2 at 1080p or higher, for example).

        OTOH you can get away with really nice image quality by using high-quality textures at lower resolutions or with reductions to other aspects of the image, since textures are more memory/bandwidth-dependent than actually rendering stuff.

        So using high quality textures but scaling up resolution would actually be a smart way to do it in this case, imo.

          • RAGEPRO
          • 3 years ago

          Using high quality textures with a lower render resolution is a quick path to severe aliasing.

          Using low-quality textures at high resolution just means things get a little blurry (or if you’re not a fan of texture filtering and mip-mapping, a little pixel-y.)

          There’s no real need to make this trade-off usually, but given the choice I’ll take the latter every time. It doesn’t take a lot of pixels for a skilled texture artist to impart a great amount of detail to a low-quality model, but said model is still going to look jaggy as heck in low resolution.

            • synthtel2
            • 3 years ago

            Mipmapping solves that texture aliasing issue. 😉

            That’s not entirely true, because naive mipmapping doesn’t work right on normal maps, but there are [url=http://blog.selfshadow.com/2011/07/22/specular-showdown/<]decent fixes[/url<] for that and the problem with excessive texture resolution these days tends to just be memory use.
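
            For anyone who hasn't poked at this, a minimal sketch of building a mip chain with a 2x2 box filter. That plain averaging is exactly the "naive" part that goes wrong on normal maps: averaging unit-length normals shortens them, which is why specular detail washes out without the fixes linked above.

import numpy as np

def build_mip_chain(tex):
    # Each level halves the resolution by averaging 2x2 texel blocks.
    # Assumes a square, power-of-two texture.
    chain = [tex]
    while tex.shape[0] > 1:
        h, w = tex.shape[0] // 2, tex.shape[1] // 2
        tex = tex.reshape(h, 2, w, 2, -1).mean(axis=(1, 3))
        chain.append(tex)
    return chain

albedo = np.random.rand(256, 256, 3)  # stand-in for a real texture
print([m.shape[0] for m in build_mip_chain(albedo)])  # [256, 128, 64, ..., 1]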

            • RAGEPRO
            • 3 years ago

            Funny you should use that link, as I’ve been saying for a while now that Rockstar is the only company that can actually do mipmapping in open world games correctly. Then again, my only points of comparison are Fallout 4 (heh) and Dragon’s Dogma, so… 🙂

      • derFunkenstein
      • 3 years ago

      [quote<]appropriate textures[/quote<] Are they stirring their witch's brew? That's magical right there, that they could conjure up higher-detail textures from what they have. That's some CSI "Enhance" next-level shit right there. Also, what Zak said. Ask any PS4 Pro owner, or just try going from 900p to a higher resolution on your PC. I'm playing MLB The Show 17 on a PS4 Pro at 2560x1440, which performs better than the 1080p version on the old PS4. I only have a 1080p TV, though, so what I'm getting is "free" anti-aliasing by downscaling the image.

    • sophisticles
    • 3 years ago

    I think people are ignoring the fact that desktop Windows is a bloated beast with tons of overhead, compiled using generic compiler options, designed to run on a wide range of hardware. I doubt that it does any runtime cpu checking to pick an optimal execution path.

    With Scorpio, on the other hand, MS knows the exact hardware specifics and capabilities, and it will run a stripped-down embedded version of Windows that’s been compiled with flags targeting that specific hardware, with all the bloat removed.

    You’re talking about an OS and APIs that are “closer to the metal.” Similarly, a game developer knows what the Scorpio hardware consists of, so they can have an optimized code path.

      • Voldenuit
      • 3 years ago

      100% of a Jaguar CPU is not 100% of a hell of a lot.

      And we’ve all seen the RAM and core overhead figures just to run the OS and services on the PS4 and XBone. Hint: It’s not 0.

        • sophisticles
        • 3 years ago

        All one has to do is install a Linux distro using the default configuration and then custom-compile a kernel with that specific CPU’s optimizations to see what a simple recompile can do. Or take a look at Clear Linux, which is custom-built for Intel CPUs, to see how much faster it is across the board compared to other distros that use a generic build.

        Why is it my cell phone can record and play back 4k HDR video, but to do the same thing with a desktop pc you need a decent cpu and video card?

        Answer: because my cell phone runs a highly stripped down and optimized OS, that’s why.

        Same thing will happen with Scorpio.

          • Redocbew
          • 3 years ago

          No amount of optimization in software can get you more resources in hardware. Slow hardware is slow even if it’s running good code.

          • Andrew Lauritzen
          • 3 years ago

          > Why is it my cell phone can record and play back 4k HDR video, but to do the same thing with a desktop pc you need a decent cpu and video card? […] Answer: because my cell phone runs a highly stripped down and optimized OS, that’s why.

          I hate to burst your bubble but it has nothing to do with the OS… it’s simply fixed-function media hardware and if you have similar hardware on your PC (Kaby Lake or whatever), it’s the same…

          • tipoo
          • 3 years ago

          I think you’re overestimating the impact of a “stripped down” OS. People say this about ChromeOS, and as I experienced firsthand, it’s only true to a degree: a workload is a workload and a processor is a processor, and Windows was only taking a few percent of that with interrupts. The XBO also technically runs three operating systems, and one is the Windows kernel for UWP.

          The bigger thing is a fixed spec to target; developers know what to prune back until they can hit a steady 30 fps on most Scorpio games (we only saw 60 fps a few times, on racing games and such). There’s also the fact that most PC gamers prefer 60 fps to higher resolutions; an overclocked 480 can run many things at 30 fps at 4K.

          The phone thing is because of dedicated hardware. If you use Quicksync or similar you get similar on PC. You can’t just strip down an OS to the point of getting an underpowered chip to run 4K video, bitrate requirements are firm.

          • maxxcool
          • 3 years ago

          LOL @ 4K in software… NOT. It has a dedicated media accelerator; without it, you’re running at 4 fps.

      • Andrew Lauritzen
      • 3 years ago

      Maybe in the past…

      These days I have several cores sitting around doing nothing even if I run multiple games at once (sometimes I even forget I left a game running) and leave all my work stuff up in the background.

      The reality is if Windows runs fine on my 4W fanless ultrabook (and it does; very well) then it’s in the noise on my desktop. I’m pretty sure once “game mode” or whatever stupid marketing gimmick comes out people will realize there’s no mythical huge amount of performance they are being denied on desktop processors…

      And as noted – 100% of 8 core Jaguar is *maybe* equivalent to 2 modern Haswell+ cores, and that’s if you can get perfect utilization out of it with no overhead (which is actually impossible, but let’s pretend). There’s nothing you can do on a console CPU that doesn’t run just fine with zero additional optimization on a quad core PC CPU. There’s more tricks on the GPU front that you have access to on the consoles though.

        • tipoo
        • 3 years ago

        Six and a half-ish of which are usable by game code, btw. Two-ish Haswell cores sounds right; IIRC two Jaguar cores times a perfect 3.25 came out to about a fourth-gen i3.

      • TurtlePerson2
      • 3 years ago

      As an electrical engineer, I love the phrase “closer to the metal” or “deep metal” to refer to a computer code that takes advantage of the hardware more fully. Such phraseology makes no sense whatsoever if one knows how microprocessors are manufactured. All the transistors are at the bottom of the metal stack, so it’s impossible to do any computation without reaching the “deep metal.”

    • brucek2
    • 3 years ago

    I feel like I just read an article this morning demonstrating that of all current video cards, only the 1080ti can come close to reliable smooth 4K frames, and even that’s not a sure thing.

    So how is Scorpio doing it?

      • Laykun
      • 3 years ago

      Jittered checkerboard rendering, then upscaled. It’s very unlikely many games will render at true 4K.

    • bfar
    • 3 years ago

    This is the generation where consoles fell miles behind the PC. These mid-life updates are a sign of desperation.

    4K remains a ludicrous render target, even for today’s high-end GPUs. Upscaling techniques appear to have made major strides, but they’re no panacea. It’s interesting that the consoles are desperately moving toward 4K at all costs, while current PC hardware targets faster/smoother animation. Which is better: 1080p checkerboarded to 4K at 60/30 Hz, or native 1440p at 144 Hz? There are a lot of games that play far better on the latter.

      • swaaye
      • 3 years ago

      4K is a powerful selling point buzzword in TV land. That’s the motivation.

      I promise you that most people have no idea what would be best for their gaming enjoyment. 🙂

      • DPete27
      • 3 years ago

      Considering most people plug consoles into TVs, 4K @ 60 Hz seems like a perfectly normal target, since that’s where the TV market is headed.

      • Voldenuit
      • 3 years ago

      [quote<]It's interesting that the consoles are desperately moving toward 4k, at all costs while current pc hardware targets faster/smoother animation.[/quote<] This is probably because there aren't any TVs that actually take 120+ Hz input (120 Hz TVs only double/interpolate 60 Hz signals), nor do any TVs support VRR. Console makers can only make devices that target available display hardware, after all.

      • synthtel2
      • 3 years ago

      The checkerboarding in question here lights half as many pixels as 4K, which is still twice as many pixels as 1080p. It’s just a touch more than 1440p in non-post per-pixel cost – if this checkerboarding gets a game to 60 fps, 1440p isn’t going to make it to 80, much less 144.

      Edit: I should have read that presentation more closely; it’s a bit bigger of a difference, because they’re sampling geometry at every final pixel to get the resolve right. This means writing out a bit more data in rasterization and a slightly more complex resolve.
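
      The pixel counts behind that claim, for anyone who wants to check them:

full_4k = 3840 * 2160        # 8,294,400 pixels
checkerboard = full_4k // 2  # 4,147,200 shaded per frame
p1080 = 1920 * 1080          # 2,073,600
p1440 = 2560 * 1440          # 3,686,400

print(checkerboard / p1080)  # 2.0   -- twice 1080p
print(checkerboard / p1440)  # 1.125 -- a touch more than 1440p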

    • Voldenuit
    • 3 years ago

    Also worth noting that the PS4 Pro has 36 (Polaris) CUs, so the 40 CU count on Scorpio is not sounding like a big bump, especially given the $700 price point that many publications are speculating for Scorpio.

    And yes, most “4K” PS4 Pro games are doing sparse/checkerboard shading, but PC games have also done similar things with foveal/centered rendering, and it’s not a panacea. It hasn’t magically let underpowered hardware hit “4K”, even when it lessens the load for each frame.

      • Antimatter
      • 3 years ago

      The Xbox is almost 50% faster. The PS4 Pro GPU has a clock speed of 911 MHz (~4.2 TFLOPS) vs. the Xbox’s 1172 MHz (~6 TFLOPS).

        • Voldenuit
        • 3 years ago

        Good point, I saw the 1172 MHz vs 911 MHz and my head went “eh, only a couple hundred MHz”, but the gains do add up when you stack them with the CU count (+11%), any architectural improvements if they do go Vega or Polaris+, and memory bandwidth (+49%).
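
        Those gains multiply rather than add, which is how "only a couple hundred MHz" turns into a sizable gap. A quick sketch of the compute side alone:

cu_gain = 40 / 36        # +11% compute units
clock_gain = 1172 / 911  # +29% clock speed

compute_gain = cu_gain * clock_gain
print(f"+{(compute_gain - 1) * 100:.0f}%")  # +43% -- i.e. ~6.0 vs. ~4.2 TFLOPS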

      • synthtel2
      • 3 years ago

      Which PC games are doing foveated rendering? It’s quite a bit tougher to make that work right than the PS4 Pro checkerboarding. They’re not mutually exclusive, though.

        • Voldenuit
        • 3 years ago

        My bad, I actually meant adaptive/dynamic resolution, as seen in Dishonored 2, but forgot the exact term.

        Watch Dogs 2 on PC also uses checkerboarding, from what I’ve read.

          • synthtel2
          • 3 years ago

          Ah, no worries. Adaptive resolution doesn’t really help out with quality/performance ratio though, unlike lots of these other advances (and it’s been used for probably half a decade now).

          Nvidia’s SMP can help out a lot with foveated rendering, so I was thinking that might have prompted someone to get in on it.

    • I.S.T.
    • 3 years ago

    As for the tweaks to the CPU? Prolly just altering the memory controller and bug fixes in the CPU cores. The latter is a worthy thing alone, IMO.

      • NTMBK
      • 3 years ago

      Did Jaguar have that many bugs?

        • Bumper
        • 3 years ago

        I was wondering the same thing. They did say they ran simulations to identify areas that could be improved. Any ideas about what that could be? (Yes, I am lazy and did not read even a little about the jaguar arch outside of front page specs. I’m not going to spend three days deciphering all that computer engineering. I appreciate answers though and can probably understand them.)

          • tipoo
          • 3 years ago

          I’m curious too. They said it was specific to 60 big games?

        • I.S.T.
        • 3 years ago

        It’s more that almost every processor ever has had them. Fixing any of them is a good thing.

    • I.S.T.
    • 3 years ago

    I have to wonder exactly what they’ve done with the GCN GPU in this one. The ones in the current consoles weren’t exactly GCN 1.0/1.1/1.2; they had features from all three.

    • Andrew Lauritzen
    • 3 years ago

    > “We’d hate to build this GPU and then end up having to be memory-starved.”

    12GB DRAM, okay. But how do I stream my lovely “4k assets” in from a conventional HDD? :S Or are we back into load screen city?

    This is a constant weakness of consoles in my opinion right now. You need something between 300GB/sec DRAM and… maybe 100MB/s storage. PC now has *several* levels in between those in the hierarchy.

    Most disappointing thing is we still have to live with a garbage CPU for another generation. So don’t expect your >4 cores to see much use in most games until consoles start to suck less in this area :S

      • cegras
      • 3 years ago

      It seems that the hardware DX12 dispatcher is meant to alleviate CPU load?

        • Andrew Lauritzen
        • 3 years ago

        Sure it’s nice and will help a bit but the rendering API overhead is already pretty small on PC now as well. The hardware dispatch is more awesome in terms of indirect GPU-generated rendering stuff really.

        But we still need CPU for… like… the rest of the game 🙂

        • tipoo
        • 3 years ago

        API call load was already in single digits though. So halving that is a nicety, but not a game changer.

      • tipoo
      • 3 years ago

      They said the drive is faster but didn’t touch on it much; not sure if it’s just 7200 RPM or some sort of hybrid drive with a NAND cache.

        • Andrew Lauritzen
        • 3 years ago

        Yeah, I was already giving it that benefit in my “100MB/s” figure… even a 2x-faster drive wouldn’t change the massive delta of having neither conventional DRAM nor an SSD here, unfortunately.

        And unfortunately NAND caches don’t really help a lot here. With streaming you’re doing nice sequential accesses but over large-ish portions of the data set over time. The majority of reuse is already captured by the streamer itself leaving things in DRAM.

        • LostCat
        • 3 years ago

        I hope so. The 500GB HDD on the X1 is embarrassing. SSHDs should be default on any gaming kit these days.

        (I just got a Firecuda to go with my SSDs. 🙂 )

      • jihadjoe
      • 3 years ago

      It could clearly use some Optane!

        • tipoo
        • 3 years ago

        But would also benefit from 5x the NAND for the same price even more 😉

    • derFunkenstein
    • 3 years ago

    Everyone here is focusing on the CUs and 4K, when in reality I think Polaris has been limited by memory bandwidth. Getting a 50% faster memory bus will probably make a huge difference.

    Although now that MS does [url=http://www.xbox.com/en-US/games/xbox-play-anywhere<]Play Anywhere[/url<] for its exclusives, there's basically no reason to own an Xbox. I have Halo Wars 2, Gears 4, and Killer Instinct and they all run fine on my PC. Xbox is redundant.

    • synthtel2
    • 3 years ago

    Before saying a 480 doing 4K doesn’t check out, mind that this is a touch more powerful than a 480 and will usually only be going for 4K30. I don’t doubt at all that they can nominally pull that off, the question is just how stuff other than resolution scales. They say it will, but I’m skeptical.

    Why do they want a whole extra gig for the OS now? That’s like 30 4K RGBA8 framebuffers, and 3 GB was already a lot.

    • Mat3
    • 3 years ago

    Would have been so much cooler if it used 4-6 Ryzen 3+ GHz cores (separate die, same interposer), HBM2 memory, and a beefier GPU (64 CUs). Yeah, I know, cost and delivery schedule and all that, but do-able, I think. It could have been good enough as their last console for a very long time.

      • Chrispy_
      • 3 years ago

      Enjoy your $1,200 Xbox with a 1-year warranty.

      • DancinJack
      • 3 years ago

      Not doable. Sorry Mat3

    • Voldenuit
    • 3 years ago

    RX 480 has 36 CUs. Scorpio has 40. Given that the 480 has a hard time hitting 30 fps at 4K, claiming 60 fps from a 10% bump in CUs seems… optimistic.

      • DPete27
      • 3 years ago

      1) Don’t try to compare performance to PC environment
      2) If the past is anything to go by, they most likely won’t natively render in 4k. More likely they’d do 2k and upscale.

      • Pville_Piper
      • 3 years ago

      That is true… But generally you have better optimization on consoles so it might be possible.

      The thing I would worry about is longevity. At the rate graphics and games in general are eating up resources, how long will it be before that console can’t play at 60 FPS?

      Nope… Not giving up my PC and trading it for another box that you will need to upgrade every 2 to 3 years just to play the latest triple A titles.

        • Voldenuit
        • 3 years ago

        [quote<]That is true... But generally you have better optimization on consoles so it might be possible. [/quote<] Not being sarcastic to you personally, but these are the same optimizations that give us 20 fps dips in BoTW and framerate dips on both major consoles in nearly all their AAA games. One of the most frequently addressed concerns in reviews of console games is "does it have a steady 30/60 fps?", and the answer is almost always, "not quite". EDIT: Also worth mentioning that a [i<]lot[/i<] of console games are actually rendering at 900p and upscaling to 1080p and [i<]still[/i<] can't hit framerate targets.

      • the
      • 3 years ago

      There should be an increase in ROPs and L2 cache to go along with the wider memory bus, considering how AMD scales things. These should help tremendously at higher resolutions. I however remain very skeptical of the 60 fps claim. Even modern cards on the PC have trouble achieving such frame rates at such high resolutions in current games.

      • Chrispy_
      • 3 years ago

      The console version of a game has no detail sliders. Typically a console game runs all the detail sliders at what would be the equivalent of [i<]low[/i<] or [i<]medium[/i<]. True, an RX 480 cannot run 4K at [i<]ultra[/i<] settings very well, but on a PC you wouldn't normally crank settings up to [i<]ultra[/i<] if you're only getting 20 fps. There are several games that I turn down from [i<]ultra[/i<] to a mix of [i<]medium[/i<] and [i<]high[/i<] so that I can reach 144 fps on my gaming monitor, despite the games barely breaking 60 fps when running on [i<]ultra[/i<] settings.

      • homerdog
      • 3 years ago

      I would rather have 2K resolution with better details and lighting. Fortunately as a PC gamer I can choose to do this.

    • superjawes
    • 3 years ago

    4K at 60 FPS*

    [i<]*Some Restrictions May Apply*[/i<]

      • tipoo
      • 3 years ago

      Reminds me of the old Regginator saying “1080p, check that box”, about the Wii U.

    • DragonDaddyBear
    • 3 years ago

    As a gaming system (EDIT: in terms of performance), the Xbox One is clearly short of the PS4. But I’m not the gamer I once was. Taking a step back and looking at some of the other features is what made it much more attractive to me. I really hope this new version makes it more attractive to the more die hard gamers because I’m really happy with my One S and I would love to see XBox One Scorpio get more attention.

    EDIT:
    When I say other features that interested me I’m talking better integrated streaming with Windows 10 (I use that more than the actual console thanks to the proximity of the console to my children and lack of stairs my 1 year old thinks she can walk down), permissions for my children and integration into my Windows 10 systems, integration of controller support for my PC, 4K Blu-ray with HDR (cheapest at the time), convergence of my Kodi NUC (Plex and HD Homerun are on the XBox One), Roku, and Blu-ray player into one device, etc. As a value proposition I get more with less work out of my XBox than I could with a PS4.

      • tanker27
      • 3 years ago

        Hmmm, I don’t know about that. I prefer the Xbox OS over the PS’s. It’s more user-friendly and intuitive. But HW-wise, yeah, I would choose a PS4.

        • Growler
        • 3 years ago

        That, and the XBox One controller is much better than the PS4 controller. It feels quite a bit more sturdy, and fits my hands better.

        If I have a choice of getting a game on the XBone or the PS4, I’ll generally go with the XBone even if the PS4 version would probably look a little better.

          • derFunkenstein
          • 3 years ago

          The biggest improvement of the Xbone controller over the Dual Shock 4 is battery life. I feel like I have to plug the controller into my PS4 every time I’m done, where I could just set it on the coffee table with the Xbox. Part of it is that the Xbox controller uses AA instead of rechargeable batteries and part of it is that the PS4 can only get around 4-5 hours on a charge.

          • David
          • 3 years ago

          I’ve hated every PS controller to date and enjoyed every Xbox one. Even The Duke. While the Xbox One controller is an improvement, Sony nailed the controller with the PS4. It’s easily the most comfortable controller I’ve used.

      • HisDivineOrder
      • 3 years ago

      The Xbox One, as a piece of tech where you ignore the technical specs, has a better UI than the PS4.

      But as a gaming device it really falls short for me. It doesn’t:

      1) Deliver the best gaming experience even among similarly priced devices with a similar release dates
      2) Offer the best first party exclusives (especially in 2017 where Xbox has completely fallen away from PS4’s Nioh, NieR, and Horizon bonanza).
      3) Give me anything my Gaming PC can’t do a LOT better (especially again with exclusives).

      As a result, Xbox One is just a non-starter. Even a 4K Xbox One has me asking, “What games would I be playing on it?” Any Xbox exclusive I’d rather play on my PC. It feels like Scorpio is for the gamer who should be gaming on PC but is too lazy. Even its likely pricing is going to make PC gaming look the cheaper longterm option…

        • tanker27
        • 3 years ago

        Being a predominantly PC gamer, I cannot argue your points but simply agree with them. Hell, for full disclosure, I don’t even own either of these consoles.

        • I.S.T.
        • 3 years ago

        NieR’s on PC… Unless you mean exclusive within the realm of consoles.

          • Voldenuit
          • 3 years ago

          His point stands, though. If you want to play NieR: Automata (which I highly, [i<]highly[/i<] recommend), your options right now are PS4 or Steam.

            • I.S.T.
            • 3 years ago

            Which is why I asked if he meant just on consoles, or altogether. It’s a multiplatform game. If you are strictly a console player, then yeah, you’re doublescrewed.

        • DragonDaddyBear
        • 3 years ago

        I think I was clear: I agree that for pure gaming it’s not the best. It’s all of the stuff around it that makes the One S so attractive for me and my family.

    • Anovoca
    • 3 years ago

    The SoC aside, the biggest hardware advantage this console will have over the PS4 Pro, for most everyday consumers, is the 4K Blu-ray player. Most people capable of (or even caring to, for that matter) discerning the difference in performance between the two are PC hardware enthusiasts who don’t do the brunt of their gaming on consoles anyway. In the end, the console with the most exclusive titles and bullet points on the ad sheet will win.

      • tipoo
      • 3 years ago

      Still baffling that the company that charges licenses for UHD BD… didn’t have it in their own system. I guess they only just made the $400 price point and wanted it rounded, but still.

    • NeelyCam
    • 3 years ago

    4K at 60FPS? That’s amazing!!!

    Consoles 3 – Gaming PCs 0

      • ImSpartacus
      • 3 years ago

      Yeah, these consoles better slow down so the pc can keep up.

        • tipoo
        • 3 years ago

        I can’t wait to get something a bit faster than a 480! And when do we get Jaguar?!

        • Aquilino
        • 3 years ago

        It’s worse: consoles are getting PC-ized. One of these days they’re getting mods and shit like that. Denigrating.

        No, but seriously. They gave us FPS with auto-healing, asymmetric X/Y axes, checkpoints everywhere… let’s give them something in return, send them our bad hombres: fill those consoles with neon to the brim.

    • tipoo
    • 3 years ago

    Most interesting unexpected thing for me was that they “moved the DX12 API into hardware”, supposedly having dedicated hardware for draw calls, reducing DX12 load on the CPU by half (wasn’t Nvidia planning an ARM core in their GPUs that did that? Was that canned?)

    I am, however, curious what the total load reduction from moving DX12 features into hardware actually is. A 50% reduction in DX12 CPU use is not the same as total game CPU use; the API was probably using single-digit percentages already with DX12’s overhead reductions.

    Forums are running away with “CPU use halved!” it looks like. Just the API CPU use is halved.

      • chuckula
      • 3 years ago

      Always be careful when they trumpet reductions in draw call overhead as the be-all end-all metric.

      First, the vast majority of half-way well-written games were already minimizing draw calls even before DX12, which is one reason that DX12 games aren’t always insanely faster than even average DX11 equivalents.

      Second, DX12 was supposed to only be standardizing the “advantages” that consoles were purportedly already enjoying with their “low level” software APIs… meaning that draw call overhead was [i<]supposed[/i<] to have already been a non-issue even in older consoles to begin with.

        • tipoo
        • 3 years ago

        That was my thinking, people are mistakenly trumpeting this as 50% reduced CPU load, when really it’s more like a 50% reduction in the what, 5, 10, 15 (?) percent CPU load DX12 draw calls were taking.
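
        That's Amdahl's law in miniature. A sketch using tipoo's middle guess of a 10% share (an assumption, not a measured figure):

def overall_speedup(fraction, local_speedup):
    # Amdahl's law: only `fraction` of the work gets `local_speedup` faster.
    return 1 / ((1 - fraction) + fraction / local_speedup)

api_share = 0.10  # assumed share of frame CPU time spent on draw calls
print(f"{overall_speedup(api_share, 2):.3f}x")  # 1.053x -- about 5% overall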

    • mcnabney
    • 3 years ago

    A slightly better than 480 GPU running 4K at 60fps.

    If true, how amazingly inefficient is Windows?

      • Jeff Kampman
      • 3 years ago

      It probably has nothing to do with Windows and everything to do with the fact that developers will likely have the opportunity to run Xbox One software on much more powerful hardware.

        • DPete27
        • 3 years ago

        Ehhh, I beg to differ. Consoles have been doing more with fewer resources than PCs for… ever. That’s the benefit of designing with a singular system in mind (though aside from exclusives, I suppose it’s four console systems now: XB1/Scorpio/PS4/PS4 Pro).

          • tipoo
          • 3 years ago

          To a point, but it’s not squeezing a not-4K PC chip into a 4K chip just through elusive single-system optimizations. It’s a bigger chip than the RX 480, and most intensive games run at 30 fps on console; an overclocked 480 can already run many games at 4K on those terms. It’s just that most PC gamers would prefer turning down the resolution for at least 60 fps.

          The potato masher project was interesting: matching a PC as closely as possible to a PS4 and seeing how they aged over time. The assumption that the PC would be left behind was mostly unfounded.

          • Laykun
          • 3 years ago

          Yes, and they’ve also had lower-level APIs to deal with forever, too. Generally, with GPU performance, I don’t think you’re going to find much if any gap between console and PC with both platforms programmed for optimally (i.e. not a shitty console port), as it’s actually relatively easy to peg a desktop GPU at 100% utilization. CPU, on the other hand, is where you’re likely to see better gains, since you don’t have to deal with your threads being stalled by system services and the like.

          With things like DX12 and Vulkan, the idea that you can just squeeze more out of a console is becoming less and less of a reality.

      • tipoo
      • 3 years ago

      Running *what* at 4K at 60 fps is the question you should ask yourself. It’s still running most games at 30 fps, with racing games and other simpler games at 60. PC games could run at 4K on similar hardware; most PC gamers just prioritize framerates. And then there’s hand-tuning by devs to hit that steady framerate.

      DF found the 480 could run SW Battlefront at similar settings to the PS4 Pro at 30 fps; it’s just that most PC gamers go for 60 or more.

      With CPUs only 30% faster and still Jaguar, that’ll limit most 60 fps upgrades.

      • BobbinThreadbare
      • 3 years ago

      They’re just going to run the games at low quality

        • Anovoca
        • 3 years ago

        Yeah, sadly there is no metric to measure texture/shading quality. Even on PC games, saying you can run a game on “Ultra” settings is an entirely subjective equation from one engine to another.

      • maxxcool
      • 3 years ago

      lol, with lower texture quality, less AA, and less physics, 4K should be easy for the Xbox.

      • Andrew Lauritzen
      • 3 years ago

      For *new* games there’s basically zero chance that this is “brute force” 4k shading all the pixels. More likely checkerboard and fancy reconstruction as has been the norm on other platforms. Frankly it’s a waste of resources to render “natively” at 4k.

      It’s worth keeping that in mind if you’re comparing “set resolution = 4K!” in classic PC games vs. more advanced decoupled shading stuff coming down the pipe.

      Oh wait, the screenshot of Forza even says 4:2x EQAA on it 😀 Haha, “native 4k” now officially a marketing term. Again, that’s a good thing, but just mind your comparisons:
      [url<]https://cdn.gamer-network.net/2017/screenshots/Forza-Tech-Screenshot.png[/url<]

        • tipoo
        • 3 years ago

        Oh yeah, sparse rendering like the PS4 Pro was confirmed for it.

          • Andrew Lauritzen
          • 3 years ago

          Right, and that’s a good use of resources. Just don’t compare that to the cost of doing the dumb brute force thing on the PC, which I imagine will become the legacy way to do it there as well. There’s no reason you shouldn’t be doing smarter reconstruction on *both* platforms really… you’re just throwing performance and quality away shading every pixel.

            • tipoo
            • 3 years ago

            Agreed, I’d love to see this stuff on PC. Same with FP16; I think some hardware supported it back in, what, the Nvidia FX days? But DX didn’t for a while, so that never went anywhere, while it was always a good idea when not everything needs full precision. PC would have led the charge on that with the proper foresight and integration.

            • Andrew Lauritzen
            • 3 years ago

            There’s PC hardware that supports double-speed fp16 (Skylake), but unfortunately some folks are using it as a product segmentation thing to protect deep learning server hardware (NVIDIA) so we’ll see how ubiquitous it becomes.

            The way it was exposed in HLSL back in the day is not appropriate for modern uses, but the more modern version is exposed via the (admittedly somewhat messy) min16float stuff in HLSL. Not a lot of use of that yet though.
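
            For a rough feel of what half precision gives up, numpy's float16 follows the same IEEE binary16 layout that GPU fp16 paths use:

import numpy as np

print(float(np.float16(3.14159265)))  # 3.140625 -- roughly 3 decimal digits survive
print(np.finfo(np.float16).max)       # 65504.0  -- easy to overflow in HDR math
print(np.finfo(np.float16).eps)       # ~0.001   -- the price of a 10-bit mantissa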

            • RAGEPRO
            • 3 years ago

            [url=https://techreport.com/news/31516/amd-next-graphics-cards-will-be-called-radeon-rx-vega<]AMD Vega will support double-rate (packed) half-precision, too.[/url<] Maybe Intel and AMD can force Nvidia's hand on the matter. Then again, that was supposed to happen with Freesync...

        • RAGEPRO
        • 3 years ago

        That’s an interesting point of view you have Andrew.

        I have to admit I don’t really have any experience with the sort of thing you’re talking about (with regard to sparse rendering, checkerboarding, and similar technologies) but historically speaking things that are “just as good as super-sampling”, well, aren’t. In particular I haven’t been impressed with post-processing AA methods.

        The game I play the most, I play in 4K by “brute force” shading all the pixels, and it looks amazing. I might just be old-fashioned, but I do find it hard to believe that I could get to this level of IQ without doing the work. You have any resources I could look at to soothe my concerns?

          • Andrew Lauritzen
          • 3 years ago

          Quick reply, but it’s not a question of “does supersampling look better than not supersampling” – obviously it does, but there are hugely diminishing returns, and thus for a fixed amount of performance/hardware resources, it’s usually not a good use of performance to push it too far. For reference, TAA is effectively super-sampling, and all modern engines make use of that as an input to the final resolve shader.

          For some good comparisons of how you can look nearly as good as raw super-sampling/native resolution shading for a *hell* of a lot less performance, check out here:
          [url<]http://www.eurogamer.net/articles/digitalfoundry-2016-bf1-and-fifa-17-frostbite-shines-on-ps4-pro[/url<]
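
          The accumulation at the heart of TAA is a short loop; here's a minimal sketch of the exponential blend of jittered frames (production implementations add reprojection and neighborhood clamping to fight ghosting):

import numpy as np

def taa_resolve(history, current, alpha=0.1):
    # Blend this frame's jittered samples into the accumulated history;
    # over time the history converges toward a supersampled image.
    return (1 - alpha) * history + alpha * current

history = np.zeros((1080, 1920, 3))
for _ in range(16):                        # 16 jittered frames
    frame = np.random.rand(1080, 1920, 3)  # stand-in for a rendered frame
    history = taa_resolve(history, frame)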

            • RAGEPRO
            • 3 years ago

            [quote=”DigitalFoundry”<]BF1's temporal anti-aliasing solution remains in effect, adding some softness to the presentation, this effect is added to by the utilisation of checkerboard upscaling. It's interesting to note that checkerboard artefacts are visible, but only really on still shots we extracted from our captures - the anomalies only occur in motion and the 'sample and hold' technology used by modern displays (which reduces perceived resolution in motion) tends to hide this effect rather well.[/quote<]Haha. Yeah man, I dunno. I think I'll stick to my "dumb, legacy, brute force" method. 😉

            • Andrew Lauritzen
            • 3 years ago

            You’re still sorta missing the trade-off point, though. (Also note they’re talking about TAA in general there, not specifically upsampling. You’re getting TAA on PC as well even when rendering “natively,” and that’s a good thing; otherwise it looks like aliased trash :).

            The question is how much quality you can get *for a fixed amount of FLOPS* or similar: the overall quality you can get from applying those FLOPS to other, more useful things, including but not limited to putting samples where they can do the most good instead of literally running another shader invocation that does nothing but interpolate the same texels again. Decoupled shading and reconstruction are not really a debatable thing… they’re clearly the direction everyone has gone in both real-time and offline rendering because they get you better bang for your buck.

            If all other things were equal would you not just want more samples? Sure. But that’s not how you design renderers. In reality it’s more like – do you want supersampling @ 30fps or almost-as-good at 60fps? I know which I’d choose every time 🙂

            • RAGEPRO
            • 3 years ago

            Well, it’s less that I’m missing the point and more that I’m side-stepping it.

            I don’t play console games explicitly because I don’t want to have to work within “a fixed amount of FLOPS”. If my compute power is insufficient, I can add more to get to the IQ (whether spatial or temporal) level that I want; that’s the nice thing about PC gaming. It may not be economically efficient, but it’s my prerogative.

            I don’t fault anyone for being satisfied with console games though, or for using upsampling (and etc) on PC. I’m well aware I expect a higher standard than most people. As long as games on PC don’t start force-enabling FXAA or sparse rendering, I’m a happy camper.

            So I hear your argument, and I get where you’re coming from and where you’re going. I’m just the next town over. 🙂

            • Andrew Lauritzen
            • 3 years ago

            I’m not speaking about consoles – on every platform you have a “fixed amount of performance” at any given time. So if someone wrote a game optimized to produce the best quality on *your setup*, it would absolutely make use of decoupled shading and such.

            The fact that you can buy a much more powerful GPU on PC and just get it to do brute force type stuff is simply due to the fact that the games themselves aren’t really targeted to those high end cards per se, not that the ideal use of those additional FLOPS is super-sampling…

            I’m a PC guy myself and absolutely in cases where I have excess GPU power, why not have it do something brute force for a minor quality increase vs. sitting idle. But that’s entirely unrelated to the discussion around the best ways to do high quality rendering. As I said, it’s merely an artifact of the fact that modern games don’t really have anything good for a 1080 Ti to do with the additional power… ideally they would be able to fill that up nicely with additional quality increases that would far surpass the minor benefit of supersampling @4k.

            As an aside, I’ll still take 120Hz+ over “native 4k shading” any day! 🙂

            • RAGEPRO
            • 3 years ago

            Alright alright alright, but hold up though.[quote<]So if someone wrote a game optimized to produce the best quality on *your setup*, it would absolutely make use of decoupled shading and such.[/quote<]This makes the rather poor assumption that these approximate rendering techniques won't introduce artifacts that you find acceptable or even un-noticeable but which I find distracting or unacceptable. I mean, hey man, I'm right there with you on the 120-Hz displays thing. Heck, I even take it a step further and use a scanning backlight for blur reduction when I play games that demand clear motion. So when Rich at DF says checkerboarding artifacts are "mostly hidden by the sample and hold effect" that sounds real bad to me. Like I said in my first post, I don't have much experience with this stuff. I don't really play many AAAs, which is where we're starting to see these kinds of techs. So please do excuse my curmudgeonity on this matter, but when someone says "95% of x for 50% of cost" (where in this case "x" is image quality and the cost is render cost), I get [i<]reeeal[/i<] skeptical.

            • Redocbew
            • 3 years ago

            Being skeptical is good, but I’ve seen the kind of behavior Andrew is talking about in other types of software before. Many of the common tests used for primality are probabilistic, because it’s considerably faster to allow a tiny margin of error than to exhaustively search the entire scope of the problem. It seems reasonable to me that with 4k being such a buzzword there may be enough pressure on game developers that they’ll need to come up with a similar solution that may be just as good in practice as a brute force approach.

            • Andrew Lauritzen
            • 3 years ago

            > This makes the rather poor assumption that these approximate rendering techniques won’t introduce artifacts that you find acceptable or even un-noticeable but which I find distracting or unacceptable.

            Sure, but that’s not an interesting argument for the sake of discussion, as *everything in rendering is an approximation* and just holding the trump card of “well I find that unacceptable” doesn’t make for a very interesting discussion 🙂 Any practical renderer is a series of approximations and compromises, and that includes offline as well.

            > So please do excuse my curmudgeonity on this matter, but when someone says “95% of x for 50% of cost” (where in this case “x” is image quality and the cost is render cost), I get reeeal skeptical.

            Being skeptical is fine, but if you have limited experience with these trade-offs as you note, I’d encourage you to extend the benefit of the doubt to the folks that do 🙂 If you think folks doing this for jobs don’t have an eye for these details and make very deliberate decisions, I can assure you that’s not correct.

            It’s also not the sort of thing where those statements like the above should set off your bull-shit detector in general. Getting very good results with a fraction of the cost is absolutely common and critical to lots of things in computers. Take something like video compression: sure there’s some variation in where people fall on the quality/size trade-off, but you’d have to be slightly nuts to say that keeping everything completely uncompressed makes sense…

            Similarly with rendering there’s nothing sacred about the way things have been done for the past 5-10 years. In fact the offline folks rightfully scoff at the fact that we think XYZ is important but yet we have hard polygon edges and aliasing all over the place. Why should I use the additional power for shading in 4k vs. doing ray tracing and getting some actually nice pixels in there vs. stretching out some more flat baked textures to larger pixel counts? I hope you see my point here at least.

            Just try and avoid the conservative bias that the way real-time rendering has worked is some special local maximum of efficiency… it absolutely is not. It’s just what we had to do when hardware was less flexible. With faster and more flexible hardware it absolutely makes sense to do something more complex and efficient.

            • RAGEPRO
            • 3 years ago

            [quote<]With faster and more flexible hardware it absolutely makes sense to do something more complex and efficient.[/quote<]Yeah, I can dig it. I'm looking forward to trying out some of this stuff for myself. I'm big on withholding judgement until I've had the chance to fool with stuff. [quote<]Just try and avoid the conservative bias that the way real-time rendering has worked is some special local maximum of efficiency... it absolutely is not.[/quote<]Naw, I know it's inefficient, heh. I was never trying to argue that it was even [i<]good[/i<] efficiency, much less the most efficient way to do things. Believe it or not, I'm actually all about efficiency, I'm just also pretty unwilling to compromise on quality. Case in point... [quote<]Take something like video compression: sure there's some variation in where people fall on the quality/size trade-off, but you'd have to be slightly nuts to say that keeping everything completely uncompressed makes sense...[/quote<] *[i<]Glances at his 2.2TB of FLACs...[/i<]*

            • Andrew Lauritzen
            • 3 years ago

            Haha, well, I’m with you in terms of high-end PC hardware being worth the expense for the improvements in quality 🙂 I just know that even current high-end PC hardware can do a lot more than the relatively naive things we’re getting it to do today, and want to see that happen!

            > *Glances at his 2.2TB of FLACs…*

            😀 Well for audio it’s not totally unreasonable simply because it’s really not that expensive (from a storage POV) to do it loss-lessly; so why not just avoid the whole issue of finding where your personal perception line is and store it all :). Audio is just not that much data overall.

            For video it’s still a bit nutty and expensive to go uncompressed even if you somehow had access to an uncompressed source…

        • derFunkenstein
        • 3 years ago

        From the time I read the article, I assumed this to be true of the Forza demo Digital Foundry witnessed. I know that Scorpio is going to be considerably faster than a PS4 Pro, but there’s no way it’s going to be doing “native” 4K.

        And like you said, there’s no point to native 4K. The people faffing about counting pixels on NeoGAF or whatever have kind of forgotten the most important thing: they’re not sitting close enough to the display to even see the difference between a checkerboard pattern, native 3200×1800, or native 4K. A 60″ screen with 8 million pixels sitting 10 feet away is a bit of a waste.

        That said, I’d take the 60″ screen if someone offered it to me.

          • JustAnEngineer
          • 3 years ago

          4K 60″ TVs are just $600 at your local Sam’s Club. If you want to see all 8 million pixels from the other side of the room, you should be wishing for that $8,000 85-incher.
          [url<]http://ca.rtings.com/tv/reviews/by-size/size-to-distance-relationship[/url<]
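
          The geometry behind those size-to-distance charts is simple; here's a sketch computing pixels per degree of visual angle (20/20 acuity is commonly taken as roughly 60 pixels per degree):

import math

def pixels_per_degree(diag_in, h_px, v_px, distance_in):
    # Pixel pitch from the diagonal, then its angular size at the viewer.
    pitch_in = diag_in / math.hypot(h_px, v_px)
    angle_deg = math.degrees(2 * math.atan(pitch_in / (2 * distance_in)))
    return 1 / angle_deg

print(f"{pixels_per_degree(60, 3840, 2160, 120):.0f} ppd")  # ~154 at 10 ft -- far past 20/20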

            • derFunkenstein
            • 3 years ago

            Feels like it wasn’t that long ago that I paid $500 for my 46″. 🙁

        • Ninjitsu
        • 3 years ago

        Excellent, I suspected as much (that there was some trickery going on to hit “4K”).

      • OptimumSlinky
      • 3 years ago

      Everyone is getting hung up on the “RX 480 running 4K @ 60 fps” bit, but consoles don’t run games on “Ultra” settings with everything cranked. Oftentimes, console versions are a mish-mash of settings: high textures, low lighting, medium particle effects, et cetera. So while an RX 480 might not be able to achieve 4K/60 fps on Ultra, I bet if you fine-tuned your options down to a mix of Low and Medium settings, you’d find it hits those metrics just fine with the equivalent experience.

        • OptimumSlinky
        • 3 years ago

        Also, remember that console gamers generally sit anywhere from 6-12 feet away from their TV, depending on circumstances. Image quality is a lot more forgiving when you’re that far back. I’m not saying that you can’t see the difference between 720p, 900p, 1080p, and 1440p at those distances (you absolutely can), but there’s a bit of diminishing returns. I’m currently playing Far Cry 3 (via backwards compatibility) and Battlefield 1 on my Xbox One, both of which render around 720p on the Xbone. From 6 ft away on a 4K Vizio M-Series 50″, both look astoundingly good relative to the weaksauce Xbone hardware.

    • tahir2
    • 3 years ago

    Looks like a serious improvement over the Xbox One. What is the comparison to the PS4 Pro like?

      • DeadOfKnight
      • 3 years ago

      It’s about 1.5x as powerful as the PS4 Pro.

      • RAGEPRO
      • 3 years ago

      Pretty rough, though not drastically.

      PS4 Pro is still using a 256-bit bus and only has 8GB of RAM (vs. 384-bit and 12GB.) Its GPU runs 911 MHz and has 36 CUs, vs 1172 MHz and 40 CUs in the Scorpio. Both are closely related to Polaris family as far as anyone knows.

      The PS4 Pro also uses eight apparently-unmodified Jaguar cores at 2.1 GHz, although who knows what if anything AMD really did to the Jaguars for Microsoft’s machine.

      All told they are again pretty similar, with the advantage falling clearly in MS’s court this time not at all unlike the original PS4 vs. Xbox One.

      • DeadOfKnight
      • 3 years ago

      Scorpio is being made from the ground up to handle XB1 games in 4K. The PS4 Pro was built for Playstation VR. They both have marketing strategies to compete with one another, but the bottom line is Microsoft is making a more capable system and may get VR in the future. Of course, it is releasing over a year later as well.

        • tahir2
        • 3 years ago

        What does it mean to be made from the “ground up” these days, when AMD designs parts from commodity building blocks based on the power, cost, time, and performance parameters supplied by MS and Sony? This isn’t a brand-new GPU arch, nor are the modifications to the Jaguar cores going to be anything but minor improvements.

        MS have at least gone fully GDDR5 and increased their memory bus to 384-bit; that’s going to be a huge deal, as is the increase in FLOPS from the extra CUs and clock speed. However, for the average consumer, the difference in graphics fidelity between the PS4 Pro and Xbox Scorpio is not going to be that obvious, IMHO, due to the massive increase in power required to achieve a real discernible difference. Before I get flamed, this is referring to the average consumer and not the kind of owner that visits Digital Foundry et al.

      • DeadOfKnight
      • 3 years ago

      Well, PlayStation consistently gets better exclusives for games and Xbox may offer better cross platform support for PCs. For readers on this site who are PC gamers first and foremost, this is probably the bottom line comparison.

    • DPete27
    • 3 years ago

    [quote<]The cores have seen "extensive customization" ....new cores ought to be 31% faster than those found in the Xbox One, although that number happens to be the same as the clock rate uplift from the old machine to the new one.[/quote<] /Slow clap

      • chuckula
      • 3 years ago

      RYZEN CONFIRMED!!

      (to not be in Scorpio at all)

        • tay
        • 3 years ago

        +1 for your persistence.

        • I.S.T.
        • 3 years ago

        Okay this is funny.
