id Software talks about Ryzen

AMD posted a video to its YouTube channel starring Robert Duffy, id Software's CTO. In the video, Duffy lavishes praise on AMD's Ryzen processors, speaks at length about how id's software performs on them (great, as we already knew), and touches briefly on what's coming next from the studio.

It's no secret that the next game from id is Quake Champions, which will run on the next iteration of the company's "id Tech" engine. Duffy says that the upcoming version of the legendary game engine will be "far more parallel" than the id Tech 6 version powering Doom. He also says that id will be optimizing all of its software to make use of Ryzen CPUs' surplus of cores.
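
To picture what "far more parallel" can look like in practice, here is a toy job-system sketch in Python (purely illustrative; id hasn't published engine internals, and real engines do this in native code with fine-grained job queues): independent per-frame tasks fan out across a worker pool instead of running one after another on a single main thread.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical per-frame jobs; a real engine would split animation,
# physics, AI, culling, audio, etc. into many fine-grained tasks.
def simulate_physics(dt): pass
def update_ai(dt): pass
def cull_scene(dt): pass
def mix_audio(dt): pass

JOBS = [simulate_physics, update_ai, cull_scene, mix_audio]

def run_frame(pool, dt=1 / 120):  # targeting 120 Hz
    # Fan the independent jobs out across available cores, then join
    # before the frame is handed to the renderer.
    futures = [pool.submit(job, dt) for job in JOBS]
    for f in futures:
        f.result()

with ThreadPoolExecutor() as pool:  # sizes itself to the machine's CPUs
    run_frame(pool)
```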

Duffy mentions AMD's upcoming Vega GPUs in passing too, stating simply that id is already optimizing for the new chips. Quake Champions is one of the first games to be developed targeting a 120 Hz refresh rate for "normal" gameplay. Hitting that target while maintaining all the eye candy we expect from a modern AAA fast-paced shooter will take all the optimizations id can manage.

Comments
    • squeeb
    • 3 years ago

    Played the Quake Champions closed beta this past weekend, was so much fun. Ran silky smooth on my Intel i5-7300HQ but damn it eats memory like no tomorrow. 74% of 16GB was in use while running it.

    • Mad_Dane
    • 3 years ago

    For all the talk there has been about IPC around the Ryzen launch, it baffles me that not a single review site (that I'm aware of; please educate me if you have found one doing this) has actually tested it.

    Peg all the i7s back to Sandy Bridge and the AMD FX + Ryzen chips at 3.5-4 GHz, whatever frequency you can get all the chips stable at, lock them at the same number of cores, and then we can see which one actually has the highest IPC.

      • jts888
      • 3 years ago

      Agner Fog is still working on his Ryzen testing.

      Most review sites just race for day-1 article clicks/pageviews against their competitors, and results from actual researchers and analysts can take a bit longer.

    • synthtel2
    • 3 years ago

    A lot of people here seem to be thinking AMD influence is the only reason id would bother (with the tech, not just the marketing). They run a tight ship, and iT6 is already very well optimized in most senses of the word (of which there are way too many). If they want to seamlessly scale further upwards, what better way would there be to do it? This even has the added benefit of helping out on consoles.

    • tanker27
    • 3 years ago

    id trying to stay relevant post-Carmack.

      • tipoo
      • 3 years ago

      Doom was a smash success?

      • DeadOfKnight
      • 3 years ago

      id finally becoming relevant for the first time in ages?

        • nico1982
        • 3 years ago

        True, sadly.

      • Concupiscence
      • 3 years ago

      And succeeding. I can't think of a time when both an id game and an id engine were held in this kind of esteem since Quake III Arena back in 1999. Heck, maybe further back than *that*.

        • Vhalidictes
        • 3 years ago

        It'll be hilarious to see people try to play the Elder Scrolls 6 at anything above 60 FPS. Maybe id can give other devs *at the same company* access to their world-beating game engine...

          • Concupiscence
          • 3 years ago

          Bethesda's done a pretty good job of getting its development teams to talk with one another about design issues. If they can do the same for engine tech, it would help them out massively. On that note, it's nose-wrinklingly weird that Prey's coming out built on CryEngine instead of an engine built inside Bethesda, but ¯\_(ツ)_/¯

            • ImSpartacus
            • 3 years ago

            It's good that you point that out about Prey.

            I've been looking at that game and it looks awesome, but it's a little unnerving that they switched engines.

            I speculate it's due to them getting an excellent deal (Crytek is going out of business) and CryEngine being a single-player, FPS-focused engine with great physics. Prey is all of those things.

          • LostCat
          • 3 years ago

          They did, see Dishonored 2 and…well, I don’t even know since I didn’t play it til after I got Ryzen but people were bitching about its CPU usage all over the place.

          • synthtel2
          • 3 years ago

          While it’s a cool thought, those two engines both seem to be pretty strongly targeted at their respective game genres. For all of Creation’s problems, it has some tricks that iT6 doesn’t appear to (yet, at least).

      • Laykun
      • 3 years ago

      Why is Carmack the ONLY talented software engineer out there? Carmack might be famous but don’t let that fool you into thinking he’s some sort of programming god.

        • jihadjoe
        • 3 years ago

        IMO he's qualified for the title 'programming god' just for his early work on 3D with Hovertank, Wolfenstein and Doom. He pioneered many of the 3D rendering techniques in use today, and at the time id was regularly doing things thought of as impossible by other studios.

          • jihadjoe
          • 3 years ago

          Even today it's insane how productive he is. Did you know he basically hand-optimized the VR version of Minecraft (http://venturebeat.com/2015/09/24/how-john-carmack-pestered-microsoft-to-let-him-make-minecraft-for-gear-vr/)? And the Netflix VR app (http://techblog.netflix.com/2015/09/john-carmack-on-developing-netflix-app.html)... He had a working prototype three days after speaking with Netflix, and finished the app within a week! Try to outsource the ground-up development of a virtual theater app to any regular studio and they'd probably say you'd need a team of 10-15 people working on it for 30 days, if not a couple of months. And this guy, who you say is NOT a programming god, did it by himself in one week.

          More feats: about 10 years ago John Carmack got hold of a smartphone and started fiddling with it. Found it interesting, and over a 4-day weekend put together the engine for DoomRPG (http://www.gamasutra.com/php-bin/news_index.php?story=15909).

            • Concupiscence
            • 3 years ago

            He also contributed to the Utah GLX project in the late ’90s, getting some very early OpenGL acceleration working in Linux when it needed all the help it could get. And though he’s lost the source code, he retargeted GCC to work with the Atari Jaguar’s custom RISC CPUs to make porting Wolfenstein 3D and Doom to the console easier. And he’s talked at length about doing these things for years in great detail – his .plan updates were informal technical treatises. From stories I’ve heard I wouldn’t have liked working for him, but his work ethic and technical capability are both pretty legendary.

          • the
          • 3 years ago

          One shouldn't discount his early work with regard to networking and audio either.

          There was also his insistence on making his code open source after a generation had gone by, so the community could improve on it and learn from it.

          Though there is a bit of overhype, and he does get credited with things that weren't his creation. The fast inverse square root (PDF: http://www.lomont.org/Math/Papers/2003/InvSqrt.pdf) is one such example, as many first discovered it in Quake 3's source code.
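
          For anyone who hasn't seen it, this is the hack in question. A minimal Python rendition of the bit trick (the 0x5f3759df constant is the one from the Quake 3 source; the Python port and names here are my own):

          ```python
          import struct

          def fast_inv_sqrt(x: float) -> float:
              """Approximate 1/sqrt(x) with the Quake 3 bit-level hack."""
              # Reinterpret the float's IEEE-754 bits as a 32-bit integer.
              i = struct.unpack('<i', struct.pack('<f', x))[0]
              # The "magic": shift-and-subtract on the raw bits yields a
              # surprisingly good first guess at 1/sqrt(x).
              i = 0x5f3759df - (i >> 1)
              y = struct.unpack('<f', struct.pack('<i', i))[0]
              # One Newton-Raphson step tightens the approximation.
              return y * (1.5 - 0.5 * x * y * y)

          print(fast_inv_sqrt(4.0))  # ~0.499, vs. the exact 0.5
          ```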

    • w76
    • 3 years ago

    id must be wrong; all the Intel monopoly apologists say there are no real consumer applications past 4 cores (and, really, 2-core mobile i7s are totally cool) and parallelism requires godlike programming acumen. Must be fake news. :p

    TBH, I'd of course rather have Ryzen's core counts in Intel chips, but the defensiveness of some folks is amusing.

      • DeadOfKnight
      • 3 years ago

      Apparently some people don't know sarcasm when they see it. Either that, or there's a loud minority of people who don't see the value of developers using more parallelism. We've had 8-core/16-thread CPUs out now for what seems like ages and an embarrassingly low number of programs that take advantage of them. Kudos to AMD for raising the bar, since Intel clearly did not provide much incentive for this to happen sooner.

    • DPete27
    • 3 years ago

      As meh as Doom was to play (IMO), it has become my new benchmarking tool. It draws considerably more power on the GPU than Furmark, Vulkan mode especially.

      • Chrispy_
      • 3 years ago

      Each to their own. I can see why some people didn't like the new Doom, but to me it was a thoroughly enjoyable riot that melded some storyline, some impressive tourism through nice level design, some epic arena-style combat à la UT/Q3A/Overwatch, and some impressive boss fights.

      That it happened to be extremely well-coded and looks amazing is just the icing on the cake.

        • DPete27
        • 3 years ago

        I can respect the game design of Doom, but I need more "substance" these days. Sure, sometimes it's fine to mindlessly kill the same monsters over and over again. Sometimes, that's all I'm in the mood for. But more often than not, I like more "creative" add-ons to FPS gameplay. I actually found myself more entertained by accomplishing the 3 kill types per level than anything else…even though it had no real effect on your advancement in the game.

        I played through to the end. So it wasn’t THAT bad.

          • Jigar
          • 3 years ago

          Witcher spoiled you?

          • tipoo
          • 3 years ago

          Titanfall 2 was far more creative imo. It kept introducing new gameplay concepts, and then retiring them just before they overstayed their welcome, every level was something new. Doom was “here’s a gameplay loop and 14 hours of situations to do that in”.

            • Kretschmer
            • 3 years ago

            I’ve played both. TF2 was weaker in the mechanics (and its campaign was criminally short). Both were excellent experiences, but my money is on Doom.

            TF2 has better multiplayer by far, of course.

            • tipoo
            • 3 years ago

            Hm, the mechanics were both good in different ways, but agree to disagree; with wall hangs and jet packs and clambering and kick-sliding, Titanfall 2 made me feel far more mobile than even the already wicked-fast Doom. More mobile than any FPS I've played, actually.

            Anyways, both demonstrate the appeal of 60 fps: the same gameplay would feel sluggish at 30, while they were greased lightning even on consoles at 60.

            • Chrispy_
            • 3 years ago

            True, the fast-action parts of Doom’s core gameplay were “arena shooter”, but that’s a whole genre by itself with multiple successful franchises that have only that gameplay and nothing else to offer.

            If you don’t enjoy arena shooters then it’s not surprising that you’ll be disappointed with Doom.

            I haven’t played Titanfall 2 yet. I’ve heard it’s great but I’m still sour at being ripped off by EA/Respawn the first time.

          • Kretschmer
          • 3 years ago

          I guess we're different, then. I hate hate hate hate most gimmicks that are added to my FPS play. It's almost always weaker mechanics.

        • kvndoom
        • 3 years ago

        Just wish it had New Game + though. I know you can select any level after beating it, but starting from the beginning and playing the contiguous story with all weapons and abilities would be sooooo sweeeet.

      • shaq_mobile
      • 3 years ago

      Yeah, I didn't even finish Doom and I'm a pretty big fan. The pacing was all wrong. It got boring because of all the up and down time that I assume was supposed to be spent looking for upgrades. The encounter-style level design that has become so common in modern shooters works sometimes, but I always feel stressed by the constant up and down. I think BioShock Infinite was the best example of that. It's not bad, I just think it's highly disruptive to the theme.

        • DPete27
        • 3 years ago

        Doesn’t help when the music tells you when to be alert for baddies.

    • Bensam123
    • 3 years ago

    $230 for a 4c processor, $230 for a 6c processor with 10% less performance per core. Hmmm…

    Just another hallmark that it's a very bad idea to base your long-term buying decisions off of what performance some games are getting right now. I'm sure Epic Games is doing something very similar with the UE.

      • Welch
      • 3 years ago

      Not only is it not exactly 10% less per core, as that number fluctuates (granted, sometimes faring worse)… But 4c/4t vs 6c/12t… Yeah, I'll take 3 times the number of threads any day over 4 threads, even at a 10% IPC disadvantage in some cases.

      If Intel wants to be competitive, they simply need to close the price gap. Start offering the i3s for slightly sub-$100, the i5s around the $160 area, and the i7s for about $250.

      If the 7700K were $250 vs, say, the 1600X, I'd probably go for the 7700K. It just doesn't make sense to me to go for a 7700K unless you know a large majority of your workload is single-threaded applications with little multitasking.

        • Bensam123
        • 3 years ago

        Yup, that's a rough number… It can be better than that… Can also be worse. I like saying 10% less just to add emphasis to how little it actually means when you're adding 50-100% more cores.

        I don't think Intel will become competitive by doing that. They have to offer more cores for less money, either going to odd core counts or simply tossing the same number of cores in at the same price point. They just can't do it with their current lineup; their optimizations and per-core performance aren't THAT good.

        Looking even further forward, the $330 1700 (8c/16t) vs the 7700K (4c/8t) is even less of a contest than the i5 comparison. It's weird that reviewers were comparing a $300 chip vs a $1500 one as if that's where the battle is going to be fought.

        Yeah, there is the niche scenario of programs that heavily utilize single threads, but those are going to go the way of the dodo unless you want to build a box for playing legacy games.

      • Kretschmer
      • 3 years ago

      With a hypothetical 10% difference any application that doesn’t use >4 threads will be…10% slower. Cores are only useful if you can leverage them.

        • Bensam123
        • 3 years ago

        And that accounts for a handful of games today. Even some games that only use '4c' will still see benefit from a 6c machine, as there is stuff operating in the background; Overwatch, for instance, is the example I like going back to.

        System builders shouldn't be looking at making something obsolete in a year. Most people hang onto a computer for 3-4 years.

    • Kretschmer
    • 3 years ago

    I’ll be really interested in seeing single-threaded performance vs additional threads in upcoming gaming benchmarks at the $200-$300 price point.

      • DPete27
      • 3 years ago

      IPC is not affected by clockspeed or number of cores/threads.

      (Unless you meant that Intel = higher IPC and fewer cores, and AMD vice-versa?)
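
      For what it's worth, the relationship this subthread keeps circling is easy to write down; a worked example with made-up numbers:

      ```latex
      % Single-thread performance is (roughly) IPC times clock speed:
      \[ \text{perf}_{\text{ST}} \propto \text{IPC} \times f_{\text{clk}} \]
      % So a chip with 10% higher IPC can still lose single-threaded:
      \[ \underbrace{1.10 \times 4.0\ \text{GHz}}_{\text{chip A} \,=\, 4.40}
         \;<\;
         \underbrace{1.00 \times 4.5\ \text{GHz}}_{\text{chip B} \,=\, 4.50} \]
      ```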

        • Voldenuit
        • 3 years ago

        It sounds like he wants performance comparisons of a high-IPC system vs a high-thread-count system with lower IPC.

        I’d be interested, too.

          • Firestarter
          • 3 years ago

          IPC is irrelevant; single-thread performance is the metric you're looking for.

            • DPete27
            • 3 years ago

            I feel like IPC and single thread performance are being referred to interchangeably these days in gaming benchmarks.

            • Firestarter
            • 3 years ago

            yeah, and that’s not helping the discussion

      • Kretschmer
      • 3 years ago

      I meant a comparison between processors with fewer cores/stronger single-threaded performance and more cores/weaker single-threaded performance. Apologies for poor wording. Edited.

    • Voldenuit
    • 3 years ago

    Here I am ‘stuck’ on my 4-core, 4-thread Haswell.

    It’s good enough for now, but I would be looking very hard at Ryzen II’s 6-core/12-thread offerings.

      • Jigar
      • 3 years ago

      My Q6600 lasted me 7 years but I can't say the same for my i5 4670K. I am desperately waiting for the second version of Ryzen 7 and hoping they clock at least 4.5 GHz.

        • Anonymous Coward
        • 3 years ago

        I’d prefer that in v2, AMD aims to maintain the current clocks, and focuses on a refined architecture. No surprises, no screwups.

          • Prestige Worldwide
          • 3 years ago

          whynotboth.jpg

            • Anonymous Coward
            • 3 years ago

            Success depends on reasonable goals.

            • derFunkenstein
            • 3 years ago

            Well, a little bit of each would take AMD towards single-threaded equality. A refined 14nm process, some targeted improvements, and a little more clock speed.

    • tipoo
    • 3 years ago

    Doom was a game that pegged all 7 cores on consoles for some frames; id should be able to eke some impressive parallelism out of Ryzen and other designs with more than 4 cores. Excited to see how this will do on Ryzen.

      • ptsant
      • 3 years ago

      Doom is, imho, one of the best-coded games in a very long time. The performance was stellar on my old FX8350, easily hitting 80+fps with great visuals (high, then almost all to ultra when I got my RX480). I also thoroughly enjoyed the gameplay, even though I’m not a fan of hellish settings.

      Just because people aren't willing to invest the time and effort to properly optimize for multicore doesn't mean it can't be done. It's hard, and maybe it's not cost-effective on their end (they sure aren't paying for our CPU purchases, but they are paying star coders for their work), but it can be done.

        • tipoo
        • 3 years ago

        Yep, it was a technical tour de force; looking that good AND having those framerates was impressive. Who says they're irrelevant post-Carmack? Heck, they made a better game without him than the last Doom.

        • DPete27
        • 3 years ago

        Devs also are less motivated to optimize for niche hardware because, like you said, it's not cost-effective. Yes, there have been 16-thread CPUs on the market for a while, but I'm sure they know that not many gamers were playing on that type of system, so they optimize for the 4-thread norm and maybe occasionally give a nod toward 8 threads, but that's about it.

        AMD is introducing 12 and 16 threads at a consumer-friendly price point with Ryzen, and presumably Intel will increase core counts as a result. As the average core count of the majority user base increases, devs will optimize for more cores.

        • Kretschmer
        • 3 years ago

        Note that not all applications or games will scale with additional cores past four, even with time and effort. Not every problem is parallelizable.
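
        (The standard way to quantify that limit is Amdahl's law; a quick sketch with an illustrative value of p:)

        ```latex
        % Amdahl's law: best speedup on n cores when a fraction p
        % of the work parallelizes and the rest stays serial.
        \[ S(n) = \frac{1}{(1 - p) + p/n}, \qquad
           \lim_{n \to \infty} S(n) = \frac{1}{1 - p} \]
        % e.g. p = 0.9: S(8) = 1/(0.1 + 0.9/8) \approx 4.7x,
        % and no number of cores ever beats 10x.
        ```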

          • tipoo
          • 3 years ago

          Not every problem is parallelizable, but they did show that a large, full-fledged game is. Maybe another company has some huge AI script that isn't very parallel and forces things to 30 fps, but this certainly shows, at least, that many more games should/could use more cores.

          • ptsant
          • 3 years ago

          Writing parallel code can be a mess, especially with respect to "side effects", like writing to/reading from devices. My own impression is that, most of the time, people realize they can't parallelize code that was conceived as single-threaded. Introducing multithreading as an afterthought usually won't work. Then they simply make 1 thread for sound, 1 thread for loading textures or whatever, and 1 thread for handling input, and call it a day.

          However, a surprising number of problems can be parallelized using the map-reduce pattern (Google does that for search), and many fundamental algorithms have parallel versions.
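
          A minimal sketch of that pattern in Python (a toy parallel sum of squares, not from any game engine; names are my own): the "map" step does independent work on separate chunks, and the "reduce" step folds the partial results together.

          ```python
          from multiprocessing import Pool
          from functools import reduce

          def map_chunk(chunk):
              # "Map": pure, independent work on one slice of the data.
              return sum(x * x for x in chunk)

          if __name__ == "__main__":
              data = list(range(1_000_000))
              n_workers = 4
              # Split the input into independent chunks, one per worker.
              chunks = [data[i::n_workers] for i in range(n_workers)]
              with Pool(n_workers) as pool:
                  partials = pool.map(map_chunk, chunks)    # parallel map
              total = reduce(lambda a, b: a + b, partials)  # serial reduce
              print(total)  # matches the serial sum(x * x for x in data)
          ```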

    • chuckula
    • 3 years ago

    You made sure the check cleared first, right?

    Anyhoo, a 6/8/10/12-core Skylake-X will certainly rock on this game, without requiring Intel to pay id money to compensate for design shortcomings.

    Not to mention a 6-core Coffee Lake.

    Or, for that matter, a Kaby Lake that probably won't have a problem with this game even if it has half the cores and no bribes paid for "optimization" work.

      • OptimumSlinky
      • 3 years ago

      Unless Intel suddenly becomes very aggressive in its pricing, a Skylake-X will certainly cost a hell of a lot more than a Ryzen CPU.

      And who cares if AMD paid id Software to say nice things, in the grand scheme? For the first time in a decade, there's actually some competition in the CPU space, and consumers have a choice between top-tier tech with top-tier pricing (Intel) and bang-for-your-buck power (AMD).

        • chuckula
        • 3 years ago

        > And who cares if AMD paid id Software to say nice things in the grand scheme?

        Who cares if Intel pays off Dell to offer its chips at a discount to consumers?!?!!? AMD can still sell its chips to anybody willing to buy them!

        It's lovely that AMD spent the last 6 years studying Intel's designs and has a competitor for Haswell (at least in legacy applications that don't use modern x86 features). It's even more lovely that Intel is launching the chips this year that you'll be calling "innovative" when AMD launches their clones in 2021 or so.

          • kuraegomon
          • 3 years ago

          OK chuck, I’m calling BS on you for this one. AMD has a _long_ history of working on gamer-friendly technologies, and their contributions leading to Vulkan in particular are more likely to have helped them build a strong relationship with id than what little cash they could have scraped together to throw their way.

            • chuckula
            • 3 years ago

            Vulkan only exists because the proprietary Mantle (which AMD dumped a crapload of money into paying developers to use) failed miserably.

            As for the fact that somebody at id had probably heard of AMD… so what? I’m sure they had heard of Nvidia too, so does that mean that Nvidia has never paid anything to game developers to optimize for Nvidia products simply because people know that Nvidia is a thing?

            • Concupiscence
            • 3 years ago

            > Vulkan only exists because the proprietary Mantle (That AMD dumped a crapload of money on paying developers to use) failed miserably.

            Yet we’re all better off for Vulkan existing, in much the same way that SGI’s IRIS GL paved the way for OpenGL in the ’90s. Making sausage isn’t pretty.

            I’ll also add that you’re way too upset by all of this.

            • freebird
            • 3 years ago

            It's "that time of the month" for Chuckula…
            It must be a full moon… he just can't believe that id can't be bought and paid for by Big Blue,
            which means they are "DOOMed" 4.

      • ImSpartacus
      • 3 years ago

      Yeah, I’m interested to see how Skylake-X changes the competitive landscape.

      I feel like Intel dgaf with respect to Broadwell-E. It barely moved the dial from Haswell-E (actually making the 6-core option MORE expensive…).

      Skylake-X might actually see Intel getting aggressive with pricing. Obviously the architectural improvements from Skylake don’t hurt either.

        • Kougar
        • 3 years ago

        Sure. Instead of a $1650 10-core part Intel may be generous and offer 12 cores for the same price next year.

        If Intel genuinely cared about getting aggressive with pricing, they would have changed Broadwell-E pricing by now. That they haven't changed HEDT prices hints that we shouldn't expect much change in Sky-X model pricing.

          • swaaye
          • 3 years ago

          If AMD could command the same kind of pricing, they would. There was a brief time in 2005-2006 when they were competitive enough and did just that.

            • BurntMyBacon
            • 3 years ago

            That sounds like the Athlon 64 FX vs P4 EE time frame. Prices undoubtedly went up for the Athlon 64 FX (upper $700s IIRC) and Athlon 64 X2’s (nearly $1000 IIRC) while AMD was trouncing Intel. The irony here is that every time I checked (despite having a clear and significant performance disadvantage) Intel’s P4 EE and other upper end chips were still priced higher (usually hovering at or above $1000 for the P4 EE).

            So, I agree that AMD would (and should) command much higher pricing if they could. However, they won’t push as high as Intel (probably due to mindshare). Furthermore, Intel has a history of not really caring how their performance stacks up, so I wouldn’t expect massive changes in Intel’s pricing structure.

            I think it is evident that AMD could raise the prices on Ryzen in the HEDT space (R7 models) given their performance relative to Intel’s HEDT counterparts. AMD’s problem is that they need more mind share, more market share, and the revenue stream needs to start quickly. The HEDT space isn’t very large, so it wouldn’t bring them a lot of market share or revenue. Positioning them as upper mainstream solves these problems at the expense of margin. Given that Ryzen is a pretty small chip and the processors are selling a lot higher than AMD has been able to sell them in years, it is likely that they still have some room to play with margins.

          • ImSpartacus
          • 3 years ago

          C’mon, they accelerated the Skylake-X release rather than bother with Broadwell-E (whose flagging stock would’ve probably sold out too soon if discounted).

          Yes, Intel isn’t bending over backwards to compete, but they are making changes.

          Don't forget Coffee Lake. Disappointing as it is to not see a shrink, at least it's getting a 50% bump in cores, which should allow it to be much more competitive with Ryzen 9 (assuming Kaby Lake-tier clocks).

      • 1sh
      • 3 years ago

      HAHAHAHAHAHA!!!

      AMD Ryzen is still competitive with Intel's offerings even though its L3 cache and CCX interconnect bandwidth are crippled. Goes to show you that Ryzen's IPC is pretty solid.
      Just wait for RYZEN 2!

        • DPete27
        • 3 years ago

        Ryzen 2 – When you can’t quite get it right the first time.

          • highlandr
          • 3 years ago

          Ryzen 2 – In 30 minutes or so I promise you’ll like it!

          • Concupiscence
          • 3 years ago

          I didn’t kvetch when Phenom II came along and fixed my complaints about K10. I won’t complain when Ryzen 2: Your Sister is a Werewolf comes along, either.

          • sreams
          • 3 years ago

          “Ryzen 2 – When you can’t quite get it right the first time.”

          Of course, you could say that about any processor once something faster comes out. How could the next gen be faster unless the previous gen was crippled? Right?

          • BurntMyBacon
          • 3 years ago

          Ryzen 2 – The Challenge

          • freebird
          • 3 years ago

          Don’t you mean Core 2?

      • kuraegomon
      • 3 years ago

      Also, considering how shamefully miserly Intel was with delivering IPC improvements while AMD was scuffling in the weeds, I’d slow your roll on cheerleading them now. Ryzen’s approach of delivering more cores per buck will absolutely help drive greater parallelism into game development – and this story is a _perfect_ example of exactly that occurring. Please do enlighten us as to precisely how this is a bad thing.

      • Goty
      • 3 years ago

      Go easy on chuckula, guys. He’s been spending a lot of his time under his bridge since the Ryzen launch and his eyes haven’t quite adjusted to the sunlight yet.

        • nico1982
        • 3 years ago

        His hate towards AMD looks more pathological than anything else, to be honest.

          • chuckula
          • 3 years ago

          Nonsense, I've been very nice towards RyZen considering the religious adulation it gets as being a literal miracle from the same people who act like Optane is inferior to a hard drive in every possible way, while literally every benchmark shows a first-generation product blowing the doors off of enterprise-grade SSDs.

            • Kougar
            • 3 years ago

            What have you been reading?? Tech Report illustrated that it barely outperforms a Trion 150, the bottom budget TLC SSD. And that's ignoring that caching only works part of the time.

            For the exact same cost of a 480GB Trion 150 + 32GB Optane (as TR tested), you can get a 512GB Plextor M8Pe NVMe drive that will outperform that combination in every scenario.

            Intel had to issue a press release saying it couldn't even guarantee reliable Win 10 boot performance, and that's the only OS it works on.

            • Goty
            • 3 years ago

            Imagine if AMD got 2-3 shots at launching Ryzen like Intel has had with Optane!

            • alphadogg
            • 3 years ago

            Just because you like Intel over AMD, and AMD (like Intel) has some passionate zealots, doesn't excuse the crass and unjustified original post, nor your need to be some sort of "equal opposing force" to balance those zealots.

      • cynan
      • 3 years ago

      To be fair, it does seem a bit strange for TR to be posting AMD ads under the guise of news.

      • Redocbew
      • 3 years ago

      Ermagerd! Maybe there's some deal made with AMD, or maybe they know something about Ryzen we don't, or maybe they know something about the next-gen consoles we don't, or maybe they know something about Coffee Lake we don't.

      Or maybe they're just continuing the trend of deriving increased performance from multithreading that's been going on ever since multicore chips became a thing.

      I don't see why this story implies any kind of communication with AMD at all. If I were a business person at id and I wanted to put out a statement about a nifty upcoming game engine that uses MOAR COREZ, would I pick a recently launched, much-buzzed-about, relatively affordable chip, or would I pick a nosebleed-expensive, probably soon-to-be-replaced chip? Hmm…

        • alphadogg
        • 3 years ago

        Because it’s the name of the game in any online debate: the world is black and white.

      • credible
      • 3 years ago

      Oh, you must mean like Intel and Netflix, as well as Microsoft making sure we need a Kaby Lake to watch 4K streaming on PC… those kinds of payoffs.

      • Concupiscence
      • 3 years ago

      We've had our differences, man, but even by past performance standards this is a jump-the-shark moment for you. It bears mentioning that after Ryzen's release, Doom was notable as a game that showed little to no difference from the power-profile tweak, and no specific patch to "fix" any Ryzen limitations was released either. Doom's performance is pretty unimpeachably solid on AMD's newest kit.

      As for the video itself, it's PR blather, but it's not like Ryzen's a bad product or that anything said here is disingenuous. Doom was one of the only places where the FX chips performed irrationally well due to the engine's high level of parallelism. It stands to reason that even if a chip sports 2-15% lower IPC than Intel's finest, at a low price point for lots of well-performing threads Ryzen still constitutes a good, forward-looking offering for people who want to stream and avail themselves of well-threaded engines. *Quelle surprise,* someone with an interest in a large demographic having access to hardware that will run their intellectual property well confirms that it's a solid product.

      It's not sinister, there's no reason to suspect bribery (especially when id's a subsidiary of a company probably worth more than AMD in absolute financial terms), a little compensation from AMD to have someone at id take half an hour out of his work schedule to affirm a product's value is anything but scandalous, and the only person who comes out looking foolish after all of this is you.

      • albundy
      • 3 years ago

      Hahaha! Well played. I have no idea why you got thumbs down! On the plus side, not many AMD fanboys left. +1 for the check clearing the bank!

      And of course, any new Intel CPU will rock this game. Why wouldn't it?

        • danny e.
        • 3 years ago

        The issue isn't paying for advertising, which is seemingly obvious in this case. Rather, it's pretending that Intel doesn't do the same thing, in a rather pretentious and douche-baggy manner. 🙂

      • danny e.
      • 3 years ago

      https://www.google.com/amp/mobile.reuters.com/article/amp/idUSKBN0EN0M120140612

      • Bumper
      • 3 years ago

      R u talking about id or TR?

      • ermo
      • 3 years ago

      What’s the difference between “compensate for design shortcomings” and “optimize for performance” on the hardware in question?

      Yes, that was a rhetorical question.

        • alphadogg
        • 3 years ago

        Rhetorical answer:

        TL;DR: because a) you are being facetiously douchy, or b) you joke because you don't actually understand processor design. Pick one.

        A new design approach or feature might not be used properly by code; therefore there is a difference that requires code optimization but is not a shortcoming.

          • ermo
          • 3 years ago

          > A new design approach or feature might not be used properly by code, therefore there is a difference that requires code optimization but is not a shortcoming.

          Exactly my point. Chuckula framed it as if it could only be a shortcoming.

      • YellaChicken
      • 3 years ago

      And are you going to spend $1500+ on a CPU alone? No? Then if you want that many cores, Ryzen welcomes you.

      And despite this being the exact same kind of marketing deal that Intel/Nvidia/AMD have done in the past with software developers, the compensation for the supposed "design shortcomings" you mention comes from the lower price of the CPU and the higher popularity it enjoys as a result. Far more people worldwide are going to end up gaming on a 12/16-thread Ryzen than a 12/16-thread Skylake-X.

      If you want to sell your game, what's the point of touting the performance benefits of a processor that few gamers can afford and even fewer want to buy?
