Second-generation Ryzen Threadripper will have 32 cores, 64 threads

Today at Computex, AMD announced its second-generation Ryzen Threadripper CPU family. The company says second-gen Threadrippers will have up to 32 cores and 64 threads, and they'll drop into the existing TR4 socket on X399 motherboards. Like other second-gen Ryzens, Threadripper with Zen+ will be built on GlobalFoundries' 12-nm process. Second-gen Threadrippers will have all four dies under their integrated heat spreaders enabled to reach that core count.

AMD demonstrated a 24-core, 48-thread second-gen Threadripper running a Blender render versus Intel's Core i9-7980XE. The AMD chip beat out the Intel part, of course, but AMD was also keen to point out that it was cooling its 24-core chip on air, not liquid or even the phase-change system we saw Intel use yesterday as part of its 28-core server chip demo. The company also showed a 32-core Threadripper running the same Blender render as a standalone, non-competitive demo. Second-gen Ryzen Threadrippers will arrive in Q3 of this year.

Comments closed
    • ronch
    • 1 year ago

    With twice as many dies they’ll probably be less eager to price these like first-generation ThreadRippers unless they really wanna lose money. Any idea how much these will cost?

    Anyway, looks like the core count war is really on.

    • Krogoth
    • 1 year ago

    Another stunt CPU (Epyc reject) that is on par with Intel's Xeon Platinum/Gold "rejects," minus the ridiculous exotic-cooling 5 GHz overclocking nonsense.

      • chuckula
      • 1 year ago

      Almost there.
      Needs more “not worth the upgrade over a 1950X” and it’s there.

        • Krogoth
        • 1 year ago

        If you need a CPU with 16-cores or more, then you should really be getting a Xeon W/Epyc.

        Chances are pretty good that the 128GiB (192GiB for XCC Skylake-X chips) limit of UDIMM DDR4 is going to be a problem with workloads that [b]genuinely need[/b] that many cores.

        • Mr Bill
        • 1 year ago

        have +3 🙂

    • srg86
    • 1 year ago

    Wow, according to Ars, TDP goes from 180W (Gen 1) to 250W (Gen 2), so you’ll probably need a new motherboard for these (even though the chipset stays the same).

      • chuckula
      • 1 year ago

      I’m assuming that most of the existing TR motherboards are overprovisioned enough to handle these parts.

      That said, it might not give you a lot of overclocking headroom.

        • Goty
        • 1 year ago

        I don’t recall exactly where, so take this with a huge grain of salt, but I do believe I saw something about mobo makers (maybe MSI in particular) redesigning the power delivery for their newest X399 mobos to support these new chips. You’re probably right about the existing boards running these just fine at stock, but would anyone spending up to $2000 on a processor really want to take that chance?

          • Waco
          • 1 year ago

          I’m sure we’ll see updated compatibility lists as time goes on. The cheapest boards might be out of luck but 70 more watts wouldn’t be out of the ordinary for many X399 boards. Overclocking would definitely be limited though.

            • blastdoor
            • 1 year ago

            Maybe I lack creativity and/or courage, but I can’t imagine trying to manually overclock a 32 core processor that costs $2000. I would rather get the best cooling I can and let the CPU figure out how far to overclock itself.

            Of course, it’s still the case that in taking that approach the motherboard could end up being a constraint on what the CPU can do. And maybe that’s what everybody means anyway….

            • chuckula
            • 1 year ago

            [quote]Maybe I lack creativity and/or courage,[/quote]
            I checked your iPhone: Don't worry, you've got lots of courage!

    • blastdoor
    • 1 year ago

    One factoid I noticed in coverage of this event is that apparently AMD has only sold about 5 million Ryzen chips in the last year. That is a pretty small drop in the bucket. It is approximately equal to the number of Macs sold in a [b]quarter[/b]. Basically, it's been a DIYer product. No wonder Intel's margins are down only a little tiny bit. Hopefully AMD can get some more OEM wins.

      • chuckula
      • 1 year ago

      How is that possible when every Intel chip is a failed reject that uses 1KW of power?!?

      HOW COULD POOR INNOCENT APPLE BE FOOLED INTO BUYING INTEL WHEN AMD PARTS ARE ALWAYS SUPERIOR?!?!?!

        • blastdoor
        • 1 year ago

        I used to think that Chuckula was the result of a transporter accident resulting in a split of Paul Otellini into an id and ego version, with of course Chuckula being the id.

        But now I’m starting to think that he is the most diabolical pro-AMD troll imaginable, possibly created in the deepest darkest labs of AMD’s PR department. He makes all AMD fanboys seem like highly rational, intelligent people and seriously undermines the credibility of people who favor Intel through guilt by association.

          • Amiga500+
          • 1 year ago

          There is only Juan internet STRONGMAN chuckula could be.

          • derFunkenstein
          • 1 year ago

          I really liked the transporter accident theory. Have some +3

            • blastdoor
            • 1 year ago

            Thank you 🙂

          • LoneWolf15
          • 1 year ago

          Every once in a while I think he's sane. Then he goes off his meds again.

          • jihadjoe
          • 1 year ago

          Chuckula is a hero!

          Evidently he is very smart, as you can occasionally glean from the technical detail in his postings, and yet he uses his smarts in the silliest way possible. He is literally the human equivalent of the Voodoo5 from those old "Hey, we can use it for games!" commercials done by 3dfx.

            • chuckula
            • 1 year ago

            That post deserves a [url=https://www.youtube.com/watch?v=o66twmBEMs0]reference link[/url].

            Anything that starts off like a corporate BS-athon and ends up sounding like one of my posts is a work of unmitigated genius.

          • LostCat
          • 1 year ago

          I wouldn’t go so far as to say ALL AMD fans. I’ve seen some weird ones around here.

          …but close.

          • JustAnEngineer
          • 1 year ago

          +3 I like your bearded Spock analogy.
          It seems like a reasonable [url=https://techreport.com/news/33752/intel-teases-28-core-56-thread-hedt-cpu-on-stage-at-computex?post=1080525]supervillain origin story[/url].

            • blastdoor
            • 1 year ago

            Ah yes, bearded Spock… I guess that could have been what I meant.

            But I actually meant:

            [url]https://en.wikipedia.org/wiki/The_Enemy_Within_(Star_Trek:_The_Original_Series)[/url]

            My thought was that Paul Otellini was the victim of a transporter accident in which good Paul joins Steve Jobs on stage wearing a bunny suit, but evil Paul (aka Chuckula) does.... well, what Chuckula does.

            • JustAnEngineer
            • 1 year ago

            Perhaps the transporter accident was an over-used plot device?
            [url]https://www.youtube.com/watch?v=nW-NiGp1gys[/url]

      • derFunkenstein
      • 1 year ago

      I think that had to be expected, though. For so much of that time there was nothing with integrated graphics. If they only sell 5 million in 2018, it’ll be a much bigger disappointment.

      • Kretschmer
      • 1 year ago

      OEMs want IGPs, so Ryzen was really for enthusiasts.

      Also, AMD is still coming off half a decade of Bulldozer; it will take sustained design prowess to recover the firm's reputation.

      • ET3D
      • 1 year ago

      That’s indeed an interesting figure. Market estimates for last year were of ~12 million AMD CPUs sold. If both figures are true, it means that AMD is still selling a lot more non-Ryzen CPUs than Ryzen ones.

        • derFunkenstein
        • 1 year ago

        The Carrizo APUs were all over the low-end notebook scene, so that seems possible.

      • Zizy
      • 1 year ago

      5m sounds pretty low.
      Mindfactory.de has sold about 100k Ryzens by now. I guess that puts the numbers at about 500k-1m in Germany, about 5m in the EU, about 15m worldwide? Sure, this includes 5 extra months of 2018, but I find it unlikely AMD would sell more chips in these 5 months than they did in the past year.

      But yeah, this low number clearly shows AMD has nearly no presence in non-DIY.

      • Sahrin
      • 1 year ago

      They had basically no major OEM or systems-integrator design wins during all of 2017. So while I was initially surprised by this as well, you have to look at their penetration in the markets they actually competed in, which were almost entirely enthusiast/DIY. 5 million chips in the DIY market is actually a pretty big share.

    • chuckula
    • 1 year ago

    The whole “OMG air cooled” bit is of course disingenuous.

    The 7980XE can run [url=https://www.phoronix.com/scan.php?page=article&item=intel-7980xe-extreme&num=1]just fine with air cooling too[/url], but of course AMD doesn't like to admit that.

      • Puiucs
      • 1 year ago

      No, it doesn't run "just fine". It runs very VERY hot. You also didn't link a benchmark with reported temps. For example, Techspot reports stock temps of 65C using a very big and expensive 360mm rad (water cooling). Any light OCing pushes the temps very high.

      You also have to consider a few things: 1. benchmarks run at controlled ambient temps. 2. during benchmarks the workstations are placed in well-ventilated areas, not under a desk like most people's. 3. benchmarks don't take into account dust that can accumulate over time. 4. during summer things get really hot in most offices.

        • chuckula
        • 1 year ago

        Yet another one showing that the 7980XE was more energy efficient than a lower-core Threadripper while using — gasp — an air-cooled tower: [url]https://www.anandtech.com/show/11839/intel-core-i9-7980xe-and-core-i9-7960x-review/5[/url]

        On top of that, I seem to recall that the 7980XE ran with lower power consumption than the 1950X in TR's own Blender benchmark, and AMD went out of its way to use Blender itself: [url]https://techreport.com/review/32607/intel-core-i9-7980xe-and-core-i9-7960x-cpus-reviewed/13[/url]

        But I'll take your unsubstantiated anecdotes as proof instead of real-world tests.

      • Mr Bill
      • 1 year ago

      Well, the chiller was air-cooled. That's got to count for something, right?

    • Anovoca
    • 1 year ago

    Not sure if that guy is blinking or if I am just being a stereotypist right now.

    • blastdoor
    • 1 year ago

    32 cores at 3 GHz is pretty sweet given that it’s on air cooling.

    Cutting the per core memory bandwidth in half is a bummer.

    Overall, though, I love Threadripper — it's a bit of a Frankenstein targeted at a crazy small niche, but I respect the no-holds-barred, pull-out-all-the-stops competition. And it just so happens that I do sometimes spend some time in that crazy small niche.

      • Anonymous Coward
      • 1 year ago

      It should be easy enough for customers in this class to determine if 4 dies represent a better deal than 2 dies, so I guess it's not a big deal.

      I’m impressed that having two dies (allegedly) stuck using only remote RAM is (allegedly) competitive. Ideally the OS’s will be aware of this level of non-uniformity in memory.

        • Waco
        • 1 year ago

        All modern OS’s are NUMA-aware to varying degrees.

          • chuckula
          • 1 year ago

          NUMA-awareness* in an OS helps these chips to a point: if done right it can basically make them act like the old 12 or 16-core Threadrippers in low-thread workloads by pretending the extra 2 dice don't exist.

          After you scale past the cores on the first two dice, even NUMA schedulers can’t fix all your problems though.

          *AMD would like to declare H2 2018 as NUMA-awareness half-year. #AMDcares.

            • Waco
            • 1 year ago

            I’m hoping AMD wouldn’t do a 2×2+2×0 config…the OS can’t reliably determine what workloads are latency sensitive and what aren’t.

          • dragontamer5788
          • 1 year ago

          Will the other two dies have their own NUMA node? NUMA is about non-uniform MEMORY access, emphasis on memory. It has [b]nothing[/b] to do with the layout of cores or caches.

          The Threadripper platform has 2 NUMA nodes, one for each memory controller cluster (2x memory controllers per node). The user will likely need to play with [b]affinity[/b] settings, not NUMA settings.

          I'm betting this will look like 2x NUMA nodes to the OS, even with the four dies.
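
          To make the "affinity, not NUMA" point concrete, here's a minimal sketch of what pinning a process to one die's cores looks like on Linux, using only Python's standard library. The CPU range is hypothetical; the real CPU-to-die mapping has to be read from the system (lscpu or the sysfs topology files).

          ```python
          import os

          # Hypothetical mapping: pretend logical CPUs 0-15 belong to one die, so the
          # process stays near a single memory controller. The range is made up for
          # illustration; check /sys/devices/system/cpu/cpu*/topology on real hardware.
          die0_cpus = set(range(0, 16)) & os.sched_getaffinity(0)  # only CPUs that exist

          os.sched_setaffinity(0, die0_cpus)  # 0 = the current process
          print("Now restricted to CPUs:", sorted(os.sched_getaffinity(0)))
          ```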

            • Waco
            • 1 year ago

            I’m unsure what you mean – the core layout in this context absolutely determines NUMA layouts. I would assume that TR2, regardless of memory controller layout, will have 4 NUMA nodes.

            • dragontamer5788
            • 1 year ago

            One of the bootup procedures in Linux / Windows is to run core-to-NUMA node latency tests. For example, core#0 takes 70ns to talk to NUMA #0, but 150ns to talk to NUMA #1.

            But if NUMA #2 and NUMA #3 don’t have any RAM attached to them, how can this test even happen?

            [quote]I would assume that TR2, regardless of memory controller layout, will have 4 NUMA nodes.[/quote]

            I'm not so sure 4 NUMA nodes fully makes sense. I'd wait for AMD engineers to say otherwise. Maybe they have a solution, but things don't look so straightforward from my perspective. I wouldn't be surprised if they were forced to make TR2 show off only 2 NUMA nodes.

            • Waco
            • 1 year ago

            I guess I’m confused why this is an issue – NUMA nodes are not required to have local memory attached unless I’m completely confused. NUMA nodes aren’t required to have a CPU either, you can have a remote node with memory only.

            • dragontamer5788
            • 1 year ago

            Well, you made me look it up.

            [url]http://www.uefi.org/sites/default/files/resources/ACPI_6_1.pdf[/url]

            SRAT and SLIT are the tables that are relevant to NUMA specifications from ACPI. I would assume that these tables are generated by AMD and/or the motherboard makers.

            SLIT allows for "255", which implies "infinite distance" from one node to another. And SRAT seems to have no restrictions on processor / memory (and x2APIC or... other kinds of entries). The "proximity domain" is likely the NUMA node, but I'll be honest that I only really skimmed those entries.

            Anyway, memory lengths of zero probably can represent NUMA nodes #3 and #4 (since they'd be disabled by the motherboard).
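
            For anyone who wants to see what the firmware actually reported without parsing ACPI tables by hand, Linux exposes the SLIT-derived node distances and per-node memory through sysfs. A rough sketch (these are the standard sysfs paths; the output obviously depends on the machine):

            ```python
            import glob
            import os

            # Print each NUMA node's distance row and how much memory is attached.
            # A node with CPUs but no local memory would simply show MemTotal of 0 kB.
            for node in sorted(glob.glob("/sys/devices/system/node/node[0-9]*")):
                name = os.path.basename(node)
                with open(os.path.join(node, "distance")) as f:
                    distances = f.read().split()
                with open(os.path.join(node, "meminfo")) as f:
                    mem_kb = next(line.split()[3] for line in f if "MemTotal" in line)
                print(f"{name}: distances={distances} MemTotal={mem_kb} kB")
            ```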

            • Waco
            • 1 year ago

            RE: memory length of zero – that’s exactly what I would expect of a node with CPUs but no memory attached.

            I still hope it’s a 4×1 configuration, though. 🙂

      • Freon
      • 1 year ago

      Where did you read this? Unfortunately TFA here has no links to sources and very little hard info itself.

        • chuckula
        • 1 year ago

        Anandtech is reporting numbers [url=https://www.anandtech.com/show/12906/amd-reveals-threadripper-2-up-to-32-cores-250w-x399-refresh]here[/url], but take them with salt.

          • Waco
          • 1 year ago

          “Unofficial layout” == “we made a chart based on guessing”

            • chuckula
            • 1 year ago

            The lead editors of Anandtech — which has gone downhill big-time since Anand jumped ship — also flat-out claimed that the only possible solution for Intel to get new HEDT chips to market is to have EMIB dies connected together.

            So yes, salt is a good thing until AMD officially confirms what they did.

            I watched the press conference in its entirety and at no point did they state anything about the memory controller layout.

            • blastdoor
            • 1 year ago

            Wow, I agree with everything stated here, so three up-thumbs for you!

    • Chrispy_
    • 1 year ago

    Would anyone care to guess on pricing for the 32-core options?

    I wasn’t far off pulling the trigger on 8x 1950X builds, but assuming the thermals and costs are feasible it makes sense to wait for a couple of months and double up on cores!

      • blastdoor
      • 1 year ago

      I’ll guess $1800.

        • Chrispy_
        • 1 year ago

        I’ve emailed a contact of mine at AMD. Will post back if I get anything juicy!

      • chuckula
      • 1 year ago

      [quote]Would anyone care to guess on pricing for the 32-core options?[/quote]

      $19.95 + S&H! Order in the next 15 minutes and we'll throw in a Sham-wow!

      • chuckula
      • 1 year ago

      Why not just get Epyc systems right now?

        • Zizy
        • 1 year ago

        The 1950X is generally the best perf/$ right now for number crunching. The 7551P isn't that much better as it is more expensive, unless you are memory limited. This 32C TR should offer similar perf/$ to the 1950X, if not even slightly better.

          • Chrispy_
          • 1 year ago

          Yeah, you’re not wrong.

          The requirements are split evenly between three main criteria:

          1) perf/$
          2) Can they be cooled at full load with a single 120mm air cooler or not?
          3) Do they drop into an mATX TR4 motherboard like ASRock's Taichi?

      • derFunkenstein
      • 1 year ago

      I would guess that the top-end 32-core version is more than 2X the 1950X, because diminishing returns. The 1950X seems to be going for around $949 online, so I’d guess $2000 at a bare minimum, and I’d go further and figure closer to $2200.

        • brucethemoose
        • 1 year ago

        A 32-core 7551P is $2300, and has more features than the new TR, being an expensive server chip and all.

        I’d bet on a flat $1999, but it could end up less than that.

          • chuckula
          • 1 year ago

          $2000 is clearly a ceiling and lower prices might happen too.

          • Waco
          • 1 year ago

          I’d bet it’s lower than that, if only because you could just jump to Epyc for double the memory channels and a LOT more PCIe if the price gap is small.

            • brucethemoose
            • 1 year ago

            EPYC mobos are pricey (if you can even find them), and they have locked multipliers.

            Then again, you’re probably right, as neither of those things ever stopped server CPUs from being significantly more expensive than their HEDT counterparts.

            • Waco
            • 1 year ago

            True. I can’t imagine many will be willing to deal with the heat/cost/longevity issues from overclocking 32 cores of fury though. It wouldn’t be very hard to exceed 500 watts if you’re going crazy trying to get all cores running at full speed.

            If AMD is smart, they’ll have a very fine-grained XFR curve for these chips. 4 GHz+ peaks with light loads (say, less than 8 cores active or something), scaling down to the base clock as you ramp up more cores. The incentive to overclock will be limited to those that want those all-core clocks way up…and they’ll pay for it in noise or extreme cooling.
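
            Purely to illustrate the shape of the curve being described, here's a toy sketch; every clock and core-count number below is invented for illustration, not an AMD spec:

            ```python
            def illustrative_boost_clock(active_cores: int) -> float:
                """Toy boost table: high clocks with few active cores, stepping down
                toward a base clock as more cores load up. Numbers are made up."""
                steps = [(4, 4.2), (8, 4.0), (16, 3.6), (24, 3.2), (32, 3.0)]
                for max_active, clock_ghz in steps:
                    if active_cores <= max_active:
                        return clock_ghz
                return 3.0  # base-clock fallback

            print(illustrative_boost_clock(2), illustrative_boost_clock(32))  # 4.2 3.0
            ```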

            • derFunkenstein
            • 1 year ago

            I wouldn’t mind being wrong.

            • Waco
            • 1 year ago

            I love being wrong! Especially when I’m just being overly pessimistic. 🙂

            • Srsly_Bro
            • 1 year ago

            Then go read my reply and love it some more, bro.

      • frenchy2k1
      • 1 year ago

      When the first-gen Threadripper came out, it had 2 Ryzen dies and cost 2x the 1800X.
      Now a Ryzen 7 2700X costs about $320, and this new Threadripper includes 4 dies.

      I'd guess this new processor should come out around 4x the 2700X price, maybe $1250…

    • SkyWarrior
    • 1 year ago

    It is happening.
    32 cores on a single socket desktop system bitch!!!

    • Klimax
    • 1 year ago

    That reminds me: any way to get Iray benchmarks next time? (DAZ 3D has it.)

      • Jeff Kampman
      • 1 year ago

      Is Iray a CPU renderer? Seems like not unless you force it to be for some reason.

        • caconym
        • 1 year ago

        Yeah, it doesn’t excel when run on a CPU, and I don’t see a lot of people buying Threadrippers for Daz renders when buying a better GPU is generally going to make a lot more sense.

        A more useful benchmark might be the one Chaos Group has for V-Ray. That’s an industry-leading renderer that gets used by major VFX studios, runs well on CPU, and has a user base that overlaps with the kind of people who buy Threadrippers.

        (edit: to be fair to Iray, it's a good renderer that does get used professionally, especially in product design, but again, it's GPU-focused)

          • Klimax
          • 1 year ago

          It should be noted that when VRAM is insufficient, Iray falls back to CPU mode. And scenes can get massive fast. (Also, IIRC it is CUDA-only, so it's CPU-only mode for AMD users.)

    • pirate_panda
    • 1 year ago

    Neat! What features will segment TR2 from their Epyc line, if the max number of cores and support for ECC memory are the same? Is it the ability to have multiple sockets per motherboard, number of PCIe lanes, and/or something else I’m missing?

      • NTMBK
      • 1 year ago

      Number of memory channels: if it fits into a Threadripper motherboard then it can only have one channel per die enabled.

      • Zizy
      • 1 year ago

      Yes (2P vs 1P), yes (128 vs 60+4) and you missed memory channels (8 vs 4).
      Plus there are some other less relevant differences that generally shouldn’t matter much:
      1.) Epyc has validated LRDIMM and other fancy memory stuff, enabling up to 2TB/CPU. TR doesn’t have that disabled, but it doesn’t have it tested either. So, you end up with 128GB/CPU (instead of 1TB/CPU). Given fancy RAM prices (heck, even non-fancy) this distinction is completely irrelevant.
      2.) Some (in)security stuff like memory encryption is Epyc only.

    • Waco
    • 1 year ago

    I wonder how the memory channels are wired…hopefully it’s not 2 channels from 2 dies and 0 channels from the other two. One from each would make a lot more sense.

      • chuckula
      • 1 year ago

      I’m assuming one from each.

      That part's easy. The power distribution from the exact same half-socket that is supposed to run two dies now driving power to four pieces of silicon with half the pins of Epyc? That's where it gets tricky, and for all the whining about Intel's demonstration, you'll note that nobody called TR2 an "overclocker's dream".

        • Waco
        • 1 year ago

        I’d be willing to bet if you scoped each of the pins…all the power/ground pins are active just like SP3.

          • Anonymous Coward
          • 1 year ago

          Are you saying that for TR1 or TR2? Seems like it would be hard to [i]not[/i] have some change in which pins are connected to something, and I wonder if that means that existing mobos have been required to support power/ground pins that TR1 does not use.

            • Waco
            • 1 year ago

            That’s what I was implying. Assuming AMD planned all of this ahead of time – the sockets likely have all of the pins already active, just potentially unused, for TR1.

      • NoOne ButMe
      • 1 year ago

      Two dice have dual-channel memory access each; the other two do not have direct access.

        • chuckula
        • 1 year ago

        Source?

          • mczak
          • 1 year ago

          Anandtech mentions this directly, and has a slide showing it (albeit an unofficial one; it's not in the official slides).
          But it can't really be another way, I suppose – this is the same platform as Threadripper 1, which just had only 2 active dies. You can't just rewire things so each die has one memory channel. (You also don't get more PCIe lanes either.)
          Thinking about it, for gaming it may actually be better if 2 dies have 2 memory channels each. Gaming won't need all cores anyway, and if the OS is just slightly clever it will only use the cores which have memory channels attached, so the per-core memory bandwidth is twice as much as if there were one MC per die. But obviously workloads really requiring all cores will suffer.

            • Zizy
            • 1 year ago

            You actually can, if you change the interposer of the TR – but yeah, much simpler to have 2×2+2×0.

            • Waco
            • 1 year ago

            Which they have to do anyway unless they were just placing dummy dies on the same interposer as Epyc.

            • blastdoor
            • 1 year ago

            For gaming, it would be better to buy a different CPU

            • jarder
            • 1 year ago

            An even better buy would be a GPU!

            • chuckula
            • 1 year ago

            I have yet to see any of these fancy GPUs improve my Dwarf Fortress experience.

            But AMD did just launch on 7nm so maybe there’s hope!

            • jarder
            • 1 year ago

            Not really, my point was that if you're really into games, buy a good GPU, AMD or Nvidia, I don't care. When it comes to frame rates, you can barely tell the difference when you change CPU, but you can when you go from a mid-range to a high-end GPU.

            And did they launch that 7nm GPU? I thought it was just an announcement of an impending technology, you know, like Intel's new 28-core HEDT chip. Maybe you were being sarcastic, it's hard to tell sometimes…

            • auxy
            • 1 year ago

            This is an old and pernicious idea that needs to die really fast. Like right now. As in, don’t ever post it again here or elsewhere.

            GPU is important, for sure. Most gamers are playing on single 1080p displays, though. Even a GTX 1050 Ti is enough for most games at this resolution; buying more GPU isn't really getting you anywhere unless you start looking at higher resolutions, refresh rates, or both.

            Some games are CPU-heavy. Really CPU-heavy, in some cases. With a Core i7-5775C @ >4GHz I’m still CPU-limited in many games that I play, like Phantasy Star Online 2. Buying a faster GPU also isn’t making turns go faster in Civilization, or improving world generation time in Terraria. A faster CPU sure does help though.

            CPU matters, and buying a Threadripper to play games on is an egregious-bordering-on-atrocious waste of everything involved. I agree that it’s fine to play some games on your workstation if you’re buying a Threadripper for “real work” that will actually benefit from it. However, saying that it’s “just as good” or “nearly as good” as an actual good gaming CPU like a Core i7-7700K or Core i7-8700K or even a mildly overclocked Core i3-8350K is so far beyond the pale it’s actually disgusting. So don’t say it.

            • jarder
            • 1 year ago

            The price difference between a Core i7-8700K ($350) and a Core i3-8350K ($175) is about $175. The price difference between a GTX 1060 ($230, yes, I saw one on Newegg) and a GT 1030 ($90) is about $140. Where do you think gamers should use their money? So I stand by what I said: if you are really into games, buy a good GPU. The CPU is a minor concern.

            You do know that Phantasy Star Online 2 and Civilization are not the most popular games out there, don't you? Sure, a very small minority of games are unusually CPU-heavy, but that's not a reason to use niche games to give out general advice.

            [quote]I agree that it's fine to play some games on your workstation if you're buying a Threadripper for "real work" that will actually benefit from it. However, saying that it's "just as good" or "nearly as good" as an actual good gaming CPU like a Core i7-7700K or Core i7-8700K or even a mildly overclocked Core i3-8350K is so far beyond the pale it's actually disgusting.[/quote]

            I also agree that it would be stupid to get a Threadripper for a dedicated gaming PC. Thankfully most people are not that stupid, and most Threadripper buyers, I'm sure, have a non-gaming workload in mind. But for those people that use a Threadripper for both gaming and certain CPU-bound productivity tasks, the suggestion that Threadripper is not just (or almost) as good as a dedicated gaming CPU is simply not borne out by evidence. I refer you to:

            [url]https://techreport.com/review/32390/amd-ryzen-threadripper-1920x-and-ryzen-threadripper-1950x-cpus-reviewed/15[/url]

            Have a look at the 99th-percentile gaming graph: the 1950X managed to outperform the i9-7900X and is only a couple of fps behind the top gaming CPU of the time, the i7-7740X. So if you think what I'm saying is disgusting, you must have a serious problem with this web site.

            • blastdoor
            • 1 year ago

            I would hypothesize that auxy might not have been responding to what you wrote, but instead to another argument that some other people might have once made that your post reminded him of. I’ve seen this sort of thing happen on the Internet before….

            • Pwnstar
            • 1 year ago

            Wrong pronoun. Auxy is a girl.

            • Waco
            • 1 year ago

            Honestly, on the internet, everyone is an “it” unless stated explicitly at the time they are posting. 😛

            • auxy
            • 1 year ago

            To respond to blastdoor’s post below, the actual statement I took issue with is this:
            [quote<]When it comes to frame rates, you can barely tell the difference when you change CPU, but you can when go from a mid-range to high-end GPU.[/quote<] This is just awful and wrong for most gamers. As I noted in my previous post, you have to start playing in >2MP resolutions or >60Hz refresh rates before you'll get any real benefit from anything from a GPU beyond a 1050 Ti or 1060 6GB (depending on the game.) And most gamers are casual gamers anyway who won't even care about the difference in resolution. WITH THAT SAID, I'm wasting my time here trying to argue with someone as disingenuous as you but just for the sake of showing everyone else here how laughable your points are, let me point out that your "evidence" doesn't hold up under scrutiny. [quote<]The price difference between a Core i7-8700K ($350) and a Core i3-8350K ($175) is about $175. The price difference between a GT 1060 ($230, yes, I saw one on newegg) and a GT1030 ($90) is about $140. Where do you think gamers should use their money? So I stand by what I said, if you are really into games buy a good GPU. The CPU is a minor concern.[/quote<]This is completely insane and doesn't support your argument at all! You don't get to make random statements and then proclaim victory. [b<]*I*[/b<] brought up the i3-8350K as a point against the Threadripper, and against every other CPU more expensive. And the GT1030 was at no point under consideration by anyone! The ACTUAL comparison is of the GTX 1050 Ti ([url=https://www.newegg.com/Product/Product.aspx?Item=N82E16814500411<]$200[/url<]) or perhaps GTX 1060 6GB ([url=https://www.newegg.com/Product/Product.aspx?Item=N82E16814137095<]$300[/url<]) versus the GTX 1070 Ti ([url=https://www.newegg.com/Product/Product.aspx?Item=N82E16814932012<]$460[/url<]), GTX 1080 ([url=https://www.newegg.com/Product/Product.aspx?Item=N82E16814932009<]$530[/url<]), or GTX 1080 Ti ([url=https://www.newegg.com/Product/Product.aspx?Item=N82E16814137111<]$750[/url<]). Telling a gamer with a mediocre CPU and an already fast GPU to just get a faster GPU isn't going to help them at all. As you may be aware, [b<]"this web site"[/b<] is famous for inventing "inside the second" frametime benchmarks. If you actually go to [url=https://techreport.com/review/32390/amd-ryzen-threadripper-1920x-and-ryzen-threadripper-1950x-cpus-reviewed/9<]that review you linked[/url<] and look at the game tests, the Threadripper chips are in the bottom half of the frametime results in every game. They trade places with the also-awful-for-gaming mesh-based Intel chips, but they're soundly beaten by the lower-clocked Ryzen 1800X and the two ring-based Intel chips almost without exception. (Crysis 3 isn't really a useful benchmark, it even runs well on FX chips.) So, no it's not "just as good as a dedicated gaming CPU." Take your crap and shove it back in your mouth where it belongs. CPU matters for mainstream games and you can see that in Zak's review [url=https://techreport.com/review/31464/msi-trident-3-compact-gaming-pc-reviewed/8<]here[/url<]. In case you want to cherry-pick from the results, I'll quote from him: [quote<]Dropping the resolution down to 1920x1080 cleans things right up on the Trident 3, and it still looks pretty nice. Meanwhile, changing the resolution on the EN1070 has no real effect on performance—the Core i5-6400T is the primary bottleneck of that machine.[/quote<] I'm aware of these results because I actually [b<]read[/b<] [i<]this web site[/i<] instead of skipping to the conclusion of every article. 
I'm done with this discussion and won't be checking back for replies. I hope anyone who comes upon this discussion later realizes the error of your ways.

            • jarder
            • 1 year ago

            bee in bonnet much?

            [quote]you have to start playing in >2MP resolutions or >60Hz refresh rates before you'll get any real benefit from anything from a GPU beyond a 1050 Ti or 1060 6GB[/quote]
            If you want to bring monitors into the discussion, it's reasonably safe to assume that people who go for the high-end GPUs will pair them with high-end monitors.

            [quote]This is completely insane and doesn't support your argument at all! You don't get to make random statements and then proclaim victory. *I* brought up the i3-8350K as a point against the Threadripper, and against every other CPU more expensive. And the GT 1030 was at no point under consideration by anyone![/quote]
            So you can bring up random chips but I can't. And why not consider a GT 1030? It's a perfectly valid choice of GPU. You seem to have a very narrow view of the hardware that people might have.

            [quote]Telling a gamer with a mediocre CPU and an already fast GPU to just get a faster GPU isn't going to help them at all.[/quote]
            I never said that. It's just as blastdoor thought: you didn't read my post properly and jumped to some crazy conclusion. I said that somebody would notice going from "a mid-range to high-end GPU" more than they would a CPU upgrade. If you want to talk about somebody who already has a "fast GPU", that's a completely different discussion.

            [quote]As you may be aware, "this web site" is famous for inventing "inside the second" frametime benchmarks. If you actually go to that review you linked and look at the game tests, the Threadripper chips are in the bottom half of the frametime results in every game.[/quote]
            At first I thought you were being disingenuous, linking to the GTA V page, which I'm sure you know is a gaming outlier for the Threadripper. But now I think I understand: you spend your time looking at outliers and not at the average performance, so it's no wonder your ideas are skewed.

            [quote]CPU matters for mainstream games and you can see that in Zak's review here[/quote]
            I never said that CPUs don't matter, just that the GPU usually matters more for gaming. And that review you linked to was of thermally constrained small-form-factor machines; the conclusions do not apply to standard desktops.

            [quote]I'm aware of these results because I actually read this web site instead of skipping to the conclusion of every article.[/quote]
            Are you saying that the conclusions of the Threadripper article are not supported by the data? If so, please point out the error in the Threadripper conclusion.

            I'll try to put this as plainly as I can. The conclusion pages are far more important than any individual page in a review because they distill down all of the data on the previous pages and give you an average upon which you can base proper conclusions. Looking at individual pages does not in any way give you an insight into the bigger picture.

        • Waco
        • 1 year ago

        I’ll wait for AMD to confirm that. 🙂

      • Zizy
      • 1 year ago

      Inter-die links offer higher bandwidth than the DDR channels themselves. Configuration shouldn't matter too much, and 2×2+2×0 is a much simpler drop-in replacement.

        • Goty
        • 1 year ago

        There’s more bandwidth but also a lot more data. Latency is also a factor to consider.

    • Srsly_Bro
    • 1 year ago

    my boi @Chuckula predicted the 32 core AMD miracle chip with Zen2 but AMD being so great is doing it even sooner.

    #ThankyouAMD

      • chuckula
      • 1 year ago

      We needed it after Intel cheated by releasing last year's Xeons in HEDT!

      Finally the ultimate gaming CPU that in no way tries to cheat by recycling server hardware in HEDT!

    • setaG_lliB
    • 1 year ago

    I just love this energized new AMD that is somehow managing to keep pace with Intel. Even at the ultra high end. Hey remember Cyrix?

      • watzupken
      • 1 year ago

      I feel the 2 key reasons for Intel struggling to take the lead recently are:
      1) AMD did not take the risk of a radical design on Ryzen, as opposed to Bulldozer,
      2) Intel losing their edge with the fab as they have been stuck on 14nm for a long time.

      Previously AMD struggled because the 2 points were clearly not in their favor.

        • blastdoor
        • 1 year ago

        Hmmm… you might be on to something there

        • Sahrin
        • 1 year ago

        The reason Intel is struggling is that they failed to use a radical redesign and they failed to execute on their manufacturing strategy.

        AMD did both – Zen is a radical redesign for AMD. AMD’s manufacturing is executing on their strategy.

        It's not just something AMD is doing right (which, by the way, they've done over and over again in the past…K6, K7, K8…Bulldozer was a rare misstep from AMD), it's also about Intel's failure to innovate in any meaningful way.

        Intel’s advantage has been that IBM approached them to make the original PC CPU. They got the ‘power’ they have from IBM. There’s no ‘special Intel sauce.’

        • the
        • 1 year ago

        A good chunk of their initial 10 nm product lineup has been cancelled.

        Cannon Lake for desktops was axed two years ago, as Intel was hoping to simply get Cannon Lake out the door for mobile last year. The first device using it appeared last month.

        Knights Hill for HPC was also quietly cancelled, as it was supposed to appear relatively early on the 10 nm process. It has totally missed its window of opportunity. Intel is also shifting their approach to HPC. For the AI niche they have Nervana, and they're developing a discrete GPU which may appear in this sector as well.

        Presumably there was a server chip on 10 nm that was supposed to land between Skylake-SP and Ice Lake server, following the old tick-tock model. That disappeared before formal announcement, and Cascade Lake appeared on the roadmap. Even now, Cascade Lake looks to have a short life due to additional delays.

          • blastdoor
          • 1 year ago

          It seems that the OEMs have hung back from Ryzen/Epyc so far, perhaps allowing the DIYer crowd to act as beta testers.

          If the OEMs conclude that beta test was successful, though, 2019 could be an interesting year.

      • Kretschmer
      • 1 year ago

      At this stage of CPU design, firms focus on different attributes. This allows a smaller firm to eke out wins in under-served facets of CPU design by making other trade-offs. E.g. power vs. cores.

    • derFunkenstein
    • 1 year ago

    This gonna be great in my Cinebench farm (thanks this morning for helping me figure that one out)

      • chuckula
      • 1 year ago

      The inside-the-second gaming benchmarks are going to be EPYC!

    • chuckula
    • 1 year ago

    That part where they showed it destroying the 28 core Intel parts at Cinebench happened so fast you might have missed it!

    I can’t blame them though, the 5 GHz overclock was just so Epyc.

      • Jeff Kampman
      • 1 year ago

      To AMD’s credit, the demo kicked the Cinebench addiction and used Blender instead.

        • chuckula
        • 1 year ago

        It was funny though, that guy behind the desk accidentally said “18 cores” for Intel’s highest end HEDT chip.

          • derFunkenstein
          • 1 year ago

          “Accidentally”

      • Unknown-Error
      • 1 year ago

      Air-cooled vs liquid cooled. They weren’t ready for Intel going full beast mode with the 28-core part.

      • Amiga500+
      • 1 year ago

      True, they couldn’t.

      The 28 core Intel part destroyed itself shortly after the cinebench run. A component life so fast that you might have missed it. Although, that life, while admittedly short, did burn as bright (and as hot) as a star…

      Even more of an Emergency Edition than the P4EE? I think so…

        • chuckula
        • 1 year ago

        Good, then when TR runs the review I damn well expect these magic 32-core parts to beat those failed Intel chips at DAWBench by at least 40%.

        I’m being fair here by keeping the margin of victory under 100%!

        For somebody whose username refers to a product that was associated with the Video Toaster, I'd expect you to place a great deal of emphasis on DAWBench.

        Let’s throw x265 encoding in there too, I want to see those AVX-512 units that ONLY AMD has destroy Intel too.

          • blastdoor
          • 1 year ago

          I propose a drinking game in which we all take a drink every time Chuckula says “AVX-512”

            • Goty
            • 1 year ago

            I have a wife and child to care for, man!

          • ptsant
          • 1 year ago

          Do you actually use DAWBench or did you just decide to quote it because it is tuned for Intel?

            • chuckula
            • 1 year ago

            I don’t personally use DAWBench but TR sure does for actual use outside of testing purposes.

            But I sure have used x265 and improvements there are always welcome.

            As for AMD, what’s their typical biggest margin of victory? Cinebench? Yeah.

            My larger point is that given the level of shrillness around here about how much Intel has failed, I’m just pointing out that a “failed” chip surely shouldn’t be able to compete with TR2 in any workloads at all.

        • derFunkenstein
        • 1 year ago

        wait, what? The 28-core part self-destructed? is there proof?

          • Srsly_Bro
          • 1 year ago

          No, it self-destructed. The proof kinda takes care of itself.

          Srs comment: he's a casual who thinks that because of n cores it must be y wattage.

          He's the type of casual who would think a 32-core chip is 32x the power usage of a single core from 2007.

            • derFunkenstein
            • 1 year ago

            Oh that’s precious. Thanks for clarifying.

            • Waco
            • 1 year ago

            Calling people casuals is not helping your case, just FYI. Intel apparently used a phase change cooler on the system that could dissipate more than 1500 watts of power…so yes, it probably did pull north of a kilowatt loaded up. They even had a special motherboard for this stunt with a ludicrous VRM setup.

            [url]https://www.anandtech.com/show/12907/we-got-a-sneak-peak-on-intels-28core-all-you-need-to-know[/url]

            • Srsly_Bro
            • 1 year ago

            "Probably" again, with no real evidence that it did. There is a 1500W phase-change cooler. Do you know the different capacities?

            I feel like you watch Ancient Aliens and agree with everything that is said on the show, convinced it is hard evidence.

            The existence of a 1500W phase-change cooler does not mean that it used 1000W.

            To go even further, not one casual dork on here has compared overclocked 7980XE numbers, but instead ranted about their feelings and how they felt the day the 28-core hit 5.0 GHz.

            You start?

            • Waco
            • 1 year ago

            Sigh. You’re just ranting for no reason.

            There are plenty of [i]Cinebench[/i] power measurements with the 7980XE at nearly 5 GHz... and they pull nearly 900 watts at those speeds. In Cinebench. With 10 fewer cores.

            Are you done yet? You're just digging a hole.

            • Srsly_Bro
            • 1 year ago

            All I want is evidence and proof. Please provide it. I’m begging you, bro. I don’t care about what you think. I care about what you can prove and show me.

            I’ll be waiting, like usual.

            • Waco
            • 1 year ago

            Google. This is a comment section on an article. Have a nice day.

            • Srsly_Bro
            • 1 year ago

            My point still stands. Go back to Ancient Aliens, bro.

            • Waco
            • 1 year ago

            Nobody here is making crazy claims contrary to evidence (aside from the stupid self-destruction rant). This is not an academic paper, this is a message board where you clearly have the time to be a pain but not the time to spend a few seconds Googling.

            • Srsly_Bro
            • 1 year ago

            The time to message me could have been used to provide a link. Suspicion increasing.

            • Redocbew
            • 1 year ago

            Ok, let me try to break this down for you.

            Facts:

            1. Software is made of threads.
            2. Threadripper rips threads all the time.
            3. The purpose of Threadripper is to flip out and rip threads as fast as it can.

            Threadripper has the REAL ULTIMATE POWER. What part of that do you not understand?

            • chuckula
            • 1 year ago

            NEW FACT:

            AMD has banned all video reviews of TR2.

            Nobody wants to see naked reviewers after their clothes disintegrate!

            • Srsly_Bro
            • 1 year ago

            Thanks for the help, bro. I truly understand ripped threads and ultimate power.

            • Goty
            • 1 year ago

            [url]http://www.legitreviews.com/intel-core-i9-7980xe-18-core-processor-review_197903/10[/url]

            So, let's assume the chip displayed was a golden sample, had amazing power characteristics, and somehow the power consumption scales linearly with core count. In that case, the power consumption should be somewhere around 845W * (28/18) = [b]1314W[/b]. Let's give them even MORE leeway and assume they magically save an additional 20% on the power consumption. That puts it at [b]1051W[/b].

            Please explain to us how you think it came in significantly under that number. Assuming you're able to convince yourself of that fact, please then also explain why Intel felt the need to resort to such extreme cooling measures when a large watercooling loop should have been able to handle the power with ease (recall that the above link managed to remove 845W with an AIO cooler, at least for the duration of a small benchmark.)

            I'm having a really slow day at work. Entertain me. Please.
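
            For anyone who wants to check that back-of-the-envelope scaling, it's a one-liner; this is the same naive linear model described above, not a real power measurement:

            ```python
            # Naive linear scaling of the measured 18-core figure to 28 cores, per the
            # estimate above; real chips don't scale this cleanly.
            measured_w = 845                    # 7980XE (18C) near 5 GHz, from the linked review
            scaled_28c = measured_w * 28 / 18   # ~1314 W
            optimistic = scaled_28c * 0.8       # with a generous 20% saving, ~1051 W
            print(round(scaled_28c), round(optimistic))
            ```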

            • Waco
            • 1 year ago

            You’re far too kind to him considering it’s the first link on Google when searching for high-clocked 7980s. 😛

            • Srsly_Bro
            • 1 year ago

            All I asked for was information and proof. Did you see me dispute the claim or the information for the claim?

            Go back and read, bro. I teased people for unfounded claims backed by zero evidence. I shouldn't have to explain this.

            • Waco
            • 1 year ago

            No, you disputed statements that anyone familiar with the matter already understood without providing any useful information yourself. It was not helpful.

            • Srsly_Bro
            • 1 year ago

            I disputed claims that had no evidence. It seems you don’t know the difference, bro.

            • Waco
            • 1 year ago

            It seems you can’t have a conversation. I’m sorry.

            • Goty
            • 1 year ago

            Come now. It was 845W, not 900.

            See! You’re just making things up!

            (Am I doing it right?)

            • Waco
            • 1 year ago

            🙂

          • chuckula
          • 1 year ago

          It’s Amiga500+.

          The only difference between what he says and what I say is that he’s stupid enough to think it’s all true and he signed up for enough phantom accounts to make it look like people believe him too.

            • Amiga500+
            • 1 year ago

            Aww, the wee blue troll isn’t liking it.

            Poor wee blue troll.

            • cegras
            • 1 year ago

            Says the ‘impartial AMD fanboy caricature’ who really wanted to believe the 28 core 5 GHz chip was cooled on ambient water?

          • chuckula
          • 1 year ago

          Of course there's [url=https://www.youtube.com/watch?v=4y9NtHlJvbY]proof![/url]

            • derFunkenstein
            • 1 year ago

            Not helpful

      • maroon1
      • 1 year ago

      We all know that Ryzen does better in Cinebench than in 95% of other benchmarks. Cinebench alone will not give you the full picture.

      The 1950X and 1920X both beat the 7900X in Cinebench but lose in Handbrake. In Blender, the 7900X performs about equally to the 1920X.
      [url]https://techreport.com/review/32390/amd-ryzen-threadripper-1920x-and-ryzen-threadripper-1950x-cpus-reviewed/13[/url]

      You should not be surprised that AMD will always use Cinebench in their marketing slides and demos, because they want their product to look as good as possible.

        • chuckula
        • 1 year ago

        I appreciate your earnestness.

        Perhaps my sarcasm was a touch too dry.

      • Unknown-Error
      • 1 year ago

      -25? Come now, chuck! You can do better than that. I think your record is -40 or -50 something. I am going to add +1 out of spite. :p

        • chuckula
        • 1 year ago

        No I’m going to wait.

        I’m going to wait until the NDA release date.

        Then I’m going to settle all the fanboy business.

          • Pwnstar
          • 1 year ago

          Said like a true fanboy. Pathetic.
