AMD’s second-generation Ryzen Threadripper CPUs revealed

AMD is taking the wraps off its second-generation Ryzen Threadripper CPU lineup this morning, and the company is upping the ante in the war for multithreaded superiority on the high-end desktop. Two new chips for workstation and content creation use—the Threadripper 2990WX and Threadripper 2970WX—put more cores than ever before in AMD’s X399 motherboards, while the Ryzen Threadripper 2950X and 2920X bring the bounty of second-generation Ryzen improvements to users who need both multi-threaded grunt and single-threaded performance from their high-end systems.

All second-generation Threadripper CPUs incorporate the extra smarts we’ve already seen in second-generation Ryzen parts for Socket AM4. Those improvements include Precision Boost 2 and its fine-grained control over boost clock speeds as work occupies more cores and threads on the chip, plus the benefits of XFR 2 for sustained performance when builders choose a heavy-duty cooler for use with a Ryzen CPU. Second-generation Threadrippers also inherit the baseline performance improvements offered by the move to GlobalFoundries’ 12LP fabrication process.
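AMD hasn't published the exact boost algorithm, so the following is only a toy sketch of the behavioral difference, with illustrative rather than measured clocks: first-generation Precision Boost stepped down sharply once more than two cores were busy, while Precision Boost 2 backs off gradually as work spreads across the chip.

```python
def precision_boost_1(active_cores: int, base: float = 3.4, boost: float = 4.2) -> float:
    """Rough first-gen behavior: full boost on up to two busy cores,
    then a hard step down (modeled here as the base clock)."""
    return boost if active_cores <= 2 else base

def precision_boost_2(active_cores: int, total_cores: int = 16,
                      base: float = 3.5, boost: float = 4.4) -> float:
    """Toy Precision Boost 2: the clock tapers smoothly as more cores load up.
    The linear ramp is an assumption; the real curve depends on temperature,
    current, and socket power, which is also where XFR 2 and a heavy-duty
    cooler buy extra sustained headroom."""
    frac = (active_cores - 1) / (total_cores - 1)
    return boost - (boost - base) * frac
```

Under this toy model, a 2950X-like part with eight cores busy would still run just under 4.0 GHz instead of falling straight to an all-core ceiling.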

                      Cores/   Base        Peak boost   L2 cache  L3 cache  TDP     Suggested
                      threads  clock (GHz) clock (GHz)  (MB)      (MB)              price
Threadripper 2990WX   32/64    3.0         4.2          16        64        250 W   $1799
Threadripper 2970WX   24/48    3.0         4.2          12        64        250 W   $1299
Threadripper 2950X    16/32    3.5         4.4          8         32        180 W   $899
Threadripper 1950X    16/32    3.4         4.2          8         32        180 W   $999
Threadripper 2920X    12/24    3.5         4.3          6         32        180 W   $649
Threadripper 1920X    12/24    3.5         4.2          6         32        180 W   $799
Threadripper 1900X    8/16     3.8         4.2          4         16        180 W   $549

AMD says Threadripper WX CPUs are for users who always want more—more cores, more threads, more memory capacity—and whose workloads are bound primarily by the number of threads they can get in a workstation. (WX doesn’t officially stand for anything, but the Radeon Pro family of workstation graphics cards uses it to mean as much, so I’m rolling with “workstation.”) AMD broadly describes this group as “creators and innovators.” This shouldn’t surprise any readers of TR’s high-end desktop CPU reviews, but AMD doesn’t envision these chips powering gaming rigs.

Instead, the WX-series parts are about beating Intel’s high-end desktop parts in workloads like rendering where performance can generally scale with every available thread. In evergreen benchmarking favorites like Cinebench, POV-Ray, Corona, and Blender, the $1799 2990WX delivers double-digit speedups over the $1879-at-e-tail-right-now Core i9-7980XE. Even if Intel cuts its latest Extreme Edition’s price tag back to parity with the 2990WX, the AMD chip still appears to be in a great competitive position for those who can take advantage of its every core and thread. The Threadripper 2990WX is available for pre-order today and will hit e-tail shelves on August 13.

For gamers who work hard during the day and want decent performance off the clock, AMD is refreshing the Threadripper X-series family with two new parts. The Threadripper 2950X is an evolution of the popular Threadripper 1950X. It now offers a single-core boost speed of 4.4 GHz—the highest of any Ryzen CPU so far—and a 3.5-GHz base clock, although we’ll be keen to see how Precision Boost 2 actually affects the delivered all-core boost clock this chip can sustain.

Although the Threadripper 2950X likely won’t unseat the Core i7-8700K as the fastest gaming CPU around, AMD’s internal numbers paint a competitive picture for it versus the Skylake-X Core i9-7900X. In Cinebench, Handbrake, and 7-Zip, AMD believes the 2950X will beat out the i9-7900X, while the company’s gaming benchmarks suggest the 2950X is just 6% behind the i9-7900X on average across 11 games at 1920×1080. We think anybody shopping for a 1920×1080 gaming CPU in this price bracket is out of their minds, but if you insist on making bad decisions, the Threadripper 2950X at least doesn’t seem to punish you much for massively unbalancing your graphics-card-and-monitor budget.

The 2950X also slices $100 off the 1950X’s price tag. At $899, this second-generation Ryzen part promises i9-7900X-beating multithreaded performance in at least some workloads—a value proposition that catapulted the original 1950X to a TR Editor’s Choice award on the strength of its performance and the value of the X399 platform. We will, of course, reserve judgment until we’ve been able to subject all of these chips to our own test suite, but the rosy competitive picture the 1950X first painted doesn’t seem likely to change much. The Threadripper 2950X will arrive at e-tail August 31.

AMD is also teasing two chips that will launch later in the year: the 24-core, 48-thread Threadripper 2970WX and the 12-core, 24-thread Threadripper 2920X. It might seem strange for AMD not to launch a full-stack Threadripper lineup today, but the company says the Threadripper 1950X was its best-selling Threadripper in the year that chip family has been on the market. Assuming that data is correct, it makes sense for AMD to launch its most powerful (and most expensive) Threadripper yet today and follow up with the in-between parts later. The 2970WX and the 2920X will arrive sometime in October.

AMD has also introduced a new packaging design for the second-generation Threadripper family. Before we pulled the Threadripper 2990WX and Threadripper 2950X out of their packages and slathered them in thermal paste, we made a video about unboxing those chips and about some of the other hardware AMD sent over to assist us with our testing. Have a watch:

Stay tuned for full benchmark results from these chips soon.

Comments closed
    • Srsly_Bro
    • 1 year ago

    [url<]https://www.techpowerup.com/246676/goldman-sachs-upgrades-stock-ratings-for-amd-downgrades-intel-to-sell[/url<] AMD just keeps on winning and Intel struggles.

    • rudimentary_lathe
    • 1 year ago

    That Threadripper logo gives off a real DOOM vibe.

    I have absolutely no need for this kind of workstation chip, but kudos to AMD for raising the bar. I hope they’ll be able to funnel some of the money they’re going to be making on Ryzen, Threadripper and Epyc into the GPU R&D.

    • Jigar
    • 1 year ago

    So this is the fastest HEDT CPU in the world. WOW AMD did it.

      • blastdoor
      • 1 year ago

      Yeah, well intel could have done it, they just didn’t want to :-p

      (Literally true, btw)

    • ronch
    • 1 year ago

    I bet a 15-year old kid designed the new ThreadRipper logo.

    • Mr Bill
    • 1 year ago

    I want to know more about that Threadripper 2 heatsink (7:29 in the video). Is that a metal plate on the bottom onto which the heatpipes are bonded or (more exciting) is it a thermal vapor chamber?

    Edit: From the sharpness of the edges, I’m guessing it’s not a vapor chamber, but I hope I’m wrong.

      • Jeff Kampman
      • 1 year ago

      Not a vapor chamber.

        • Mr Bill
        • 1 year ago

        DOH! That’s too bad. $100 would have been an excellent price if it also had that feature. With all those heat pipes, guess they have it covered well enough.

        Edit: In case anybody is interested… [url=https://celsiainc.com/blog-heat-pipes-and-vapor-chambers-whats-the-difference/<]blog-heat-pipes-and-vapor-chambers-whats-the-difference[/url<]

    • trieste1s
    • 1 year ago

    The CPUs that will help AMD establish mindshare in the market. Even if I’m never buying one of these because I’m broke, I’m impressed they are this powerful for the prices they command. And I can’t be the only one with this opinion.

    • Forge
    • 1 year ago

    Look at all those 64s in the stats. I must have one. Who wants to buy a kidney? Mine are in great shape, and I can arrange a showcase of others if you have specific requirements or need for more than one.

      • Redocbew
      • 1 year ago

      Those stories about waking up in a bathtub filled with ice and seeing a disturbing warning written on the mirror started with you didn’t they?

      • ronch
      • 1 year ago

      Great! I’ll have 5, please. Two blue, 3 yellow.

    • Lazier_Said
    • 1 year ago

    All of the other bars in the graphs for benchmarks that AMD won are true to scale.

    While the 6% loss in the gaming suite is 4 pixels high.

    • jarder
    • 1 year ago

    Call me strange, but I find the 12- and 24-core chips the most interesting. I couldn’t work out why until I worked out the all-important dollars-per-thread ($/T) value for each:

    Threadripper 2990WX 32/64 $1799 = 28.1 $/T
    Threadripper 2970WX 24/48 $1299 = 27.1 $/T
    Threadripper 2950X 16/32 $899 = 28.1 $/T
    Threadripper 2920X 12/24 $649 = 27.0 $/T

    The 12 and 24 core versions are simply better value 😉
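    Those figures are quick to reproduce (a couple of lines of Python, using the suggested prices and thread counts from the table above):

```python
# Suggested price (USD) and thread count for each second-gen Threadripper.
chips = {
    "2990WX": (1799, 64),
    "2970WX": (1299, 48),
    "2950X": (899, 32),
    "2920X": (649, 24),
}
for name, (price, threads) in chips.items():
    print(f"{name}: {price / threads:.1f} $/T")
```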

      • Rurouni
      • 1 year ago

      It might look like better value until you realize you need more cores, and then you’ll be kicking yourself for not buying the CPU with more threads just because it cost more per thread, when you could have afforded it in the first place.
      I’m actually surprised they don’t charge more of a premium for the 2990.

        • jarder
        • 1 year ago

        I’m just guessing here, but the lack of a premium on the price of the 2990 could be down to the memory bandwidth constraints of trying to run 32 cores on quad-channel memory. Granted, Threadripper will probably be using faster quad-channel memory than your average server, but the chip will be running faster too. Many applications will be fine with this limitation, but some look likely to take a hit, so it’s a shame that we may have to wait until October to see these chips benchmarked. Hint: maybe some review site could disable a few cores on the 32-core chip to give us a preview of what 24 cores would look like. (Similarly for the 16-core chip, i.e. disable 4 cores…)

          • adamlongwalker
          • 1 year ago

          Going by past experience—most recently the price vs. performance of certain components I’ve purchased in the past year—I’m going to wait to see the actual performance of these new chips once the market settles down a bit.

          I am not a spontaneous buyer. I will wait until I see real-world performance numbers and what the real market pricing will be on these CPUs, as well as other components. A month or so after launch, prices will most likely have shifted to whatever the market will bear.

          That’s when I’ll do a cost-performance analysis on all the parts I’m considering.

    • derFunkenstein
    • 1 year ago

    Wonder how that 24-core is going to be partitioned. Is it three fully-enabled dies or two full and two half dies? And will the half dies be a pair of half CCXes? So many Lego bricks, so many possibilities.

      • Krogoth
      • 1 year ago

      It would be most interesting to see how NUMA/cache coherency is affected by it.

      • Waco
      • 1 year ago

      Four six-core dies I would assume, for simplicity’s sake, but it would be interesting to see two full and two half.

        • derFunkenstein
        • 1 year ago

        Oh, right. I missed the most obvious setup of them all. Whoops. 😆

        I think it’d be neat to see two full (the ones connected directly to memory) and two half but you’re probably right about 4×6.

    • TheMonkeyKing
    • 1 year ago

    Well, at least they know now to match up the CPU ID with total wattage used: 2990W(atts)X(treme)

    I kid, I kid…
    /now goes off to find that hotplate coil + mini tower combo for some delicious hot meals

    • setaG_lliB
    • 1 year ago

    64 graphs in Task Manager for well under $2000. What a time to be alive!

    • uni-mitation
    • 1 year ago

    All thanks go to Su the Beast-Machine, and company leadership for turning around this company. We are gonna continue ripping threads, and giving tailors much needed work.

    chucky, keep fighting the good fight!

    MAGA AMD!

    Rick MoarCoars
    AMD PR Head Chief

    • maroon1
    • 1 year ago

    [url<]https://cdn.wccftech.com/wp-content/uploads/2018/08/AMD-Ryzen-Threadripper-2000-Series_2990WX.jpg[/url<] 77.8% more cores but only around ~40% faster in rendering. Not sure if this is impressive. Rendering benchmarks scale so well with cores that things are going to be worse if you use something else, like video encoding for example.

    I mean, Intel could compete with this with just 22 cores and a little clock speed boost over the 7980XE, which is not hard to achieve when using solder and moving to 14nm++. They are already using solder for the 9900K and 9700K. And even if it slightly loses in rendering (which scales well with cores), it would still win in other benchmarks.

      • chuckula
      • 1 year ago

      All of these chips are getting into the realm of performance tradeoffs to accommodate higher core counts.

      There are going to be situations where there’s nearly perfect scaling, negative scaling — and I don’t only mean in single-threaded applications — and the in-between scaling AMD is claiming with rendering benchmarks.

        • Mr Bill
        • 1 year ago

        +3 Even Krogeth is going into negative scaling. But I take your meaning.

        • Concupiscence
        • 1 year ago

        Chuck’s right. Things only scale so well – there are issues with thread contention, fundamental application design choices that can impair multithreading efficiency, possible bandwidth concerns, cache issues, and the simple fact of clock throttling when a bunch of cores all engage simultaneously and at length.

          • blastdoor
          • 1 year ago

          Major Buzzkill, reporting for duty!

      • ptsant
      • 1 year ago

      With ridiculous amounts of compute horsepower, scaling is not trivial. Almost anything can be a bottleneck with this kind of chip: memory (remember, only 4-channel memory vs 8-channel in the servers), SSD/HDD, inter-process (at the OS) or inter-core (at the CPU) communications.

      That doesn’t mean that the chip itself is bad. It merely shows that a different strategy for system building and software design may be required for this generation of chips. And this would apply to Intel of course.

      • cygnus1
      • 1 year ago

      I think the non-linear increase in performance vs increase in core count can most probably be attributed to the memory design with two core complexes not having direct access to RAM. Unless the benchmark/task fits into the onboard caches, you’re not going to get the same performance out of probably half the cores.

    • Unknown-Error
    • 1 year ago

    2990WX….64-threads………..I still can’t wrap my head around this…

    2950X looks quite interesting. $100 cheaper than the 1950X.

      • chuckula
      • 1 year ago

      [quote<]I still can't wrap my head around this...[/quote<] That's probably a good thing, but if you succeed then I think a chiropractor will be very happy.

    • techguy
    • 1 year ago

    Here’s to hoping the Tech Report can get some video production benches in the upcoming Threadripper 2 review. I find it interesting that AMD cites the Handbrake performance of the 2950x (16 core) and its lead over the 7900x (10 core) but does not do this when comparing the 2990wx to the 7980xe. I would like to know if it will be worth my while to replace my 7900x with a 2990wx for these types of workloads; Handbrake in particular is largely representative for me.

      • chuckula
      • 1 year ago

      [quote<]I find it interesting that AMD cites the Handbrake performance of the 2950x (16 core) and its lead over the 7900x (10 core) but does not do this when comparing the 2990wx to the 7980xe.[/quote<] That's actually a little interesting since AMD would normally be expected to tout its new highest-end part walloping Intel instead of just going back to the well for the speed-bump 2950X vs. an old 7900X.

      • dragontamer5788
      • 1 year ago

      Handbrake (x265) uses AVX512 acceleration. So Intel has a major advantage there and is often faster than AMD in the x265 benches I’ve seen.

      Handbrake x264 usually is in AMD’s advantage, but by single-digit percents. I wouldn’t be surprised if Intel’s 18-core is actually faster than AMD’s 32-core for some reason (AVX or otherwise).

      My [b<]guess[/b<] is that Handbrake is beginning to hit Amdahl's law. So you'll need to run Handbrake 2x or 3x to scale to these larger core counts. Useful for people making video archives (you always have more video files to run Handbrake on), but less useful for people doing video editing.
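      Amdahl's law makes that scaling ceiling concrete. A quick sketch (the 95% parallel fraction below is illustrative, not a measured Handbrake number):

```python
def amdahl_speedup(threads: int, parallel_fraction: float) -> float:
    """Upper bound on speedup when only `parallel_fraction` of the work
    can use extra threads and the rest stays serial (Amdahl's law)."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / threads)

# If 95% of an encode parallelizes, going from 32 to 64 threads adds
# only ~23%, and no core count ever beats the 20x serial-fraction ceiling.
for n in (16, 32, 64):
    print(f"{n} threads: {amdahl_speedup(n, 0.95):.1f}x")
```

      Running two or three independent encodes side by side, as suggested above, sidesteps the serial fraction by turning it into a throughput problem.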

        • techguy
        • 1 year ago

        Oh, I’m well aware of the nature of the workload at hand. The AVX512 support is precisely why I chose a 7900x over a 1950x when I built the machine in question a year ago. You’re probably right on the scaling question too. There may be a hard limit in Handbrake itself or it could be that there isn’t a hard scaling limit per se but rather a practical limit depending on configuration. e.g. 1080p H.264 targets in the 20-30Mbps range won’t be as demanding (and therefore not as scalable) as say a 4k HEVC target at 100Mbps or so.

    • dpaus
    • 1 year ago

    This is why I was asking about the TDP of the custom SoC for the Chinese game console: if AMD knows that some customers are OK with a 250W TDP (raises hand tentatively…), I’d really like to see what they could do with a 34W or 64W 2nd-gen Ryzen core and see how much Vega they could stuff in with the remaining 220 or 185W. I think that just might prove Krogoth’s point about the future of discrete GPUs.

      • chuckula
      • 1 year ago

      You need a little intellectual consistency. Here you are all excited about a non-integrated CPU product while jumping up and down with excitement about the end of the GPU that Krogoth has been predicting incorrectly for the last decade.

      Which one is it? Should we be amped up and excited for the “death” of standard CPUs or be amped up and excited for the “death” of separate GPUs including the “death” of discrete GPUs that happen to be slapped onto a substrate and connected to the CPU for use in consoles instead of the exact same product being put into an add-in card?

        • dpaus
        • 1 year ago

        Chuckula, buddy, calm down… I’m not ‘jumping up and down with excitement’ about anything; I merely asked a simple, perfectly valid ‘what if’ question.

        A side comment about ‘intellectual consistency’ – our current generation of systems are all Xeon-based on the server side, and Core i5s or i7s on the client side. The previous generation were Bulldozer-based, because our internal tests at the time showed that our Java application got better performance/$. And in the end, as a manager, that’s all I care about: giving our clients the best performance/$ that we can. And for the next generation of systems, which we’ll start testing later this year… I’m seeing a LOT of promise from the Threadripper family. But we’ll make our choices based on testing, not religion.

        • Krogoth
        • 1 year ago

        Nope, discrete GPUs have been slowly becoming more and more niche. Just like what happened to discrete audio cards. It didn’t happen overnight. It took years but it eventually happened.

    The AC’97 standard was arse at first, but it laid down the groundwork for integrated audio to become standardized and mainstream. Likewise, the iGPUs on Clarkdale/Sandy Bridge and AMD’s APUs have been laying the foundation for the next generation of iGPUs and semi-integrated solutions. It’s just a matter of time before they start becoming “good enough” for the masses and casual gamers. The raison d’etre for mid-range GPUs evaporates and demand destruction is inevitable.

          • Redocbew
          • 1 year ago

          I’m seeing a Locutus of Borg meme in there somewhere.

            • chuckula
            • 1 year ago

            What’s funny is that Krogoth is pushing his usual agenda in an article about a product that literally can’t play minesweeper without a discrete GPU.

            And he’s not attacking the product that requires a GPU for anything outside of a headless server.. he can’t say enough positive things about it. While also claiming that the GPU required to make it useful is dying.

            • Krogoth
            • 1 year ago

            Neither can any of the Skylake-X silicon.

            Besides, I’m talking about mainstream platforms, not servers/workstations. Servers are already firmly in semi-integrated GPU land (and have been since the late 1990s). Graphical workstations will likely be one of the few platforms where discrete GPUs continue to endure.

            • chuckula
            • 1 year ago

            [quote<]Neither can any of the Skylake-X silicon.[/quote<] Yeah, and I never once said that GPUs are dying either and have been correctly calling your BS for years. What is your point exactly... that the existence of these parts proves me right? Because here's the thing... you might think that a 32 core Ripper is "niche" and you'd be right. But riddle me this Batman: Name the one and only product on AMD's roadmap for 2019 that's actually interesting to the TR userbase outside of [b<]the discrete Navi GPU[/b<]: That's right "RyZen 2". Oh yeah, and tell me again how the [b<]standard consumer-grade RyZen 2[/b<] CPU is going to play minesweeper without a discrete GPU. [b<]In 2019[/b<].

            • Krogoth
            • 1 year ago

            Becoming a niche != dying

            Ryzen 2 family also includes APUs (they are coming later) which are targeted at mainstream crowd. Those SKUs can play “minesweeper” without the need for discrete GPU.

            Intel is already ahead of the game since all of their non-HEDT/server SKUs have integrated GPUs. They just need to improve on performance. AMD’s upcoming iGPU and semi-integrated solutions will provide the incentive.

            • Krogoth
            • 1 year ago

            Yep, it is the natural consequence of miniaturization. The entire computer hardware industry is built upon it. Specialized hardware platforms are not immune to it.

    • leor
    • 1 year ago

    After a boring 10 years in the CPU world, AMD is finally making things interesting again.

      • just brew it!
      • 1 year ago

      Hey, Bulldozer was “interesting”. Just not for the right reasons.

        • moose17145
        • 1 year ago

        I do have to agree with you. I actually did think bulldozer was a legitimately interesting architecture. Yea it got spanked by what Intel had… but I still honestly thought it was a more fascinating architecture than what Intel was pushing at the time. Not better… just more interesting in an academic kind of way I guess.

          • jihadjoe
          • 1 year ago

          Agreed. I really enjoyed reading [url=https://www.realworldtech.com/bulldozer/<]DKanter's analysis[/url<] of Bulldozer. I've often disparaged AMD for building a Pentium 4 just after they've soundly beaten it for several years, but many of the underlying decisions behind Bulldozer do seem sound in retrospect. Increasing core count was a concession that they couldn't beat Intel in outright IPC, so they tried to change the game. It could be argued that this is exactly what Ryzen and Threadripper has now accomplished. Intel still has the IPC lead, but AMD's ability to cheaply make very wide chips has certainly changed things up.

          • adamlongwalker
          • 1 year ago

          Adored TV did an interesting video named “Benchmarks, what to trust”.

          [url<]https://www.youtube.com/watch?v=gFBKFz9n2hc[/url<]

          The reason I'm posting this is that the FX-8370 is mentioned, along with how its gaming performance has increased over the years. TR was mentioned as well 🙂

          The author of the video also makes a point about benchmarks that I concur with in his core argument: "Always do your homework when making a logical conclusion."

          The FX-8370 was a hot mess, but I believe it was a stepping stone to Ryzen. And that's good, for we now have healthy competition in choosing which CPU is right for you.

        • Sahrin
        • 1 year ago

        I don’t know what you mean by “interesting” – but Bulldozer was a fascinating attempt at creating a heterogeneous architecture. It was – by far – the most advanced effort of that type undertaken so far (even ARM’s watered-down HSA was pathetic by comparison).

        It’s still not clear if the problem was AMD’s ability to optimize the compiler/software stack or if there is some kind of emergent limit on parallelism that was causing the problems with BD.

        But it was awesome to see AMD swing for the fences on a project like that. I wish Intel was willing to do something so innovative. They were once (EPIC), but not anymore.

          • chuckula
          • 1 year ago

          [quote<]I don't know what you mean by "interesting" - but Bulldozer was a fascinating attempt at creating a heterogenous architecture.[/quote<] Name one thing that is "heterogeneous" about Bulldozer other than the excuse that was offered for its poor performance that "oh nobody cares about CPU performance because GPUs exist so Bulldozer is awesome because only idiots care about CPU performance." I have a 5-year-old Atom tablet that's more "heterogeneous" than Bulldozer ever was.

            • Sahrin
            • 1 year ago

            …are you not familiar with Bulldozer’s architecture…at all? It attempted to separate fixed and floating point math into three different threads per core. It was a radical redesign of the machine programming model.

            • chuckula
            • 1 year ago

            [quote<]...are you not familiar with Bulldozer's architecture...at all?[/quote<] Oh I'm all too familiar, which is why I'm going to correct the drivel in your post that followed your opening sentence: [quote<] It [s<]attempted to separate[/s<] [u<]intentionally neutered floating point functionality in the name of CHEAP & EASY while pumping up mediocre fixed[/u<] [s<]and floating[/s<] point math [u<] units in the name of MOAR COARZ[/u<] [s<]into three different threads per core[/s<] [wrong about the "threads" too BTW, different dispatch units in a core for floating point and integer operations are not equivalent to "threads" and had been in use for decades before Bulldozer launched]. It was a radical [s<]redesign of the machine programming model.[/s<] [u<] attempt to make a chip look big while not requiring too much hard to design hardware to be in the chip[/u<][/quote<]

            • Sahrin
            • 1 year ago

            Whoosh

            • chuckula
            • 1 year ago

            The fact that people are actually upvoting that crap shows that there most certainly [b<]are[/b<] shills on TR. And they don't work for Intel.

            • K-L-Waster
            • 1 year ago

            Radical doesn’t automatically mean right.

            • Sahrin
            • 1 year ago

            No, it doesn’t. But right doesn’t mean innovative, either.

            You only learn by trying, and Intel never does. I was impressed that AMD attempted something so complex, and even though it cost them a ton of money (and it cost us a lot of performance gains and money in lack of competitiveness) it definitely contributed to their current position.

            Who dares, wins.

            • chuckula
            • 1 year ago

            [quote<] You only learn by trying, and Intel never does.[/quote<] Really, because for all the koolaid drinking you engage in trying to convince the world that Bulldozer was great, there are more major features in Zen that are flat out taken from the [b<]Pentium 4[/b<] than from Bulldozer. Two major features that I can think of off the top of my head are AMD's copy of Hyperthreading, which had only existed in a few multi-million dollar mainframe architectures before the Pentium 4 and the trace cache that stores decoded instructions in the cache. Try turning off either of those features on your magical made out of thin air RyZen and get back to us about how much it destroys Intel chips with the same core count. That's not to mention the fact that the Pentium 4 was the first processor to implement the SSE instructions that RyZen targets to the detriment of more modern instruction sets. Thanks Pentium 4.

            • NoOne ButMe
            • 1 year ago

            Bulldozer had SMT.

            P.S. First ones who started serious work on an SMT enabled CPU for mass market would be DEC.

            • chuckula
            • 1 year ago

            1. I hope that your first statement is an intentional troll. It’s so bad that it’s not even wrong.

            2. As for DEC, do you mean the never-released 21464?

            Because even if it had been released as an extremely expensive part that’s not equivalent to the market category of PC, it still would have been released to market in 2003 a full two years after the Pentium 4, so that’s not exactly “first”:

            [quote<]The 21464's origins began in the mid-1990s when computer scientist Joel Emer was inspired by Dean Tullsen's research into simultaneous multithreading (SMT) at the University of Washington. Emer had researched the technology in the late 1990s and began to promote it once he was convinced of its value. Compaq made the announcement that the next Alpha microprocessor would use SMT in October 1999 at Microprocessor Forum 1999.[3] [b<]At that time, it was expected that systems using the Alpha 21464 would ship in 2003.[/b<][/quote<]

            • NoOne ButMe
            • 1 year ago

            Bulldozer has SMT technology active in every single core on every single die ever manufactured in every single FPU.

            You do know when SMT was launched with Pentium 4 parts, right? I’ll give you a hint, the year ends with the number 3.

            And now we’ve moved from multi-million dollar mainframes not being good enough to mid thousands of dollars starting price computers not being enough.

            Should we praise AMD for being the first to integrate the memory controller, because no one else (as far as I can tell) had done so as a way to improve performance until they did it?

            • chuckula
            • 1 year ago

            I’ll grant you that the first commercial versions of the Pentium 4 with SMT launched in 2003.

            It’s still the first consumer-level product that actually made it to market with SMT and frankly going on about DEC also pursuing HT in a chip that never launched kind of shows the point about why DEC died in the first place… Intel’s much maligned Pentium 4 had the same features as crazy DEC chips but did so in a cheap mass-market platform.

            As for Bulldozer having SMT…that word means something and you can’t just throw it around.
            Even the questionable Wikipedia article doesn’t claim that Bulldozer had real SMT but then goes on to posit that “flexFPU” is SMT.. and it’s not.

            If your chip has real SMT here’s what it has: Lots of compute resources that can’t always be used efficiently by a single thread so SMT provides a light-weight implementation to potentially let two or more threads use the compute resources in a core … [i<]simultaneously[/i<]. Bulldozer is literally the diametric opposite: There [b<]weren't[/b<] enough FPU resources in a single core so two cores had to be munged together to provide the functionality of a single FPU. That's literally the opposite of what SMT actually means.

            • Waco
            • 1 year ago

            Beat me to it. CMT and SMT are *not* the same thing. They’re pretty close to polar opposites.

            The huge advantage to two threads per core sharing units versus two cores sharing some units is that a single thread has potentially many more available resources. Sure, it saves on die space to share resources between two cores but it comes at an overall throughput detriment on both lightly threaded [i<]and[/i<] heavily threaded workloads.

            • Concupiscence
            • 1 year ago

            This is all true. IIRC there were also memory throughput issues that made fast RAM *really* important for feeding those modules efficiently. But I digress. The FX family was only close to competitive at tasks where the largely discrete integer units could work in parallel: video encoding, compilation, virtualization, some branches of scientific computing, and a few others. For my use cases they were fine (that’s not Stockholm Syndrome talking; the eight-core Piledrivers did everything I needed), but from Haswell onward they were comprehensively outclassed.

            • Waco
            • 1 year ago

            Yep. I still run a good number of Bulldozer/Piledriver servers. They’re great for DB2 given the cost (especially when there was a firesale on the top-end SKU).

            • K-L-Waster
            • 1 year ago

            You’re making it sound like trying something different is an end in and of itself. That may be the case in academia, but in business trying something different is only valuable if it actually produces a better solution to the problems faced by customers.

            Which BD completely failed to do.

            Fortunately for AMD, RyZen and TR are (unlike BD) actually useful for real world work.

            • Redocbew
            • 1 year ago

            “Who dares, wins.”

            Or they fail spectacularly and become the target of a lawsuit. The false-advertising suit against AMD over the numbering of cores/modules/pancakes was kind of ridiculous, but that is what happened. However, if by “their current position” you mean a hole in the ground that they’ve been trying to climb out of for the past decade, then yeah, I think you’re right.

            • the
            • 1 year ago

            There were only two threads running in the module at a time. The innovative part was that the FPU was shared between two threads while the integer units were dedicated. These two threads were symmetrical in capabilities, meaning that from a programmer’s perspective they were homogeneous.

            The goal was that the integer units could be streamlined for high clock speeds while parallelism on the FPU side would hide latency while maintaining adequate throughput. The idea isn’t necessarily a bad one academically, as it had the potential to increase compute density. AMD wasn’t the only one to try such a design (see Sun’s Niagara 1 and its shared FPU).

            The problems AMD had were a horrible cache subsystem and hoped-for clock speeds that never materialized within reasonable power budgets. Early on there were also bottlenecks in instruction decode and the x87 FPU, which were fixed by adding more resources to the design in the Steamroller generation.

          • dragontamer5788
          • 1 year ago

          “I don’t know what you mean by ‘interesting’ - but Bulldozer was a fascinating attempt at creating a heterogenous architecture. It was - by far - the most advanced effort of that type undertaken so far (even ARM’s watered-down HSA was pathetic by comparison).”

          Zen is by far more interesting than Bulldozer. A single Zen core has more execution units than TWO Bulldozer cores. There’s a reason why Zen’s SMT >> Bulldozer’s CMT. Hell, I’m fairly certain that any workload will run better on SMT-enabled Zen cores than on “Bulldozer’s fake dual core.”

          • derFunkenstein
          • 1 year ago

          You’re right. I especially liked the feature where it kept not actually beating Phenom II despite its 15% or more clock speed advantage.

          https://techreport.com/review/21813/amd-fx-8150-bulldozer-processor/7

          It starts at that link and just keeps losing until the end of the article. Super feature of AMD's processor technology.

        • ermo
        • 1 year ago

        I just got my hands on 4x4GB ECC DDR3 RAM and an AM3+ ASUS board for free. Since the FX-8350 was on sale for the equivalent of $74 before VAT at my local hardware pusher, getting one was a no-brainer as the build will be used for (hobby) Linux distribution maintainer duties (i.e. compilation and compression) where BD was competitive in its day.

        As dpaus says, the benchmarks clearly show that BD *did* have its competitive niches, even if they were few and far between.

        That said, the board came with a PhII X6 1090T which will now live out its days as a nice, free underclocked+undervolted replacement for my PhII X4 955BE in my ECC-capable whitebox home server w/SnapRAID and MergerFS serving up redundant, integrity checked JBOD media storage via Samba.

        But I digress.

          • Shobai
          • 1 year ago

          An embedded C/C++ project I work with frequently is sometimes compiled on a 1090T and sometimes on an FX 8320; the 1090T compiles it in less time than the FX 8320, every time. I hope the extra clock on the 8350 makes it worthwhile for you.

          [The 1090T takes ~85% of the time that the 8320 requires, and a 6700k takes just under 50% of the time the 1090T needs]

            • just brew it!
            • 1 year ago

            Is your 1090T running DDR3 RAM? (It is compatible with both AM2+ and AM3.)

            If the compilation is using all cores and is CPU bound, the clock speed difference on the 8350 should just about exactly make up for the per-core IPC deficit.

            • Shobai
            • 1 year ago

            The 1090T runs 12GB of DDR3 at 1333 across 4 sticks, the 8320 runs 16GB of DDR3 1866 across 2 sticks. We use 6 threads for the 1090T and 8 threads for the 8320.

            • just brew it!
            • 1 year ago

            Hmm… it could be that performance for this workload is being throttled by memory bandwidth (and/or latency).

            • jihadjoe
            • 1 year ago

            If it was bandwidth instead of CPU limited then the FX8320 should be faster than the 1090T. Both are dual channel, but the FX was running much faster DDR3 1866.

            • just brew it!
            • 1 year ago

            Oops, my bad. I guess I flipped the memory speeds around when I read the post.

            • ermo
            • 1 year ago

            On Linux with make -j6 vs. make -j8? That’s interesting.

            The clock speed advantage of the FX-8350 over the FX-8320 amounts to 4.0/3.5 ≈ 14.3%. But I suppose I ought to do a proper PTS benchmark run with both processors before committing.

            Either way, it’s a fun project. =)

            • Shobai
            • 1 year ago

            Windows, but otherwise yes. It’d be worth benchmarking – by those numbers the 8350 should at least break even with the 1090T in my use case, but of course that’s only a single data point – maybe something is misconfigured on my end.

            • ermo
            • 1 year ago

            Just ran the PTS timed-compilation test suite on the two CPUs in the same board (ASUS M5A97 Pro) using the exact same hardware and software configuration.

            The 1090T ran all cores at 3600MHz@1.275V with Turbo Core off (so the P0 power state is up from 3200 to 3600 MHz on all 6 cores compared to stock). The FX-8350 ran all cores at 4200MHz@1.25V with Turbo Core off. In other words: Both CPUs ran slightly undervolted at their nominal Turbo Core frequency.

            Here are the results (FX-8350@4.2GHz / 1090T@3.6GHz) – lower is better:

            – Apache (39.58s / 42.34s) = FX-8350 completes in 93.48% of the 1090T’s time
            – ImageMagick (65.59s / 86.7s) = FX-8350 completes in 75.65% of the 1090T’s time
            – Linux kernel (147.60s / 169.13s) = FX-8350 completes in 87.27% of the 1090T’s time
            – MPlayer (48.71s / 61.51s) = FX-8350 completes in 79.19% of the 1090T’s time
            – PHP (96.68s / 116.43s) = FX-8350 completes in 83.04% of the 1090T’s time

            In aggregate, the FX-8350 completed the runs in about 84% of the 1090T’s time on average, while demanding less voltage when running at its nominal Turbo Core speed across all cores.
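            For anyone double-checking the arithmetic, the percentages can be recomputed from the raw seconds above with a quick awk sketch (this is just the division, not benchmark tooling):

```shell
# Recompute "FX-8350 time as a share of 1090T time" from the runs above.
awk 'BEGIN {
  n[1] = "Apache";       fx[1] = 39.58;  t[1] = 42.34
  n[2] = "ImageMagick";  fx[2] = 65.59;  t[2] = 86.70
  n[3] = "Linux kernel"; fx[3] = 147.60; t[3] = 169.13
  n[4] = "MPlayer";      fx[4] = 48.71;  t[4] = 61.51
  n[5] = "PHP";          fx[5] = 96.68;  t[5] = 116.43
  for (i = 1; i <= 5; i++) {
    r = fx[i] / t[i]; sum += r
    printf "%s: %.2f%%\n", n[i], 100 * r
  }
  printf "average: %.2f%%\n", 100 * sum / 5
}'
```

            The per-test figures match the list above, and the average works out to about 83.7%, i.e. the FX-8350 needs roughly a sixth less time overall.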

            • Shobai
            • 1 year ago

            Do you have any idea how simple it would be to set up a live USB stick, or similar, that would let me run the PTS on various computers?

            • ermo
            • 1 year ago

            Were I in your shoes, I’d probably dig out an old 64-128GB SSD, put it in a USB3 enclosure, and install a normal Linux distribution that supports PTS (I use and contribute to Solus, which supports PTS).

            PTS will download its test profiles under $HOME, so you’ll need plenty of space. Proper SSDs support wear leveling.

            EDIT: This will also let you keep your benchmarking results for local (‘merge’) comparisons across systems.

            • Shobai
            • 1 year ago

            Heh, yep. I noticed that there’s a Windows option for it, which would seem far less disruptive. That’s not something I’ll get to this week, though.

            • Concupiscence
            • 1 year ago

            A buddy of mine at Mozilla swears by using -j(core count + 1) for multithreaded compile jobs, on the basis that the extra queued job helps ensure the cores stay fully loaded the moment one process completes. I never performed a comparison, but that doesn’t sound totally loony to me.
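            That heuristic is trivial to script; a minimal sketch, assuming GNU coreutils’ `nproc` is available:

```shell
# One more job than logical CPUs, so a queued job is ready the instant
# a core frees up. nproc counts logical CPUs (threads, on SMT machines).
jobs=$(( $(nproc) + 1 ))
echo "make -j${jobs}"   # print the command; drop the echo to actually build
```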

            • Waco
            • 1 year ago

            I generally run cores * 2 for big compile jobs on fast storage (or /dev/shm).

            • Shobai
            • 1 year ago

            Does that work out to “threads” for those CPUs?

            • Waco
            • 1 year ago

            Generally, yes.

          • just brew it!
          • 1 year ago

          “That said, the board came with a PhII X6 1090T which will now live out its days as a nice, free underclocked+undervolted replacement for my PhII X4 955BE in my ECC-capable whitebox home server w/SnapRAID and MergerFS serving up redundant, integrity checked JBOD media storage via Samba. But I digress.”

          To digress a bit further, this is very similar to the home server upgrade I rolled out a couple of weeks ago: an Asus M5A97 (the original version, not the R2.0), a PhII X6 1090T, 8GB of ECC RAM, and an LSI 8-port SAS/SATA HBA. I did not undervolt (yet…), but I did ensure that it uses the Linux "conservative" CPU governor, which keeps the cores at their lowest clock speed unless there is a sustained CPU load.

            • ermo
            • 1 year ago

            I’ve found that running the cores at 800 MHz (i.e. the default Cool’n’Quiet frequency/voltage) negatively impacts Samba throughput, so I’ll have to figure out a way to tweak the Cool’n’Quiet table (and the governor) to boost quickly to where max bandwidth is available, while being more conservative about boosting all cores to the 3600 MHz top speed at 1.35V.

            • just brew it!
            • 1 year ago

            Yeah, I did notice some performance impact (the conservative governor does indeed leave the cores at 800 MHz most of the time unless you run something truly CPU-bound). But given that I’m generally limited by network bandwidth anyway, I was willing to trade that off for lower power consumption.

            In your case, you may want to just use the stock on-demand governor and call it a day. It seems to be fairly aggressive about boosting individual cores.

            Edit: To digress yet again, I’ve always wondered why the FX-8320/8350 on stock settings won’t clock below 1400 MHz when lightly loaded. Seems like AMD could’ve improved the idle power consumption quite a bit if they’d done that. Maybe the power consumption at lower clocks is completely dominated by leakage current, making lower clocks irrelevant from a power usage perspective.
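            For reference, switching governors on a box like this is a one-liner per core; a sketch assuming the stock acpi-cpufreq sysfs layout (the privileged writes are printed rather than executed, since scaling_governor is root-only):

```shell
gov=ondemand   # or "conservative" for the lower-power behavior above
for cpu in /sys/devices/system/cpu/cpu[0-9]*/cpufreq; do
    # print the command a root shell would run for each core
    echo "echo $gov > $cpu/scaling_governor"
done
# equivalent, if the cpupower utility is installed:
echo "cpupower frequency-set -g $gov"
```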

          • Concupiscence
          • 1 year ago

          As someone who played with FX kit for a number of years, try undervolting the FX-8350. You can probably lower its power requirements by more than 20% with just a little tweaking.

            • ermo
            • 1 year ago

            I’ll give it a go — my ASRock 970 Extreme 3.0 R2.0 board gets fairly toasty around the CPU socket at full load, though the CPU DTS says the temps don’t go above 55C.

            • ermo
            • 1 year ago

            It turns out that the FX-8350 will boot with all cores at their nominal Turbo Core speed of 4200MHz at 1.25V. The stock voltage is 1.325V (and the Turbo Core voltage is 1.4V). At 4000MHz, it’ll boot at 1.225V.

            In comparison, the 1090T needs 1.275V to boot with all cores at their nominal Turbo Core speed of 3600MHz.

            Thanks for the heads up @Concupiscence.

      • Kretschmer
      • 1 year ago

      10 years ago we were using Core 2 Duos/Quads. I would say that the “Core i” architecture was very interesting from a performance perspective, and Sandy Bridge was more exciting than piling tons of cores that I’ll never use into one CPU.

    • Kretschmer
    • 1 year ago

    This isn’t for me (or most users), but it’s cool that this tech is becoming so accessible.

    • Krogoth
    • 1 year ago

    I give this -3 Krogoths (genuinely impressive).

    The last few years have been awesome for power users and professionals. You can get so much computing power and I/O throughput under a sane budget.

    This guy is a sleeper in the SMB market if you need to build a server that doesn’t need a ton of memory capacity.

    Some of the Xeon W SKUs are worthy mentions too, even if you might spend a little more on platform costs or need clock speed over core count.

      • chuckula
      • 1 year ago

      Bookmarked for the 28 Core Skylake X launch.

      If you make the usual “this is just a failed server part because all of the cores work and it overclocks to 5GHz so it must be defective” comment I’ll remember it.

        • Krogoth
        • 1 year ago

        You can get XCC Skylake-X chips today, though, assuming budget isn’t a major concern. 😉

        A Core-branded version would be a unit that just ate too much power for enterprise/datacenter customers. I’m still kind of surprised Intel hasn’t attempted to push/hype up consumer-tier Skylake-X XCCs to go along with their upcoming 8-core Coffee Lake chips.

          • dpaus
          • 1 year ago

          OK, let’s all be nice to Chucky today; it’s a tough day to be an Intel shill…

          https://www.cnbc.com/2018/08/06/intel-shares-fall-after-analyst-downgrades-chipmaker-due-to-competitio.html

            • chuckula
            • 1 year ago

            Correctly calling out Krogoth on his standard hypocrisy is not “shilling.”

            But if you really want some deeper analysis, riddle me this: given the standard propaganda about how AMD is stockpiling millions of 16-core RyZen 2 dies from TSMC’s perfect 7nm process… why did it bother releasing a speed bump of the 1950X? What’s the point?

            • dpaus
            • 1 year ago

            Wow, you’re touchy today – I guess I was right about how the news is affecting you.

            It’s still too early in the day to be addressing rumours about propaganda; I was simply asking everyone to be nice to you. Actually, I think you need a hug… 😀

            • chuckula
            • 1 year ago

            Good, then you will jump in and defend all the perfectly factually accurate statements I make when the next wave of Skylake X launches.

            I’m tired of the one-way propaganda around here. Given that there are multiple posters in this very story still drinking the koolaid and claiming that *BULLDOZER* was some magically great, innovative architecture that just wasn’t given a “fair” chance, I think you need to figure out who is doing the real shilling.

            • Krogoth
            • 1 year ago

            Hypocrisy? You really need to lay off the troll juice.

            • chuckula
            • 1 year ago

            Go back and find one post of mine* where I seriously call Threadripper a “reject” part that “failed validation”.

            * Posts that intentionally satirize your stupid reasoning by applying your standards to AMD on an equal basis don’t count.

            Or, slightly more seriously, go back and find the posts where I call Threadripper some kind of failure in anywhere near the sort of language that we hear from you & the usual suspects about literally every Intel product out there.

            I dare you.

            • Krogoth
            • 1 year ago

            I believe you have a solid track record of constantly criticizing the Ryzen family for its lack of AVX-512 support and its 14nm process’s inability to scale beyond 4GHz without insane voltages, under the guise of troll-bait.

            • chuckula
            • 1 year ago

            OK, so when TR first reviewed Skylake X without using a single AVX-512 benchmark, please go look at the performance delta between the 16 core 7960X and the 16 core Threadripper 1950X.

            Then go look at the performance delta between the 1950X and the old *ten-core* Broadwell 6950X. Notice how, *without using any AVX-512 benchmarks at all*, the performance delta of the core-for-core-equivalent 7960X over the TR 1950X is *higher* than the delta of the 1950X over an old Broadwell with far fewer cores?

            Now tell me: of those two chips, the 7960X *without a single AVX-512 benchmark in the performance mix* or Threadripper, which one was called a “failure” by you and the usual suspects?

            Oh, and while we’re here: when TR used the exact same Blender benchmark that AMD uses in its own PR demos to test energy efficiency, the 7960X easily beat the 1950X… so who exactly called the 1950X a power hog? Because that’s what the facts say. Funny how I seem to recall that Skylake X is routinely called a power hog while generally winning every real energy efficiency test. That’s my point.

            Oh, and the fact that AMD has practically confirmed that RyZen *2* doesn’t do AVX-512 is also a real disadvantage. And the fact that AMD’s parts still can’t reliably get much above 4GHz, *even without all that nasty “AVX” power-hogging hardware* and with just as many pipeline stages, if not more, than Skylake, sure isn’t an advantage.

            • Krogoth
            • 1 year ago

            > Still hung up over people calling product(s) server “rejects”

            > Doesn’t realize that Intel never intended to release Skylake-X HCC and Skylake-X XCC SKUs to the consumer market until AMD forced their hand

            > Ignores the fact that this is the best time for professionals/prosumers since the K8/Core 2 era.

            • jts888
            • 1 year ago

            I don’t think anybody can credibly claim that a 28c monolithic die with all the cores intact is in any way an enterprise reject. They’re borderline golden samples if anything, which makes the real problem actually acquiring one, assuming you were predisposed to paying several grand for the processor alone, plus whatever crazy price those overclock-enabling mobos go for.

            Remember that Intel sells 28c Platinums for $9k – $13k apiece, so their options are:

            - sell the enthusiast chips at high prices, annoying the HEDT market, with sales volumes kept low by market demand
            - sell the enthusiast chips at lower prices in high volumes and risk gutting the creme of their server-chip crop
            - sell the enthusiast chips at lower (sub-$3k?) prices with capped sales volumes, trying to thread the needle and minimize aggregate customer annoyance

            I can only realistically see Intel producing as many of these as it needs to take the wind out of AMD’s sales without actually meeting market demand. I.e., the eternal paper launch.

            • Krogoth
            • 1 year ago

            It is only an enterprise “reject” in the sense that it ate too much power when fully loaded for its intended customers (yes, they are very picky about power/cooling budgets). Threadrippers are in the same vein, except that AMD disabled some of the chip logic to prevent them from cannibalizing SP EPYCs.

            • jts888
            • 1 year ago

            I think now that AMD has launched the Embedded Ryzen line to (finally) compete with the Xeon-D line, the primary market for SP EPYC is people who need a sizable amount of RDIMM capacity and/or bandwidth without too much compute, where the 4 UDIMM channels of Threadripper limit you to 128 GB and 80-100-ish GB/s. And Threadripper still gives you ECC, at least.

            But I do agree that TDP caps and watt/U budgets are a big deal for most enterprise customers, and they’re the reason why even the top Platinum SKUs just barely break 200W at a 2.5GHz base clock.

      • tanker27
      • 1 year ago

      Sane prices? Have you looked at the price of these? I was just thinking, “Jesus these prices! WTF!?”

      Well…… maybe it’ll jumpstart a price war with Intel.

        • moose17145
        • 1 year ago

        If you use your system to make money, then actually, these are VERY affordable for what they offer compared to just a few very short years ago when you would have had to take out a second mortgage on your house to get something even half comparable from Intel.

        Case in point… I have a Core i7-6900K machine right now. That is “only” an 8 core 16 thread chip, and it cost 1100 bucks retail when it came out! Compare that to what 1100 will get you for a CPU today…

        That being said… AMD is right to market these as professional chips for professional use cases; they’re going to be less ideal for gaming than the consumer-oriented Ryzen lineup. But it has been the case for a while now that the consumer space has tended to (on the whole) beat out the HEDT platform if all you are doing is gaming. Unless you already know exactly why you would buy one of these, you probably do not need one.

          • K-L-Waster
          • 1 year ago

          This.

          Threadripper isn’t really aimed at gamers and hobbyists, it’s aimed at professionals.

          For home users who don’t intend to try to make a living with their system, the 2700X is a more sensible choice.

        • Waco
        • 1 year ago

        Epyc has certainly driven down the cost of affordable server chips. TR has done the same for workstation parts.

        They’re incredibly well priced, especially given the huge leaps forward in core count, memory bandwidth, memory capacity, PCIe lanes, and connectivity in general over the past few years.

        • Krogoth
        • 1 year ago

        These are prosumer/professional-tier hardware. Spending north of $1999 on a system build is par for the course.

        Getting a 32-core SP system with ECC UDIMMs + generous I/O connectivity for ~$5K is an insane value. It wasn’t that long ago that you would have to spend close to $20K for that kind of computing power.

        • derFunkenstein
        • 1 year ago

        Considering the prices *only* scale 1:1 for the number of dies on the chip, those prices are pretty sane. No extra markup.

        • Redocbew
        • 1 year ago

        I had the same reaction, and I haven’t really been that interested in Threadripper in general, mostly due to the pricing. AMD going so far as to show a “bad” gaming benchmark does make me wonder, though, just how many people bought a Threadripper for gaming or other non-workstation-ish uses. I’m sure AMD (or Intel, or anyone else) would have happily marketed this chip at gamers had they been buying it, so perhaps this is a rare sign of sanity prevailing over the shenanigans often found here.

      • Aranarth
      • 1 year ago

      “I give this -3 Krogoths (genuinely impressive).”

      I had no idea a Krogoth was a negative sliding scale…

      Holy crap -3?! Krogoth is REALLY impressed!

      Now if only money would grow on trees, or I had a rich doting uncle. Say, any millionaires willing to adopt me as their favorite nephew? 😀

      • tritonus
      • 1 year ago

      Krogoth: Could you please give us a definition of The Krogoth Scale? I am curious. For example:

      – Is it worse to get 1 Krogoth (not impressed x some constant value?) or 0 Krogoths (not even worth of being unimpressive / meh / ok-ish / …)?

      – Should we interpret minus three on the Krogoth scale as ”impressive” or ”very, very impressive”? Edit: bad reading comprehension; this was already defined. Minus 1 and minus 2 are still interesting!

        • Srsly_Bro
        • 1 year ago

        I suggest a logarithmic scale.

          • Redocbew
          • 1 year ago

          Good call bro. That would give us diminishing returns for very small or very large Krogoths. How impressed can a person really be anyway?

          • Mr Bill
          • 1 year ago

          This could work if the negative Krogoth is considered to be the characteristic of the logarithm.

            • Srsly_Bro
            • 1 year ago

            I was thinking of log 0 as unimpressed, or absolute Krogoth, and the log scale would smooth out people rating a product as the “best/worst thing ever made.” It should work like prescription drugs that manage moods.

            It would surely help control Apple reviews being too generous in praise and Intel’s 10nm being just awful.

      • NoOne ButMe
      • 1 year ago

      If you just need a server, go with single-socket EPYC?

      Street prices:
      32C 7551P is about $2200-2300
      24C 7401P is about $1200-1300
      16C 7351P is about $800-850

      You lose clocks, but I don’t think that would be a huge issue in the SMB server market? And you gain more of pretty much everything else.

        • Krogoth
        • 1 year ago

        EPYC costs more, but you’ll get 128 PCIe lanes, eight DDR4 channels, and proper LRDIMM/RDIMM support. It might be a bit overkill for the SMB market depending on what you want to do with the hardware.

          • NoOne ButMe
          • 1 year ago

          The 24C and 16C versions don’t even cost more (for the core counts, vs. TR2 MSRP), and motherboards start in the same price range.

          Although what form factor the motherboards use could be an issue? No idea.

            • Waco
            • 1 year ago

            You require twice the DIMMs for EPYC as well, which adds up surprisingly quickly. Case support for EATX can be iffy as well.

            • Bauxite
            • 1 year ago

            There is an ATX option for single-socket EPYC: the H11SSL from Supermicro, and it uses at least 100 of the 128 available PCIe lanes. Same 8 DIMMs as an ATX TR board, and the hefty 1P-only SKU discount has made a pretty compelling thread-loving workstation build possible since early this year.

            Shame there are no unlocked EPYCs; that would be really interesting. The 2990WX ($1799) is basically an unlocked 7551P (~$2k) with minor differences to account for the RAM/PCIe configuration. AMD hinted they did this mostly with firmware; the design was already baked in with how they handle multiple CCXes and similar on the fabric.

            It may seem strange to people used to a decade of Intel design/segmentation decisions, but dual-socket EPYC adds cores + max RAM and nothing else; 64 lanes from each socket become the SMP link. Even so, 1P EPYC still supports more RAM + lanes than dual E5/Platinum whatever 😉

            • Waco
            • 1 year ago

            4 channels for TR, not 8.

            I mean, you can run without all the channels populated, but it’s a huge performance loss if you do.

    • OptimumSlinky
    • 1 year ago

    “but if you insist on making bad decisions, the Threadripper 2950X at least doesn't seem to punish you much for massively unbalancing your graphics-card-and-monitor budget.”

    I snarked a bit at this.

    • Pancake
    • 1 year ago

    Holy smokes. The single-threaded clock rates are quite reassuring. You can have great lightly threaded performance and mentally insane heavily threaded performance. I honestly can’t see Intel having a credible response to this.

    • Jeff Kampman
    • 1 year ago

    Sorry for the low quality of the video for anybody who watches it early—YouTube is being extremely slow with processing 🙁

      • chuckula
      • 1 year ago

      The video will be fine, and thanks for adding it.

      • thedosbox
      • 1 year ago

      No kitty? No subscribe!

      • Mr Bill
      • 1 year ago

      Well produced video. Nice video pans and you have a good voice for it. But, you need a hand double.

    • chuckula
    • 1 year ago

    Interesting that the base clock of the biggest chips came in lower than expected at 3.0 GHz instead of the predicted 3.4 GHz.

      • drfish
      • 1 year ago

      They must have figured out that “360 W TDP!” doesn’t look as cool on the box as they thought it would. 😉

        • chuckula
        • 1 year ago

        They could have made a 360 watt TDP and called it the Circle of Ripper edition!

          • drfish
          • 1 year ago

          DreadRipper

    • Waco
    • 1 year ago

    I’m looking forward to analysis of the memory controller layout between the new chips. 🙂

      • Krogoth
      • 1 year ago

      It will be interesting to see how NUMA/cache operates on these chips.
