Ryzen Threadripper 1950X and its 32 threads will go for $999

AMD blew the lid off two of its Ryzen Threadripper CPUs this morning. The Ryzen Threadripper 1950X will offer 16 cores and 32 threads with a 3.4 GHz base clock and 4.0 GHz boost clock for $999, while the Ryzen Threadripper 1920X will offer 12 cores and 24 threads with a 3.5 GHz base clock and the same 4.0 GHz boost clock for $799. We already know both chips will offer 64 PCIe lanes and four channels of DDR4 memory.

AMD demonstrated Threadripper performance using the 1920X compared to Intel's Core i9-7900X and its 10 cores. In Cinebench multi-threaded testing, the 1920X delivered a score of 2431, compared to the i9-7900X at 2167. In a dollar-for-dollar comparison, the $999 Threadripper 1950X turned in a Cinebench score of 3062.
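Those scores translate directly into a bang-for-the-buck figure. As a quick back-of-the-envelope sketch in Python, using the demoed Cinebench scores above and list prices (the i9-7900X's $999 list price is our assumption, not a figure from AMD's demo):

```python
# Back-of-the-envelope Cinebench points per dollar, from the scores above.
# The i9-7900X's $999 list price is an assumption, not part of AMD's demo.
chips = {
    "Threadripper 1950X": (3062, 999),
    "Threadripper 1920X": (2431, 799),
    "Core i9-7900X": (2167, 999),
}

for name, (score, price) in chips.items():
    print(f"{name}: {score / price:.2f} Cinebench points per dollar")
```

By that yardstick, both Threadrippers land at roughly three points per dollar against a bit over two for the i9-7900X, which is exactly the gap AMD's comparison is designed to highlight.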

Cinebench is just one test, but those numbers suggest Threadripper CPUs could dominate in both multithreaded bang-for-the-buck and in absolute performance when they launch. AMD says Ryzen Threadripper CPUs will be "on shelves" in early August, so we don't have long to wait to see how these chips perform.

Comments closed
    • Pancake
    • 2 years ago

    Oh boy. What a time to be alive and due for a system upgrade. 1920X or a 7820X for my next upgrade? I don’t need the cores, but quad-channel is a must. Surely this is incentive to write better threaded code. *Head explodes*

    • anotherengineer
    • 2 years ago

    I wonder if reviews will use W10 to test them?

    [url<]https://www.techpowerup.com/235169/windows-10-process-termination-bug-slows-down-mighty-24-core-system-to-a-crawl[/url<] 😉

    • sophisticles
    • 2 years ago

    I’m I the only guy that thinks that Lisa Su looks like a poorly made up tranny? Seriously, is “Lisa Su” a guy in drag?

      • chuckula
      • 2 years ago

      Them’s fightin words!!
      — AMDisDEC

      • kuraegomon
      • 2 years ago

      No, you’re just the only guy that thinks it’s appropriate to serve up your derogatory opinion of a female technical leader’s appearance as a thinly-veiled attempt to “put her in her place”. Jackass.

      • bwoodring
      • 2 years ago

      What a real piece of shit you are. You better hope you’ve never posted anything personally identifiable here.

      • Metonymy
      • 2 years ago

      Lisa Su aside, did you really need to go after transvestites anyway?

      • Jeff Kampman
      • 2 years ago

      sophisticles has been permanently banned for this comment. We don’t welcome this kind of language on The Tech Report.

        • Redocbew
        • 2 years ago

        Thank you.

        • jts888
        • 2 years ago

        Render unto 4chan what is 4chan’s, amen.

        • caconym
        • 2 years ago

        And that’s why you’ve got one of the only welcoming, worthwhile comment sections in tech blogs. Thanks.

        • NeoForever
        • 2 years ago

        Thank you. Civil comments, lack of spam, and the awesome collapsible threaded structure is one of the main reasons I come to this site regularly.

        (On related note, I’m amazed no other tech review site has come up with easy to browse comment structure – or even use something like Disqus.)

        • ALiLPinkMonster
        • 2 years ago

        As a trans woman who is insulted not only by the slur but by the blatant objectification of women in that comment, I must say you have reaffirmed why I love this community and have been a proud part of it for over ten years. Thanks Jeff.

        • CScottG
        • 2 years ago

        ..wow, from one extreme to the next.

        That’s just not moderation, it’s retaliation. If you expect “grown-up” comments from your website responding-readers, then you should actually be moderate in your handling of such abuses – in a “grown up” manner.

        ..ban? Sure. ..but “perma-ban”? How about requiring a good apology for being offensive to two different groups of people, and a more specific good apology for Lisa Su?

        (..I almost never “down-vote”, but if I could I’d down-vote your lack of moderation in this instance by at least a thousand.)

          • Redocbew
          • 2 years ago

          Right… Dude was clearly just misunderstood and needed someone to talk to. Everything would be fine then.

            • CScottG
            • 2 years ago

            Sarcastic straw-man argument?

            He’s clearly in the wrong, that’s why it was one *extreme*. Jeff’s was the other *extreme*.

            -and two wrongs don’t make a “right”.

            • Redocbew
            • 2 years ago

            I suppose I just have a more pessimistic view of these things. You seemed to think this person’s behavior could change over a relatively short period of time. Clearly I don’t think so (and it doesn’t matter anyway since it’s not our call), but if I’m wrong, then great. One less jackass in the world is always a good thing. Either way, I’m not interested in finding out.

        • ImSpartacus
        • 2 years ago

        That’ll teach em to insult muh SuBae!

        AyyMD to the moon!

      • albundy
      • 2 years ago

      yes, yes he is.

      • Krogoth
      • 2 years ago

      You need to go out in the world more. She looks like a typical middle-aged woman without any of the heavy make-up and other cosmetics.

      • kuttan
      • 2 years ago

      Yes, you are the only guy who thinks such stuff.

      • PrincipalSkinner
      • 2 years ago

      Not a very sophisticated comment for a guy named sophisticles.

        • cynan
        • 2 years ago

        Well, if the sophistication is limited due to it originating from the anatomical organ from which the second half of the portmanteau is comprised, it could explain a lot.

    • ronch
    • 2 years ago

    Hey guys, remember this ad from 1999?

    [url<]https://youtu.be/MK0hU0OYvCI[/url<]

    • ronch
    • 2 years ago

    Ryzen 7, Ryzen 5, Epyc, Threadripper, and now Ryzen 3.

    Why am I reminded of the days when I played Warcraft 3?

    • Unknown-Error
    • 2 years ago

    Jeff Kampman, are you going to do EPYC and Xeon reviews?

    Importantly, anyone here have any idea about the AnandTech’s EPYC vs Xeon review controversy? Now Intel people are calling AnandTech AMDTech! I believe even Linus Torvalds and David Kanter have commented on this.

    [url<]http://www.realworldtech.com/forum/?threadid=169894&curpostid=170024[/url<] [url<]http://www.realworldtech.com/forum/?threadid=169894&curpostid=170012[/url<]

      • chuckula
      • 2 years ago

      TL;DR version: Kanter is right — as he almost always is [although I was right that Intel didn’t jump to new III-V materials at 10nm, yay me].

      Some of Anand’s floating-point numbers in particular are suspicious as all hell and cast a bad light on the reliability of everything else in the review.

      To name just a couple of issues, other reviewers of the 8176 have seen considerably different results in some of the same benchmarks: [url<]https://www.servethehome.com/quad-intel-xeon-platinum-8176-initial-benchmarks/[/url<]

      Checking that review you'll see that the dual 8176 system that is "weak" in gcc at least ties the older 2699V4, while in Anand's review the results of the 2699 are noticeably superior to the new 8176 in a questionable way.

      Incidentally, Phoronix has seen massive changes in GCC compiler performance from the old Ubuntu distro that was used in those reviews to newer distros: [url<]http://www.phoronix.com/scan.php?page=article&item=7900x-linux-distros&num=1[/url<] (look at Timed Linux Kernel compilation). The 7900X is the same architecture as the 8176 and there are clearly improvements to be made simply by using a modern distro on a brand-new CPU architecture (no great shockers there).

      On top of that, the NAMD results that Anand got that show basically zero improvement from Broadwell to Skylake are not consistent with either Serve the Home or Tom's Hardware.

      There are a lot of issues, and Anand might have been better off waiting for the systems to be properly vetted before jumping to conclusions.

        • Zizy
        • 2 years ago

        While I agree Anandtech’s tests were pretty suspicious to say the least, the only thing they did right was use of 16.04 – almost nobody would be using 17.04 on these machines.

        • exilon
        • 2 years ago

        Well more on the topic, we still don’t know the IF latency.

        We know it’s ~80-100 ns to hop across a CCX over the IF, but no info on how much slower it is to hop across dies, or across sockets. Is the IF frequency still locked to the memory frequency? How much does that impact the top memory speed? This is the juicy revelation since it determines how severe the NUMA effects are going to be for TR and Epyc.

        Anandtech completely dropped the ball and didn’t measure it with an Epyc system on hand.

        ServeTheHome is being a total tease too, saying IF is slower than 4P UPI [i<]but not which part[/i<].

      • RAGEPRO
      • 2 years ago

      I’m sure Jeff would love to do Epyc and Xeon reviews if he had the hardware. You offering to buy it for him? 🙂

        • Unknown-Error
        • 2 years ago

        Well, since I am $h!t broke that’s not going to happen. But honestly, if I had the financial means I’d be happy to buy a few systems and lend them (for free) to independent reviewers like TR (as long as they return them in one piece 😀 :p). Independent reviews are critical to making informed decisions.

    • sophisticles
    • 2 years ago

    [quote<]AMD demonstrated Threadripper performance using the 1920X compared to Intel's Core i9-7900X and its 10 cores. In Cinebench multi-threaded testing, the 1920X delivered a score of 2431, compared to the i9-7900X at 2167. In a dollar-for-dollar comparison, the $999 Threadripper 1950X turned in a Cinebench score of 3062.[/quote<]

    What a shocker: a 12C/24T CPU is faster than a 10C/20T CPU, in a benchmark that seems to like the former's architecture.

      • ronch
      • 2 years ago

      I’m amazed at Ryzen’s value proposition here. The single thread argument may be important to buyers of lower core-count CPUs to whom having more cores may not make sense, but buyers who are willing to spend $1K for a chip they know exactly what to do with know they want more cores and it’s all about multitasking and multithreading. Here, it just makes a lot more sense to give your money to AMD.

        • Klimax
        • 2 years ago

        Jumped to conclusions. In some cases it might be better to buy AMD’s CPU, but that is not the general case. Far from it.

        It depends not only on the use case (i.e., how the load responds to different CPUs) but also on other costs. If SW costs you “n” times more than all the HW together, then you can disregard the cost difference and simply get the best match.

    • maroon1
    • 2 years ago

    [url<]http://www.anandtech.com/show/11550/the-intel-skylakex-review-core-i9-7900x-i7-7820x-and-i7-7800x-tested[/url<]

    The i7-7820X is only 6.7% faster than the 1800X in Cinebench, but it is 21% faster in POV-Ray 3.7b3, 18% faster in Corona 1.3, and 33% faster in Handbrake H.264 (HQ). You can look at the other benchmarks for yourself in the review if you have time.

    My point here is that Cinebench makes Ryzen look better than it should. The gap won't be as big as AMD's marketing team wants to make it. In fact, Intel only needs to drop the i7-7900X by 200 dollars to counter Threadripper (the Ryzen 12-core part costs 799 dollars and the i7-7900X can easily compete with it).

      • chuckula
      • 2 years ago

      60% more cores/threads and a higher base clockspeed will give the highest-end Threadrippers an advantage in the right benchmarks, but even in AMD's video that score of 3061 is only about 38% faster than TR's recorded score of 2205 at launch: [url<]https://techreport.com/review/32111/intel-core-i9-7900x-cpu-reviewed-part-one/6[/url<]

      So right off the bat, even in AMD's marketing you aren't getting the types of paper performance boosts that you'd expect of a chip with 60% more cores and higher base clocks, even in a benchmark that scales almost too well to be an indicator of more realistic workloads.

    • Bensam123
    • 2 years ago

    And Intel loses another price bracket. Not sure who would buy a 10c when you can get a 16c with only a slight loss in single-threaded performance (which may be improved in the future with microcode updates).

    They literally need to move all their chips down a price rung to be remotely competitive right now. The R3 is going to tear up the lower end as well, and I’m sure there will be something like an R1 to collect at the $50 price point eventually.

    • Bensam123
    • 2 years ago

    Dupe

    • Tristan
    • 2 years ago

    Intel can always lower prices, and if they release 16C SKL-X, AMD Threadripper may be irrelevant

      • Krogoth
      • 2 years ago

      Intel needs to lower prices to be competitive in the HEDT and enterprise markets. I don’t think the upcoming i9-7960X and i9-7980X are going to change things either. They will most likely be thermally limited and will not hit the same clockspeeds when fully loaded as their lesser Skylake-X brethren without resorting to exotic cooling. Skylake’s scalability also drops off significantly when you go beyond 12 cores, and this is before known software-related issues such as Amdahl’s law. The drawbacks of the mesh/ring topology start to creep up.

      • chuckula
      • 2 years ago

      [quote<]Intel can always lower prices,[/quote<]

      NO THEY CAN'T!

      They can make 18-core Xeons that will fit in an iMac, but it's a proven law of physics that only AMD can cut the price of a product.

        • Redocbew
        • 2 years ago

        Or Intel can release another 50-some new SKUs a year from now and just confuse the hell out of everyone.

          • chuckula
          • 2 years ago

          1. Release too many SKUs.
          2. Confuse the hell out of everyone.
          3. . . . . .
          4. PROFIT!

            • Redocbew
            • 2 years ago

            Step 3: “Hey look investors, we give them a stupid bucketload of options while the competition provides only a few. We must be awesome if we can cut it 50 different ways, right?”

            To be fair though, I guess some of that is just a consequence of the whole “scalable” rebranding thing which I still can’t get used to. I read “Xeon Scalable”, and wait, but that’s the end.

            • JustAnEngineer
            • 2 years ago

            Intel’s engineers spend tens of thousands of hours designing new features into a new processor. Their manufacturing plants spend billions making process improvements to manufacture the new processors with all of the new features. Intel must then have an entire department dedicated to figuring out which chip features they can disable at each price point to artificially create market segmentation.

            • Redocbew
            • 2 years ago

            Yes, artificial product segmentation is artificial, and it doesn’t happen by accident. I’m sure a lot of time and effort went into the rebranding that Intel just released a few days ago. However, I’m only half kidding about the “cut it 50 different ways” thing. I’ve personally seen business people defend almost that exact thing, and the proliferation of so many nearly identical products is nothing new.

            • anotherengineer
            • 2 years ago

            Ok
            [url<]https://www.techpowerup.com/235237/intel-adds-new-core-cpus-to-its-desktop-laptop-lineups[/url<] done and done 😉

          • the
          • 2 years ago

          Well, we haven’t gotten the SKUs with integrated FPGA or Knights Mill/Nervana technology in them yet. Multiply that by the variants that combine those with OmniPath for even more SKUs.

      • ronch
      • 2 years ago

      Not that you would but you could.

      Twist that a little bit…

      Not that they couldn’t but they wouldn’t.

      • albundy
      • 2 years ago

      historical pricing has proven otherwise. they’d rather release another cpu that wont work on the current available chipset.

      • jihadjoe
      • 2 years ago

      They don’t want to, though. When you’re holding 80% market share, price cuts hurt you more than they do the competition.

    • chuckula
    • 2 years ago

    As an interesting historical reference point, Techspot built a 16-core (dual-socket) system using Sandy Bridge Xeon parts clocked at 2.6 GHz for fun & profit: [url<]https://www.techspot.com/review/1155-affordable-dual-xeon-pc/[/url<]

    They got about 2000 points in Cinebench, so AMD managed a 50% Cinebench score boost with the same number of cores. As for price:

    [quote<]The core components of our dual Xeon E5-2670 system cost about $800, which includes two E5-2670 processors, a new dual-socket LGA2011 motherboard, and 64GB of DDR3 memory.[/quote<]

    • Mr Bill
    • 2 years ago

    wrong thread

      • chuckula
      • 2 years ago

      RIPPED!

        • ronch
        • 2 years ago

        Must be an engineering sample.

    • wingless
    • 2 years ago

    Can we all just be happy that AMD didn’t release a turd this time around and allow Intel to price gouge? Intel fans and AMD fans can rejoice in lower overall system prices that the Ryzen architecture has brought (well Intel will eventually take a hint).

    This is a new golden era of high-end computing we’re witnessing. Enjoy it while it lasts.

      • srg86
      • 2 years ago

      Very well said, the extra competition is exactly what this market needs.

      • coolflame57
      • 2 years ago

      Now if only this darn cryptocurrency mining craze would end…

        • travbrad
        • 2 years ago

        Yeah the GPU market situation more than negates any lowering of CPU/mobo prices. We were all hoping for major competition between AMD and Nvidia but between the cryptocurrency craze and the non-launch of Vega the GPU market is the worst it has been in years for gamers.

        At the end of 2015 I was debating whether I should hold onto my GTX 660 (and wait for Polaris/Pascal/Vega) or get a GTX 970. I’m increasingly glad I got the 970 instead of waiting. It was equal or even better price/perf than anything I could buy now, which is crazy.

      • ronch
      • 2 years ago

      Let’s all help make it last by buying Ryzen! ^_^ /finds cover!

        • Klimax
        • 2 years ago

        If your needs are fairly low…

          • ronch
          • 2 years ago

          Not really. Do you think anyone buying a 1600X or 1800X has meager requirements? They might not need a 6C/12T 1600X but why would they want a 4C/4T Intel 7600K for the same money instead especially if they need many threads?

            • Klimax
            • 2 years ago

            If they need many threads yet are saving at all costs, then their needs/requirements are quite low. (Or they are a victim of penny-wise, pound-foolish thinking.)

            • caconym
            • 2 years ago

            Or they’re a graphics/audio freelancer not getting enough work to justify a “proper” workstation. They just need something that lets them be competitive and doesn’t choke on pro workloads.

    • CScottG
    • 2 years ago

    *sigh*..

    If only the support for Qemu/KVM and pass-through (IOMMU) was more “mature”.

      • wingless
      • 2 years ago

      It is, even in Linux. What problems are you referring to?! Have you tried it on the Ryzen architecture yet? What motherboard are you using?

        • CScottG
        • 2 years ago

        ..this is just from reading and watching Youtubes on the subject.

        [url<]https://youtu.be/aLeWg11ZBn0?t=267[/url<]

        It's just not a seamless "mature" process yet. With a modern Intel chipset/CPU that supports ECC memory, it is with most distros.

    • AMDisDEC
    • 2 years ago

    You don’t get it.

    The brilliant Lisa Su isn’t directly challenging Intel as the arrogant former upper managers of Opteron did.
    What she is doing is strategically positioning AMD CPUs in specific market niches where AMD products will offer real differentiated value, and allowing the market to choose which platform fits its needs best.
    No arrogant statements about how AMD tech is better than Intel’s, just straight facts, Jack.
    Those whining that the CPU isn’t this or that aren’t AMD’s target customer base anyway. So who cares which benchmarks they choose to highlight or not? AMD’s target customers will just purchase a few CPUs and do their own evaluations.

    Lisa Su is one very very smart and beautiful soul!
    The best CEO AMD has employed since it began selling CPUs.

      • chuckula
      • 2 years ago

      I know it’s so easy to fall for Lisa Su.

      But watch out, she’ll only end up ripping your [s<]heart[/s<] [u<]threads[/u<] out.

      • kuraegomon
      • 2 years ago

      Surely this is advanced parody? If so, bravo. If not, dear lord.

      • ronch
      • 2 years ago

      I think the real kudos should go out to the engineers who worked on Zen. Any AMD CEO would love to have a highly competitive chip to throw at Intel and to play around with in terms of market positioning but if you don’t have that chip it doesn’t matter how much marketing or positioning you do: people will know if a product is good and word will spread.

      Maybe Rory also deserves praise for hiring Jim and Lisa. He knew what the market wanted too, and he hired the guy he knew could push all the other AMD engineers to do what they needed to do. He also didn’t plan to stay long, so he took in Lisa, whom he knew was strong enough to lead the company.

      I think past AMD management like those of Hector and Dirk simply didn’t have the guts to pull out the gun and shoot Intel. They held back their engineers out of fear of starting another Athlon era war when AMD almost killed itself going toe to toe with Intel, not realizing that it was during those daredevil years that AMD was at its best. This is shown by how they were conservative with K10 and chose the easy path with Bulldozer at a time when they probably had more money for R&D than the time when Zen was being developed.

        • LoneWolf15
        • 2 years ago

        You also have to have a CEO that “lets” its engineers compete, rather than hamstringing them with marketing requirements and blurbs.

        It goes hand-in-hand. We’ve all seen moments (e.g., Pentium 4) where marketing/management and engineering weren’t in lock-step with each other, and management decision stopped engineering from being as great as they really could be.

          • ronch
          • 2 years ago

          Like I said, past management didn’t have the guts. They tried to change the game according to their rules with things like having more but weaker cores, GPGPU computing with APU (HSA), etc., but they forgot that they’re playing Intel’s game in Intel’s sandbox so they have to play by Intel’s rules.

        • NTMBK
        • 2 years ago

        [quote<]This is shown by how they were conservative with K10 and chose the easy path with Bulldozer[/quote<]

        I'm not sure I'd call Bulldozer the [i<]easy[/i<] path. It was a complete ground-up redesign of the entire CPU architecture, trying out a whole new CPU design paradigm. It was a big, bold idea, far riskier than continuing to iterate on K10, but it still got the green light and went ahead.

        Of course, Bulldozer is also a prime example of why taking the risky path doesn't always pan out.

          • ronch
          • 2 years ago

          While Bulldozer presented some interesting new ideas, mainly the shared resources, it’s really nothing too sophisticated in the world of microprocessor design. The really tough parts of any modern CPU design include the schedulers and OoO Load/Store units. The decoder width is 4, which is par for the course for any modern ‘big core’ CPU, but each integer cluster is quite narrow with only 2 ALUs (not sure if each AGU is tied directly to each ALU; John Fruehe seemed to indicate they were independent although the arrangement could also be a bit similar to K7-K10). It’s a long pipeline too, presumably 20 stages for the main path, which is nothing really exceptional. Lastly I would think the cache design is a bit lazy: I’ve read somewhere, maybe Agner’s, that cache contention can get nasty and it kinda feels like it’s not very optimal. Benchmarks seem to confirm this. So apart from the shared resources concept and compulsory OoO Load/Store mechanism it was a straightforward design.

          As for the from-scratch design AMD didn’t really have any other choice because they’ve been dragging K10 from generation to generation and extending it to support newer instruction extensions like AVX or FMA would mean practically redesigning everything anyway, which is just what they did with Bulldozer.

          Nonetheless, don’t get me wrong, Bulldozer remains my favorite CPU architecture, weirdly enough. Maybe it’s because of its uniqueness with regard to the shared resources, maybe it’s also because of its idiosyncrasies.

      • WaltC
      • 2 years ago

      The Intel version of Lisa Su is 79 going on 95, and you can almost hear her arteries hardening when you talk to her. She moves as if mired in molasses, and about the best she can manage these days is a knee jerk reaction–much delayed–to the pace AMD is setting. Once again, AMD is setting the bar and pushing ahead while Old Lady Intel is charging the batteries for her hearing aid and adjusting the torque of her scooter. Recently, Old Lady Intel was asked about how she “stays so young” at her advanced years and she quipped, “It’s the pacemaker. It’s Intel inside, sonny! It costs 2x as much as it has to and runs half as fast as it should!”

    • Kougar
    • 2 years ago

    EPYC chips posted some pretty decent results. As long as the consumer can put all the cores to work then the choice seems pretty clear between a 7900X and 1950X.

      • ermo
      • 2 years ago

      Intel hopes you’re wrong.

      People who need specific intel-only features will keep buying intel and good on them.

      The rest will send their money AMD’s way if they deem that AMD offers more performance and/or features for the dollar for their specific application.

      Personally, I’m saving up for an upgrade to a RyZen+ 8-core w/a couple of Vegas once the silicon has been respun and the motherboard/microcode/RAM situation settles down and AMD figures out how to harness Vega. I’m thinking jan/feb next year or so.

      IF intel makes what I deem to be a better product (unlocked multiplier, ECC support, VT-d) within a +€50-100 price bracket for the CPU and mobo combined, THEN (and only then) will I consider buying intel.

        • srg86
        • 2 years ago

        I’m the opposite.

        AMD would have to make something hands down better in every way (I’m thinking Athlon 64 X2 vs Prescott) to make me even consider switching back to them again.

        These new products are great, but not great enough to make me want to take the chance on them after my past experience with AMD platforms (which weren’t horrible, but in hindsight not ideal).

          • Kougar
          • 2 years ago

          I had that sort of experience with Haswell and DDR3L RAM, then had it again with a second kit of DDR3. So I no longer have any favorable impressions of Intel platforms being less prone to unforeseen issues. At this point either platform build looks like a roll of the dice to me.

        • Kougar
        • 2 years ago

        I do wish you luck on Vega, as I think the odds are pretty long on that one. By that time frame Volta will be in stock everywhere. I am so pessimistic on Vega I didn’t even think twice before getting a 1080 Ti, my only worry was a fast Volta TI might show up. But I’m banking on that being around 10 months away.

        The whole Intel vs AMD thing is always going to factor into the single/many thread workload the consumer needs. At this point I really don’t see that changing for several CPU generations into the future. If anything AMD’s platform, while admittedly launching rough, is in better shape now than I was expecting. No major IO or driver issues, at least so far. I can’t really say anymore which I would prefer to buy at this point, which is fine as I don’t technically need an upgrade from haswell anyway.

        If anything, if AMD steals enough sales with Threadripper then the next-gen Intel HEDT platform may spawn another wave of aggressive models making it an even better value.

    • chuckula
    • 2 years ago

    Interesting point from the end of that video: According to AMD both of those Threadrippers are 180W TDP parts.

      • freebird
      • 2 years ago

      TDP is probably what is keeping the Epyc processors clocked lower, maybe next year will provide a speed bump to the whole line via 14nm + refinements and even more in 2019 with full blown 7nm Ryzen 2 processors. Looks like things are getting “fun” again in the CPU & GPU marathons…

      • ptsant
      • 2 years ago

      Dat TDP…

      Platforms and cooling will have to be top notch. Expect $400 motherboards, for both the Intel and AMD versions.

      I sure wouldn’t want an extra 180W heat in my room, but at my job, that’s another story.

      • ronch
      • 2 years ago

      Well, the 1950X is effectively 2x1700X so yeah.

    • paultherope
    • 2 years ago

    I’m pretty sure they have posted the specs of all three test machines used in the CB demos; it’s right at the end of the video: [url<]https://youtu.be/J3pJ_--nf5E?t=255[/url<]

      • Jeff Kampman
      • 2 years ago

      Sorry, was under time pressure and didn’t watch to the end.

      • Mr Bill
      • 2 years ago

      Hmmm very comparable configurations and all with Win10. But I thought they had to use Win8 for CB?

    • gmskking
    • 2 years ago

    For the money, might as well go with the 1950X

    • Unknown-Error
    • 2 years ago

    well we know why they only showed multicore numbers 😉

      • Krogoth
      • 2 years ago

      Because it has identical single-threaded performance to Ryzen, but single-threaded performance isn’t that important in the HEDT and enterprise markets. You get regular Kaby Lake/Skylake chips if you want single-threaded performance above all else.

        • brucethemoose
        • 2 years ago

        Yes, but gamers are still going to complain when they discover it’s no faster than an 1800X. AMD will market this platform towards them too, after all.

          • chuckula
          • 2 years ago

          You don’t get it!

          Only The Threadripper lets me run 4 Vegas at FULL PCIe 16 bandwidth for ULTIMATE GAMING POWAR while I get to make snarky quips about how a 7900X at 4.6GHz generates heat and is therefore a useless product.

          • derFunkenstein
          • 2 years ago

          Gamers buying Threadripper expecting improved gaming performance are going to get out of the purchase exactly what they deserve. Dumbasses.

          Edit: AMD can market it to whoever they want, but gamer morons buying these things expecting games to run faster didn’t do any research.

            • chuckula
            • 2 years ago

            I got one word for you.

            Just one word: STREAMING!

            ThreadRipper FTW!

            • derFunkenstein
            • 2 years ago

            I think Ryzen 7 is plenty for CPU-based encoding, but I’m sure someone somewhere thinks you’re correct.

            • chuckula
            • 2 years ago

            The funny thing is that if you watch AMD’s pre-RyZen 7 event they went out of their way to say you need 8 cores for streaming with that Dota demo.

            Then they used the exact same Dota demo in streaming mode… before the RyZen 5 launch for 6 core products. So over the course of a few months the minimum threshold for streaming dropped from 8 cores to 6 cores (yay optimization).

            Given that track record, I expect to see the same streaming demo being used to prove that the 7900X can’t game because you need at least 12 cores to stream.

            • derFunkenstein
            • 2 years ago

            They might do that, and they can market however they want, but you and I know better and we’re not buying a Threadripper to stream our DOTA games. 😉

            • the
            • 2 years ago

            *raises hand*

Currently looking at a build later this year/early next year to handle multistream encoding and decoding. Thread Ripper is a VERY attractive chip for this scenario. On the expansion side we can add plenty of HDMI 2.0 plus 12G SDI input cards for local capture, 10 Gbit Ethernet, and fast PCIe storage while providing full bandwidth to a GPU. Thread Ripper’s horsepower is enough to decode additional NDI (SDI encapsulated over ~500 Mbit/s IP streams) while encoding both H.264 and H.265 in a single box. Thread Ripper’s 1 TB of memory support would permit huge buffers for recording and for post-production editing. The only thing missing from the wish list is Thunderbolt 3 support on motherboards, as there are some nice IO decks that utilize that.

A low core count EPYC is also on the radar due to its support for even more IO and raw CPU power. The higher clocks of Thread Ripper will likely help more vs. a low core count EPYC chip, though.

            Still need motherboard pricing but the core system might cost half as much vs. the Xeons we were previously looking at while providing more IO and performance.

            • derFunkenstein
            • 2 years ago

            Well that’s great but it’s got nothing to do with what chuckula and I were discussing.

            • the
            • 2 years ago

            I like tacos.

            • brucethemoose
            • 2 years ago

            GPU encoding though…

            We’re talking about the segment of people who aren’t happy with GPU or Ryzen 7 encoding quality, and are willing to drop $600 just to fix it. That can’t be a very big niche, and it’ll shrink considerably with Volta/Vega if either can encode VP9.

            • ronch
            • 2 years ago

            Probably marketed best towards RKOI brats.

            • travbrad
            • 2 years ago

            Yep a fool and their money are soon parted. If someone buys a $1000 CPU without doing research the problem isn’t AMD or their marketing. There are countless products marketed towards “gamers” that don’t do anything special for gaming. At least Ryzen will actually run games.

            There are some cases where companies deliberately do things to screw you that are unavoidable, but you can literally go to any hardware review site and see that Ryzen’s strength isn’t single-thread/gaming performance.

        • Klimax
        • 2 years ago

Still absent is the MP scaling ratio, which is also of quite some interest to the target market.

          • Krogoth
          • 2 years ago

Which is rather strange, since it looks like Threadripper has nearly double the performance of the Ryzen 1800X, which should put its MP ratio around 15X.

    • Tristan
    • 2 years ago

Comparing core counts and clock speeds, it seems that Skylake-X has 15% higher IPC than Ryzen in AMD's most favourable benchmark. In other apps/benches the difference may be larger.

      • ultima_trev
      • 2 years ago

It’s more like 10%. The reason Skylake-X scores so well in multithreaded apps is its all-core turbo of 4+ GHz out of the box, whereas Ryzen can only manage that on one core out of the box.

        • Mr Bill
        • 2 years ago

        SMT eh? You would think that AMD could find a way to turbo up just the cores in a pattern that maximizes their die separation so they could run hotter and spread out the heat better. You could call it.

        Wait for it…

        Symmetric Multi Turboboost

          • Kougar
          • 2 years ago

          “Don’t touch Turbo Boost. Something tells me you shouldn’t touch Turbo Boost”

    • G8torbyte
    • 2 years ago

    I’m trying to recall if there was mention of 6, 8 or 10 core versions of ThreadRipper coming as well? Same time in Aug or later?

      • RAGEPRO
      • 2 years ago

      I doubt you will see versions of Threadripper with core counts indivisible by 4.

        • chuckula
        • 2 years ago

        It’s possible but weird.

For example, a 14-core Threadripper would [b]not[/b] be using two dies with 7 cores each turned on. That's not how it works. Instead, it would be one fully-operational die (8 cores) and another die with only 6 cores turned on. Possible to do, but it could present some weird behavior in certain cases.

          • RAGEPRO
          • 2 years ago

          Hmm, there are 24-core EPYC parts (not divisible by 16) so clearly it doesn’t cause that big of a problem (or AMD doesn’t care). Interesting. I suppose alternatively those EPYC parts are using three fully-enabled dies? Seems unlikely though.

          [edit] Oh right, because then you’d lose two of your memory channels. Man, Ryzen is weird.

            • chuckula
            • 2 years ago

            24 Core Epyc would be 4 dies with 6 cores each active.

            • RAGEPRO
            • 2 years ago

            Oh, yeah, right, fair enough. I’m not awake yet, heh.

            • derFunkenstein
            • 2 years ago

            I think if you divide by the number of dies (dice?) and get an even number, that’s a reasonable number of cores. 24-core EPYC would be 4×6 (and each would be 3+3, so one core on each CCX disabled).

            • frenchy2k1
            • 2 years ago

            you actually need to divide by twice the number of dies, as each die has 2 CCX and AMD has only offered balanced configurations so far:
            TR 1920X = (3+3) *2
            TR 1950X = (4+4) *2

            Same with Epyc, although they offer lower core counts, allowing for high memory/high IO.
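The balanced-CCX arithmetic above can be sketched in a few lines (a toy model on my part, assuming AMD always keeps every CCX in the package at the same active-core count):

```python
# Toy model of the balanced-CCX rule described above: every CCX in the
# package keeps the same number of active cores.
CCX_PER_DIE = 2          # each Zen die carries two CCXes
MAX_CORES_PER_CCX = 4

def valid_core_counts(dies=2):
    """Core counts reachable with a balanced per-CCX configuration."""
    ccxes = dies * CCX_PER_DIE
    return [per_ccx * ccxes for per_ccx in range(1, MAX_CORES_PER_CCX + 1)]

print(valid_core_counts())   # Threadripper (2 dies): [4, 8, 12, 16]
print(valid_core_counts(4))  # EPYC (4 dies): [8, 16, 24, 32]
```

Note that 10- and 14-core parts never appear in either list: they would require unequal CCXes.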

            • derFunkenstein
            • 2 years ago

            That’s why I mentioned the even number.

      • G8torbyte
      • 2 years ago

Gordon over at PCWorld (he used to be editor at Maximum PC) is tracking the ThreadRipper developments [url=http://pcworld.com/article/3197184/components-processors/amd-ryzen-threadripper-prices-specs-release-date-and-more.html]here[/url]. He comments: "With 16-core and 12-core Threadripper chips now out in the open, people expecting 14- and 10-core Threadripper CPUs (and full parity with Intel’s Core i9 selection) might be disappointed. AMD hasn’t said boo about any further Threadrippers yet despite earlier leaks indicating a fuller lineup, and AMD executives have told PCWorld it’s not even clear they feel they have to match Intel’s offerings." I hope AMD doesn't feel that way, since a few fewer cores with the 64-PCIe-lane goodness would be nice!

        • null0byte
        • 2 years ago

        Eh…

I think AMD is at least in a position, one they rapidly landed in since March, where, well, they can honestly do their own thing. Why does someone need to do whatever Intel is doing, for Chuckulasake?

        Market Cap as of 7/15/2017 in the US:
        AMD: 12.46B
        INTC: 163.21B
        NVDA: 97.01B

        Seriously, what a whole lot of people (and Count Choc-I mean Chuckula) are forgetting is AMD is:
        1/13 the size of Intel
        1/7.79 the size of nVidia (if you want to round the denominator up, essentially 1/8)

        ie. this little itty bitty (compared to its competitors) company that is trying extraordinarily hard to compete with others multiple times its size. The fact that it can even get in the same damn ballpark is amazing unto itself considering how many others have tried and failed over the years. (oh, and yes, this little itty bitty company has had some spectacular stupid of its own over the years too, I’m not blind to that)

        But, good grief people, pinning the hopes of the world on one little scrappy company is chuckulas-I mean bananas. And blasting said little scrappy company into the ground if they fail to live up to the hype and deliver you from the Intel-based computers *you already own* is also chuckulas-I mean bonkers. %$# @&mn it, why does that word keep sneaking in?

        You don’t have to buy or even *like* their products, but c’mon the least you can do is give them a Krogothclap….

          • brucethemoose
          • 2 years ago

          Market cap =/= company size.

          Tesla is bigger than Ford, after all, but who do you think has more employees, produces more cars, and brings in more revenue?

      • G8torbyte
      • 2 years ago

Hmm, it seems less likely that lower-core versions will be in the lineup. Other sites' supposed leaks were apparently guesses made before this announcement.
This article at PCGamer says they looked into it too: [url=http://www.pcgamer.com/amds-threadripper-lineup-launches-in-august-starting-at-799-ryzen-3-in-july/]see here[/url]. They state "Our own Jarred Walton also deserves kudos for correctly surmising that sites supposedly 'leaking' 14-core and 10-core models were guessing rather than using any concrete information. Jarred noted that to do a 10-core chip, AMD would have to use an asymmetrical CCX configuration (3 cores on two, 2 cores on two for a 10-core chip, or 4 cores on two and 3 cores on two for a 14-core)."

    • jts888
    • 2 years ago

    Edit: double post, deleted

    • jts888
    • 2 years ago

    I’m pretty excited about the Threadripper platform, but I’m still kind of hoping to hear about DIMM suppliers throwing together some high clocked (3200+ MHz) unbuffered ECC parts.

    They shouldn’t cost that much more to produce than unbuffered non-ECC (assuming 9x8b designs since ChipKill is not warranted for workstations) and would allow Threadripper to show its fuller potential in both throughput and RAS areas.

      • just brew it!
      • 2 years ago

      With 4 channels available even lower clocked RAM will have impressive bandwidth if you install 4 DIMMs.

        • jts888
        • 2 years ago

        It’s not just raw memory bandwidth, it’s the prospect of getting the IF clocks up (and GMI links too?), but also having some semblance of confidence from the ECC that the system hasn’t silently gone flakey from that higher clocking.

      • freebird
      • 2 years ago

Not much of a market for them, so you won’t see anyone making them if Threadripper is the only system that could “run” ECC DDR4 @ 3200. Actually, I think the Agesa 1.0.0.6 BIOS for my ASRock Fatal1ty Gaming mobo now has an ECC option, but same reason… not much demand for ECC (3200) memory in the PC world except for hobbyists.

        • jts888
        • 2 years ago

        I’m not super optimistic either, but it’s frustrating since it feels like an easy business opportunity.

        <10% higher parts and assembly/testing costs, and probably >20% higher potential sticker prices, with near zero R&D costs.

        If I was forced to make a TR system tomorrow, I’m not sure if I’d go 2400 ECC or try for higher clocked non ECC, but having to even make that choice feels annoying in its fundamental non-necessity.

          • freebird
          • 2 years ago

          Never mind, I see brucethemoose typed my thoughts below…

      • brucethemoose
      • 2 years ago

      With the right chips, you can probably get an ECC DIMM that will OC pretty well.

        • Waco
        • 2 years ago

        ECC memory and overclocking rarely go hand-in-hand.

          • ermo
          • 2 years ago

          Which, if you think about it, is actually a bit odd, since ECC would reduce the chance of a conservative OC leading to botched data?

          If I buy an expensive CPU, why don’t I get the choice to turn up or down the wick as I please with the added benefit of ECC? Call it ‘Black Edition’ and take my money already!

          Scenario 1: Calculate critical stuff: Turn down the CPU to factory verified speeds.
          Scenario 2: Play around with something unimportant: Turn up the wick and see how fast it’ll run.

          I’m perfectly capable of making that decision myself, no need for some marketing person to do that for me thankyouverymuch.

            • jts888
            • 2 years ago

            The overwhelming majority of ECC DIMMs sold are buffered (of either FB or LR flavor), and I think it’s largely these buffer chips that play poorly with overclocking, not the underlying DRAMs.

            Enterprise parts need to be reliable for a relatively unaggressive clocking range, since memory power is starting to eclipse most other components in data centers.

            Unbuffered ECC (for Ryzen, TR, and similar) should mostly be able to get away with stuffing the DIMMs 9/8ths as full with high-clocking DRAMs, but this seems to be an untapped market for pretty understandable reasons.

            • Waco
            • 2 years ago

            Exactly, yes.

The market for high-clocked unbuffered DRAM just doesn’t exist for the most part. The market for high-clocked *buffered* DRAM is certainly there, but it’s exceedingly hard to run everything at ever-increasing clocks.

          • Krogoth
          • 2 years ago

          Because it doesn’t make any sense. Overclocking memory defeats the entire purpose of ECC memory. ECC memory is always rated at JEDEC-spec.

            • smilingcrow
            • 2 years ago

            ECC is there to catch the errors not to tell you how close to the sun to fly.
            Admittedly it’s usually used in systems where reliability is very important but there is no reason why you can’t push it beyond specs.
            When you consider that you can track errors that ECC detects you could argue that it’s the best type of RAM to use for over-clocking.

            • Krogoth
            • 2 years ago

ECC is “slower” than non-ECC memory, and it doesn’t protect you against data corruption from an overclocked CPU.

            The people who get ECC memory seek reliability and data integrity. Overclocking defeats both of them.

          • CuttinHobo
          • 2 years ago

          But… But… The ECC corrects errors caused by having RAM overclocked to the ragged edge!

    • 1sh
    • 2 years ago

    Why are they running Windows 8?

      • Jeff Kampman
      • 2 years ago

      Cinebench bug.

        • bhtooefr
        • 2 years ago

        Hold up a sec, though.

        Windows 8.1 doesn’t have any patch support for Zen or KBL platforms, right? So, this is a configuration that isn’t actually possible in a sane environment.

If your benchmark doesn’t run properly on [i]the only version of Windows supported on your CPU[/i], try a different benchmark.

          • tipoo
          • 2 years ago

          The bug is Windows 10 is detected as Windows 8 😉

            • bhtooefr
            • 2 years ago

Oh, it’s [i]that[/i] issue.

That's not actually a [i]bug[/i] per se, as much as Cinebench using a deprecated API for detecting the Windows version, not reporting to Windows that it supports versions newer than 8, and Windows lying about the version as a result. This was part of why Microsoft could make it NT 10.0 instead of 6.4, actually.

Basically, now, if you want to be treated as a modern app, you have to have an app manifest that advertises compatibility with the new OSes. Otherwise, compatibility hacks are automatically enabled, instead of making the user manually enable them. (And, of course, the mechanism to lie about the OS version has existed since the app compatibility tools in Windows 2000, with the UI made more visible in XP. It's just that 8.1 started doing it automatically.)
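For anyone curious what that opt-in looks like, it's a `<supportedOS>` entry in the application's manifest. A minimal sketch (the GUID shown is the published Windows 10 compatibility ID; double-check it against Microsoft's documentation before relying on it):

```xml
<!-- Minimal application manifest: without a supportedOS entry like this,
     legacy version-detection APIs report Windows 8 on newer systems. -->
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <compatibility xmlns="urn:schemas-microsoft-com:compatibility.v1">
    <application>
      <!-- Windows 10 -->
      <supportedOS Id="{8e0f7a12-bfb3-4fe8-b9a5-48fd50a15a9a}"/>
    </application>
  </compatibility>
</assembly>
```

Each supported OS version gets its own GUID entry, so an app that also wants to be seen as 8.1-aware adds a second `<supportedOS>` line.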

        • derFunkenstein
        • 2 years ago

        Cinebug

          • chuckula
          • 2 years ago

          Cinnabon > Cinebug.

            • derFunkenstein
            • 2 years ago

            mmmbreakfast

            • Redocbew
            • 2 years ago

You might even see [s]Saul Goodman[/s] Gene while you're there.

    • Peldor
    • 2 years ago

$800 for 2431 or $1000 for 3062. Those are essentially the same performance/$. It's fairly peculiar to not see a premium for the top end.

The 1800X scored 1628 for $499 in TR’s test, which gives it only a small performance/$ advantage over Threadripper.
(Though at its current $419 street price it’s further ahead.)

    I don’t think AMD remembers how to extract money from the market.
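The ratios above are easy to verify; a quick sketch using the scores quoted in the thread and the launch list prices from the article:

```python
# Cinebench multi-thread score per dollar, using the scores quoted above
# and the launch list prices from the article.
chips = {
    "TR 1920X": (2431, 799),
    "TR 1950X": (3062, 999),
    "R7 1800X": (1628, 499),
}

for name, (score, price) in chips.items():
    print(f"{name}: {score / price:.2f} points per dollar")
# TR 1920X: 3.04, TR 1950X: 3.07, R7 1800X: 3.26
```

Which bears out the observation: the two Threadrippers land within a few hundredths of each other, while the 1800X keeps a slight edge.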

      • Krogoth
      • 2 years ago

      They are trying to capture marketshare first. Undercutting Intel on the HEDT and Enterprise markets is the best short-term approach to this.

      • stdRaichu
      • 2 years ago

      > I don’t think AMD remembers how to extract money from the market.

I suspect they do remember, but at the moment they’re more focused on actually [i]creating[/i] the market. High-end desktops have been an Intel stronghold for a decade now, so AMD will be wary of the "taking the piss" factor and making sure that all the product tiers look like good value.

Personally I'm damn chuffed they've chosen not to milk the top end on their list prices at least; what they actually end up costing at retail is another matter, of course.

      • Mr Bill
      • 2 years ago

[quote]I don't think AMD remembers how to extract money from the market.[/quote]
Sure they do. Lose a little bit on every CPU sale; make it up on volume.

    • Krogoth
    • 2 years ago

    AMD has thrown down the gauntlet.

    Hopefully, this will force Intel to reevaluate the price points on their HEDT line-up.

      • freebird
      • 2 years ago

      I would bet that AMD hopes the opposite, so they can enjoy some good sales headwinds

        • UberGerbil
        • 2 years ago

You realize headwinds slow you down, make you work harder for the same gain, and are generally undesirable? If you’re going to use an anemos-ian analogy, perhaps they want wind beneath their profit wings?

          • kamikaziechameleon
          • 2 years ago

          headwinds are favorable for sailing. I find sailing upwind to be preferable.

          • freebird
          • 2 years ago

          Yeah, I realized that as I typed it, but they are also good at tacking their “sails” into the Intel headwind…

        • ronch
        • 2 years ago

        More like Intel will just crank up its marketing efforts. Well, let them. Let those who buy smart get more for their money.

      • Mr Bill
      • 2 years ago

Let's all SHED a tear for the HEDT.
[url=http://www.anandtech.com/show/11636/amd-ryzen-threadripper-1920x-1950x-16-cores-4g-turbo-799-999-usd]By virtue of being sixteen cores, AMD is seemingly carving a new consumer category above HEDT/High-End Desktop, which we’ve coined the ‘Super High-End Desktop’, or SHED for short.[/url]

        • ronch
        • 2 years ago

        Yeah it’s kinda like AMD is saying, “We’ll give you the multithreading performance Intel will give you 10 years from now if we didn’t exist… at the same price!”

    • ptsant
    • 2 years ago

    The most interesting thing about the price/performance is that the chip provides almost the same single-core boost as the 1800X. So, single-threaded should not be bad. And intermediate loads (8-10 threads) are probably going to hugely benefit from the 4 memory channels.

    I also expect the chip to be extremely power efficient. Perf/w is going to be great for such a big chip.

    The price is highly competitive but way out of my league for home use. I know that at my bioinformatics job (integer, parallel workloads) these would make beastly workstations. Much better than the 4c/8t Xeons we currently have.

    • Zizy
    • 2 years ago

    Interestingly, TR is the only AMD processor where the top part has better perf/$ on its own than the weaker version. Sensible for this segment, but this is exactly what makes it so surprising.

    This 16C is very tempting workstation thingy. I might suggest buying some of those when we get reviews and holidays end 😛

      • sreams
      • 2 years ago

      “Interestingly, TR is the only AMD processor…”

      Pretty sure Tech Report isn’t an AMD processor, but what do I know?

        • JustAnEngineer
        • 2 years ago

        ThreadRipper?

          • chuckula
          • 2 years ago

          WASSONING CONFIRMED!

          • derFunkenstein
          • 2 years ago

          THRPR

            • Mr Bill
            • 2 years ago

            Almost reaching Bloom County. I wonder if Opus would consent to being the new mascot for LINUX and THRPR?

            • JustAnEngineer
            • 2 years ago

[url]https://www.amazon.com/Bloom-County-Dont-Blame-Voted/dp/B016YFRKE8[/url]

      • Mr Bill
      • 2 years ago

Maybe AMD can get Robert Plant and the HoneydRippers to sing [url=https://www.youtube.com/watch?v=f2wyYEnjeas]'Sea of Love'[/url] to sell the infinity fabric aspect of ThreadRipper.

    • ronch
    • 2 years ago

    I’m estimating this to be about 2.4x faster than AMD’s previous 16-core flagship, the Opteron 6378. For reference the 6378 ran at 2.4GHz and, well, is a 16-core only by Bulldozer standards.

    • chuckula
    • 2 years ago

    So it looks like I called the price quite a while ago.

[url]https://techreport.com/discussion/31986/intel-core-x-series-cpus-and-x299-platform-revealed?post=1038039[/url]

      • derFunkenstein
      • 2 years ago

      It’s basically two 1800Xes duct taped together, so it makes sense.

        • Duct Tape Dude
        • 2 years ago

[url]http://www.bash.org/?98450[/url]

          • chuckula
          • 2 years ago

          +1 for the quote.
          +2 for the quote including a reference to your username.

        • lmc5b
        • 2 years ago

        I still wonder how much that duct tape costs.

        • jts888
        • 2 years ago

None of these prices exist in a vacuum. These are slightly higher than I was guessing ($650 and $850), but I also didn’t expect Skylake-X’s launch to end up flat-footed on as many fronts as it did.

We’re now in a bizarre spot, though, where some 1S Epyc chips will have comparable parallel throughput and dramatically more robust memory support (2TB vs 128GB) for similar prices.

          • Zizy
          • 2 years ago

Epyc costs ~50% more and the board will likely be more expensive as well.
The 7401P costs 1450 iirc and offers 24C × 2 GHz (base), 2.8 (all-core turbo), for a total of ~48-67 core·GHz. The 1950X costs 1000 and offers 16C × 3.4 GHz (base?), ?? (all-core turbo), for a total of ~54 core·GHz.

But yeah, Epyc is a pretty nice alternative if you need more than “just” 128GB of RAM, as it isn’t that much more expensive.

EDIT: I rechecked and yes, the 7401P is indeed just $1075. Crazy cheap. So Epyc costs the same, but the board will probably be more expensive.
Can you use the 7401P in Threadripper’s board as well?
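The cores × clock arithmetic above can be sketched quickly (the 7401P figures and $1075 price are the rumored numbers quoted in this thread, not confirmed specs):

```python
# Back-of-envelope "core-GHz" aggregate-throughput comparison, as used
# in the comment above. The 7401P clocks are the rumored figures quoted
# in this thread, not confirmed specs.
def core_ghz(cores, ghz):
    """Aggregate throughput estimate: core count times clock speed."""
    return cores * ghz

# EPYC 7401P: 24 cores at 2.0 GHz base / 2.8 GHz all-core turbo
print(round(core_ghz(24, 2.0), 1), "-", round(core_ghz(24, 2.8), 1))  # 48.0 - 67.2
# Threadripper 1950X: 16 cores at 3.4 GHz base
print(round(core_ghz(16, 3.4), 1))  # 54.4
```

So the 1950X's base-clock aggregate sits between the 7401P's base and all-core-turbo figures, which is why the two look like alternatives at similar prices.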

            • jts888
            • 2 years ago

            Who knows if they’re right, but the numbers I’ve seen thrown around are $1075 for 7401P and $750 for 7351P.

            • Zizy
            • 2 years ago

You are right; I rechecked, and several sites quote that exact number. Crazy cheap.

          • ImSpartacus
          • 2 years ago

          Yeah, the 1S Epyc stuff is very aggressive, particularly that $1000 24C part.

        • Demetri
        • 2 years ago

        No duct tape, just glue according to Intel’s press slides.

[url]https://www.techpowerup.com/img/QhA6gdonrmBT27fr.jpg[/url]

          • Krogoth
          • 2 years ago

Wow, Intel needs to fire their marketing staff.

Their mindset is still stuck in the 1990s and early 2000s, and their ham-fisted attempts at HEDT marketing are just embarrassing. They completely missed out on the K8/Opteron era.

Intel needs to wake up and realize that the halcyon days of Nehalem and beyond, with their massive margins, are over.

          • freebird
          • 2 years ago

          It is funny, how “unglued” Intel marketing has become since AMD has “Ryzen” this year.

          • ImSpartacus
          • 2 years ago

          I can’t find references to AMD’s old “duct tape quad” taunting, but I’m almost certain that it existed. Do you have a reference for that?

            • Redocbew
            • 2 years ago

            Yeah, AMD did the same thing when Intel stuck two dies together to create the Pentium D. Yay marketing.

            • ImSpartacus
            • 2 years ago

            I think they did it all the way until Intel had a native 4C die in Lynnfield around the 2010ish time frame (don’t quote me).

            It was spectacularly embarrassing since the “inferior” C2Q stuff actually competed very well with AMD’s first native 4C die (Phenom? Can’t remember).

            • RAGEPRO
            • 2 years ago

            Yep. And despite the latter having an on-chip memory controller too, heh.

          • willyolioleo
          • 2 years ago

          Given the benchmark results, AMD glue > Intel engineering

            • Klimax
            • 2 years ago

            What results? I am aware of only some test by Anandtech and they didn’t support your assertion.

        • blastdoor
        • 2 years ago

        No disagreements, just a few thoughts:

        1. Duct tape is awesome. I applaud its use here.
        2. Intel duct tape would cost way more than this, so I’m happy to see AMD basically giving the duct tape away for free.

          • chuckula
          • 2 years ago

          Intel would use gaffe tape.

            • Shobai
            • 2 years ago

[quote]gaffe[/quote]
Heh, I'm not sure you're wrong! [url]http://bfy.tw/4siD[/url]

            • Redocbew
            • 2 years ago

            I’d like a gaffe eraser more than gaffe tape. Making a blunder stop after it happens is useful, but making it go away completely would be awesome.

            • chuckula
            • 2 years ago

            High school stage crew. I know all about gaffe tape.

            • Shobai
            • 2 years ago

            Are you certain you don’t mean gaffer tape? As in, the strong, cloth backed, waterproof tape used by a gaffer?

            • chuckula
            • 2 years ago

            Yeah.. that’s exactly what I mean.
            Gaffer’s tape.

Or gaffe tape to those of us who used it for years and didn’t learn about it from Google [as a side note: there was no Google at the time].

            • Shobai
            • 2 years ago

            Calm down, chuckula. You made an ironically amusing gaffe and now you’re onto entirely misplaced personal attacks.

          • derFunkenstein
          • 2 years ago

          Oh, definitely. We can appreciate this duct tape.

        • ImSpartacus
        • 2 years ago

        But things are almost never that simple, so the apparent simplicity is unusual in my opinion.

For example, a 1600X is basically 3/4 of an 1800X, yet it costs about half as much.

        I think it’s a big deal that this is at one grand.

        • Mr Bill
        • 2 years ago

        Kapton Interposer tape

      • flip-mode
      • 2 years ago

      You and Krogoth: the great mystics of the TR comment sections. You guys are brilliant, smarter than the rest, and never too humble to pat yourselves on the back. Legends in your own minds. Praise you! May you be as great a hero in real life as you are in TR article comments!

        • Srsly_Bro
        • 2 years ago

        Thousands estimated that same price, but he who is great and humble was brave enough to remind us. Praise him in his name for all his glory.

          • chuckula
          • 2 years ago

          It wasn’t rocket science to predict the correct price.

          But that didn’t prevent people like you from launching the standard personal attacks and downthumbs when I was right.

          So if me being right didn’t take that much mental effort, what does that say about the intellectual capacity of everybody who attacked me for being right?

            • flip-mode
            • 2 years ago

[quote]It wasn't rocket science to predict the correct price.[/quote]
Stop being so humble! No mere mortal could have done it. Only you, sir. I will pursue those who do not believe in your divinity to the ends of the earth!

            • freebird
            • 2 years ago

Seriously, I downvote you just for the fun of it… LoL! 😉

            • Meadows
            • 2 years ago

            You shouldn’t have commented in the first place. You added nothing to the discussion except “look how clever I was” in an attempt to fish for appreciation.

          • derFunkenstein
          • 2 years ago

          yeah it didn’t take a genius.

            • Srsly_Bro
            • 2 years ago

            If we let the Great One think only a genius could have guessed and posted about it on a forum, will the Great One continue being humble, in all his greatness?

      • freebird
      • 2 years ago

      I was thinking probably $1200 & $1000 but kudos to AMD for this pricing. Should convince some fence-sitters vacillating between X299 & X399 to go the AMD route and maybe drive more volume. This also implies they must be able to pump out as many Ryzen modules as they want.
