AMD will explore New Horizons and Zen CPUs on December 13

AMD has a new page up with a countdown to December 13. On that day, we'll be treated to a web-show called "New Horizon" where the company will give the first public demonstration of Zen silicon in a gaming context. Tortilla-chip pontiff Geoff Keighley will be hosting, and pro gamer Peter "ppd" Dager (of American e-sports group Evil Geniuses) will be on hand to give the new chip a workout.

If early reports are to be trusted, Intel's upcoming Kaby Lake processors may not make a huge performance leap over the company's current-generation Skylake chips. That situation could leave AMD in a prime position to capture a fair bit of new-CPU hype at CES 2017. Zen benchmarks, both leaked and official, show the upcoming "Summit Ridge" processors competing well with Intel's many-core Xeons.

New Horizon will be the first time anyone has had the opportunity to see the processors' gaming performance, though. Not only is that metric much more relevant to a typical user's workload, it's also a specific point of interest given the poor single-threaded performance of AMD's Bulldozer-family chips. Only time will tell whether AMD will go so far as to trumpet its chips' 99th-percentile frame times on stage, but we'd take odds on it.

AMD is calling the show an "exclusive advance preview." True to that name, AMD is asking folks who want to watch to sign up on its site for a slot ahead of the event. We'll be there with some bags of Doritos in tow on December 13.

Comments closed
    • KaosMike
    • 3 years ago

    Hey guys, long time lurker, first time poster.

    Personally, I'm very excited about next year's Zen release.

    I do not know how many creative content folks are on this site, but I guarantee you that everyone in the gaming, film/animation, and FX industries should be pretty excited.

    Most workstations in this field are high-end desktops with near-top-end consumer CPUs and GPUs. Now that 14-nm-fabbed GPUs are here, that doesn't mean all studios bought them, since there is not enough incentive unless your video cards are old.

    However, that may change big-time once AMD rolls out their stuff. Even my old machine at home, an i7-920 I got six years ago, still isn't beaten by 3x by some of today's top-end consumer Intel chips. That doesn't mean Intel hasn't made a ton of progress; it's just that these CPUs have an integrated GPU, which is totally useless for this work.

    So really, if AMD brings an 8-core, 16-thread CPU to the desktop market, even if it's a little slower than some Intel Xeons, it will force Intel to bring those to the desktop market and lower prices across the board. And the standard for high-end desktops will shift from 4 cores to 8.

    Everybody wins,

    Except Intel’s bottom line.

    Exciting times ahead.

    • BaronMatrix
    • 3 years ago

    AMD is the Howard Stern of tech sites.. This article has the most comments… by A LOT…

    • chuckula
    • 3 years ago

    I’m going to be extremely disappointed in AMD if they have the nerve to compare Zen to a 6950X in some GPU bound games and then claim that Zen is somehow the new price-performance leader.

    Here’s why I’ll be disappointed: They’ve already pulled that stunt in the past.

    Time to step it up AMD: Compare Zen to a 72 core Xeon Phi instead! Then run around claiming how much better your 8 core chip is compared to a $6000 72 Core Xeon!

    [url<]http://ark.intel.com/products/codename/48999/Knights-Landing[/url<]

      • maxxcool
      • 3 years ago

      I think that is EXACTLY what we will see. It makes -0- sense to sell an 8C/16T CPU that is within 90% of the per-core generalized IPC of Intel's Xeon setups for 1/2 the cost.

      I think that yet again we're going to see them betting the farm on 'highly organized' threading 'requirements' to see the real potential of the architecture.

      I would LOVE to be wrong … but the marketing approach thus far reeks of ‘some level’ of a BD repeat.

      • BaronMatrix
      • 3 years ago

      Vega is there to slap Knights Landing… With Full HCC, a GPU doing C++ will destroy KL and Tesla…

        • synthtel2
        • 3 years ago

        HSA is cool, but GPUs don’t really do C++, and Vega 10 will be destroyed by GP100, not the other way around. Vega 10 versus GP102 could probably go either way depending on the workload. (I don’t know enough about Knights Landing to say anything on it.)

          • BaronMatrix
          • 3 years ago

          The WHOLE POINT of FUSION and HSA is to have the GPU share and USE C++ pointers…

          “HC: Heterogeneous Compute
          What is it? HC is a C++ dialect with extensions to launch kernels and manage accelerator memory. It closely tracks the evolution of C++ and will incorporate parallelism and concurrency features as the C++ standard does. For example, HC includes early support for the C++17 Parallel STL. At the recent ISO C++ meetings in Kona and Jacksonville, the committee was excited about making the language capable of expressing all forms of parallelism—including multicore CPU, SIMD and GPU. We’ll be following these developments closely, and you’ll see HC move quickly to include standard C++ capabilities.”

          “C++AMP: C++ Accelerated Massive Parallelism
          What is it? C++AMP is a C++ language for data-parallel programming. It originated at Microsoft as an open specification and introduced the world to a true C++ accelerator language. At the time of its release in 2012, the state of the art in GPU programming was C-based APIs such as CUDA and OpenCL. The HCC compiler supports a C++AMP mode, and many of the data structures in the HC dialect derive from that heritage. We did extensive user testing with C++AMP and rolled our insights into HC. The enhancements include raw pointers to accelerator memory, more control over asynchronous execution, the ability to specify group sizes at launch time, richer math functions and more.”

          [url<]http://gpuopen.com/rocm-do-you-speaka-my-language/[/url<]

            • synthtel2
            • 3 years ago

            The point of fusion/HSA is to better interface between GPUs and normal programming like C++, yes, but the C++ in question is usually not what’s actually running on the GPU. What’s running on the GPU are those kernels mentioned in the second sentence of your first quote. HC has more in common with DirectX than HLSL, to use graphics examples (though there isn’t a perfect equivalent in the graphics world AFAIK).

            It’s possible to make a C++ compiler for a GPU, as C++ AMP shows, but just because you can do something doesn’t mean you should. C++ really isn’t a good match for how GPUs work. Maybe Microsoft’s resources can eliminate enough footguns to make it not entirely terrible, but I’d be genuinely impressed if they did, doubly so if it still resembled C++ enough to be called that, and skeptical regardless.
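
            For illustration, here's a minimal sketch of that host/kernel split in C++ AMP terms (assuming an MSVC-style C++ AMP compiler; the data and names are made up for the example). Everything outside the lambda is ordinary host C++; only the restrict(amp) lambda is the kernel that actually runs on the accelerator:

                #include <amp.h>
                #include <vector>

                int main() {
                    std::vector<float> a(1024, 1.0f), b(1024, 2.0f), c(1024);

                    // Host-side C++: wrap the data so the runtime can stage it on the accelerator.
                    concurrency::array_view<const float, 1> av_a(1024, a);
                    concurrency::array_view<const float, 1> av_b(1024, b);
                    concurrency::array_view<float, 1> av_c(1024, c);
                    av_c.discard_data();

                    // The restrict(amp) lambda is the kernel -- the only code that runs on the GPU.
                    concurrency::parallel_for_each(av_c.extent,
                        [=](concurrency::index<1> i) restrict(amp) {
                            av_c[i] = av_a[i] + av_b[i];
                        });

                    av_c.synchronize(); // copy the results back into the host vector
                }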

            • chuckula
            • 3 years ago

            Yeah, if HSA is so great how come 5 years later it’s basically unused?

            Lots of people are using accelerators these days, but HSA isn’t needed (or particularly useful) in using those accelerators.

            Case in point: The Top 500 includes exactly 1 system that uses AMD GPUs. And that system doesn’t use HSA whatsoever.

            • maxxcool
            • 3 years ago

            I did not know that. Are they all NV or PHI shops ?

            • chuckula
            • 3 years ago

            In the top 500, the large majority (I think like 80%+) of the systems still just use plain old CPUs in big clusters. That includes the world’s #1 system* (“TaihuLight”) in China which is using a customized CPU, but it is still very much a multi-core CPU in nature.

            Of the minority that are using accelerators of some type, Nvidia has the largest number of systems, with Intel bagging a decent share, including three systems in the top 10. AMD GPUs are currently only used in number 288: [url<]https://www.top500.org/system/177996[/url<]

            AMD in total (including CPUs) is in about 7 systems of the top 500, the most notable being Titan at Oak Ridge... which is ironic, since Titan is really a showcase for Nvidia GPUs.

            * #1 at Linpack, at least. It's nowhere near as good at the HPCG benchmark, which is designed to test system performance beyond embarrassingly parallel number crunching: [url<]http://www.hpcg-benchmark.org/custom/index.html?lid=155&slid=289[/url<]

      • DoomGuy64
      • 3 years ago

      Comparing a $300 desktop CPU to a $6000 server CPU. Well, I guess we know where ol Chucky stands on this chip. Such a hard decision for regular builders to make, when we supposedly have an extra $5700 laying around to burn on a home PC.

      I can just imagine the classic P4 marketing scheme returning: benchmarks of niche scenarios specifically tuned for Intel chips. "See, Intel is faster!" Because that's really what the average builder wants in a desktop chip: spend several hundred dollars more because a niche Intel-optimized benchmark gets a few extra points on a graph. And here I thought we were done with shady synthetic benchmarks that don't translate into general performance.

        • chuckula
        • 3 years ago

        Your inability to understand satire that is obviously above your paygrade is cute.
        Almost as cute as your incoherent conspiracy theories that are always wrong.

          • DoomGuy64
          • 3 years ago

          Satire? Your posts are more like fanboy strawgrasping.

          Satire needs to have a valid point to make sense. What's your point? You're not comparing products that are even remotely related.

    • chuckula
    • 3 years ago

    Actually, for all the Zen hype, the latest totally unconfirmed rumor is that Vega is going to be demonstrated at this event.

    Which, considering it’s supposed to be about games, is more interesting than Zen anyway.

    • ronch
    • 3 years ago

    Not sure how true [url=http://en.yibada.com/articles/175446/20161126/amd-zen-amd-processor-summit-ridge-cpu-processor-intel-broadwell-e.htm<]this[/url<] is, but it seems to be in line with AMD's projected 40% improvement over Excavator. I reckon AMD needs 75% better IPC than Excavator to match Broadwell or Skylake.
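
    A rough sketch of that arithmetic, taking the commenter's figures at face value rather than any measured data:

        Excavator per-clock performance             = 1.00 (baseline)
        Zen, with the projected 40% gain            = 1.00 x 1.40 = 1.40
        Broadwell/Skylake, per the claimed 75% gap  = 1.00 x 1.75 = 1.75
        Zen relative to Broadwell/Skylake           = 1.40 / 1.75 = 0.80

    On those numbers, Zen would land at roughly 80% of Intel's per-clock performance.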

    • mkk
    • 3 years ago

    My wallet is ready!
    But will have to wait a little longer…

      • chuckula
      • 3 years ago

      AMD needs to learn from Kickstarter & Steam early-release games: It’s [b<]NEVER[/b<] too early to get your hands on their money.

    • Chrispy_
    • 3 years ago

    Intel have stopped innovating in terms of gaming IPC. I think they need a whole architecture redesign to get any significant performance from it.

    Sandy is basically Kaby Lake in terms of gaming-specific IPC. The 5-10% performance improvements in games have come from faster, lower-latency DDR4 and minor tweaks to the schedulers and branch prediction over the years. When talking about architectural refinements across Intel generations from Sandy to Kaby, yes, there are some vast improvements in specific situations due to bolted-on, instruction-set-specific logic, but games tend to be unable to take advantage of them.

    For this reason alone, I really hope Zen forces Intel to compete with a new and improved architecture, rather than taking the easy (lazy?) option of very incremental tweaks and bolt-on instruction sets to the last major architectural change (Sandy).

      • ronch
      • 3 years ago

      Didn’t know there is a separate IPC for gaming. 😀

        • NTMBK
        • 3 years ago

        Sure there is. IPC depends entirely on the application being run. The trivial example being, if I write a program that is nothing but millions of NOPs, it’s going to get a very high IPC. If I write a program which is pointer chasing all over memory, IPC is going to be terrible.
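
        As a toy illustration of those two extremes (not a rigorous benchmark, and the sizes and names here are arbitrary), the first loop below is a stream of independent operations an out-of-order core can overlap, while the second is a serial chain of dependent loads. Measured with something like perf stat, the first loop typically reports several instructions per cycle and the second well under one, with the exact numbers depending on the CPU and memory:

            #include <algorithm>
            #include <cstdio>
            #include <numeric>
            #include <random>
            #include <vector>

            int main() {
                const std::size_t n = 1 << 22;

                // High IPC: independent additions the core can execute several per cycle.
                std::vector<int> data(n, 1);
                long long sum = 0;
                for (std::size_t i = 0; i < n; ++i) sum += data[i];

                // Low IPC: pointer chasing, where each load depends on the previous result,
                // so the core mostly waits on memory.
                std::vector<std::size_t> next(n);
                std::iota(next.begin(), next.end(), std::size_t{0});
                std::shuffle(next.begin(), next.end(), std::mt19937{42});
                std::size_t idx = 0;
                for (std::size_t i = 0; i < n; ++i) idx = next[idx];

                std::printf("%lld %zu\n", sum, idx); // keep results alive so the loops aren't optimized away
            }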

      • maxxcool
      • 3 years ago

      Disagree slightly. I, like a few others, think that until x86 software changes fundamentally, x86 silicon will struggle for gains simply because there is little left to optimize for outside of custom execution units to accelerate repetitive/intensive tasks.

      AMD is simply catching up to the same state of code and execution entropy as Intel. There is little here that is magical or actually new.

        • DrDominodog51
        • 3 years ago

        I slightly disagree with your statement. Intel could improve performance in gaming workloads, but it would absolutely tank power efficiency. And that’s without boosting clocks.

        Boosting clocks on anything with Haswell-or-above performance is the easiest way to boost performance in gaming workloads.

          • maxxcool
          • 3 years ago

          Given power budgets and constraints on die size for prosumer and consumer silicon, I do not think it is possible. Gaming is such a tiny aspect of Intel's overall design and a minor consideration for general-application CPUs. Since almost all of their revenue is based on generalized sales, it makes sense to proceed as they have and gain through better process tech, which allows for better gating and higher clock speeds thanks to better thermals.

          Intel 'could add another FPU' or custom-build a physics engine and reduce transistor counts elsewhere for better 'gaming' in the 'now'... but the legacy x86 code base would then peak again and we'd see stagnation again. A direct negative secondary effect of that change would also be that the 'gaming' CPU becomes less useful outside of gaming. In my opinion, that would be an error.

          Better coding is key... (imo) But with x86, the only improvements seem to be custom instructions combining old code to execute in tighter bundles, and then those stagnate as well. While Itanic was a huge error, it may be time to revisit the idea of a new core language.

          edit:: for train of thought derailment.

        • Chrispy_
        • 3 years ago

        You’re sort of repeating what I’ve said from a different perspective – in that there have been no significant generic x86 gains since Sandy because there’s little left to optimize, and hence why all the architectural improvements have come from stacking on lots of custom execution units.

        AMD are hopefully catching up to the status quo after the Bulldozer error that they have persevered with for so long, but the potential competition AMD brings to the table is that it might force Intel to make significant compiler improvements if AMD can get their HSA magic to go more mainstream.

        HSA was largely ignored because the GPU advantages AMD’s APUs brought were massively offset by their poor x86 performance and the effort of having to design for HSA in the first place. If Zen is good x86 and they can also throw in GCN for parallel compute, we could see another x86 instruction set evolve from AMD64 to move the industry forward another giant leap.

        I can respect that x86 as we know it seems pretty maxxed out, unless Intel really have just been sitting on their hands doing the bare minimum (which I doubt, given all the dollars they throw at it).

    • Anovoca
    • 3 years ago

    A Zen press event, because AMD has never embellished a Powerpoint slide ever!

      • Peldor
      • 3 years ago

      So much this. Like 4 gigathis.

        • ronch
        • 3 years ago

        Just be careful you don’t overload the TR servers with your billions of this’s and lock us all out.

    • Kretschmer
    • 3 years ago

    Something tells me that we’ll see GPU-limited titles and “Look! The same FPS as a 7700K for half the price in these AAA titles at 4K ultra settings.” Alternately, “Look! The same smoothness as Intel when you benchmark with our 460!”

    Gaming demos are AMD’s favorite way to fudge benchmarks.

    • ronch
    • 3 years ago

    Planning to get Zen in 2018 or so. Might even wait for Zen+++.

    Edit – care to explain the downthumbs?

    • bfar
    • 3 years ago

    I think this is massively exciting. Even if these are slower than Intel equivalents, they’ll still be credible parts competing for enthusiasts. And Intel appear to be willing to concede some of this market – HEDT in particular looks very vulnerable.

    Aside from halo products, I reckon we're going to see CPU prices plummet back to 2010 levels over the next year or two.

      • ronch
      • 3 years ago

      When did strong competition from AMD result in lower prices? If anything, having strong products just allowed AMD to play in the high end and charge Intel-like prices. On the other hand, it’s the availability of cheap chips that couldn’t compete in the high end like those from Cyrix and AMD’s own K6 that made Intel put out Celerons to avoid ceding that part of the market to the competition.

        • rechicero
        • 3 years ago

        Actually, what strong competition brought us was Conroe instead of Itanium. I would say… Thanks AMD

          • just brew it!
          • 3 years ago

          Yup. If Intel had its way, 64-bit would have taken a lot longer to trickle down to the masses. Their plan was to segment the market as 32-bit x86 for consumers, 64-bit Itanium for datacenter. AMD managed to upset that apple cart big-time.

          • blastdoor
          • 3 years ago

          Yup — and Itanium was not cheap!

          Competition from AMD absolutely leads to better price/performance for consumers. I think what confuses people is that they just focus on price.

          In 2005, the best x86 processor you could buy was the Athlon 64 X2 4800+ for about $1,000:
          [url<]https://techreport.com/review/8295/amd-athlon-64-x2-processors[/url<]

          If AMD didn't exist, the best processor you could have bought at the time would have still cost $1,000, but it would most likely have been a single-core Pentium 4. And Core would never have come into being. Competition from AMD is incredibly valuable to consumers!

        • just brew it!
        • 3 years ago

        I think you underestimate the effect AMD had back in the original Athlon/Duron days. They were still struggling to win market share, and enthusiasts were able to get some fantastic bang for the buck. They couldn’t charge Intel-like prices because they didn’t have any major OEM wins, and still had the reputation of being the bargain-basement vendor that only played in the Celeron segment.

          • DoomGuy64
          • 3 years ago

          Anyone who lived through the P4 era knows that AMD didn’t have a reputation of playing in the Celeron segment, aside from the Duron. Yes, it was cheaper, but that was because AMD wasn’t using Rambus and overcharging for their name. It was also much faster when it came to gaming. Intel shot itself in the foot with netburst and rambus, not only because it was too expensive, but performed poorly in a lot of non-optimized scenarios. Intel knew this, and released the Tualatin and Pentium M as alternatives.

          The whole point of the XP naming scheme was making a joke of the P4, because the Athlon had superior IPC, not to mention it was an overclocking champ. Even though Intel went all out with its EE chips, and AMD quit making new Athlon XPs, you could still overclock the XP and beat the latest-model P4 in gaming. There wasn't a single P4 that was ever worth buying, unless you actually needed its niche features.

          The Athlon 64 is where AMD really took the crown and became a viable competitor. Anyone who says otherwise is trying to spin history. The 64 was the best CPU available for any scenario, and the x2 was another huge win for them.

          If AMD can once again compete with Intel in gaming, I think we’ll see a partial return of the P4 mindset where AMD is considered better for gaming, and Intel is considered better for the niche computing market.

            • just brew it!
            • 3 years ago

            I was referring to the pre-P4 era. Before Athlon64 and Sempron and XP.

            • DoomGuy64
            • 3 years ago

            True if you're talking about the K6-2/K7, but the Athlon T-Bird was around during the P3 era, and it was a beast.

            [url<]http://www.anandtech.com/show/557/15[/url<]

            • just brew it!
            • 3 years ago

            I think you missed my point. Yes, T-Bird was their “coming out” party. Thing is, because of the history of K6-x and its ecosystem, and lack of major OEM support, mainstream users were largely oblivious.

            Even the K6-x series was pretty damn good once they went through a die shrink (K6-2+ and K6-III+). Unfortunately, those came too late to matter much in the market (since K7 was already rolling out), or save K6-x’s reputation.

            I’ve built systems using every AMD processor generation back to the original K6, and (skipping a few generations back from that) their original “second source” clone of the Intel 8080A.

            • DoomGuy64
            • 3 years ago

            K6 didn't have major support, and for good reason. It was too generic and issue-prone, regardless of whether later updates fixed those issues. OEMs probably didn't want to deal with the support headaches.

            The T-Bird had heavy mainstream support, as it had no major flaws and was much cheaper, given that Intel was pushing Rambus at the time. One of my family's first prebuilt PCs was a T-Bird with a TNT2, bought from TigerDirect. OEMs ignored the K6-2, not the T-Bird. Intel was solely relying on its strong-arm tactics at that point, and plenty of OEMs still sold AMD because it was that much better.

          • ronch
          • 3 years ago

          The K7 days were different. It was AMD’s first time to break into the performance segment. Fast forward to the K8 days and they were charging around a grand for their top chips just like Intel. Maybe a little cheaper, but certainly not cheap.

        • bfar
        • 3 years ago

        AMD needs to increase market share very quickly. Matching Intel on price for similar features would be a poor strategy imo.

          • Klimax
          • 3 years ago

          What AMD needs is money. You cannot pay with market share. You could ask pre-2010 Nokia how well that went…

    • ronch
    • 3 years ago

    Does this mean some of us will have Zen chips in time to put under the tree?

    • ronch
    • 3 years ago

    Well, for AMD's sake, I hope they are well aware that ads and fancy marketing no longer work like they used to, the same way buying ad space or air time can no longer guarantee victory in an election. People are wiser. They look at the product, then look at alternatives. Then they think and decide.

    Of course, if the product really sucks, they have no choice but to hype it all the way to Mars.

      • Misel
      • 3 years ago

      You mean like in the presidential election of the United States?

      *SCNR* 😀

      • Kretschmer
      • 3 years ago

      Volume comes from the retail and OEM business channels where no one has even heard of benchmarks. Volume is all about OEM design wins and thermal properties.

        • ronch
        • 3 years ago

        So no one has ever heard of benchmarks in the retail and OEM channels? I haven’t heard of anything like that, I think.

          • Kretschmer
          • 3 years ago

          Most consumers are hard pressed to quantify differences in units besides price and “directly comparable” metrics like clock speed. Business volume is largely driven by price and specific features, as office work doesn’t require the latest and greatest.

          Most people I have met would have difficulty naming the chips that run their laptops and desktops. They might (might) know Intel or AMD, but beyond that it’s all a mystery.

          Enthusiasts who read benchmarks are a distinct minority.

            • w76
            • 3 years ago

            Most consumers also do dumb stuff like buy cell phones with 4K screens because 4K > 1080p, then wonder why cell phone battery life sucks. I have to disagree with ronch here, marketing and empty specs still win a lot of sales.

            • ronch
            • 3 years ago

            If that’s the case the FX line would’ve stomped all over the i7 models already.

    • hasseb64
    • 3 years ago

    “upcoming Kaby Lake processors may not make a huge performance leap”
    Of course they won't. How could anyone even think that would be possible?

    • Unknown-Error
    • 3 years ago

    So just two weeks to go. Let's see who was right and who was wrong about Zen's performance.

      • chuckula
      • 3 years ago

      Oh there’s more than 2 weeks to go until we find out anything real about Zen’s actual performance.

      This is yet another dog and pony show that won't have anything substantive.

        • Unknown-Error
        • 3 years ago

        True. We have to wait until early 2017 (February?) for the independent reviews. But I guess the experienced tech-writers (like Scott) can still gauge things from what they reveal (or don’t reveal). If AMD is extremely cryptic or evasive in their presentation/answers during this “reveal” on the 13th then it is probably another Bulldozer. AMD was extremely cryptic & evasive prior to the official Bulldozer NDA lifting.

          • Klimax
          • 3 years ago

          Isn’t Scott with AMD or am I mistaking people again?

            • maxxcool
            • 3 years ago

            He is, but I really doubt he has any input or say on the AMD marketing aspect.

            • Unknown-Error
            • 3 years ago

            He is with AMD, but in the graphics division. AMD was impressed with his analysis of graphics cards at TR.

            Anyway, the point was, experienced and knowledgeable tech writers will know if AMD is trying to mask poor performance by cherry picking/deceptive benchmarking and evading key benchmarks.

        • maxxcool
        • 3 years ago

        Sure it will… they will get to show that their new cores are as fast as Intel silicon in games with slow engines, Dota-style games that don't push hardware, graphics-limited games (World of Tanks, etc.), and WinZip Super Spyware Edition…

        Now in some respects… that's good enough. Get the low-end market on RTS, emulation, lighter-weight games, sims and whatnot, and sell them a CPU for half the price that does the same or a tiny bit better FPS?? That 'is' a win.. not a great one, but any sales you can pull from Intel are good.

        iPhone brevity and stuff…

      • HisDivineOrder
      • 3 years ago

      We won't really know until people have them in their machines. A press event set up by AMD, with machines built by AMD, and who knows what they've done to gloss over things that don't work as well in the real world?

      That won’t tell us much. AMD’s been known to throw a good press show before.

      Funny thing to me is they didn’t take everyone to somewhere tropical like they usually do. People love new products more when they’ve just come in from the beach and drank a few too many drinks with umbrellas.

      • gmskking
      • 3 years ago

      I read that it is Broadwell-type performance per core. So it will not be as fast as Skylake or Kaby Lake per core. But obviously it will have 16 threads to help make up for that. I am leaning towards Zen because of this and because I hate the Intel socket.

        • BaronMatrix
        • 3 years ago

        That was a sample from August with beta mobos… I believed they could get 10-15% with tweaking… And now there are higher-clocked samples.. Looks good… Keller FTW…

        • Unknown-Error
        • 3 years ago

        Per-core IPC will be Sandy Bridge/Ivy Bridge level. But remember, instead of having a module with 2 integer cores and a shared FPU, the new generation will pack 2 full Zen cores. "Moar Cores," but this time "actual cores." The single-threaded benchmarks will have decent gains but not up to HW/BW/SKL/CBL. On the other hand, when it comes to multi-threaded benchmarks, instead of having 4 modules (e.g. the FX-8350), the top Summit Ridge will have 8 Zen cores! Don't forget, in terms of overall performance, a single Bulldozer module was only 80% of 2 full K10 cores.

    • Ninjitsu
    • 3 years ago

    But can it run Crysis?

      • LocalCitizen
      • 3 years ago

      not without a discrete video card. see, kraby lake is winning already.

      • ronch
      • 3 years ago

      Hahaha that joke never fails to crack a rib!!

      /s

    • jts888
    • 3 years ago

    If this is indeed Summit Ridge, I’m a bit surprised that gaming is even an attempted selling point.
    How many games at this point really do better with 6-8 lower clocked cores compared to 4 running at 4+ GHz?

      • Platedslicer
      • 3 years ago

      Civilization?

      I know I get p*ssed waiting for the AI turn towards the endgame, on a 3770K @4.5GHz.

      • ronch
      • 3 years ago

      You forget: AMD’s next generation parts have always been designed for the Next Generation of software. 🙂

      • chuckula
      • 3 years ago

      Not too many.
      From TR's latest CPU review, the 5960X occasionally ties quad-core CPUs in situations that look to be GPU-bound, but never has a noticeable lead in any game while lagging in several other games: [url<]https://techreport.com/review_full/28751/intel-core-i7-6700k-skylake-processor-reviewed[/url<]

      [Edit: And that's why I'm fully expecting AMD to base any Zen gaming comparisons against the even-slower clocked 10-core 6950X while playing up the price differential and intentionally pretending that Skylake 6700K parts being sold for less than $300 at Microcenter don't exist.]

        • DrDominodog51
        • 3 years ago

        From a business perspective, it makes sense to ignore Microcenter prices. Most people don’t live within a reasonable distance of a Microcenter.

          • Kretschmer
          • 3 years ago

          Can we really call that living?

      • DoomGuy64
      • 3 years ago

      Depends on how efficient those cores are, like the Athlon XP was.

    • Kougar
    • 3 years ago

    If AMD had a Conroe in their pocket they wouldn’t be dorking around with CPU-favored gaming benchmarks and occasional piecemeal samplings of tests.

    It's not possible for gaming performance at real-world settings to increase by such a large factor that the final FPS numbers would leave a crowd awestruck. So it sounds like just a regular hype/publicity event to show off the latest model chip.

      • DoomGuy64
      • 3 years ago

      Back in the P4 days, Intel cherry picked benchmarks by using scenarios that specifically benefited from netburst, while AMD was more competitive in gaming and general use.

      PC gamers are more interested in gaming benchmarks than CPU encoding, so I’d say it’s a fair test for Zen. AMD doesn’t have to “wow” consumers, just show that they’re competitive and they’ll get sales from the anti-intel crowd.

      I’m more concerned about getting a CPU that:
      * Supports Upgrading and Overclocking.
      * Doesn’t have security backdoors.
      * Isn’t going to disable advertised features a month after release, because they weren’t tested enough.

      Looking at you, Haswell. (Should have gotten a free/discounted upgrade after a class action, but no.) If AMD can pull off decent gaming numbers, I may be switching over. Haswell works okay, but it’s more a byproduct of lack of competition than an actually good CPU.

        • smilingcrow
        • 3 years ago

        “PC gamers are more interested in gaming benchmarks than CPU encoding. ”

        And CPU encoders are more interested in CPU encoding benchmarks than gaming benchmarks.
        Funny old world isn’t it.

          • DoomGuy64
          • 3 years ago

          CPU encoding is pointless with today’s GPU acceleration, so it doesn’t really matter anymore. I’d rather have a decent cpu for gaming that is reasonably priced.

            • chuckula
            • 3 years ago

            In that case since Intel has Quicksync and AMD has… nothing, it looks like you’ve just torpedoed Zen before it could even launch.

            I’ll be sure to remember this post if by some chance Zen does well in the x264 or Handbrake benchmarks.

            • rechicero
            • 3 years ago

            AMD has… VCE. I doubt Zen will have it, though, but who knows. Anyway, the original post from DoomGuy64 was not really right. The CPU offers better quality than any hardware encoder, and it's more flexible (a software encoder will always have the latest version of any encoder, etc.). In quality, according to this (https://www.reddit.com/r/letsplay/comments/4mn239/can_someone_explain_vce_quicksync_and_x264_and/ ; not really a great source, but I don't want to waste time looking):

            CPU > VCE > Quicksync > NVENC

            One important thing (as you are a heavy Intel fanboy) is that quality is worse (just a tad worse, but worse) if encoding while multithreading. So, single-core performance would be better quality-wise than having more cores. That's the point you'll want to remember, as single-core perf should be better for Intel. But then again, if you want to encode several videos, then you'll be able to use the extra cores we can expect from AMD.

            Bottom line: if Zen does well in x264 or x265 benchmarks, it'll be great for some users that value quality, especially those who want to encode lots of videos. Others will rather use hardware encoders, from any vendor (for example, if streaming or recording a game session).

            • HERETIC
            • 3 years ago

            In my experience-CPU encoding is better quality than GPU encoding.
            So CPU encoding is NOT pointless.

            • DoomGuy64
            • 3 years ago

            Depends on the settings, software used, and hardware feature support.

            [url<]https://en.wikipedia.org/wiki/Intel_Quick_Sync_Video[/url<]
            [quote<]Quick Sync, like other hardware accelerated video encoding technologies, gives lower quality results than with CPU only encoders. Speed is prioritized over quality.[/quote<]

            I don't think super high quality is necessary or even wanted by most people, unless you are ripping Blu-rays. The biggest reason people use GPU encoding is probably for posting YouTube videos, which don't offer great quality anyway. If you think CPU encoding is more important, then give an example and explain why you think it matters. I don't think it does for the vast majority of people using it, pirating Redbox movies aside.

            • HERETIC
            • 3 years ago

            “I don’t think super high quality is necessary or even wanted by most people,”
            DUH-If you want super high quality you don’t re encode……………………..

            Re-encoding is all about “COMPROMISE”
            Getting the best quality per size-or smallest file size per fixed quality.

            Example-ripping all your box sets and re-encoding to put on your media server.

            I agree with your comment about cr*p youtube videos………………………..

            • f0d
            • 3 years ago

            I can't stand how horrible GPU/Quicksync encoding is.
            The files are larger for the same quality, or the quality is lower for the same file size. This makes a difference for me because most Australian ISPs have horrible upload speeds, and it's faster to just encode better and smaller using the CPU than it is to upload a larger file from Quicksync.

        • synthtel2
        • 3 years ago

        Anti-Intel crowd here. I agree with most of it, but AMD has backdoors too. IIRC, they might be less open to the world, but also IIRC, AMD does a lot less verification of payloads. If I am in fact recalling correctly, that makes it easier for some attackers, tougher for others. On the plus side, if someone builds a super-worm that infects things through Intel ME, AMD hardware might escape simply due to small market share. :/

        Also the TSX bug bit me more directly than most, but it’s not really fair to say AMD doesn’t have QC issues too.

      • chuckula
      • 3 years ago

      You have a good point. Intel intentionally gave reviewers early access to Conroe for benchmarking.

      If Zen really is all that, AMD would do the exact same thing, especially because it looks like Kaby Lake will be on the market before Zen. No sense in giving Intel a free pass if Zen really is better.

        • DrDominodog51
        • 3 years ago

        [quote<]No sense in giving Intel a free pass if Zen really is better.[/quote<] UNLESS THE ILLUMINATI FORBADE THEM FROM GIVING REVIEWERS EARLY ACCESS!!!11!!!

        • flip-mode
        • 3 years ago

        Not true: it has really never been the case that when Intel does something, AMD will do it the same way.

        AMD does its own thing, which is probably why it is constantly getting criticized for having such an awful marketing game.

        None of the above is meant to say I think one way or the other that AMD has a winner of a CPU. Just saying it’s impossible to determine using the above stated measuring stick.

      • Ninjitsu
      • 3 years ago

      yeah but there is a clear difference in gaming benchmarks between present gen Intel and AMD CPUs, in non GPU-bound workloads.

      If they can make Zen good enough for gaming (i.e., <20% difference to an i7-6700K or 7700K), then it should be fine (as long as the price is right). That way you appeal to gamers, and it's obviously good enough for other home-office use.

      They may not be able to take the crown off intel’s head, but at least they can lay siege to the castle 😀

    • CScottG
    • 3 years ago

    Doritos: a better chip than what you are going to see at that event.

    (..well, at least for gaming enthusiasts; there will however likely be some markets that will be happy for the introduction. ..of course you know that based on the Xeon comparison comment.)

    • 1sh
    • 3 years ago

    Many people forget how small AMD is compared to Intel yet they are still able to compete.
    Intel is like 20 times bigger than AMD…

      • curtisb
      • 3 years ago

      Intel also has their hands in about 50 times the number of pies AMD does.

        • Anonymous Coward
        • 3 years ago

        Maybe 2 times.

          • just brew it!
          • 3 years ago

          Hmm… CPUs, SSDs, RAID controllers, NICs, Bluetooth chipsets, flash memory, server motherboards, rackmount chassis, fiber optic transceivers, FPGAs, real-time operating systems, compilers… I’m probably missing a couple. OK, so maybe not quite 50x. But certainly a lot more than 2x. And they *are* nearly 50x the size if you look at revenue.

            • Andrew Lauritzen
            • 3 years ago

            Remember too – the fab part of Intel is a huge part of the total size!

            Honestly by this logic ARM is clearly kicking everyone’s asses. But that’s not really a relevant metric in reality.

        • ronch
        • 3 years ago

        50 pies maybe but if you only count the BIG pies Intel only has a few more than AMD.

          • Klimax
          • 3 years ago

          Are we counting entire AMD or AMD’s CPU/GPU pies?

            • ronch
            • 3 years ago

            Entire AMD, including their not-so-real foray into SSDs and RAM.

      • smilingcrow
      • 3 years ago

      “Yet they are still able to compete.”

      Not for the last decade they haven't, which is no surprise given their size.

        • blastdoor
        • 3 years ago

        Also, if Zen ends up being competitive, it’s not so much because AMD is doing anything miraculous as it is Intel just isn’t trying very hard.

          • Mat3
          • 3 years ago

          [quote<]Also, if Zen ends up being competitive, it's not so much because AMD is doing anything miraculous as it is Intel just isn't trying very hard. [/quote<] No, it's just that increasing per thread performance any more than Intel is already doing is extremely difficult and the trade-offs for doing so are not very appealing.

            • blastdoor
            • 3 years ago

            There are several simple things Intel could do to improve the price/performance of their CPUs:

            1. bigger cache
            2. eDRAM
            3. more cores at a lower price
            4. wider cores (might not increase single thread performance in many scenarios, but could increase per-thread performance in a multi-threaded scenario)

            Intel doesn’t do these things because it would lower their profits and without competition, why accept lower profits for higher price/performance?

            Intel isn’t trying because Intel doesn’t have to.

            • Andrew Lauritzen
            • 3 years ago

            Price/performance *in what application*? Not gaming as you can see clearly from the EE parts…

      • DancinJack
      • 3 years ago

      Compete at what? Beyond their GPUs they haven’t been competitive in anything for years.

      • rechicero
      • 3 years ago

      And that's not even the real issue. The fab process is. Even with the best possible design, when you need to use processes optimized and mostly proven for mobile, a couple of nodes behind and usually with worse tech… hoping for another Athlon 64 seems absurd. I do hope it's good enough to keep AMD alive and Intel honest. But that's all.

      • chuckula
      • 3 years ago

      Many people forget how AMD went from a successful company with its own fabs, real R&D instead of copying 5 year old Intel designs, and a sizeable share of the server market to the small company that it is today.

        • Srsly_Bro
        • 3 years ago

        Many people also forget Intel's anti-competitive practices and AMD paying far in excess of the actual value of ATI. Never mind the inept leadership of AMD at the same time.

        I member when AMD was great. Let’s hope Lisa Su can make AMD great again.

          • Vaughn
          • 3 years ago

          I’m a huge AMD fan still but if we look back at the history.

          Was AMD on top because of excellent execution, or simply because of an Intel blunder?

          Had Intel dropped the P4 right after Willamette and gone straight to Conroe, things would have been a lot different.

          I’m hoping for Zen to bring us back to those times.

          But never forget that big bully is still in the room and sitting on a mountain of cash!

          • chuckula
          • 3 years ago

          AMD's most profitable days occurred exactly when Intel was being all supposedly anti-competitive. Their fortunes took a nosedive when they finally "won" and got into Dell systems (and were forced to give Dell the same discounts Intel did, while stabbing in the back all of the smaller channel distributors who had shown faith in AMD when it was the underdog).

          The next phase in AMD’s self-destruction sequence that had nothing to do with Intel was when Hector* decided that getting his face on the cover of a magazine by paying too much for ATi was more important than improving on the original FX chips.

          * Fun fact 1: he was making more than Intel’s CEO when Intel was producing the Core 2 and making real money while AMD pushed “asset light” powerpoints. Fun fact 2: He lost his job over insider trading.

            • Srsly_Bro
            • 3 years ago

            Fun Fact 3. Ice cream sells better on hot days than on cold days.

        • cegras
        • 3 years ago

        You’re much more agreeable when you don’t do your predictable descent into clown-show-chuckula.

        For new readers, teach a man to fish, and he can fish forever:
        [quote<]
        [url<]https://techreport.com/discussion/30540/amd-gives-us-our-first-real-moment-of-zen?post=996584[/url<]
        [url<]https://techreport.com/news/30539/intel-announces-next-gen-knights-mill-xeon-phi-accelerator?post=996782[/url<]
        [url<]https://techreport.com/news/30394/3dmark-time-spy-benchmark-puts-directx-12-to-the-test?post=990429[/url<]
        [url<]https://techreport.com/discussion/30587/intel-kaby-lake-cpus-revealed?post=998808[/url<]

        Look you sociopathic D-bag with your "strawman" whines to hide your own history of lying, here's a reminder of how you insulted Haswell with a completely disingenuous and dishonest rant that only got upthumbed because your crew was circling the wagons in desperation: [url<]https://techreport.com/discussion/24879/intel-core-i7-4770k-and-4950hq-haswell-processors-reviewed?post=735330[/url<]

        Guess what sunshine? In 2017 AMD is flat out copying that chip* on which you heaped so much scorn in 2013. In literally every way AMD is basically waving the flag and saying that Haswell was so good that they hired Jim Keller to dump everything they've done for the last decade and just clone Intel to the best of their abilities.

        Riddle me this you little shill: You know how scared of Zen Intel is? They're so scared that according to your own little strawman Kaby Lake is not any better than the same chips Intel has been selling since 2011! That's how much AMD keeps Intel's design team up at night that Intel basically saw no need to do anything in response to Zen even though we've been hearing hype about the stupid chip since 2012. How does it feel to be that irrelevant?

        * OK, in fairness the un-core part of Zen is more like a 2009 Nehalem instead of a 2013 era Haswell.[/quote<]

          • chuckula
          • 3 years ago

          You know what’s funny?
          The only thing wrong about that comment was when I said Zen's core was a copy of Haswell.
          It’s more accurate to call it a copy of Sandy Bridge.

          Incidentally, that comment was a rebuttal to Bensam123, who had some pretty nasty and insulting things to say about Haswell, a chip that is literally three times more popular by itself than AMD's entire lineup, according to the latest TR poll. Looks like reality agrees with me.

            • DoomGuy64
            • 3 years ago

            Ha(s)well was only good because there was no competition; other than that, it was a letdown in a lot of other areas:

            * Security / potential backdoors.
            * Disabled features and no recourse for people interested in using them, not to mention bugs that disabled more.
            * Upgrade capability: none. Intel screwed compatibility, so you had to buy a new motherboard and RAM to upgrade.
            * ECC: not available, because Intel wants to charge extra for that.
            * Overclocking only allowed on the most expensive models, plus BCLK locking. Not to mention the bus is synced to the system, so overclocking it without an unlocked CPU meant stability problems. Intel made damn sure you bought their expensive models. Also, if you wanted to overclock, the initial models left out virtualization.

            The only good Haswell was Devil's Canyon, which still had TSX disabled. Haswell was a product of Intel not having any competition, and money grabbing. If AMD's Zen does everything out of the box and doesn't overcharge, it's an automatic win. The artificial product segmentation is too annoying to navigate, and I'm sure a good portion of people feel the same.

            • chuckula
            • 3 years ago

            So basically an incoherent wall-o-text with your technically illiterate reasons for why everything from Intel sucks.

            How are you going to justify all that when Zen is still slower on a core for core basis?

            • Klimax
            • 3 years ago

            Reminder: each cycle had its own new socket. Only Core 2 could fit into older Netburst mainboards, and only if the mainboard was ready. Haswell was not in any way unique regarding upgrade options from the previous socket.
            ECC: Intel has always done that. Not specific to Haswell.
            Security/backdoors: what was specific to Haswell?
            Overclocking: again, not specific to Haswell.

            And it brought quite a nice performance upgrade. That it didn't manifest in general code is a problem with general branchy code (not including HT). Not much there to be done.

            Do you have any actual good complaints which are actually specific to Haswell?

            • Klimax
            • 3 years ago

            And BTW: AM4 was an exception, very unlikely to be repeated, so if you are looking to AMD for "upgradeability" then you are looking to the wrong entity. (And if they try the same thing again, AMD is dead.)

            • MOSFET
            • 3 years ago

            Not in total disagreement, DG64, however:

            Intel screwed the [i<]Broadwell desktop release[/i<], so you had no upgrade path. If it wasn't so close to Skylake, a Core i7-5775C in a Socket 1150 board (with a refreshed Z97 or solid Z87) looked like a pretty good, long-lasting CPU upgrade path.

            Plenty of desktop i3s can do ECC. (Still, it is needless product segmentation, but you [i<]can[/i<] build a cheap(ish) Intel server with ECC.)

            • the
            • 3 years ago

            Which i3s support ECC is not consistent, and the chipset requirements are a bit steeper for Skylake than for Haswell i3s.

            Still needless product segmentation on behalf of Intel.

            • cegras
            • 3 years ago

            Oh no, he insulted a computer chip!

      • ronch
      • 3 years ago

      Yup. I've been saying that, apart from my occasional bashings of AMD, particularly their mktg. dept.

      • derFunkenstein
      • 3 years ago

      Believe me, nobody forgets how small AMD is. With every failure, the first comment is "well, AMD doesn't have the R&D budget Intel does."

        • w76
        • 3 years ago

        And speaking of the R&D budget, let's say that, okay, AMD pours everything it has left into Zen. Zen knocks it out of the park. Great. Meanwhile, Intel is working on its next die shrink and the couple of architectures and die shrinks that come after that. If AMD doesn't already have those future improvements in the oven right now too, rather than waiting for Zen revenue, Intel will (once again) leave them in the dust.

        I’d like AMD to talk a little more about future chips, because right now I get the sense that it’s Zen or bust, with not much materializing unless they win with Zen first.

          • Amiga500+
          • 3 years ago

          No doubt the aim will be to iterate off Zen for a couple of generations.

          If your baseline is good enough, you don’t need to reinvent the wheel.

          Even with the Bulldozer series, AMD managed to improve IPC fairly well from BD through Piledriver, Steamroller, and Excavator.

          Barcelona also made a massive jump to Deneb.

          The last time AMD was semi-competitive* with Intel was the initial months after launching Deneb and Thuban. If they could have launched Barcelona on 45nm and competed directly with Penryn, things would have looked quite different.

          *not at ultra-high end, but at least within 15-20% of this performance level.

      • BurntMyBacon
      • 3 years ago

      Regarding this:
      [quote="1sh"<]Many people forget how small AMD is compared to Intel yet they are still able to compete. Intel is like 20 times bigger than AMD...[/quote<]

      So... if Intel is a Great White shark, then AMD is an Albacore Tuna, or perhaps a Baikal seal.

      • BaronMatrix
      • 3 years ago

      AMD got smart years ago and does Open stuff with PARTNERS.. OpenCAPI will be used for Opteron based on a Credit Suisse talk yesterday…

      They are also founders in GenZ protocol… Another thing Intel isn’t in…

    • DragonDaddyBear
    • 3 years ago

    The “leaked” link states the content was removed at the request of AMD. Anyone have more details on what was leaked?

      • derFunkenstein
      • 3 years ago

      Screenshot of the leak here:

      [url<]http://wccftech.com/new-amd-zen-blender-benchmarks/[/url<]

        • DragonDaddyBear
        • 3 years ago

        Isn’t that the exact same thing they did a demo of that wasn’t a leak? I’m going Krogoth on this. One benchmark does not impress me.

          • derFunkenstein
          • 3 years ago

          Yeah, after that Blender benchmark leak they released Blender benchmarks. Which makes me think that AMD accidentally uploaded the “leak” themselves. Whoops.

    • allreadydead
    • 3 years ago

    Zen CPUs are our future. The competitiveness of those CPUs will determine how much of a discount we will be able to get on our next Intel CPU purchase…

    • the
    • 3 years ago

    What I would like to see is a surprise appearance of a Vegas based GPU. That would hit my interests.

      • chuckula
      • 3 years ago

      [quote<]What I would like to see is a surprise appearance of a Vegas based GPU.[/quote<] The Wayne Newton or Penn & Teller version?

        • cygnus1
        • 3 years ago

        Hey now, Penn & Teller put on a damn good show. Don’t compare them to AMD… :p

        • Neutronbeam
        • 3 years ago

        Garth Brooks version…it has clocks in low places.

      • Firestarter
      • 3 years ago

      and neuter its performance in its first public benchmarking? That’d be just the kind of strategy I’d expect of AMD’s marketing dept

      • Mr Bill
      • 3 years ago

      A big gamble…

    • derFunkenstein
    • 3 years ago

    Wait wait wait. This guy from EG is a Dota 2 player. You can play Dota 2 on my dad’s Athlon II X2 system. I am not prepared to be impressed by this thing pushing 100fps in Dota 2.

      • shaq_mobile
      • 3 years ago

      Yeah not sure what any pro gamer brings to the table, as the biggest games are all playable on hardware a decade old…

      I guess this is AMD showing us they are still hip and with it? At least it’s not Fear. Even Sumail has a better personality than Fear.

        • bfar
        • 3 years ago

        If he was called Damage, I might be interested!

      • BurntMyBacon
      • 3 years ago

      Interesting perspective. I figured an older, less GPU intense game would be more capable of teasing out the differences between CPUs than the latest GPU burner. Granted, frame rate is purely academic above your monitor’s refresh rate, but it will still give you a rough idea of how the processors compare.

    • christos_thski
    • 3 years ago

    One question. Seeing how we're only a few months away from release, aren't the chips finalized already? Why not release them to the press for full-on reviewing and spoil Intel's Christmas sales? (I mean, from AMD's perspective.)

      • chuckula
      • 3 years ago

      It is interesting how TR has a front-page article about a full review of a retail i7-7700K processor from a somewhat credible source as a "leak" for a chip that is launching in about a month.

      In the meantime, the benchmark leaks for Zen are far more sporadic from far less reputable sources and are purportedly only of “engineering sample” parts with no retail parts anywhere in sight.

      What makes this weird is the assumption that Zen is actually going to be commercially available in January. Something tells me there will be a launch of Zen in January, but not of the silicon variety.

      • DPete27
      • 3 years ago

      Yeah, we all know AMD is perfectly fine with soft/paper launches. Unless their actual launch date a few months from now is already intended to be a paper launch….(mind blown)

      • Waco
      • 3 years ago

      They’ve stated a few times now that final silicon isn’t ready yet.

      • travbrad
      • 3 years ago

      Most CPU sales are to OEMs anyway so I doubt it would make much difference, and that’s assuming it’s actually competitive in the first place.

    • DPete27
    • 3 years ago

    Who wants to bet they choose GPU-limited games for their demo?

      • derFunkenstein
      • 3 years ago

      It’s their most efficient processor! Benchmarked with their most efficient GPU, the RX 460!

        • christos_thski
        • 3 years ago

        Seems like a uselessly controlled and restricted demonstration to me too, to be honest.

        What allows me to retain a degree of optimism is that they put on an even more ridiculous show for the RX 480 (that hysterical demonstration of Hitman… running. Just running.), but the end product did not turn out all that bad at all… (though perhaps overhyped and priced higher than promised, it's still a good GPU).

          • derFunkenstein
          • 3 years ago

          Until the chips (and the motherboards, of course) are in the hands of independent reviewers, I just can’t believe. I won’t let myself. And lots of folks feel the same way.

            • DPete27
            • 3 years ago

            C’mon, you know the AMD fanbois are gonna eat this $#!7 up.

            (ducks for cover)

      • chuckula
      • 3 years ago

      It's not the first time that [url=http://www.techpowerup.com/img/11-09-24/116a.jpg<]AMD claims their less expensive platform is equal to or better than Intel for games.[/url<] But we all know how that turned out.

      Fun page that brings back nostalgia for the Bulldozer launch hype [the 56x faster than a 2600K slide belongs in some type of Hall of Fame (or Shame)]: [url<]https://www.techpowerup.com/152569/amd-fx-8150-looks-core-i7-980x-and-core-i7-2600k-in-the-eye-amd-benchmarks[/url<]

      • Waco
      • 3 years ago

      I would love for them to run a Titan X (Pascal) on a current i7 versus the top-end or comparable Zen chip.

      It won’t happen, but it would certainly make a lot of people that distrust AMD take note.

    • chuckula
    • 3 years ago

    December 12 you say?
    Do you already have the review sample and is it giving you off-by-one errors?

    [quote<]​Watch live on 12/13 at 3 pm CST for a sneak preview of our new “Zen” CPU​[/quote<]

      • chuckula
      • 3 years ago

      Oh I see how it is. Ninja-edit when the heat is on.

        • DancinJack
        • 3 years ago

        I saw it. I’m actually surprised too, as there were three mentions of the 12th. Must have really been set on the 12th for some reason.

          • RAGEPRO
          • 3 years ago

            That was me (the writer); the site says 12/13 and for some reason my brain fixated on the 12 really hard. Sorry about that, guys.

            • chuckula
            • 3 years ago

            It’s fine 😛

            • DancinJack
            • 3 years ago

            what he said ^

            • UberGerbil
            • 3 years ago

            Your European Brain was just insisting that 12/13 must be the 12th day of the 13th month.

            • Voldenuit
            • 3 years ago

            Stupid Smarch weather!

            • morphine
            • 3 years ago

            RAGEPRO = USian.

            I’m the token European. And it’s ultimately my fault for editing the post 🙂

            • Wirko
            • 3 years ago

            My European Brain ordered me to upvote this. I guess it’s wired wrong but I’m not sure if I can do anything about it.

            • Chrispy_
            • 3 years ago

            Can we stop calling it a “European” Brain and start calling it a “non-US” Brain?

            The entire rest of the planet does it the normal, logical, sensible way* and the US way is ass-backwards.

            *edit – actually, to avoid pedantic retorts, there are two ways – forwards and backwards, but either way is in order of significance. The US system is just completely FUBAR, and its disjointedness from the rest of the world is a huge cause of human error (spoken by a global sysadmin having to deal with linked sites in multiple countries, including the US).

      • morphine
      • 3 years ago

      FDIV bug.

        • DancinJack
        • 3 years ago

        Nah these are just ints.

          • derFunkenstein
          • 3 years ago

          It originally read the 12.0th.

          • BurntMyBacon
          • 3 years ago

          [quote="Article"<]Watch live on 12/13 at 3 pm CST for a sneak preview of our new "Zen" CPU​[/quote<]
          [quote="morphine"<]FDIV bug.[/quote<]
          [quote="DancinJack"<]Nah these are just ints.[/quote<]

          12/13 is definitely not an int.

          ... Too much?

          ... I'll see myself out.

        • Concupiscence
        • 3 years ago

        At least it’s not F00F.

          • rephlex
          • 3 years ago

          F00F was a non-event, FDIV was a real problem.

    • ozzuneoj
    • 3 years ago

    Good timing AMD.

    *prepares popcorn*
