AMD intros 35W Richland mobile APUs

Richland is here. The successor to AMD’s Trinity APU has been shipping to PC makers since January, and today, we can tell you more about what it has in store—and what it doesn’t. Although Richland will replace Trinity in notebooks and on the desktop, it isn’t really new silicon. Trinity’s Piledriver-based CPU cores remain, and the integrated GPU is still rooted in the graphics architecture of the Radeon HD 6900 series. Richland is even fabbed on the same 32-nm process as Trinity, albeit with some more efficient transistor tuning. AMD hasn’t taped out a new chip design. Instead, it’s taking fuller advantage of the hardware already built into the chip. 

Dueling die shots. Left: Trinity. Right: Richland. (Source: AMD.)

The biggest difference between Richland and Trinity can be found in the power management department. Both chips manage power states with an integrated 32-bit microcontroller. Trinity’s Turbo mechanism adjusts clock speeds based on how much power the chip is consuming. Real-time power measurements are used to estimate the temperature of the chip, and the algorithms are relatively conservative to account for different cooling solutions and ambient environments. The silicon actually includes a network of integrated temperature sensors that aren’t utilized fully by Trinity. In Richland, these sensors feed into a new Hybrid Boost power management scheme.

Microprocessors take a little time to heat up when they’re put under load, so changing clocks based on actual temperatures should allow higher frequencies to be maintained for longer periods—especially if the previous estimates were conservative ones. Intel realized extra clock headroom when it switched to temperature-sensitive Turbo tech in Sandy Bridge, and Nvidia benefited from a similar approach when it started considering temperature in its second-generation GPU Boost implementation. AMD’s clock-boosting mechanism has now joined the party. 
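
To make the idea concrete, here is a minimal sketch of the difference between boosting on an estimated power figure and boosting on a measured temperature. The thresholds, clock states, and toy thermal model below are illustrative assumptions, not AMD's actual Hybrid Boost firmware.

```python
# Illustrative sketch only: hypothetical thresholds and a toy thermal model,
# not AMD's firmware. It shows why reacting to measured temperature can hold
# a boost state longer than reacting to a conservative power estimate.

AMBIENT_C = 45.0      # assumed starting die temperature
TEMP_LIMIT_C = 95.0   # assumed thermal limit

def power_estimate_boost(power_w):
    """Trinity-style: back off as soon as estimated power implies worst-case heat."""
    return 3.5 if power_w < 25.0 else 2.5  # GHz

def temperature_boost(temp_c):
    """Richland-style Hybrid Boost: hold the high state until the die actually gets hot."""
    return 3.5 if temp_c < TEMP_LIMIT_C - 5.0 else 2.5  # GHz

temp = AMBIENT_C
for step in range(200):
    power = 30.0  # sustained heavy load
    temp += 0.02 * (power - 0.3 * (temp - AMBIENT_C))  # crude first-order heating model
    # Early in the run the die is still cool, so temperature_boost() keeps returning
    # 3.5GHz while power_estimate_boost() has already dropped to 2.5GHz.
    clocks = (power_estimate_boost(power), temperature_boost(temp))
```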

Source: AMD.

In addition to gaining temperature sensitivity, Richland’s power management has adopted smarter algorithms to balance the processor’s CPU and GPU components. Trinity increases the clock speeds of whichever component demands more power, but Richland’s new Intelligent Boost algorithm is more discerning. When one of the chip’s components—the CPU or GPU—requests extra juice, Intelligent Boost attempts to determine whether the component is truly bottlenecking system performance. If it isn’t, Intelligent Boost will save power rather than raising the voltage and clock speed.
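
A rough sketch of that decision, with assumed inputs and thresholds (AMD hasn't published the actual heuristic), might look like this:

```python
# Hypothetical sketch of the bottleneck check described above; the utilization
# thresholds and the "headroom" input are illustrative assumptions.

def intelligent_boost(requestor, cpu_util, gpu_util, headroom_w):
    """Grant extra voltage/clock only if the requesting block appears to limit performance."""
    if headroom_w <= 0:
        return "hold"          # no power/thermal budget left
    if requestor == "gpu" and cpu_util > 0.9 and gpu_util < 0.7:
        return "hold"          # GPU asked, but the workload is CPU-bound
    if requestor == "cpu" and gpu_util > 0.9 and cpu_util < 0.7:
        return "hold"          # CPU asked, but frames are GPU-bound
    return "boost"             # requester looks like the real bottleneck: spend the power

# Example: in a GPU-bound game, the CPU's request is denied and the power is saved.
print(intelligent_boost("cpu", cpu_util=0.45, gpu_util=0.98, headroom_w=4.0))  # -> hold
```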

The power management algorithms are more refined, too. AMD has spent more time profiling individual workloads since Trinity’s release, and Richland’s algorithms have been updated accordingly. Some application-specific optimizations have been implemented, as well, although AMD says it uses those only sparingly.

Richland complements its more advanced power-management algorithms with finer-grained clock speed control. AMD has added more points along the chip’s frequency and voltage curve, giving the power management engine more options when determining the optimal combination for a given workload. The processor’s TDP is also configurable, allowing customization for notebook makers with unique needs.
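
As a rough illustration of what a denser frequency/voltage curve plus a configurable TDP gives the controller to work with, consider the sketch below; the P-state table and the power model are made-up numbers, not Richland's real tables.

```python
# Made-up P-state table and power model, purely to illustrate the mechanism.

P_STATES = [  # (GHz, volts): a denser curve gives the controller more in-between choices
    (2.5, 0.900), (2.7, 0.950), (2.9, 1.000), (3.1, 1.050), (3.3, 1.100), (3.5, 1.175),
]

def estimated_power(freq_ghz, volts, base_w=10.0):
    # Dynamic power scales roughly with f * V^2; the constants here are invented.
    return base_w + 8.0 * freq_ghz * volts ** 2

def highest_state_within_tdp(tdp_w):
    """Pick the fastest point on the curve whose estimated power fits the configured TDP."""
    fitting = [s for s in P_STATES if estimated_power(*s) <= tdp_w]
    return max(fitting) if fitting else min(P_STATES)

print(highest_state_within_tdp(35.0))  # stock mobile envelope
print(highest_state_within_tdp(25.0))  # a notebook maker dialing in a lower TDP
```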

Thanks to these improvements, AMD claims Richland consumes 17% less power than Trinity at idle and 38% less when playing 720p video. The two APUs purportedly offer comparable power consumption when browsing the web, though. Those numbers are based on AMD’s fastest mobile Trinity part, the A10-4600M, and its Richland-based counterpart, the A10-5750M. Here’s how the latter APU looks next to the rest of the Richland chips being unveiled today:

| Processor | CPU cores | CPU clocks | L2 cache | GPU | Graphics ALUs | GPU clocks | Max DRAM |
| --- | --- | --- | --- | --- | --- | --- | --- |
| A10-5750M | 4 | 3.5/2.5GHz | 4MB | Radeon HD 8650G | 384 | 720/533MHz | 1866MHz |
| A8-5550M | 4 | 3.1/2.1GHz | 4MB | Radeon HD 8550G | 256 | 720/515MHz | 1600MHz |
| A6-5350M | 2 | 3.5/2.9GHz | 1MB | Radeon HD 8450G | 192 | 720/533MHz | 1600MHz |
| A4-5150M | 2 | 3.3/2.7GHz | 1MB | Radeon HD 8350G | 128 | 720/533MHz | 1600MHz |

AMD is only introducing standard-voltage 35W mobile chips today, and the A10-5750M is the flagship. It’s joined by another quad-core part and a pair of duallies, all of which share the same thermal envelope. Compared to the Trinity APUs they’ll replace, the Richland chips have 200-300MHz higher CPU clock speeds and an extra 53-65MHz on the GPU side. The L2 cache sizes and Radeon ALU counts remain unchanged, but the A10-5750M gets a bit of a boost thanks to support for 1866MHz memory. Other Richland APUs, much like mobile Trinity processors, are limited to 1600MHz RAM. (Desktop-bound Trinity chips can run their memory at up to 1866MHz, however.)
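
For a sense of what that memory bump is worth to the integrated GPU, here is the back-of-the-envelope bandwidth math, assuming dual-channel DDR3 running at the rated transfer rate:

```python
# Peak theoretical bandwidth for dual-channel DDR3 (64-bit, i.e. 8 bytes, per channel).
def ddr3_bandwidth_gbs(mt_per_s, channels=2, bytes_per_channel=8):
    return mt_per_s * channels * bytes_per_channel / 1000  # GB/s

print(ddr3_bandwidth_gbs(1866))  # ~29.9 GB/s for the A10-5750M
print(ddr3_bandwidth_gbs(1600))  # ~25.6 GB/s for the rest of the mobile lineup, ~14% less
```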

Although the integrated Radeons have received only clock-speed adjustments, AMD has given them new, 8000-series model numbers. The integrated GPUs can participate in Dual Graphics teams when combined with AMD’s 8000-series mobile graphics parts, one of which we looked at late last year.

Somewhat surprisingly, AMD isn’t making any lofty claims about the performance of the 35W Richland parts versus their predecessors. At CES in January, AMD touted a 40% jump in graphics performance and a 10-20% boost in CPU performance over Trinity, but that was for ultra-low-voltage models aimed at ultrathin laptops. The gains likely won’t be as dramatic for APUs with higher thermal envelopes, which suggests Richland’s eventual desktop incarnations may benefit the least.

There’s more to Richland than fancier power management and slightly faster frequencies. AMD has introduced several platform-level enhancements, including support for the quick resume and wireless connect tech built into Windows 8. Nevertheless, Richland APUs will drop into existing sockets and work with current chipsets.  

Source: AMD.

Richland-based systems will come with a bundle of AMD-branded software, as well.

One of the bundled apps, Gesture Control, will use GPU acceleration to translate hand waving into commands for media playback, web browsing, and other applications. Gesture Control appears to rely on a webcam rather than a true 3D camera, which is great for compatibility but probably means lousy precision. Another app, Face Login, will let you use your webcam to log into Windows or access websites. Folks who want to stream content to remote televisions will be able to use Screen Mirror, which promises a low-latency connection and requires DLNA-compatible hardware on the receiving side. Screen Mirror appears to be broadcast-only, so it’s more of an answer to Intel’s Wireless Display tech than a competitor for Nvidia’s Project Shield. AMD will also combine Richland with a collection of video software—Quick Stream, Steady Video, and Perfect Picture HD—that will handle bandwidth prioritization, stabilization processing, and dynamic image adjustment, respectively.

The 35W mobile Richland parts listed above are shipping to notebook makers now, and they’ll be joined by 25W and 17W parts in the first half of the year. We don’t have a timeline for desktop versions of the chip, but we know they’ll work in the same FM2 socket as Trinity processors.

Richland may not be with us very long, because AMD is on track to deliver another APU later this year. Dubbed Kaveri, this chip will feature updated Steamroller CPU cores alongside a new integrated GPU based on the Graphics Core Next architecture. Kaveri will be built on a smaller 28-nm process, and rumor has it the chip could incorporate a GDDR5 memory interface.

Comments closed
    • anotherengineer
    • 7 years ago

    I hope the desktop version has decent refinements as well. I would like to repurpose my htpc guts to a PC for the garage, and would need a new mobo/cpu/apu combo.

    • Shobai
    • 7 years ago

    reading this on my Droid 4 and just realised that the tables don’t display properly in portrait mode, there’s info lost off the right-hand side of the screen. Is rotating the orientation the only fix?

    • tbone8ty
    • 7 years ago

    screw all this hubbub we want more frametime testing!

    • revparadigm
    • 7 years ago

    Why is AMD wasting time polishing a turd that will be flushed when Haswell arrives?

      • derFunkenstein
      • 7 years ago

      If Haswell’s launch is anything like Ivy’s, it’ll be a long while before there are $100 Haswell-based CPUs.

        • A_Pickle
        • 7 years ago

        …aren’t there like… a bunch of cheap IB CPU’s out nowadays? Could’ve sworn there was an $86 IB Pentium…

          • derFunkenstein
          • 7 years ago

          Yes, a year after launch (roughly) they are coming out.

            • A_Pickle
            • 7 years ago

            I feel like that low-end IB CPU has been out for well over a year… and really? Is Sandy Bridge THAT bad?

      • stmok
      • 7 years ago

      [b<]revparadigm[/b<] says...

      [quote<]Why is AMD wasting time polishing a turd that will be flushed when Haswell arrives?[/quote<]

      => Because this turd (Richland) is a "Product Refresh" to buy AMD engineers time for the Steamroller-based Kaveri APU... The latter comes at the end of the year.

      AMD puts out processor roadmaps. Consider reading them. Kind of like this one...

      => [url<]http://img849.imageshack.us/img849/4998/amd2013roadmapzps90ed6b.jpg[/url<]

      • Airmantharp
      • 7 years ago

      Why is it a turd?

        • Alexko
        • 7 years ago

        Because AMD suxxx and Intel roxxx!!!11!one1!

        Or something to that effect. Never mind AMD’s substantial graphics lead—neither trolls nor fanboys are very bothered by reality.

          • revparadigm
          • 7 years ago

          Or because Intel has a vastly superior cpu design? I am very aware of Intel’s weak graphics. I am also aware that Haswell core will offer far superior cpu performance and probably better battery life. I’m not interested in portables for blazing 3D. I’ll take Intel’s, thank you.

          • NeelyCam
          • 7 years ago

          Not so substantial in 17W TDP envelope…

          [url<]http://forums.anandtech.com/showthread.php?t=2258512[/url<]

          3DMark06
          Trinity: 3,625
          Ivy Bridge: 4,549

          Wait, wut? Ivy Bridge has a substantial graphics lead at 17W..?

            • sschaem
            • 7 years ago

            Wait , what ? Ivy bridge with Twice the memory bandwidth got a higher score ?
            [url<]http://www.3dmark.com/3dm06/16808052[/url<]

            • NeelyCam
            • 7 years ago

            That’s such a convenient excuse, but in spite of my [i<]extensive[/i<] efforts, I haven't been able to find a 17W Trinity with dual-channel memory anywhere to be able to compare more evenly. All we have to go with is the one I listed (IB wins) and the one raddude9 pointed out for a [b<]higher-TDP[/b<] Trinity w/ dual-channel memory that barely beats the ultrabook IB by a few percent.

            Sorry, but until you or someone else can point to a true apples-apples comparison where Trinity takes the lead, [i<][b<]Ivy Bridge is the undisputed graphics champion at 17W TDP[/b<][/i<]

            • chuckula
            • 7 years ago

            Dude did you look at that page carefully? THE RADEON 7550M IS A DISCRETE GPU WITH A FULL GIGABYTE OF DEDICATED MEMORY!!!!
            You just posted a result of a crossfire enabled Trinity with both the IGP and a discrete GPU with the accompanying higher power draw barely beating a 17-watt Ivy Bridge with only an IGP.. if that is a “win” for AMD then I’d hate to see what they call a loss.

            • sschaem
            • 7 years ago

            My point was that comparing an ivy bridge equipped with dual channel memory, while the Trinity is crippled with a single chanell make as much sense as the link I posted when Trinity wins.

            You seem outraged that I pointed to that link.. this is the same thing neelycam is doing with his post… comparing apple to oranges.

            And my only statement was
            “Wait , what ? Ivy bridge with Twice the memory bandwidth got a higher score ? “

            • NeelyCam
            • 7 years ago

            I don’t think that’s the case… tools report the AMD IGPs that way – AMD just “names” them based on something somewhat comparable on the discrete side

            EDIT: I take it back; clearly I was wrong. The linked 3dMark06 was with a discrete GPU.

            • raddude9
            • 7 years ago

            That HP sleekbook was crippled with single channel memory, that EDGE VS8 review (on Tuesdays shortbread) gives a 3D Mark score with some slow dual channel memory, so I’ll just take the liberty to rewrite your table as:

            3DMark06
            Trinity: 4,832
            Ivy Bridge: 4,549

            • NeelyCam
            • 7 years ago

            No mention of the CPU or its TDP. The system consumed 34W under load… making this probably a 25W TDP Trinity, hardly comparable to a 17W Ivy Bridge ultrabook chip.

            And even with the power disadvantage, Ivy Bridge [i<]almost[/i<] beat Trinity. At the same TDP, Ivy Bridge is superior.

            • Alchemist07
            • 7 years ago

            I always wonder, what does NeelyCam get from the constant bashing of AMD and promotion of Intel?

            Does he get paid for this or just a sad fanboy at home on his computer who spends all his time doing this?

            • chuckula
            • 7 years ago

            Neely is a troll, but his posts are massively more factually accurate than many up-thumbed posts from the AMD fanboy squad. Case in point: Look at the thread above where sschaem’s post about how Intel’s IGPs suck because a trinity part [b<]with a discrete GPU including dedicated RAM in crossfire mode[/b<] is able to edge out a 17watt Ivy Bridge GPU by a small margin.... who is really getting paid again?

            • NeelyCam
            • 7 years ago

            [EDIT: I was wrong; sschaem’s post was for a discrete GPU]

            Some sites are claiming the APU used is A8-4555M, which some different sites claim is a 19W part. So, seemingly somewhat close to Ivy Bridge’s 17W, but why does the system consume 34W if the APU is 19W..? Is this 19W a “Scenario Design Point” like fake-TDP?

            Bottom line: this system consumed way more power than the Ivy Bridge one, and got only about 6% higher scores. (In fact, if I remember right, when I ran 3DMark06 on the NUC, I got something higher… I should check that again..)

            • sschaem
            • 7 years ago

            I only stated this
            “Wait , what ? Ivy bridge with Twice the memory bandwidth got a higher score ? ”

            And posted a score where Trinity wins. If you are going to equip ivy with 2x the channels, and cripple in one test, why not show the other scenario where Ivy its crippled and Trinity is given the advantage ?

            Double standards…

            • NeelyCam
            • 7 years ago

            I’ve been trying to find examples that are fair, but haven’t been able to do so. Raddude pointed to the Sapphire system – that’s the closest to an apples-apples, imo, but 2W higher TDP and significantly higher system power should be accounted for.

            You linking a score from a discrete GPU is a bit disingenuous, though, don’t you think?

            • sschaem
            • 7 years ago

            I was thinking the same about comparing an APU setup in dual channel and another with only half the bandwidth resource and claiming one having superior graphics 🙂

            Yes, I haven’t found a decent review of a 17w trinity laptop with dual channel memory enabled.

            The A8-4555M is rated at 19 watt, ok, but it does pack 50% more GPU compute. 384 units vs 256

            • NeelyCam
            • 7 years ago

            [quote<]I was thinking the same about comparing an APU setup in dual channel and another with only half the bandwidth resource and claiming one having superior graphics :)[/quote<] That's fair - they were both apples/oranges.. 3DMark06 also has a CPU component that's probably helping Ivy Bridge to keep up with the Trinity scores.

            • raddude9
            • 7 years ago

            WTF! [quote<]massively more factually accurate[/quote<]... I don't think so, Neely deliberately linked to a review of a trinity based system that was crippled by it use of single channel RAM, and tried to use that as proof of how AMD's APU sucked at low power envelopes. How is that any different to the trolling that you are accusing sschaem of?

            • NeelyCam
            • 7 years ago

            It’s for entertainment value.. at least it used to be. AMD fanbois were sensitive and full of hypocrisy, making them perfect targets for some light trolling. Now they don’t seem to care anymore

            As chuckula said, I always try to keep my trolling based on facts, or at least reasonable arguments where facts aren’t available, and I admit it if I’m proven wrong. I also try to keep it respectable; I don’t make personal insults unless I’ve been insulted several times first.

            • raddude9
            • 7 years ago

            Surely it would have been quicker to google the answer rather than speculate (incorrectly in your case).

            The chip in question is the 19W AMD A8-4555M. So it’s very comparable to the Ivy Bridge ultrabook chip.

            In my book this makes your statement[quote<]At the same TDP, Ivy Bridge is superior.[/quote<] incorrect. The AMD chip gives people a better gaming machine in the ultra-low-power envelope.

            • NeelyCam
            • 7 years ago

            The 12% TDP increase gave it 6% higher score. It could be argued that a 19W Ivy Bridge part would’ve beaten it, don’t you agree?

            My biggest issue with the reviewed system there is the 34W load power. Assuming the APU was running at 19W, where is the extra 15W going? I’m wondering if AMD is trying to pull some sort of an “ACP”/”SDP” trickery here..

            I guess my point is that if we looked at [b<]system[/b<] power, Ivy Bridge ultrabook is within 10% of graphics performance while running at a significantly lower power consumption envelope even though it also had a display to power up.

            • raddude9
            • 7 years ago

            It’s not an Apples-to-Apples comparison. Ultrabooks usually use different components and are built to different standards than nettops. For example, the EDGE VS8 uses a 500GB mechanical hard drive, whereas ultrabooks usually use SSDs. Then there’s power supply efficiency, number of ports, etc.. Anyway, if you really want to look at [b<]system[/b<] power, can you point me to an IvyBridge ultrabook that uses a 17Watt CPU, but will only use 17Watt under full load, including screen etc.... I don't think so.

            A much better comparison Apples-to-Apples comparison would be with the much vaunted NUC, which can burn though a very high 46Watts under a GPU load:

            [url<]http://www.xbitlabs.com/articles/mainboards/display/intel-nuc-dc3217iye_7.html[/url<]

            How does a system with a 17Watt CPU manage to use 46Watts. That's a 29 Watt difference for a system that does not have the excuse of having to run a mechanical hard drive. And you are complaining about 15W going missing in the AMD system! So which company is more likely to be trying some TDP-related trickery exactly?

            And no, increasing the TDP by 12% does not mean that the 3DMark score would increase by more than 6%. There's way too many variables to make an assumption like that.

            To recap, the 33.4Watt AMD system gives a better 3DMark score than the 46Watt NUC system. So we're agreed then, AMD still rules 3DMark in the sub 20 Watt CPU category.

            • NeelyCam
            • 7 years ago

            Good arguments. 🙂 This is why I like the debates here at TR.

            I remember looking at those NUC numbers from xbitlabs, and wondering what’s wrong with their system. I have the GbE version of NUC (no Thunderbolt, no WiFi), and it’s idling somewhere around 6W if I remember right. Under load, it’s also well below those numbers.. I don’t remember the exact wattage, but I can check it tonight.

            2.5″ HDD uses maybe 2W more than an SSD.

            [quote<]To recap, the 33.4Watt AMD system gives a better 3DMark score than the 46Watt NUC system.[/quote<]

            Newer 3DMarks rely heavily on features like tesselation that don't really work on HD4000 that well. 3DMark06 would be a fairer comparison

            [quote<]So we're agreed then, AMD still rules 3DMark in the sub 20 Watt CPU category.[/quote<]

            When you word it that way, I certainly can't disagree

            Overall, this was a solid victory - congratz!

            • raddude9
            • 7 years ago

            thanks, I’d be interested to hear what kind of power numbers you’re getting on your NUC if you get them, the Thunderbolt and WiFi probably count for part of the difference at least.

            • NeelyCam
            • 7 years ago

            I might test them tonight and post here if I have time. I still have it plugged in into a KillAWatt (it’s measuring “average use”), so I don’t even have to reboot.

            Any suggestion on the load benchmark? Last I tried 3DMark06, it crashed….

            • NeelyCam
            • 7 years ago

            Idle (i.e., writing this post) = 9W (CPU @0-1%)
            4-thread prime = 20W (CPU @100%)
            Furmark = 38W (CPU @1-3%)
            4-thread prime & Furmark = 40W (CPU @100%)

            Very interesting… how is this a 17W TDP part if the system goes to 40W when loaded? And power consumption with graphics loaded and CPU at near-zero was pretty surprising. I have fast memory, but still…

            • NeelyCam
            • 7 years ago

            I ran the 3DMark06 again – this time it didn’t crash (I wonder what happened during my earlier attempt).

            [b<]Score: 4953[/b<]

            [url<]http://www.3dmark.com/3dm06/17185381[/url<]

            I guess the memory speed really matters a whole lot (mine is 2x4GB of DDR3L-1600, CL9). So, NUC took the crown*, but I'm still not sure how apples/apples this is with DDR3-1333 vs. DDR3-1600. I suppose the only fair conclusion we can make is that Trinity and NUC are pretty much in the same ballpark in this power envelope.

            *EDIT: It took the aggregate 3DMark06 crown, mainly because of the CPU score. Compared to sschaem's scores with a discrete card, IB inside NUC had almost 2x higher CPU score, but lost on the other two by 5-10%. The Sapphire system review didn't show the sub-scores, so I don't know how NUC compared to that

            • raddude9
            • 7 years ago

            Congrats on the score… Quite Interesting, so both you and xbitlabs came up with the same figure for the difference between Idle usage and full load, 31 Watts. Although Xbitlabs got a higher figure when it just loaded the GPU (46W), and not the GPU+CPU (44W). Seems like the NUC’s CPU has a hard time keeping it’s GPU component running efficiently. Regardless though, that’s a lot of juice for what is supposed to be a 17W CPU, perhaps their TDP calculations exclude the GPU part of the CPU?

            Yep, I’d say it’s probably the memory speed that pushed your numbers a bit higher, most of the Trinity reviews especially showed big 3DMark differences when running with faster memory.

            I’m not quite sure if I’d say that Trinity and the NUC are in the same ballpark! In my arbitrary binning of systems (which I do when it suits me of course), Laptop-based Trinity systems seem to be firmly in the 30W category, but NUC systems are in the 40W category 😉

            I’ve been looking into these low-power systems lately as I’ve started building a new machine for my dad (his 2007 iMac is failing) based, of course, on an AMD A10-5700 (65W), I’m looking forward to running a few benchmarks on it now…

            • NeelyCam
            • 7 years ago

            I was comparing to the Sapphire system that has a 19W Trinity APU in it, and dual-channel memory – the closest thing to an apples/apples we have. Techpowerup review was linked on Tuesday’s shortbread:

            [url<]http://www.techpowerup.com/reviews/Sapphire/Edge_VS8/5.html[/url<]

            The system scores 4832 on 3DMark06 (vs. 4953 that the NUC had). Techpowerup says "load" power consumption is 33.4W, but they don't say what "load" is, and we don't really know how they measured it (did they calculate out the AC adapter inefficiency?).

            Meanwhile, these guys measured power consumption with a KillAWatt off the wall, just like I did with the NUC, and say they loaded the system with both 3DMark and Cinebench (I had 4-way prime and Furmark):

            [url<]http://www.kitguru.net/desktop-pc/zardon/sapphire-edge-vs8-mini-pc-review/23/[/url<]

            They got 38W, I got 40W. Both the power numbers and 3DMark06 scores are very close. I'm not sure why you don't agree Trinity and Ivy Bridge are in the same ballpark...

            • raddude9
            • 7 years ago

            I see, kitguru seem to be loading both the CPU and GPU, and techpowerup don’t mention how they come up with their load figures, I was assuming that they know what they’re doind and they loaded up both components at the same time. As you point out though, that’s not necessarily a good assumption to make. I had a look around for another review that measures the full CPU+GPU load power of the VS8 but the only thing close I could get is:
            [url<]http://uk.hardware.info/reviews/3872/4/sapphire-edge-vs8-review-compact-mini-pc-with-amd-a8-energy-consumption-and-noise-levels[/url<]

            But all they mention is an Idle rating of 19W and a GPU load rating of 29W, nothing about CPU + GPU. So I'll have to concede to you on that point, both systems are in the same power ballpark (until I can prove otherwise).

            The kitguru Idle power numbers look a bit high though, 27W, I know measuring power like this is an inexact science, but their value is about 10W higher than the values mentioned in other reviews.

            • NeelyCam
            • 7 years ago

            [quote<]The kitguru Idle power numbers look a bit high though, 27W, I know measuring power like this is an inexact science, but their value is about 10W higher than the values mentioned in other reviews.[/quote<]

            I know - that looked a bit strange. Maybe they didn't have all the power management settings enabled..? As far as I can tell, it was exactly the same system as the Techpowerup.

            Oh, by the way, I also ran 3DMark11 on the NUC a while back. I don't remember the exact score (I have it saved on the NUC), but it was something pretty awful. It sort of looked like the new features 3DMark11 tests weren't really working that well on HD4000

            • raddude9
            • 7 years ago

            Yea, the NUC’s 3DMark11 scores are not going to be brilliant, 3DMark11 relies on DirectX 11 and the HD4000 graphics are a bit behind the times in that regard. This benchmark:
            [url<]http://www.hardwarezone.com.sg/product-sapphire-edge-vs8-mini-pc[/url<] gives the EdgeVS8 a 3DMark11 score of 912 and a slightly more powerful HD4000 chip a score of 650.

            • NeelyCam
            • 7 years ago

            Yeah; I checked my NUC score: P625. Kinda bad… lol

            I’ve been wondering if I should recommend a Trinity system to a friend or not. One concern I have is the Netflix/Silverlight issue Brazos had… is Trinity fast enough now (or has Netflix enabled hardware support)? Another was brought up by Kurotetsu’s request of trying those 10-bit H.264 anime decodes where in some cases NUC was running at 95% CPU.. does Trinity have some dedicated hardware to handle that, or is the CPU able to do it well enough?

            This would be for a HTPC, and I’m hesitant to recommend NUC because of its price.. but I don’t want to suggest this Sapphire system either if it can’t handle 10-bit H.264 files (he’s into anime big time).

            I guess I could ask him to wait for Richland/Haswell..

            • raddude9
            • 7 years ago

            As for Netflix HD streaming, I was looking into some similar issues for my own trinity build and according to Anand, Trinity supports GPU acceleration for Netflix:
            [url<]http://www.anandtech.com/show/6335/amds-trinity-an-htpc-perspective/6[/url<]

            They're looking at an A10-5800 system there, but the CPU and GPU utilization were low enough (about 25%) that it looks like it would be fine on the slower 4555m system.

            But for 10bit H.264 files the answer is a definite NO (for the moment at least), Anand have some details on this page:

            [url<]http://www.anandtech.com/show/6335/amds-trinity-an-htpc-perspective/7[/url<]

            I don't think any GPU's support 10bit H.264 decode acceleration right now, reliably at least (I saw mention of a Nvidia CUDA based system), and systems like the Edge VS8 and NUC would struggle (hitting 95% on the NUC probably means it's dropping a few frames). I don't think Richland or Haswell will necessarily allow hardware decoding of these files, so I don't think waiting for a new chip is the answer. I think your friend just has to step up to a more desktop-type chip to get a bit of breathing space, for me, it would be a straight (but difficult!) choice between an i3-3225 or an A10-5700.

            • NeelyCam
            • 7 years ago

            Thanks for the good info. I guess the worry about Netflix is a thing of the past.

            The 10b/H264 thing may or may not be an issue; NUC [i<]almost[/i<] handles it with the CPU alone (when I was watching it, I couldn't pick up the missed frames, although I'm sure they are there). I wish I knew how a Trinity system handles it. I guess that's my point about RIchland/Haswell/Kaveri - the next-gen system might have enough CPU horsepower to push through it in a low-enough TDP envelope that this is no longer an issue

      • clone
      • 7 years ago

      because guys like me aren’t going to wait and will be buying sooner than later which will earn them money…. sorta the idea behind running a business.

      but I guess in your world everyone at Intel should have thrown up their hands and quit the day the Athlon was introduced back in 1998.

      • anotherengineer
      • 7 years ago

      Successful troll was successful!!

    • Sam125
    • 7 years ago

    Richland seems like a pretty good stopgap measure before Kaveri. The review should be somewhat interesting. The 17w mobile part would clearly be the best CPU to watch out for.

    • NeelyCam
    • 7 years ago

    [quote<] The silicon actually includes a network of integrated temperature sensors that aren't utilized fully by Trinity. In Richland, these sensors feed into a new Hybrid Boost power management scheme.[/quote<]

    Yet another Intel innovation that AMD is blatantly copying. First powergates, then Turbo, now this. I think we can all agree now that Intel is the one driving innovation, and AMD is merely a copycat, stealing other companies' brilliant ideas when they can't come up with their own.

      • MadManOriginal
      • 7 years ago

      I bet *that* is what the recent rumors about an Intel-Apple partnership were about…there really wasn’t talk of fabing chips or anything like that, they just got together and had a good time complaining to each other about other companies copying them.

        • NeelyCam
        • 7 years ago

        I think you’re right. In fact, getting depressed by all the copying is probably why Steve Jobs died and Paul Otellini quit

      • DeadOfKnight
      • 7 years ago

      Intel has also copied AMD in the past. All is fair if the consumer benefits, I say.

        • NeelyCam
        • 7 years ago

        I know, and I’m all for copying – everyone should copy [i<]everything[/i<] because, as you said, it benefits consumers. I just enjoy pointing out the hypocrisy of AMD fanbois who are all up in ARMs claiming that Intel copies AMD (with the implication that AMD doesn't do that, even though in reality AMD copies Intel way more than Intel copies AMD)

          • A_Pickle
          • 7 years ago

          I think companies copying eachother and offering ever-cheaper yet ever-better products is the greatest thing ever.

          That said, I think it’s a bit disingenuous to argue that the module architecture of Bulldozer is anything like HyperThreading…

      • Sam125
      • 7 years ago

      Intel is following AMD into a heterogeneous computing future. That’s big picture stuff that Intel hasn’t been quite able to focus on. Haswell was a reactionary measure to AMD’s proposed HSA silicon.

      AMD may have “copied” (although everyone copies or borrows ideas from each other, that’s why the industry is so advanced) minor technology but Intel is completely following AMD when it comes to major tech. Hate to burst your little bubble there, Neely. :p

        • NeelyCam
        • 7 years ago

        [quote<]Intel is completely following AMD when it comes to major tech. [/quote<]

        Now you're just being silly. I forgot to add: multi-core, graphics integrated into the same package... Haswell is rumored to introduce high-speed graphics memory (into the same package, even!)... all these [b<]major technologies[/b<] AMD is copying.

        Next AMD will probably copy [i<]Quicksync[/i<], Haswell's integrated voltage regulators, or integrated PCI Express... I can't believe they [i<]still[/i<] don't have integrated PCIe! Charlie said it'd happen in 2005, laughing how Intel will be two years later:

        [url<]http://www.theinquirer.net/inquirer/news/1016069/amd-to-integrate-pcie[/url<]

        Well, didn't happen. Intel has been the absolute champion of integration, and AMD is way behind.

        EDIT: I meant "Quicksync" - not Quickpath

          • Sam125
          • 7 years ago

          [quote<]rumored to introduce high-speed graphics memory[/quote<]

          That's an old idea that's been done since the 80's, Cam. :p

          If you [i<]really[/i<] want to get into old tech stuff, AMD was the first to offer dual core CPUs and Intel had to rush out a P3 equivalent that was literally two single cores fused together on one package. Then there's the first to 1GHz AMD win back when Intel's marketing convinced consumers that clockspeed was the most important factor. AMD64 forced Intel to adopt it when they were dragging their feet on 64bit CPUs (Although Intel attempted to undermine AMD by removing AMD's bitkill feature until Intel had their own copied version ready). AMD was the first commodity chipmaker to come out with an IMC (using Alpha tech bought from DEC) which Intel copied the idea from. There are likely many more that I've likely forgotten to mention.

          Although to be completely fair, it's really asinine to say one company copies another when most of these major ideas come from academia or they're logical progressions from current technologies and each company then researches and patents their own methods of implementation. This is where Intel usually wins as they have a much larger research budget, and last I checked is larger than AMD's entire annual budget.

          However, at any rate, you should learn your history, kid. 😀

            • Airmantharp
            • 7 years ago

            [quote=”Sam125″<]Although to be completely fair, it's really asinine to say one company copies another when [u<]most of these major ideas come from academia or they're logical progressions from current technologies[/u<] and each company then researches and patents their own methods of implementation.[/quote<] Exactly. When reading Neely's OP, I was asking myself, "who wouldn't do that?". Most of the advances he lists as 'technologies' are the exact next steps that were needed, at the time. And when it comes down to it, Intel is hands down the 'technology' leader.

            • A_Pickle
            • 7 years ago

            Also! AMD was the first major CPU vendor to use high speed interconnects, versus AGTL+ frontside buses, to feed data to the CPU, so QuickSync isn’t exactly Intel pioneering.

            It [i<]does[/i<] make for a bloody fast computing experience nonetheless.

          • Antimatter
          • 7 years ago

          Are you aware that AMD had Sideport Memory for their IGPs several years ago, it’s only logical that something similar would be added to their APUs. Quickpath is just a COPY of Hypertransport that AMD introduced in 2003.

          • abw
          • 7 years ago

          Next AMD will probably copy Quickpath did you say ?…

          Why should they copy what is essentialy a carbon copy of Hyper Transport.?..

          Yeah , for intel it was really a quick path to find an adequate bus ,
          just tell how much you re biaised and clueless.

            • NeelyCam
            • 7 years ago

            My mistake – I meant QuickSync. Thank you for your offensive correction.

            By the way, QuickPath was faster than HT.

            • abw
            • 7 years ago

            Speed is exactly the same…Check the thing if you wants….

            • NeelyCam
            • 7 years ago

            Sorry, no – Hypertransport tops out at 6.4GT/s:

            [url<]http://www.hypertransport.org/default.cfm?page=HyperTransportSpecifications[/url<]

            Quickpath goes up to 8GT/s:

            [url<]http://ark.intel.com/products/64595/Intel-Xeon-Processor-E5-2670-20M-Cache-2_60-GHz-8_00-GTs-Intel-QPI[/url<]

            • abw
            • 7 years ago

            There s a blatant mistake in intel s link…Speed is the same..

            Both are 32bit point to point bus with 3.2Ghz clock , do the maths
            or check wiki…

            [url<]http://en.wikipedia.org/wiki/Intel_QuickPath_Interconnect[/url<] [url<]http://en.wikipedia.org/wiki/HyperTransport[/url<]

            • NeelyCam
            • 7 years ago

            So, you’re actually pointing to Wikipedia as a more reliable source of information than Intel’s own product spec page…?

            Look; I usually rely on Wikipedia almost blindly, and consider it to be a pretty solid (and supremely convenient) source of information. When I saw your post, I started by checking the wiki pages for the two, but since I remember reading something about QuickPath going faster than HT, I googled some more, and found that ark.intel.com page.

            My guess is that the wiki hasn’t been updated with the latest info. See the reference [1]; it says “The initial product implementations are targeting bit rates of 6.4 GT/s and 4.8 GT/s.” It looks like the later “product implementations” increased the speed from the HT3.1 max of 6.4GT/s. If you click on the ark link here:

            [url<]http://ark.intel.com/products/64595/Intel-Xeon-Processor-E5-2670-20M-Cache-2_60-GHz-8_00-GTs-Intel-QPI[/url<]

            and hover over the "Intel® Xeon® processor E5-2600 Product Family" drop-down menu link, you'll see a list of various server parts, with different QuickPath speeds (6.4GT/s to 8GT/s).

            Either there are multiple "blatant mistakes" there, or QuickPath is faster than HT. Which one do you think is correct?

            • abw
            • 7 years ago

            Some CPUs can manage 4Ghz QPI speed but the plateforms
            will only work at 3.2Ghz.

            As pointed , QPI is essentialy a rebranded HTransport , there s no way
            that the same bidirectional bus with the same width and frequency would
            yield a higher bandwith , safe perhaps in some fanboys minds.

            Anyway , back to the original discussion , the wiki links
            clearly show that QPI , quick path to innovation , is a copycat
            of Hypertransport.

            • NeelyCam
            • 7 years ago

            [quote<]Some CPUs can manage 4Ghz QPI speed but the plateforms will only work at 3.2Ghz.[/quote<]

            [url<]http://www.supermicro.com/products/motherboard/Xeon7000/[/url<]

            [quote<]"Supermicro motherboards based on the Intel® C602 chipset support four Intel's Xeon® E5-4600 series processors (MP) with QPI up to 8GT/s."[/quote<]

            But please keep doubling down on your misplaced belief that QuickPath and HyperTransport have the same max frequency, like a loyal fanboi should.

            [quote<]Anyway , back to the original discussion , the wiki links clearly show that QPI , quick path to innovation , is a copycat of Hypertransport.[/quote<]

            They are both based on DEC Alpha EV7's interconnect technology, separately developed. Intel didn't copy AMD's HT. Now, if you want to talk about copying, should we talk about how AMD blatantly copied Thunderbolt, including the name ("Lightningbolt"... I mean, seriously?!?)

            • abw
            • 7 years ago

            You are a rough boy , neely , but still not enough….

            The CPU can do 8GT/s because it has four QPI links as is the case
            with 4P plateforms , but each link will only do 6.4GT/s.

            Do not confuse an aggregated bandwith with the bus effective speed ,
            8GT/s is what the CPU can manage with four active 6.4GT/s QPI links.

            • NeelyCam
            • 7 years ago

            4×6.4GT/s -> 8GT/s…?! Now you’re just so way off base that I don’t think anything I say will change your mind.

            Seriously – read up on this a little bit. Do you know what 8GT/s means? Or how 3.2GHz translates to 6.4GT/s?

            • abw
            • 7 years ago

            8GT/s out of four 6.4GT/s links…..

            The CPU cant use the four links with their full bandwith simultaneously.
            If it use a link full bandwith then only 1.6GT/s will remain at disposal for the three
            remaining links , if the CPU bandwith is shared equaly among all links then each
            one will see 2GT/s but if ever the CPU use only a single link then there will be
            no more than 6.4GT/s running through this said link.

            What you did wrongly assume is that a single link could use the CPU
            full 8GT/s bandwith but that s not the case , it s like thinking that a dual
            channel memory controller could use all of its bandwith concentrated in
            only one of his channel.

            • NeelyCam
            • 7 years ago

            [quote<]The CPU cant use the four links with their full bandwith simultaneously. If it use a link full bandwith then only 1.6GT/s will remain at disposal for the three remaining links[/quote<] My god... Why do I feel like I'm being trolled...? I mean, this can't be real...right?

            • abw
            • 7 years ago

            You are not trolled , simply you forgot your maths courses,
            and also have a strong “intel is forcibly better” bias…

            In a 4 sockets plateform each CPU has four QPI links
            that goes to the three other CPUs for the first three links
            while the remaining link is used for I/O with peripherics.

            Each QPI link allow 6.4GT/s x 4 bytes x 2 directions = 51.2 Gbytes/s/link
            wich is effectively the QPI bandwith at 3.2Ghz , the same as Hypertransport.

            As already mentionned , the CPU cant make use of 4 links at their full
            51.2Gbyte/s SIMULTANEOUSLY , it is limited to 8GT/s whatever the ratio
            for each link.

            You are confusing QPI bus speed with the CPU 4 QPI links management
            capability , wich is not the same thing.

            • NeelyCam
            • 7 years ago

            One more try, and then I quit.

            [quote<]As already mentionned , the CPU cant make use of 4 links at their full 51.2Gbyte/s SIMULTANEOUSLY , it is limited to 8GT/s whatever the ratio for each link.[/quote<]

            This doesn't make any sense. "N GT/s" is not a "ratio" - it's the data transmission rate for each [b<]lane[/b<] of the link. The clock frequency of the QuickPath link running at 8GT/s is 4GHz, and a bit is transferred (transmitted/received) twice per clock period (hence 8 gigatransfers per second). In contrast, HyperTransport is running slower, at 3.2GHz, and a bit is transferred (transmitted/received) twice per clock period, so the lane speed is 6.4 gigatransfers per second. Simple fact: QuickPath lanes are 25% faster than HyperTransport lanes.

            [quote<]You are not trolled , simply you forgot your maths courses, and also have a strong "intel is forcibly better" bias...[/quote<]

            My math is fine, and whatever bias you may think I have doesn't change facts.

            • abw
            • 7 years ago

            Either you dont understand nothing or are trolling..

            I think it s the second option , though….

            QPI has 51.2Gbyte/s transfert bandwith , not 64Gbyte/s whatever
            your manipulation to make the 8GT/s relevant with an imaginary
            4Ghz frequency that you invented.

            There s no 4Ghz QPI , it works at 3.2 , it was plagiarized from
            Hypertransport and renamed to make intel look innovative ,
            in fact as innovative as Apple that did patent a rectangle….

            • NeelyCam
            • 7 years ago

            [quote<]There s no 4Ghz QPI , it works at 3.2 [/quote<]

            Absolutely wrong. QuickPath physical layer is a half-rate architecture, and as the [b<]single-lane[/b<] data rate is running at 8GT/s, the clock is 4GHz. If you don't understand this, or the difference between a "lane" and a "link", you should step back and reflect on if you should in fact have this discussion with me or not.

            [b<]Wikipedia hasn't been updated to mention the increased max rate, but Intel's product specs show the higher rate[/b<]. I gave you a link to the specs, which for some strange reason you dismissed as erroneous. Is this still your position, that QuickPath doesn't run at 8GT/s, and Intel's product sheets just have errors all over the place?

            • abw
            • 7 years ago

            You re voluntarly staying in denial.

            You would be hard pressed if asked to provide a link that
            officialy claim 4Ghz frequency , you only provided a link
            to a CPU spec that say that it can manage 8GT/s but
            without specifying if it s on a single link , wich is not
            the case , no doubt that you did made a search but
            finding nothing you re stuck to deliberate denial….

            • NeelyCam
            • 7 years ago

            [quote<]8GT/s but without specifying if it s on a single link[/quote<]

            It should be obvious that it's on a single [b<]lane[/b<] - you can even refer to Wikipedia to find out that the 4.8GT/s and 6.4GT/s refer to a single lane. If you don't know/understand that, this topic is over your head.

            [quote<]provide a link that officialy claim 4Ghz frequency[/quote<]

            Similarly, this should be obvious, when talking about a half-rate I/O.

            Look - if you want to learn more about this, I can explain it to you, but I'd like to ask you to lose the attitude first. If you don't want to learn this, that's fine - you probably won't really need to know any of this that well anyway.

            • abw
            • 7 years ago

            Still no results googling the thing..??..

            You are left generating an hollow discourse , you ll find nothing
            to substantiate your claims , i have no doubt that you eagerly
            googled but alas , no results , and it s not by chance…..

            The only one in the whole web claiming a 4GHZ QPI is you….

            Next time be more cautious before assuming “intel is better”
            as a being a quasi religious belief…

            • NeelyCam
            • 7 years ago

            I give up. Your troll was successful.

            [url<]http://forums.anandtech.com/archive/index.php/t-230509.html[/url<]

      • dextrous
      • 7 years ago

      Yes, Intel is the only one driving innovation. This AMD64 thing was really Intel’s idea…. IMCs too! How can anybody take you seriously?

        • NeelyCam
        • 7 years ago

        AMD copies Intel way more than Intel copies AMD. Curiously, since IMC was introduced in DEC Alpha, and all the Alpha IP was sold to Intel, AMD actually – yet again – copied Intel technology when they implemented IMC.

        AMD64 is the [i<]only[/i<] major innovation Intel has adopted from AMD. Did you see the laundry list of innovative tech AMD has copied from Intel...?

          • A_Pickle
          • 7 years ago

          Also, seriously?

          Don’t AMD and Intel have a patent cross-licensing agreement so the DOJ doesn’t ram an anti-trust stick up Intel’s ass?

            • NeelyCam
            • 7 years ago

            Yeah.. but one of them innovates, and the other one relies on DOJ for getting the benefits of innovation

            • anubis44
            • 7 years ago

            <<Yeah.. but one of them innovates, and the other one relies on DOJ for getting the benefits of innovation>>

            You mean one of them innovates and the other one threatens their own customers if they consider using another company’s CPUs:

            “executives of the Gateway Corporation said their company paid a high price for doing business with A.M.D. and that Intel had “beaten them into guacamole”:
            [url<]http://www.nytimes.com/2005/06/29/technology/29chip.html?_r=0[/url<]

            ... or they pay off Dell for remaining Intel-exclusive, to the tune of hundreds of millions of dollars:

            [url<]http://www.pcpro.co.uk/news/359770/intel-sweeteners-made-up-76-of-dells-income[/url<]

            Yeah, if somebody breaks into your house and steals your stuff, you're a real pussy for reporting the theft to the police and having them arrest the criminal so you can, you know, retrieve what's yours... rrrriiiigggghhhhtttt! WTF? Do you live in an armed compound or something? What kind of logic is that?

            If you really want to support a piece of crap company like that, be my guest. The thought of buying anything from Intel nauseates me. It would be like buying something from a guy just after you watched him beat his wife. Just don't bitch when their nasty business practices harm YOU, the customer.

            • A_Pickle
            • 7 years ago

            That’s a bit sensationalist. I’d say there was a fair bit of evidence indicating that Intel was guilty as sin of EXACTLY what AMD was accusing them of. You beat your chest about Intel, and how badass their chips are compared to the anemic AMD chips, but you don’t even recognize the company’s wrongdoings when they’re dancing naked in front of you.

            What if Intel [i<]hadn't[/i<] strong-armed OEMs into using only Intel chips? AMD might be in a better market position today. Intel MADE $11 billion in 2012. AMD "made" -$1.8 billion. Were you cheering when the United States "won" the War in Iraq? Because that's what your unwavering loyalty seems like. "Stupid AMD, can't even keep up with Intel at 1/20th the budget. LAME." <-- That's you. That's what you sound like.

            • NeelyCam
            • 7 years ago

            [quote<]"Stupid AMD, can't even keep up with Intel at 1/20th the budget. LAME." <-- That's you. That's what you sound like[/quote<] No; I'm more like "Stupid AMD fanbois that think that AMD can keep up with Intel at 1/20th of the budget. LAME." I've been saying for a while now that AMD [b<]can't possibly[/b<] keep up, [b<]exactly[/b<] because of that significantly lower budget, but a bunch of AMD fanbois scream "Yes We Can!" Then when I point out that, as a result of that disadvantaged R&D budget AMD's products are getting asswhooping by Intel's products, they call me a dumb fanboi. I'm not attacking AMD. I think they are doing great considering the tiny budget they are doing it with. I'm just saying that it's not enough. They [i<]should[/i<] copy everything Intel invents as long as they do it legally - that's the smart thing to do when you don't have the R&D budget to do that innovation yourself. No - I'm attacking their fanbois that think AMD doesn't copy anything

          • anubis44
          • 7 years ago

          But the man who invented IMC, Jim Keller, moved to AMD after DEC was sold. Are you suggesting Intel should own his brain, or that he should have his memory wiped so that he can’t engineer an even better version for AMD?

          Also, when you say AMD ‘copies’ Intel way more than Intel copies AMD, are you only speaking in terms of quantity, or are you speaking in terms of quality? Because AMDx86-64 is worth a dozen minor innovations from Intel. Intel really didn’t want x86 to carry on to the 64 bit world, because they wanted to have a complete monopoly with IA-64. Not only can Intel thank AMD for x86-64, so can all of us; if not for AMD driving the x86 server market into the 64 bit realm, we’d have only Intel to turn to for mainstream high performance computing in servers and desktop computers. Then guess what your desktop CPU would cost? Clue: it wouldn’t be ~$200, more like ~$1000.

        • Airmantharp
        • 7 years ago

        Who’s idea was it?

        Intel could have done x86-64 or used IMC’s at any time, and they did it when it made business sense for them to do it. Remember those Phenom II’s coming out after Core 2’s and still getting spanked despite their IMCs? Or that Intel had x86-64 before there was even a 64-bit desktop OS (Vista)?

        Intel didn’t want to do IMC’s when AMD did because they had advanced their pre-fetching considerably trying to optimize Netburst. They didn’t [i<][b<]need[/i<][/b<] IMCs for Core 2, and in addition to that, they had fabs that were still pumping out northbridges. AMD was instrumental in pushing the industry towards x86-64, I believe largely because Intel had already invested a huge amount of resources into developing the VLIW Itaniums. And they haven't let the Itanium go yet, either; it's not practical now for the volume markets, but the architecture is still a huge advance over the more limited fixed-instruction CISC front-ends of x86 CPUs as well as the RISC back-ends, and of the other RISC architectures out there (PowerPC, ARM, etc.).

      • Hirokuzu
      • 7 years ago

      Just because somebody is first to market doesn’t mean that they had the idea first. Just because a computer engineer has an idea doesn’t mean they have the R&D budget to actually implement it (or they’re in the wrong side of the architecture development cycle). Design cycles for these things take years (I mean, look at how long it took for Bulldozer to come out).

        • NeelyCam
        • 7 years ago

        [quote<]Just because somebody is first to market doesn't mean that they had the idea first. [/quote<] That's a slippery slope.. This could lead to me claiming that Intel actually invented 64b computing - they have been around much longer than AMD, after all, so it'd only make sense. We can't continue our absolutely unnecessary arguments if we have to establish - without any doubt - who in fact came up with the idea first. So, I suggest we go with who was first to implement it in working silicon... that's much easier to track

          • Airmantharp
          • 7 years ago

          Intel could have made a 64-bit Pentium II, and AMD could have made a 64-bit K6. Indeed, the current Core series descends from that classic P6 architecture that debuted in the Pentium Pro/II.

          AMD even stated that adding 64-bit to the K7 to make the K8 was only a 5% increase in logic; the rest of the additional transistors over the K7 come from the IMC.

          AMD only did it because it made sense. With Intel pushing the VLIW IA-64 set, which wouldn’t easily shared with AMD, AMD knew that they had to do something, and with Microsoft’s support, they won the bet. And we won too- which IS good.

        • cartman_hs
        • 7 years ago

        so?

      • clone
      • 7 years ago

      Texas Instruments was first to patent a means of producing multiple transistors on one package (by individually threading thin copper wire between them), so it was apparently notably unoriginal of Robert Noyce (founder of Intel) to find a practical way to do it which eventually led to the creation of CPU's?

      Intel didn’t find a way to implement it’s technology to work with an AMD cpu so no AMD is not really copying Intel unless you believe idea =’s ownership & innovation…. which is a horribly slippery slope that is being debated as we speak in the courts.

      was the first guy to throw a rock at someone else an innovator, and everything that’s followed since the actions of complete hypocrites because they are copying the idea?… I know, I’m just having fun poking huge holes in fanboy absurdity.

        • Sam125
        • 7 years ago

        Bell Labs played a larger role than Texas Instruments had in the development of the transistor. Most of the physicists working on FETs at Bell went on to win Nobel Prizes. Which is also why all of the major semicos are in California and not Texas. Just keeping it real. :p

          • MadManOriginal
          • 7 years ago

          People at Bell Labs certainly did good fundamental research, and TI did a lot with production and commercialization of the transistor, but one thing I’d note is that transistors were basically miniaturized vacuum tubes for a long time. I’d rank Fairchild Semiconductor above TI, and as the true grandfather of the modern semiconductor manufacturing industry, because of the integrated circuit and the companies it spawned, with Bell Labs as an ‘ancient ancestor’. Fairchild Semi was the starting point for all the great semi manufacturers in Silicon Valley (and elsewhere) aside from TI and IBM – neat tidbit: these companies were called ‘Fairchildren’ and include the likes of Intel and AMD.

          Here’s a great documentary that any tech nerd who is interested in the history of the industry needs to watch:

          [url<]http://www.pbs.org/wgbh/americanexperience/films/silicon/player/[/url<] What's amazing to me after viewing that and digesting it is just how much a few people can radically change the world. Robert Noyce is one of those people.

            • Sam125
            • 7 years ago

            +1 Yeah, I watched that documentary a few weeks ago on PBS, good stuff. Although I would say even non-nerds would appreciate that documentary.

          • clone
          • 7 years ago

          If you want to keep it real, it was Britain’s John Ambrose Fleming in 1904/1907 who invented the vacuum tube, which inspired Julius Lilienfeld in Canada to patent the idea of using surface states as a possible way to “copy” the vacuum tube’s abilities in solid-state form, which led to Shockley’s work in New Jersey… not California.

          The reason why California became the birthplace of Silicon Valley instead of Texas wasn’t because of Bell Labs (no physicists aside from Shockley would leave Bell) but because William Shockley was able to initially lure the brightest graduates from across the country to his company in Mountain View, California. Specifically, 8 of those graduates eventually quit working for Shockley in protest of his management practices and, joining with Fairchild Camera, founded Fairchild Semiconductor. From Fairchild, those 8 employees went on to found 65 companies in Silicon Valley, and along with other former Fairchild employees they created more than 145 companies, including Intel and AMD.

            • MadManOriginal
            • 7 years ago

            ^^ Starting with Shockley, all outlined in the documentary I linked. It’s really fascinating stuff!

            • clone
            • 7 years ago

            MadMan, I’ve seen that documentary, which is more about the birth of Silicon Valley than it is about the birth of the transistor… it all depends on how you want to look at it. The documentary was a little dry to watch, but it did tell 3 very interesting stories: why the “8” left Shockley Semiconductor, the effect their rebelling had on the culture of Silicon Valley that eventually spread across the nation in later years, and the drastic changes that took place in Mountain View going from an agricultural economy to a technology one… but it “all” can’t start with Shockley unless there were no vacuum tubes before Shockley, which was not the case; as mentioned, John Ambrose Fleming introduced the vacuum tube, but it didn’t start there either, because the underlying ideas had been around since the 1800s.

            • Sam125
            • 7 years ago

            [url<]http://www.nobelprize.org/educational/physics/transistor/history/[/url<] [url<]http://www.pbs.org/transistor/tv/index.html[/url<] There you go.

            • Sam125
            • 7 years ago

            [url<]http://www.dailytech.com/Toronto+Researchers+Increase+Solar+Efficiency+by+35+Percent+in+Quantum+Dot+Photovoltaics/article30085.htm[/url<]

            • NeelyCam
            • 7 years ago

            Cost-competitive..?

            • Sam125
            • 7 years ago

            Hmm…if something like this were to succeed then all signs would point to: Yes!

            Also, did you read the comments? I did a few calculations analyzing the energy output of a hypothetical system. 🙂

            • NeelyCam
            • 7 years ago

            Yeah, that would be great! Ban the evil oil; bring on solar and wind!

            I went to read your comment there (I assume you’re sam07?); interesting stuff. I’ve been waiting to fill the second sun-facing roof section with panels, as soon as they get efficient and cheap enough.

            (I’m sure you know, though, that kW is a measure of power instead of energy, right…?)

            • Sam125
            • 7 years ago

            [quote<]Yeah, that would be great! Ban the evil oil; bring on solar and wind![/quote<] Whoohoo! Finally, someone who doesn't always want more of the same. Although just to be boringly realistic, the new energy future will involve solar and wind but mainly rely on fusion power with some more exotic methods also being viable in niche cases (Oil will still be big for most of this century though.). Also, yeah my Dailytech account name is Sam07 and solar panel technologies have really stagnated for half a decade now, so the newer generation panels should only become more efficient and cheaper. The sooner the better, I say as researchers in this area have really become lazy. :p [quote<](I'm sure you know, though, that kW is a measure of power instead of energy, right...?)[/quote<] That's kW/(1 day cycle) so it's a unit of energy. 😉

            • NeelyCam
            • 7 years ago

            [quote<]That's kW/(1 day cycle) so it's a unit of energy. ;)[/quote<] Tricky... so, I guess it's kWd, then..?

            • Sam125
            • 7 years ago

            That’s a good question. I’d say just leave out the “d” since saying “kilowatts” rolls off the tongue better than saying “kilowatts per day”.

            • NeelyCam
            • 7 years ago

            It’s kilowatt-days instead of kilowatts/day. Energy is power*time, not power/time.
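
            To put numbers on it — a quick Python sketch, using a purely hypothetical 5 kW array rather than anything from the linked article:

# Hypothetical numbers for illustration only
power_kw = 5.0                  # assumed average output of a 5 kW array
hours_per_day = 24.0            # one day cycle

energy_kwh = power_kw * hours_per_day   # energy = power * time -> 120 kWh
energy_kwd = energy_kwh / 24.0          # same energy expressed in kilowatt-days -> 5 kWd

print(energy_kwh, "kWh =", energy_kwd, "kWd")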

            • Sam125
            • 7 years ago

            Sure, still just exclude the “d” as even that’s not technically correct. 🙂

            • Sam125
            • 7 years ago

            Just to provide more information on what I wrote earlier:

            [url<]http://bit.ly/SWEH28[/url<] The future is bright indeed for those who aren't afraid to become more advanced and not more bass ackwards. :p

            • clone
            • 7 years ago

            I already knew.

      • flip-mode
      • 7 years ago

      It’s people, Neely. People come up with ideas. Not “Intel” or “AMD”. People at both Intel and AMD come up with ideas every day. I don’t think anyone should get upset when good ideas spread. That’s the way it’s all supposed to work. People at AMD come up with ideas too, you can be assured. Maybe you just don’t know about them or aren’t as impressed by them.

        • NeelyCam
        • 7 years ago

        [quote<]It's people, Neely.[/quote<] Soylent green? Individual people may come up with ideas, but refining them and implementing them in a practical manner is much harder and takes the effort of large teams, with support from management. That's why I said 'Intel' and 'AMD'.

      • NeelyCam
      • 7 years ago

      I’m a little disappointed. When I made a similar post some two years ago, I broke the -50 barrier in a day. AMD fanbois were much more aggressive back then, I guess, because BD hadn’t been released yet...

      Or maybe I’ve just lost my edge. I should get some new material.

        • MadManOriginal
        • 7 years ago

        You’ve just got to keep up with the times. Saying positive things about Windows 8 has done well for a while but it’s starting to get fewer negatives these days.

        • anotherengineer
        • 7 years ago

        You can start here. 😀

        [url<]http://a.wattpad.net/cover/1182843-256-937789.jpg[/url<]

          • NeelyCam
          • 7 years ago

          That sounds great – thanks!

          I couldn’t find it on Amazon, though. Maybe I’ll swing by Barnes & Noble tonight after work

      • Theolendras
      • 7 years ago

      AMD created x86-64 as we know it, was first to integrate the memory controller, first to move away from the FSB, resisted the Rambus and FB-DIMM madness, giving way to new DDR and GDDR implementations, and is one of the few companies to give GPU computing and OpenCL a purpose and an interesting hardware implementation.

      Their architecture is also largely different in Trinity: SMT is left out in favor of dual-core integer modules, not that this move has paid off though… Next will be HSA, and whatever else from there.

      So yeah, great ideas are largely borrowed and inspired from, but they both have their own implementations and both camps are doing it. There are thousands of ways to do things, and competition is good for exploring at least a few of them. When you realize the competition has a leg up because some of their choices produce better results, it’s not time to cave in, but to recognize what is more efficient and implement something similar. You still have to do the engineering to implement it; you can’t just scan the blueprints and reproduce it.

        • NeelyCam
        • 7 years ago

        [quote<]was first to integrate the memory controller, first to move away from the FSB[/quote<] You can't claim the same thing twice! [quote<]resisted the Rambus and FB-DIMM madness[/quote<] So [b<]not[/b<] doing something is innovation now? [quote<]giving way to new DDR[/quote<] Intel was first to DDR2 and DDR3 [quote<]Their architecture is also largely different in Trinity: SMT is left out in favor of dual-core integer modules,[/quote<] Yes; that's definitely a bit different from what Intel is doing. Let's see if Intel will copy it [quote<]When you realize the competition has a leg up because some of their choices produce better results, it’s not time to cave in, but to recognize what is more efficient and implement something similar.[/quote<] Agreed

          • Theolendras
          • 7 years ago

          Respectfully, this deserves clarification.

          [quote<]You can’t claim the same thing twice![/quote<]

          Technically speaking, the FSB and the memory controller are not the same thing, though I understand this can be confusing. The FSB was used for memory requests, interrupts, and inter-CPU communication on multi-CPU setups, and was a parallel protocol that let all of it share the same interface. Once the memory controller moved off the FSB, it made little sense to preserve a complex parallel protocol, so it was seen as better to redesign that interface as a serial one, and a memory-coherent one at that, to also enable NUMA configurations. It was a sensible choice, and it would have made little sense to maintain the FSB, so I see your point, but they are still different moves.

          It might have been possible to have a centralized chipset to communicate between CPUs in an FSB-like fashion.

          [quote<]So not doing something is innovation now?[/quote<]

          Not innovative, but certainly not copycat.

          [quote<]Intel was first to DDR2 and DDR3[/quote<]

          Somehow, first to market does not necessarily mean you did the R&D and specifications for something. AMD has been one of those driving new memory standards like GDDR5, and they are now developing GDDR6, for example, even if it’s not nearly market-ready right now. Last I knew, AMD had an interest in DDR2 and DDR3 development and is proposing stuff to JEDEC…

            • NeelyCam
            • 7 years ago

            [quote<]Somehow, first to market does not necessarily mean you did the R&D and specifications for something.[/quote<] I'm pretty sure Intel was heavily involved with developing DDR1/2/3, and also the future of memory - HMC. As far as I know, Intel is also involved with pretty much every I/O spec you could think of, ranging from USB to PCI, SATA to 802.11...

            • Theolendras
            • 7 years ago

            True for most, but not DDR1. Intel was very reluctant; back then, Rambus was the technology of the future for Intel. Still, no doubt Intel contributes a lot more to the technology landscape as a whole, but let’s not downplay the contributions of the underdog.

      • anotherengineer
      • 7 years ago

      I would like to blatantly copy Finland’s ‘coffee bread’, mmmmmmmmmmmmmmm.

      • anubis44
      • 7 years ago

      AMD and Intel have a cross-licensing agreement. This means AMD can use certain innovations that are developed by Intel, and vice-versa. Examples of Intel borrowing from AMD are x86-64 (64-bit x86 extensions), the on-die memory controller, and dual-core CPUs on one die. AMD is also able to implement SSE, SSE2, and SSE3 extensions in their CPUs. These two companies both actually benefit from each other as well as being arch-rivals.

    • HisDivineOrder
    • 7 years ago

    If AMD is delivering this part, then I think Kaveri is going to be delayed. Just watch. AMD will state they already filled the channel with Richland and they don’t see any reason to jump to it right away, delaying it for six months until this time next year.

    They don’t pre-announce things anymore. Just like with the 7xxx series not being updated despite indications showing that even the card makers (i.e., Asus) didn’t know ahead of time that the 7xxx series wasn’t going to be refreshed. Look at the latest HardOCP review talking about how the Asus card fell out of the channel due to changes in the roadmaps. They don’t say it explicitly, but you know that’s Asus being surprised by the fact the 7xxx series wasn’t changed over to the 8xxx series with a GPU refresh.

    Basically, AMD will deny they EVER intended to do a CPU refresh like they denied they ever intended a GPU refresh in the Spring. And they’ll state they see no reason to make a changeover at this time. It annoys me because Steamroller cores are really, really needed now for AMD to get competitive with IB, let alone Haswell or Broadwell.

    They’re bleeding money badly and they need to keep things cheap. Ain’t nothing cheaper than recycling the same products for more than one cycle while delaying products that are already done to improve performance in a year. This is how they can cull their R&D so hard and still have new products. They’re stretching what they’ve already made across the gap, hoping that they can make some profits off the existing and last until 1) a miracle happens or 2) they get bought out.

    Meanwhile, Intel and nVidia are both focused on other markets, so they’re more than happy to play along and keep trucking with what products they already have, too. Waiting for either a rainy day or the day AMD returns to competing.

      • Airmantharp
      • 7 years ago

      AMD has something real with this ‘Bulldozer’ architecture: hardware hyper-threading, and tuned for higher clock speeds. I’m hoping that they keep pushing this stuff, because if/when they get it figured out, it will be something awesome, and Intel won’t have an answer for it.

        • moose17145
        • 7 years ago

        Not sure if I agree or disagree with that assessment. Bulldozer has certainly had a rocky beginning, no doubt about that. But AMD has seemed to be making progress getting its clock speeds up a little bit while improving IPC and also keeping its thermals / power consumption in check (at least on the mobile side).

        That being said, the improvements are nice, and are definitely there, but whether this is leading up to something really good, or just another Netburst, I am as of yet undecided. Only time will tell. This architecture certainly has its strengths, as well as weaknesses. Ultimately it will be up to both AMD refining this architecture to better take advantage of its strengths and software developers making better use of multi-threading for this style of architecture to really shine.

        In some applications that are already highly multi-threaded, we have already seen these chips handily keeping up with Intel’s i7s, although typically using more power to do so. But the performance is there nonetheless.

        Where these chips tend to struggle (comparatively speaking), is in single threaded applications. For some programs this isn’t a huge issue as they can be made to be multi-threaded, but others are simply inherently single threaded and there isn’t anything that can be done to change that.

        But all of that being said… I would LIKE to see the bulldozer architecture succeed. If nothing else I find Bulldozer to be more interesting to watch and keep up with than what Intel has been doing.

          • Airmantharp
          • 7 years ago

          They have the multi-threaded stuff down- they do need to focus on single-threaded performance, and they need to watch their power usage compared to Intel’s process advantage.

          It’s not a fun place to fight back from, but they’re either going to do it or they’re going to die. Their real advantage, I think, is that they can combine competent multi-threaded CPUs with world-class graphics, and they can probably even do that while attaching to DDR4/5. That chip (or chips) that they’re building for Sony and Microsoft shows a lot of promise, and could quickly be adapted for other uses, especially with a die shrink or two.

            • Theolendras
            • 7 years ago

            Well, I think your opinion is worth a read and I mostly agree, but I find it funny that you’re saying that they’re going to die if they don’t address power efficiency and single-thread performance and then talking about their GPU differentiating factor thereafter… If they can indeed capitalize on that strength, with DirectCompute, OpenCL, and HSA, then this might be enough.

            To be frank, I still think you are right; they may not need to be first in single-thread, but they need to show they can still hang in there in those situations. Power efficiency might be even more important, or else they may be completely wiped out of the lucrative server market, and laptops won’t take off (both are sectors where they are beaten pretty badly already).

            • Airmantharp
            • 7 years ago

            You can look at it two ways- comparing them to Intel, and comparing them to Nvidia.

            Versus Intel, they have a weaker CPU across the board. Anything AMD can do, Intel can do better, with a higher margin to boot, considering Intel’s R&D budget and process development advantages, let alone their current technology.

            Versus Nvidia, they’re behind everywhere it counts: less support for developers, less-optimized drivers, less support for OpenCL vs. CUDA. Nvidia has everything covered from tablets to mainframes, including experience with ARM, which is about to get interesting.

            Take a look at the server benchmark Anandtech posted yesterday: they show a 24-module ARM server both outperforming a dual octa-core Xeon server and running at lower power under load for serving web pages. It’s a very specific workload, but it shows ARM’s potential, and it puts AMD’s server market share in jeopardy, especially in the low-cost realm, where ARM will no doubt head.

            So I agree with your argument pertaining to efficiency. AMD doesn’t have to be as fast as Intel or Nvidia, but they’re going to need to be at least as efficient, or they’re going to get eaten by ARM.

          • Theolendras
          • 7 years ago

          I agree. AMD is a relatively small company and Bulldozer was a tall order, so there was too much compromise in engineering it. They made nice progress with Trinity, particularly in power consumption. Now the remaining glaring problem, single-thread performance, must become at least remotely competitive.

          The problem is, even if you do have more multi-threaded apps, Amdahl’s Law will put you at a disadvantage too many times without good single-thread performance. Also, GPGPU initiatives can take care of embarrassingly parallel workloads way better. Some problems are inherently single-threaded. So even in a world with virtually all applications well developed for multithreading, single-thread performance will be relevant in many cases. Not to mention that many applications might never be optimized for multithreading, or at least not anytime soon…
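
          (A quick sketch of Amdahl’s Law in Python, with made-up numbers rather than measurements of any particular chip:)

# Amdahl's Law: overall speedup is capped by the serial fraction of a workload
def amdahl_speedup(parallel_fraction, n_cores):
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_cores)

# Hypothetical example: code that is 80% parallel, run on 8 cores
print(amdahl_speedup(0.8, 8))    # ~3.33x, nowhere near 8x
# Even with infinite cores, the ceiling is 1 / (1 - 0.8) = 5x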

    • moog
    • 7 years ago

    It’s amazing how much more power-efficient x86 is becoming without compromising performance. In AMD’s case, they’re bringing powerful graphics to low-cost devices. I couldn’t have imagined it would get this good a couple of years ago; it’s good to see them executing on time.

    With Blue shipping soon and bringing even more power efficiency and performance improvements over Win8, “Wintel” tablets are going to delight users.

      • Airmantharp
      • 7 years ago

      If you could get Nvidia graphics on Intel’s cores hooked up to DDR5, then life would be perfect. Intel’s cores are so small and efficient per-transistor and per-clock, and so are Nvidia’s (well, their mid-range designs from GK104 on down).

        • Alexko
        • 7 years ago

        Intel’s cores are quite power-efficient, but also much larger than AMD’s at the same process node.

        As for Kepler, it’s not significantly different from GCN in terms of efficiency, with the exception of Tahiti, which isn’t quite as good as Pitcairn and Cape Verde. Bonaire probably does a bit better than every other GCN chip.

    • brucethemoose
    • 7 years ago

    If it’s almost the same silicon as Trinity, why are the ULV parts launching ~2 months after the normal ones?

      • OneArmedScissor
      • 7 years ago

      Probably binning. Intel does the same thing.

      Though it’s fast approaching, we aren’t quite to the day where ULV parts are built as a dedicated chip and able to go on sale first. Even with Haswell, they’re still a niche platform that’s assembled with a hodgepodge of components, as has been the case since Core 2.

    • forumics
    • 7 years ago

    AMD sells Austin campus
    [url<]http://www.marketwatch.com/story/amd-sells-austin-campus-2013-03-12?link=MW_home_latest_news[/url<] For a global chip maker to sell its campus for only $164 million, they must be seriously desperate.

      • chuckula
      • 7 years ago

      [quote<] "As we reset and restructure AMD for long-term success, we are taking a number of steps designed to optimize our business and monetize assets," AMD Chief Financial Officer Devinder Kumar said[/quote<]... immediately before hitting White Castle.

      • ludi
      • 7 years ago

      Out of curiosity, what is your valuation of AMD’s commercial real estate?

      AMD was already talking about doing this last year in order to cut costs, so this is not actually surprising. Also, you might have missed the “sell and lease back” part of the story — they sold to a property management company, which means an immediate cash infusion plus letting someone else worry about leasing any unused space, paying property taxes, keeping up on utility bills and landscaping, and buying replacement parts for the air conditioner.

      While it’s always nice to have land, it’s also an expensive distraction for companies that don’t have major capital equipment investments tied to it, which is why most companies don’t do it these days.

        • sschaem
        • 7 years ago

        AMD bought and custom-built this from scratch just a few years ago…

        So it is a desperate move, but one that they planned for a while, so this is old news.

        Most of their workforce in Austin moved ‘across the street’ to the new Samsung offices.
        Maybe AMD should have done a package deal to simplify the process for Samsung?
        “Office for rent, comes free with our Brazos team for your next mobile SoC R&D efforts.”
        “Secret documents included”…

          • quasi_accurate
          • 7 years ago

          Not quite “across the street”. The AMD campus is in southwest Austin. The Samsung campus is in northwest Austin. Completely opposite sides of the town. And no, most of the workforce is not going to Samsung.

      • OneArmedScissor
      • 7 years ago

      It’s a tech company. They’re supposed to be ahead of the curve. Office complexes have been declining for years, thanks to the internet.

      That’s not to say that Intel will suddenly announce they have gone 100% work from home next year, but the trend is obvious.

      AMD spun off their foundries and now they’re even spinning off part of their chip design by shifting towards ARM. That doesn’t make it any more the beginning of the end than GPU and CPU companies consolidating.

      • NeelyCam
      • 7 years ago

      I remember seeing that news months ago… lemme see… Here:

      [url<]http://www.theverge.com/2012/11/28/3703632/amd-plans-sell-austin-campus-raise-cash[/url<] Just like back then, I think this move makes perfect sense. Gut assets to raise cash that's desperately needed.

      • A_Pickle
      • 7 years ago

      You know, VIA’s still around. AMD is shrinking, but it seems like the one thing they ARE pretty good at is… not… dying.

      They’ve consistently been able to downsize and downsize, and considering what they’ve come out with… I’m actually flabbergasted.

      • anubis44
      • 7 years ago

      You do realize that IBM does the same thing; they sold most of their real estate back in the 1990s and leased back the properties. This allows them to:

      a) Make good money on real estate, while releasing the equity contained therein for business purposes.
      b) Write off the costs of leasing on their income taxes, which you can’t do with property taxes when you are an owner (at least not in Canada – I don’t know what the tax laws are in the U.S.).
      c) Absolve yourself of the costs of maintenance and repair for the building – something that, while tax-deductible, is not something you really want as a line-item fixed cost on your balance sheet, as it can also be unpredictable.

      In my view, this is a wise use of capital that can be ploughed back into the business, especially at a time of transition in the computing business.

      So if AMD is ‘desperate’ for doing this, so is IBM.

        • sschaem
        • 7 years ago

        a) AMD lost about $50 million in this deal. They are selling at a HUGE discount.
        AMD is selling a recent construction, with heavy fees that have not been recovered yet.

        At least b) & c) made sense to AMD as they built this just a few years ago?

        The reason for this change of mind is that AMD is borderline bankrupt.
        Cash flow (lack of it) is what’s driving the sale.

        Nvidia is doing the opposite, and starting to build their own brand-new campus.
        Adobe just built a brand-new, custom-designed ‘spaceship’ in Utah,
        and Apple has a finger on the trigger to build their new campus.

        I trust Nvidia, Apple, and Adobe’s business decisions a lot more than AMD’s…

    • shank15217
    • 7 years ago

    I’m looking forward to the A10-6700 model, it might become the APU of choice till Trinity rolls around.

      • MustangSally
      • 7 years ago

      Presume you mean Kaveri?

    • Mr. Eco
    • 7 years ago

    The six AMD-exclusive features are quite good – and free. AMD Screen Mirror, compared to the expensive WiDi, is so good for home use.
    I am only suspicious of Perfect Picture HD. I don’t like software that “enhances” contrast and quality for me – I had a bad experience with a similar feature in a Panasonic TV.

    • ronch
    • 7 years ago

    I have a friend who has an A10-5800K paired with a Gigabyte A85 board. Even after installing an aftermarket cooler he’s hitting 70C at load, he says (not sure which cooler he’s using though). Considering Trinity is less aggressive than Richland in boosting MHz, I hope Richland will not be difficult to cool. Now I know all the reviews out there are saying Trinity is awesome in the power efficiency department esp. at idle, but at load you have to admit it’s not the best thing out there, and hitting 70C at load doesn’t seem too far-fetched. I actually want to get a Trinity or Richland someday, so I hope AMD has ironed out some load power efficiency kinks.

    I also have to wonder just what kind of core AMD is using when they say ‘32-bit microcontroller.’ I know Intel is using an i486-class core to control Sandy and Ivy Bridge’s power management, so it’d be interesting to know just what AMD is using. ARM, perhaps?

    Finally, it’s good to know Kaveri will ship this year. That would mean those SR cores are pretty much ready to roll. I wonder how much performance those extra decoders will net AMD.

    Edit – Can’t understand all the down-thumbs. Just sharing what my friend has experienced. I’m not sure if he did everything right, and I’m certainly not badmouthing AMD either. Just something to consider if you’re planning to get Trinity or Richland.

      • maxxcool
      • 7 years ago

      70? Even folding at a 1GHz overclock to 4.0, my hex-core doesn’t hit 70…

      • xeridea
      • 7 years ago

      Your friend got a tiny cooler, doesn’t know how to apply thermal grease, and/or has no airflow in the case. I have an FX-6300 with an aftermarket cooler. I max out at around 53C with all 6 cores loaded. The CPU fan is set to silent, and needs a bit of cleaning.

        • Theolendras
        • 7 years ago

        Agreed, or worse, he can’t differentiate Fahrenheit and Celsius.

        • ronch
        • 7 years ago

        I don’t know. Perhaps I’ll see for myself when I buy my own Trinity or Richland.

      • forumics
      • 7 years ago

      My Core i5 will throttle and shut itself down too if I don’t seat my heatsink properly or if I turn off the fan on the heatsink. On the other hand, I know that back in the Pentium 4 days, a water-cooled P4 was able to run at 3.5GHz at room temps.

      Does that make the P4 a much better chip than the i5?

      • jihadjoe
      • 7 years ago

      Equally important: what are the ambient temps where your friend lives?

        • ronch
        • 7 years ago

        Roughly 32C.

      • auxy
      • 7 years ago

      [quote=”ronch”<]I have a friend who has an A10-5800K paired with a Gigabyte A85 board. Even after installing an aftermarket cooler he's hitting 70C at load, he says (not sure which cooler he's using though)[/quote<]Ehh. I've built a lot of A10 machines and using the stock cooler at max CPU+GPU load it never hits 70C. Usually around ~62C maximum, and that's in a tiny case with no airflow. (ᅌᴗᅌ* )[quote="ronch"<]Edit - Can't understand all the down-thumbs. Just sharing what my friend has experienced. I'm not sure if he did everything right, and I'm certainly not badmouthing AMD either. Just something to consider if you're planning to get Trinity or Richland.[/quote<]Except you ARE bad-mouthing AMD. You're repeating an unverified and vague negative experience the veracity of which you yourself admit is questionable, and yet you repeat it nonetheless and then go on to affirm that it's "something to consider". You may not realize it, but you're creating and/or perpetuating a negative perception that is unjustified. (・`ェ´・)つ You should think more carefully about what you say.

        • NeelyCam
        • 7 years ago

        Your emoticons are beyond creepy

        • ronch
        • 7 years ago

        Hey dude, I’m talking about something someone I know has experienced and I did put there CLEARLY enough in the edit that this was his experience and I don’t know if he did everything right or not. Are you suggesting that I keep quiet despite these occurrences? People [b<][u<]WILL[/b<][/u<] talk about products, whether you like it or not, the same way people talk about how some car brands are crappy or cost a lot to maintain. That's why people wanting to buy a product ask those who already have it for comments on the product, whether good or bad, whether the owner is using the product properly enough to make the most out of his ownership experience or not, to help them make informed buying decisions. It's just how the world works, pal. [i<]Live with it.[/i<] For the record, I am sporting an FX-8350 and it does hit almost 70C in the BIOS with the stock cooler and ambient temp is around 32C. Then again, perhaps you're living in the Antarctic with your penguins and so you're fine and dandy with your A10 temps. Stop being such a stupid, [b<]defensive[/b<] AMD fanboy and stop making your stupid emoticons. They're [i<]sooooo gay[/i<].

          • anotherengineer
          • 7 years ago

          You have to be careful with Gigabyte boards. I have a few of them, and by default they actually overvolt AMD CPUs at stock speeds.

          My 990XA and 790GX used to pump 1.40V into my 955 at stock speeds, while I think stock voltage was 1.35V, and I have it undervolted to 1.225V.

          You should check the voltages the mobo is pumping to the chip.
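
          (Rough math on why that matters for temps — a minimal Python sketch assuming dynamic power scales roughly with V² at a fixed clock; the voltages are the ones mentioned above, and the scaling rule is a general rule of thumb, not a measurement of this board:)

# Dynamic CPU power scales roughly as C * V^2 * f; at a fixed frequency the ratio is just V^2
overvolted  = 1.40    # volts the board was pushing at stock settings
undervolted = 1.225   # volts after manual tuning

ratio = (overvolted / undervolted) ** 2
print(f"~{(ratio - 1) * 100:.0f}% more dynamic power at 1.40V")   # prints roughly a 31% increase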

            • GeForce6200
            • 7 years ago

            Agreed with anotherengineer about checking the voltage. Mine was set wrong as well, although it could only be found by checking with CPU-Z. It made my FX-4170 hit 60C in under two minutes with Prime95. It turned out to be a widespread issue with the board; I got a Gigabyte board and temps lowered substantially.

          • auxy
          • 7 years ago

          I’m not an AMD fanboy — I’m not a ‘boy’ at all — and I live in Texas. Nooot quite Antarctica. (」゚ペ)」My PC is Intel/Nvidia, and has been for a few builds now. WRT my emoticons, who cares if they’re gay? ☆⌒(*^∇゜)v

          Maybe you should worry less about who and what are “[i<]soooo gay[/i<]" and more about how what you say appears to people.[quote="ronch"<]Are you suggesting that I keep quiet despite these occurrences? People WILL talk about products, whether you like it or not, the same way people talk about how some car brands are crappy or cost a lot to maintain. That's why people wanting to buy a product ask those who already have it for comments on the product, whether good or bad, whether the owner is using the product properly enough to make the most out of his ownership experience or not, to help them make informed buying decisions.[/quote<]You missed the point completely. Maybe, instead of talking about gays, you should [b<]actually[/b<] worry about improving your reading comprehension. ψ(`∇´)ψ

            • ronch
            • 7 years ago

            If you’re not an AMD fanboy (well, just because you’re using Intel/Nvidia doesn’t necessarily mean that you’re not one), then why are you so riled up when I said something that merely suggests there may be some temp issues with Trinity? This is a free country, isn’t it? So when I talk about Trinity’s temps, which I did acknowledge may not be typical since I’m not sure my friend did everything right, you switched to defensive mode right away. Here:

            [quote<]Except you ARE bad-mouthing AMD. You're repeating an unverified and vague negative experience the veracity of which you yourself admit is questionable, and yet you repeat it nonetheless and then go on to affirm that it's "something to consider". You may not realize it, but you're creating and/or perpetuating a negative perception that is unjustified. (・`ェ´・)つ[/quote<] (Yeah, I had to include your emoticons.) It hurts you every time you read something remotely negative about AMD, doesn't it? Then this: [quote<]You missed the point completely. Maybe, instead of talking about gays, you should actually worry about improving your reading comprehension. ψ(`∇´)ψ[/quote<] … which was your reply to this earlier post of mine: [quote<]stop making your stupid emoticons. They're sooooo gay.[/quote<] ARE YOU SURE I AM THE ONE who needs to improve his reading comprehension? When did I [u<]EVER[/u<] talk about gays? I was talking about YOUR EMOTICONS, which are [i<]sooooo gay[/i<]. Get it?

            • auxy
            • 7 years ago

            Ah … you’re a troll. I see. ┐( ̄ヮ ̄)┌

            That’s kind of a shame. I thought I might be able to teach you something. Oh well.

            • ronch
            • 7 years ago

            Nope. Calling me a troll doesn’t help your case, because I’m not trolling. If anything, it only tells me you’ve run out of arguments to support your point.

            • auxy
            • 7 years ago

            Nope! Just not going to bother arguing with someone who says things like this:[quote=”ronch”<]It hurts you every time you read something remotely negative about AMD, doesn't it?[/quote<] ... which is pretty dumb in light of many other things I've said, both in the comment threads and also in the forums (which you can search). Investigating your posts shows that you're pretty obviously either stupid or a troll, and either way, I can't be bothered. Sorry! ┐(‘~`;)┌

            • ronch
            • 7 years ago

            My previous comment was meant to get you to give this a rest, but you just couldn’t, could you? So that proves that you ARE butt-hurt from all this AMD talk. 🙂 And no, this has nothing to do with the other threads here at TR.

            [quote<]Investigating your posts[/quote<] You mean you REALLY checked out ALL my previous posts here? You must really be butt-hurt! You're the one who started these inflammatory remarks when you replied to my first post, and now you're the one calling me a troll? There's a [i<]fine line[/i<] between trolling and merely sharing an experience or knowledge about certain products, pal. Learn to distinguish between the two instead of resorting to calling people 'trolls' and calling them stupid when you've run out of arguments to support your points.

            • auxy
            • 7 years ago

            [quote="ronch"<]My previous comment was meant to get you to give this a rest, but you just couldn’t, could you?[/quote<]I’m mostly enjoying watching you make an arse of yourself on the internet.[quote=”ronch”<]So that proves that you ARE butt-hurt from all this AMD talk.[/quote<]Really? And I suppose you also think the 9/11 attacks justified the Iraq War? The two follow each other just as well. [quote="ronch"<]And no, this has nothing to do with the other threads here at TR.[/quote<]Really? A judgement of character weathers no evidence of the person's character? How foolish are you, really?[quote="ronch"<]You're the one who started these inflammatory remarks when you replied to my first post, and now you're the one calling me a troll?[/quote<]Redirecting blame [i<]and[/i<] trying to imply I don't have a point? Classic troll tactics! You started the inflammatory remarks with your original comment; I merely pointed it out and you are the one who got defensive, not I, child. I haven't bothered to bring up any further points because you have still failed as of yet to counter or even say anything to address any of the points I brought up in my original reply to your post; you've merely proceeded with ad hominem attacks and completely unfounded anecdotes, all while repeating the same things I already decried in your original post. Really, this is you: [quote<]Hey guys, I'm totally not bashing anyone because I could be wrong, but here let me bash AMD for awhile, oh, and this should definitely be taken as gospel if you're building a system. But I could be wrong, so you should definitely listen to this thing that might have happened to my friend, if I remember right, because it's definitely a valid concern, probably.[/quote<] Look, ronch, I don't care if you -- or anyone else -- bash on AMD -- or anyone else -- as long as you're thorough, factually accurate, and consistent with your complaints. You are none of these things. Having quite thoroughly exhausted my patience with you, I am through replying to this thread. I hope you will consider your actions and feel suitably humbled, but I realize this is exceptionally unlikely. Perhaps one day you shall have a reckoning of sorts and then you shall know where you went wrong. [super<]Hint: it was when you picked a fight with me.[/super<] [sub<][i<]edit: grammar and spelling[/i<][/sub<]

            • ronch
            • 7 years ago

            [quote<]Really? And I suppose you also think the 9/11 attacks justified the Iraq War? The two follow each other just as well.[/quote<] What does THAT have to do with this? Please stick to the topic. Besides, 9/11 was sooooo long ago. Learn to move on. [quote<]Really? A judgement of character weathers no evidence of the person's character? How foolish are you, really?[/quote<] So you searched the TR forums to gather evidence of my character? You're gonna base your opinion of me based on what I write around here? You really spent [u<]time[/u<] doing that? My, you must have an awful lot of free time! And then you say that you're not hurt? To make it worse, you call me foolish? And yeah, you called me a troll, too: [quote<]Redirecting blame and trying to imply I don't have a point? Classic troll tactics! You started the inflammatory remarks with your original comment; I merely pointed it out and you are the one who got defensive, not I, child.[/quote<] You're saying that you merely pointed it out and I got defensive. Wait just a god-darned minute. Isn't it the OTHER WAY AROUND? I merely pointed something out regarding Trinity's temps in my orig. post, even being humble about it, and you got defensive. Isn't that right? Let's recap your response/reaction to my original post: [quote<]Except you ARE bad-mouthing AMD. You're repeating an unverified and vague negative experience the veracity of which you yourself admit is questionable, and yet you repeat it nonetheless and then go on to affirm that it's "something to consider". You may not realize it, but you're creating and/or perpetuating a negative perception that is unjustified. (・`ェ´・)つ [/quote<] Now, if that's not inflammatory, I don't know what is. See, there's a fine line, as I stated earlier, between [b<]merely expressing a concern/opinion and being seen as inflammatory by overly sensitive people who can't stand reading about anything negative about their favorite company or something (or expecting everything they read on the Internet to be thorough and factually accurate)[/b<], and [b<]deliberately starting an all-out war regarding a topic by knowingly and deliberately posting something that the poster knows [u<]will[/u<] trigger angry reactions[/b<]. The former isn't trolling per se, the latter is. Looks like you saw my original post to be one of the latter, thereby calling me a troll. There's a fine line. If you don't see it, you'll end up doing what you just did again and again and again. [quote<]Look, ronch, I don't care if you -- or anyone else -- bash on AMD -- or anyone else -- as long as you're thorough, factually accurate, and consistent with your complaints. You are none of these things.[/quote<] I NEVER stated that my original post was thorough and factually accurate. I AM MERELY EXPRESSING A CONCERN about a certain product. I am NOT writing an article, merely a COMMENT. Why are you so riled up with that? This is the Internet, pal. NOT EVERYTHING HERE is thorough and factually accurate and you'll find ALL SORTS of OPINIONS and HERESAY around here, in case you still haven't noticed, and your reaction to my original post gives me the impression that you ARE taking this a bit too seriously. What's wrong with expressing my opinion? You're some sort of stiff or something? Demanding that EVERYTHING people write around here should be THOROUGH and FACTUALLY ACCURATE down to the last letter? 
Are you like this in real life, demanding that everything people say around you are thorough and factually accurate, and get riled up every time people do not meet your 'standards'? Better watch your blood pressure, pal. I sincerely hope you now understand just what the heck happened here. You don't need to post inflammatory reactions to what people post around the Internet and people are NOT obliged to give thorough and factual statements around here especially if it's just him/her expressing a concern or his/her opinion. Not everyone is writing a 100% thorough and accurate article. This is a comments section where people DISCUSS. Instead, you shoot me down. How inflammatory is that? It's unfortunate that this has blown out of proportion. No hard feelings, pal. I'm willing to put this aside and act as if this never happened. See you around the TR comments section and/or forums.

          • sschaem
          • 7 years ago

          Why don’t you check the reviews of the A10-5800K on the web?!

          They confirm auxy’s results: ~60C at absolute max load with the stock cooler.
          [url<]http://www.legitreviews.com/article/2047/16/[/url<] So could the problem be "user error," and you wrote a whole story of doom and gloom for nothing?

            • ronch
            • 7 years ago

            Online reviews only serve to give the potential buyer some data points he/she can use to make a more informed buying decision, the same way ‘car comparos’ (say, from Car and Driver) are not conclusive and are there only to inform the potential buyer about what some reviewers/editors think of the product. Of course, there will always be some hard numbers, say, the acceleration time of a certain vehicle or the benchmark results of a certain CPU. Those numbers are objective, of course, but there are still other factors to consider. For example, those 60C temps reviewers are seeing on their A10 test systems may be affected by things such as ambient room temperature, the cooler they used, or the motherboard they’re using, which, as someone has suggested here, may be sending too much juice to the CPU thereby making it heat up more.

            As for doom and gloom, no, as I’ve stated earlier, I’m not bashing AMD here. Not at all. I’m merely expressing my concern with Trinity and Richland. [u<]For the record[/u<], I am planning to get a Richland one of these days and my friend's experience has only somewhat raised my curiosity as to why he's experiencing what he is experiencing. I'm actually quite tempted to go out and get an A10 already (except I just got my FX-8350 setup and am giving my wallet a breather) just to see if he just had a hot sample or A10 really is all what reviewers make it out to be. If people here think I'm bashing AMD for bringing this up, then you can get more of this kind of 'bashing' by checking out online forums such as the one here on TR. There, you'll find threads talking about many problems concerning many products from all the big companies out there, not just AMD or Intel. Can you call that bashing?

      • Bensam123
      • 7 years ago

      Apply new thermal paste, make sure the fan is actually spinning, reseat the heatsink… If temps persist, consider flashing to a newer BIOS.

      Those temps aren’t normal.

      I suggested a new BIOS because some older BIOSes don’t yet support new chips, so they need the newest version. In other words, your BIOS could be mistakenly sending the chip the wrong voltage, and/or Cool’n’Quiet isn’t operating correctly, meaning it’s always sitting at the maximum frequency with the maximum voltage.

        • ronch
        • 7 years ago

        Yeah, that’s what he told me he did: reseat the heatsink, apply quality TIM (not the stock stuff included with the HSF), and check for a new BIOS (he was using the latest at that time).

        I’m betting it could be the BIOS and/or board, or perhaps his aftermarket HSF wasn’t very good. If anything, this just tells me that buying the latest chips can sometimes give you some issues like this. Being an early adopter can sometimes lead to things like this, I suppose. That’s the thing you have to deal with if you want to have the latest toys.

    • chuckula
    • 7 years ago

    [quote<]One of the bundled apps, Gesture Control, will use GPU acceleration to translate hand waving into commands for media playback, web browsing, and other applications. Gesture Control appears to rely on a webcam rather than a true 3D camera, which is great for compatibility but probably means lousy precision. Another app, Face Login, will let you use your webcam to log into Windows or access websites. Folks who want to stream content to remote televisions will be able to use Screen Mirror, which promises a low-latency connection and requires DLNA-compatible hardware on the receiving side. Screen Mirror appears to be broadcast-only, so it's more of an answer to Intel's Wireless Display tech than a competitor for Nvidia's Project Shield. AMD will also combine Richland with a collection of video software—Quick Stream, Steady Video, and Perfect Picture HD—that will handle bandwidth prioritization, stabilization processing, and dynamic image adjustment, respectively.[/quote<] DONOT WANT GIMMICKY CRAP. Especially kuz I run Linux and none of this junk will have any support there. (P.S.--> That applies to gimmicky crap that we get from Intel/Nvidia/Microsoft/Google/etc. etc. too, AMD is not alone in this crime.)

      • derFunkenstein
      • 7 years ago

      Yes I soooooooo don’t want any of this stupid software. +1

        • derFunkenstein
        • 7 years ago

        And here the thumbs make no sense. Count Chuckula should be up around +90, not -1.

          • MadManOriginal
          • 7 years ago

          Why, because he doesn’t want to use software that’s completely optional but needs to tell the world that he won’t be using it on his <1% market share OS anyway?

            • derFunkenstein
            • 7 years ago

            Well, I’m with him, but only because my mom (or someone else for whose tech I am responsible) will wind up with this software and it will inevitably cause problems with Windows.

      • Saribro
      • 7 years ago

      So you’re crying about not being able to use something you don’t want to use?

        • chuckula
        • 7 years ago

        Don’t be so self-centered!
        It’s not about me, I’m too smart for this crap.
        It’s about the people who aren’t too smart for it who I will have to deal with in the future when gimmick XYZ decides to break and I have to fix it! grrr….

          • derFunkenstein
          • 7 years ago

          I’m totally with you but apparently I’m alone.

            • Airmantharp
            • 7 years ago

            You’re not alone. Just reading about software that comes with a CPU makes me shiver.

            If these features were implemented as open APIs that anyone could use, sure, and maybe they are that as well, but I don’t want AMD’s stuff.

    • Alexko
    • 7 years ago

    Does TR plan to review a Richland-based laptop?

      • tbone8ty
      • 7 years ago

      They haven’t even reviewed a Trinity-based laptop yet. That’s not a prototype.

        • NeelyCam
        • 7 years ago

        I’ve been wondering about this. I’ve been unable to find solid reviews on Trinity ultrathins anywhere… no TR, no Anand, not even Tom’s..

    • jjj
    • 7 years ago

    because we all love a paper launch….

      • MustangSally
      • 7 years ago

      Did you read as far as the second sentence, or stop after the first 3 words?

        • HTarlek
        • 7 years ago

        He would have had to stop reading at the second word. If he read at all.

          • derFunkenstein
          • 7 years ago

          If he reads too many words in a row, he gets sleepy!

            • DragonDaddyBear
            • 7 years ago

            In his defense, I didn’t see any products launching with this, nor a mention of retail availability (though, because these are mobile parts being launched right now, I wouldn’t expect that). It may be [quote<]here[/quote<] but I don't see it.

            • derFunkenstein
            • 7 years ago

            It’s been shipping from AMD to OEMs since January and there are parts listed. There aren’t retail products using those parts yet, but I have a hard time faulting AMD.

    • Ryhadar
    • 7 years ago

    It’s a shame I’ll probably never see Richland in the laptop form factor that I want (just like with Trinity). I can literally buy an 11.6″ notebook with a quad-core i7 and a discrete GT 630.

    But if I look for AMD notebooks in the same size the best I can do is Bobcat 2.0. If someone showed up with an 11.6″ Trinity/Richland notebook I’d probably buy it without even thinking.

      • rwburnham
      • 7 years ago

      AMD is sorely underrepresented in the laptop space.

        • NeelyCam
        • 7 years ago

        This. I think it just takes time for AMD to gain the reputation in the laptop space. Richland will take it one step further.

        I’m personally more excited about Kaveri. I just hope it’ll land on time for my Black Friday buying window…

          • dpaus
          • 7 years ago

          I’m modestly excited by the incremental improvements in Richland, because they provide the foundation for AMD to advance the power-hungry graphics without putting themselves out of the game (power-wise, that is). Graphics should be AMD’s strong suit in the APU business, but right now, Intel’s small but appreciable lead in power management allows them to throw more horsepower at the IGP; using brute force to overcome any technical shortcomings in their GPU design. The improvements that AMD is demonstrating – and delivering, apparently – with this silicon will change that dynamic. How much, and if it is enough… only time will tell.

            • NeelyCam
            • 7 years ago

            [quote<]Intel's small but appreciable lead in power management allows them to throw more horsepower at the IGP[/quote<] I'd say it's more due to Intel's [i<]huge[/i<] lead in process technology that gives them a major advantage in transistor power efficiency, especially at ultra-low voltage. The 50% power consumption reduction didn't really seem to be there with Ivy Bridge. Something prevented the chip from running at the super-low voltage that would give the major power efficiency benefits the process announcement slides were touting. I have a creeping feeling that Haswell will have fixed whatever was wrong with Ivy Bridge, and the efficiency improvements are massive.. We'll see "soon enough" (i.e., by Black Friday...). Kaveri being at 28nm non-trigate may be disadvantaged too much to be able to compete with Haswell in an Ultrabook form factor, but I definitely want to see reviews for both before pulling the trigger.

      • drfish
      • 7 years ago

      630? Try a 650. Not an Ultrabook form factor but still…

      • Zizy
      • 7 years ago

      True.
      All this new stuff AMD is launching won’t matter a bit unless AMD finally manages to convince OEMs to put their hardware in any decent system. There are just 2 FHD AMD laptops out there: the 15.6″ GX60 and the 13.3″ VivoBook.
      Anyway, in an 11.6″ notebook you are more likely to find Kabini, or even Temash if the notebook also functions as a tablet. Yeah, 17W Richland is going to be faster, but does/will it matter for you?

        • brucethemoose
        • 7 years ago

        Sadly, the VivoBook never even launched in the US, and that chip is rather pointless in a GX60.

        I agree. Richland could be the best thing since sliced bread, but since you’ll only see it in those ultra-cheap 15.6″ laptops OEMs love to churn out, it wouldn’t even matter.

      • OneArmedScissor
      • 7 years ago

      With Jaguar targeting low end tablets and Intel pushing Ivy Bridge and Haswell down to 10″ convertible tablets, that will have to change.

      This is a completely different game than Bobcat was playing. Atom netbooks are out of the picture, and the pressure on laptops has shifted to the high end.

      Look at what has happened to the price of “ultrabooks” in only the last few months. Models which were running $1,000+ have dropped to $500-600. There’s a huge list of them in that range on Newegg.

      New models will be even cheaper and prevalent at Best Buy, Staples, etc.

    • dpaus
    • 7 years ago

    So you’re saying that the two dies in the first photo are purportedly different?

      • Ryhadar
      • 7 years ago

      I’m guessing that this game of “Can you spot the differences?” could only be played by microprocessor designers.

      But they should throw it into a Highlights magazine at the doctor’s office. Just for kicks.

        • DPete27
        • 7 years ago

        They look the same to me, and for good reason. AMD is just doing another clock-boost-to-create-new-SKUs like they always do. This time, they get to call it a new family name because they tweaked some “firmware level” functions (boost) and are now using features that were present but dormant in Trinity.
        I have to give credit to their marketing team. They’re excellent at hyping a new product even when the product turns out to be lackluster.

      • MadManOriginal
      • 7 years ago

      They are totally different. The one on the right is part of AMD’s new ‘Trapezoid’ architecture. Although they don’t have 3D transistors yet, they do have chips that are cool 2D shapes that aren’t squares.

        • NeelyCam
        • 7 years ago

        2.5D >> 2D

      • ronch
      • 7 years ago

      You have to wonder just what the point is of putting those identical dies side by side, one in 2D, the other in 3D. And yes, Ryhadar beat me to it when he said ‘spot the difference.’ Not an easy thing to do at 32nm, gerbils.

        • GeneS
        • 7 years ago

        [quote<]Not an easy thing to do at 32nm, gerbils[/quote<] I bet you could do it if you had an Apple© Retina™ Display™©

          • ronch
          • 7 years ago

          Except those kids who buy Apple stuff don’t even know what a die shot is.

      • Palek
      • 7 years ago

      You just wanted to use “purportedly” in a comment, didn’t you?

    • chuckula
    • 7 years ago

    [quote<]Somewhat surprisingly, AMD isn't making any lofty claims about the performance of the 35W Richland parts versus their predecessors. At CES in January, AMD touted a 40% jump in graphics performance and a 10-20% boost in CPU performance over Trinity, but that was for ultra-low-voltage models aimed at ultrathin laptops. The gains likely won't be as dramatic for APUs with higher thermal envelopes, which suggests Richland's eventual desktop incarnations may benefit the least.[/quote<] It ain't all that surprising... I accurately pointed out that AMD's own marketing was very careful to limit the 40% claim to the "Ultrathin" parts (interesting how TR has never once received an Ultrathin product from AMD for review....). I was of course modded down and attacked, but I'd rather be right than be modded up.

      • MustangSally
      • 7 years ago

      As much as I’d love to see it, I think doing any kind of comprehensive testing of an ultrathin CPU would be very difficult, as the rest of the platform isn’t standardized at all.
