A first look at AMD’s Radeon HD 8790M

AMD’s Graphics Core Next architecture premiered around this time last year on the Radeon HD 7000 series. The architecture brought with it a slew of improvements, which we covered at length here. Before long, GCN had spread through much of AMD’s desktop graphics lineup, providing fierce opposition to the competition from Nvidia.

When AMD got around to updating its mobile offerings, though, it did something a little disappointing. The company decided to use its older TeraScale architecture to power mid-range and low-end members of the Radeon HD 7000M series, from the 7600M on down. Those offerings were built using TSMC’s 40-nm fab process, as well, instead of the finer 28-nm process used to fab newer, GCN-powered chips.

AMD did eventually introduce GCN-driven Radeon HD 7700M, 7800M, and 7900M series products to service the higher end of the mobile market. However, cheaper and thinner gaming notebooks have essentially been stuck with re-branded last-gen Radeons for the better part of a year.

Happily, that’s all about to change. AMD officially announced the first members of the Radeon HD 8000M series earlier this week, and those offerings are due out early next year. All of them are based on the GCN architecture and built using TSMC’s 28-nm fab process, and AMD promises some meaty performance gains over the 40-nm TeraScale parts they’re supposed to replace.

We’ve been able to get our hands on one of those newcomers, the Radeon HD 8790M. As we reported a few days ago, AMD has rejiggered its mobile branding somewhat, so the Radeon HD 8700M series is meant to succeed the Radeon HD 7600M series. As luck would have it, we have representatives of both lineups in our labs today: a 7690M and an 8790M.

Left: the Radeon HD 7690M. Right: the new Radeon HD 8790M.

A cursory look at the picture above shows that the 8790M’s graphics chip, code-named Mars, is quite a bit smaller than Thames, the 40-nm slab of silicon that powers the 7690M. Mars measures about 77 mm² by our count, while Thames takes up 118 mm². That difference is at least partly due to the fact that Mars is built using the finer 28-nm process, so its transistors are smaller.

We’d love to compare transistor counts at this point, but unfortunately, AMD is refusing to disclose complete specifications for the Radeon HD 8000M series until January 7. What little we’ve been able to glean about our test samples is listed in the table below:

GPU               ALUs   Core clock   Mem. clock   Mem. interface width   Memory      Fab. process   Die size
Radeon HD 7690M   480    600 MHz      800 MHz      128-bit                1GB GDDR5   40 nm          118 mm²
Radeon HD 8790M   384    900 MHz      1000 MHz     128-bit                2GB GDDR5   28 nm          ~76.5 mm²

The 8790M has fewer shader ALUs than its predecessor, but at least in our test sample, its clock speeds are higher—and it comes packed with more memory. Also, of course, the 8790M is based on AMD’s latest GPU architecture. Interestingly, AMD tells us it’s made no refinements to the GCN architecture in the Radeon HD 8000M series. No new features have been added, either, beyond those that were already present in proper, GCN-powered 7000M-series parts.

The new architecture and higher clocks bode well for performance, despite the 8790M’s lower ALU count. AMD’s internal benchmarks show an increase of 20-50% from the 7670M to the 8770M, and one would expect the 7690M to trail the 8790M by a similar margin. Since we have the latter two GPUs at our disposal, we’re going to put that assumption to the test.

A novel approach to mobile GPU testing

Traditionally, testing mobile GPUs involves using pre-assembled laptops. That can make comparisons with other offerings tricky, especially when you want to benchmark a previous-gen part that may only be available inside older systems with outdated CPUs.

With the help of AMD, we’ve tried a different approach this time. AMD supplied the aforementioned GPUs—the Radeon HD 8790M and 7690M—as bare MXM modules, each one with its own, dedicated cooler. For our test platform, AMD sent us an MXM to PCI Express x16 adapter alongside an off-the-shelf desktop processor, motherboard, and memory. Using this setup, we’re able to test mobile GPUs free from the confines of notebooks or proprietary qualification hardware.

What you see here is an Intel Core i7-3770K processor, a Gigabyte Z77X-UD3H motherboard, four gigs of AMD memory, AMD’s MXM adapter, and our two mobile Radeons. AMD also threw in a 500GB Seagate hard drive, but we swapped that for a Crucial solid-state drive from our own supply of test hardware. Benchmarking games is a lot quicker on an SSD.

Since we’re testing with a very fast desktop CPU, we’re able to ensure that the processor isn’t a primary performance bottleneck. That means our GPUs should be free to fulfill their performance potential. We follow this same approach when testing desktop graphics cards, and it has served us well. Of course, in this particular case, the processor we’re testing is a fair bit quicker than what you’d find in even a top-of-the-line gaming notebook. Our scores may be a little higher as a result.

Benchmarking mobile GPUs with a desktop platform also rules out battery testing, for obvious reasons. However, elegant solutions to that problem are rare in any event. Every notebook is bound to be different—some will have smaller batteries, some will have larger displays, and others will couple gaming GPUs with slim enclosures and power-sipping CPUs. At least with our test setup, we can offer a look at the power consumption delta between new and old mobile Radeons, and that’s exactly what we’ve done.

Sadly, we’re still working on getting a competing mobile GPU from Nvidia. The benchmarks on the next few pages will show you how the new Radeon compares to the old one, but they won’t tell you how either stacks up against a rival GeForce. Our apologies. That said, knowing how the two generations compare—not to mention what kinds of games can be played, and at what settings—is very valuable information, especially given the dearth of mobile GPU benchmarks out there. We think we’re better off making this information available now and building upon it later, rather than waiting, possibly indefinitely, for the stars to align.

Our testing methods

As ever, we did our best to deliver clean benchmark numbers. Tests were run at least three times, and we reported the median results. Our test systems were configured like so:

Processor          Intel Core i7-3770K
Motherboard        Gigabyte Z77X-UD3H
North bridge       Intel Z77 Express
South bridge
Memory size        4GB (2 DIMMs)
Memory type        AMD Memory DDR3 SDRAM at 1600MHz
Memory timings     9-9-9-28
Chipset drivers    INF update 9.3.0.1021, Rapid Storage Technology 11.6
Audio              Integrated Via audio with 6.0.01.10800 drivers
Hard drive         Crucial m4 256GB
Power supply       Corsair HX750W 750W
OS                 Windows 8 Professional x64 Edition

 

GPU                   Driver revision      GPU base clock (MHz)   Memory clock (MHz)   Memory size (MB)
AMD Radeon HD 8790M   Catalyst 9.011 RC2   900                    1000                 2048
AMD Radeon HD 7690M   Catalyst 9.011 RC2   600                    800                  1024

Thanks to Corsair and Crucial for helping to outfit our test rig—and AMD for supplying the test platform and mobile GPUs, as well.

AMD-specific optimizations for texture filtering and tessellation were disabled in the control panel. Other image quality settings were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests.

We used the following test applications:

Some further notes on our methods:

  • We used the Fraps utility to record frame rates while playing a 90-second sequence from the game. Although capturing frame rates while playing isn’t precisely repeatable, we tried to make each run as similar as possible to all of the others. We tested each Fraps sequence five times per GPU in order to counteract any variability. We’ve included frame-by-frame results from Fraps for each game, and in those plots, you’re seeing the results from a single, representative pass through the test sequence.

  • We measured total system power consumption at the wall socket using a P3 Kill A Watt digital power meter. The monitor was plugged into a separate outlet, so its power draw was not part of our measurement. The GPUs were plugged into our MXM to PCI Express adapter, which was itself plugged into a motherboard on an open test bench.

    The idle measurements were taken at the Windows desktop with the Aero theme enabled. The GPUs were tested under load running Skyrim at its Ultra quality preset.

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Battlefield 3

We tested Battlefield 3 by playing through the start of the Kaffarov mission, right after the player lands. Our 90-second runs involved walking through the woods and getting into a firefight with a group of hostiles, who fired and lobbed grenades at us.

We tested at 1440×900 using the game’s “Medium” detail preset.

Frame time (ms)   FPS rate
8.3               120
16.7              60
20                50
25                40
33.3              30
50                20

We should preface the results below with a little primer on our testing methods. Along with measuring average frames per second, we delve inside the second to look at frame rendering times. Studying the time taken to render each frame gives us a better sense of playability, because it highlights issues like stuttering that can occur—and be felt by the player—within the span of one second. Charting frame times shows these issues clear as day, while charting average frames per second obscures them.

To get a sense of how frame times correspond to FPS rates, check the table on the right.
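The conversion itself is simple: an instantaneous FPS rate is just 1,000 divided by the frame time in milliseconds. Here's a trivial sketch in Python (purely illustrative, not part of any benchmarking tool) that spells it out:

```python
def ms_to_fps(frame_time_ms):
    """Convert one frame's render time (in milliseconds) to an instantaneous FPS rate."""
    return 1000.0 / frame_time_ms

print(round(ms_to_fps(16.7), 1))  # ~59.9, i.e. the 60 FPS threshold
print(round(ms_to_fps(33.3), 1))  # 30.0, the 30 FPS mark
print(round(ms_to_fps(54.3), 1))  # 18.4, a 99th-percentile figure we'll see later
```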

We’re going to start by charting frame times over the totality of a representative run for each system. (That run is usually the middle one out of the five we ran for each GPU.) These plots should give us an at-a-glance impression of overall playability, warts and all.

Right off the bat, we can see the 8790M is doing quite a bit better than its predecessor. Both solutions exhibit occasional latency spikes, though.

We can slice and dice our raw frame-time data in other ways to show different facets of the performance picture. Let’s start with something we’re all familiar with: average frames per second. While this metric doesn’t account for irregularities in frame latencies, it does give us some sense of overall performance. We can also demarcate the threshold below which 99% of frames are rendered, which offers a sense of overall frame latency, excluding fringe cases. (The lower the threshold, the more fluid the game.)
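For the curious, here's a minimal Python sketch of how those two numbers can be computed from a list of per-frame render times. The frame-time data below is made up for illustration, and our actual analysis scripts may differ in their details:

```python
import numpy as np

def summarize_run(frame_times_ms):
    """Summarize one benchmark run from its per-frame render times (in ms)."""
    times = np.asarray(frame_times_ms, dtype=float)

    # Average FPS: total frames rendered divided by total time in seconds.
    avg_fps = len(times) / (times.sum() / 1000.0)

    # 99th-percentile frame time: 99% of frames were rendered at or below this value.
    p99_ms = np.percentile(times, 99)
    return avg_fps, p99_ms

# Hypothetical 90-second run: mostly ~20-ms frames with a few 60-ms spikes.
example = [20.0] * 4400 + [60.0] * 50
fps, p99 = summarize_run(example)
print(f"{fps:.1f} FPS average, {p99:.1f} ms at the 99th percentile")
```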

We’re looking at a 58% increase in average frame rates and a 59% drop in 99th-percentile frame times. That’s a pretty impressive improvement from one generation to the next.

Now, the 99th percentile result only captures a single point along the latency curve. We can show you that whole curve, as well. With single-GPU configs like these, the right-hand side of the graph—and especially the last 5% or so—is where you’ll want to look. That section tends to be where the best and worst solutions diverge.

Finally, we can rank solutions based on how long they spent working on frames that took longer than a certain number of milliseconds to render. Simply put, this metric is a measure of “badness.” It tells us about the scope of delays in frame delivery during the test scenario. Here, you can click the buttons below the graph to switch between different millisecond thresholds.
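To illustrate how such a tally might work, here's a short sketch under the assumption that only the portion of each slow frame beyond the cutoff counts toward the total; the run data is again hypothetical:

```python
def time_beyond(frame_times_ms, threshold_ms):
    """Total time (in ms) spent past the cutoff on frames slower than it.

    Assumption: only the excess counts, so a single 45-ms frame contributes
    11.7 ms against a 33.3-ms cutoff and nothing against a 50-ms cutoff.
    """
    return sum(max(0.0, t - threshold_ms) for t in frame_times_ms)

# Hypothetical run: steady 20-ms frames plus a few latency spikes.
run = [20.0] * 100 + [45.0, 180.0, 38.0]
for cutoff in (50.0, 33.3, 16.7):
    print(f"Time spent beyond {cutoff} ms: {time_beyond(run, cutoff):.1f} ms")
```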


The 8790M isn’t just faster on average. It also spends a lot less time working on high-latency frames, which makes for more fluid, stutter-free animations and smoother gameplay.

Borderlands 2

For this test, I shamelessly stole Scott’s Borderlands 2 character and aped the gameplay session he used to benchmark the Radeon HD 7950 and GeForce GTX 660 Ti earlier this month. The session takes place at the start of the “Opportunity” level. As Scott noted, this section isn’t precisely repeatable, because enemies don’t always spawn in the same spots or attack in the same way. We tested five times per GPU and tried to keep to the same path through the level, however, which should help compensate for variability.

Here, too, we tested at 1440×900. All other graphics settings were maxed out except for hardware-accelerated PhysX, which isn’t supported on these Radeons.

Yikes. Neither GPU does a good job of keeping frame times consistently low, as evidenced by the large number of spikes on both plots.

The 8790M does pull off much better average frame rates and 99th-percentile frame times, though. Then again, that 54.3-ms frame time works out to an 18.4 FPS frame rate, which is hardly anything to brag about.

According to our percentile graph, things start to go awry around the 95th percentile on both GPUs. So, about 5% of frames take significantly longer to render than the rest. That roughly tracks with what Scott measured in Borderlands 2 with the Radeon HD 7950. (This time, though, we were using newer drivers supplied by AMD.)


Both GPUs spend a fair bit of time working on frames that take more than 33.3 ms or 16.7 ms to render. However, the 8790M is obviously faster and doesn’t linger too long on frames that take more than 50 ms or so. Our seat-of-the-pants impression echoes this result. Borderlands 2 is responsive and very much playable on the 8790M, even if the latency spikes damage the illusion of motion to some degree, making animations appear not completely fluid.

Far Cry 3

This is our first time testing Far Cry 3. I picked one of the first assassination missions, shortly after the dramatic intro sequence and the oddly sudden transition from on-rails action shooter to open-world RPG with guns.

The game was run at 1440×900 with MSAA disabled and the “Medium” graphics quality preset selected.

Far Cry 3 is a very graphically intensive game, and at the image quality settings we’ve chosen, both cards struggle intermittently with delivering smooth gameplay. The Y axis on the plot has been stretched to show the 7690M’s lone spike to around 180 ms a third of the way into the run. (Yes, it happens every time.) However, you can see the 8790M suffers from relatively frequent spikes over 40 ms, especially toward the beginning of the run.

At the start of the run, those spikes manifest as a kind of jumpiness or jerkiness in the animation. Oddly enough, the effect is worse than in Borderlands 2, even though the spikes tend to be shorter.

The 8790M’s 99th-percentile frame time is lower here than in Borderlands 2, at least. 39.8 ms works out to around 25 FPS.

Our latency plot shows the 8790M’s frame latencies are fine through about 97% of the run. Only the last 3% of frames seem to pose a problem. Because the 8790M is so much faster, though, even its worst frame times aren’t too much higher than the 7690M’s are generally.


The 8790M spends very little time working on frames that take more than 50 ms or 33.3 ms to render. That reflects what we see in the frame-time plot, where the 8790M’s latency spikes are generally short and clearly become infrequent after the first 500 frames or so. There are only two exceptions where unusually high spikes disrupt gameplay.

The 7690M only sees one exceptional spike, but because the GPU is relatively slow, it nevertheless spends over two thirds of the run working on frames with latencies above 33.3 ms. Far Cry 3 feels sluggish and choppy on that GPU.

Hitman: Absolution

Here, too, I pilfered Scott’s saved game and attempted to replicate his gameplay session, which involves walking through a crowd in the Chinatown level. As you can see in the video, the crowd is very dense, and there are plenty of fancy shader and tessellation effects in play.

Testing was conducted at 1440×900 with MSAA disabled and using the “High” quality preset.


Yeah, so, this isn’t really much of a contest. The 7690M performs abysmally here, with huge and extremely frequent latency spikes. Even the average frame rate alone—just 22 FPS—shows how poorly the last-gen part handles itself.

The 8790M fares much better at these settings, with nearly double the average frame rate and half the 99th-percentile frame time. A 53.2-ms 99th-percentile frame time is admittedly still rather high (it works out to about 18.8 FPS), and there are more high-latency frames than in Far Cry 3. However, the 8790M is fine through about 96% of the run, and subjectively, it feels smooth overall.

Sleeping Dogs

I haven’t had a chance to get very far into Sleeping Dogs myself, but TR’s Geoff Gasior did, and he got hooked. From the small glimpse I’ve received of the game’s open-world environment and martial-arts-style combat, I think I can see why.

The game’s version of Hong Kong seems to be its most demanding area from a performance standpoint, so that’s what we benchmarked. We took Wei Shen on a motorcycle joyride through the city, trying our best to remember we were supposed to ride on the left side of the street.

We benchmarked Sleeping Dogs at 1440×900 using a tweaked version of the “High” quality preset, where we disabled vsync and knocked both antialiasing and SSAO down to “Normal.” We had the high-resolution texture pack installed, too.


Along with Battlefield 3, Sleeping Dogs is perhaps one of the best-behaved games on the Radeon HD 8790M. The 99th-percentile frame time is nice and low, and latency spikes are both small and infrequent. The game looks and plays great.

On the 7690M, it’s another story. I suspect that GPU’s smaller memory capacity is a hindrance here, since we’re using the high-resolution texture pack. Either way, the game hangs in a very disruptive fashion every few seconds, which makes driving through the busy streets of Hong Kong a dangerous and scary experience. More than once, I had to restart a test run after veering off course when the game skipped.

Power consumption

Over the last few pages, we’ve seen that the Radeon HD 8790M is much quicker than the 7690M. Now, we can see that performance increase doesn’t come with substantially higher power consumption; the 8790M draws only 2W more under load. Not only that, but it draws less power than the 7690M at idle. Thanks to AMD’s ZeroCore Power feature, which is exclusive to 28-nm, GCN-powered parts, power utilization falls even lower when the display goes to sleep.

Note that these numbers show power draw for the whole system, including the Core i7-3770K, which has a 77W power envelope. Total power use on a notebook would probably be much lower.

We would have included noise and temperature numbers, but the MXM GPU modules AMD sent us have very different cooling solutions, neither of which you can expect to see in actual notebooks. Noise and temperature measurements for these samples would be pointless at best and misleading at worst.

Conclusions

You know the saying: better late than never.

I think that applies to the Radeon HD 8000M series. Mid-range and low-end gaming notebooks have been saddled with 40-nm GPUs based on AMD’s old TeraScale architecture for almost a year. The 8000M series finally rights that wrong by bringing 28-nm, GCN goodness to lower price tiers and power envelopes. AMD hasn’t broken new ground here; it’s simply made a year-old architecture available to more folks.

As our benchmarks have shown, the wait has been worth it. The Radeon HD 8790M beats the pants off its predecessor, and it does so while consuming less power at idle and only marginally more under load.

Before we sign off, we should remind readers that clock speeds and memory configurations will vary in the wild. So, not all of the 8790M or 7690M GPUs you find out there will be equivalent to those we tested. Some will feature slower DDR3 memory and may have lower core clock speeds, as well.

Comments closed
    • buhusky
    • 7 years ago

    why not include numbers from other gpu options (like intel’s) that this is meant to replace. it’d be easier for me to see how this compares to the cheap stock stuff. granted, i can find it somewhere else, but i’d rather just have it all in one article for ease of use (and it’ll keep me on your site instead of searching for content that may be elsewhere)

      • UrBoiCJ
      • 7 years ago

      “Sadly, we’re still working on getting a competing mobile GPU from Nvidia” taken from the last paragraph of the introduction.

        • dpaus
        • 7 years ago

        I believe he meant comparing it to the HD3000 and HD4000 IGPs in the core i3/i5/i7 chips commonly used in laptops (or the corresponding AMD chips), and I agree it would have been a very interesting comparison.

    • Zarf
    • 7 years ago

    Oh man. This is a fantastic review! It takes all of the guesswork out of figuring out the performance difference between two GPUs, even if it isn’t a real-world scenario. I’d like to request that you guys cobble together thousands of these MXM GPUs and benchmark them all on the same rig. 😀

    Stuff like this makes me happy that I added TechReport to my Adblock+ whitelist.

    • dpaus
    • 7 years ago

    Hmmm.. Looking forward to seeing APUs with this generation of GPU power in them.

    • Voldenuit
    • 7 years ago

    I would love to see more slim laptops with a dock that you could stick a desktop GPU in, so you don’t have to carry a heavy gaming laptop when you’re on the go but still be able to play games when you get home.

    Sony had a vaio with an external GPU enclosure, but it was overpriced and had an anemic GPU.

    • XTF
    • 7 years ago

    No comparison against the last-gen GCN 7700M?

      • BestJinjo
      • 7 years ago

      The series that replaces HD7700M at the same price are HD8800M cards. So you’d need an HD8870M to compare against HD7770M. It wouldn’t make sense to compare HD7770M to HD8790M since the latter is not a direct replacement. Chuck that one to AMD’s confusing naming where you’d think HD8790M replaced HD7700M. 8790M is only a 384 SP part and replaces HD7600M while 8870M is a 640 SP part and replaces HD7700M.

      http://www.hardwarecanucks.com/forum/hardware-canucks-reviews/58597-amd-details-next-gen-radeon-hd-8000m-gpus.html

    • just brew it!
    • 7 years ago

    I hope we see the HD 8790M make its way onto some affordable passively cooled desktop cards. If you don’t need bleeding edge GPU horsepower, passive cooling FTW!

    • vargis14
    • 7 years ago

    Looking at that MXM add in card for the desktop i noticed it has 2 MXM slots…pretty neat. Guessing for SLI/Crossfire testing.

    • axeman
    • 7 years ago

    Testing these out at a 16:10 resolution seems like an odd choice. Why not test 1368×768 or 1600×900? Not that I’m trying to be critical. You guys do great work.

    • Pantsu
    • 7 years ago

    I tried out the FC3 run with my Vaio S15 with a modded 640M LE at 1000/950 MHz and the same settings. No stutters apparent, though imo the test graphics settings were far from optimal.
    http://aijaa.com/0Vx2cO

    You should try out the buffered frames setting to see if it has an impact. One would think it might help even out the frame delivery.

      • Airmantharp
      • 7 years ago

      While the trade-off may be worth it with a budget oriented part, keep in mind that any time you introduce ‘buffering’ into an interactive computing application, you also introduce lag, and lag is usually more noticeable in FPS-style games.

    • raghu78
    • 7 years ago

    the HD 7690M XT runs at 725 mhz and should have been tested. Based on the 40nm Turks GPU this GPU was first launched as the HD 6770M and then renamed during the HD 7000M generation. quite a popular 40nm mobile GPU in the entry level gaming segment.

    http://www.notebookcheck.net/AMD-Radeon-HD-6770M.43955.0.html
    http://www.notebookcheck.net/AMD-Radeon-HD-7690M.67737.0.html

    So the actual increase would be slightly lower than in this review, but still you can expect around a 35-40% improvement. The HD 8790M gets a nice increase in perf, perf/watt, and perf/sq mm due to the GCN architecture and 28nm process. The higher clocks contribute in a major way to the performance improvement, but definitely some of the improvement is due to the newer and superior GCN architecture. Hopefully this GPU will be available in Ivy and Haswell Core i3 and Core i5 laptops at USD 600-700. That would make a decent entry-level gaming laptop.

    • puppetworx
    • 7 years ago

    I can’t help but notice the huge heatsink and fan on the new model, that’s never going inside any laptop. AMD wouldn’t be trying to pull a fast one by sending out review cards which are clocked up would they?

    • eofpi
    • 7 years ago

    1440 x 400 on BF3?

    Perhaps that should be 1440 x 900?

      • Cyril
      • 7 years ago

      Indeed. Fixed.

    • HisDivineOrder
    • 7 years ago

    Kinda sad that AMD has so few design wins they have to send you a MXM adapter with their chips on their own cards to let you test the hardware…

      • DPete27
      • 7 years ago

      Ummm, the part I found ironic (if I read it correctly) is that AMD supplied the i7-3770K for the test rig?

    • kalelovil
    • 7 years ago

    Interesting review.

    It would have been useful to see how much faster the Radeons were compared to Intel integrated graphics, particularly considering you already had a 3770K in the testbed.

    • Duck
    • 7 years ago

    Can’t you clock the CPU down a bit to 2GHz for example to try and imitate a mobile system?

    • tbone8ty
    • 7 years ago

    yah finally Far Cry 3!

    • UberGerbil
    • 7 years ago

    “…which makes driving through the busy streets of Hong Kong a dangerous and scary experience.”

    In other words, more realistic.

      • dpaus
      • 7 years ago

      “It’s not a bug, it’s a feature!”

    • mnemonick
    • 7 years ago

    Another fine review, and that PCI-E adapter setup looks sweet. I’d have liked to see a couple of the games (Borderlands 2 and Absolution) tested with slightly lower or tweaked quality settings (lowering SSAO in Absolution, for example), such as you might use on an actual laptop.

    I hope Nvidia can get you some ‘bare’ MXM modules (maybe a 640M and a GT 650M) for comparison testing.

    Note: minor typo on page 3, should read 1440x900, not 1440x400. :D

    • chuckula
    • 7 years ago

    Nice review. It’s good to see AMD improving its offerings in the mobile space too. Is that MXM adapter a standardized part that could be used to test other mobile GPUs from AMD/Nvidia in other reviews?

    • Bensam123
    • 7 years ago

    I like how you guys are testing graphics for laptops now, compared to simply getting a fully assembled laptop and making the best of it! Definitely a good testing style, you just need to get more data points.

    It’s really unfortunate that MXM isn’t completely standardized across laptop vendors and that whitelists in the bios exist. You guys should check the past laptops that you benchmarked and see if they have a MXM module you can pull out of them. Some laptop vendors do. The mounting holes look like they may accommodate a normal GPU cooler, which you can find on Newegg. Zalman is usually pretty compatible with different mounting holes and designs.

    Have you guys considered using GPUZ to measure power draw too? GPUZ displays amperage and volts, so you could convert it to watts. It also allows you to log values so you could even overlay powerdraw on a FPS graph.

      • UberGerbil
      • 7 years ago

      The counter-argument is that nobody buys their mobile graphics this way, and the performance in actual laptops may be compromised by cooling considerations among other things. So while this establishes what is presumably the best-case ceiling for these models, it doesn’t necessarily help someone who is making a buying decision.

        • Bensam123
        • 7 years ago

        Well it’s possible to rate the laptop cooler up to a certain TDP and for graphics cards to be rated for certain TDPs… or wattage.

        I would look at the graphics card when buying, and if laptops actually listed whether they whitelist against upgrades or have an MXM module, I’d be more inclined to buy those as well.

        Consider how enthusiast oriented Asus is, I’m surprised they don’t do this with their laptops. Then again laptops are all about selling the package and upselling like cellphones, so maybe it doesn’t surprise me. :l

          • vargis14
          • 7 years ago

          Ever try to buy a MXM based card? They are almost impossible to find and cost a fortune. Its kinda crazy.

            • Bensam123
            • 7 years ago

            Yeah, that’s cause the market isn’t developed. You can get them on eBay, but even if you do and you’re sure your laptop has a MXM slot, the bios usually whitelists certain specific cards.

            • Scrotos
            • 7 years ago

            http://www.mxm-upgrade.com/ is the only site I've seen that offers MXM junk. Man, expensive!

        • HisDivineOrder
        • 7 years ago

        I agree with this. When you’re talking a desktop, the concern doesn’t matter as much because you typically have more space and, worst case, you could just take the side of the case off if you had to. With a laptop, though, there just isn’t much configuration possible. If it is overheating and that’s compromising performance, well, that’s that. It’s really a big part of the experience.

        I think testing would be better if you had a laptop you could add a MXM board to. Then get MXM from both GPU makers. Of course, I think that would eliminate the awesome of Optimus and Enduro from being compared.

        Might be best if you just had a vendor who made both who could make identical laptops except for the GPU’s. Good luck on the coordination of that small miracle.

        • Spunjji
        • 7 years ago

        The power measurements help with that, as does the fact that these solutions are all operating under the same (albeit abnormal) conditions.

        Even irrespective of that, this process is much better for comparative purposes. If you want real-world performance reviews then you can only really look at those for the model you intend to purchase anyway, anything else will be inaccurate! 🙂

        • WaltC
        • 7 years ago

        Yes. The overriding design principle for laptops is power conservation–or “battery life,” if the former term doesn’t trip your trigger…;) You know, speak of the devil, just the other day I had a conversation with some guys on a major ATI-centered site (which I won’t specifically name but which I’ve frequented on and off for years) who were complaining about the fact that they didn’t think their laptop gpus were running as fast as they should be running. I jumped in to politely reiterate the obvious, that laptop designs deliberately focus on power conservation (performance-per-watt, battery life, whatever we might call it) in a way that desktop designs simply don’t because conserving battery life isn’t even a minor consideration in the design of a desktop platform. But for a laptop–it’s *everything*–it is the central design pivot around which everything else revolves–cpus, gpus, drives, etc. and you name it–every component in a given laptop system design is selected for its power conservation characteristics. So…laptops do all kinds of funky stuff that desktops don’t in order to conserve power, such as dim the screen, throttle the cpu/gpu, and so on, ad infinitum. But everybody already knows this stuff, right?

        Wrong…;) I got hammered by one guy in particular who was passing himself off as a programmer on the site, a site regular, who suggested that laptop design has never had anything at all to do with “power conservation,” which he seemed to think was an amusing term for me to use! He saw no connection between laptop design and power conservation–and told me that “maybe in the old days that you remember that might’ve been true,” but no more!

        Half shocked and half disgusted I asked him to tell me when it was that laptops stopped using batteries–’cause I surely did miss it when it happened! After that I never went back to thread–it was too distressing to think of people expecting their laptops to run with the big dogs on the desktop–not gonna’ happen. Laptops, like all portable devices, bow to the gods of portability & battery life, whereas desktops worship neither. You might say that “Desktops are built to run” whereas “Laptops are built to run on a battery.”

        Anyway…you’d be surprised at the number of uninformed people out there who are buying laptops who think the only differences between a desktop and a laptop is that one’s a portable while the other isn’t. And yea, some of these folks really do think their laptop gpus should be running as fast as the most expensive discrete gpus made for desktops–that a “7000-series” AMD laptop gpu is *exactly* the same thing as a discrete 7000-series desktop gpu–and if you don’t agree with them and try to steer them on a better course they’ll just call you “old fashioned”…and keep right on trying to hammer that square peg into that round hole.

        I thought your point was a good one–but it reminded me of this anecdote. As personal computers become more and more a purely commodity market we can expect to see more of this, unfortunately. Sites like Tech-Report can be an invaluable resource for these people. Question is, though: can they read?

          • just brew it!
          • 7 years ago

          Well, even if the guy was a software developer, that doesn’t necessarily mean he knows squat about hardware. Sadly, it’s something I see far too often: software people who really have no clue how the hardware that runs their code works. I firmly believe that understanding the underlying hardware makes you a better programmer.

            • swaaye
            • 7 years ago

            Is that like electrical engineers who don’t know how to solder? 😉

            • CampinCarl
            • 7 years ago

            I’d compare it more to an EE who doesn’t understand Ohm’s Law.

    • derFunkenstein
    • 7 years ago

    Using IE10 on Windows 8, on all the game pages other than BF3 (which looks fine) the graphs that appear above the Time Spent Beyond 50/33.3/16.7ms buttons are missing. Both with and without compatibility view.

      • Cyril
      • 7 years ago

      Should be fixed now. Sorry about that.

        • derFunkenstein
        • 7 years ago

        No problem, thanks Cyril! I noticed it in a couple of older reviews today, too. Like the GTX 650Ti review was that way. They appear OK now, though. You’re a genius. 🙂

    • juampa_valve_rde
    • 7 years ago

    It’s a clear improvement (small, cheap silicon, better features, better performance), but we still have no idea how it performs in actual mobile systems. Disabling a couple of the 3770’s cores and turning down the clock speed and turbo could give a better idea of real-world performance.

    Kudos to AMD; the small GCN chip does far better with the jitter, and the hiccups can probably be fixed with some fine-tuning of the drivers.

    • mczak
    • 7 years ago

    What about throwing in some desktop GPUs?
    I don’t mean high-end, but something somewhat comparable (like a HD7750) – just to show how they compare (and how much less energy they use).
    Since you’ve got a desktop board that seems like a missed opportunity.

    • Ryszard
    • 7 years ago

    s/8770/8790/g it seems!

    • bthylafh
    • 7 years ago

    So what wrong information did you get from AMD that required changing the article?

      • Cyril
      • 7 years ago

      We were told our review samples were a Radeon HD 7670M and a Radeon HD 8770M, but they were actually a 7690M and an 8790M. Some of the information we received about pricing, positioning, and power envelopes also had to be changed.

    • Novuake
    • 7 years ago

    Excellent!!! Love the “A novel approach to mobile GPU testing”! Good review.

    • UberGerbil
    • 7 years ago

    You know, that MXM setup has some potential: with a couple of tweaks, there’s an opportunity to define an interesting variation on the ITX standard for low-profile form factors where you still want discrete graphics. A tiny motherboard with MXM and mSATA connectors (and no conventional slots whatsoever) might be a nice counter to Intel’s NUC.

      • Airmantharp
      • 7 years ago

      I think that the MXM slot is all that’s missing from the NUC :).

      Granted, the applications for such a thing are pretty limited, but it’d still be cool!

        • HisDivineOrder
        • 7 years ago

        I remember when MXM was going to herald upgradable graphics for laptops. But the laptop makers kept right on using their own proprietary solutions (and that’s if they didn’t just solder them straight on) and MXM never hit high enough usage to matter much.

        I doubt it’ll be much better in any way. Too bad.

          • Chrispy_
          • 7 years ago

          All dem extra connectors cost like $0.8 and beancounters around the world fired off emails to their boss with the subject line: MXM IS BAD FOR PROFITS, AVOID

          • Airmantharp
          • 7 years ago

          In part, they sort of have- if you have a laptop that has an MXM ‘slot’, then there is the possibility of an upgrade.

          The challenges are what you might expect though- the upgrade part will still have to fit within the enclosure’s TDP which is unlikely to be more than marginally higher than the part being replaced, which isn’t so bad, but BIOS support will likely have to be added for the new part as well, as mobile GPUs are a little more integrated with the power and integrated graphics subsystems these days.

          Any laptop with an MXM is going to also converge on the opposite side of the spectrum from ‘thin, light,’ as well as likely be more expensive, limiting their appeal and mainstream acceptance.

          The subsequent supply and demand issues definitely keep volume low and prices high, holding us where we’re at, with MXM being a niche solution.

          Unless volume increases, possibly with a greater use of MXMs in things like SFFs and AIO systems, we’re probably going to stay right where we’re at. And that’s not necessarily a bad thing since we do have access to DTR-style gaming laptops that don’t incur too much of a premium for all of the capability that they present!

    • bthylafh
    • 7 years ago

    Wait… AMD sent you a test rig with an *Intel* CPU? Their marketing department should all be fired.

      • Novuake
      • 7 years ago

      Obviously the test system is TRs own system…

        • bthylafh
        • 7 years ago

        Not unless Cyril made a mistake.

        “For our test platform, AMD sent us an MXM to PCI Express x16 adapter alongside an off-the-shelf desktop processor, motherboard, and memory.”

        • StuG
        • 7 years ago

        I had to go back and read, but it does indeed imply AMD gave them everything:

        “AMD supplied the aforementioned GPUs—the Radeon HD 8770M and 7670M—as bare MXM modules, each one with its own, dedicated cooler. For our test platform, AMD sent us an MXM to PCI Express x16 adapter alongside an off-the-shelf desktop processor, motherboard, and memory.”

          • Cyril
          • 7 years ago

          AMD did supply the motherboard and CPU, yeah.

            • brucethemoose
            • 7 years ago

            Well that’s… interesting.

            • jessterman21
            • 7 years ago

            Absolutely hilarious.

            • brute
            • 7 years ago

            AMD IS BIASED AGAINST AMD

            • slaimus
            • 7 years ago

            I guess it makes sense from a mobile standpoint, as most AMD mobile GPUs are in intel laptops. They’re trying to say: people with AMD laptops already have good integrated GPU!

      • MadManOriginal
      • 7 years ago

      Too late.

        • Spunjji
        • 7 years ago

        Ouch. Too soon? 😉

      • Deanjo
      • 7 years ago

      Na, it was the AMD branded memory that made all the difference. 😛

      • dragosmp
      • 7 years ago

      Yep, hope the marketing bonus due to this article will help the 8000M, because showing no trust in their own CPU team sure doesn’t help the image of AMD. A Phenom II w/ L3 cache or an FX 4170 could have maxed those cards

      • just brew it!
      • 7 years ago

      Most mobile x86 systems are using Intel CPUs, and the ones that aren’t are probably using the IGP on a Fusion APU instead of discrete graphics. So while it may not have been the brightest move from a marketing perspective, it does accurately reflect how most of these GPUs will be used “in the wild”.

        • Voldenuit
        • 7 years ago

        +1.

        There are practically no mid-/high-end gaming notebooks with discrete graphics using AMD CPUs. AMD sending TR an intel testboard is a credit to their integrity and knowledge of their user base.

    • UberGerbil
    • 7 years ago

    Nice. Christmas came early.
