I'll wait for TR's results before making a final judgment, but Anand's review of the A10-7850K included some (incomplete) benchmarking vs. the 4770R. Incidentally, the 4770R has a higher-end Iris Pro IGP, but the highest-end graphics parts seem to be the mobile-oriented i7-4750HQ and 4950HQ. The 4770R is more of a desktop all-in-one component with higher CPU clocks, the same 128 MB of eDRAM as the mobile parts, and a smaller L3 cache than the desktop 4770K. Since Apple received most (but not all) of the 4750HQ and 4950HQ parts, the Macs are the most widely distributed machines carrying those chips.
Anyway, using Anand's review as a guide, it looks like the 4770R is competitive with the A10-7850K, the highest-end Kaveri. There are certainly cases where AMD's IGP pulls ahead, but not by margins that mean the difference between an unplayable game and a perfect experience. I attribute a lot of that to drivers: Kaveri uses the same GCN architecture that game developers have been targeting since 2011, so it's a mature architecture, and AMD works closely with game developers in a way that Intel doesn't yet match. However, the 4770R also beats Kaveri in several of those benchmarks, and its energy footprint is substantially smaller than the full-bore A10-7850K's. The Haswell GT3+ IGP with that eDRAM is potent, and frankly it's the first "real" GPU Intel has made that doesn't feel like it was designed for basic use in an Ultrabook and then just hung around to be reused on the desktop.
What we're really seeing is a clash of philosophies:
1. AMD: We have given up on CPU performance beyond just (barely) good enough, where good enough is somewhat behind last year's i3-level dual-core Haswell parts (but with unfortunately high power draw). There's a trick, though: we bought ATI, GCN is a strong graphics architecture, and it's relatively cheap for us to lay out a bunch of GCN processor arrays on silicon where CPU cores wouldn't work as well. To make up for the deficiencies we'll come up with as many ways as possible to convince developers that the GPU should be used for as much as possible (HSA, hUMA, etc.). This has two effects: first, in the right benchmarks it makes people look away from the CPU; second, if done properly it locks software developers into our hardware platform exclusively, and we want lock-in.
Oh, and we'll sell everything "cheap," although at $190 the A10-7850K is now unfortunately competing against other AMD products that can clobber it on CPU performance, and the IGP still isn't a substitute for even an HD 7750.
2. Intel: We don't have a particularly amazing history in the graphics industry, but we do have manufacturing power and, given enough time, the money and the patience to develop strong graphics in-house; it's just not an overnight fix. Since our background is compute, the execution units in our graphics hardware are probably not as good at doing graphics as AMD's, but we have a very long history in number crunching, so pound for pound they should be pretty strong at basic compute operations. Since we are really targeting mobile, where power draw is king*, the sheer quantity of transistors we throw at graphics is nowhere near AMD's. We try to make up for some of that deficiency by tying the IGP to the CPU cores with a shared L3 cache that makes data transfer very efficient (AMD's HSA seeks to overcome this advantage).
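The data-transfer point can be sketched with a toy cost model (the numbers below are my own illustrative figures, not measurements from any review): offloading work to a GPU only pays off when the transfer overhead doesn't eat the compute speedup, which is exactly what a shared cache (Intel's L3/eDRAM path) or unified memory (AMD's hUMA) is meant to shrink.

```python
def offload_speedup(t_cpu_ms, t_gpu_ms, t_transfer_ms):
    """Speedup of running a kernel on the IGP vs. staying on the CPU.

    All timings are hypothetical and purely illustrative.
    """
    return t_cpu_ms / (t_gpu_ms + t_transfer_ms)

# Hypothetical kernel: 10 ms on the CPU, 2 ms on the GPU.
print(offload_speedup(10.0, 2.0, 6.0))  # expensive copy path: 1.25x, barely worth it
print(offload_speedup(10.0, 2.0, 0.5))  # cheap shared-memory path: 4.0x
```

The takeaway is the same one both vendors are chasing from opposite directions: the cheaper the CPU-to-GPU handoff, the smaller the piece of work that is worth offloading.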
This is all part of a gradual process of unifying CPU and GPU from the CPU side (instead of AMD's GPU side). The real results will be seen in Skylake, but the GT3+ parts in Haswell are an interesting preview of where Intel is going. The good news for Intel is that it doesn't have to make the sacrifices AMD requires: it doesn't have to go around pretending the CPU doesn't matter anymore when it clearly does. Instead, you still get a strong CPU core (which will get stronger with new versions of AVX in Skylake) while the IGP becomes much more flexible and powerful, expanding beyond what the CPU alone can do.
* The Iris Pro parts, while still capable of mobile use in full-sized notebooks, are really designed for much greater power envelopes than the Ultrabooks that the rest of Intel's IGPs target.
The biggest drawback for Intel is that its parts are still more expensive, not just because Intel wants a nice markup, but because things like eDRAM are harder to manufacture and drive up production costs. Additionally, the market for IGPs still isn't huge, since hardcore gamers will use discrete GPUs (even ones that cost only ~$100), and due to simple physics those discrete GPUs will beat the IGP no matter who makes it.
That's my take. I'm moderately interested in Mantle, although as a Linux user... where there is zero Mantle support... I'm not all that enthused. Frankly, I don't care if AMD flat-out doubles the performance of BF4; I don't trust undocumented APIs where AMD's own employees rewrote a game to "prove" Mantle is great. We'll really see whether Mantle is useful in two or three years.
4770K @ 4.7 GHz; 32GB DDR3-2133; GTX-1080; 512GB 840 Pro (2x); Fractal Define XL-R2; NZXT Kraken-X60
--Many thanks to the TR Forum for advice in getting it built.