The Damage Report

AMD attempts to shape review content with staged release of info
— 11:01 PM on September 26, 2012

Review sites like TR are a tricky business, let me say up front. We work constantly with the largest makers of PC hardware in order to bring you timely reviews of the latest products. Making that happen, and keeping our evaluations fair and thorough, isn't easy in the context of large companies engaging in cutthroat competition over increasingly complex technologies.

I know for a fact that many folks who happen across TR's reviews are deeply skeptical about the whole enterprise, and given everything that goes on in the shadier corners of the web, they have a right to be. That said, we have worked very hard over the years to maintain our independence and to keep our readers' interests first among our priorities, and I think our regular audience will attest to that fact.

At its heart, the basic arrangement that we have with the largest PC chip companies is simple. In exchange for early access to product samples and information, we agree to one constraint: timing. That is, we agree not to post the product information or our test results until the product's official release.

That's it, really.

There are a few other nuances, such as the fact that we're released from that obligation if the information becomes public otherwise, but they only serve to limit the extent of the agreement.

In other words, we don't consent to any other constraint that would compromise our editorial independence. We don't guarantee a positive review; we don't agree to mention certain product features; and we certainly don't offer any power over the words we write or the results we choose to publish. In fact, by policy, these companies only get to see our reviews of their products when you do, not before.

If you're familiar with us, we may be covering well-trodden ground here, but bear with me. Our status as an independent entity is key to what we do. Most of the PR types we work with tend to understand that fact, so we usually get along pretty well. There's ample room for dialog and persuasion about the merits of a particular product, but ultimately, we offer our own opinions. In fact, the basic arrangement we have with these firms has been the same for most of the 13 years of our existence, even during the darkest days of Intel's Pentium 4 fiasco.

You can imagine my shock, then, upon receiving an e-mail message last week that attempted to rewrite those rules in a way that grants a measure of editorial control to a company whose product we're reviewing. What AMD is doing, in quasi-clever fashion, is attempting to shape the content of reviews by dictating a two-stage plan for the release of information. In doing so, they gain a measure of editorial control over any publication that agrees to the plan.

In this case, the product in question is the desktop version of AMD's Trinity APUs. We received review samples of these products last week, with a product launch date set for early October. However, late last week, the following e-mail from Peter Amos, who works in AMD's New Product Review Program, hit our inbox:

We are allowing limited previews of the embargoed information to generate additional traffic for your site, and give you an opportunity to put additional emphasis on topics of interest to your readers. If you wish to post a preview article as a teaser for your main review, you may do so on September 27th, 2012 at 12:01AM EDT.

The topics which you are free to discuss in your preview articles starting September 27th, 2012 at 12:01AM EDT are any combination of:

- Gaming benchmarks (A10, A8)

- Speeds, feeds, cores, SIMDs and branding

- Experiential testing of applications vs Intel (A10 Virgo will be priced in the range of the i3 2120 or i3 3220)

- Power testing

We believe there are an infinite number of interesting angles available for these preview articles within this framework.

We are also aware that your readers expect performance numbers in your articles. In order to allow you to have something for the preview, while maintaining enough content for your review, we are allowing the inclusion of gaming benchmarks.

By allowing the publication of speeds, feeds, cores, SIMDs and branding during the preview period, you have the opportunity to discuss the innovations that AMD is making with AMD A-Series APUs and how these are relevant to today’s compute environment and workloads.

In previewing x86 applications, without providing hard numbers until October [redacted], we are hoping that you will be able to convey what is most important to the end-user which is what the experience of using the system is like. As one of the foremost evaluators of technology, you are in a unique position to draw educated comparisons and conclusions based on real-world experience with the platform.

The idea here is for AMD to allow a "preview" of the product that contains a vast swath of the total information that one might expect to see in a full review, with a few notable exceptions. Although "experiential testing" is allowed, sites may not publish the results of non-gaming CPU benchmarks.

The email goes on to highlight a few other features of the Socket FM2 platform before explaining what information may be published in early October:

The topics which you must be held for the October [redacted] embargo lift are:

- Overclocking

- Pricing

- Non game benchmarks

The email then highlights each of these topic areas briefly. Here's what it says about the temporarily verboten non-gaming benchmarks:

Non game benchmarks
- Traditional benchmarks are designed to highlight differences in different architectures and how they perform. We understand that this is a useful tool for you and that your readers expect to see this data. The importance of these results is in your evaluation, as the leading experts, of what these performance numbers mean. We encourage you to use your analysis if you choose to publish a preview article and if you find that to be appropriate to your approach to that article. The numbers themselves must be held until the October [redacted] embargo lift. This is in an effort to allow consumers to fully comprehend your analysis without prejudging based on graphs which do not necessarily represent the experiential difference and to help ensure you have sufficient content for the creation of a launch day article.

Now, we appreciate that AMD is introducing this product in an incredibly difficult competitive environment. We're even sympathetic to the idea that the mix of resources included in its new APU may be better suited to some usage patterns, as our largely positive review of the mobile version of Trinity will attest. We understand why they might wish to see "experiential testing" results and IGP-focused gaming benchmarks in the initial review that grabs the headlines, while deferring the CPU-focused benchmarks to a later date. By doing so, they'd be leading with the product's strengths and playing down its biggest weakness.

And it's likely to work, I can tell you from long experience, since the first article about a subject tends to capture the buzz and draw the largest audience. A second article a week later? Not so much. Heck, even if we hold back and publish our full review later (which indeed is our plan), it's not likely to attract as broad a readership as it would have on day one, given the presence of extensive "previews" elsewhere.

Yes, AMD and other firms have done limited "preview" releases in the past, where select publications are allowed to publish a few pictures and perhaps a handful of benchmark numbers ahead of time. There is some slight precedent there.

But none of that changes the fact that this plan is absolutely, bat-guano crazy. It crosses a line that should not be crossed.

Companies like AMD don't get to decide what gets highlighted in reviews and what doesn't. Using the review press's general willingness to agree on one thing—timing—to get additional control may seem clever, but we've thought it over, and no. We'll keep our independence, thanks.

The email goes on to conclude by, apparently, anticipating such a reaction and offering a chance for feedback:

We are aware that this is a unique approach to product launches. We are always looking at ways that we can work with you to help drive additional traffic to your articles and effectively convey the AMD message. We strive to provide the best products in their price points, bringing a great product for a great price. Please feel free to provide feedback on what you find, both with the product and with your experience in the AMD New Product Review Program. We try to ensure that we are providing you what you need and appreciate any feedback you have to offer on how we can do better.

I picked up the phone almost immediately after reading this paragraph and attempted to persuade both Mr. Amos and, later, his boss that this plan was not a good one. I was told that this decision was made not just in PR but at higher levels in the company and that my objections had been widely noted in internal emails. Unfortunately, although fully aware of my objections and of the very important basic principle at stake, AMD decided to go through with its plan.

Shame on them for that.

You may see desktop Trinity "previews" at other websites today that conform precisely to AMD's dictates. I'm not sure. I hope most folks have decided to refrain from participating in this farce, but I really don't know what will happen. I also hope that any who did participate will reconsider their positions after reading this post and thinking about what they're giving up.

And I hope, most of all, that the broader public understands what's at stake here and insists on a change in policy from AMD.

If this level of control from companies over the content of reviews becomes the norm, we will be forced to change the way we work with the firms whose products we review. We will not compromise our independence. We believe you demand and deserve nothing less.

Update: AMD has issued a statement on this matter.


A look at TR's new GPU test rigs
— 12:22 PM on February 24, 2012

As I mentioned on the podcast this week, I have been working to re-fit Damage Labs with new hardware all around. Since I test desktop GPUs, desktop CPUs, and workstation/server CPUs, I have a number of test rigs dedicated to each area. Our desktop CPU and GPU systems have been the same for quite some time now. Heck, my massive stable of 30+ CPU results dates back to the Sandy Bridge launch. However, as time passes, new hardware and software replaces the old, and we must revamp our test systems in order to stay current. Oddly enough, we've just hit such an inflection point in all of the types of hardware I test pretty much at the same time. Normally, these things are staggered out a bit, which makes the change easier to manage.

Fortunately, though, I've been making solid progress on all fronts.

The first of my test rigs to get the treatment are my two graphics systems—identical, except one is dedicated to Nvidia cards and the other to AMD Radeons, so we can avoid video drivers for one type of GPU causing problems for the other. Also, I can test two different configurations in parallel, which really helps with productivity when you're running scripted benchmarks and the like.

The old GPU rigs were very nice X58 systems that lasted for years, upgraded along the way from four cores to six and from hard drives to SSDs. They're still fast systems, but it was time for a change. Let me give you a quick tour of our new systems, and we'll talk about the reasons for the upgrade.

Behold, the new Damage Labs GPU test rig. Innit pretty? In the past, our open-air test rigs have sat on a motherboard box, with the PSU sitting on one side and the drives out front. This system, however, is mounted in a nifty open-air case that the folks at MSI happened to throw into a box with some other hardware they were shipping to us. I was intrigued and put the thing together, and it looks to be almost ideal for our purposes. I'm now begging MSI for more. If we can swing it, we may even give away one of these puppies to a lucky reader. That may be the only way to get one, since this rack apparently isn't a commercial product.

Here are a few more shots from different angles.

Nifty and pretty tidy, all things considered. Even takes up less room on the test bench.

Now, let's talk specs. I had several goals for this upgrade, including the transition to PCI Express 3.0, a lower noise floor for measuring video card cooler acoustics, and lower base system power draw. I think the components I've chosen have allowed me to achieve all three.

CPU and mobo: Intel Core i7-3820 and Gigabyte X79-UD3 - The X79 platform is currently the only option if you want PCIe 3.0 support. Of course, even after Ivy Bridge arrives with PCIe Gen3 for lower-end systems, the X79 will be the only platform with enough PCIe lanes to support dual-x16 or quad-x8 connectivity for multi-GPU rigs.

Obviously, the conversion to PCIe 3.0 essentially doubles the communications bandwidth available, but that's not all. The integration of PCIe connectivity directly into the CPU silicon eliminates a chip-to-chip "hop" in the I/O network and should cut latency substantially, even for graphics cards that only support PCIe Gen2.
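As a rough sanity check on the "doubles the bandwidth" claim, here's the standard back-of-the-envelope PCIe arithmetic. This is a sketch using the commonly cited spec rates and encoding overheads, not measurements from these systems:

```python
# Rough per-direction PCIe bandwidth, in GB/s. Rates and encodings are the
# commonly cited spec values: Gen2 runs at 5 GT/s with 8b/10b encoding,
# Gen3 at 8 GT/s with the much leaner 128b/130b encoding.
def pcie_bandwidth_gbps(transfer_rate_gt, encoding_efficiency, lanes):
    """Effective one-direction bandwidth: each transfer moves 1 bit per lane."""
    return transfer_rate_gt * encoding_efficiency * lanes / 8  # bits -> bytes

gen2_x16 = pcie_bandwidth_gbps(5.0, 8 / 10, 16)
gen3_x16 = pcie_bandwidth_gbps(8.0, 128 / 130, 16)

print(round(gen2_x16, 2))  # 8.0
print(round(gen3_x16, 2))  # 15.75
```

The near-doubling comes from the higher signaling rate plus the switch from 8b/10b to 128b/130b encoding, which cuts the line-code overhead from 20% to under 2%.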

The Core i7-3820 is the least expensive processor for the X79 platform, making it an easy choice. Yes, we've dropped down a couple of cores compared to our prior-gen GPU rigs. That's partly because I didn't want to get too far into exotic territory with these new systems. With four cores and a Turbo peak of 3.8GHz, the Core i7-3820 should perform quite similarly to a Core i7-2600K in cases where the X79 platform's additional bandwidth is no help.

We did want to be able to accommodate the most extreme configurations when the situation calls for it, though. That's one reason I selected Gigabyte's X79-UD3 mobo for this build. Even some of the more expensive X79 boards don't have four physical PCIe x16 slots onboard like the UD3 does. Those slots are positioned to allow four double-width cards at once, making the UD3 nearly ideal for this mission.

Cramming in all of those slots and the X79's quad memory channels is no minor achievement, and it did require some compromises. The UD3 lacks an on-board power button, a common feature that's only important for, well, open-air test rigs like this one. Also, the spacing around the CPU socket is incredibly tight. With that big tower cooler installed, reaching the tab to release the retention mechanism on the primary PCIe x16 slot is nearly impossible. I had to jam part of a zip tie into the retention mechanism, semi-permanently defeating it, in order to make card swaps easier.

Still, I'm so far pleased with Gigabyte's new EFI menu and with the relatively decent power consumption of the system, which looks to be about 66W at idle with a Radeon HD 7970 installed. That's roughly 40W lower than our prior test rigs, a considerable decrease.

Memory: Corsair Vengeance 1600MHz quad-channel kit, 16GB - If you're going X79, you'll need four fast DIMMs to keep up, and Corsair was kind enough to send out some Vengeance kits for us to use. Setup is dead simple with the built-in memory profile, supported by the UD3.

PSU: Corsair AX850 - Our old PC Power & Cooling Silencer 750W power supplies served us well for years, but they eventually developed some electronics whine and chatter under load that interfered with our acoustic measurements. It was time for a replacement, and the wonderfully modular Corsair AX850 fit the bill. Although 850W may seem like overkill, we had some tense moments in the past when we pushed our old 750W Silencers to the brink. I wanted some additional headroom. It didn't hurt that the AX850 is 80 Plus Gold certified, and I think the nice reduction we've seen in system-wide idle power draw speaks well of this PSU's efficiency at lower loads. (In fact, when the 7970 goes into its ZeroCore power mode, system power draw drops to 54W.) Even better, when load is 20% or less of peak, the AX850 completely shuts down its cooling fan. That means our idle acoustic measurements should be entirely devoid of PSU fan noise.
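That fan-off behavior is easy to sanity-check: 20% of the AX850's rating puts the spin-up threshold far above our measured idle draws. A quick sketch, with the threshold fraction taken from Corsair's stated spec and the idle figures from above:

```python
# Corsair's spec: the AX850 fan stays off at or below 20% of rated load.
PSU_RATED_W = 850
FAN_OFF_FRACTION = 0.20

fan_spinup_threshold_w = PSU_RATED_W * FAN_OFF_FRACTION  # 170.0 W

# System-wide idle draws measured above: normal idle and ZeroCore idle.
for idle_draw_w in (66, 54):
    # Both sit far below the threshold, so the PSU fan never spins at idle.
    assert idle_draw_w < fan_spinup_threshold_w

print(fan_spinup_threshold_w)  # 170.0
```

With the whole system idling at well under half the fan threshold, PSU noise drops entirely out of our idle acoustic measurements.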

CPU cooler: Thermaltake Frio - The original plan was to use Thermaltake's massive new Frio OCK coolers on these test rigs, but the OCK literally would not fit, because the fans wouldn't allow clearance for our relatively tall Vengeance DIMMs. That discovery prompted a quick exchange with Thermaltake, who sent out LGA2011 adapter kits for the older original Frio coolers we had on hand. Although the original Frio isn't that much smaller than the OCK version, we were able to shoehorn a Frio in a single-fan config into this system. The fan enclosure does push up against one DIMM slightly, but that hasn't caused any problems. With a cooler this large, we can keep the fan speed cranked way down, so the Frio is blessedly quiet, without the occasional pump noise you get from the water coolers often used in this class of system.

Storage: Corsair F240 SSD and some old DVD drive - The F240 SSD was a fairly recent upgrade to our old test rigs, and it's one of the two components carried over from those systems, along with the ancient-but-still-necessary DVD drive for installing the handful of games we haven't obtained digitally. The biggest drawback to the SSD? Not enough time to read the loading screens between levels sometimes.

That's about it for the specs. I'm very pleased with the power and noise levels of these new systems. The noise floor at idle on our old test rigs, with the meter perched on a tripod about 14" away, was roughly 34 dB. I'm hoping we'll be able to take that lower with these systems, although honestly, driving too far below that may be difficult without a change of environments. Our basement lab is nothing special in terms of acoustic dampening and such. We'll have to see; I haven't managed to squeeze in a late-night acoustic measurement just yet.

For what it's worth, we have considered using a system in a proper PC case for acoustic and thermal measurements, but that hasn't worked out for various reasons, including the sheer convenience for us, typically rushing on some borderline-abusive deadline, of being able to swap components freely. We also have concerns about whether a case will serve to dampen the noise coming from the various coolers, effectively muting differences on our meter readings that the human ear could still perceive. We may still investigate building a dedicated, enclosed acoustic/thermal test rig in the future, though. We'll see.

Now that the new Damage Labs GPU test rigs are complete, I'm sadly not going to be able to put them to use immediately. I have to move on to testing another type of chip first. I'll get back here eventually, though. I still need to test Radeon HD 7900-series CrossFire, and I understand there are some other new GPUs coming before too long, as well.


CES 2012: the shape of things to come?
— 11:08 AM on January 17, 2012

One of the funny things about going to CES is that you're expected to be plugged into the overall vibe of the show, so you can return and tell your friends and family about "what's hot" in technology. As a journalist, that's especially true, because we have access to press events, show previews, and the like. The trouble is, as I've explained, CES for us is an endless parade of meetings, cab rides, rushed walks, and foot pain. The time we spend on the show floor itself is minimal and mostly involves rushing to that next meeting. Beyond that, we simply don't cover the entire span of consumer electronics and don't get much insight into what's happening in the broader market there—not that, given the scope of CES, any one person or small team really could.

One can catch the vibe of CES in various ways, though. I've already offered my take on the state of the PC industry at CES 2012, which was more about following Apple's template than bold innovations, somewhat unfortunately. In other areas, a few highlights were evident as we rushed through the week.

One new creation that stood out easily at the press-only Digital Experience event was Samsung's amazing demo unit: a 55" OLED television.


This puppy was big and bright, even in the harsh lighting of the MGM Grand ballroom. The most striking thing about it to me, on first glance, was how impossibly thin the bezels were around its edges. To my eye, which has been frequently exposed to various Eyefinity demo rigs and display walls, the sheer thinness of the frame around the screen was jarring—in a good way. After that, one noticed other nice things about this OLED monster versus the average display: near-perfection at difficult viewing angles, amazing brightness and contrast, and much truer blacks than you'd see on an LCD. Unfortunately, this display is still far from being a true consumer product. We didn't get a price tag from the Samsung rep on hand, but the number $50,000 was thrown around only semi-jokingly. If you wanted to see something wondrous from the future at CES 2012, though, the display itself certainly qualified.

Another way you can catch the tech vibe at CES is simply observing the attendees. That's been a reliable method on many fronts, from the number of folks there to the gear they're carrying. In years past, CES has been all about iPhones and an utterly, laughably jammed AT&T network, unable to service 'em all. iDevices were again everywhere at CES 2012—I'd put the iPhone ownership among attendees at somewhere around 50%, easily—but what impressed me this year was the apparent consolidation of non-Apple phones. That contingent didn't consist of a host of smartphones of various types or even a varied selection of Android-based phones. Instead, it seemed like virtually all of the cool kids were toting one of two devices: a Samsung Galaxy S II or a Galaxy Note. Those big, bright screens and thin enclosures were everywhere, and one had to do a double-take at times: does that dude have a really small head, or is he using a Galaxy Note as a phone? Or, you know, perhaps both? In a world where two-year contracts tend to define when a smart phone upgrade makes sense, it's amazing how many CES attendees had upgraded to one of Samsung's new offerings in recent months. Also impressive was how much those big screens and thin cases looked like the future, and how much the tiny little iPhone 4/4S display looked like the past.

CES attendance is also considered something of a bellwether for the tech economy or even the economy as a whole. In 2009, as the wheels were coming off of the banking system, attendance at the show dropped dramatically. I was there, and although things felt a little lonely in the convention center, the upside was most evident: no cab lines, no pressing crowds, few waits at restaurants. Recovery was slow and incremental. The show felt like it was back in force last year, and this year, the crush of people was as inconvenient as anything since 2008, probably up a bit from 2011.

One thing that hasn't changed much is the state of Las Vegas itself. For a number of years, we had the fun task of scoping out the latest massive new casino hotels as they opened up, from Paris and the Venetian to the Wynn and Aria and so on. In 2009, though, commercial building loans dried up, construction stopped, and half-completed structures sat idle, some partially built with cranes atop them. Some still sit that way. One of the more memorable examples was the frame of a new tower for the Venetian, left sitting exposed to the elements for years, obviously rusting. That always stuck out at me, an odd contrast to the bustling activity of the Venetian below.

This year, while approaching the Venetian for the second or third time, I realized I hadn't noticed the half-completed tower yet. That's when I looked up and saw this:

Yep, they've wrapped the rusting frame of the tower in a plastic shroud, colored to look like the buildings around it. That, my friends, is more like what I'd expect from Las Vegas. Let that structure rust in obscurity while giving us the approximation of something better.


Some thoughts on Rage
— 10:54 AM on October 27, 2011

Well, I finished Rage last night. I have to say that I enjoyed it quite a bit, mostly because, at heart, it's a very solid shooter. A number of missions in the very middle of the game, especially those starting from Wellspring, offer an excellent mix of varied environments, interesting enemies, and near-ideal shooter mechanics. Those really pulled me in. I was less thrilled by the game's beginning and ending, especially since the very ending felt rushed, as if, like too many games, it had run out of time and budget to make the final battle as epic as those in the middle. Since I had a lot of fun with Rage, and since it's nearly a genre convention these days for a game to end weakly, I can forgive that sin, although the sense of wasted potential is a little saddening.

As I told a friend the other night, I have two thoughts on wingsticks. First, they're a barrel of fun, with a very tangible sense of the ostensible physics involved and excellent animations to go along with them. Winging one of these puppies at a bad guy and watching him take damage is ridiculously satisfying—an instant FPS classic. Second, wingsticks are an obvious concession to the lack of precise control on console gamepads. They're too easy to aim and too powerful in terms of damage dealt, especially since they're so easily interspersed with weapons fire. Wingsticks thus drain much of the drama and challenge out of Rage. Never would have happened if this were a PC-first title. Took me a while to come around to that second line of thinking, but once I did, I couldn't shake that impression.

I said on the last podcast that I had to get over the fact Rage is not Borderlands, and I mostly was able to do so. Rage is smaller, more linear, and more of a pure shooter than Borderlands. Although it has a more limited number of weapons, there are actually much more varied options for creative killin' in Rage thanks to different ammo types and devious devices like the RC bomb cars and sentry bots. I've never gotten into the crazy alt-weapons options in games like BioShock because they just didn't suit me—seemed too contrived, slow, and clumsy compared to, you know, a gun. In Rage, I took special glee in dispatching bad guys with dynamite bolts and other such contrivances, even when they weren't the fastest, because the hilarious carnage was reward enough.

Still, the player limitations built into this game are sometimes frustrating just because they don't seem necessary. You can rarely go off of the intended path, even if doing so would only require stepping over a small brick. You might as well be trying to jump over a skyscraper, for all the good trying will do you. Then, in the final level, I came to what seemed like the obvious and only place to move forward in a small hallway, and it was blocked by a fairly large metal crate (imagine that). Immediately, I backtracked and searched every prior inch of the level looking for another way through. When I found nothing, I went back and considered blowing up the crate or finding some other option. Eventually, to my utter and bewildered shock, I was able to jump over this crate, something the entire rest of the game leading up to that point had meticulously taught me was impossible. Really strange.

Also, the wasteland is too prickly, dangerous, and barren to make freelance exploration rewarding. This game should have been a true open-world affair that encourages improvisation and discovery. The bones are there, but the flesh is not. I know it's not Borderlands, and I swear I'm OK with that, but I still wish Rage was the game it promised to be, the game it cries out to be. Here's hoping for more freedom in Rage 2.

Most of this talk sounds negative, but really, I'm mostly just wishing for more quality time spent in the world of Rage—and, heh, perhaps less spent on silly minigames like the knife thing. The silky-smooth id Software shooter mechanics are as good as ever, and there is a potent mix at work here. The game engine allows for unique textures to be painted on every object in the world, and the game's creative types have taken that ball and run with it, creating levels that are more detailed, varied, interesting, and realistic than anything we've seen before, in a way. Meanwhile, the visuals and action unfold at a constant march of 16 milliseconds per frame (60 FPS if you average it out, but the smoothness is immediate and consistent). Other games may run at high frame rates, but few look this good and move this smoothly all of the time. The fact that this is one of the handful of contemporary games to get multisampled edge antialiasing right, on almost every edge in every scene, also helps immensely. Taken together, these things add up to a full-motion animated experience that's more immersive than most games—and that is perfect for an action game like this one.
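Frame time and frame rate are simply reciprocals, which is why the 16-millisecond figure above works out to roughly 60 FPS. A quick sketch of the arithmetic (helper names are my own):

```python
def frame_time_to_fps(frame_time_ms):
    """Convert a per-frame render time in milliseconds to frames per second."""
    return 1000.0 / frame_time_ms

def fps_to_frame_time(fps):
    """Convert a frame rate to the per-frame time budget in milliseconds."""
    return 1000.0 / fps

print(round(frame_time_to_fps(16.0), 1))  # 62.5 -- "16 ms" is shorthand
print(round(fps_to_frame_time(60.0), 2))  # 16.67 -- the exact 60Hz budget
```

A game that delivers every frame within that roughly 16.7 ms budget hits each refresh of a 60Hz display, which is why the motion feels consistently smooth rather than merely fast on average.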

Screenshots don't capture the experience, and what they do capture is Rage's one great visual weakness: a frequent lack of texture detail once you get too close to most objects. That weakness is unfortunate, and I understand a patch is coming with a "detail texture" addition that should at least partially alleviate the problem. Even without a fix, though, the in-motion visuals this game slings out can rival or surpass anything else on the PC, with the likely exception of BF3's single-player campaign. Some of the scenes in the game are incredible. Even though they're static, I did grab a few screenshots in places as I played through Rage, and I've put them into the gallery below. Be sure to click the "View full size" button if you'd like to see a scene in its full 2560x1600 8X MSAA glory.


Live blog: IDF 2011 Justin Rattner keynote
Time for an update on Intel R&D
— 11:05 AM on September 15, 2011

Time for one last live blog from the Intel Developer Forum in San Francisco.  Today Justin Rattner gives an update on the future: Intel's R&D efforts.

We're starting with a cheesy video, in which the military has discovered a 48-core chip and is deeply concerned.

They've found the chip's programmer and are interrogating him about how the heck he parallel programmed the chip.  He explains there are lots of tools now, even JavaScript!


Ladies and gentlemen, please welcome Justin Rattner.

And he's wearing a beret!  Nice.

He offers us his Mooly impersonation.  Which is... short.

Five years ago today, I stood on this stage to give the opening keynote. It's usually the CEO thing, and I'm not vying for the CEO thing, so have no fear.  But I was onstage to introduce the Core architecture.  At that time, we talked about slowing down some cores to speed up others.  We've come a very long way in five years.

We now have heterogeneous processors, with GPUs onboard.  Soon, Intel will introduce Knight's Corner, Intel's first many core general-purpose processor.

Programming the cores is easy.  Familiar memory model, familiar instruction set.

We also launched our terascale effort, Intel's many-core research program.  We've built a few experimental processors in Intel Labs: the 80-core chip and the 48-core single-chip cloud computer, which tests out what a future server processor might look like.

And we've been busy creating better tools for programming many-core architectures.

We haven't limited our scalability testing for many-core to HPC apps.  We're testing lots of different types.  Across this very large range of applications, we're seeing excellent scalability.  I don't think there's anything on this chart with less than a 30X speedup.  That's given us a lot of confidence that people are going to be able to put this architecture to work and take performance to higher levels.

Andrzej Nowak, a particle physicist from CERN openlab, is onstage to talk with us.  He works on optimizing performance for many-core.

Let's talk about the large hadron collider.  It operates at a temp cooler than outer space, at 1.8 Kelvin.  And... we're going to see a video about it.

Currently, yearly data production from the LHC is over 15 petabytes.  The real challenge is processing the data later.  We've built a grid with 250K Intel processing cores.  It's spread across the world.

Processing takes place in four major domains.  We simulate the physics and see if the behavior in the collider matches with what we know.

We can use the same toolset we use for Xeon with the MIC, which is nice.  Here's a look at some code that Intel helped us visualize.

We use a single MIC core here and can visualize the program running.  Now, we've engaged all 32 cores of the MIC on this other machine.  What would take minutes on a single core takes seconds on the MIC.

We're looking forward to further versions of the MIC, and we'll take as many cores as we can get.

Rattner: Make me one promise.  Don't make any little black holes that suck us all in.

Openlab dude: Ok, if you promise me more cores.


And the Openlab guy is finished.

Rattner: Do you have to be a ninja programmer to write many-core applications?

Loud gong sound echoes through the hall.  To no laughter whatsoever.

Don't worry.  We're not going to bring ninjas back onstage.

(Phew.  Can we banish additional ninja jokes, too?  Parallelism isn't always good.)

Billy is here to talk about improving the access of large-scale content in the cloud.

Billy says what folks have done with legacy databases for the cloud is just moving the entire database into RAM.

Best transaction rates today are about 560K transactions/second.  With MIC, we can go over 800K queries/sec with lower latency.

Brendan Eich, CTO of Mozilla, is onstage to talk about parallel JavaScript.  He created JavaScript.

"I had 10 days in May 1995 to make it." Has grown into a mature, widely used language.

Javascript so far on the client is predominantly sequential, but we'd like to use multiple cores.  Tatiana is going to show us parallel Javascript.  River Trail is the code name of the work.  It's running a physical simulation at 3 FPS on one thread.  Pop to all of the cores, and it speeds up to 45 FPS.

She says using this should be "quite easy" because it's just JavaScript extended to add parallelism in an easy way.  Is available to developers on
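(For the curious: River Trail's proposed API centered on a ParallelArray type whose methods, like map, take side-effect-free elemental functions the runtime can fan out across cores.  Here's a rough sketch of the programming model, using a hypothetical sequential shim of my own in place of the real parallel runtime:)

```javascript
// Hypothetical sequential shim illustrating a River Trail-style
// ParallelArray API. The real runtime would run map across cores;
// this version just shows the shape of the programming model.
class ParallelArrayShim {
  constructor(data) {
    this.data = Array.from(data);
  }

  // The elemental function passed to map must be side-effect free,
  // which is what lets a parallel runtime apply it in any order.
  map(fn) {
    return new ParallelArrayShim(this.data.map(fn));
  }

  get(i) {
    return this.data[i];
  }
}

// Example: scale a vector, expressed as a data-parallel map.
const pa = new ParallelArrayShim([1, 2, 3, 4]);
const doubled = pa.map(x => x * 2);
console.log(doubled.data); // [2, 4, 6, 8]
```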

Brendan says he's going to promote this at standards bodies.

Moving on, we're asking whether an LTE base station can be built out of a multi-core PC.  Had an idea, along with our friends at China Mobile, that it might be possible to turn a standard PC into a base station.  Entered into an agreement with them two years ago in order to do this.  Architecture is interesting.  Key idea is that the cell tower is just the RF front end.  Radio signals are digitized, moved over a fiber network to a data center.  It's kind of like base stations in the cloud.

Dude from China Mobile is onstage to demo.  He's armed with a quad-core Sandy Bridge desktop and a pretty thick accent.  Says they're using AVX instructions to do signal processing.  With lots of optimization, can handle real-time requirements.  And the workload isn't even using all of the computing power.

Rattner: Tell me how you dealt with the real-time issues.

Software is real-time patched Linux.  It's able to respond in about 3 ms.  Can stream video over it.  Next year, will begin field trials with China Mobile.

Rattner: We're not just looking at base stations, but high-volume network equipment and switches using standard components from computers.

Now, Dave is going to tell us how we can use the power of multi-core for security.

Folks have perked up, sitting on the edge of their seats.

(Kidding.  Kidding.)

Dave says "I love this demo, because you can actually see the cryptography."  There are.. pictures of people onscreen.  Not sure I see the crypto.

Oh, some pictures look like static.

They use a webcam as a biometric security gate to determine which pictures the user can access.  Changes depending on who's on camera, using facial recognition.

Dave's finished, and Justin reminds us you don't need to be a ninja programmer for MIC programming.

What lies beyond multi-core computing?  Extreme scale computing.

Our 10-year goal is to achieve a 300X improvement in energy efficiency for computing.  Equal to 20 picojoules per FLOP at the system level.
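(Quick back-of-the-envelope check on that target, with my arithmetic rather than Intel's: 20 pJ per FLOP works out to roughly 20 MW for an exaflop-class system, and the 300X figure implies today's baseline is somewhere around 6 nJ per FLOP:)

```javascript
// Back-of-the-envelope check of the extreme-scale efficiency target.
const targetJoulesPerFlop = 20e-12; // the stated 20 picojoules per FLOP
const exaflops = 1e18;              // one exaflop per second

// Energy per op times ops per second gives system power.
const systemPowerWatts = targetJoulesPerFlop * exaflops;
console.log(systemPowerWatts / 1e6); // ~20 (megawatts)

// Working backward from the stated 300X improvement gives
// the implied present-day baseline.
const impliedBaseline = targetJoulesPerFlop * 300;
console.log(impliedBaseline / 1e-9); // ~6 (nanojoules per FLOP today)
```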


Extreme scale guru whose name I missed is here to talk about... extreme scale computing.

Today, we operate a transistor at several times its threshold voltage.  One thing we can do is reduce the supply voltage and bring it closer to threshold.

Claremont: a Pentium-class processor running near threshold voltage.  This is the one from the solar-powered system demo on Tuesday.  We're operating within a couple of hundred millivolts of threshold voltage. 

This is a 5X improvement in power efficiency, but could have gotten ~10X with a newer core.

It's so old, we went on eBay looking for a Pentium motherboard for it.

How do we turn this into a higher performance system?  Scales to over 10X the frequency when running at nominal supply voltage.

It's running Quake!  Slowly.  Heh.

So we could see future ultra-low-power devices with a wide dynamic operating range.

New prototype: hybrid DRAM stack.  About 4-8 can be stacked.  These are 4-high.  Stacked mem is very high efficiency.  A terabit per second demo, supposedly very energy efficient, but I don't see any info on voltage or power draw.  Hrm.

And we're finished with the cool power guru.

Rattner: What we've been talking about today is the future.  We have something called the Tomorrow Project.  Brian David Johnson, our futurist, is gonna talk about it... on video.

Voiceover with lots of graphics that look like Tron.  Although no light cycles. :(

We're talking to dignitaries, thinkers, sci-fi authors.  Want to invite you all to join the conversation by visiting the website for it.

"If you can dream it, we can invent it together.  Thank you, and see you next year."

Annnnnd, that's that.  We'll take one of those near-threshold-voltage computers for review, please.  Thanks.

4 comments — Last by ronch at 1:53 AM on 09/22/11

Live blog: IDF 2011 Mooly Eden keynote
Ultrabooks take the stage
— 11:04 AM on September 14, 2011

Join us as we offer a live account of Mooly Eden's day-opening keynote from the Intel Developer Forum 2011 in San Francisco.  Reload to refresh, in keeping with our incredibly high-tech real-time operating procedures.

Johan is back, sounding for all the world like Arnold Schwarzenegger, opening the festivities.

And here's Mooly!

...after a video, I guess.  There are babies and stuff.  Soft music.  So hopeful.  Geoff wipes a tear from the corner of his eye.

 "PCs will continue to inspire all of us to make something wonderful."

LOUD music, and the beret is onstage!

Mooly's talking about how amazingly huge the PC market is.

"Emerging markets are on fire."  China has surpassed US as a consumer of PCs, and Brazil is #3.

 "Let me remind you, the personal computer has been the most adaptable device."  "Its form and function is constantly evolving."

In 1995, the Pentium MMX marked the transition from enterprise to consumer device.

Eight years later was another transformation, with the Centrino.  Mobility. Interestingly enough, eight years later, our customers still want us to excel on this mobility vector.

And eight years later, we are transforming things with the Ultrabook.  It will be a consumption device, but also a creation device.

Today, people use their PCs in several ways.  There is debate: which is more important?  The CPU? The GPU? Media?  I think the best thing is to map applications and usage.  In some cases, it's CPU.  In some cases, it's GPU.  Actually, your experience is not defined by the best component in the mix, but the worst one.  The magic is to deliver a balanced system.  That's what we tried to do with Sandy Bridge. 

Annnnnd... demo time.  Content creation.  Picasa 3 with Task Manager showing eight threads.  Going to compare, I think, an Ultrabook to a three-year-old Core 2 Duo system.  Combining three images into a single HDR one.  Showing the before and after images.  Wow, HDR is so.... HDR-ish!

Now we're demoing a CyberLink video editing tool.  And now the Ray-Ban website with a virtual tool that lets you see different glasses types on a representation of your face.  Uncanny valley, meet high style.

Mooly: "All right, Ivy Bridge."  Ivy Bridge has 1.48 billion transistors.  Remember the number.  "Those of you who are trying to take pictures of this beautiful die, I played with this.  It's not the real one."  Hmm.. looks like a quad core.  But will there be a quad Ivy Bridge?  Intel is kinda being cagey here.

Mooly says Ivy Bridge is pin-compatible with Sandy.

Now he's talking about interrupt handling.  2.5K interrupts per second from a Gigabit NIC. 3K from USB.  With Ivy, rather than waking a sleeping core to handle interrupts, the interrupts can be routed to the active core in order to save power, extend battery life.

DX11 is going to be available on all our PCs.  We improved geometry throughput, shader array, sampling throughput. Those of you who have been surprised by Sandy Bridge graphics will be delighted by Ivy's.

Ivy Bridge demo time!

Display driver has stopped responding and recovered.  Doh!!

Swapped to another demo.  And now we're running HAWX 2 with tessellation on Ivy.  Looks nice and fairly smooth.

We've been focusing on user experience.  Actually talking with anthropologists, psychologists, which is weird.  Asking: "What do people want out of their computing?"

Bringing David, a marketing manager, onstage to talk about this one.

David says we want to satisfy our left-brain side and our right-brain side.  Left brain: We want to be productive and get things done. Learn and advance ourselves.  Be in control, safe, and secure.  Right brain: We want to create, to connect and share, and to lose ourselves in seamless, immersive experiences. Is there one device that can satisfy all of those things?

Hmm, perhaps something based on an Intel chip?!

David is telling us how an Ultrabook might meet each of his criteria.  And he's finished.

Mooly: This brings us to the Ultrabook.

The Ultrabook is the device that you hold in your hand, the device that you like to show, the device that we put so much effort into what David was talking about.

Need a combination of responsiveness, security, good costs, style, form factor, battery life.

One of the things that we heard with the CULV is that it still didn't offer enough performance.  Ultrabook performance is better than that.

To ensure responsiveness, we extended Turbo.  With 17W and 35W parts, base clock is different, but peak frequency is nearly the same.  So responsiveness is pretty much identical.

Mooly is joined by a young woman who is doing a demo of hibernation, talking about how hibernate is too slow.  Acer Ultrabook comes out of hibernate in ~5 seconds.  (They counted to four, but very slowly.)

Toshiba laptop has been in sleep mode, but it woke up periodically to get updates from the 'net.  Data is fresh when the user calls the laptop out of sleep.

Now we're talking security.  To really discuss it, let me invite onstage one of the cyber-warriors, Todd Gebhart, Co-President of McAfee.  Message: You should worry.  And give us money.

McAfee, Intel working on an anti-theft technology.  User can remote-wipe or lock a stolen laptop.  Will be shipping in 2H of next year.  And Todd's out.

And there's a ninja onstage.  Him: "No, I'm a hacker."  "This is very comfortable.  Maybe not as much as a muscle T and Kangol hat, but very comfortable."  Ooh, Mooly.  pwned by the ninja.

Serious dude on the other side of the stage says the ninja/hacker is failing to take over his secure data transfer.

Mooly says you can give your PC a suicide pill, and the PC commits suicide.  Will be a deterrent to theft, since the PC won't be useful.

Kinda neat: an onscreen PIN pad won't show up when the ninja remote monitors the display via a hacked display driver.  Server's display of those PIN pads is somehow secured.

Ninja's out, and Mooly's talking about thinness.  Need a smaller, thinner hard drive.  Different batteries.  Was a huge effort.  We had conferences in Taiwan, China, invested more than $300M in order to accelerate economy of scale for Ultrabooks.

Rolling a video about these conferences.

Wow, a room full of people at an Intel presentation is sitting here watching a video of a room full of people at an Intel presentation.  How deep does it go?

But.... there was a lot of discussion lately about Windows 8.  Intel is working with Microsoft.  Welcome Bret Carpenter from Microsoft, who flew in from the BUILD conference to give us a demo.

Win8 tablet running on a 32-nm Atom SoC.

And moving over to the Ultrabook.  Acer Aspire S3.  13mm profile, 13" display.  Resumes very quickly.

A picture of Mooly onscreen.. without a beret.  Mooly is scandalized!

Showing a Windows Metro UI start screen.  "I am able to use a keyboard and mouse with this."  Even though it was designed for touch.

Tiles represent all of your content, and you'll notice they're live.  You'll notice there's no chrome.  We give developers access to every pixel onscreen, so they have control over look of their application.

Popping over to the Windows desktop mode... Visual Studio Express.  Looks like Windows.

Mixed mode, a weather widget in Metro style split screen with traditional desktop mode.  Nifty.

Now Mooly's going down a line of demo systems: Toshiba, Lenovo, Asus, Acer..   And they're really, really thin.

A surprise: "All of these Ultrabooks are featuring Ivy Bridge."  Pause.  "I repeat, all of these Ultrabooks are featuring Ivy Bridge."  Applause.


Now Lily here is to talk about screen power savings.

System on left is a traditional LVDS panel, while on right, an eDP panel is refreshing while the CPU is asleep.  Image is stored statically when nothing is happening.  Savings of 500 mW, or 45-60 minutes of battery life in an Ultrabook.
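(That 45-60 minute figure checks out roughly, if you assume a typical Ultrabook-class battery and light-use power draw.  The capacity and draw numbers below are my guesses, not Lily's:)

```javascript
// Rough check of the claimed battery-life gain from panel self-refresh.
// The battery capacity and average draw are assumed figures, not from
// the demo; only the 500 mW savings is the number quoted onstage.
const batteryWh = 42;   // assumed Ultrabook battery capacity
const baseDrawW = 5.0;  // assumed average system draw during light use
const savingsW = 0.5;   // the quoted 500 mW panel savings

const hoursBefore = batteryWh / baseDrawW;              // runtime without savings
const hoursAfter = batteryWh / (baseDrawW - savingsW);  // runtime with self-refresh
const gainMinutes = (hoursAfter - hoursBefore) * 60;

console.log(Math.round(gainMinutes)); // ~56 minutes, inside the quoted 45-60 range
```

The gain scales with how small the panel's savings are relative to total draw, which is why a half-watt matters so much on a machine idling at only a few watts.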

Slide show: screen stays in self-refresh whenever the image doesn't change.  CPU only wakes up when things change.  She pulls out the display cable to prove it's working.  Without the cable, the display continues to refresh itself and show an image.

Mark is here to show off Thunderbolt on Windows.  Streaming four uncompressed HD videos.  Over 700 MB/s.  Acer and Asus will be delivering platforms with Thunderbolt technology on them next year.

Haswell time!  Will deliver more than 20X reduction in standby power.  Mooly holds aloft a Haswell chip.  And now here it is in a working system, up, running, and ready.  Several windows doing different things... briefly.

So, to summarize: PC market is growing and continuing to grow.  Ivy Bridge is a "tick-plus," lots of new functionality.  Ultrabooks are nifty.  And Haswell will complete the Ultrabook revolution.

Mooly has one more thing!  "Roll the video!"

Oh, it's inspirational, talking about how we are more than media consumers, but creators.  Music cranked up to 11.  The audience is.... deaf.

Mooly: "Ladies and gentlemen, let's go and build wonderful things together!"

And that's it.  Thanks for reading!

24 comments — Last by dpaus at 7:32 AM on 09/16/11

Live blog: IDF 2011 Paul Otellini keynote
Live from Moscone Center, we type quickly with marginal accuracy
— 11:01 AM on September 13, 2011

Ok, we're going to attempt a live blog from the opening keynote of the Fall 2011 Intel Developer Forum.  Keep reloading this page as we go in order to see the latest updates, which will be at the bottom of the page.  Yes, highly sophisticated.

And we're started.....

 A Schwarzenegger clone named Johan is giving a slick overview of what's coming this week, before Intel CEO Paul Otellini takes the stage.

Ooh, the voiceover is talking about the history of Moore's Law.  Yep, it's IDF.

Ladies and gents, please welcome Paul Otellini!

"My theme today is about fundamental changes."  Transformations in computing have released wave after wave of productivity improvements, but I would submit we're at the very early stages in the history of computing.  Two years ago, I introduced a shift from the personal computer to personal computing.

Want to start by talking about how we got to where we are today.  Faster processors, more capable computing, cloud services have changed our lives.  Rattling off stats about the size of YouTube, Twitter, Facebook.  Amount of data generated each year exceeds some 900B gigabytes.  Creating unprecedented demand for transistors.

Talking about total transistor use worldwide in terms of quintillions.  We'll soon move past the sextillion mark.

Moore's Law is not a scientific principle, but an observation of the pace of human innovation.  Since it was first made, people have talked about how it was destined to end.  But we've moved through multiple barriers and continue working.  We already  have line-of-sight for our 14nm technology, beginning to tool our factories for it.

Talking about "Intel Architecture" (x86) and its role, pervasiveness in the industry.

Computing has to adapt in the future.  Must be engaging, consistent and secure.

Ultrabooks!  First ones are now shipping from our partners.

Expect next year to ship Ivy Bridge, and it will accelerate Ultrabook development.

Wanted to go one generation beyond that and talk about Haswell.  Next-gen processor's design is already completed.  30% reduction in standby power vs. the prior generation, but also architecting a system-level power management framework that has the potential to enable reductions of more than 20 times our current designs.  All-day use on a single charge, with no compromise on performance.

 Demo of a future computer that's teeny, solar-powered, running an animation.  Cuts off light to solar cell, and the animation stops.

Otellini: Was just a technology demo, no plans for products, but shows what we can do with our transistor technology.

Now, to talk servers.  Another demo: real-time sharing of event data for visualization.  Funky, but short. Moving on...

Intel and Cisco business communication device demo, running Android.  It's a phone! With a screen! It can browse! It runs apps!  Cisco apps!  Lots of apps!  Thousands of apps!  Ah, and the screen pops off and is a small tablet. A "reinvention of the office phone."  Hrm.

We have developed a framework for development in the Intel computing "continuum."  Craig is gonna show us how that looks.

Craig takes a picture of Paul.  Has an Android phone.  Intel's pair and share allows him to pair phone to PC.  And there is the picture onscreen.  Can also do it with an iPhone.  He's getting notifications, calls that come up in a window on the screen of the PC via Intel Teleport Extender software.

Craig has just one more thing!

A family wall.  A digital bulletin board that shows up on a big screen, can be fed from multiple devices, tablets.  Medfield tablet running Android Honeycomb.  Also using a Toshiba Ultrabook to update the wall.

Craig's finished, and it's time for a video montage of people talking about the Intel computing continuum.  Lenovo, Toshiba reps...

Otellini: that brings me to connected computing and security.  Security is important.  Every device is vulnerable.  Smart phones and tablets are not immune.  This led to our "deep partnership with McAfee."  And a nice lady from McAfee joins Paul onstage.  She has a CSI-style graphical map showing world malware infections.  Talking about the difficulty of dealing with rootkits.

Paul wants to know if there's a way to detect unknown rootkits before they "occur."  McAfee DeepSafe technology.  Uses VT in Intel processors, hardware + software combo to detect rootkits.

Demo of software stopping an unknown rootkit in real time.  Which is about as exciting to portray onstage as you might think.

Now, a video about making movies with Intel stuff.  Jeffrey Katzenberg from DreamWorks has nice things to say about Intel products.  He's clearly reading a script.  "Key enabler of a complete transformation of our business."  At DreamWorks, we animate movies.  Intel animates the world.

Paul has one more thing!

Happy to say we're making real progress on goal of getting into smart phones.  Demo phone shown earlier was a reference design running Android.  Want to see Intel phones in market in first half of 2012.

Andy Rubin, Sr. VP at Google, is here to announce a development partnership with Intel for smart phones.

Paul talks, awkward pause, and Andy then says "Oh, yes.  That was my cue."  Heh.

Andy: Let's talk about the future.  We have a tight-knit family of developers.  Here to announce continuation of strategic alliance.  Going forward, all future versions will be optimized from kernel level all the way up to multimedia, 3D graphics.  Very excited to work with Intel.  Paul is also eager.  Thanks, Andy!

And Andy's gone.

Paul: Thank you, and I hope you enjoy the rest of IDF.

18 comments — Last by dpaus at 7:29 AM on 09/16/11