The Damage Report

Thanksgiving offers a perfect chance to crack open a busted iPad Air
— 11:05 AM on December 1, 2014

Phew. I needed that break. I was able to take off the latter half of last week and this past weekend to spend some time with my family, and it was refreshing to get away. Thanks to Geoff and Cyril, with their alternative Canadian Thanksgiving ways, for keeping the site going.

Because I'm partially crazy, I couldn't just relax during my time away, of course. I took it upon myself to attempt a computer hardware repair. And since doing things that are, you know, sensible isn't a requirement for the halfway insane, I decided to replace the cracked glass digitizer on my brother's iPad Air. Any old chump can fix a busted PC, but only the truly elite hax0rs can tackle hardware maintenance for devices that have been designed with active malice toward the technician.

My preparation for this feat was asking my brother to order a replacement digitizer for his iPad Air and watching the first few minutes of an instructional video on the operation before losing interest. I figured, eh, it's all about glue and guitar picks.

Don't get me wrong. It is all about glue and guitar picks, but the YouTube videos lie. They show operations being performed by competent, experienced people whose hands know what to do in each situation. I am not that person, which is a very relevant difference once you get knee deep into one of these operations.

The other way most of the YouTube videos lie is that they show somebody removing a completely whole, unsullied piece of glass from the front of a device. That was not my fate. The screen on my brother's Air had cracks running clear across its surface, combined with shattered areas covered by spiderwebs of tiny glass shards.

The replacement screen came with a little repair kit, including a guitar pick, mini-screwdrivers, a suction cup, and several plastic pry tools. I used a hair dryer to heat the adhesive around the glass pane, pulled up on the glass with the suction cup, pried under it in one spot with the tiny screwdriver, and slipped a guitar pick into the adhesive layer. Sounds simple, but just getting this start took a lot of trial and error.

I soon discovered two important truths. One, I needed about five more guitar picks to keep the areas where I'd separated the adhesive from re-sealing. I had only the one—and we were away from home, at a little rental house thing, for the holiday. Two, getting a cracked screen to separate from the adhesive is a huge pain in the rear. Suction cups don't stick to cracked glass.

Here's what I eventually pulled free from the chassis, after over an hour's hard work with a bad feeling in the pit of my stomach.

Notice the spiderwebbed section sticking up. Yes, I literally peeled the glass free from its adhesive backing. Not pictured are the hundreds of tiny glass shards that shattered and fell out during the process, all over me and into the iPad chassis. The minuscule shards practically coated the surface of the naked LCD panel beneath the glass, while others worked their way into my fingertips. The pain was one thing, but worse, I was pretty sure at this point that I'd ruined the LCD panel in my brother's tablet.

Notice that some sections of the screen around the edges are not in the picture above. They didn't break free when I removed the rest of the digitizer, so I had to scrape those shards off of their adhesive backing separately.

Also notice that the busted digitizer doesn't have a home button or a plastic frame around the pinhole camera opening up top. Most of them do, and this one did when I first removed it from the iPad. However, the replacement digitizer we ordered bafflingly didn't come with a home button or pinhole frame included. It did in the YouTube videos, but surprise! You get to do this on hard mode.

The home button bracket seemed like it was practically welded on there. And remember: we didn't have any spare adhesive or glue or anything.

After nearly giving up in despair, I found another YouTube video showing this specific operation in some detail. The dude in it used tools I didn't have, but what the heck. After heating the home button area with the hair dryer, I pulled out my pocket knife and went for it. I proceeded to separate the home button, its paper-thin ribbon connection, and the surrounding metal bracket from the busted digitizer. Somehow, I managed to keep enough adhesive on the bracket to allow it to attach to the new screen. The button happily clicked without feeling loose. This success massively exceeded my expectations.

Once I'd crested that hill, I came face to face with that perfect Retina LCD coated with glass dust. Frankly, I'd been trying to bracket off my worries about that part of the operation, or I wouldn't have been able to continue. After lots of blowing on the gummy surface of the LCD panel, I decided that what I needed to deal with the remaining glass shards and fingerprints was a microfiber cloth. Lint from cotton would be disastrous. Before long, my brother went out to his truck and returned with a nasty, dirt-covered microfiber cloth that was pretty much our only option. A couple of the corners were less obviously soiled, so I used them lovingly to brush, rub, and polish the surface of the LCD panel. Several spots where I concentrated my efforts just grew into larger and larger soiled areas. My brother stood looking nervously over my shoulder, asking worried questions about the state of things. However, after rotating the cloth and giving it some time and gentle effort, I was somehow able to dispel the oily patches almost entirely.

From here, it was all downhill, right? I attached each of the miniature ribbon connectors and, before reassembling the tablet, turned it on for a quick test. To my great relief and pleasure, the LCD worked perfectly, with no dead pixels or obvious damage of any kind. And the touchscreen digitizer responded perfectly to my input, even though it wasn't yet layered atop the LCD. It was good to go.

The next step was the tedious process of placing the pre-cut 3M adhesive strips along the edges of the iPad chassis. Somehow, I managed to do this without folding over the glue strips and having them stick to themselves. Really not something I expected to pull off cleanly.

Pictured above is the open iPad with the new digitizer attached. You can see the adhesive strips around the edges of the chassis with the backing still on one side. My bandaged fingers are holding up the LCD panel, and the big, black rectangles you see are the iPad's batteries. The device's motherboard sits under the metal shield just above the batteries. It's a little larger than a stick of gum. I stopped to take a picture at this point mostly because my stress level was finally low enough for me to remember to do so.

With only a little remaining struggle, I was able to re-seat the LCD panel and secure it, remove the adhesive backing, flip over the new digitizer, and push it firmly into place atop the new adhesive layer. After a little clean-up, my brother's iPad Air looked as good as new.

Three hours after my journey began, I turned on the repaired iPad. It booted up quickly. The LCD looked perfect. The home button was clicky and solid. And I swiped to log in.

Didn't take.

I swiped again, a few times, and I was able to log in. And then... the thing went crazy. Phantom touches everywhere ran apps, activated UI buttons, and began typing gobbledygook messages. The touchscreen was completely hosed.

Utter defeat. What followed isn't something I'd like to share on the Internet. Suffice to say that I'm a grown man, and grown men shouldn't act like that.

Initially, I blamed myself for messing up the repair with my clumsiness. I figured I must have ruined a ribbon connector or something. Hours later, after I'd gotten some distance from the whole thing, I poked around online and came to a different conclusion. You see, the original adhesive layer I removed from the iPad was essentially a felt lining with sticky stuff on both sides. The repair kit, however, came only with a thin layer of adhesive, with no insulator. I'm now 99% certain that the touchscreen's problems were caused by the digitizer making electrical contact with the iPad's aluminum chassis. Others have run into the same issue, looks like.

I may never know for sure. My brother took the iPad back to his home after Thanksgiving and will be paying a repair shop to fix it. I dunno whether they'll offer any feedback about what happened.

Meanwhile, I suppose I got a little bit more experience doing repair work on mobile devices. So far, I've learned two things. First, I can do this. It just takes more of the same patience, precision, and self-imposed calm that working on larger computer systems requires. And a few initial victims, like my daughter's Nintendo DS, my mother-in-law's cell phone, my old laptop, and my brother's iPad Air.

Hey, they were broken anyway.

Second, it takes a special sort of person to do this stuff for fun. I am probably not that sort of person—and I'm okay with that.

Besides, next time I'll have a proper heat gun, more guitar picks, and some insulating tape.


Finally light bulb's Tesla tech gives LEDs a worthy rival
— 12:37 AM on November 20, 2014

Ever since I improbably started blogging occasionally about light bulbs, I've been waiting impatiently to get a look at the first product from The Finally Light Bulb Company. This start-up company from Cambridge, Massachusetts, has decided to bring a Tesla-era lighting technology into the consumer space.

The tech is known as induction or electrodeless lighting. Induction tech is pretty closely related to fluorescent lighting: a magnetic field excites gases in an enclosed tube. Those gases generate UV light, which strikes the phosphor coating on the tube, causing it to glow. (I'm probably butchering the details, so go here for more info.) Induction lighting has been used for years in industrial and commercial settings, where its reliability and efficiency are appealing, but the fixtures have been much too large for use in the home. The folks at Finally have worked to miniaturize induction lighting radically, so an entire assembly will fit into the space of a conventional A19 light bulb.

Finally calls its miniaturized version of induction lighting "acandescent technology" in an obvious play on "incandescent"—and a tip of the hat to the firm's goal, which is to replicate the warm, welcoming light of an incandescent bulb with very few compromises.

Now, I have almost no specific details about how Finally's implementation of inductive lighting works. All I have is a presumably finished product, packaged neatly in retail garb. Heck, I'm not entirely sure why I have this bulb apparently before just about anyone else. They probably sent me one because I kept bugging them about it.

That said, I suspect Finally may have deployed a couple of important tools in pursuit of their goal. One such tool could be a very fast cycle time. Old-school fluorescents cycle at 60Hz, and I believe CFLs generally run at 2kHz. Some induction lights cycle as quickly as two and a half megahertz. Finally may have chosen a relatively high operating frequency in order to ensure solid, steady illumination. Also, Finally was undoubtedly very particular when selecting the mix of phosphors to use, since those determine the spectrum of light emitted by the bulb.


The yellow-striped Finally bulb next to a Cree 4Flow and a conventional 60W incandescent

Whatever else is going on, there's no question that Finally's miniaturization efforts have succeeded. The payoff is a bulb whose shape closely mimics the teardrop profile of a traditional 60W incandescent.

The rest of the Finally Bulb's specs are competitive with the incumbent LED offerings, as well. It generates 800 lumens of light output using only 14.5W, just a touch above the 13.5W power consumption of Cree's TW-Series LED. The bulb's color temperature is rated at 2700K, the same as other "soft white" bulbs, and its $9.99 list price is in the neighborhood of the best LEDs, even if it is a couple of bucks higher than Finally initially projected. The bulb is EPA rated for 13.7 years of operation at three hours per day, which Finally backs with a 10-year limited warranty.
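The basic efficiency math is easy to check. Here's a quick back-of-the-envelope calculation in Python, using only the numbers from the spec sheets above (the EPA lifetime figure assumes three hours of use per day):

```python
# Luminous efficacy is just light output divided by power draw.
finally_lm, finally_w = 800, 14.5
cree_tw_lm, cree_tw_w = 800, 13.5

print(f"Finally bulb: {finally_lm / finally_w:.1f} lm/W")  # ~55.2 lm/W
print(f"Cree TW:      {cree_tw_lm / cree_tw_w:.1f} lm/W")  # ~59.3 lm/W

# The EPA's 13.7-year rating at three hours per day implies a rated
# life of roughly 15,000 hours:
print(f"Rated life: {13.7 * 365 * 3:,.0f} hours")
```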

This bulb can go places some LEDs can't, too. It's rated for use in damp environments like bathrooms (though not in direct contact with water), and it can also be used in enclosed fixtures. For most intents and purposes, the Finally bulb can be used just like an incandescent. There is one place where it falls a bit short: it's not compatible with dimmer switches. Finally has said that future "acandescent" bulbs could be made to work with dimmers, but this first product doesn't go there.

The biggest question, of course, is about the quality of the illumination it produces. Finally makes a big claim about how its bulb reproduces that familiar, warm incandescent glow: "Finally, it is the same." That's a tall order since even the best LEDs don't measure up to the full-spectrum illumination produced by incandescent lights.

The Finally bulb's spec sheet says it has a color rendering index (CRI) of 83. That's short of the perfect 100 produced by incandescent bulbs, but it surpasses the 80 rating of the excellent Cree 60W Soft White LED. (Cree's TW Series claims a CRI of 93.) That said, CRI is an imperfect measure, so I wouldn't get too hung up on those numbers.

When I installed the Finally bulb in a lamp and flipped the switch, I was greeted with a bit of a surprise. The product's packaging says it's "instant on and instant re-start," but that summation misses an important reality. The bulb does light up immediately when you flip the power switch, but it begins at only about 50% of peak brightness. The light then ramps up to full brightness over the course of the next five or six seconds, slowly enough that the change in luminance is easy to observe. The ramp-up is faster than any CFL I've ever seen, but it doesn't match the immediacy of LEDs or incandescents.

In fact, it's hard to tell for sure, but I suspect the Finally bulb may not reach its absolute peak brightness until several minutes have passed. If I'm right about that, though, the effect is pretty subtle.

Get past that one quirk, and the rest of the story is quite good. As you can probably tell from the picture above, the bulb offers pretty much perfect omnidirectional light distribution, with none of the challenges LEDs sometimes face on this front.

The illumination from the Finally bulb is, as promised, warm and inviting. In my view, it's easily superior to any CFL. Each of my poor friends and family members whom I've accosted for an opinion has agreed with that assessment without reservation. The difference is not hard to see.

Stare at a room lit by this bulb a little longer, and you'll notice something unexpected: the light it produces is noticeably pink in tone. If you've experimented with CFLs and LEDs, you may have noticed that not every 2700K light source produces the same mix of colors. Many CFLs tend to be predominantly green, and they can cast a sickly pallor across a living space. LEDs aren't quite so skewed, but they tend to be relatively yellow in tone.

Finally appears to have chosen a phosphor mix that emphasizes red. That's an intriguing aesthetic choice. The rosy pink light from this bulb runs counter to the cooler, flatter, and more antiseptic feel of many CFLs and even LEDs. This emphasis on the red portion of the spectrum makes the Finally bulb more appealing in certain ways. Wood tones appear deeper and more pronounced. Skin tones look healthier, too. I haven't yet combined three of them in the fixture above our kitchen table, but I suspect food presentation will be more pleasing, as well.

That said, the green walls of my bedroom take on more of a gray cast in this light, so it's not perfect. If you compare them side by side, the Finally bulb actually looks somewhat pinker than a 60W incandescent, kind of like GE's original Reveal bulbs with the neodymium coating. Not that there's anything wrong with that. (Happily, this product doesn't make the mistake of providing noticeably less illumination than a regular 60W bulb, either.)

Overall, I'd say the Finally bulb's light quality nearly rivals that of my favorite LED, Cree's 13.5W TW Series. I'm not sure I could say one is clearly superior to the other in every way. I do think the light from the TW Series is probably a little more balanced. If I were installing lamps in a room full of wood paneling, though, I'd pick the Finally bulb for that mission.

All in all, then, this is a spectacular start for an alternative lighting technology that's new to the consumer space—and an auspicious beginning for the young company that produced it. If you're into this stuff, you should grab one and try it out. The bulb is worth seeing in action, and you'll surely get some use out of it.

Unfortunately, I don't yet know where you can purchase one beyond the pre-order form on Finally's website. The firm hasn't yet announced a final availability date for its first product or a list of retailers that will carry it. I expect we'll be hearing more on that front soon. I may have to snag a few more of these bulbs for myself once they become available.


Cree raises its game, lowers prices with 4Flow bulb
— 8:00 AM on October 28, 2014

Since I posted my Friday night topic and then a blog post about LED light bulbs, I've been quietly waiting for another chance to try out something new and interesting on the lighting front. I figured that chance would come with the introduction of the Finally Bulb, but that company's name is proving to be unintentionally ironic. I'm now told they'll have samples ready next month.

Meanwhile, the folks at Cree are making news today with the introduction of a new, cheaper consumer LED bulb. The firm's existing 60W replacement bulbs were already my favorites, and this new bulb further refines the formula. Have a look at, yes, our review sample:

As you can see, this puppy is shaped pretty much exactly like an Edison-style A19 light bulb. Cree has eliminated the external heatsink and replaced it with what the firm calls a 4Flow Filament Design. Without the heavy, bulky external heatsink, this LED bulb is shockingly lightweight—under two ounces—and costs quite a bit less to produce. As a result, the price for the 60W-equivalent bulbs is just $8.97, a dollar less than Cree's current 60W-equivalent offering.

Cree plans to offer 4Flow bulbs in 40W- and 60W-equivalent types, with a choice of "soft white" 2700K and "daylight" 5000K color temperatures. The new bulbs will be sold exclusively through The Home Depot, and they will add to Cree's lineup rather than replacing any existing products. Like other Cree LED bulbs, the 4Flow models are instant-on, compatible with dimmers, and rated for ridiculously long lifetimes.

One obvious competitive target for the 4Flow is Philips' nifty heatsink-free SlimStyle LED bulb. The SlimStyle 60W equivalent sells for $8.97 at The Home Depot, and right now, my local power company is apparently subsidizing these bulbs in a deal that brings their price down to $5.97 in local stores. The SlimStyle offers excellent illumination that's almost indistinguishable from the Cree's. Its only major drawback is a funky, flat shape that may be a little wider than some fixtures will permit. The 4Flow matches the SlimStyle's base price and offers a more conventional shape.

Cree has managed to eliminate the need for a metal heatsink at the base of the bulb by combining several measures. Most obvious is the venting at the top and bottom of the plastic shroud covering the LEDs. Inside, the 4Flow bulb is divided into four chambers by the reflective metal substrates on which the LEDs are mounted. Each chamber contains two LEDs, for a total of eight in each bulb. The heat generated by the LEDs causes air to circulate, and the bulb is then cooled by convection.

Older Cree bulbs have 10 LEDs inside. Cree says it was able to reduce the LED count in the 4Flow thanks to its new Extreme High Power LEDs.

All of the LEDs inside the bulb are situated on the same plane, so the 4Flow retains the filament-like look familiar from Cree's earlier products. One could easily mistake it for an incandescent upon casual inspection. The 4Flow layout, however, eliminates the dark spot at the top of the bulb. Despite this dark area, the older Cree bulbs cast light in all directions pretty effectively, but I suspect some folks will consider the 4Flow an aesthetic improvement.

One downside of the new design is slightly higher power consumption: 11W versus 9.5W for Cree's earlier 60W equivalent. I got the chance to talk with Mike Watson, Cree's VP of Product Strategy, about the 4Flow, and I asked him about the added power draw. He said that the new bulb draws more power in part because of the different thermal process; it's driving the LEDs harder. He pointed out that the energy cost difference between the two bulbs over their lifetimes works out to about $4—$139 versus $135. Cree saw this tradeoff as acceptable so long as it could lower the price of entry without compromising light quality. He also noted that the 40W-equivalent version of the 4Flow has the same 6W power rating as its predecessor.
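Cree's $4 figure checks out with simple arithmetic. Here's a minimal sketch; the 25,000-hour rated life and the 11¢/kWh electricity rate are my assumptions (typical of Lighting Facts labels), not numbers Watson quoted:

```python
# Lifetime energy-cost delta between the 4Flow (11W) and Cree's earlier
# 60W equivalent (9.5W). The rated life and electricity rate below are
# my assumptions, not figures Cree provided.
hours = 25_000       # assumed rated life
rate = 0.11          # assumed price in dollars per kWh

def lifetime_energy_cost(watts):
    return watts / 1000 * hours * rate

delta = lifetime_energy_cost(11) - lifetime_energy_cost(9.5)
print(f"Difference: ${delta:.2f}")  # ~$4.13, in line with Cree's ~$4 figure
```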

The 4Flow's open venting could make it susceptible to some problems that other LEDs wouldn't face, including damage from moisture and bugs. Watson told me the 4Flow isn't rated for use in damp settings, although it could go into outdoor fixtures that provide enough protection. As for bugs, Watson pointed out that the 4Flow's mostly indoor usage model should help stave off some problems. He also explained that LEDs do not emit light in the UV spectrum, so they don't tend to attract bugs like incandescents do. That's really interesting and somewhat reassuring, but I'll have to make it through a few Missouri summers with 4Flows in our indoor lamps before I'm entirely persuaded. I figure we're bound to have a cricket or spider fricassee itself on one of those LEDs eventually.

That worry aside, the Cree 4Flow looks to be the most compelling candidate yet to prompt a house-wide conversion from inefficient incandescents or nasty-looking CFLs. The extent to which it mimics the look and feel of a conventional light bulb is unprecedented. Before talking to Watson, I hadn't realized that Cree bulbs could be used in enclosed fixtures, but they can. The 4Flow's packaging warns only against combining LEDs with CFLs or incandescents in the same fixture. That fact opens up a new front at my house. I reckon having that knowledge will cost me some multiple of $8.97.

The one question Watson couldn't answer directly was whether Cree plans to introduce a TrueWhite version of the 4Flow. Thanks to a neodymium coating that reduces the yellow bias in the light produced by LEDs, Cree's TW Series bulbs produce the best color rendering I've seen this side of an incandescent. I'd flip out over an inexpensive TW Series bulb. Of course, Watson couldn't comment on unannounced products. He did say that Cree is committed to having TW Series bulbs available and that if a TrueWhite version of the 4Flow makes sense, "we'll do it." I suppose time will tell.


Civ: Beyond Earth with Mantle aims to end multi-GPU microstuttering
— 3:46 PM on October 23, 2014

The next installment in Sid Meier's Civilization series, Civilization: Beyond Earth, comes out tomorrow. The folks at AMD have been working with its developer, Firaxis, to optimize the game for Radeon graphics cards. Most notably, Firaxis and AMD have ported the game to work with AMD's lightweight Mantle graphics API.

Predictably, AMD and Firaxis report that Mantle lowers the game's CPU overhead, allowing Beyond Earth to play smoother and deliver higher frame rates on many systems. They've even provided a nice bar graph with average FPS showing AMD in the lead, like so:

That's all well and good, I suppose (although *ahem* the R9 290X they used has 8GB of RAM). But average FPS numbers won't tell you about gameplay smoothness or responsiveness. What's more interesting is how AMD and Firaxis have tackled the thorny problem of multi-GPU rendering in Beyond Earth.

Both CrossFire and SLI, the multi-GPU schemes from AMD and Nvidia, handle the vast majority of today's games by divvying up frames between GPUs in interleaved fashion. Frame one goes to GPU one, frame two to GPU two, frame three back to GPU one, and so on. This technique is known as alternate-frame rendering (AFR). AFR does a nice job of dividing the workload between GPUs so that everything scales well for the benchmarks. Both triangle throughput and pixel processing benefit from giving each GPU its own frame.
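In code terms, AFR's division of labor amounts to a round-robin assignment; a trivial sketch:

```python
# AFR deals frames out to GPUs in interleaved, round-robin fashion.
def afr_gpu_for_frame(frame_index, gpu_count=2):
    return frame_index % gpu_count

print([afr_gpu_for_frame(i) for i in range(6)])  # [0, 1, 0, 1, 0, 1]
```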

Unfortunately, AFR doesn't always do as good a job of improving the user experience as it does of improving—or perhaps inflating— average FPS scores. The timing of frames processed on different GPUs can go out of sync, causing a phenomenon known as multi-GPU micro-stuttering. We've chronicled this problem in our initial FCAT article and, most extensively, in our epic Radeon HD 7990 review. AMD has attempted to fix this problem by pacing the delivery of frames to the display, much as Nvidia has done for years with its frame metering tech. But frame pacing is imperfect and, depending on how a game's internal simulation timing works, may lead to perfectly spaced frames that contain out-of-sync visuals.

Making AFR work well is a Hard Problem. It's further complicated by variable display refresh schemes like G-Sync and FreeSync that attempt to paint a new frame on the screen as soon as it's ready. Pacing those frames could be a hot mess.

In a similar vein, virtual reality headsets like the Oculus Rift are extremely sensitive to input lag, the delay between when a user's head turns and when a visual response shows up on the headset's display. If that process takes too long, the user may get vertigo and go all a-chunder. Inserting a rendering scheme like AFR with frame metering into the middle of that feedback loop is a bad proposition. Frame metering intentionally adds latency to some frames in order to smooth out delivery, and AFR itself requires deeper queuing of frames, which also adds latency.

At the end of the day, this collection of problems has conspired to make AFR—and multi-GPU schemes in general—look pretty shaky. AFR is fragile, requires tuning and driver support for each and every game, and doesn't always deliver the experience that its FPS results seem to promise. AMD and Nvidia have worked hard to keep CrossFire and SLI working well for their users, but we at TR only recommend buying multi-GPU solutions when no single GPU is fast enough for your purposes.

Happily, game developers and the GPU companies seem to be considering other approaches to delivering an improved experience with multi-GPU solutions, even if they don't over-inflate FPS averages. Nvidia vaguely hinted at a change of approach during its GeForce GTX 970 and 980 launch when talking about VR Direct, its collection of features aimed at the Oculus Rift and similar devices. Now, AMD and Firaxis have gone one better, throwing out AFR and implementing split-frame rendering (SFR) instead in the Mantle version of Beyond Earth.

AMD provided us with an explanation of their approach that's worth reading in its entirety, so here it is:

With a traditional graphics API, multi-GPU arrays like AMD CrossFire™ are typically utilized with a rendering method called "alternate-frame rendering" (AFR). AFR renders odd frames on the first GPU, and even frames on the second GPU. Parallelizing a game's workload across two GPUs working in tandem has obvious performance benefits.

As AFR requires frames to be rendered in advance, this approach can occasionally suffer from some issues:

• Large queue depths can reduce the responsiveness of the user's mouse input
• The game's design might not accommodate a queue sufficient for good mGPU scaling
• Predicted frames in the queue may not be useful to the current state of the user's movement or camera

Thankfully, AFR is not the only approach to multi-GPU. Mantle empowers game developers with full control of a multi-GPU array and the ability to create or implement unique mGPU solutions that fit the needs of the game engine. In Civilization: Beyond Earth, Firaxis designed a "split-frame rendering" (SFR) subsystem. SFR divides each frame of a scene into proportional sections, and assigns a rendering slice to each GPU in AMD CrossFire™ configuration. The "master" GPU quickly receives the work of each GPU and composites the final scene for the user to see on his or her monitor.

If you don’t see 70-100% GPU scaling, that is working as intended, according to Firaxis. Civilization: Beyond Earth’s GPU-oriented workloads are not as demanding as other recent PC titles. However, Beyond Earth’s design generates a considerable amount of work in the producer thread. The producer thread tracks API calls from the game and lines them up, through the CPU, for the GPU's consumer thread to do graphics work. This producer thread vs. consumer thread workload balance is what establishes Civilization as a CPU-sensitive title (vs. a GPU-sensitive one).

Because the game emphasizes CPU performance, the rendering workloads may not fully utilize the capacity of a high-end GPU. In essence, there is no work left over for the second GPU. However, in cases where the GPU workload is high and a frame might take a while to render (affecting user input latency), the decision to use SFR cuts input latency in half, because there is no long AFR queue to work through. The queue is essentially one frame, each GPU handling a half. This will keep the game smooth and responsive, emphasizing playability, vs. raw frame rates.

Let me provide an example. Let's say a frame takes 60 milliseconds to render, and you have an AFR queue depth of two frames. That means the user will experience 120ms of lag between the time they move the map and that movement is reflected on-screen. Firaxis' decision to use SFR halves the queue down to one frame, reducing the input latency to 60ms. And because each GPU is working on half the frame, the queue is reduced by half again to just 30ms.

In this way the game will feel very smooth and responsive, because raw frame-rate scaling was not the goal of this title. Smooth, playable performance was the goal. This is one of the unique approaches to mGPU that AMD has been extolling in the era of Mantle and other similar APIs.
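The latency arithmetic in that example is easy to verify. Here's a minimal sketch, assuming ideal SFR scaling (each GPU finishes its half of the frame in half the time):

```python
# AMD's example: a frame that takes 60 ms to render on a two-GPU setup.
frame_ms = 60
gpu_count = 2

# AFR: input waits behind a queue of pre-rendered frames.
afr_queue_depth = 2
afr_latency = frame_ms * afr_queue_depth   # 120 ms

# SFR: a one-frame queue, and each GPU renders half the frame, so the
# frame completes in half the time (assuming ideal scaling).
sfr_latency = frame_ms / gpu_count         # 30 ms

print(afr_latency, sfr_latency)            # 120 30.0
```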

All I can say is: thank goodness. Let's hope we see more of this kind of thing from AMD and major game studios in the coming months and years. Multi-GPU solutions don't have to double their FPS averages in order to achieve smoother animations or improved responsiveness. I'd much rather see a multi-GPU team producing more modest increases that the user can actually feel and experience.

Of course, while we're at it, I'll note that if you measure frame times instead of FPS averages, you can more often capture the true improvement offered by mGPU solutions. AMD has been a little slower than Nvidia to adopt a frame-time-sensitive approach to testing, but it's clearly a better way to quantify the benefits of this sort of work.
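To see why frame times tell you more than averages, consider two made-up traces with identical average FPS but very different smoothness; a quick sketch:

```python
# Two hypothetical one-second traces of frame times, in milliseconds.
smooth  = [20] * 50                # steady 20-ms frames
stutter = [10] * 45 + [110] * 5    # mostly quick, with five nasty spikes

for name, times in [("smooth", smooth), ("stutter", stutter)]:
    avg_fps = 1000 * len(times) / sum(times)
    worst = sorted(times)[int(len(times) * 0.99) - 1]  # ~99th percentile
    print(f"{name}: {avg_fps:.0f} FPS average, {worst} ms at the 99th percentile")

# Both traces average 50 FPS, but only the frame-time data exposes the
# 110-ms hitches in the second one.
```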

Fortunately, AMD and Firaxis have built tools into Beyond Earth to capture frame times. I have been working on other things behind the scenes this week and haven't yet had the time to make use of these tools, but I'm pleased to see them there. You can bet they'll figure prominently into our future GPU articles and reviews.


AMD's CEO transition is a natural next step
— 7:50 PM on October 8, 2014

I just finished listening in to the conference call for financial analysts regarding AMD's CEO transition from Rory Read to Dr. Lisa Su. As usual in cases like this one, the words spoken by Read and Su were carefully chosen and partially scripted ahead of time. As a result, they didn't offer a completely satisfying answer to the questions on everyone's minds about why Read is leaving just a few short years after he took the helm at AMD. Carefully crafted statements from large companies in a time of change rarely satisfy everyone's natural curiosity. One always wonders if there is a larger story behind the official narrative.

Perhaps we'll find out about a profound internal disagreement or dissatisfaction from the board that led to Read's ouster, as happened with Dirk Meyer in 2011.

In this case, though, I think it's entirely possible the reasons behind this change are fairly straightforward. Read said in his opening statement that one of his mandates upon joining AMD was to pick a successor, and he later stated that he hired Dr. Su with that possibility in mind. Read also pointed out that, on his watch, AMD cut operational expenditures by 30%. One doesn't slash a third of the jobs (or something close to it) at a company of AMD's size without alienating quite a few people.

Perhaps Read very intentionally planned to make sweeping changes, to reconstitute AMD's leadership team and structure, and then to step away in a fairly short window.

That's essentially the picture Read painted during his talk, although he's not one to speak in direct, clear language about much of anything. He'd ask you to "reevaluate the binary condition of the wall-mounted switching mechanism" rather than to "turn off the light."

When questioned about the timing of this move, Read briefly spoke in straightforward terms. He said, "The part I'm good at, I've already done," and "Lisa is uniquely positioned for the next phase."

For her part, Dr. Su echoed Read's sentiments about the transition being part of an intentional plan. She also outlined her priorities for AMD going forward, and there wasn't much daylight between those priorities and AMD's strategy under Read. Even the likely changes she outlined—such as an increased emphasis on co-developing products with customers, as AMD did with Microsoft and Sony for their game consoles—echo the strategy Read and his team revealed in early 2012. Dr. Su also emphasized that AMD's investments in new x86 and ARM cores, new graphics IP, and SoC integration are "absolutely critical" to the company's future.

Furthermore, under direct questioning, Read and Su both denied this transition was prompted by a disagreement over AMD's long-term strategy. Dr. Su said she and Rory had "really no disagreements on anything" and have been "very aligned."

If the official portrait of this transition is largely accurate, it would be unusual in the context of AMD's last two CEO transitions.

In this case, my natural skepticism is dampened by a nugget I picked up at CES back in January. It wasn't anything I could report, but a well-placed industry source suggested to me that Dr. Su would very likely replace Read as AMD's CEO "within the next six months." Of course, since this is AMD, the schedule was optimistic, but that prediction proved accurate—and it lends credibility to the notion that this move was in the works for a while.

By practically all accounts, Dr. Su is well suited, by virtue of her experience and ability, to lead AMD. If she does well, it seems likely that Rory Read's tenure will be remembered as a time when a corporate turnaround artist installed new leadership and steered the company in a positive new direction.

That turnaround is still very much in progress, though, and the most difficult stages may yet lie ahead. The K12 (ARM) and Zen (x86) cores are still in development and likely will be for another year or more. AMD will struggle to remain relevant in the CPU market until its new cores arrive. Meanwhile, AMD's graphics division has a daunting challenge to face in the form of Nvidia's ultra-efficient Maxwell-based GPUs.

Dr. Su inherits a company with a clear direction and a potentially bright future, but the next 18 to 24 months could be really rough sailing. Here's hoping she—and the rest of AMD—is up to the challenge.


Here's another reason the GeForce GTX 970 is slower than the GTX 980
— 3:09 PM on October 1, 2014

I was really under the gun when I was trying to finish up my GeForce GTX 970 and 980 review. As a result, I wasn't able to track down the cause of an interesting anomaly in my test results. Have a look at the theoretical peak pixel fill rate of the GTX 970 and 980 reference cards (along with the Asus Strix 970 card we tested) based on the GPU's active ROP count and clock speed:

                     Peak pixel   Peak bilinear   Peak shader   Peak            Memory
                     fill rate    filtering       arithmetic    rasterization   bandwidth
                     (Gpixels/s)  int8/fp16       rate          rate            (GB/s)
                                  (Gtexels/s)     (tflops)      (Gtris/s)
GeForce GTX 970      75           123/123         3.9           4.7             224
Asus Strix GTX 970   80           130/130         4.2           5.0             224
GeForce GTX 980      78           156/156         5.0           4.9             224

On paper, the GTX 970 ought to be nearly as fast on this front as the 980—and the Asus Strix card ought to be a smidgen faster. The 3DMark color fill test we use has evidently been limited by memory bandwidth at times in the past, but that shouldn't be an issue since all three cards in question have the exact same memory config.

Look at what happened, however, when I ran that synthetic fill rate test:

Despite having superior or equal numbers on paper, the Asus Strix 970 couldn't come close to matching the GTX 980's delivered pixel throughput. I promptly raised an eyebrow upon seeing these results, but I didn't have time to investigate the issue any further.

Then, last week, an email hit my inbox from Damien Triolet at Hardware.fr, one of the best GPU reviewers in the business. He offered a clear and concise explanation for these results—and in the process, he politely pointed out why our numbers for GPU fill rates have essentially been wrong for a while. Damien graciously agreed to let me publish his explanation:

For a while, I've thought I should drop you an email about some pixel fillrate numbers you use in the peak rates tables for GPUs. Actually, most people got those numbers wrong, as Nvidia is not crystal clear about those kinds of details unless you ask very specifically.

The pixel fillrate can be linked to the number of ROPs for some GPUs, but it's been limited elsewhere for years on many Nvidia GPUs. Basically, there are three levels that might have a say in what the peak fillrate is:

  • The number of rasterizers
  • The number of SMs
  • The number of ROPs

On both Kepler and Maxwell, each SM appears to use a 128-bit datapath to transfer pixel color data to the ROPs. Those pixels appear to be converted from FP32 to the actual pixel format before being transferred to the ROPs. With classic INT8 rendering (32 bits per pixel), that means each SM has a throughput of 4 pixels/clock. With HDR FP16 (64 bits per pixel), each SM has a throughput of 2 pixels/clock.

On Kepler each rasterizer can output up to 8 pixels/clock. With Maxwell, the rate goes up to 16 pixels/clock (at least with the currently released Maxwell GPUs).

So the actual pixels/cycle peak rate, when you look at all the limits (rasterizers/SMs/ROPs), would be:

GTX 750: 16/16/16
GTX 750 Ti: 16/20/16
GTX 760: 32/24/32 or 24/24/32 (as there are 2 die configuration options)
GTX 770: 32/32/32
GTX 780: 40/48/48 or 32/48/48 (as there are 2 die configuration options)
GTX 780 Ti: 40/60/48
GTX 970: 64/52/64
GTX 980: 64/64/64

Extra ROPs are still useful to get better efficiency with MSAA and so on. But they don't participate in the peak pixel fillrate.

That's in part what explains the significant fillrate delta between the GTX 980 and the GTX 970 (as you measured it in 3DMark Vantage). There is another reason, which seems to be that unevenly configured GPCs are less efficient at splitting huge triangles (as is usually the case with fillrate tests).

So the GTX 970's peak potential pixel fill rate isn't as high as the GTX 980's, in spite of the fact that they share the same ROP count, because the key limitation resides elsewhere. When Nvidia hobbles the GTX 970 by disabling SMs, the effective pixel fill rate suffers.
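Put as a formula, the delivered rate is the minimum of the three per-clock limits Damien lists, multiplied by the clock speed. Here's a minimal sketch; the boost clocks are Nvidia's reference figures, which I'm assuming the paper specs were computed from:

```python
# Effective peak pixel fill rate: the slowest of the rasterizer, SM, and
# ROP limits (in pixels/clock, from Damien's table) times the clock.
# The boost clocks below are Nvidia's reference figures—my assumption
# about what the on-paper numbers were derived from.
def effective_fill_rate(raster, sm, rop, boost_mhz):
    return min(raster, sm, rop) * boost_mhz / 1000  # Gpixels/s

print(f"GTX 970: {effective_fill_rate(64, 52, 64, 1178):.1f} Gpixels/s")  # ~61.3
print(f"GTX 980: {effective_fill_rate(64, 64, 64, 1216):.1f} Gpixels/s")  # ~77.8
```

Note that the ROP limit alone (64 pixels/clock times those same clocks) reproduces the 75 and 78 Gpixels/s figures from the table above; the SM limit is what drags the GTX 970's delivered rate down.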

That means, among other things, that I need to build a much more complicated spreadsheet for figuring these things out. It also means paying extra for a GTX 980 could be the smart move if you plan to use that graphics card to drive a 4K display—or to use DSR at a 4X factor like we recently explored. That said, the GTX 970 is still exceptionally capable, especially given the clock speed leeway the GM204 GPU appears to offer.

Thanks to Damien for enlightening us—and for solving a puzzle in our results that I hadn't yet had time to investigate.


TR subscribers get Macrium Reflect for 20-40% off
— 9:22 AM on June 11, 2014

We haven't said much about TR subscriptions for a little while, after the rush of the launch, but so far, this little experiment is off to an excellent start. You all proved that reader-supported content can work, and you saved our bacon after weak sales in early 2014. We learned some lessons from the initial introductory period, and now we're making additions and changes to the subscription service in response.

One thing that we've wanted to do is add more value for subscribers, so that more of you who are regular readers will find it worth your time to sign up. To that end, we're very happy to announce our first external benefit for TR subscribers: some handsome discounts on software purchased from the Macrium website, including the outstanding Macrium Reflect backup and imaging solution.

Anyone who subscribes for any amount of money at all, down to a $1 payment in our pay-what-you-want system, will get a code good for 20% off at Macrium.com. Those folks who beat the average and get a Gold subscription will receive a code for a whopping 40% off instead.

If you're a TR Silver or Gold subscriber now, your discount code is already waiting for you. Just go to the user control panel and look for it under the "Features" tab. The code should be redeemable throughout the next year.

I'm very pleased to be able to offer a subscriber discount on a product as good as Reflect. I make use of Reflect in Damage Labs constantly thanks to your recommendations. The program writes a bootable WinPE utility onto a thumb drive, and I use it for imaging all of my test systems. I also back up my own PC with Reflect, and it has saved me from an SSD failure with a flawless restore of a weekly image backup. Not only that, but I've received free updates from Macrium for more than a year now without once being held hostage to a required, paid upgrade due to an "incompatibility" with an upgraded version of Windows—unlike *ahem* some imaging companies.

We have more subscriber benefits in the works along these lines, so do yourself a favor and sign up now. You'll also get all of the other subscriber perks, including single-page article views, print templates, comment reply notifications, a subscriber badge, and access to the Smoky Back Room. Beat the average to get triple upvote/downvotes and access to our four-megapixel image galleries, as well.

Finally, remember, if you like what we're doing, you can always add to your subscription amount to support the cause. Thanks!
