The Damage Report

Is FCAT more accurate than Fraps for frame time measurements?
— 11:36 AM on July 22, 2015

Here's a geeky question we got in response to one of our discussions in the latest episode of the podcast that deserves a solid answer. It has to do with our Inside the Second methods for measuring video game performance using frame times, as demonstrated in our Radeon R9 Fury review. Specifically, it refers to the software tool Fraps versus the FCAT tools that analyze video output.

TR reader TheRealSintel asks:

On the FRAPS/frametime discussion, I remember during the whole FCAT introduction that FRAPS was not ideal, I also heard some vendors performance can take a dive when FRAPS is enabled, etc.

I actually assumed the frametimes in each review were captured using FCAT instead of FRAPS.

When you guys introduce a new game to test, do you ever measure the difference between in-game reporting, FCAT and FRAPS?

I answered him in the comments, but I figure this answer is worth promoting to a blog entry. Here's my response:

There's a pretty widespread assumption at other sites that FCAT data is "better" since it comes from later in the frame production process, and some folks like to say Fraps is less "accurate" as a result. I dispute those notions. Fraps and FCAT are both accurate for what they measure; they just measure different points in the frame production process.

It's quite possible that Fraps data is a better indication of animation smoothness than FCAT data. For instance, a smooth line in an FCAT frame time distribution wouldn't lead to smooth animation if the game engine's internal simulation timing doesn't match well with how frames are being delivered to the display. The simulation's timing determines the *content* of the frames being produced, and you must match the sim timing to the display timing to produce optimally fluid animation. Even "perfect" delivery of the frames to the display will look awful if the visual information in those frames is out of sync.

What we do now for single-GPU reviews is use Fraps data (or in-engine data for a few games) and filter the Fraps results with a three-frame moving average. This filter accounts for the effects of the three-frame submission queue in Direct3D, which can allow games to tolerate some amount of "slop" in frame submission timing. With this filter applied, any big spikes you see in the frame time distribution are likely to carry through to the display and show up in FCAT data. In fact, this filtered Fraps data generally looks almost identical to FCAT results for single-GPU configs. I'm confident it's as good as FCAT data for single-GPU testing.
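If you want to experiment with that idea yourself, here's a minimal sketch in Python of what a three-frame moving average over frame-time data might look like. The numbers are made up for illustration; this isn't the exact script we use.

```python
# Minimal sketch, not TR's actual tooling: smooth per-frame times (in ms) with
# a three-frame moving average to mimic the slack of Direct3D's submission queue.

def moving_average_3(frame_times):
    """Average each frame time with the two frames before it
    (shorter window at the start of the run)."""
    smoothed = []
    for i in range(len(frame_times)):
        window = frame_times[max(0, i - 2):i + 1]
        smoothed.append(sum(window) / len(window))
    return smoothed

raw = [16.7, 16.9, 16.5, 45.0, 16.6, 16.8, 16.7]   # one isolated spike
print(moving_average_3(raw))
# The lone 45-ms spike is damped after filtering; a sustained run of long
# frames would still stand out, and that's the kind of spike that tends to
# carry through to FCAT data.
```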

For multi-GPU configs, things become more complicated because frame metering/pacing comes into the picture. In that case, Fraps and FCAT may look rather different. That said, a smooth FCAT line with multi-GPU is not, by itself, a guarantee of smooth animation. Frame metering only works well when the game advances its simulation time using a moving average or a fixed cadence. If the game just uses the wall clock for the current frame, then metering can be a detriment. And from what I gather, game engines vary on this point.

(Heck, the best behavior for game engine timing for SLI and CrossFire—advancing the timing using a moving average or fixed cadence—is probably the opposite of what you'd want to do for a variable-refresh display with G-Sync or FreeSync.)
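To make the simulation-timing point a bit more concrete, here's a toy Python illustration with made-up numbers. Delivery to the display is assumed to be perfectly metered; the question is how far apart in game time the contents of the frames are.

```python
# Toy illustration, not a real engine: frames are produced at uneven times,
# but the display receives them at perfectly metered 16.7-ms intervals.

render_times_ms = [0.0, 14.0, 36.0, 44.0, 62.0]   # uneven frame production

# If the sim advances by the raw wall-clock gap between renders, the content
# steps are uneven even though delivery is smooth, so the animation judders:
wall_clock_steps = [b - a for a, b in zip(render_times_ms, render_times_ms[1:])]
print(wall_clock_steps)                           # [14.0, 22.0, 8.0, 18.0]

# If the sim instead advances by a moving average of recent frame times, the
# content steps line up much better with the evenly metered delivery:
avg_step = sum(wall_clock_steps) / len(wall_clock_steps)
print([avg_step] * len(wall_clock_steps))         # [15.5, 15.5, 15.5, 15.5]
```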

That's why we've been generally wary of AFR-based multi-GPU and why we've provided video captures for some mGPU reviews. See here.

At the end of the day, a strong correlation between Fraps and FCAT data would be a better indication of smooth in-game animation than either indicator alone, but capturing that data and quantitatively correlating it is a pain in the rear and a lot of work. No one seems to be doing that (yet?!).
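If someone did take it on, the correlation math itself would be the easy part; the real work is capturing both data sets and lining them up frame for frame. Here's a hypothetical sketch, assuming you already had aligned per-frame numbers from both tools:

```python
# Hypothetical sketch with made-up numbers: given Fraps-style and FCAT-style
# frame times for the same frames, already matched one-to-one (the hard part),
# a correlation coefficient near 1.0 would mean the spikes line up.
from statistics import correlation   # Python 3.10+

fraps_ms = [16.7, 16.9, 33.1, 16.5, 16.8, 17.0]
fcat_ms  = [16.6, 17.1, 32.8, 16.7, 16.9, 17.2]
print(correlation(fraps_ms, fcat_ms))
```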

Even further at the end of the day, all of the slop in the pipeline between the game's simulation and the final display is less of a big deal than you might think so long as the frame times are generally low. That's why we concentrate on frame times above all, and I'm happy to sample at the point in the process that Fraps does in order to measure frame-to-frame intervals.

I should also mention: I don't believe the presence of the Fraps overlay presents any more of a performance problem than the presence of the FCAT overlay when running a game. The two things work pretty much the same way, and years of experience with Fraps tells me its performance impact is minimal.

Here's hoping that answer helps. This is tricky stuff. There are also the very practical challenges involved in FCAT use, like the inability to handle single-tile 4K properly and the huge amount of data generated, that make it more trouble than it's worth for single-GPU testing. I think both tools have their place, as does the in-engine frame time info we get from games like BF4.

In fact, the ideal combination of game testing tools would be: 1) in-engine frame time recordings that reflect the game's simulation time combined with 2) a software API from the GPU makers that reflects the flip time for frames at the display. (The API would eliminate the need for fussy video capture hardware.) I might add: 3) a per-frame identification key that would let us track when the frames produced in the game engine are actually hitting the display, so we can correlate directly.

For what it's worth, I have asked the GPU makers for the API mentioned in item 2, but they'd have to agree on something in common in order for that idea to work. So far, nobody has made it a priority.


Reconsidering the overall index in our Radeon R9 Fury review
— 12:20 PM on July 13, 2015

I've been pretty active over the weekend responding to questions in the comments section for our Radeon R9 Fury review.

As you may know, our value scatter plot puts the R9 Fury just behind the GeForce GTX 980 in our overall index of average FPS scores across our test suite. Some of you have expressed surprise at this outcome given the numbers you've seen in other reviews, and others have zeroed in on our inclusion of Project Cars as a potential problem, since that game runs noticeably better on GeForces than Radeons for whatever reason.

I've explained in the comments that we use a geometric mean to calculate our overall performance score rather than a simple average specifically so that outliers—that is, games that behave very differently from most others—won't have too big an impact. That said, the geomean doesn't always filter outlier results as effectively as one might wish. A really skewed single result can have a noticeable impact on the final average. For that reason, in the rush to prepare my Fury review, I briefly looked at the impact of excluding Project Cars as a component of the overall score. My recollection is that it didn't seem to matter much.
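For the curious, here's a quick Python illustration of the principle using made-up FPS scores rather than our actual test data: a single extreme result moves a simple average more than it moves a geometric mean, but neither filters it out entirely.

```python
# Made-up FPS scores for illustration; the last game is a big outlier.
import math

def geomean(values):
    return math.exp(sum(math.log(v) for v in values) / len(values))

scores = [60.0, 55.0, 58.0, 62.0, 57.0, 110.0]

print(sum(scores) / len(scores))   # arithmetic mean: 67.0
print(geomean(scores))             # geometric mean: ~64.9
print(geomean(scores[:-1]))        # geometric mean without the outlier: ~58.3
```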

However, prompted by your questions, I went back to the numbers this morning and poked around some. Turns out the impact of that change may be worthy of note. With Cars out of the picture, the overall FPS average for the R9 Fury drops by 1.2 FPS and the score for the GeForce GTX 980 drops by 2.8 FPS. The net result shifts from a 0.6-FPS margin of victory for the GTX 980 to a win for the R9 Fury by a margin of 1.1 FPS.

Things are really close. This is why I said in my analysis: "That's essentially a tie, folks."

But I know some of you put a lot of stock in the race to achieve the highest FPS averages. I also think the requests to exclude the Project Cars results from the index are sensible given how different they are from everything else. So here is the original FPS value scatter plot:

And here's the revised FPS-per-dollar scatter plot without the Cars component.

Some folks will take solace in this symbolic victory for AMD in terms of overall FPS averages. Do note that the price-performance landscape isn't substantially altered by this shift on the Y axis, though.

We have long championed better metrics for measuring gaming smoothness, and our 99th-percentile FPS plot is also altered by the removal of Cars from the results. I think this metric is a much more reliable indicator of delivered performance in games than an FPS average. Here's the original one:

And here it is without Project Cars:

The picture shifts again with Cars out of the mix—and in a favorable direction for the Radeons—yet the R9 Fury and Fury X still trail the less expensive GeForce GTX 980 in terms of general animation smoothness. I believe this result is much more meaningful to PC gamers who want to understand the real-world performance of these products. AMD still has work to do in order to ensure better experiences for Radeon buyers in everyday gaming.
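For anyone new to the metric, here's a small Python sketch of the basic idea using made-up frame times, not data from this review: sort the frame times from a test run, take the 99th-percentile value, and express it as an FPS rate.

```python
# Made-up frame times (ms) for one run: mostly smooth with a few slow frames.
frame_times_ms = sorted([16.7] * 95 + [33.0] * 5)

# Simple nearest-rank 99th percentile: 99% of the frames in the run were
# rendered this quickly or quicker.
p99_ms = frame_times_ms[int(0.99 * (len(frame_times_ms) - 1))]

print(p99_ms)            # 33.0
print(1000.0 / p99_ms)   # ~30.3, roughly the FPS-style figure a plot like this is built on
```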

Then there's the power consumption picture, which looks like so:

I didn't have time to include this plot in the review, although all of the data are there in other forms. I think it's a helpful reminder of another dynamic at play when you're choosing among these cards.

At the end of the day, I think the Cars-free value scatter plots are probably a more faithful reflection of the overall performance picture than our original ones, so I'm going to update the final page of our Fury review with the revised plots. I've looked over the text that will need to change given the shifts in the plot positions. The required edits amount to just a few words, since the revised scores don't change anything substantial in our assessments of these products.

Still, it's always our intention to provide a clear sense of the overall picture in our reviews. In this case, I'm happy to make a change in light of some reader concerns.


Time Warner slings free Maxx upgrades to counter Google Fiber
— 1:35 PM on May 21, 2015

I've been chronicling the slow progress of Google Fiber moving into my metro area, my city, and eventually, into my house. Since Google Fiber started building in the Kansas City area, a funny thing has happened: competition. Even before the Google announcement, we had the option of AT&T U-Verse or Time Warner Cable in my neighborhood. Then Google did its thing, and AT&T later announced the rollout of its own fiber product in parts of the metro. Meanwhile, my incumbent cable provider, Time Warner, has raised the speeds of our cable Internet service several times at no extra charge.

I get the sense that we're pretty fortunate around here, all things considered, compared to a lot of areas in the U.S. One thing we have that many others don't is a real set of options.

Anyhow, I mentioned the other day that the timeline for Google Fiber service turn-ups in my neighborhood is disappointingly slow, even though the fiber's already in the ground. The wait for 1000Mbps up- and downstream was gonna be pretty rough at a continued pace of 50Mbps down and a pokey 5Mbps up.

Happily, we got a notice in the mail (yes, via snail mail) the other day from Time Warner telling us about yet another speed increase at no cost.  This is part of TWC's new Maxx service offering. The "standard" service tier jumps from 15Mbps down/1Mbps up to 50/5. Our "Extreme" package rises from 50/5 to 200/20. And the fastest package goes from 100/5 to 300/20.

Not bad, really. And the change was apparently active. I ran down to my office and did a quick speed test, and sure enough, performance was up. Downstream reached about 110Mbps, and upstream hit about 11Mbps. We have a relatively new modem, from the last couple of years, but the notice said we might need to swap it out for a newer one to reach the full rates. I quickly hopped online and ordered a swap kit, which TWC promised to send out to my house free of charge.

That was on Friday. Then, on Sunday, our Internet service simply stopped working. From what I could tell after some poking and prodding, our home router was fine, and our modem was synced up to the cable network fine. It just wouldn't pass packets. What followed was a weird combination of good and bad.

Somehow, I found TWC's customer service account on Twitter and decided to see if there was an outage in my area. They were incredibly quick to reply and ask me for more info about my TWC account. I provided it, and they soon informed me that my modem had been quarantined in order to alert me that I needed to upgrade it to get the full speeds available to me.

Yes, they straight took down my service to let me know that I needed to order a modem I'd already ordered.

If only we had... information technology that would allow companies to target only appropriate customers with these messages. If only other forms of communication existed than a total service shut-off. If only... wow.

Anyhow, the Twitter rep took my modem out of quarantine and explained that most users should see a web-based message about the reason for the quarantine—along with a form to order a new modem and a means of getting the current one out of quarantine. It's just that "some routers" block that message. My excellent Asus AC2400 router was apparently one of them, likely due to good security design.

Again, wow. I think competition has made TWC aggressive without really making them customer-focused. I suppose it's a start.

Regardless, my new modem arrived yesterday and I installed it. The process was a little clumsy, but I muddled through. The end result was a full realization of our new service speeds. Speedtest.net tells me I can reach 216Mbps downstream and 21Mbps upstream, just a little better than the advertised rate.

Man, four times our old upstream and downstream speeds is gonna make the wait for Google Fiber much easier. Heck, I'm not sure how many servers out there really sling bits to consumers at 200Mbps—other than,  you know, Steam. Maybe other folks with fast connections can enlighten us about that. My sense is that, for purposes that don't involve upstream transmissions, what we have now may not differ much in practical terms from fiber-based Internet services. Didn't happen how I expected, but I'm pleased to see it.


Thanksgiving offers a perfect chance to crack open a busted iPad Air
— 11:05 AM on December 1, 2014

Phew. I needed that break. I was able to take off the latter half of last week and this past weekend to spend some time with my family, and it was refreshing to get away. Thanks to Geoff and Cyril, with their alternative Canadian Thanksgiving ways, for keeping the site going.

Because I'm partially crazy, I couldn't just relax during my time away, of course. I took it upon myself to attempt a computer hardware repair. And since doing things that are, you know, sensible isn't a requirement for the halfway insane, I decided to replace the cracked glass digitizer on my brother's iPad Air. Any old chump can fix a busted PC, but only the truly elite hax0rs can tackle hardware maintenance for devices that have been designed with active malice toward the technician.

My preparation for this feat was asking my brother to order a replacement digitizer for his iPad Air and watching the first few minutes of an instructional video on the operation before losing interest. I figured, eh, it's all about glue and guitar picks.

Don't get me wrong. It is all about glue and guitar picks, but the YouTube videos lie. They show operations being performed by competent, experienced people whose hands know what to do in each situation. I am not that person, which is a very relevant difference once you get knee deep into one of these operations.

The other way most of the YouTube videos lie is that they show somebody removing a completely whole, unsullied piece of glass from the front of a device. That was not my fate. The screen on my brother's Air had cracks running clear across its surface, combined with shattered areas covered by spiderwebs of tiny glass shards.

The replacement screen came with a little repair kit, including a guitar pick, mini-screwdrivers, a suction cup, and several plastic pry tools. I used a hair dryer to heat the adhesive around the glass pane, pulled up on the glass with the suction cup, pried under it in one spot with the tiny screwdriver, and slipped a guitar pick into the adhesive layer. Sounds simple, but just getting this start took a lot of trial and error.

I soon discovered two important truths. One, I needed about five more guitar picks to keep the areas where I'd separated the adhesive from re-sealing. I had only the one—and we were away from home, at a little rental house thing, for the holiday. Two, getting a cracked screen to separate from the adhesive is a huge pain in the rear. Suction cups don't stick to cracked glass.

Here's what I eventually pulled free from the chassis, after over an hour's hard work with a bad feeling in the pit of my stomach.

Notice the spiderwebbed section sticking up. Yes, I literally peeled the glass free from its adhesive backing. Not pictured are the hundreds of tiny glass shards that shattered and fell out during the process, all over me and into the iPad chassis. The minuscule shards practically coated the surface of the naked LCD panel beneath the glass, while others worked their way into my fingertips. The pain was one thing, but worse, I was pretty sure at this point that I'd ruined the LCD panel in my brother's tablet.

Notice that some sections of the screen around the edges are not in the picture above. They didn't break free when I removed the rest of the digitizer, so I had to scrape those shards off of their adhesive backing separately.

Also notice that the busted digitizer doesn't have a home button or a plastic frame around the pinhole camera opening up top. Most of them do, and this one did when I first removed it from the iPad. However, the replacement digitizer we ordered bafflingly didn't come with a home button or pinhole frame included. It did in the YouTube videos, but surprise! You get to do this on hard mode.

The home button bracket seemed like it was practically welded on there. And remember: we didn't have any spare adhesive or glue or anything.

After nearly giving up in despair, I found another YouTube video showing this specific operation in some detail. The dude in it used tools I didn't have, but what the heck. After heating the home button area with the hair dryer, I pulled out my pocket knife and went for it. I proceeded to separate the home button, its paper-thin ribbon connection, and the surrounding metal bracket from the busted digitizer. Somehow, I managed to keep enough adhesive on the bracket to allow it to attach to the new screen. The button happily clicked without feeling loose. This success massively exceeded my expectations.

Once I'd crested that hill, I  came face to face with that perfect Retina LCD coated with glass dust. Frankly, I'd been trying to bracket off my worries about that part of the operation, or I wouldn't have been able to continue. After lots of blowing on the gummy surface of the LCD panel, I decided what I needed to deal with the remaining glass shards and fingerprints was a microfiber cloth. Lint from cotton would be disastrous. Shortly, my brother went out to his truck and returned with a nasty, dirt-covered microfiber cloth that was pretty much our only option. A couple of the corners were less obviously soiled, so I used them lovingly to brush, rub, and polish the surface of the LCD panel. Several spots where I concentrated my efforts just grew into larger and larger soiled areas. My brother stood looking nervously over my shoulder, asking worried questions about the state of things. However, after rotating the cloth and giving it some time and gentle effort, I was somehow able to dispel the oily patches almost entirely.

From here, it was all downhill, right? I attached each of the miniature ribbon connectors and, before reassembling the tablet, turned it on for a quick test. To my great relief and pleasure, the LCD worked perfectly, with no dead pixels or obvious damage of any kind. And the touchscreen digitizer responded perfectly to my input, even though it wasn't yet layered atop the LCD. It was good to go.

The next step was the tedious process of placing the pre-cut 3M adhesive strips along the edges of the iPad chassis. Somehow, I managed to do this without folding over the glue strips and having them stick to themselves. Really not something I expected to pull off cleanly.

Pictured above is the open iPad with the new digitizer attached. You can see the adhesive strips around the edges of the chassis with the backing still on one side. My bandaged fingers are holding up the LCD panel, and the big, black rectangles you see are the iPad's batteries. The device's motherboard sits under the metal shield just above the batteries. It's a little larger than a stick of gum. I stopped to take a picture at this point mostly because my stress level was finally low enough for me to remember to do so.

With only a little remaining struggle, I was able to re-seat the LCD panel and secure it, remove the adhesive backing, flip over the new digitizer, and push it firmly into place atop the new adhesive layer. After a little clean-up, my brother's iPad Air looked as good as new.

Three hours after my journey began, I turned on the repaired iPad. It booted up quickly. The LCD looked perfect. The home button was clicky and solid. And I swiped to log in.

Didn't take.

I swiped again, a few times, and I was able to log in. And then... the thing went crazy. Phantom touches everywhere ran apps, activated UI buttons, and began typing gobbledygook messages. The touchscreen was completely hosed.

Utter defeat. What followed isn't something I'd like to share on the Internet. Suffice to say that I'm a grown man, and grown men shouldn't act like that.

Initially, I blamed myself for messing up the repair with my clumsiness. I figured I must have ruined a ribbon connector or something. Hours later, after I'd gotten some distance from the whole thing, I poked around online and came to a different conclusion. You see, the original adhesive layer I removed from the iPad was essentially a felt lining with sticky stuff on both sides. The repair kit, however, came only with a thin layer of adhesive, with no insulator. I'm now 99% certain that the touchscreen's problems were caused by making electrical contact with the iPad's aluminum chassis. Others have run into the same issue, looks like.

I may never know for sure. My brother took the iPad back to his home after Thanksgiving and will be paying a repair shop to fix it. I dunno whether they'll offer any feedback about what happened.

Meanwhile, I suppose I got a little bit more experience doing repair work on mobile devices. So far, I've learned two things. First, I can do this. It just takes more of the same patience, precision, and self-imposed calm that working on larger computer systems requires. And a few initial victims, like my daughter's Nintendo DS, my mother-in-law's cell phone, my old laptop, and my brother's iPad Air.

Hey, they were broken anyway.

Second, it takes a special sort of person to do this stuff for fun. I am probably not that sort of person—and I'm okay with that.

Besides, next time I'll have a proper heat gun, more guitar picks, and some insulating tape.


Finally Light Bulb's Tesla tech gives LEDs a worthy rival
— 12:37 AM on November 20, 2014

Ever since I improbably started blogging occasionally about light bulbs, I've been waiting impatiently to get a look at the first product from The Finally Light Bulb Company. This start-up company from Cambridge, Massachusetts has decided to bring a Tesla-era lighting technology into the consumer space.

The tech is known as induction or electrodeless lighting. Induction tech is pretty closely related to fluorescent lighting: a magnetic field excites gases in an enclosed tube. Those gases generate UV light, which strikes the phosphor coating on the tube, causing it to glow. (I'm probably butchering the details, so go here for more info.) Induction lighting has been used for years in industrial and commercial settings, where its reliability and efficiency are appealing, but the fixtures have been much too large for use in the home. The folks at Finally have worked to miniaturize induction lighting radically, so an entire assembly will fit into the space of a conventional A19 light bulb.

Finally calls its miniaturized version of induction lighting "acandescent technology" in an obvious play on "incandescent"—and a tip of the hat to the firm's goal, which is to replicate the warm, welcoming light of an incandescent bulb with very few compromises.

Now, I have almost no specific details about how Finally's implementation of inductive lighting works. All I have is what is presumably a finished product, packaged neatly in retail garb. Heck, I'm not entirely sure why I apparently have this bulb before just about anyone else. Probably they sent me one since I kept bugging them about it.

That said, I suspect Finally may have deployed a couple of important tools in pursuit of their goal. One such tool could be a very fast cycle time. Old-school fluorescents cycle at 60Hz, and I believe CFLs generally run at 2kHz. Some induction lights cycle as quickly as two and a half megahertz. Finally may have chosen a relatively high operating frequency in order to ensure solid, steady illumination. Also, Finally was undoubtedly very particular when selecting the mix of phosphors to use, since those determine the spectrum of light emitted by the bulb.


The yellow-striped Finally bulb next to a Cree 4Flow and a conventional 60W incandescent

Whatever else is going on, there's no question that Finally's miniaturization efforts have succeeded. The payoff is a bulb whose shape closely mimics the teardrop profile of a traditional 60W incandescent.

The rest of the Finally Bulb's specs are competitive with the incumbent LED offerings, as well. It generates 800 lumens of light output using only 14.5W, just a touch above the 13.5W power consumption of Cree's TW-Series LED. The bulb's color temperature is rated at 2700K, the same as other "soft white" bulbs, and its $9.99 list price is in the neighborhood of the best LEDs, even if it is a couple of bucks higher than Finally initially projected. The bulb is EPA rated for 13.7 years of operation at three hours per day, which Finally backs with a 10-year limited warranty.

This bulb can go places some LEDs can't, too. It's rated for use in damp environments like bathrooms (though not in direct contact with water), and it can also be used in enclosed fixtures. For most intents and purposes, the Finally bulb can be used just like an incandescent. There is one place where it falls a bit short: it's not compatible with dimmer switches. Finally has said that future "acandescent" bulbs could be made to work with dimmers, but this first product doesn't go there.

The biggest question, of course, is about the quality of the illumination it produces. Finally makes a big claim about how its bulb reproduces that familiar, warm incandescent glow: "Finally, it is the same." That's a tall order since even the best LEDs don't measure up to the full-spectrum illumination produced by incandescent lights.

The Finally bulb's spec sheet says it has a color rendering index (CRI) of 83. That's short of the perfect 100 produced by incandescent bulbs, but it surpasses the 80 rating of the excellent Cree 60W Soft White LED. (Cree's TW Series claims a CRI of 93.) That said, CRI is an imperfect measure, so I wouldn't get too hung up on those numbers.

When I installed the Finally bulb in a lamp and flipped the switch, I was greeted with a bit of a surprise. The product's packaging says it's "instant on and instant re-start," but that summation misses an important reality. The bulb does light up immediately when you flip the power switch, but it starts at only about 50% of peak brightness. The light then ramps up to full brightness over the course of the next five or six seconds, quickly enough that the change in luminance is easy to observe. The ramp-up is faster than any CFL I've ever seen, but it doesn't match the immediacy of LEDs or incandescents.

In fact, it's hard to tell for sure, but I suspect the Finally bulb may not reach its absolute peak brightness until several minutes have passed. If I'm right about that, though, the effect is pretty subtle.

Get past that one quirk, and the rest of the story is quite good. As you can probably tell from the picture above, the bulb offers pretty much perfect omnidirectional light distribution, with none of the challenges LEDs sometimes face on this front.

The illumination from the Finally bulb is, as promised, warm and inviting. In my view, it's easily superior to any CFL. Every one of the poor friends and family members I've accosted for an opinion has agreed with that assessment without reservation. The difference is not hard to see.

Stare at a room lit by this bulb a little longer, and you'll notice something unexpected: the light it produces is noticeably pink in tone. If you've experimented with CFLs and LEDs, you may have noticed that not every 2700K light source produces the same mix of colors. Many CFLs tend to be predominantly green, and they can cast a sickly pallor across a living space. LEDs aren't quite so skewed, but they tend to be relatively yellow in tone.

Finally appears to have chosen a phosphor mix that emphasizes red. That's an intriguing aesthetic choice. The rosy pink light from this bulb runs counter to the cooler, flatter, and more antiseptic feel of many CFLs and even LEDs. This emphasis on the red portion of the spectrum makes the Finally bulb more appealing in certain ways. Wood tones appear deeper and more pronounced. Skin tones look healthier, too. I haven't yet combined three of them in the fixture above our kitchen table, but I suspect food presentation will be more pleasing, as well.

That said, the green walls of my bedroom take on more of a gray cast in this light, so it's not perfect. If you compare them side by side, the Finally bulb actually looks somewhat pinker than a 60W incandescent, kind of like GE's original Reveal bulbs with the neodymium coating. Not that there's anything wrong with that. (Happily, this product doesn't make the mistake of providing noticeably less illumination than a regular 60W bulb, either.)

Overall, I'd say the Finally bulb's light quality nearly rivals that of my favorite LED, Cree's 13.5W TW Series. I'm not sure I could say one is clearly superior to the other in every way. I do think the light from the TW Series is probably a little more balanced. If I were installing lamps in a room full of wood paneling, though, I'd pick the Finally bulb for that mission.

All in all, then, this is a spectacular start for an alternative lighting technology that's new to the consumer space—and an auspicious beginning for the young company that produced it. If you're into this stuff, you should grab one and try it out. The bulb is worth seeing in action, and you'll surely get some use out of it.

Unfortunately, I don't yet know where you can purchase one beyond the pre-order form on Finally's website. The firm hasn't yet announced a final availability date for its first product or a list of retailers that will carry it. I expect we'll be hearing more on that front soon.  I may have to snag a few more of these bulbs for myself once they become available.


Cree raises its game, lowers prices with 4Flow bulb
— 8:00 AM on October 28, 2014

Since I posted my Friday night topic and then a blog post about LED light bulbs, I've been quietly waiting for another chance to try out something new and interesting on the lighting front. I figured that chance would come with the introduction of the Finally Bulb, but that company's name is proving to be unintentionally apt. I'm now told they'll have samples ready next month.

Meanwhile, the folks at Cree are making news today with the introduction of a new, cheaper consumer LED bulb. The firm's existing 60W replacement bulbs were already my favorites, and this new bulb further refines the formula. Have a look at, yes, our review sample:

As you can see, this puppy is shaped pretty much exactly like an Edison-style A19 light bulb. Cree has eliminated the external heatsink and replaced it with what the firm calls a 4Flow Filament Design. Without the heavy, bulky external heatsink, this LED bulb is shockingly lightweight—under two ounces—and costs quite a bit less to produce. As a result, the price for the 60W-equivalent bulbs is just $8.97, a dollar less than Cree's current 60W-equivalent offering.

Cree plans to offer 4Flow bulbs in 40W- and 60W-equivalent types, with a choice of "soft white" 2700K and "daylight" 5000K color temperatures. The new bulbs will be sold exclusively through The Home Depot, and they will add to Cree's lineup rather than replacing any existing products. Like other Cree LED bulbs, the 4Flow models are instant-on, compatible with dimmers, and rated for ridiculously long lifetimes.

One obvious competitive target for the 4Flow is Philips' nifty heatsink-free SlimStyle LED bulbs. The SlimStyle 60W equivalent sells for $8.97 at The Home Depot, and right now, my local power company is apparently subsidizing these bulbs in a deal that brings their price down to $5.97 in local stores. The SlimStyle offers excellent illumination that's almost indistinguishable from the Cree's. Its only major drawback is a funky, flat shape that may be a little wider than some fixtures will permit. The 4Flow matches the SlimStyle's base price and offers a more conventional shape.

Cree has managed to eliminate the need for a metal heatsink at the base of the bulb by combining several measures. Most obvious is the venting at the top and bottom of the plastic shroud covering the LEDs. Inside, the 4Flow bulb is divided into four chambers by the reflective metal substrates on which the LEDs are mounted. Each chamber contains two LEDs, for a total of eight in each bulb. The heat generated by the LEDs causes air to circulate, and the bulb is then cooled by convection.

Older Cree bulbs have 10 LEDs inside. Cree says it was able to reduce the LED count in the 4Flow thanks to its new Extreme High Power LEDs.

All of the LEDs inside the bulb are situated on the same plane, so the 4Flow retains the filament-like look familiar from Cree's earlier products. One could easily mistake it for an incandescent upon casual inspection. The 4Flow layout, however, eliminates the dark spot that sits at the top of the older bulbs. Despite that dark area, the older Cree bulbs cast light in all directions pretty effectively, but I suspect some folks will consider the 4Flow an aesthetic improvement.

One downside of the new design is slightly higher power consumption: 11W versus 9.5W for Cree's earlier 60W equivalent. I got the chance to talk with Mike Watson, Cree's VP of Product Strategy, about the 4Flow, and I asked him about the added power draw. He said that the new bulb draws more power in part because of the different thermal process; it's driving the LEDs harder. He pointed out that the energy cost difference between the two bulbs over their lifetimes works out to about $4—$139 versus $135. Cree saw this tradeoff as acceptable so long as it could lower the price of entry without compromising light quality. He also noted that the 40W-equivalent version of the 4Flow has the same 6W power rating as its predecessor.

The 4Flow's open venting could make it susceptible to some problems that other LEDs wouldn't face, including damage from moisture and bugs. Watson told me the 4Flow isn't rated for use in damp settings, although it could go into outdoor fixtures that provide enough protection. As for bugs, Watson pointed out that the 4Flow's mostly indoor usage model should help stave off some problems. He also explained that LEDs do not emit light in the UV spectrum, so they don't tend to attract bugs like incandescents do. That's really interesting and somewhat reassuring, but I'll have to make it through a few Missouri summers with 4Flows in our indoor lamps before I'm entirely persuaded. I figure we're bound to have a cricket or spider fricassee itself on one of those LEDs eventually.

That worry aside, the Cree 4Flow looks to be the most compelling candidate yet to prompt a house-wide conversion from inefficient incandescents or nasty-looking CFLs. The extent to which it mimics the look and feel of a conventional light bulb is unprecedented. Before talking to Watson, I hadn't realized that Cree bulbs could be used in enclosed fixtures, but they can. The 4Flow's packaging warns only against combining LEDs with CFLs or incandescents in the same fixture. That fact opens up a new front at my house. I reckon having that knowledge will cost me some multiple of $8.97.

The one question Watson couldn't answer directly was whether Cree plans to introduce a TrueWhite version of the 4Flow. Thanks to a neodymium coating that reduces the yellow bias in the light produced by LEDs, Cree's TW Series bulbs produce the best color rendering I've seen this side of an incandescent. I'd flip out over an inexpensive TW Series bulb. Of course, Watson couldn't comment on unannounced products. He did say that Cree is committed to having TW Series bulbs available and that if a TrueWhite version of the 4Flow makes sense, "we'll do it." I suppose time will tell.


Civ: Beyond Earth with Mantle aims to end multi-GPU microstuttering
— 3:46 PM on October 23, 2014

The next installment in Sid Meier's Civilization series, Civilization: Beyond Earth, comes out tomorrow. The folks at AMD have been working with its developer, Firaxis, to optimize the game for Radeon graphics cards. Most notably, Firaxis and AMD have ported the game to work with AMD's lightweight Mantle graphics API.

Predictably, AMD and Firaxis report that Mantle lowers the game's CPU overhead, allowing Beyond Earth to play smoother and deliver higher frame rates on many systems. They've even provided a nice bar graph with average FPS showing AMD in the lead, like so:

That's all well and good, I suppose (although *ahem* the R9 290X they used has 8GB of RAM). But average FPS numbers won't tell you about gameplay smoothness or responsiveness. What's more interesting is how AMD and Firaxis have tackled the thorny problem of multi-GPU rendering in Beyond Earth.

Both CrossFire and SLI, the multi-GPU schemes from AMD and Nvidia, handle the vast majority of today's games by divvying up frames between GPUs in interleaved fashion. Frame one goes to GPU one, frame two to GPU two, frame three back to GPU one, and so on. This technique is known as alternate-frame rendering (AFR). AFR does a nice job of dividing the workload between GPUs so that everything scales well for the benchmarks. Both triangle throughput and pixel processing benefit from giving each GPU its own frame.

Unfortunately, AFR doesn't always do as good a job of improving the user experience as it does of improving—or perhaps inflating— average FPS scores. The timing of frames processed on different GPUs can go out of sync, causing a phenomenon known as multi-GPU micro-stuttering. We've chronicled this problem in our initial FCAT article and, most extensively, in our epic Radeon HD 7990 review. AMD has attempted to fix this problem by pacing the delivery of frames to the display, much as Nvidia has done for years with its frame metering tech. But frame pacing is imperfect and, depending on how a game's internal simulation timing works, may lead to perfectly spaced frames that contain out-of-sync visuals.

Making AFR work well is a Hard Problem. It's further complicated by variable display refresh schemes like G-Sync and FreeSync that attempt to paint a new frame on the screen as soon as it's ready. Pacing those frames could be a hot mess.

In a similar vein, virtual reality headsets like the Oculus Rift are extremely sensitive to input lag, the delay between when a user's head turns and when a visual response shows up on the headset's display. If that process takes too long, the user may get vertigo and go all a-chunder. Inserting a rendering scheme like AFR with frame metering into the middle of that feedback loop is a bad proposition. Frame metering intentionally adds latency to some frames in order to smooth out delivery, and AFR itself requires deeper queuing of frames, which also adds latency.

At the end of the day, this collection of problems has conspired to make AFR—and multi-GPU schemes in general—look pretty shaky. AFR is fragile, requires tuning and driver support for each and every game, and doesn't always deliver the experience that its FPS results seem to promise. AMD and Nvidia have worked hard to keep CrossFire and SLI working well for their users, but we at TR only recommend buying multi-GPU solutions when no single GPU is fast enough for your purposes.

Happily, game developers and the GPU companies seem to be considering other approaches to delivering an improved experience with multi-GPU solutions, even if they don't over-inflate FPS averages. Nvidia vaguely hinted at a change of approach during its GeForce GTX 970 and 980 launch when talking about VR Direct, its collection of features aimed at the Oculus Rift and similar devices. Now, AMD and Firaxis have gone one better, throwing out AFR and implementing split-frame rendering (SFR) instead in the Mantle version of Beyond Earth.

AMD provided us with an explanation of their approach that's worth reading in its entirety, so here it is:

With a traditional graphics API, multi-GPU arrays like AMD CrossFire™ are typically utilized with a rendering method called "alternate-frame rendering" (AFR). AFR renders odd frames on the first GPU, and even frames on the second GPU. Parallelizing a game's workload across two GPUs working in tandem has obvious performance benefits.

As AFR requires frames to be rendered in advance, this approach can occasionally suffer from some issues:

- Large queue depths can reduce the responsiveness of the user's mouse input
- The game's design might not accommodate a queue sufficient for good mGPU scaling
- Predicted frames in the queue may not be useful to the current state of the user's movement or camera

Thankfully, AFR is not the only approach to multi-GPU. Mantle empowers game developers with full control of a multi-GPU array and the ability to create or implement unique mGPU solutions that fit the needs of the game engine. In Civilization: Beyond Earth, Firaxis designed a "split-frame rendering" (SFR) subsystem. SFR divides each frame of a scene into proportional sections, and assigns a rendering slice to each GPU in AMD CrossFire™ configuration. The "master" GPU quickly receives the work of each GPU and composites the final scene for the user to see on his or her monitor.

If you don’t see 70-100% GPU scaling, that is working as intended, according to Firaxis. Civilization: Beyond Earth’s GPU-oriented workloads are not as demanding as other recent PC titles. However, Beyond Earth’s design generates a considerable amount of work in the producer thread. The producer thread tracks API calls from the game and lines them up, through the CPU, for the GPU's consumer thread to do graphics work. This producer thread vs. consumer thread workload balance is what establishes Civilization as a CPU-sensitive title (vs. a GPU-sensitive one).

Because the game emphasizes CPU performance, the rendering workloads may not fully utilize the capacity of a high-end GPU. In essence, there is no work leftover for the second GPU. However, in cases where the GPU workload is high and a frame might take a while to render (affecting user input latency), the decision to use SFR cuts input latency in half, because there is no long AFR queue to work through. The queue is essentially one frame, each GPU handling a half. This will keep the game smooth and responsive, emphasizing playability, vs. raw frame rates.

Let me provide an example. Let's say a frame takes 60 milliseconds to render, and you have an AFR queue depth of two frames. That means the user will experience 120ms of lag between the time they move the map and that movement is reflected on-screen. Firaxis' decision to use SFR halves the queue down to one frame, reducing the input latency to 60ms. And because each GPU is working on half the frame, the queue is reduced by half again to just 30ms.

In this way the game will feel very smooth and responsive, because raw frame-rate scaling was not the goal of this title. Smooth, playable performance was the goal. This is one of the unique approaches to mGPU that AMD has been extolling in the era of Mantle and other similar APIs.
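To restate the arithmetic from that example as a quick sketch (the numbers and the perfectly even two-way split are assumptions taken from AMD's explanation above, not measurements):

```python
def input_lag_ms(frame_time_ms, queue_depth):
    # Rough model: input lag is the per-frame render time multiplied by the
    # number of frames queued ahead of the one you finally see.
    return frame_time_ms * queue_depth

print(input_lag_ms(60, 2))       # AFR with a two-frame queue: 120 ms
print(input_lag_ms(60, 1))       # a one-frame queue: 60 ms
print(input_lag_ms(60 / 2, 1))   # SFR, each GPU rendering half the frame: 30 ms
```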

All I can say is: thank goodness. Let's hope we see more of this kind of thing from AMD and major game studios in the coming months and years. Multi-GPU solutions don't have to double their FPS averages in order to achieve smoother animations or improved responsiveness. I'd much rather see a multi-GPU team producing more modest increases that the user can actually feel and experience.

Of course, while we're at it, I'll note that if you measure frame times instead of FPS averages, you can more often capture the true improvement offered by mGPU solutions. AMD has been a little slower than Nvidia to adopt a frame-time-sensitive approach to testing, but it's clearly a better way to quantify the benefits of this sort of work.

Fortunately, AMD and Firaxis have built tools into Beyond Earth to capture frame times. I have been working on other things behind the scenes this week and haven't yet had the time to make use of these tools, but I'm pleased to see them there. You can bet they'll figure prominently into our future GPU articles and reviews.
