The Damage Report

Mixing power-line networking with Wi-Fi proves intoxicating
— 11:27 AM on September 28, 2015

You may recall my failed attempt at using a second Wi-Fi router in repeater mode in order to overcome some signal-strength issues in the upper level of my house. I learned very quickly that compromising half (or more) of your wireless bandwidth in order to talk to a second router wirelessly isn't a very good trade-off for most clients. That's especially true in my case since our home cable modem service can reach up to 220 Mbps downstream, well beyond the delivered speeds we see out of our very nice wireless AC router, the Asus RT-AC87U.

The alternative to using a wireless router-to-router link is, obviously, some form of wired connection. Running Ethernet between the two routers and putting our older Asus RT-N66U into access point mode should allow us to have two sources of Wi-Fi signals at different spots in the house, both capable of full-speed communication with the Internet. But there's a big problem with that plan. Between the weirdness of our atrium-split floorplan and my own essential physical laziness, there was about zero chance that I'd actually run an Ethernet cable inside the walls anytime soon.

Fortunately, after my last post on this subject, some of you suggested trying a different sort of wired connection: power-line networking. A pair of power-line adapters will transfer data across your existing home electrical wiring. Although those sorts of products started out pretty poorly, they've apparently matured nicely in recent years. I was immediately intrigued by the idea and soon ordered a pair of these TP-Link adapters from Amazon for 70 bucks.

The idea was to put one adapter next to my main router and the other one next to the access-point router, with Ethernet connections going from each adapter to the adjacent router. The power-line network would then bridge between the two routers, hopefully providing a fast, reliable, low-latency connection.

Making it happen turned out to be a bit of an adventure, but not for the reasons you might expect.

When the power-line adapters arrived, I didn't mess around. I pulled them from the box, briefly glanced at the instructions and discarded them, and connected one adapter to my main router. Then I ran upstairs and plugged the other adapter into the wall socket in my bedroom and attached my laptop to it via Ethernet. I seriously didn't press any buttons or even look at any indicator lights on the little white wall-warts. Within seconds, I was pulling around 120 Mbps—maybe a little more—in a bandwidth test, with packet latency of 1-4 ms.

Man, that was easy.
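
By the way, if you want to measure the power-line link itself rather than your Internet connection, a quick way is to run iperf3 between a machine at each end of the link and follow it with an ordinary ping. Here's a minimal sketch in Python that does both; the far-end address is a placeholder, it assumes iperf3 is installed on both machines with "iperf3 -s" running on the far side, and the ping flag shown is the Linux/macOS one.

    # Quick throughput-and-latency check across a power-line link (sketch).
    # Assumes iperf3 is installed on both machines and the far end is
    # running "iperf3 -s". The address below is a placeholder.
    import json
    import subprocess

    FAR_END = "192.168.1.50"  # hypothetical address of the PC by the other adapter

    # Throughput: ask iperf3 for JSON output and pull the received rate.
    result = subprocess.run(["iperf3", "-c", FAR_END, "-J"],
                            capture_output=True, text=True, check=True)
    report = json.loads(result.stdout)
    mbps = report["end"]["sum_received"]["bits_per_second"] / 1e6
    print(f"TCP throughput: {mbps:.0f} Mbps")

    # Latency: a handful of pings is enough to spot anything ugly.
    subprocess.run(["ping", "-c", "5", FAR_END])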

Yes, the power-line adapter is rated for "up to 1200 Mbps," but I never expected to get practical speeds that fast. 120 Mbps is fast enough to outrun the Wi-Fi capabilities of most of the phones and tablets we use, and heck, I had the thing plugged into an outlet on an exterior wall that's as far away from the other adapter as possible within the house.

My next step was to connect the laptop to the RT-N66U and switch it from repeater mode into AP mode. Then I plugged the router's upstream port into the power-line adapter and fired everything up. Seemed like I was good to go, right?

What followed was a lot of disappointment, as I found that Wi-Fi clients on the RT-N66U only achieved about 60 Mbps on the 2.4GHz network and about 30 Mbps on 5GHz. What the heck? It seemed like things were no faster than before.

The process was more chaotic than I might care to admit, but my next steps involved a lot of A/B testing of various components of this network in order to track down the problem.

Moving the secondary power-line adapter to an outlet with a more central location in the house boosted Ethernet speeds to about 180 Mbps, with peaks near the 220-Mbps limit of our cable-modem service. My laptop, when directly connected to the power-line adapter, loved it. The location change also raised the speed of Wi-Fi clients on the RT-N66U to 35-38 Mbps on 5GHz and over 70 Mbps on 2.4GHz, but it wasn't exactly a breakthrough.

Was something wrong with my router, or did the combination of Wi-Fi plus power-line somehow not provide the stability needed to reach higher transfer rates?

Ultimately, I wound up sitting here in Damage Labs with the RT-N66U attached to a port on my GigE switch and configured with unique SSIDs on its 2.4GHz and 5GHz segments. Everything was as explicit as possible (maybe including my language). With my laptop five feet away, I could reach peaks of 80 Mbps on 2.4GHz and 40 Mbps on 5GHz, nothing more.

I knew the RT-N66U was capable of higher speeds, but it just wasn't delivering. Thinking there might be some bug in the latest Asus firmware update, I installed the alternative Merlin firmware to see if that would help, but speeds didn't improve.

More tweaking of Wi-Fi parameters and such was involved along the way. I'm condensing a lot of hazy frustration. But at one point while spelunking through the menus, I noticed some settings for WDS bridging that I couldn't alter while the router was in AP mode. It looked like, possibly, the router might still be configured to bridge to the AC87U over 5GHz Wi-Fi—a leftover from when I had the thing in repeater mode.

I wound up going nuclear and doing a factory reset on the router. Then, after a bit of configuration back into AP mode, a breakthrough: speeds well in excess of 80 Mbps on both the 2.4GHz and 5GHz bands.

The frigging router had been secretly stuck in WDS mode, and at least half of its 5GHz bandwidth had been reserved for wireless bridging. Ugh.

With that issue sorted, I gradually set everything back up exactly how I'd intended, testing periodically along the way. Ultimately, the RT-N66U was talking to the main network over the power-line adapters and broadcasting the same SSIDs as our main router. Clients could connect to it seamlessly. I made sure there was no overlap in Wi-Fi channel use. We now had reasonably solid 5GHz connectivity in every room of the house, with a no-doubt 2.4GHz signal as a backup.

I haven't done a ton of directed testing on the variability or reliability of the power-line link, but in regular use through the past few days, the new setup has been essentially flawless. I've offered my family the chance to complain several times each, but no one has noticed any hiccups. Periodic speed tests with phones and tablets have reached peaks around 75-80 Mbps over Wi-Fi on either router. The desktop PCs with Wi-Fi can range higher, to 180 Mbps or more. Everything more or less works like it did before, but the Wi-Fi dead spots are eliminated and, thanks to stronger signals, performance is up generally.

I do have one caveat about the power-line adapters, though. After I first plugged them in, I noticed some strange, subtle noises while sitting at my desk working. Eventually, I realized I was hearing interference caused by the power-line network doing its thing. Moving my speakers' power plug from the wall socket to the UPS resolved the problem, but it's possible we could encounter similar problems elsewhere over time.

Other than that concern and the havoc caused by the router issues, setting up the power-line networking stuff has been a huge win. Worth checking out if you need a fast, painless extension to the other side of the house.

66 comments — Last by tipoo at 2:31 PM on 11/19/15

How much video memory is enough?
4GB versus the world
— 10:57 AM on August 12, 2015

One question we haven't answered decisively in our recent series of graphics card reviews is: how much video memory is enough? More pressingly, given the 4GB limit for Radeon R9 Fury cards: how much is too little? Will a 4GB video card run into performance problems in current games, and if so, when?

In some ways, this question is harder to answer than one might expect. Some enthusiasts have taken to using monitoring tools in order to see how much video memory is in use while gaming, and that would seem to be a sensible route to understanding these matters. Trouble is, most of the available tools track video memory allocation at the operating system level, and that's not necessarily a good indicator of what's going on beneath the covers. In reality, the GPU driver decides how video memory is used in Direct3D games.
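
To put a finer point on it, here's the kind of number those monitoring tools report. This little Python sketch uses the pynvml bindings to poll the driver-level "memory used" counter on an Nvidia card (the AMD side has its own tools); what comes back is how much VRAM has been allocated, not how much of it the game actually needs resident at any given moment.

    # Poll the driver-level "VRAM used" counter, which is roughly what
    # most monitoring utilities display. This is allocation, not the
    # game's true working set. Requires the pynvml package and an
    # Nvidia GPU.
    import time
    import pynvml

    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)

    for _ in range(10):                      # sample once a second for ~10 seconds
        info = pynvml.nvmlDeviceGetMemoryInfo(handle)
        print(f"used {info.used / 2**20:.0f} MB of {info.total / 2**20:.0f} MB")
        time.sleep(1)

    pynvml.nvmlShutdown()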

We might be able to approach this problem better by using vendor-specific development tools from AMD and Nvidia—and we may yet do so—but we can always fall back on the simplest thing: testing the hardware to see how it performs. We now have a number of video cards based on similar GPU architectures with different amounts of VRAM, from 4GB through 12GB. Why not run a quick test in order to get a sense of how different GPU memory configurations hold up under pressure?

My weapon of choice for this mission was a single game, Shadow of Mordor, which I chose for several reasons. For one, it's pretty widely regarded as one of the most VRAM-hungry games around right now. I installed the free HD assets pack available for it and cranked up all of the image quality settings in order to consume as much video memory as possible. Mordor has a built-in benchmark that allowed me to test at multiple resolutions in repeatable fashion with ease. The results won't be as fine-grained as those from our frame-time-based game tests, but a big drop in the FPS average should still serve as a clear indicator of a memory capacity problem.

Crucially, Mordor also has a nifty feature that will let us push these video cards to their breaking points. The game's settings allow one to choose a much higher virtual resolution than the native resolution of the attached display. The game renders everything at this higher virtual resolution and then downsamples the output to the display's native res, much like Nvidia's DSR and AMD's VSR features. Downsampling is basically just a form of full-scene anti-aliasing, and it can produce some dramatic improvements in image quality.

Using Mordor's settings menus, I was able to test at 2560x1440, 3840x2160 (aka 4K) and the higher virtual resolutions of 5760x3240 and 7680x4320. That last one is a staggering 33 megapixels, well beyond the pixel count of even a triple-4K monitor setup. I figured pushing that far should be enough to tease out any memory capacity limitations.
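
For a sense of scale, the back-of-the-envelope math on those resolutions looks like this. The per-buffer figures are only illustrative; a modern game keeps many render targets, shadow maps, and G-buffer layers alive at once, on top of all its textures, so the real VRAM pressure is far higher than a single color buffer.

    # Pixel counts and single-buffer sizes for the tested resolutions.
    # Real games hold many buffers of these sizes (plus textures), so
    # this only hints at the total memory pressure.
    BYTES_PER_PIXEL = 4          # a 32-bit RGBA color buffer

    for w, h in [(2560, 1440), (3840, 2160), (5760, 3240), (7680, 4320)]:
        pixels = w * h
        buffer_mb = pixels * BYTES_PER_PIXEL / 2**20
        print(f"{w}x{h}: {pixels / 1e6:5.1f} Mpixels, "
              f"~{buffer_mb:.0f} MB per 32-bit buffer")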

My first two victims were the Radeon R9 290X 4GB and the Radeon R9 390X 8GB. Both cards are based on the same AMD Hawaii GPU, and they have similar clock frequencies. The 390X has a 20MHz faster base clock and a tweaked PowerTune algorithm that could give it somewhat higher clock speeds in regular operation. It also has a somewhat higher memory clock. These differences are relatively modest in the grand scheme, and they shouldn't be a problem for our purposes. What we're looking for is relative performance scaling. Where does the 4GB card's performance fail to scale up as well as the 8GB card's?

The 290X's 4GB of memory doesn't put it at a relative disadvantage at 4K, but the cracks start to show at 5760x3240, where the gap between the two cards grows to four FPS. At 7680x4320, the 4GB card is clearly struggling, and the deficit widens to eight FPS. So we can see the impact of the 390X's added VRAM if we push hard enough.

From a purely practical standpoint, these performance differences don't really matter much. With FPS averages of 16 and 20 FPS, respectively, neither the 290X nor the 390X produces playable frame rates at 5760x3240, and the highest resolution is a slideshow on both cards.

What about the Radeon R9 Fury X, with its faster Fiji GPU paired with only 4GB of HBM-type VRAM?

The Fury X handles 3840x2160 without issue, but its performance drops off enough at 5760x3240 that it's slightly slower than the 390X. The Fury X falls further behind the 390X at 33 megapixels, despite the fact that the Fury X has substantially more memory bandwidth thanks to HBM. Almost surely, the Fury X is bumping up against a memory capacity limitation at the two higher resolutions.

What about the GeForce side of things, you ask? Here it all is in one graph, from the GTX 970 to the Titan X 12GB.

Hmph. There's essentially no difference between the performance of the GTX 980 Ti 6GB and the Titan X 12GB, even at the very highest resolution we can test. Looks like 6GB is sufficient for this workload. Heck, look closer, and the GTX 980's performance scales very similarly even though it only has 4GB of VRAM.

The only GeForce card whose performance doesn't follow the trend is the GTX 970, whose memory capacity and bandwidth are both, well, kind of weird due to a 3.5GB/0.5GB split in which the 0.5GB partition is much slower to access. We covered the details of this peculiar setup here. The GTX 970 appears to suffer a larger-than-expected performance drop-off at 5760x3240, likely due to its funky VRAM setup.

Now that we've seen the results from both camps, have a look at this match-up between the R9 Fury X and a couple of GeForces.

For whatever reason, a 4GB memory capacity limit appears to create more problems for the Fury X than it does for the GTX 980. As a result, the GTX 980 matches the performance of the much pricier Fury X at 5760x3240 and outdoes it at 33 megapixels.

We've seen this kind of thing before—in the only results from our Radeon R9 Fury review that showed a definitive difference between the 4GB and 8GB Radeons. The Radeons with 4GB had some frame time hiccups in Far Cry 4 at 4K that the 8GB models avoided:

As you can see, the 8GB Radeons avoid these frame-time spikes above 50 ms. So do all of the GeForces. Even the GeForce GTX 780 Ti with 3GB manages to sidestep this problem.

Why do the 4GB Radeons suffer when GeForce cards with 4GB don't? The answer probably comes down to the way GPU memory is managed in the graphics driver software. Quite possibly, AMD could improve the performance of the 4GB Radeons in both Mordor and Far Cry 4 with a change to the way it manages video memory.

There is one other factor to consider. Have a look at the results of this bandwidth test from our Fury X review. This test runs two ways: using a black texture that's easily compressible, and using a randomly colored texture that can't be compressed. The delta between these two scores tells us how effective the GPU's color compression scheme is.

As you can see, the color compression in Nvidia's Maxwell chips looks to be quite a bit more effective than the compression in Fury X. The Fury X still has a tremendous amount of memory bandwidth, of course, but we're more concerned about capacity. Assuming these GPUs store compressed data in a packed format that saves capacity as well as bandwidth, it's possible the Maxwell GPUs could be getting more out of each megabyte by using stronger compression.
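
If you want to turn the two scores from that test into a single figure of merit, the arithmetic is as simple as it gets. The numbers below are placeholders, not our measurements; plug in the measured bandwidth from each pass.

    # Effective bandwidth gain from color compression (sketch).
    # Replace the placeholders with the measured GB/s from the two passes.
    bw_random = 1.0   # placeholder: incompressible (random-texture) result
    bw_black = 1.5    # placeholder: easily compressible (black-texture) result

    gain = bw_black / bw_random
    print(f"compression delivers {gain:.2f}x the effective bandwidth here")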

So that's interesting.

Of course, much of what we've just demonstrated about memory capacity constraints is kind of academic for reasons we've noted. On a practical level, these results match what we saw in our initial reviews of the R9 Fury and Fury X: at resolutions of 4K and below, cards with 4GB of video memory can generally get by just fine, even with relatively high image quality settings. Similarly, the GeForce GTX 970 seems to handle 4K gaming quite well in spite of its funky partitioned memory. Meanwhile, at higher resolutions, no current single-GPU graphics card is fast enough for fluid gaming, no matter how much memory it might have. Even with 12GB, the Titan X averages less than 30 FPS in Shadow of Mordor at 5760x3240.

We'll have to see how this memory capacity story plays out over time. The 4GB Radeon Fury cards appear to be close enough to the edge—with a measurable problem in Far Cry 4 at 4K—to cause some worry about slightly more difficult cases we haven't tested, like 5K monitors, for example, or triple-4K setups. Multi-GPU schemes also impose some memory capacity overhead that could cause problems in places where single-GPU Radeons might not struggle. The biggest concern, though, is future games that simply require more memory due to the use of higher-quality textures and other assets. AMD has a bit of a challenge to manage, and it will likely need to tune its driver software carefully during the Fury's lifetime in order to prevent occasional issues. Here's hoping that work is effective.

211 comments — Last by PancersCloud at 9:53 AM on 10/08/15

Is FCAT more accurate than Fraps for frame time measurements?
— 11:36 AM on July 22, 2015

Here's a geeky question we got in response to one of our discussions in the latest episode of the podcast that deserves a solid answer. It has to do with our Inside the Second methods for measuring video game performance using frame times, as demonstrated in our Radeon R9 Fury review. Specifically, it refers to the software tool Fraps versus the FCAT tools that analyze video output.

TR reader TheRealSintel asks:

On the FRAPS/frametime discussion, I remember during the whole FCAT introduction that FRAPS was not ideal; I also heard some vendors' performance can take a dive when FRAPS is enabled, etc.

I actually assumed the frametimes in each review were captured using FCAT instead of FRAPS.

When you guys introduce a new game to test, do you ever measure the difference between in-game reporting, FCAT and FRAPS?

I answered him in the comments, but I figure this answer is worth promoting to a blog entry. Here's my response:

There's a pretty widespread assumption at other sites that FCAT data is "better" since it comes from later in the frame production process, and some folks like to say Fraps is less "accurate" as a result. I dispute those notions. Fraps and FCAT are both accurate for what they measure; they just measure different points in the frame production process.

It's quite possible that Fraps data is a better indication of animation smoothness than FCAT data. For instance, a smooth line in an FCAT frame time distribution wouldn't lead to smooth animation if the game engine's internal simulation timing doesn't match well with how frames are being delivered to the display. The simulation's timing determines the *content* of the frames being produced, and you must match the sim timing to the display timing to produce optimally fluid animation. Even "perfect" delivery of the frames to the display will look awful if the visual information in those frames is out of sync.

What we do now for single-GPU reviews is use Fraps data (or in-engine data for a few games) and filter the Fraps results with a three-frame moving average. This filter accounts for the effects of the three-frame submission queue in Direct3D, which can allow games to tolerate some amount of "slop" in frame submission timing. With this filter applied, any big spikes you see in the frame time distribution are likely to carry through to the display and show up in FCAT data. In fact, this filtered Fraps data generally looks almost identical to FCAT results for single-GPU configs. I'm confident it's as good as FCAT data for single-GPU testing.
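
For the curious, the filtering step looks roughly like this. It's a sketch of the idea, not our actual analysis script.

    # Three-frame moving average over a list of frame times in ms: the
    # filtering step described above. A sketch, not TR's actual script.
    def three_frame_average(frame_times):
        smoothed = []
        for i in range(len(frame_times)):
            window = frame_times[max(0, i - 2):i + 1]   # current frame plus up to two prior
            smoothed.append(sum(window) / len(window))
        return smoothed

    # One slow frame sandwiched between fast ones gets damped, which mirrors
    # how the three-deep submission queue can absorb some slop.
    print(three_frame_average([16.7, 16.7, 40.0, 16.7, 16.7]))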

For multi-GPU configs, things become more complicated because frame metering/pacing comes into the picture. In that case, Fraps and FCAT may look rather different. That said, a smooth FCAT line with multi-GPU is not a guarantee of smooth animation alone. Frame metering only works well when the game advances its simulation time using a moving average or a fixed cadence. If the game just uses the wall clock for the current frame, then metering can be a detriment. And from what I gather, game engines vary on this point.

(Heck, the best behavior for game engine timing for SLI and CrossFire—advancing the timing using a moving average or fixed cadence—is probably the opposite of what you'd want to do for a variable-refresh display with G-Sync or FreeSync.)

That's why we've been generally wary of AFR-based multi-GPU and why we've provided video captures for some mGPU reviews. See here.

At the end of the day, a strong correlation between Fraps and FCAT data would be a better indication of smooth in-game animation than either indicator alone, but capturing that data and quantitatively correlating it is a pain in the rear and a lot of work. No one seems to be doing that (yet?!).
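
To give a flavor of what that correlation would involve: once you had the two captures aligned frame-for-frame, which is the genuinely hard part, the math itself is nearly a one-liner. A hypothetical sketch with placeholder data:

    # Hypothetical sketch: correlating Fraps-style and FCAT-style frame
    # times. Aligning the two captures frame-for-frame (the hard part)
    # is assumed to be done already; these arrays are placeholders.
    import numpy as np

    fraps_ms = np.array([16.7, 16.9, 33.1, 16.5, 16.8])   # placeholder data
    fcat_ms = np.array([16.6, 17.0, 32.8, 16.9, 16.7])    # placeholder data

    r = np.corrcoef(fraps_ms, fcat_ms)[0, 1]
    print(f"Pearson correlation between the two captures: {r:.3f}")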

Even further at the end of the day, all of the slop in the pipeline between the game's simulation and the final display is less of a big deal than you might think so long as the frame times are generally low. That's why we concentrate on frame times above all, and I'm happy to sample at the point in the process that Fraps does in order to measure frame-to-frame intervals.

I should also mention: I don't believe the presence of the Fraps overlay presents any more of a performance problem than the presence of the FCAT overlay when running a game. The two things work pretty much the same way, and years of experience with Fraps tells me its performance impact is minimal.

Here's hoping that answer helps. This is tricky stuff. There are also the very practical challenges involved in FCAT use, like the inability to handle single-tile 4K properly and the huge amount of data generated, that make it more trouble than it's worth for single-GPU testing. I think both tools have their place, as does the in-engine frame time info we get from games like BF4.

In fact, the ideal combination of game testing tools would be: 1) in-engine frame time recordings that reflect the game's simulation time combined with 2) a software API from the GPU makers that reflects the flip time for frames at the display. (The API would eliminate the need for fussy video capture hardware.) I might add: 3) a per-frame identification key that would let us track when the frames produced in the game engine are actually hitting the display, so we can correlate directly.

For what it's worth, I have asked the GPU makers for the API mentioned in item 2, but they'd have to agree on something in common in order for that idea to work. So far, nobody has made it a priority.

46 comments — Last by Melvar at 6:27 PM on 07/28/15

Reconsidering the overall index in our Radeon R9 Fury review
— 12:20 PM on July 13, 2015

I've been pretty active over the weekend responding to questions in the comments section for our Radeon R9 Fury review.

As you may know, our value scatter plot puts the R9 Fury just behind the GeForce GTX 980 in our overall index of average FPS scores across our test suite. Some of you have expressed surprise at this outcome given the numbers you've seen in other reviews, and others have zeroed in on our inclusion of Project Cars as a potential problem, since that game runs noticeably better on GeForces than Radeons for whatever reason.

I've explained in the comments that we use a geometric mean to calculate our overall performance score rather than a simple average specifically so that outliers—that is, games that behave very differently from most others—won't have too big an impact. That said, the geomean doesn't always filter outlier results as effectively as one might wish. A really skewed single result can have a noticeable impact on the final average. For that reason, in the rush to prepare my Fury review, I briefly looked at the impact of excluding Project Cars as a component of the overall score. My recollection is that it didn't seem to matter much.
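
To illustrate why the geomean is the right starting point, and also why it isn't bulletproof, here's a toy comparison with made-up FPS numbers rather than our test data:

    # Toy illustration (made-up numbers, not our test data) of how a
    # geometric mean damps an outlier better than a simple average
    # without eliminating its influence entirely.
    from statistics import geometric_mean, mean

    typical_games = [60, 55, 70, 65]     # hypothetical FPS results
    outlier_game = 140                   # one game that runs wildly better

    scores = typical_games + [outlier_game]
    print(f"arithmetic mean: {mean(scores):.1f} FPS")            # pulled up hard
    print(f"geometric mean: {geometric_mean(scores):.1f} FPS")   # damped, but still shifted
    print(f"geomean without the outlier: {geometric_mean(typical_games):.1f} FPS")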

However, prompted by your questions, I went back to the numbers this morning and poked around some. Turns out the impact of that change may be worthy of note. With Cars out of the picture, the overall FPS average for the R9 Fury drops by 1.2 FPS and the score for the GeForce GTX 980 drops by 2.8 FPS. The net result shifts from a 0.6-FPS margin of victory for the GTX 980 to a win for the R9 Fury by a margin of 1.1 FPS.

Things are really close. This is why I said in my analysis: "That's essentially a tie, folks."

But I know some of you place a lot of weight on the race to achieve the highest FPS averages. I also think the requests to exclude Project Cars results from the index are sensible given how different they are from everything else. So here is the original FPS value scatter plot:

And here's the revised FPS-per-dollar scatter plot without the Cars component.

Some folks will take solace in this symbolic victory for AMD in terms of overall FPS averages. Do note that the price-performance landscape isn't substantially altered by this shift on the Y axis, though.

We have long championed better metrics for measuring gaming smoothness, and our 99th-percentile FPS plot is also altered by the removal of Cars from the results. I think this result is a much more reliable indicator of delivered performance in games than an FPS average. Here's the original one:

And here it is without Project Cars:

The picture shifts again with Cars out of the mix—and in a favorable direction for the Radeons—yet the R9 Fury and Fury X still trail the less expensive GeForce GTX 980 in terms of general animation smoothness. I believe this result is much more notable to PC gamers who want to understand the real-world performance of these products. AMD still has work to do in order to ensure better experiences for Radeon buyers in everyday gaming.

Then there's the power consumption picture, which looks like so:

I didn't have time to include this plot in the review, although all of the data are there in other forms. I think it's a helpful reminder of another dynamic at play when you're choosing among these cards.

At the end of the day, I think the Cars-free value scatter plots are probably a more faithful reflection of the overall performance picture than our original ones, so I'm going to update the final page of our Fury review with the revised plots. I've looked over the text that will need to change given the shifts in the plot positions. The required edits amount to just a few words, since the revised scores don't change anything substantial in our assessments of these products.

Still, it's always our intention to provide a clear sense of the overall picture in our reviews. In this case, I'm happy to make a change in light of some reader concerns.

159 comments — Last by gigafinger at 8:07 AM on 07/23/15

Time Warner slings free Maxx upgrades to counter Google Fiber
— 1:35 PM on May 21, 2015

I've been chronicling the slow progress of Google Fiber moving into my metro area, my city, and eventually, into my house. Since Google Fiber started building in the Kansas City area, a funny thing has happened: competition. Even before the Google announcement, we had the option of AT&T U-Verse or Time Warner Cable in my neighborhood. Then Google did its thing, and AT&T later announced the rollout of its own fiber product in parts of the metro. Meanwhile, my incumbent cable provider, Time Warner, has raised the speeds of our cable Internet service several times at no extra charge.

I get the sense that we're pretty fortunate around here, all things considered, compared to a lot of areas in the U.S. One thing we have that many others don't is a real set of options.

Anyhow, I mentioned the other day that the timeline for Google Fiber service turn-ups in my neighborhood is disappointingly slow, even though the fiber's already in the ground. The wait for 1000Mbps up- and downstream was gonna be pretty rough at a continued pace of 50Mbps down and a pokey 5Mbps up.

Happily, we got a notice in the mail (yes, via snail mail) the other day from Time Warner telling us about yet another speed increase at no cost. This is part of TWC's new Maxx service offering. The "standard" service tier jumps from 15Mbps down/1Mbps up to 50/5. Our "Extreme" package rises from 50/5 to 200/20. And the fastest package goes from 100/5 to 300/20.

Not bad, really. And the change was apparently active. I ran down to my office and did a quick speed test, and sure enough, performance was up. Downstream reached about 110Mbps, and upstream hit about 11Mbps. We have a relatively new modem, from the last couple of years, but the notice said we might need to swap it out for a newer one to reach the full rates. I quickly hopped online and ordered a swap kit, which TWC promised to send out to my house free of charge.

That was on Friday. Then, on Sunday, our Internet service simply stopped working. From what I could tell after some poking and prodding, our home router was fine, and our modem was synced up to the cable network fine. It just wouldn't pass packets. What followed was a weird combination of good and bad.

Somehow, I found TWC's customer service account on Twitter and decided to see if there was an outage in my area. They were incredibly quick to reply and ask me for more info about my TWC account. I provided it, and they soon informed me that my modem had been quarantined in order to alert me that I needed to upgrade it to get the full speeds available to me.

Yes, they straight took down my service to let me know that I needed to order a modem I'd already ordered.

If only we had... information technology that would allow companies to target only appropriate customers with these messages. If only other forms of communication existed than a total service shut-off. If only... wow.

Anyhow, the Twitter rep took my modem out of quarantine and explained that most users should see a web-based message about the reason for the quarantine—along with a form to order a new modem and a means of getting the current one out of quarantine. It's just that "some routers" block that message. My excellent Asus AC2400 router was one that did, it seems, likely due to good security design.

Again, wow. I think competition has made TWC aggressive without really making them customer-focused. I suppose it's a start.

Regardless, my new modem arrived yesterday and I installed it. The process was a little clumsy, but I muddled through. The end result was a full realization of our new service speeds. A quick speed test tells me I can reach 216Mbps downstream and 21Mbps upstream, just a little better than the advertised rates.

Man, four times our old upstream and downstream speeds is gonna make the wait for Google Fiber much easier. Heck, I'm not sure how many servers out there really sling bits to consumers at 200Mbps—other than, you know, Steam. Maybe other folks with fast connections can enlighten us about that. My sense is that, for purposes that don't involve upstream transmissions, what we have now may not differ much in practical terms from fiber-based Internet services. Didn't happen how I expected, but I'm pleased to see it.

77 comments — Last by redavni at 6:44 PM on 06/02/15

Thanksgiving offers a perfect chance to crack open a busted iPad Air
— 11:05 AM on December 1, 2014

Phew. I needed that break. I was able to take off the latter half of last week and this past weekend to spend some time with my family, and it was refreshing to get away. Thanks to Geoff and Cyril, with their alternative Canadian Thanksgiving ways, for keeping the site going.

Because I'm partially crazy, I couldn't just relax during my time away, of course. I took it upon myself to attempt a computer hardware repair. And since doing things that are, you know, sensible isn't a requirement for the halfway insane, I decided to replace the cracked glass digitizer on my brother's iPad Air. Any old chump can fix a busted PC, but only the truly elite hax0rs can tackle hardware maintenance for devices that have been designed with active malice toward the technician.

My preparation for this feat was asking my brother to order a replacement digitizer for his iPad Air and watching the first few minutes of an instructional video on the operation before losing interest. I figured, eh, it's all about glue and guitar picks.

Don't get me wrong. It is all about glue and guitar picks, but the YouTube videos lie. They show operations being performed by competent, experienced people whose hands know what to do in each situation. I am not that person, which is a very relevant difference once you get knee deep into one of these operations.

The other way most of the YouTube videos lie is that they show somebody removing a completely whole, unsullied piece of glass from the front of a device. That was not my fate. The screen on my brother's Air had cracks running clear across its surface, combined with shattered areas covered by spiderwebs of tiny glass shards.

The replacement screen came with a little repair kit, including a guitar pick, mini-screwdrivers, a suction cup, and several plastic pry tools. I used a hair dryer to heat the adhesive around the glass pane, pulled up on the glass with the suction cup, pried under it in one spot with the tiny screwdriver, and slipped a guitar pick into the adhesive layer. Sounds simple, but just getting this start took a lot of trial and error.

I soon discovered two important truths. One, I needed about five more guitar picks to keep the areas where I'd separated the adhesive from re-sealing. I had only the one—and we were away from home, at a little rental house thing, for the holiday. Two, getting a cracked screen to separate from the adhesive is a huge pain in the rear. Suction cups don't stick to cracked glass.

Here's what I eventually pulled free from the chassis, after over an hour's hard work with a bad feeling in the pit of my stomach.

Notice the spiderwebbed section sticking up. Yes, I literally peeled the glass free from its adhesive backing. Not pictured are the hundreds of tiny glass shards that shattered and fell out during the process, all over me and into the iPad chassis. The minuscule shards practically coated the surface of the naked LCD panel beneath the glass, while others worked their way into my fingertips. The pain was one thing, but worse, I was pretty sure at this point that I'd ruined the LCD panel in my brother's tablet.

Notice that some sections of the screen around the edges are not in the picture above. They didn't break free when I removed the rest of the digitizer, so I had to scrape those shards off of their adhesive backing separately.

Also notice that the busted digitizer doesn't have a home button or a plastic frame around the pinhole camera opening up top. Most of them do, and this one did when I first removed it from the iPad. However, the replacement digitizer we ordered bafflingly didn't come with a home button or pinhole frame included. It did in the YouTube videos, but surprise! You get to do this on hard mode.

The home button bracket seemed like it was practically welded on there. And remember: we didn't have any spare adhesive or glue or anything.

After nearly giving up in despair, I found another YouTube video showing this specific operation in some detail. The dude in it used tools I didn't have, but what the heck. After heating the home button area with the hair dryer, I pulled out my pocket knife and went for it. I proceeded to separate the home button, its paper-thin ribbon connection, and the surrounding metal bracket from the busted digitizer. Somehow, I managed to keep enough adhesive on the bracket to allow it to attach to the new screen. The button happily clicked without feeling loose. This success massively exceeded my expectations.

Once I'd crested that hill, I came face to face with that perfect Retina LCD coated with glass dust. Frankly, I'd been trying to bracket off my worries about that part of the operation, or I wouldn't have been able to continue. After lots of blowing on the gummy surface of the LCD panel, I decided what I needed to deal with the remaining glass shards and fingerprints was a microfiber cloth. Lint from cotton would be disastrous. Shortly, my brother went out to his truck and returned with a nasty, dirt-covered microfiber cloth that was pretty much our only option. A couple of the corners were less obviously soiled, so I used them lovingly to brush, rub, and polish the surface of the LCD panel. Several spots where I concentrated my efforts just grew into larger and larger soiled areas. My brother stood looking nervously over my shoulder, asking worried questions about the state of things. However, after rotating the cloth and giving it some time and gentle effort, I was somehow able to dispel the oily patches almost entirely.

From here, it was all downhill, right? I attached each of the miniature ribbon connectors and, before reassembling the tablet, turned it on for a quick test. To my great relief and pleasure, the LCD worked perfectly, with no dead pixels or obvious damage of any kind. And the touchscreen digitizer responded perfectly to my input, even though it wasn't yet layered atop the LCD. It was good to go.

The next step was the tedious process of placing the pre-cut 3M adhesive strips along the edges of the iPad chassis. Somehow, I managed to do this without folding over the glue strips and having them stick to themselves. Really not something I expected to pull off cleanly.

Pictured above is the open iPad with the new digitizer attached. You can see the adhesive strips around the edges of the chassis with the backing still on one side. My bandaged fingers are holding up the LCD panel, and the big, black rectangles you see are the iPad's batteries. The device's motherboard sits under the metal shield just above the batteries. It's a little larger than a stick of gum. I stopped to take a picture at this point mostly because my stress level was finally low enough for me to remember to do so.

With only a little remaining struggle, I was able to re-seat the LCD panel and secure it, remove the adhesive backing, flip over the new digitizer, and push it firmly into place atop the new adhesive layer. After a little clean-up, my brother's iPad Air looked as good as new.

Three hours after my journey began, I turned on the repaired iPad. It booted up quickly. The LCD looked perfect. The home button was clicky and solid. And I swiped to log in.

Didn't take.

I swiped again, a few times, and I was able to log in. And then... the thing went crazy. Phantom touches everywhere ran apps, activated UI buttons, and began typing gobbledygook messages. The touchscreen was completely hosed.

Utter defeat. What followed isn't something I'd like to share on the Internet. Suffice to say that I'm a grown man, and grown men shouldn't act like that.

Initially, I blamed myself for messing up the repair with my clumsiness. I figured I must have ruined a ribbon connector or something. Hours later, after I'd gotten some distance from the whole thing, I poked around online and came to a different conclusion. You see, the original adhesive layer I removed from the iPad was essentially a felt lining with sticky stuff on both sides. The repair kit, however, came only with a thin layer of adhesive, with no insulator. I'm now 99% certain that the touchscreen's problems were caused by making electrical contact with the iPad's aluminum chassis. Others have run into the same issue, looks like.

I may never know for sure. My brother took the iPad back to his home after Thanksgiving and will be paying a repair shop to fix it. I dunno whether they'll offer any feedback about what happened.

Meanwhile, I suppose I got a little bit more experience doing repair work on mobile devices. So far, I've learned two things. First, I can do this. It just takes more of the same patience, precision, and self-imposed calm that working on larger computer systems requires. And a few initial victims, like my daughter's Nintendo DS, my mother-in-law's cell phone, my old laptop, and my brother's iPad Air.

Hey, they were broken anyway.

Second, it takes a special sort of person to do this stuff for fun. I am probably not that sort of person—and I'm okay with that.

Besides, next time I'll have a proper heat gun, more guitar picks, and some insulating tape.

50 comments — Last by epicmadness at 12:45 AM on 12/11/14

Finally Light Bulb's Tesla tech gives LEDs a worthy rival
— 12:37 AM on November 20, 2014

Ever since I improbably started blogging occasionally about light bulbs, I've been waiting impatiently to get a look at the first product from The Finally Light Bulb Company. This start-up company from Cambridge, Massachusetts has decided to bring a Tesla-era lighting technology into the consumer space.

The tech is known as induction or electrodeless lighting. Induction tech is pretty closely related to fluorescent lighting: a magnetic field excites gases in an enclosed tube. Those gases generate UV light, which strikes the phosphor coating on the tube, causing it to glow. (I'm probably butchering the details, so go here for more info.) Induction lighting has been used for years in industrial and commercial settings, where its reliability and efficiency are appealing, but the fixtures have been much too large for use in the home. The folks at Finally have worked to miniaturize induction lighting radically, so an entire assembly will fit into the space of a conventional A19 light bulb.

Finally calls its miniaturized version of induction lighting "acandescent technology" in an obvious play on "incandescent"—and a tip of the hat to the firm's goal, which is to replicate the warm, welcoming light of an incandescent bulb with very few compromises.

Now, I have almost no specific details about how Finally's implementation of inductive lighting works. All I have is presumably a finished product packaged neatly in retail garb. Heck, I'm not entirely sure why I have this bulb apparently before just about anyone else. Probably they sent me one since I kept bugging them about it.

That said, I suspect Finally may have deployed a couple of important tools in pursuit of their goal. One such tool could be a very fast cycle time. Old-school fluorescents cycle at 60Hz, and I believe CFLs generally run at 2kHz. Some induction lights cycle as quickly as two and a half megahertz. Finally may have chosen a relatively high operating frequency in order to ensure solid, steady illumination. Also, Finally was undoubtedly very particular when selecting the mix of phosphors to use, since those determine the spectrum of light emitted by the bulb.

The yellow-striped Finally bulb next to a Cree 4Flow and a conventional 60W incandescent

Whatever else is going on, there's no question that Finally's miniaturization efforts have succeeded. The payoff is a bulb whose shape closely mimics the teardrop profile of a traditional 60W incandescent.

The rest of the Finally Bulb's specs are competitive with the incumbent LED offerings, as well. It generates 800 lumens of light output using only 14.5W, just a touch above the 13.5W power consumption of Cree's TW-Series LED. The bulb's color temperature is rated at 2700K, the same as other "soft white" bulbs, and its $9.99 list price is in the neighborhood of the best LEDs, even if it is a couple of bucks higher than Finally initially projected. The bulb is EPA rated for 13.7 years of operation at three hours per day, which Finally backs with a 10-year limited warranty.
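
Run the quick math on those specs, and the efficiency picture looks like this. It's a simple back-of-the-envelope comparison; the incandescent figure assumes the usual 800 lumens or so from a standard 60W bulb.

    # Back-of-the-envelope efficacy and lifetime math for the specs above.
    # The 60W incandescent is assumed to produce roughly 800 lumens.
    finally_lm, finally_w = 800, 14.5
    incand_lm, incand_w = 800, 60.0

    print(f"Finally bulb: {finally_lm / finally_w:.0f} lm/W")      # about 55 lm/W
    print(f"60W incandescent: {incand_lm / incand_w:.0f} lm/W")    # about 13 lm/W

    # The EPA lifetime rating assumes three hours of use per day.
    hours = 13.7 * 365 * 3
    print(f"rated lifetime: about {hours:,.0f} hours")             # roughly 15,000 hours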

This bulb can go places some LEDs can't, too. It's rated for use in damp environments like bathrooms (though not in direct contact with water), and it can also be used in enclosed fixtures. For most intents and purposes, the Finally bulb can be used just like an incandescent. There is one place where it falls a bit short: it's not compatible with dimmer switches. Finally has said that future "acandescent" bulbs could be made to work with dimmers, but this first product doesn't go there.

The biggest question, of course, is about the quality of the illumination it produces. Finally makes a big claim about how its bulb reproduces that familiar, warm incandescent glow: "Finally, it is the same." That's a tall order since even the best LEDs don't measure up to the full-spectrum illumination produced by incandescent lights.

The Finally bulb's spec sheet says it has a color rendering index (CRI) of 83. That's short of the perfect 100 produced by incandescent bulbs, but it surpasses the 80 rating of the excellent Cree 60W Soft White LED. (Cree's TW Series claims a CRI of 93.) That said, CRI is an imperfect measure, so I wouldn't get too hung up on those numbers.

When I installed the Finally bulb in a lamp and flipped the switch, I was greeted with a bit of a surprise. The product's packaging says it's "instant on and instant re-start," but that summation misses an important reality. The bulb does light up immediately when you flip the power switch, but it only begins at about 50% of peak brightness. The light then ramps up to full brightness over the course of the next five or six seconds, gradually enough that the change in luminance is easy to observe. The ramp-up is faster than any CFL I've ever seen, but it doesn't match the immediacy of LEDs or incandescents.

In fact, it's hard to tell for sure, but I suspect the Finally bulb may not reach its absolute peak brightness until several minutes have passed. If I'm right about that, though, the effect is pretty subtle.

Get past that one quirk, and the rest of the story is quite good. As you can probably tell from the picture above, the bulb offers pretty much perfect omnidirectional light distribution, with none of the challenges LEDs sometimes face on this front.

The illumination from the Finally bulb is, as promised, warm and inviting. In my view, it's easily superior to any CFL. Each one of my poor friends and family members who I've accosted for an opinion has agreed with that assessment without reservation. The difference is not hard to see.

Stare at a room lit by this bulb a little longer, and you'll notice something unexpected: the light it produces is noticeably pink in tone. If you've experimented with CFLs and LEDs, you may have noticed that not every 2700K light source produces the same mix of colors. Many CFLs tend to be predominantly green, and they can cast a sickly pallor across a living space. LEDs aren't quite so skewed, but they tend to be relatively yellow in tone.

Finally appears to have chosen a phosphor mix that emphasizes red. That's an intriguing aesthetic choice. The rosy pink light from this bulb runs counter to the cooler, flatter, and more antiseptic feel of many CFLs and even LEDs. This emphasis on the red portion of the spectrum makes the Finally bulb more appealing in certain ways. Wood tones appear deeper and more pronounced. Skin tones look healthier, too. I haven't yet combined three of them in the fixture above our kitchen table, but I suspect food presentation will be more pleasing, as well.

That said, the green walls of my bedroom take on more of a gray cast in this light, so it's not perfect. If you compare them side by side, the Finally bulb actually looks somewhat pinker than a 60W incandescent, kind of like GE's original Reveal bulbs with the neodymium coating. Not that there's anything wrong with that. (Happily, this product doesn't make the mistake of providing noticeably less illumination than a regular 60W bulb, either.)

Overall, I'd say the Finally bulb's light quality nearly rivals that of my favorite LED, Cree's 13.5W TW Series. I'm not sure I could say one is clearly superior to the other in every way. I do think the light from the TW Series is probably a little more balanced. If I were installing lamps in a room full of wood paneling, though, I'd pick the Finally bulb for that mission.

All in all, then, this is a spectacular start for an alternative lighting technology that's new to the consumer space—and an auspicious beginning for the young company that produced it. If you're into this stuff, you should grab one and try it out. The bulb is worth seeing in action, and you'll surely get some use out of it.

Unfortunately, I don't yet know where you can purchase one beyond the pre-order form on Finally's website. The firm hasn't yet announced a final availability date for its first product or a list of retailers that will carry it. I expect we'll be hearing more on that front soon. I may have to snag a few more of these bulbs for myself once they become available.

70 comments — Last by ThatStupidCat at 9:34 PM on 12/08/14