Sixteen years ago, we fired up a modest web server session and began posting news items about the latest in PC tech and gaming. Over time, that little fly-by-night endeavor grew to become something bigger and better than anything we could anticipate—a full-time job for a number of very sharp people and a publication that produced some of the finest in-depth articles and reviews in the PC hardware space. We were able to build something unique, something that hadn't existed yet in print magazines or elsewhere, a place where community interaction fed our own ambitions to provide smarter testing, imaginative writing, and instant accountability. I'm very proud of what The Tech Report has become, and I'm happy to have cataloged the incredible progress of an industry that has improbably made dreams come true for a generation of early PC enthusiasts. I'm especially pleased that we've been able to track that progress with an empirical approach to testing that attempts to capture a sense of the user experience.
Some months ago, I got a phone call from Raja Koduri, who heads up the newly formed Radeon Technologies Group at AMD. Raja asked me if I'd be interested in coming to work at AMD, to help implement the frame-time-based testing methods for game performance that I've championed here at TR. In talking with Raja, I came to see that he has a passion for doing things the right way, for creating better experiences for Radeon owners. He was offering me a unique opportunity to be a part of that effort, to move across organizational lines and help ensure that the Radeon Technologies Group creates the best possible experiences for gamers. AMD is a company facing some distinct challenges right now, but it's also loaded with potential—and the Radeon Technologies Group has a renewed vision and focus under Raja's leadership.
In the end, this opportunity was simply too good to pass up. Early in the new year, I will be joining AMD in my new role. As a result, I'll be stepping down as Editor-in-Chief of The Tech Report.
TR will continue, of course, under the able leadership of Jeff Kampman, who has been working with us for a year and a half and, as Managing Editor, has essentially been running the site for the past six months. Regular readers will already know that Jeff is a very capable writer and editor in the mold of our best staffers over the years. He will be assisted by a solid stable of writers, including Mark, Tony, and Bruno, all of whom have been providing excellent reviews and news in the TR tradition through the course of the past year. We are also looking to hire another full-time editor, as you may know, to help with my traditional coverage areas of CPUs and graphics. I believe strongly in The Tech Report's mission to provide honest, in-depth coverage of the personal computing space with style and insight. With your continued support, the site should go on fulfilling that mission for years to come. I'll be happy to provide what advice I can to Jeff and the rest of the TR staff from time to time, but given my new role in the industry, I won't have any input into TR's editorial choices going forward.
On a personal note, I'd like to take a moment to thank as many folks as I can for making the past sixteen years a possibility for me and the rest of the TR staff. First and foremost, the support we've gotten from our community of readers has been the key to everything. Without you all, I never could have had the privilege of testing and writing about the latest tech on a daily basis. Thank you for your moral and financial support, your interest, your patience with our faults, and yes, even your criticism. Serving you has been deeply rewarding.
Next, I want to thank a number of key TR staffers over the years, especially Geoff Gasior and Cyril Kowaliski, who put up with my brusque editorial critiques for way longer than anyone reasonably could have expected. Ronald Hanaki has posted our Shortbread links for ages without ever accepting a penny's worth of compensation, simply as a service to the community, and I can't thank him enough for his contribution. Jordan Drake served as our podcast host for seven years and somehow made a bunch of nerds sound conversationally competent. Our sales guy, Adam Eiberger, somehow managed to sell enough ads to keep multiple full-time editors employed even through the darkest of economic times. Steve Roylance built our custom content management system from the ground up and made it ridiculously quick. Bruno Ferreira extended it with the nifty Metal comments system and then built our distinctive pay-what-you-want subscription system, which I believe points to an important way forward for independent publishers. My lovely wife Stephanie quietly kept the books for TR all these years without ever drawing a paycheck, and she put up with my countless late nights on deadline and long days spent testing with a grace beyond measure. I'm missing people, because the list is too long, but thanks also to Sander Pilon, Steve Gibson, my co-founder Andy Brown, Jeff Atwood, and a host of other TR writers and supporters over the years.
Thanks also to the many companies who agreed to sponsor the site, especially long-time sponsors like Newegg, Asus, Corsair, Gigabyte, XFX, and OCZ. Many of these folks supported us even though our reviews often criticized their products. In a similar vein, thanks to all of the companies brave enough to send us hardware to review, despite knowing the risks they were taking. I'm also grateful for my fellow journalists who have acted as sounding boards and allies countless times. You're good folks.

Mixing power-line networking with Wi-Fi proves intoxicating
You may recall my failed attempt at using a second Wi-Fi router in repeater mode in order to overcome some signal-strength issues in the upper level of my house. I learned very quickly that compromising half (or more) of your wireless bandwidth in order to talk to a second router wirelessly isn't a very good trade-off for most clients. That's especially true in my case since our home cable modem service can reach up to 220 Mbps downstream, well beyond the delivered speeds we see out of our very nice wireless AC router, the Asus RT-AC87U.
The alternative to using a wireless router-to-router link is, obviously, some form of wired connection. Running Ethernet between the two routers and putting our older Asus RT-N66U into access point mode should allow us to have two sources of Wi-Fi signals at different spots in the house, both capable of full-speed communication with the Internet. But there's a big problem with that plan. Between the weirdness of our atrium-split floorplan and my own essential physical laziness, there was about zero chance that I'd actually run an Ethernet cable inside the walls anytime soon.
Fortunately, after my last post on this subject, some of you suggested trying a different sort of wired connection: power-line networking. A pair of power-line adapters will transfer data across your existing home electrical wiring. Although those sorts of products started out pretty poorly, they have apparently matured nicely in recent years. I was immediately intrigued by the idea and soon ordered a pair of these TP-Link adapters from Amazon for 70 bucks.
The idea was to put one adapter next to my main router and the other one next to the access-point router, with Ethernet connections going from each adapter to the adjacent router. The power-line network would then bridge between the two routers, hopefully providing a fast, reliable, low-latency connection.
Making it happen turned out to be a bit of an adventure, but not for the reasons you might expect.
When the power-line adapters arrived, I didn't mess around. I pulled them from the box, briefly glanced at the instructions and discarded them, and connected one adapter to my main router. Then I ran upstairs and plugged the other adapter into the wall socket in my bedroom and attached my laptop to it via Ethernet. I seriously didn't press any buttons or even look at any indicator lights on the little white wall-warts. Within seconds, I was pulling around 120 Mbps—maybe a little more—in a bandwidth test, with packet latency of 1-4 ms.
Man, that was easy.
Yes, the power-line adapter is rated for "up to 1200 Mbps," but I never expected to get practical speeds that fast. 120 Mbps is fast enough to outrun the Wi-Fi capabilities of most of the phones and tablets we use, and heck, I had the thing plugged into an outlet on an exterior wall that's as far away from the other adapter as possible within the house.
My next step was to connect the laptop to the RT-N66U and switch it from repeater mode into AP mode. Then I plugged the router's upstream port into the power-line adapter and fired everything up. Seemed like I was good to go, right?
What followed was a lot of disappointment, as I found that Wi-Fi clients on the RT-N66U only achieved about 60 Mbps on the 2.4GHz network and about 30 Mbps on 5GHz. What the heck? It seemed like things were no faster than before.
The process was more chaotic than I might care to admit, but my next steps involved a lot of A/B testing of various components of this network in order to track down the problem.
Moving the secondary power-line adapter to an outlet with a more central location in the house boosted Ethernet speeds to about 180 Mbps, with peaks near the 220-Mbps limit of our cable-modem service. My laptop, when directly connected to the power-line adapter, loved it. The location change also raised the speed of Wi-Fi clients on the RT-N66U to 35-38 Mbps on 5GHz and over 70 Mbps on 2.4GHz, but it wasn't exactly a breakthrough.
Was something wrong with my router, or did the combination of Wi-Fi plus power-line somehow not provide the stability needed to reach higher transfer rates?
Ultimately, I wound up sitting here in Damage Labs with the RT-N66U attached to a port on my GigE switch and configured with unique SSIDs on its 2.4GHz and 5GHz segments. Everything was as explicit as possible (maybe including my language). With my laptop five feet away, I could reach peaks of 80 Mbps on 2.4GHz and 40 Mbps on 5GHz, nothing more.
I knew the RT-N66U was capable of higher speeds, but it just wasn't delivering. Thinking there might be some bug in the latest Asus firmware update, I installed the alternative Merlin firmware to see if that would help, but speeds didn't improve.
More tweaking of Wi-Fi parameters and such was involved along the way. I'm condensing a lot of hazy frustration. But at one point while spelunking through the menus, I noticed some settings for WDS bridging that I couldn't alter while the router was in AP mode. It looked like, possibly, the router might still be configured to bridge to the AC87U over 5GHz Wi-Fi—a leftover from when I had the thing in repeater mode.
I wound up going nuclear and doing a factory reset on the router. Then, after a bit of configuration back into AP mode, a breakthrough: speeds well in excess of 80 Mbps on both the 2.4GHz and 5GHz bands.
The frigging router had been secretly stuck in WDS mode, and at least half of its 5GHz bandwidth had been reserved for wireless bridging. Ugh.
With that issue sorted, I gradually set everything back up exactly how I'd intended, testing periodically along the way. Ultimately, the RT-N66U was talking to the main network over the power-line adapters and broadcasting the same SSIDs as our main router. Clients could connect to it seamlessly. I made sure there was no overlap in Wi-Fi channel use. We now had reasonably solid 5GHz connectivity in every room of the house, with a no-doubt 2.4GHz signal as a backup.
I haven't done a ton of directed testing on the variability or reliability of the power-line link, but in regular use through the past few days, the new setup has been essentially flawless. I've offered my family the chance to complain several times each, but no one has noticed any hiccups. Periodic speed tests with phones and tablets have reached peaks around 75-80 Mbps over Wi-Fi on either router. The desktop PCs with Wi-Fi can range higher, to 180 Mbps or more. Everything more or less works like it did before, but the Wi-Fi dead spots are eliminated and, thanks to stronger signals, performance is up generally.
I do have one caveat about the power-line adapters, though. After I first plugged them in, I noticed some strange, subtle noises while sitting at my desk working. Eventually, I realized I was hearing interference caused by the power-line network doing its thing. Moving my speakers' power plug from the wall socket to the UPS resolved the problem, but it's possible we could encounter similar problems elsewhere over time.
Other than that concern and the havoc caused by the router issues, setting up the power-line networking stuff has been a huge win. Worth checking out if you need a fast, painless extension to the other side of the house.

How much video memory is enough?
One question we haven't answered decisively in our recent series of graphics card reviews is: how much video memory is enough? More pressingly given the 4GB limit for Radeon R9 Fury cards: how much is too little? Will a 4GB video card run into performance problems in current games, and if so, when?
In some ways, this question is harder to answer than one might expect. Some enthusiasts have taken to using monitoring tools in order to see how much video memory is in use while gaming, and that would seem to be a sensible route to understanding these matters. Trouble is, most of the available tools track video memory allocation at the operating system level, and that's not necessarily a good indicator of what's going on beneath the covers. In reality, the GPU driver decides how video memory is used in Direct3D games.
We might be able to approach this problem better by using vendor-specific development tools from AMD and Nvidia—and we may yet do so—but we can always fall back on the simplest thing: testing the hardware to see how it performs. We now have a number of video cards based on similar GPU architectures with different amounts of VRAM, from 4GB through 12GB. Why not run a quick test in order to get a sense of how different GPU memory configurations hold up under pressure?
My weapon of choice for this mission was a single game, Shadow of Mordor, which I chose for several reasons. For one, it's pretty widely regarded as one of the most VRAM-hungry games around right now. I installed the free HD assets pack available for it and cranked up all of the image quality settings in order to consume as much video memory as possible. Mordor has a built-in benchmark that allowed me to test at multiple resolutions in repeatable fashion with ease. The results won't be as fine-grained as those from our frame-time-based game tests, but a big drop in the FPS average should still serve as a clear indicator of a memory capacity problem.
Crucially, Mordor also has a nifty feature that will let us push these video cards to their breaking points. The game's settings allow one to choose a much higher virtual resolution than the native resolution of the attached display. The game renders everything at this higher virtual resolution and then downsamples the output to the display's native res, much like Nvidia's DSR and AMD's VSR features. Downsampling is basically just a form of full-scene anti-aliasing, and it can produce some dramatic improvements in image quality.
Using Mordor's settings menus, I was able to test at 2560x1440, 3840x2160 (aka 4K) and the higher virtual resolutions of 5760x3240 and 7680x4320. That last one is a staggering 33 megapixels, well beyond the pixel count of even a triple-4K monitor setup. I figured pushing that far should be enough to tease out any memory capacity limitations.
My first two victims were the Radeon R9 290X 4GB and the Radeon R9 390X 8GB. Both cards are based on the same AMD Hawaii GPU, and they have similar clock frequencies. The 390X has a 20MHz faster base clock and a tweaked PowerTune algorithm that could give it somewhat higher clock speeds in regular operation. It also has a somewhat higher memory clock. These differences are relatively modest in the grand scheme, and they shouldn't be a problem for our purposes. What we're looking for is relative performance scaling. Where does the 4GB card's performance fail to scale up as well as the 8GB card's?
The 290X's 4GB of memory doesn't put it at a relative disadvantage at 4K, but the cracks start to show at 5760x3240, where the gap between the two cards grows to four FPS. At 7680x4320, the 4GB card is clearly struggling, and the deficit widens to eight FPS. So we can see the impact of the 390X's added VRAM if we push hard enough.
From a purely practical standpoint, these performance differences don't really matter much. With averages of 16 and 20 FPS, respectively, neither the 290X nor the 390X produces playable frame rates at 5760x3240, and the highest resolution is a slideshow on both cards.
What about the Radeon R9 Fury X, with its faster Fiji GPU paired with only 4GB of HBM-type VRAM?
The Fury X handles 3840x2160 without issue, but its performance drops off enough at 5760x3240 that it's slightly slower than the 390X. The Fury X falls further behind the 390X at 33 megapixels, despite the fact that the Fury X has substantially more memory bandwidth thanks to HBM. Almost surely, the Fury X is bumping up against a memory capacity limitation at the two higher resolutions.
What about the GeForce side of things, you ask? Here it all is in one graph, from the GTX 970 to the Titan X 12GB.
Hmph. There's essentially no difference between the performance of the GTX 980 Ti 6GB and the Titan X 12GB, even at the very highest resolution we can test. Looks like 6GB is sufficient for this work. Heck, look closer, and the GTX 980's performance scales very similarly even though it only has 4GB of VRAM.
The only GeForce card whose performance doesn't follow the trend is the GTX 970, whose memory capacity and bandwidth are both, well, kind of weird due to a 3.5GB/0.5GB split in which the 0.5GB partition is much slower to access. We covered the details of this peculiar setup here. The GTX 970 appears to suffer a larger-than-expected performance drop-off at 5760x3240, likely due to its funky VRAM setup.
Now that we've seen the results from both camps, have a look at this match-up between the R9 Fury X and a couple of GeForces.
For whatever reason, a 4GB memory capacity limit appears to create more problems for the Fury X than it does for the GTX 980. As a result, the GTX 980 matches the performance of the much pricier Fury X at 5760x3240 and outdoes it at 33 megapixels.
We've seen this kind of thing before—in the only results from our Radeon R9 Fury review that showed a definitive difference between the 4GB and 8GB Radeons. The Radeons with 4GB had some frame time hiccups in Far Cry 4 at 4K that the 8GB models avoided:
As you can see, the 8GB Radeons avoid these frame-time spikes above 50 ms. So do all of the GeForces. Even the GeForce GTX 780 Ti with 3GB manages to sidestep this problem.
Why do the 4GB Radeons suffer when GeForce cards with 4GB don't? The answer probably comes down to the way GPU memory is managed in the graphics driver software, by and large. Quite possibly, AMD could improve the performance of the 4GB Radeons in both Mordor and Far Cry 4 with a change to the way it manages video memory.
There is one other factor to consider. Have a look at the results of this bandwidth test from our Fury X review. This test runs two ways: using a black texture that's easily compressible, and using a randomly colored texture that can't be compressed. The delta between these two scores tells us how effective the GPU's color compression scheme is.
As you can see, the color compression in Nvidia's Maxwell chips looks to be quite a bit more effective than the compression in Fury X. The Fury X still has a tremendous amount of memory bandwidth, of course, but we're more concerned about capacity. Assuming these GPUs store compressed data in a packed format that saves capacity as well as bandwidth, it's possible the Maxwell GPUs could be getting more out of each megabyte by using stronger compression.
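The compression-delta idea above can be reduced to simple arithmetic: divide the bandwidth achieved with the easily compressed (black) texture by the bandwidth achieved with the incompressible (random) texture. A minimal sketch follows; the function name and all of the numbers in it are hypothetical placeholders, not measured results from our review.

```python
# Sketch: estimating delta-color-compression effectiveness from a two-pass
# bandwidth test. All figures below are invented for illustration only.

def compression_gain(black_gbps: float, random_gbps: float) -> float:
    """Ratio of effective bandwidth with a compressible (black) texture
    to bandwidth with an incompressible (random) texture. A larger ratio
    suggests a more effective compression scheme."""
    return black_gbps / random_gbps

# Hypothetical readings from the two test passes:
maxwell_gain = compression_gain(black_gbps=260.0, random_gbps=150.0)
fiji_gain = compression_gain(black_gbps=390.0, random_gbps=330.0)

print(f"Maxwell-style gain: {maxwell_gain:.2f}x")
print(f"Fiji-style gain:    {fiji_gain:.2f}x")
```

A ratio near 1.0x would mean compression is buying the GPU essentially nothing; the further above 1.0x it lands, the more work the compression hardware is doing.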
So that's interesting.
Of course, much of what we've just demonstrated about memory capacity constraints is kind of academic for reasons we've noted. On a practical level, these results match what we saw in our initial reviews of the R9 Fury and Fury X: at resolutions of 4K and below, cards with 4GB of video memory can generally get by just fine, even with relatively high image quality settings. Similarly, the GeForce GTX 970 seems to handle 4K gaming quite well in spite of its funky partitioned memory. Meanwhile, at higher resolutions, no current single-GPU graphics card is fast enough for fluid gaming, no matter how much memory it might have. Even with 12GB, the Titan X averages less than 30 FPS in Shadow of Mordor at 5760x3240.
We'll have to see how this memory capacity story plays out over time. The 4GB Radeon Fury cards appear to be close enough to the edge—with a measurable problem in Far Cry 4 at 4K—to cause some worry about slightly more difficult cases we haven't tested, like 5K monitors, for example, or triple-4K setups. Multi-GPU schemes also impose some memory capacity overhead that could cause problems in places where single-GPU Radeons might not struggle. The biggest concern, though, is future games that simply require more memory due to the use of higher-quality textures and other assets. AMD has a bit of a challenge to manage, and it will likely need to tune its driver software carefully during the Fury's lifetime in order to prevent occasional issues. Here's hoping that work is effective.

Is FCAT more accurate than Fraps for frame time measurements?
Here's a geeky question we got in response to one of our discussions in the latest episode of the podcast that deserves a solid answer. It has to do with our Inside the Second methods for measuring video game performance using frame times, as demonstrated in our Radeon R9 Fury review. Specifically, it refers to the software tool Fraps versus the FCAT tools that analyze video output.
TR reader TheRealSintel asks:
On the FRAPS/frametime discussion, I remember during the whole FCAT introduction that FRAPS was not ideal, I also heard some vendors performance can take a dive when FRAPS is enabled, etc.
I actually assumed the frametimes in each review were captured using FCAT instead of FRAPS.
When you guys introduce a new game to test, do you ever measure the difference between in-game reporting, FCAT and FRAPS?
I answered him in the comments, but I figure this answer is worth promoting to a blog entry. Here's my response:
There's a pretty widespread assumption at other sites that FCAT data is "better" since it comes from later in the frame production process, and some folks like to say Fraps is less "accurate" as a result. I dispute those notions. Fraps and FCAT are both accurate for what they measure; they just measure different points in the frame production process.
It's quite possible that Fraps data is a better indication of animation smoothness than FCAT data. For instance, a smooth line in an FCAT frame time distribution wouldn't lead to smooth animation if the game engine's internal simulation timing doesn't match well with how frames are being delivered to the display. The simulation's timing determines the *content* of the frames being produced, and you must match the sim timing to the display timing to produce optimally fluid animation. Even "perfect" delivery of the frames to the display will look awful if the visual information in those frames is out of sync.
What we do now for single-GPU reviews is use Fraps data (or in-engine data for a few games) and filter the Fraps results with a three-frame moving average. This filter accounts for the effects of the three-frame submission queue in Direct3D, which can allow games to tolerate some amount of "slop" in frame submission timing. With this filter applied, any big spikes you see in the frame time distribution are likely to carry through to the display and show up in FCAT data. In fact, this filtered Fraps data generally looks almost identical to FCAT results for single-GPU configs. I'm confident it's as good as FCAT data for single-GPU testing.
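The filtering step described above is just a trailing moving average over the raw frame-time samples. Here's a minimal sketch of the idea; the window size of three mirrors the Direct3D three-frame submission queue mentioned above, and the sample frame times are made up for illustration.

```python
# Sketch: smoothing Fraps-style frame times with a three-frame moving
# average, approximating the slop absorbed by Direct3D's three-frame
# submission queue. Sample data is synthetic.

def moving_average(frame_times_ms, window=3):
    """Apply a trailing moving average; early samples use a shorter span."""
    smoothed = []
    for i in range(len(frame_times_ms)):
        span = frame_times_ms[max(0, i - window + 1):i + 1]
        smoothed.append(sum(span) / len(span))
    return smoothed

# A single 50-ms spike amid steady ~60-Hz (16.7-ms) frames:
raw = [16.7, 16.7, 50.0, 16.7, 16.7]
print(moving_average(raw))
```

Notice that the filter spreads the lone spike across three output samples and lowers its peak; a spike that survives this smoothing is likely to carry through to the display and show up in FCAT data as well.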
For multi-GPU configs, things become more complicated because frame metering/pacing comes into the picture. In that case, Fraps and FCAT may look rather different. That said, a smooth FCAT line with multi-GPU is not a guarantee of smooth animation alone. Frame metering only works well when the game advances its simulation time using a moving average or a fixed cadence. If the game just uses the wall clock for the current frame, then metering can be a detriment. And from what I gather, game engines vary on this point.
(Heck, the best behavior for game engine timing for SLI and CrossFire—advancing the timing using a moving average or fixed cadence—is probably the opposite of what you'd want to do for a variable-refresh display with G-Sync or FreeSync.)
That's why we've been generally wary of AFR-based multi-GPU and why we've provided video captures for some mGPU reviews. See here.
At the end of the day, a strong correlation between Fraps and FCAT data would be a better indication of smooth in-game animation than either indicator alone, but capturing that data and quantitatively correlating it is a pain in the rear and a lot of work. No one seems to be doing that (yet?!).
Even further at the end of the day, all of the slop in the pipeline between the game's simulation and the final display is less of a big deal than you might think so long as the frame times are generally low. That's why we concentrate on frame times above all, and I'm happy to sample at the point in the process that Fraps does in order to measure frame-to-frame intervals.
I should also mention: I don't believe the presence of the Fraps overlay presents any more of a performance problem than the presence of the FCAT overlay when running a game. The two things work pretty much the same way, and years of experience with Fraps tells me its performance impact is minimal.
Here's hoping that answer helps. This is tricky stuff. There are also the very practical challenges involved in FCAT use, like the inability to handle single-tile 4K properly and the huge amount of data generated, that make it more trouble than it's worth for single-GPU testing. I think both tools have their place, as does the in-engine frame time info we get from games like BF4.
In fact, the ideal combination of game testing tools would be: 1) in-engine frame time recordings that reflect the game's simulation time combined with 2) a software API from the GPU makers that reflects the flip time for frames at the display. (The API would eliminate the need for fussy video capture hardware.) I might add: 3) a per-frame identification key that would let us track when the frames produced in the game engine are actually hitting the display, so we can correlate directly.
For what it's worth, I have asked the GPU makers for the API mentioned in item 2, but they'd have to agree on something in common in order for that idea to work. So far, nobody has made it a priority.

Reconsidering the overall index in our Radeon R9 Fury review
As you may know, our value scatter plot puts the R9 Fury just behind the GeForce GTX 980 in our overall index of average FPS scores across our test suite. Some of you have expressed surprise at this outcome given the numbers you've seen in other reviews, and others have zeroed in on our inclusion of Project Cars as a potential problem, since that game runs noticeably better on GeForces than Radeons for whatever reason.
I've explained in the comments that we use a geometric mean to calculate our overall performance score rather than a simple average specifically so that outliers—that is, games that behave very differently from most others—won't have too big an impact. That said, the geomean doesn't always filter outlier results as effectively as one might wish. A really skewed single result can have a noticeable impact on the final average. For that reason, in the rush to prepare my Fury review, I briefly looked at the impact of excluding Project Cars as a component of the overall score. My recollection is that it didn't seem to matter much.
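The difference between the two averaging methods is easy to demonstrate. A quick sketch, with FPS numbers invented purely for illustration: the geometric mean rises less than the arithmetic mean when a single outlier game is added, but it clearly doesn't ignore the outlier altogether.

```python
# Sketch: a geometric mean damps, but does not eliminate, the influence
# of an outlier result. All FPS numbers here are invented.
import math

def arith_mean(xs):
    return sum(xs) / len(xs)

def geo_mean(xs):
    # nth root of the product, computed via logs for numerical stability
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

typical = [60, 55, 58, 62]         # four games with similar results
with_outlier = typical + [120]     # one game that heavily favors this card

print(f"arithmetic: {arith_mean(typical):.1f} -> {arith_mean(with_outlier):.1f}")
print(f"geometric:  {geo_mean(typical):.1f} -> {geo_mean(with_outlier):.1f}")
```

In this made-up case, the arithmetic mean jumps from about 58.8 to 71.0 FPS, while the geometric mean moves from about 58.7 to only about 67.7—still a noticeable shift, which is exactly why dropping a skewed game from the index can change the standings.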
However, prompted by your questions, I went back to the numbers this morning and poked around some. Turns out the impact of that change may be worthy of note. With Cars out of the picture, the overall FPS average for the R9 Fury drops by 1.2 FPS and the score for the GeForce GTX 980 drops by 2.8 FPS. The net result shifts from a 0.6-FPS margin of victory for the GTX 980 to a win for the R9 Fury by a margin of 1.1 FPS.
Things are really close. This is why I said in my analysis: "That's essentially a tie, folks."
But I know some of you hang a lot of worth on the race to achieve the highest FPS averages. I also think the requests to exclude Project Cars results from the index are sensible given how different they are from everything else. So here is the original FPS value scatter plot:
And here's the revised FPS-per-dollar scatter plot without the Cars component.
Some folks will take solace in this symbolic victory for AMD in terms of overall FPS averages. Do note that the price-performance landscape isn't substantially altered by this shift on the Y axis, though.
We have long championed better metrics for measuring gaming smoothness, and our 99th-percentile FPS plot is also altered by the removal of Cars from the results. I think this result is a much more reliable indicator of delivered performance in games than an FPS average. Here's the original one:
And here it is without Project Cars:
The picture shifts again with Cars out of the mix—and in a favorable direction for the Radeons—yet the R9 Fury and Fury X still trail the less expensive GeForce GTX 980 in terms of general animation smoothness. I believe this result is much more notable to PC gamers who want to understand the real-world performance of these products. AMD still has work to do in order to ensure better experiences for Radeon buyers in everyday gaming.
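For readers unfamiliar with the 99th-percentile metric discussed above, here's a minimal sketch of how such a figure can be derived from raw per-frame data and expressed as an FPS-style number. The nearest-rank percentile method and the synthetic frame times are my own simplifications, not necessarily the exact procedure used for the plots.

```python
# Sketch: deriving a 99th-percentile frame time from raw per-frame samples
# (nearest-rank method) and converting it to an FPS figure. Data is synthetic.
import math

def percentile_frame_time(frame_times_ms, pct=99):
    """Frame time at or below which pct percent of frames were delivered."""
    ordered = sorted(frame_times_ms)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]

# 98 smooth ~60-Hz frames plus two 50-ms hitches:
times = [16.7] * 98 + [50.0] * 2
p99 = percentile_frame_time(times)
print(f"99th-percentile frame time: {p99} ms -> {1000 / p99:.1f} FPS")
```

The appeal of this metric is that a handful of slow frames—invisible in a plain FPS average—drags the 99th-percentile number down hard, which matches how hitchy animation actually feels.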
Then there's the power consumption picture, which looks like so:
I didn't have time to include this plot in the review, although all of the data are there in other forms. I think it's a helpful reminder of another dynamic at play when you're choosing among these cards.
At the end of the day, I think the Cars-free value scatter plots are probably a more faithful reflection of the overall performance picture than our original ones, so I'm going to update the final page of our Fury review with the revised plots. I've looked over the text that will need to change given the shifts in the plot positions. The required edits amount to just a few words, since the revised scores don't change anything substantial in our assessments of these products.
Still, it's always our intention to provide a clear sense of the overall picture in our reviews. In this case, I'm happy to make a change in light of some reader concerns.

Time Warner slings free Maxx upgrades to counter Google Fiber
I've been chronicling the slow progress of Google Fiber moving into my metro area, my city, and eventually, into my house. Since Google Fiber started building in the Kansas City area, a funny thing has happened: competition. Even before the Google announcement, we had the option of AT&T U-Verse or Time Warner Cable in my neighborhood. Then Google did its thing, and AT&T later announced the rollout of its own fiber product in parts of the metro. Meanwhile, my incumbent cable provider, Time Warner, has raised the speeds of our cable Internet service several times at no extra charge.
I get the sense that we're pretty fortunate around here, all things considered, compared to a lot of areas in the U.S. One thing we have that many others don't is a real set of options.
Anyhow, I mentioned the other day that the timeline for Google Fiber service turn-ups in my neighborhood is disappointingly slow, even though the fiber's already in the ground. The wait for 1000Mbps up- and downstream was gonna be pretty rough at a continued pace of 50Mbps down and a pokey 5Mbps up.
Happily, we got a notice in the mail (yes, via snail mail) the other day from Time Warner telling us about yet another speed increase at no cost. This is part of TWC's new Maxx service offering. The "standard" service tier jumps from 15Mbps down/1Mbps up to 50/5. Our "Extreme" package rises from 50/5 to 200/20. And the fastest package goes from 100/5 to 300/20.
Not bad, really. And the change was apparently active. I ran down to my office and did a quick speed test, and sure enough, performance was up. Downstream reached about 110Mbps, and upstream hit about 11Mbps. We have a relatively new modem, from the last couple of years, but the notice said we might need to swap it out for a newer one to reach the full rates. I quickly hopped online and ordered a swap kit, which TWC promised to send out to my house free of charge.
That was on Friday. Then, on Sunday, our Internet service simply stopped working. From what I could tell after some poking and prodding, our home router was fine, and our modem was synced up to the cable network fine. It just wouldn't pass packets. What followed was a weird combination of good and bad.
Somehow, I found TWC's customer service account on Twitter and decided to see if there was an outage in my area. They were incredibly quick to reply and ask me for more info about my TWC account. I provided it, and they soon informed me that my modem had been quarantined in order to alert me that I needed an upgraded modem to get the full speeds available to me.
Yes, they straight took down my service to let me know that I needed to order a modem I'd already ordered.
If only we had... information technology that would allow companies to target only appropriate customers with these messages. If only other forms of communication existed than a total service shut-off. If only... wow.
Anyhow, the Twitter rep took my modem out of quarantine and explained that most users should see a web-based message about the reason for the quarantine—along with a form to order a new modem and a means of getting the current one out of quarantine. It's just that "some routers" block that message. My excellent Asus AC2400 router was one that did, it seems, likely due to good security design.
Again, wow. I think competition has made TWC aggressive without really making them customer-focused. I suppose it's a start.
Regardless, my new modem arrived yesterday and I installed it. The process was a little clumsy, but I muddled through. The end result was a full realization of our new service speeds. Speedtest.net tells me I can reach 216Mbps downstream and 21Mbps upstream, just a little better than the advertised rate.
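For anyone wanting to sanity-check those speed-test numbers, the gap between an advertised line rate in megabits per second and real file-transfer throughput is mostly just a factor-of-eight conversion. Here's a quick sketch of the arithmetic; the 50GB download size is a hypothetical example for illustration, not a figure from any actual test:

```python
def mbps_to_megabytes_per_sec(mbps: float) -> float:
    """Convert a line rate in megabits/s to megabytes/s (8 bits per byte)."""
    return mbps / 8.0

def download_minutes(size_gb: float, mbps: float) -> float:
    """Rough minutes to move size_gb gigabytes (decimal GB) at a given line rate.

    Ignores protocol overhead and server-side limits, so treat it as a
    best-case estimate.
    """
    megabytes = size_gb * 1000.0
    return megabytes / mbps_to_megabytes_per_sec(mbps) / 60.0

# The measured 216Mbps downstream works out to 27MB/s of raw throughput.
print(mbps_to_megabytes_per_sec(216))            # → 27.0

# A hypothetical 50GB Steam download: about half an hour at the new rate
# versus over two hours at the old 50Mbps tier.
print(round(download_minutes(50, 216)))          # → 31
print(round(download_minutes(50, 50) / 60, 1))   # → 2.2
```

In practice, few servers outside Steam will saturate a 200Mbps link, so the real-world difference is smaller than the raw math suggests.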
Man, four times our old upstream and downstream speeds is gonna make the wait for Google Fiber much easier. Heck, I'm not sure how many servers out there really sling bits to consumers at 200Mbps—other than, you know, Steam. Maybe other folks with fast connections can enlighten us about that. My sense is that, for purposes that don't involve upstream transmissions, what we have now may not differ much in practical terms from fiber-based Internet services. Didn't happen how I expected, but I'm pleased to see it.

Thanksgiving offers a perfect chance to crack open a busted iPad Air
Phew. I needed that break. I was able to take off the latter half of last week and this past weekend to spend some time with my family, and it was refreshing to get away. Thanks to Geoff and Cyril, with their alternative Canadian Thanksgiving ways, for keeping the site going.
Because I'm partially crazy, I couldn't just relax during my time away, of course. I took it upon myself to attempt a computer hardware repair. And since doing things that are, you know, sensible isn't a requirement for the halfway insane, I decided to replace the cracked glass digitizer on my brother's iPad Air. Any old chump can fix a busted PC, but only the truly elite hax0rs can tackle hardware maintenance for devices that have been designed with active malice toward the technician.
My preparation for this feat was asking my brother to order a replacement digitizer for his iPad Air and watching the first few minutes of an instructional video on the operation before losing interest. I figured, eh, it's all about glue and guitar picks.
Don't get me wrong. It is all about glue and guitar picks, but the YouTube videos lie. They show operations being performed by competent, experienced people whose hands know what to do in each situation. I am not that person, which is a very relevant difference once you get knee deep into one of these operations.
The other way most of the YouTube videos lie is that they show somebody removing a completely whole, unsullied piece of glass from the front of a device. That was not my fate. The screen on my brother's Air had cracks running clear across its surface, combined with shattered areas covered by spiderwebs of tiny glass shards.
The replacement screen came with a little repair kit, including a guitar pick, mini-screwdrivers, a suction cup, and several plastic pry tools. I used a hair dryer to heat the adhesive around the glass pane, pulled up on the glass with the suction cup, pried under it in one spot with the tiny screwdriver, and slipped a guitar pick into the adhesive layer. Sounds simple, but just getting this start took a lot of trial and error.
I soon discovered two important truths. One, I needed about five more guitar picks to keep the areas where I'd separated the adhesive from re-sealing. I had only the one—and we were away from home, at a little rental house thing, for the holiday. Two, getting a cracked screen to separate from the adhesive is a huge pain in the rear. Suction cups don't stick to cracked glass.
Here's what I eventually pulled free from the chassis, after over an hour's hard work with a bad feeling in the pit of my stomach.
Notice the spiderwebbed section sticking up. Yes, I literally peeled the glass free from its adhesive backing. Not pictured are the hundreds of tiny glass shards that shattered and fell out during the process, all over me and into the iPad chassis. The minuscule shards practically coated the surface of the naked LCD panel beneath the glass, while others worked their way into my fingertips. The pain was one thing, but worse, I was pretty sure at this point that I'd ruined the LCD panel in my brother's tablet.
Notice that some sections of the screen around the edges are not in the picture above. They didn't break free when I removed the rest of the digitizer, so I had to scrape those shards off of their adhesive backing separately.
Also notice that the busted digitizer doesn't have a home button or a plastic frame around the pinhole camera opening up top. Most of them do, and this one did when I first removed it from the iPad. However, the replacement digitizer we ordered bafflingly didn't come with a home button or pinhole frame included. It did in the YouTube videos, but surprise! You get to do this on hard mode.
The home button bracket seemed like it was practically welded on there. And remember: we didn't have any spare adhesive or glue or anything.
After nearly giving up in despair, I found another YouTube video showing this specific operation in some detail. The dude in it used tools I didn't have, but what the heck. After heating the home button area with the hair dryer, I pulled out my pocket knife and went for it. I proceeded to separate the home button, its paper-thin ribbon connection, and the surrounding metal bracket from the busted digitizer. Somehow, I managed to keep enough adhesive on the bracket to allow it to attach to the new screen. The button happily clicked without feeling loose. This success massively exceeded my expectations.
Once I'd crested that hill, I came face to face with that perfect Retina LCD coated with glass dust. Frankly, I'd been trying to bracket off my worries about that part of the operation, or I wouldn't have been able to continue. After lots of blowing on the gummy surface of the LCD panel, I decided what I needed to deal with the remaining glass shards and fingerprints was a microfiber cloth. Lint from cotton would be disastrous. Shortly, my brother went out to his truck and returned with a nasty, dirt-covered microfiber cloth that was pretty much our only option. A couple of the corners were less obviously soiled, so I used them lovingly to brush, rub, and polish the surface of the LCD panel. Several spots where I concentrated my efforts just grew into larger and larger soiled areas. My brother stood looking nervously over my shoulder, asking worried questions about the state of things. However, after rotating the cloth and giving it some time and gentle effort, I was somehow able to dispel the oily patches almost entirely.
From here, it was all downhill, right? I attached each of the miniature ribbon connectors and, before reassembling the tablet, turned it on for a quick test. To my great relief and pleasure, the LCD worked perfectly, with no dead pixels or obvious damage of any kind. And the touchscreen digitizer responded perfectly to my input, even though it wasn't yet layered atop the LCD. It was good to go.
The next step was the tedious process of placing the pre-cut 3M adhesive strips along the edges of the iPad chassis. Somehow, I managed to do this without folding over the glue strips and having them stick to themselves. Really not something I expected to pull off cleanly.
Pictured above is the open iPad with the new digitizer attached. You can see the adhesive strips around the edges of the chassis with the backing still on one side. My bandaged fingers are holding up the LCD panel, and the big, black rectangles you see are the iPad's batteries. The device's motherboard sits under the metal shield just above the batteries. It's a little larger than a stick of gum. I stopped to take a picture at this point mostly because my stress level was finally low enough for me to remember to do so.
With only a little remaining struggle, I was able to re-seat the LCD panel and secure it, remove the adhesive backing, flip over the new digitizer, and push it firmly into place atop the new adhesive layer. After a little clean-up, my brother's iPad Air looked as good as new.
Three hours after my journey began, I turned on the repaired iPad. It booted up quickly. The LCD looked perfect. The home button was clicky and solid. And I swiped to log in.
I swiped again, a few times, and I was able to log in. And then... the thing went crazy. Phantom touches everywhere ran apps, activated UI buttons, and began typing gobbledygook messages. The touchscreen was completely hosed.
Utter defeat. What followed isn't something I'd like to share on the Internet. Suffice to say that I'm a grown man, and grown men shouldn't act like that.
Initially, I blamed myself for messing up the repair with my clumsiness. I figured I must have ruined a ribbon connector or something. Hours later, after I'd gotten some distance from the whole thing, I poked around online and came to a different conclusion. You see, the original adhesive layer I removed from the iPad was essentially a felt lining with sticky stuff on both sides. The repair kit, however, came only with a thin layer of adhesive, with no insulator. I'm now 99% certain that the touchscreen's problems were caused by the digitizer making electrical contact with the iPad's aluminum chassis. It looks like others have run into the same issue.
I may never know for sure. My brother took the iPad back to his home after Thanksgiving and will be paying a repair shop to fix it. I dunno whether they'll offer any feedback about what happened.
Meanwhile, I suppose I got a little bit more experience doing repair work on mobile devices. So far, I've learned two things. First, I can do this. It just takes more of the same patience, precision, and self-imposed calm that working on larger computer systems requires. And a few initial victims, like my daughter's Nintendo DS, my mother-in-law's cell phone, my old laptop, and my brother's iPad Air.
Hey, they were broken anyway.
Second, it takes a special sort of person to do this stuff for fun. I am probably not that sort of person—and I'm okay with that.
Besides, next time I'll have a proper heat gun, more guitar picks, and some insulating tape.