I asked Google to delete my Google+ account the other day.
I did it without second thoughts, without regrets, after using the service for barely a year. Well, not really using the service. I was one of the first to join Google+ during the invite-only phase last June, but my interest quickly petered out after some early experimentation with Hangouts. Really, for the better part of the last year, I've mostly ignored the little "+Cyril" link at the top left of Google pages.
Yes, yes, I know. There's nothing terribly original about criticizing Google for its desperate, almost tragic struggle to establish itself in the social networking world. It's an open secret that Google+ is kind of awful.
Somehow, though, I thought Google would have realized the error of its ways by now. I thought it would have left Google+ to wither away quietly like Google Buzz and its other failed projects. I didn't expect the relentless and suffocating barrage of integration and promotion efforts we've been subjected to in recent months. Google+ is in Gmail now, and it's in our Google search results, too. All the trendy blogs and news sites are peppered with grey-and-red "G+1" buttons. Even the White House is staging highly publicized Hangouts with handpicked members of the public. Now, thanks to Google+, anyone with a webcam can be Joe the Plumber. How wonderful.
Putting Google+ in our e-mail and in our morning coffee and in direct telepathic streams from outer space would be fantastic—if the service had some unique, intrinsic value. It just doesn't, though. It's a me-too Facebook clone with a clunky interface, and the few neat things it does are like raindrops in a squalid, swirling sea of mediocrity. Google managed to make it worse with the latest redesign, which inexplicably squeezes all useful content into a tiny column on the left side of the screen. It's sad, because Facebook hardly has the cleanest, neatest interface around. Yet Google+ manages to be uglier and more awkward to use.
I might overlook that if it were Google+'s greatest sin, but it isn't. Google+'s greatest sin is something it can't really atone for: everyone, and I mean everyone, is already on Facebook. My girlfriend is on Facebook, my dad is on Facebook, and my aunt is on Facebook. All of my friends are on Facebook. My cousin is on Facebook, my old high school pals are on Facebook, and my work acquaintances are on Facebook. TR's Geoff Gasior isn't on Facebook, but I don't think he's on Google+, either. His enlarged privacy gland makes him allergic to social networks.
So why, exactly, should I split my social networking activities between two services? Google+ only ever let me interact with a subset of my Facebook contacts, and that was never an exciting prospect. Never once did I think, "Oh, hey, cool link. I think I'll share it on Google+!" Never once did I feel like adding a picture to my Google+ Stream. Facebook always provided a broader audience, a bigger sounding board, and nothing Google did ever made up for that. The Circles feature almost made me into a believer, but Facebook was quick to copy it.
To me, it seems Google is fighting a battle when the war is already lost. Google+ arrived far too late and far too long after Facebook had seeped into mainstream culture. They'd already made a movie about it, for crying out loud. All Google could ever do was try and shove Google+ down our throats in the vain hope that, eventually, the service would gain enough users to become a compelling alternative. To this day, they show no sign of letting up. I suppose sheer perseverance might eventually give them what they want. Maybe, through attrition, people will slowly flock to Google+ and share their baby pictures and birthday messages in that infuriatingly thin content column. Maybe they'll revel in daily Hangouts with old friends from far away.
But probably not. And I hope it never happens.
Don't get me wrong; I wish Google all the best. However, Google already has the number-one search engine, the number-one webmail, the number-one mapping service, and the number-one a lot of things. Its Android platform has grown into an unstoppable juggernaut, and it has access to a downright frightening amount of personal user data. Google doesn't need the number-one social network (or even a major one) on top of it all. It doesn't need that, and it shouldn't have that.
Obvious antitrust and Big Brother concerns aside, I think there's just something fundamentally healthy about companies picking their battles and playing to their strengths. Apple gets it, Facebook gets it, and I wish Google got it, too. When big tech firms over-diversify, it leads to sad, shameful things like Zune and Bing, which exist not because someone said, "We can do it better," but because someone said, "We can do it, too." When big tech firms over-diversify, it makes them into lumbering giants who stop serving us and start smothering us with mediocrity. Competition serves us when great ideas compete with other great ideas, not when old companies enter new markets just because they can.
Google, please, just be happy with what you have. Just do a really good job of being Google. If you do, you might find that your users' desires begin to take precedence over yours. You might find that, although you'd love to compete with Facebook, your users aren't really into that. You might find that, since everyone already uses Facebook, maybe they'd just like to see it better integrated with your services. And why not? People use Facebook for the same reason that people use Gmail and Google Maps; it's the best service, and using the best service is always the path of least resistance.
Google+ is not the best service. Using Google+ is not the path of least resistance, and having Google+ waved in our faces day in, day out won't change that.

Windows 8 frightens me, and here's why
Some people just hate change. It can't be helped. Those people cling to old versions of their favorite software as long as they can. When support cycles end and upgrades are forced upon them, they work tirelessly to customize new releases to look just like the old ones. When that fails, they take to Internet message boards and complain endlessly. "Why did they move such and such?" they ask. "Why did they merge this menu and that one? Why does it ask for my permission when I try to do this? Why, why, why?"
Those people are the bane of developers and web designers everywhere. When they're spoon-fed improvements with shiny silverware, they purse their lips and shake their heads and cry and moan until whatever they were offered splatters on the floor. They don't care if things change for the better, and they can't conceive that learning something anew might save them time in the long run. They want things exactly as they are now—forever.
I'm not one of those people, or at least I like to think I'm not. For the most part, I love change. I seek out new software versions regularly, because I get bored with stagnant user interfaces and unchanging feature sets. I used to run beta apps a lot, but I cut back to keep my main work machine stable. Unlike some people I know, I tend to stick to the default settings in most applications I use. Not only does it save me a lot of frustration when customizations disappear or I have to change PCs, but it also gives me front-row seats to whatever new goodies developers add. I try to enjoy software the way its designers intended, not the way I think it ought to be used.
So, I love change, and I love trying new things. I should be ecstatic about Windows 8. It's going to be new. It's going to be very different, and it's going to make me learn new things and sample new behaviors. Upgrading to Windows 8 will mean a few days, maybe weeks, of experimentation and discovery, and my computing habits might change for the better because of it.
The thing is, I'm not excited. I'm terrified. It's like I'm a hot tub enthusiast, complete with a mustache and chest hair and 1970s hairdo, and Microsoft is about to toss me into a steaming hot spring at the bottom of a volcano. And the hot spring is full of sharks. And the sharks also have mustaches and chest hair. Windows 8 just feels like too much change all at once—too much change that's too fundamental. And I'm not convinced that change will be for the better.
Yes, I've tried the Windows 8 Consumer Preview, and it's an improvement over last year's Developer Preview, no doubt about it. Everything works more smoothly. Everything seems to make just a little bit more sense. There's a growing library of Metro apps, so it's easier to get a feel for how the final product will behave on a day-to-day basis. And heck, some of those Metro apps look pretty good—here's looking at you, NewsRepublic.
The problem is, Windows 8 still has that ugly schism between Metro and Desktop. You keep waltzing from one to the other and then back again, whether it's to open an application, to move documents around the file system, or to perform any other task that isn't neatly contained within a single interface. So far, it seems like Metro apps are geared solely toward content consumption, while all the productivity work still has to happen in the Desktop. And it's terribly awkward.
That's probably going to change, of course. I expect the library of Metro apps to grow once Windows 8 hits stores, and once that happens, we'll likely see some productivity software designed for the new interface. But as new Metro apps start to supplant old Desktop ones, I'm going to have to deal with another big, big problem. And I fear it might be a dealbreaker.
Right now, my main PC has two 24" displays sitting side by side. I usually have a web browser, text editor, and Office applications on my left monitor, and my IRC client, IM contacts list, IM windows, and music player on my right monitor. Things move around from time to time, naturally. Sometimes, I'll be running Excel on my left monitor with a browser window on the right. Other times, I'll have a browser on each display. Maybe one will be for reading TR, and maybe the other will have a YouTube music video playing. Today, for example, I've had I Want a New Drug by Huey Lewis and the News looping at my right.
I could do all of those things in Windows 8, but I'd have to do them in the Desktop interface. Metro, with its modal design and huge buttons and giant text, is almost comically ill-suited for heavy multitasking. Basic app-to-app switching is clunky and slow. Windowed multitasking is off the menu. The best you can do is run two apps side-by-side, with one squeezed along the edge of the display, and that's a poor consolation prize.
Now, what happens when Metro gets all the cool new software, and Desktop gets relegated to legacy status? And what if Desktop and Metro continue to co-exist with equal attention from developers? Where does that leave me? In either case, my multitasking experience is going to take a bullet in the leg. Unless I want to snub Metro software forever, I'll have to dedicate one display to Metro, with one or two apps running concurrently, and the other display to the Desktop UI, with everything else I want to use. Things might end up stuck that way for the foreseeable future. Or, if developers choose to focus their efforts on Metro, I might have to abandon the Desktop—and multi-window multitasking—for good.
Both of those options would suck. They wouldn't just suck; they would hobble my productivity. I wouldn't be able to keep my eye on a whole suite of different apps at once, and juggling between more than a handful of programs would become a nightmare. As hard as I might try to accept change and to adopt Microsoft's prescribed usage model, I doubt it would do any good.
"Ah, well," you might say. "If you don't like Windows 8, Cyril, you should just stick with Windows 7. Nobody's going to force you to upgrade."
That's true. Sort of. Sticking with an obsolete operating system always starts off great, but then you begin to miss out on new things. After a few years, Microsoft pulls the plug, and you're on your own with no software patches and no security updates. The sad thing is, I don't want to keep using Windows 7 forever. As nice an operating system as it is, it's not perfect, and many of Windows 8's Desktop improvements actually seem awfully compelling. However, I won't be able to enjoy them without dealing with all those Metro-related hassles.
In short, I'm going to be faced with an ugly compromise no matter what I do. And that's why Windows 8 frightens me.

How TR gets (some of) its squeaky-clean product photos
If you follow our intrepid Editor in Chief on Twitter, you might have read his tweets about a little misattribution mishap a couple of months back. In short, another site mistakenly copied one of the photos from my Radeon HD 7800-series review and attributed it to AMD, even though I'd snapped the shot myself. Here's what Scott said at the time:
FWIW, TR doesn't use stock photos. Those are our own! We work hard on them. Like this one: http://techreport.com/image.x/pitcairn/money-1600.jpg
However, our photos are often taken by other sites. Like this one at DT, sourced to AMD! http://bit.ly/ACiD4C
So if you ever wonder how TR gets access to all of those great, clean pictures, the answer is: hard work we put into our own photography.
The attribution error was soon corrected, and all was well.
But that didn't address the broader issue: our photos often look a little too clean, and some folks seem to mistake them for stock images supplied by the companies we cover. Part-time TR coder Bruno Ferreira told me one of his acquaintances, another TR reader, thinks we don't do any of the photography in our reviews—which couldn't be farther from the truth. We do occasionally insert stock pics when we don't have the products at hand, but we always label them. See, for example, the third image here.
To help clear things up, I'd like to take you on a little behind-the-scenes tour of my homebrewed photo studio. Scott and Geoff and our other writers have their own setups in their own labs, but they follow a largely similar procedure. Besides, we all know I take the best pictures around here. Ahem.
Here's a still life of my photography gear. Excuse the murky picture; I couldn't use my good camera or lights for this one, obviously. The items you see are as follows:
The Rebel XSi is invaluable, but I'd say the most crucial components of the setup are the lighting kit and the tripod. No, really. You just can't take squeaky-clean product photos without good lights. Using a direct flash will generally make the subject look flat and two-dimensional, and more often than not, it'll cast harsh shadows, as well. What we want are smooth, soft-edged shadows, and a decent set of tungsten lights and umbrellas makes them very easy to obtain.
The tripod helps get as much light as possible onto the camera's sensor without sacrificing image quality. I tend to shoot with shutter speeds in the 0.3-0.4" range, which lets me stop down to f/14 and stay at ISO 200. If I weren't using a tripod, I'd have to raise the shutter speed to 1/80 or so to compensate for my unsteady hands. That would mean, in turn, opening up the aperture and cranking up the ISO, which would leave me with a shallower depth of field (i.e. a blurrier foreground and background) and much more image noise. Not good.
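The tripod math above is just exposure reciprocity: each full stop doubles or halves the light hitting the sensor, so a slower shutter buys you room to stop down or drop the ISO. Here's a quick back-of-the-envelope sketch in Python (the helper name is mine, just for illustration):

```python
import math

def stops_between(slow_shutter, fast_shutter):
    """Exposure difference, in stops, between two shutter speeds (in seconds).
    Each full stop doubles or halves the light reaching the sensor."""
    return math.log2(slow_shutter / fast_shutter)

# Dropping from a handheld 1/80 s to a tripod-steadied 0.4 s:
print(stops_between(0.4, 1/80))  # 5.0 — five extra stops of light
```

Those five stops are what let me trade a wide aperture and high ISO for f/14 at ISO 200 without underexposing the shot.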
Here's the gear in action. The two tungsten lights spit out a combined 500W, which bounces off the unfurled craft paper to produce those clean white backdrops we all love. Some products leave dark blemishes on the paper, and that's where the pencil eraser comes in.
Also, throughout each shoot, I'll keep wiping down the product with the microfiber cloth to get rid of smudges, fingerprints, and dust. That helps more with glossy surfaces like laptop bezels than with matte ones like GPU coolers, but I do it regardless. Closeups have a funny way of magnifying little imperfections.
Instead of hitting the shutter release manually, I tether the camera to my PC and use Canon's excellent EOS Utility. The program essentially works like a shutter remote on steroids, with aperture, shutter speed, and ISO controls, not to mention a live preview window that shows me exactly what I just snapped. It's a huge time saver. I don't need to squint at the camera's tiny LCD and wonder how it's going to look on a real display. Also, the high-res preview lets me check for stray dust motes and whatever else survived the microfiber rubdown.
Each shot gets automatically downloaded to my computer in RAW format. Why RAW and not JPEG? Adobe Camera Raw is why. The software allows for post-hoc exposure and white balance corrections, plus all kinds of useful little tweaks, like a "recovery" slider for overexposed highlights and a "fill light" slider that brightens up darker, underexposed areas without affecting the rest. Since RAWs are lossless with 14 bits per color channel, those tweaks don't bring up ugly compression artifacts.
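The extra headroom is easy to quantify; here's the per-channel math, nothing camera-specific, just powers of two:

```python
# Tonal levels per color channel at each bit depth
raw_levels = 2 ** 14    # 14-bit RAW: 16,384 levels per channel
jpeg_levels = 2 ** 8    # 8-bit JPEG: 256 levels per channel

print(raw_levels, jpeg_levels, raw_levels // jpeg_levels)  # 16384 256 64
```

That 64x difference in tonal resolution is why Camera Raw can push exposure and recover highlights without posterizing the image the way the same edits would on a JPEG.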
Of course, I rarely use Camera Raw for white balance corrections. I just configure white balance directly on the camera before shooting. All it takes is a shot of my backdrop sans subject and a trip down the Rebel XSi's menu tree, to the "Custom WB" control, and I'm all set.
The last step before a photo makes it into one of our reviews is Photoshop. Here, I adjust levels to make sure the background is as close to white as possible without sacrificing detail. I then crop as close as possible to the edges of the subject.
And there's the result. Not one of my best, and maybe a little overexposed because of the reflective aluminum shroud, but you get the idea.
Almost a year ago today, I waved goodbye to the world of print books and got myself an Amazon Kindle 3. You might recall I blogged about the experience, pointing out some of the advantages of e-books in general and some of the disadvantages of the Kindle 3 in particular—namely on the ergonomics front.
I've spent many hours cradling the Kindle 3 since then, reading everything from Steinbeck to the Hunger Games trilogy to e-novels written by some of my favorite webcartoonists. I did my reading at my desk, in bed, at the beach, and after hours at the Consumer Electronics Show, when I needed to wind down after long days of meetings and writeups. It was great, and I never thought about returning to print media for a minute.
However, I never got over the Kindle 3's ergonomics problems. After a year of use, I still hadn't found a comfortable way to hold the device for extended periods of time. The closest I came was holding the Kindle in my left hand, propped up on an extended pinky finger, with my thumb resting next to the page turn buttons. That was okay, but not great. After reading for a couple of hours in that position the other day, I stood up and felt something like an electric shock propagate down my left arm. I guess I must have pinched a nerve somewhere. Not fun.
I always managed to hit the buttons by accident, too, whether on the keyboard or around it. The d-pad was especially vulnerable to accidental presses. Those would throw me forward or back entire chapters, and pressing "back" didn't always help me find my place again. Heck, I occasionally hit that button by accident. Call me a klutz, but I think the dearth of empty spots on the front bezel is a real problem. I tried one of those leather covers Amazon sells to see if it would alleviate the problem, but it seemed to weigh down the device without making it much more comfortable to hold. I also hated the velvety interior.
I was understandably tempted, then, when Jeff Bezos introduced the Kindle Touch last September—and not just because he put together a great pastiche of a Steve Jobs keynote. The device looked genuinely compelling. At the same time, I had mixed feelings about touch-enabled e-readers. I'd seen them in stores and libraries, and they were always covered in really gross smudges. Since e-ink displays aren't backlit, the smudges almost seemed to compete with the text for attention.
So, I waited. The Kindle Touch wasn't available in Canada right away, which gave me plenty of time to weigh the pros and cons. And weigh them I did.
Amazon cleared the Kindle Touch for international shipments in early February. I placed my order last Sunday. My reasoning went something like this: I'd been spending an increasing amount of time reading, and I was sick of the Kindle 3's poor ergonomics. I figured I'd try the Kindle Touch and return it if the device didn't live up to my expectations.
The Touch arrived alongside a matching zip sleeve (hey, gotta stay protected) at my door the following Tuesday. UPS works in mysterious ways—I had selected the slowest shipping option, and Amazon had told me to expect my shipment some time in late March or early April.
Anyway, I haven't returned it yet.
And I don't think I'm ever going to. This thing is great. It feels like an improvement over the Kindle 3 in just about every way imaginable: better ergonomics, better software, a better form factor, and less weight. Even the screensaver images are better. Where the Kindle 3 spat out creepy renderings of Virginia Woolf or Mark Twain when put to sleep, the Kindle Touch displays cool-looking stock photos of pencils and movable type and stuff.
For me, though, the main thing is that the Kindle Touch is infinitely more comfortable to hold than its predecessor:
Look at that! I can put my thumb anywhere I want on the bezel, and no bad things happen. Well, I do have to avoid the home button at the bottom, but that's no big deal—none of my fingers want to go there when I'm holding the Kindle Touch one-handed. Also, since the Touch is both lighter and thicker than the Kindle 3, I have a better grip on it, and my muscles don't get tired as quickly. It's like Amazon did usability testing on actual humans with hands instead of robot claws this time.
The touch screen works well, too. It's more responsive than I expected, and swiping to turn pages restores some of the tactile intimacy of printed books. If swiping ain't your thing, you can just tap to turn pages. Amazon made the "previous page" area a tiny strip on the left side of the screen, so you can go back and forth with just your left thumb.
Even the on-screen keyboard is solid. The keys are laid out properly, unlike on the Kindle 3's bizarro hardware keyboard, which positions keys on a grid instead of staggering them as it should. I've never had to do a lot of typing on either Kindle, but having a functional, usable keyboard is definitely a good thing.
And there are plenty of other improvements. There's Amazon's X-Ray feature, for the relationally challenged among us who forget which characters are whom, and the new-and-improved dictionary, which lets you look up words by just tapping them. Oh, joy! No more awkward d-pad navigation. The on-screen interface is cleaner and easier to navigate, as well.
Is the Kindle Touch entirely perfect? No. I do notice the smudges sometimes, which makes my pseudo-OCD kick in, and the display is more recessed, which means the bezel can cast a more noticeable shadow depending on the lighting. That second issue is actually due to the way the touch screen works: instead of having a capacitive overlay on top of the screen, the Kindle Touch uses infrared emitters and sensors along the edges of the panel to detect finger positions. (According to CNN, Jeff Bezos doesn't like capacitive touch on e-readers because he says it adds glare to the display.) Oh, and I kind of miss the progress bar at the bottom. The Touch has ditched it in favor of a simple location indicator and percentage, which doesn't tell me how close I am to the next chapter or how much I've read since my last session. I wish they'd bring that back.
Overall, though, I'm a happier Kindle user and a happier reader thanks to the Touch. The international, Wi-Fi only version set me back $139, but if you live in the States (as most TR gerbils do), you can get a version with ads for only $99. As I understand it, the ads only show up on the screensavers and on the home screen. The ad-supported model sounds like a no-brainer, and that's probably the one I would have bought if it were available up here. An ad-free Wi-Fi model is also available Stateside for $139.
The little sleeve Amazon sells isn't bad, either.
Ah, Las Vegas. I had hoped never to return. Yet there I was last week, in the ticket line at the Vancouver airport, cursing myself for making it in time and bringing my luggage and passport.
"Maui or Phoenix!" an attendant began to shout, pacing back and forth between the ticket counters. "Anyone going to Maui or Phoenix needs to step to the front of the line!"
We made eye contact. For one brief second, I thought she was beckoning me to a less wretched destination. "I... I'm going to Las Vegas," I said with my hand half-raised.
She stared at me blankly. "Maui or Phoenix! Anyone going to Maui or Phoenix, step forward now!"
A man in a Hawaiian shirt and khaki shorts waddled past me. I shot him a loathing, jealous look. His flight would be long, but he would soon be sipping margaritas and sunbathing. I would be nursing my chapped lips and trying not to step on blisters during long marches between casino hotels. I would be hurrying along sidewalks just narrow enough to dispense deep lungfuls of car exhaust. I would be getting friction burn from the strap on my new messenger bag.
My mood bottomed out three hours later, when I emerged from the walkway into the McCarran International Airport. I encountered the slot machines that would haunt my waking hours for the next week. Oh, please, not again.
Then, somehow, it wasn't as bad as I thought.
I'm not saying Vegas is any less awful of a city than when I went there last year. I'm not saying it's any less grotesque or absurd. My friends tell me they know people who work there, and they tell me those people are happy. They say you can make a good living there, with good restaurants, endless entertainment, and a reasonable cost of living. What's not to like? I don't think I'll ever see Vegas that way. Maybe this city just attracts a certain kind of people—people happy to go there on vacation, to settle there, to raise a family there. Maybe, when the rest of us get dragged there despite ourselves, we can't help but hate it.
And maybe, after a while, we start to tune out the bad parts.
Last week, the garish lobby of the Venetian felt like a fact of life, not a cause to stop, mouth agape, wondering why anyone would build such a thing. The people at the slot machines looked like regular people just going about their regular days, not wretched souls unknowingly paying for the hotel's marble columns and impossibly kitsch indoor canals. The overpowering fragrances at the Trump Hotel and the Wynn didn't make me want to choke on my own vomit. All I did was chuckle to myself. "Hah. It still smells like vanilla in here."
Somehow, it all seemed normal. Normal like a crazy homeless man you see on the street every day on your way to work. Normal like an old lady wearing big sunglasses and a fur coat and too much perfume, trying to accessorize away the years that stole her beauty. Normal like little Jimmy getting sent home from school because he smashed a slug with a rock and poked at its guts with a stick.
This year, Vegas just seemed like a quirky backdrop to CES, not an attraction in and of itself. That was lucky—because while I tolerated Vegas better, the show seemed a lot worse to me.
"Hello there, how are you? Come right in. Here's our new product. It looks like the product we released last year, but don't be fooled, because it's slightly different. Have you seen our new tablet? It runs Android and has a black bezel and tapered edges. Have you seen our ultrabook? It's thin and light and cheaper than the MacBook Air. Yes, it is going to be obsolete in three-and-a-half months when Ivy Bridge comes out. So, how's the show been for you guys so far?"
Meeting after meeting, hotel suite after hotel suite, that's what the friendly PR reps we all know by name told us. We smiled, we nodded, we asked questions. We took pictures and wrote it all up in the hotel room at the end of the day, our feet throbbing and our eyelids drooping from the exhaustion. We posted it all on TR because that's what we do, and new products are new products. There were even a few small veins of glittering excitement in the dull, grey bedrock—the high-DPI Transformer Prime, the 7-series mobos.
But the veins were too few and the rock too hard and thick.
I remember Computex 2007. It was my first trade show, my first trip to Taiwan, and my first time in Asia. Asus announced the very first netbook there, the original Eee PC. Intel demoed the first Atom-based handhelds. Via showed me the first x86 motherboard the size of a business card. OCZ let me try its first brain-wave-powered game controller. I got first-hand word—anonymously, of course—about upcoming processors and graphics cards. I drank my first glass of snake blood in an outing with other press guys, and for the first time in my life, I flew home with the fulfillment of having covered an exciting trade show.
Were there any firsts at this year's CES? None come to mind. CES 2012 was a show of second tries and third wheels, like the $529 Tegra 2 tablet from Toshiba and the convertible ultrabook from Lenovo that folds flat with the keyboard exposed below. It was a CES of me-toos and maybes, where prototypes of dubious value intermingled with MacBook and iPad lookalikes. There was no big, earth-shattering story this year; nothing like the birth of the netbook at Computex '07 or the unveiling of Nvidia's Project Denver at CES '11.
Last Friday, I packed my bags and grabbed a cab back to the McCarran International Airport. I smiled and nodded at the TSA officer who grunted a sarcastic "bonjour" after seeing my French passport. I ate an unfulfilling lunch at the Chili's near security. I figured out why my iPhone could get onto the airport Wi-Fi and my laptop couldn't. I spoofed my laptop's MAC address, got online, and hammered out our last bit of CES coverage for the week. I closed my laptop and stood in line at the gate. I realized how crowded the plane would be and made a last-minute run for the washroom before boarding.
I got into my seat and waited for the plane to take off. And then, for the first time in my life, I flew home from a trade show feeling nothing but disappointment.

What's next for PC gaming?
If you're reading this, you're probably a PC gamer. You've probably invested a decent amount of money in a fast graphics card, a decent-sized monitor, and more cheap RAM than you probably needed. I'm willing to bet you've also played some of the latest shooters on that gaming rig of yours.
If my description fits you, then you must have realized that your PC can carry much bigger loads than the lightweight Modern Warfare engine and its ilk. The sad truth is that today's games are developed with six-year-old consoles in mind, and they look the part, too. High-end gaming PCs are roughly an order of magnitude more powerful than the Xbox 360 and PlayStation 3. Playing Modern Warfare 3 on the PC is a bit like taking a Ferrari to go grocery shopping; as flashy as it might look, the resources at hand are being woefully underused.
None of that should be news to you. The question is, what happens next?
Epic Games Technical Director Tim Sweeney said in September that Unreal Engine 4 won't be ready 'til "probably around 2014." Speaking to Develop the following month, Epic President Mike Capps noted, "I want Unreal Engine 4 to be ready far earlier than UE3 was; not a year after the consoles are released. I think a year from a console's launch is perfectly fine for releasing a game, but not for releasing new tech. We need to be there day one or very early."
Unless there's some miscommunication inside Epic, those two statements tell us the successors to the Xbox 360 and PlayStation 3 won't be out until late 2013 or early 2014. That's a long time to wait with PCs getting more powerful and game developers still forced to target the same old platforms. However, I don't think that means we have to suffer continuing stagnation in PC graphics for the next two years. There's plenty that can be done to improve visual fidelity without tessellating everything and soaking images in photorealistic shader effects.
Mainly, I'm talking about four little visual eccentricities we've been living with for far too long—eccentricities that fast PC hardware could eradicate while we wait for the next generation of games.
I think those are the big ones. Rage already got us part of the way there with a hard 60 Hz target and beautifully effective vsync. Now, other games need to follow suit and iron out the other kinks mentioned above. I certainly hope AMD and Nvidia will push developers in that direction, too. After all, extra graphics horsepower can be put to good use making games look smoother, cleaner, and more seamless—graphics horsepower that would otherwise go unused... or, more crucially, un-purchased. Yes, I know about PhysX, stereoscopic 3D, and PC-only DirectX 11 eye candy, but the GPUs that come out next year and the year after that will no doubt have the brawn to handle those things with cycles to spare.
Of course, if my wishes are fulfilled, then we'll be in an interesting position when the next-gen consoles do come out. If Epic's Samaritan demo is any indication, future titles will take another step toward photorealism. I expect hardware requirements will suddenly spike up, but does that mean we'll be forced to trade silky smooth, shimmer-free graphics just for a taste of all the eye candy future games can throw at us? I certainly hope not. I hope next-gen titles will manage to offer smooth, distraction-free imagery with an added dose of realism. Otherwise, what would be the point? Photorealism with screen tearing, shimmering textures, and microstuttering wouldn't be photorealism at all.

Where's my 21st-century television?
Gabe Newell said something that caught my eye the other day. As part of a broader interview with The Cambridge Student, Newell shared some powerful words of wisdom on the topic of piracy and copy protection:
In general, we think there is a fundamental misconception about piracy. Piracy is almost always a service problem and not a pricing problem. For example, if a pirate offers a product anywhere in the world, 24 x 7, purchasable from the convenience of your personal computer, and the legal provider says the product is region-locked, will come to your country 3 months after the US release, and can only be purchased at a brick and mortar store, then the pirate's service is more valuable. Most DRM solutions diminish the value of the product by either directly restricting a customer's use or by creating uncertainty.
Our goal is to create greater service value than pirates, and this has been successful enough for us that piracy is basically a non-issue for our company. For example, prior to entering the Russian market, we were told that Russia was a waste of time because everyone would pirate our products. Russia is now about to become our largest market in Europe.
If you're a media industry executive who campaigns to paint pirates as amoral thieves who must be drawn and quartered, those words probably feel like a slap in the face. If you're an average, Internet-savvy adult, they probably ring truer than anything anyone's ever said about piracy—if only because of their eloquence.
I know Newell's right because I can relate. Many years ago, I was just another kid who downloaded MP3s and games off KaZaA and BitTorrent. I would make a point to purchase physical copies of the content I liked, but I would usually leave the jewel cases and boxes unopened. Retail purchases were a show of support for the content creators, not a means of obtaining their work.
Today, I buy almost all of my music on iTunes and almost all of my games on Steam. The exceptions are indie songs distributed through the artists' websites and games inexplicably walled off from the world's most popular PC game distribution service (*coughBattlefield3cough*). I use iTunes and Steam because, as Newell says, they provide a better service than the pirates do. I can get any content I want instantly, I know the quality is up to snuff, there are no viruses or cracks to worry about, and I get to support the content creators without letting shrink-wrapped jewel cases pile up in my apartment.
Valve can claim credit for making online game distribution appealing, and Apple undoubtedly deserves props for doing the same with music. Before iTunes came along, record labels were cluelessly trying to make up for declining CD sales with awkward, unappealing, and restrictive services. Apple didn't invent the concept of digital music distribution, but in true Apple tradition, it was the first to do it right. The move away from digitally locked songs and the introduction of iCloud have only made iTunes more appealing as the years have gone by.
Thanks to Apple and Valve, we're in a good place with music and PC games. Unfortunately, watching one's weekly slate of TV shows is still, inexplicably and frustratingly, a royal pain in the ass.
Yes, you can spend a hundred dollars every month on a carefully customized cable TV service, and then spend valuable time configuring your DVR to record the shows you want to watch. You can pay $1.99 per episode on iTunes, ensuring that you never give obscure shows a chance and that you curb your consumption of nightly programs like The Daily Show. You can use Hulu and watch recent episodes for free, with commercial breaks, the day after they air (provided you live in the United States). You can even scour the websites of different cable networks in the hope that they, too, let you stream recent shows for free.
Or... you can hop on your favorite BitTorrent tracker and download high-definition, commercial-free rips of any TV show on the planet at most an hour after it airs.
Many of the legit offerings are doing things almost right, but the pirates still provide a better service, hands down. It's not even funny. We all know what the problem is and what needs to be done, so why haven't the big networks gotten the message yet?
Here's what I want: a single service like Hulu Plus or Netflix that regroups shows from all major channels (including Comedy Central and HBO), lets me watch all past seasons of shows, and offers new episodes immediately after they finish airing. I want this service to be available in Canada as well as the United States. I want to pay a flat subscription fee, and I'm prepared to live with brief commercial breaks on top of that. I want to be able to cancel my cable TV subscription, because I never watch live TV anyway, and to use my PC or my girlfriend's Xbox to watch shows. I will pay good money for this service ($50 or more a month doesn't sound unreasonable), and I will use it every day.
Why is nobody willing to take my money and provide this service in exchange? My demands aren't outlandish. All I'm asking for, really, is an on-demand alternative to live TV that doesn't suck. Save for sports, news, and American Idol, I think we can all agree that live television is a relic of the last century. It's high time to give 21st-century television a 21st-century platform on which to flourish, but I fear that won't happen until a dashing, Steve Jobs-esque executive once again strong-arms content providers into doing what's best for their customers. I hope someone rises to the challenge, because every day, entirely too many good TV shows are pirated by people simply following the path of least resistance. And that's just sad.