The Damage Report

You can snag a 39'' 4K display for $404
— 12:25 PM on January 16, 2014

The other day, my friend and fellow PC enthusiast Andy Brown pinged me and told me I needed to come over to his house to see his new toy: a 39" 4K display that he ordered from Amazon for 500 bucks. Coming from anybody else, I'd have been deeply skeptical of this purchase, but Andy is actually the co-founder of TR and has impeccable taste in such matters.

The product he purchased is this Seiki Digital 39" 4K 120Hz LED television. I started asking him more questions and looking into it. The more I learned, the more intrigued I became. Soon, I was at Andy's place peering into this large and glorious panel. He had the thing placed directly in front of his old 2560x1600 30" HP monitor, and I can't say I blame him. After all, you can almost get four copies of TR, or any other standard web-width site, side by side on the thing.

Yeah, this beast has more real estate than Ted Turner. And it has dropped in price to $404 at Amazon as I write. With free Prime shipping. And it's still in stock.

Sounds too good to be true, right?

Not really. This thing is just a killer deal, available to anyone. But there are a few caveats.

First, there's the matter of refresh rates. This display has a single HDMI input that can support the panel's native resolution of 3840x2160 at a refresh rate of 30Hz. That's a fast enough update rate for desktop and productivity work, but 30Hz is not good for gaming, even with vsync disabled.

Your fall-back option is to drop down to 1920x1080 while gaming, where this thing supports a nice, fast 120Hz refresh rate. That's a compromise on resolution, yes, but this puppy is probably faster than your current display, since 60Hz is the usual standard. Also, 1080p is a nice resolution for gaming because it doesn't require heaps and heaps of GPU horsepower in order to maintain acceptable performance.
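
Here's a quick back-of-the-envelope sketch of why those two modes pair up so neatly over a single HDMI link; the arithmetic is mine, not anything from Seiki's spec sheet, and it ignores blanking overhead.

```python
# Rough pixel-rate math for the Seiki's two useful modes. Active pixels only;
# blanking overhead is ignored, so treat these as ballpark figures.

def pixel_rate(width, height, refresh_hz):
    """Pixels pushed per second for a given mode."""
    return width * height * refresh_hz

modes = {
    "3840x2160 @ 30Hz":  pixel_rate(3840, 2160, 30),
    "1920x1080 @ 120Hz": pixel_rate(1920, 1080, 120),
    "3840x2160 @ 60Hz":  pixel_rate(3840, 2160, 60),   # what this link can't do
}

for name, rate in modes.items():
    print(f"{name}: {rate / 1e6:.0f} Mpixels/s")

# 3840x2160 @ 30Hz:  249 Mpixels/s
# 1920x1080 @ 120Hz: 249 Mpixels/s   <- same payload, traded for refresh rate
# 3840x2160 @ 60Hz:  498 Mpixels/s   <- roughly double, hence no 4K60 here
```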

And did I mention the price?

The other matter of some importance is the image quality of the display. I believe it's an S-MVA-type panel, which should make it superior to a TN panel and faster than an IPS one. Standing in front of it, that seems about right. There's less color shift than on most TN panels, and there's a heckuva lot of pop to the deep reds and oranges that often seem muted on TN panels.

This is a TV, though, so color correctness is an issue. You may want to buy or borrow a calibrator for it. Andy didn't yet have his display calibrated properly in Windows. The blues in the TR header were alarmingly neon and bright, to the point of being annoying. He'd had more luck with calibration on his Hackintosh, though. When he switched over there, the blues were somewhat tamed, though still brighter and more saturated than I would have liked. He'd put some work into dialing down the backlight intensity in one of the config menus on the TV in order to reach non-retina-searing brightness levels appropriate for a computer monitor.

But did I mention the price?

The simple fact is that you can have a massive array of pixels a couple of feet from your face for about $400. Stretched across a 39" panel, those pixels produce a density that's obviously higher than on my own 30" 2560x1600 monitor, but it's not so incredibly high that text becomes completely unreadable. If you do need to bump up the font size, the PPI shouldn't be so out-of-bounds high that the default Windows scaling options are overwhelmed. (I'd still recommend Windows 8.1 for a better experience. Or Mac OS X for the best.)
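
If you want numbers on that, here's a minimal pixel-density sketch; it's plain geometry, nothing vendor-specific.

```python
import math

def ppi(width_px, height_px, diagonal_inches):
    """Pixels per inch along the diagonal of a panel."""
    return math.hypot(width_px, height_px) / diagonal_inches

print(f'39" 3840x2160: {ppi(3840, 2160, 39):.0f} PPI')   # ~113 PPI
print(f'30" 2560x1600: {ppi(2560, 1600, 30):.0f} PPI')   # ~101 PPI
print(f'24" 1920x1200: {ppi(1920, 1200, 24):.0f} PPI')   # ~94 PPI
```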

And there are so, so many pixels.

I know there are a lot of display innovations on tap for this year, including dynamic refresh schemes like G-Sync and 4K TN panels for around $700. This one comes at you from a different angle, and it's not something I expected, to say the least. But if you're willing to front a fraction of the cost of most 4K monitors, you can have the same pixel count today at a crazy discount.

For what it's worth, Newegg has them in for $599, in case Amazon sells out of its stock at $404. There's also a 50" version for $607 and a 65-incher for two grand.


Oculus Rift's 'Crystal Cove' prototype tickles our rods and cones
— 5:52 PM on January 15, 2014

The absolute highlight of last year's CES was getting a first look at an Oculus Rift prototype. Strapping on a Rift for the first time is a mind-blowing experience. It will change your view of what's possible in gaming in the next 5-10 years. Naturally, then, when it came time to plan for CES 2014, I made sure to schedule some time with the folks at Oculus to see what they—and especially new Oculus CTO John Carmack—have been doing.

As you may have heard, the new "Crystal Cove" prototype that Oculus brought to the show this year captured a major award: Best in Show for CES 2014. The news came to the folks in the Oculus meeting room late on Thursday last week, as we were getting a demo of the headset. Given what I saw through those goggles, the recognition seems well deserved.

Crystal Cove is the third generation of hardware Oculus has put on public display. The first generation, with a 720p LCD screen inside, was the one they showed at CES 2013. Later last year, Oculus upgraded to a higher-resolution 1080p LCD. Crystal Cove takes several important steps beyond that.

Much of the new tech in Crystal Cove is intended to overcome one of Oculus' biggest challenges: VR headsets don't work well for everyone. A lot of people develop a sense of vertigo, nausea, or fatigue after using a Rift prototype for a while, sometimes in only minutes. The problem is apparently caused by the disconnect between what your senses expect to see in response to head motions and what's actually displayed. If the system doesn't respond quickly or accurately enough, you may find yourself unexpectedly executing a technicolor yawn.

Even several tens of milliseconds worth of delay can be enough to trigger a problem, so Oculus has been pushing to squeeze any latency it can out of the sensor-to-display feedback loop. That's why the Crystal Cove prototype contains a 1080p AMOLED display. The AMOLED delivers markedly better color saturation and deeper blacks than the earlier LCDs. More importantly, though, the AMOLED has a much faster pixel-switching time: less than a millisecond, versus about 15 ms for the LCDs in prior Rift prototypes.
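
To see how much a slow panel eats into that budget, consider a rough motion-to-photon tally. This is only an illustrative sketch: the pixel-response figures come from the comparison above, while the sensor, render, and scanout numbers are placeholders I've made up.

```python
# Illustrative motion-to-photon budget. Only the pixel-response figures (about
# 15 ms for the earlier LCDs, under 1 ms for the AMOLED) come from the text;
# the other components are hypothetical placeholders to show proportions.

def motion_to_photon(pixel_response_ms,
                     sensor_ms=2.0,     # hypothetical sensor read/fusion time
                     render_ms=13.3,    # hypothetical render time (~75 FPS)
                     scanout_ms=8.0):   # hypothetical average scanout delay
    return sensor_ms + render_ms + scanout_ms + pixel_response_ms

print(f"LCD prototype: ~{motion_to_photon(15.0):.0f} ms")   # ~38 ms
print(f"Crystal Cove:  ~{motion_to_photon(0.5):.0f} ms")    # ~24 ms
```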

Interestingly enough, switching to an AMOLED alone doesn't fix the ghosting that's often evident when making sweeping movements with a Rift attached to your noggin. Oculus claims this ghosting effect isn't inherent to the display itself and isn't visible on a high-speed camera; instead, it's caused by an interaction with the human visual system. They have been able to mitigate the problem, however, by implementing a low-persistence display mode. The AMOLED switches quickly enough to flash each frame on screen and then blank it, at a high enough rate that no flicker is perceptible to the human eye. What you'll notice, instead, is that the ghosting effect is essentially eliminated.

I got to see low-persistence mode in action, and it works. In the demo, I had the Rift attached to my face while I was looking at some big, red text in the virtual world ahead of me. The Oculus rep had me waggle my head back and forth, and I saw obvious ghosting. He then flipped on the low-persistence mode. The entire display became somewhat dimmer, though without any obvious flicker. I again waggled around my enormous noggin, and the text no longer left a blurry trail of red behind it as I moved.

Given the latency sensitivity of the application and the fact that a low-persistence display mode appears to be in the works for monitors based on Nvidia's G-Sync technology, I had to wonder if Oculus has been experimenting with G-Sync-like dynamic refresh rates, as well. (My guess: they totally are.) Sadly, the Oculus rep handling our demo wasn't willing to discuss that subject.

The other big enhancement in Crystal Cove is a major upgrade to the head tracking hardware. The sensors in previous Rift prototypes could detect orientation—roll, pitch, and yaw—but that was it. This revision incorporates infrared LEDs placed all around the front and sides of the headset, and their movement is tracked by a camera placed in front of the user. The camera and LEDs give the Rift true positional tracking of the wearer's head in 3D space.

As with the display changes, the positional tracking appears to work well. In our demo, we were encouraged to crane our necks around 180 degrees in an attempt to throw off the tracking. The display was set to revert to a grayscale mode with the loss of tracking, and invoking it was tough to do while sitting in a chair facing the camera, which is how the Rift is intended to be used. Even when one demo subject managed to contort himself well enough to hide the LEDs from the camera and cause a tracking failure, the system recovered quickly. The display shifted back to full color within two or three seconds after the headset came back into plain view.

The combination of positional tracking, a faster display, and low-persistence mode is meant to provide a better, more comfortable VR experience than past Rift prototypes. I wasn't able to use the Crystal Cove headset long enough to judge for myself, and I haven't felt many ill effects during brief stints with the earlier prototypes. However, the Oculus folks seem to think they've pretty much conquered the sickness problem. Even late in the week at CES, after presumably hundreds of demos to the press and industry, they claimed not to have found anyone yet who was sickened by using a Crystal Cove prototype. If true, that's very good news.

I can tell you that the Crystal Cove hardware provides an even more immersive and borderline magical experience than earlier revisions of the Rift. The AMOLED is a big upgrade just for the color quality and sense of depth. Also, the software being demoed makes much better use of the VR headset.

We first got a look at an Unreal Engine 4 demo called Strategy VR, created by the guys at Epic. The visuals in it are rich and detailed. I found myself hunching over and looking down, with my head nearly between my legs, peering over the edge of a virtual cliff in wonder.

The real star of the show, though, was the demo of Eve Valkyrie, the in-development game that's slated to be a Rift launch title. The Rift and this game breathe incredible new life into a genre that's been on the brink of death for some time now. When you slide on the headset, you find yourself sitting in the virtual cockpit of a space fighter. Some of the gauges are hard to make out at first, but if you lean forward, the text becomes clearer and easier to read. Above the gauges is a canopy, with a reeling space battle taking place in the sky beyond. The illusion of being there is strong, more so when you find yourself craning your neck to peer out of the canopy above and to your left, attempting to track an enemy fighter positioning itself on your six.

Having never played before, I scored -30, and my demo was over quickly due to an early death. The realism was impeccable.

Given the progress Oculus has made in the past year, we were left wondering how long it will be until the consumer version of the Rift hits store shelves. Right now, Oculus is being very cautious; it hasn't stated any timelines for the release of a final product. The firm says its goal is to be sure "VR is the right experience" for everyone who buys a headset.

Several components of that experience still need to come together before the Rift is ready for prime time. Oculus admits it's still working to improve the Rift's display resolution between now and the consumer product launch. That seems wise to me. When it's that close to your face and divided between two eyes, a 1080p display feels pretty low-res. If you stop and look, you can see the individual subpixels in the Crystal Cove's AMOLED array.

Also, the Rift currently lacks an audio component, which is a major omission. Oculus admits as much, calling positional audio "super-critical" to a VR experience, but it says it won't reveal any info yet about partnerships on the audio front. I assume that means there will be some.

For what it's worth, AMD had a gen-two Rift prototype on display in its CES booth along with a pair of headphones featuring positional audio generated by GenAudio's AstoundSound middleware and accelerated by a TrueAudio DSP block. I gave this setup a brief spin, and I'd say that's a pretty good start.

Oculus also has to make sure the Rift's game support is broad and deep enough to make the VR headset a compelling purchase. Eve Valkyrie looks amazing, but it won't suffice on its own. Fortunately, the company claims to have shipped about 50,000 Rift developer kits already, which should mean plenty of developers have Rifts strapped to their faces. In fact, one of the strange problems Oculus has now is not being able to track what everyone is doing with its development hardware. If the final headset is anywhere near as compelling as the prototypes, we've got to think there will be a steady stream of Rift-enabled applications released in the next couple of years.

That said, we could easily be waiting until CES 2015 or beyond before the Rift makes its way into its final, near-$300 form and ships to consumers everywhere. Given everything, it's easy to understand why that's the case. Still, having seen the goodness of Crystal Cove in action, a big part of me would like very much to hurry up and get on with the future, because it's really gonna be good.


An update on Radeon R9 290X variance
— 12:01 PM on December 10, 2013

We're still tracking the issue of Radeon R9 290X performance variance after our investigation into the matter last week and AMD's subsequent statement. As noted in that statement, AMD acknowledges that the apparent performance gap between the initial press samples and retail cards is wider than expected. Essentially, the variance from one 290X card to another ought not to be as broad as the 5-10% we're seeing. The firm is still investigating the reasons for this disparity, and an AMD rep paid a visit to Damage Labs yesterday in order to discuss the matter.

One thing the folks at AMD asked me to do is test the press and retail R9 290X cards against one another in the "uber" fan mode. Flipping the switch to enter "uber" mode raises the card's blower speed from 40% to 55% of its potential maximum. (Don't be fooled by the percentages there. The default 40% cap is loud, and the 55% cap is pretty brutal. At a full-on 100%, which you'd never experience during normal operation, the 290X blower sounds like a power tool. A noisy one.) You may recall from our original review that the 290X is quite a bit faster in "uber" mode. That's because in the default "quiet" mode, 290X cards constantly bump up against their thermal limits. "Uber" mode is intended to raise those limits by providing more cooling capacity.

At AMD's request, I ran our HIS retail 290X card and our original review sample through our 30-minute Skyrim test three times each and then through our MSI Kombustor worst-case torture test. The results are straightforward enough that I don't need to plot them for you. Cranking up the blower speed limit allows both the 290X press sample and the HIS retail card to run at a constant 1GHz in Skyrim. There are virtually no clock speed reductions via PowerTune. Consequently, both cards perform the same in "uber" mode, averaging about 83 FPS over the duration of the test.

The results from Kombustor are similar. The press sample stays pretty much glued to 1GHz. The HIS retail card's GPU clock dips intermittently to as low as 978MHz in this peak thermal workload, but it spends the majority of its time at 1GHz, as well.

This is an important point to the folks at AMD, because it means the card-to-card performance variance we've cataloged can be eliminated quite literally at the flip of a switch. Owners of retail R9 290X cards will have to be willing to put up with a non-default configuration that produces substantially more noise, but their cards should be able to obtain the same performance that the 290X review sample achieved in "uber" mode in our review.

Obviously, the "uber" mode switch isn't a fix for everything that ails retail R9 290X cards. Many folks will prefer the combination of noise levels and performance offered by the stock configuration, and the card-to-card variance there remains an issue.

The next step from here is interesting, because AMD expects its partners to produce cards, like the DirectCU II 290X that Asus teased recently, with custom cooling that surpasses the stock cooler by a fair amount. With more effective cooling, these third-party cards could offer "uber" mode-type performance—and, potentially, less card-to-card variance—even at lower noise levels.

We'll have to see how that shakes out once we get our grubby little hands on one of those custom 290X cards. I'm hoping that will happen soon. I also expect AMD to have something more detailed to say about the reasons for the unexpectedly broad card-to-card variance on current retail 290X cards before too long, so stay tuned.


A few thoughts on Nvidia's G-Sync
— 11:34 AM on October 21, 2013

On the plane home, I started to write up a few impressions of the new G-Sync display technology that Nvidia introduced on Friday. However, that attempt pretty quickly turned into a detailed explanation of refresh rates and display technology. Realistically, I'll have to finish that at a later date, because I have another big graphics-related project hogging my time this week.

For now, I'll just say that whatever its drawbacks—which are mainly related to its proprietary nature—the core G-Sync technology itself is simply The Right Thing to Do. That's why Nvidia was able to coax several big names into showing up to endorse it. Because G-Sync alters conventional display tech by introducing a variable refresh rate, there's no easy way to demonstrate the impact in a web-based video. This is one of those things you'll have to see in person in order to appreciate fully.

I've seen it, and it's excellent.

You may remember that I touched on the possibility of a smarter vsync on this page of my original Inside the Second article. In fact, AMD's David Nalasco was the one who floated the idea. We've known that such a thing was possible for quite some time. Still, seeing Nvidia's G-Sync implementation in action is a revelatory experience. The tangible reality is way better than the theoretical prospect. The effect may seem subtle for some folks in some cases, depending on what's happening onscreen, but I'll bet that most experienced PC gamers who have been haunted by tearing and vsync quantization for years will appreciate the improvement pretty readily. Not long after that, you'll be hooked.

In order to make G-Sync happen, Nvidia had to build a new chip to replace the one that goes inside most monitors to do image scaling and such. You may have noticed that the first version of Nvidia's solution uses a pretty big chip. That's because it employs an FPGA that's been programmed to do this job. The pictures show that the FPGA is paired with a trio of 2Gb DDR3 DRAMs, giving it 768MB of memory for image processing and buffering. The solution looks to add about $100 to the price of a display. You can imagine Nvidia could cut costs pretty dramatically, though, by moving the G-Sync control logic into a dedicated chip.
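
The memory math on that module is straightforward, for what it's worth:

```python
# Three 2Gb (gigabit) DDR3 chips on the G-Sync module add up to 768MB:
chips, gigabits_per_chip = 3, 2
total_mb = chips * gigabits_per_chip * 1024 // 8   # bits -> bytes
print(total_mb, "MB")   # 768 MB
```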

The first monitors with G-Sync are gaming-oriented, and most are capable of fairly high refresh rates. That generally means we're talking about TN panels, with the compromises that come along with them in terms of color fidelity and viewing angles. However, the G-Sync module should be compatible with IPS panels, as well. As the folks who are overclocking their 27" Korean IPS monitors have found, even nice IPS panels sold with 60Hz limits are actually capable of much higher update rates.

G-Sync varies the screen update speed between the upper bound of the display's peak refresh rate and the lower bound of 30Hz—or every 33 ms. Beyond 33 ms, the prior frame is painted again. Understand that we're really talking about frame-to-frame updates that happen between 8.3 ms and 33 ms, not traditional refresh rates between 120Hz and 30Hz. G-Sync varies the timing on a per-frame basis. I'd expect many of today's IPS panels could range down to 8.3 ms, but even ranging between, say, 11 ms (equivalent to 90Hz) and 33 ms could be sufficient to make a nice impact on fluidity.
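
For reference, here's the refresh-rate-to-frame-time arithmetic behind those figures; it's trivial, but it keeps the Hz and millisecond numbers straight.

```python
# Convert fixed refresh rates into per-frame intervals. G-Sync doesn't lock to
# any of these rates; the table just anchors the numbers quoted above.

def frame_time_ms(refresh_hz):
    return 1000.0 / refresh_hz

for hz in (144, 120, 90, 60, 30):
    print(f"{hz:>3} Hz -> {frame_time_ms(hz):5.1f} ms per frame")

# 144 Hz ->  6.9 ms
# 120 Hz ->  8.3 ms
#  90 Hz -> 11.1 ms
#  60 Hz -> 16.7 ms
#  30 Hz -> 33.3 ms   <- beyond this, G-Sync repaints the prior frame
```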

G-Sync means Nvidia has entered into the display ASIC business, and I expect them to remain in that business as long as they have some measure of success. Although they could choose to license this technology to other firms, G-Sync is just a first step in a long line of possible improvements in GPU-display interactions. Having a graphics company in this space driving the technology makes a lot of sense. Going forward, we could see deeper color formats, true high-dynamic range displays enabled by better and smarter LED backlights, new compression schemes to deliver more and deeper pixels, and the elimination of CRT-oriented artifacts like painting the screen from left to right and top to bottom. Nvidia's Tom Petersen, who was instrumental in making G-Sync happen, mentioned a number of these possibilities when we chatted on Friday. He even floated the truly interesting idea of doing pixel updates across the panel in random fashion, altering the pattern from one frame to the next. Such stochastic updates could work around the human eye's very strong pattern recognition capability, improving the sense of solidity and fluid motion in on-screen animations. When pressed, Petersen admitted that idea came from one Mr. Carmack.

AMD will need to counter with its own version of this tech, of course. The obvious path would be to work with partners who make display ASICs and perhaps to drive the creation of an open VESA standard to compete with G-Sync. That would be a typical AMD move—and a good one. There's something to be said for AMD entering the display ASIC business itself, though, given where things may be headed. I'm curious to see what path they take.

Upon learning about G-Sync, some folks have wondered whether GPU performance will continue to matter now that we have some flexibility in display update times. The answer is yes; the GPU must still render frames in a timely fashion in order to create smooth animations. G-Sync simply cleans up the mess at the very end of the process, when frames are output to the display. Since the G-Sync minimum update rate is 30Hz, we'll probably be paying a lot of attention to frames that take longer than 33.3 ms to produce going forward. You'll find "time beyond 33 ms" graphs in our graphics reviews for the past year or so, so yeah. We're ready.
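
For the curious, one way to compute a "time beyond X" figure is to add up only the portion of each frame time that exceeds the threshold. Here's a minimal sketch along those lines, with made-up frame times rather than data from any review:

```python
# Sum only the excess beyond the threshold for each long frame. The sample
# frame times are invented for illustration.

def time_beyond(frame_times_ms, threshold_ms=33.3):
    return sum(t - threshold_ms for t in frame_times_ms if t > threshold_ms)

sample = [16.7, 18.2, 45.0, 17.1, 70.3, 16.9, 33.0, 52.4]
print(f"{time_beyond(sample):.1f} ms spent beyond 33.3 ms")   # ~67.8 ms
```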

G-Sync panels will not work with FCAT, of course, since FCAT relies on a standards-based video capture card. We can use FCAT-derived performance data to predict whether one would have a good experience with G-Sync, but ultimately, we need better benchmarking tools that are robust enough to make the transition to new tech like 4K and G-Sync displays without breaking. I've been pushing both Nvidia and AMD to expose the exact time when the GPU flips to a new frame via an API. With that tool, we could capture FCAT-style end-of-pipeline frame times without the aid of video captures. We'd want to verify the numbers with video capture tools whenever possible, but having a display-independent way to do this work would be helpful. I think game engine developers want the same sort of thing in order to make sure their in-game timing can match the display times, too. Here's hoping we can persuade AMD, Nvidia, and Intel to do the right thing here sooner rather than later.


Here's why the CrossFire Eyefinity/4K story matters
— 3:43 PM on September 20, 2013

Earlier this week, we posted a news item about an article written by Ryan Shrout over at PC Perspective. In the article, Ryan revealed some problems with using Radeon CrossFire multi-GPU setups with multiple displays.

Those problems look superficially similar to the ones we explored in our Radeon HD 7990 review. They were partially resolved—for single displays with resolutions of 2560x1600 and below, and for DirectX 10/11 games—by AMD's frame pacing beta driver. AMD has been forthright that it has more work to do in order to make CrossFire work properly with multiple displays, higher resolutions, and DirectX 9 games.

I noticed that many folks reacted to our news item by asking why this story matters, given the known issues with CrossFire that have persisted literally for years. I have been talking with Ryan and looking into these things for myself, and I think I can explain.

Let's start with the obvious: this story is news because nobody has ever looked at frame delivery with multi-display configs using these tools before. We first published results using Nvidia's FCAT tools back in March, and we've used them quite a bit since. However, getting meaningful results from multi-display setups is tricky when you can only capture one video output at a time, and, rah rah other excuses—the bottom line is, I never took the time to try capturing, say, the left-most display with the colored FCAT overlay and analyzing the output. Ryan did so and published the first public results.

That's interesting because, technically speaking, multi-display CrossFire setups work differently than single-monitor ones. We noted this fact way back in our six-way Eyefinity write-up: the card-to-card link over a CrossFire bridge can only transfer images up to four megapixels in size. Thus, a CrossFire team connected to multiple displays must pass data from the secondary card to the primary card over PCI Express. The method of compositing frames for Eyefinity is simply different. That's presumably why AMD's current frame-pacing driver can't work its magic on anything beyond a single, four-megapixel monitor.
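
A quick tally shows which configs fit under that ceiling. The four-megapixel bridge limit is from the discussion above; the list of modes is just illustrative.

```python
# Which display configs fit within the CrossFire bridge's compositing limit?
BRIDGE_LIMIT_PIXELS = 2560 * 1600   # "four megapixels," per the bridge limit above

configs = {
    "2560x1600 single monitor": 2560 * 1600,
    "3840x2160 tiled 4K":       3840 * 2160,
    "3x 1920x1080 Eyefinity":   3 * 1920 * 1080,
    "3x 2560x1440 Eyefinity":   3 * 2560 * 1440,
}

for name, pixels in configs.items():
    route = "bridge" if pixels <= BRIDGE_LIMIT_PIXELS else "PCI Express"
    print(f"{name:25s} {pixels / 1e6:5.1f} Mpix -> composited over {route}")
```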

We already know that non-frame-paced CrossFire solutions on a single display are kind of a mess. Turns out that the problems are a bit different, and even worse, with multiple monitors.

I've been doing some frame captures myself this week, and I can tell you what I've seen. The vast majority of the time, CrossFire with Eyefinity drops every other frame with alarming consistency. About half of the frames just don't make it to the display at all, even though they're counted in software benchmarking tools like Fraps. I've seen dropped frames with single-display CrossFire, but nothing nearly this extreme.
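
Conceptually, spotting those drops in a capture is simple: every rendered frame carries a sequential overlay marker, and any marker that never shows up in the captured output was counted by Fraps but never displayed. Here's a toy sketch of that bookkeeping, using fabricated data shaped like the every-other-frame pattern I described:

```python
# Toy dropped-frame detector. Real FCAT analysis reads a colored overlay out of
# captured video; here each frame is reduced to a sequential integer ID.

def dropped_frames(captured_ids, total_rendered):
    shown = set(captured_ids)
    return [i for i in range(total_rendered) if i not in shown]

captured = [0, 2, 4, 6, 8, 10, 12, 14]     # IDs actually seen on screen
print(dropped_frames(captured, 16))         # -> [1, 3, 5, 7, 9, 11, 13, 15]
```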

Also, Ryan found a problem in some games where scan lines from two different frames become intermixed, causing multiple horizontal tearing artifacts on screen at once. (He published a screenshot of the effect.) I've not seen this problem in my testing yet, but it looks to be different from, and a bit worse than, the slight "leakage" of an old frame into a newer one that we observed with CrossFire and one monitor. I need to do more testing in order to get a sense of how frequently this issue pops up.

The bottom line is that Eyefinity and CrossFire together appear to be a uniquely bad combination. Worse, these problems could be tough to overcome with a driver update because of the hardware bandwidth limitations involved.

This story is a bit of a powder keg for several reasons.

For one, the new marketing frontier for high-end PC graphics is 4K displays. As you may know, current 4K monitors are essentially the same as multi-monitor setups in their operation. Since today's display ASICs can't support 4K resolutions natively, monitors like the Asus PQ321Q use tiling. One input drives the left "tile" of the monitor, and a second feeds the right tile. AMD's drivers handle the PQ321Q just like a dual-monitor Eyefinity setup. That means the compositing problems we've explored happen to CrossFire configs connected to 4K displays—not the regular microstuttering troubles, but the amped-up versions.

Ryan tells me he was working on this story behind the scenes for a while, talking to both AMD and Nvidia about problems they each had with 4K monitors. You can imagine what happened when these two fierce competitors caught wind of the CrossFire problems.

For its part, Nvidia called together several of us in the press last week, got us set up to use FCAT with 4K monitors, and pointed us toward some specific issues with their competition. One of the big issues Nvidia emphasized in this context is how Radeons using dual HDMI outputs to drive a 4K display can exhibit vertical tearing right smack in the middle of the screen, where the two tiles meet, because they're not being refreshed in sync. This problem is easy to spot in operation.


GeForces don't do this. Fortunately, you can avoid this problem on Radeons simply by using a single DisplayPort cable and putting the monitor into DisplayPort MST mode. The display is still treated as two tiles, but the two DP streams use the same timing source, and this vertical tearing effect is eliminated.

I figure if you drop thousands of dollars on a 4K gaming setup, you can spring for the best cable config. So one of Nvidia's main points just doesn't resonate with me.

And you've gotta say, it's quite the aggressive move, working to highlight problems with 4K displays just days ahead of your rival's big launch event for a next-gen GPU. I had to take some time to confirm that the Eyefinity/4K issues were truly different from the known issues with CrossFire on a single monitor before deciding to post anything.

That said, Nvidia deserves some credit for making sure its products work properly. My experience with dual GeForce GTX 770s and a 4K display has been nearly seamless. Plug in two HDMI inputs or a single DisplayPort connection with MST, and the GeForce drivers identify the display and configure it silently without resorting to the Surround setup UI. There's no vertical tearing if you choose to use dual HDMI inputs. You're going to want to use multiple graphics cards in order to get fluid gameplay at 4K resolutions, and Nvidia's frame metering tech allows our dual-GTX 770 SLI setup to deliver. It's noticeably better than dual Radeon HD 7970s, and not in a subtle way. Nvidia has engineered a solution that overcomes a lot of obstacles in order to make that happen. Give them props for that.

As for AMD, well, one can imagine the collective groan that went up in their halls when word of these problems surfaced on the eve of their big announcement. The timing isn't great for them. I received some appeals to my better nature, asking me not to write about these things yet, telling me I'd hear all about AMD's 4K plans next week. I expect AMD to commit to fixing the problems with its existing products, as well as unveiling a newer and more capable high-end GPU. I'm looking forward to it.

But I'm less sympathetic when I think about how AMD has marketed multi-GPU solutions like the Radeon HD 7990 as the best solution for 4K graphics. We're talking about very expensive products that simply don't work like they should. I figure folks should know about these issues today, not later.

My hope is that we'll be adding another chapter to this story soon, one that tells the tale of AMD correcting these problems in both current and upcoming Radeons.


Those next-gen games? Yeah, one just arrived
— 2:49 PM on March 1, 2013

We've been swept away by a wave of talk about next-gen consoles since Sony unveiled the specs for the PlayStation 4, and we're due for another round when Microsoft reveals the next Xbox. The reception for the PS4 specs has largely been positive, even among PC gamers, because of what it means for future games. The PS4 looks to match the graphics horsepower of today's mid-range GPUs, something like a Radeon HD 7850. Making that sort of hardware the baseline for the next generation of consoles is probably a good thing for gaming, the argument goes.

Much of this talk is about potential, about the future possibilities for games as a medium, about fluidity and visual fidelity that your rods and cones will soak up like a sponge, crying out for more.

And I'm all for it.

But what if somebody had released a game that already realized that potential, that used the very best of today's graphics and CPU power to advance the state of the art in plainly revolutionary fashion, and nobody noticed?

Seems to me, that's pretty much what has happened with Crysis 3. I installed the game earlier this week, aware of the hype around it and expecting, heck, I dunno what—a bit of an improvement over Crysis 2, I suppose, that would probably run sluggishly even on high-end hardware. (And yes, I'm using high-end hardware, of course: dual Radeon HD 7970s on one rig and a GeForce GTX Titan on the other, both attached to a four-megapixel 30" monitor. Job perk, you know.)

Nothing had prepared me for what I encountered when the game got underway.


I've seen Far Cry 3 and Assassin's Creed 3 and other big-name games with "three" in their titles that pump out the eye candy, some of them very decent and impressive and such, but what Crytek has accomplished with Crysis 3 moves well beyond anything else out there. The experience they're creating in real time simply hasn't been seen before, not all in one place. You can break it down to a host of component parts—an advanced lighting model, high-res textures, complex environments and models, a convincing physics simulation, expressive facial animation, great artwork, and what have you. Each one of those components in Crysis 3 is probably the best I've ever seen in an interactive medium.

And yes, the jaw-dropping cinematics are all created in real time in the game engine, not pre-rendered to video.

But that's boring. What's exciting is how all of those things come together to make the world you're seeing projected in front of your face seem real, alive, and dangerous. To me, this game is a milestone; it advances the frontiers of the medium and illustrates how much better games can be. This is one of those "a-ha!" moments in tech, where expectations are reset with a tingly, positive feeling. Progress has happened, and it's not hard to see.

Once I realized that fact, I popped open a browser tab and started looking at reviews of Crysis 3, to find out what others had to say about the game. I suppose that was the wrong place to go, since game reviewing has long since moved into fancy-pants criticism that worries about whether the title in question successfully spans genres or does other things that sound vaguely French in origin. Yeah, I like games that push those sorts of boundaries, too, but sometimes you have to stop and see the forest full of impeccably lit, perfectly rendered trees.


Seems to me like, particularly in North America, gamers have somehow lost sight of the value of high-quality visuals and how they contribute to the sense of immersion and, yes, fun in gaming. Perhaps we've scanned through too many low-IQ forum arguments about visual quality versus gameplay, as if the two things were somehow part of an engineering tradeoff, where more of one equals less of the other. Perhaps the makers of big-budget games have provided too many examples of games that seem to bear out that logic. I think we could include Crytek in that mix, with the way Crysis 2 wrapped an infuriatingly mediocre game in ridiculously high-zoot clothing.

Whatever our malfunction is, we ought to get past it. Visuals aren't everything, but these visuals sure are something. A game this gorgeous is inherently more compelling than a sad, Xboxy-looking console port where all surfaces appear to be the same brand of shiny, blurry plastic, where the people talking look like eerily animated mannequins. Even if Crytek has too closely answered, you know, the call of duty when it comes to gameplay and storylines, they have undoubtedly achieved something momentous in Crysis 3. They've made grass look real, bridged the uncanny valley with incredible-looking human characters, and packed more detail into each square inch of this game than you'll find anywhere else. Crysis 3's core conceit, that you're stealthily hunting down bad guys while navigating through this incredibly rich environment, works well because of the stellar visuals, sound, and physics.

My sense is that Crysis 3 should run pretty well at "high" settings on most decent gaming PCs, too. If not on yours, well, it may be time to upgrade. Doing so will buy you a ticket to a whole new generation of visual fidelity in real-time graphics. I'd say that's worth it. To give you a sense of what you'd be getting, have a look at the images in the gallery below. Click "view full size" to see them in their full four-megapixel glory. About half the shots were taken in Crysis 3's "high" image quality mode, since "very high" was a little sluggish, so yes, it can get even better as PC graphics continues marching forward.


As the second turns: Frame captures, CrossFire, and more
— 3:09 PM on February 26, 2013

I've been buried in my own work while preparing our Titan review, but lots has happened in the past few weeks, as many in the industry have moved toward adopting some form of game performance testing based on frame rendering times rather than traditional FPS. Feels like we've crossed a threshold, really. There's work to be done figuring out how to capture, analyze, and present the data, but folks seem to have embraced the basic approach of focusing on frame times rather than FPS averages. I'm happy to see it.

I've committed to writing about developments in frame-latency-based testing as they happen, and since so much has been going on, some of you have written to ask about various things.

Today, I'd like to address the work Ryan Shrout has been doing over at PC Perspective, which we've discussed briefly in the past. Ryan has been helping a very big industry player to test a toolset that can capture every frame coming out of a graphics card over a DVI connection and then analyze frame delivery times. The basic innovation here is a colored overlay that varies from one frame to the next, a sort of per-frame watermark.

The resulting video can be analyzed to see all sorts of things. Of course, one can extract basic frame times like we get from Fraps but at the ultimate end of the rendering pipeline. These tools also let you see what portion of the screen is occupied by which frames when vsync is disabled. You could also detect when frames aren't delivered in the order they were rendered. All in all, very useful stuff.
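
To give a flavor of the analysis, here's a simplified sketch of the basic idea: read a frame marker off each captured scanline and tally how much screen time each frame actually received. The real toolset works on captured DVI video and colored overlay bars; the numbers below are invented for illustration.

```python
# Tally on-screen time per frame from per-scanline frame IDs in one captured
# 60Hz refresh of a 1080p output. Invented data, simplified bookkeeping.

REFRESH_MS = 1000.0 / 60      # duration of one 60Hz refresh
LINES_PER_REFRESH = 1080      # scanlines per refresh at 1080p

def on_screen_time(scanline_frame_ids):
    """Map frame ID -> milliseconds of display time."""
    ms_per_line = REFRESH_MS / LINES_PER_REFRESH
    times = {}
    for frame_id in scanline_frame_ids:
        times[frame_id] = times.get(frame_id, 0.0) + ms_per_line
    return times

# One refresh split unevenly across three frames (vsync off, so tearing):
scanlines = [7] * 400 + [8] * 670 + [9] * 10   # frame 9 got only 10 lines
for frame, ms in sorted(on_screen_time(scanlines).items()):
    print(f"frame {frame}: {ms:5.2f} ms on screen")
# frame 7: 6.17 ms, frame 8: 10.34 ms, frame 9: 0.15 ms <- a near-invisible sliver
```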

Interestingly, in this page of Ryan's Titan review, he reproduces images that suggest a potentially serious problem with AMD's CrossFire multi-GPU scheme. Presumably due to sync issues between the two GPUs, only tiny slices of some frames, a few pixels tall, are displayed on screen. The value of ever having rendered these frames that aren't really shown to the user is extremely questionable, yet they show up in benchmark results, inflating FPS averages and the like.

That's, you know, not good.

As Ryan points out, problems of this sort won't necessarily show up in Fraps frame time data, since Fraps writes its timestamp much earlier in the rendering pipeline. We've been cautious about multi-GPU testing with Fraps for this very same reason. The question left lingering out there by Ryan's revelation is the extent of the frame delivery problems with CrossFire. Further investigation is needed.

I'm very excited by the prospects for tools of this sort, and I expect we'll be using something similar before long. With that said, I do want to put in a good word for Fraps in this context.

I hesitate to do this, since I don't want to be known as the "Fraps guy." Fraps is just a tool, and maybe not the best one for the long term. I'm not that wedded to it.

But Ryan has some strongly worded boldface statements in his article about Fraps being "inaccurate in many cases" and not properly reflecting "the real-world gaming experience the user has." His big industry partner has been saying similar things about Fraps not being "entirely accurate" to review site editors behind the scenes for some time now.

True, Fraps doesn't measure frame delivery to the display. But I really dislike that "inaccurate" wording, because I've seen no evidence to suggest that Fraps is inaccurate for what it measures, which is the time when the game engine presents a new frame to the DirectX API.

Taking things a step further, it's important to note that frame delivery timing itself is not the be-all, end-all solution that one might think, just because it monitors the very end of the pipeline. The truth is, the content of the frames matters just as much to the smoothness of the resulting animation. A constant, evenly spaced stream of frames that is out of sync with the game engine's simulation timing could depict a confusing, stuttery mess. That's why solutions like Nvidia's purported frame metering technology for SLI aren't necessarily a magic-bullet solution to the trouble with multi-GPU schemes that use alternate frame rendering.
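
Here's a tiny, entirely synthetic example of that point: the frames below reach the display at perfectly even intervals, but because the simulation advanced unevenly when their content was generated, the motion the viewer sees still lurches.

```python
# Evenly delivered frames, unevenly simulated content. All numbers invented.

delivery_times = [0.0, 16.7, 33.3, 50.0, 66.7]   # perfectly metered output (ms)
sim_times      = [0.0,  5.0, 33.0, 38.0, 66.0]   # when each frame's content "happened"

speed = 1.0   # object moving 1 unit per millisecond in the game world
positions = [t * speed for t in sim_times]

# Per-frame movement the viewer actually sees on screen:
steps = [b - a for a, b in zip(positions, positions[1:])]
print(steps)   # [5.0, 28.0, 5.0, 28.0] -> lurching motion despite even delivery
```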

In fact, as Intel's Andrew Lauritzen has argued, interruptions in game engine simulation timing are the most critical contributor to less-than-smooth animation. Thus, to the extent that Fraps timestamps correspond to the game engine's internal timing, the Fraps result is just as important as the timing indicated by those colored overlays in the frame captures. The question of how closely Fraps timestamps match up with a game's internal engine timing is a complex one that apparently will vary depending on the game engine in question. Mark at ABT has demonstrated that Fraps data looks very much like the timing info exposed by several popular game engines, but we probably need to dig into this question further with top-flight game developers.

Peel back this onion another layer or two, and things can become confusing and difficult in a hurry. The game engine has its timing, which determines the content of the frames, and the display has its own independent refresh loop that never changes. Matching up the two necessarily involves some slop. If you force the graphics card to wait for a display refresh before flipping to a new frame, that's vsync. Partial frames aren't displayed, so you won't see tearing, but frame output rates are quantized to the display refresh rate or a subset of it. Without vsync, the display refresh constraint doesn't entirely disappear. Frames still aren't delivered when ready, exactly—fragments of them are, if the screen is being painted at the time.
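
The vsync half of that trade is easy to quantify. In the simplest case, with no render-ahead, a frame that misses a refresh waits for the next one, so effective frame rates on a 60Hz display quantize to 60, 30, 20, 15 FPS and so on. A minimal sketch of that behavior:

```python
import math

REFRESH_MS = 1000.0 / 60   # fixed 60Hz display refresh

def vsync_fps(render_time_ms):
    """Effective frame rate with vsync, assuming no render-ahead or triple buffering."""
    refreshes_waited = max(1, math.ceil(render_time_ms / REFRESH_MS))
    return 60.0 / refreshes_waited

for t in (10, 17, 20, 34, 40, 55):
    print(f"{t:2d} ms render time -> {vsync_fps(t):.0f} FPS with vsync")
# 10 -> 60, 17 -> 30, 20 -> 30, 34 -> 20, 40 -> 20, 55 -> 15
```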

What we should make of this reality isn't clear.

That's why I said last time that we're not likely to have a single, perfect number to summarize smooth gaming performance any time soon. That doesn't mean we're not offering much better results than FPS averages have in the past. In fact, I think we're light years beyond where we were two years ago. But we'll probably continue to need tools that sample from multiple points in the rendering pipeline, at least unless and until display technology changes. I think Fraps, or something like it, fits into that picture as well as frame capture tools.

I also continue to think that the sheer complexity of the timing issues in real-time graphics rendering and displays means that our choice to focus on high-latency frames as the primary problem was the right one. Doing so orders our priorities nicely, because any problems that don't involve high-latency frames necessarily involve relatively small amounts of time and are inescapably "filtered" to some extent by the display refresh cycle. There's no reason to get into the weeds by chasing minor variance between frame times, at least not yet. Real-time graphics has tolerated small amounts of variance from various sources for years while enjoying wild success.
