
TR interviews ATI's David Nalasco


We plumb the depths of the Radeon X800 series
— 12:00 AM on May 24, 2004

THE INTRODUCTION OF ATI's Radeon X800 series completed a one-two punch of killer developments for graphics chips, as ATI countered NVIDIA's GeForce 6800 with its own 16-pipe monster capable of astonishing performance. These new chips are very evenly matched in terms of performance, so we've interrogated ATI's David Nalasco about places where they differ: shader models, OpenGL performance, antialiasing methods, and the cards themselves. We've also discussed some of the questions raised by the controversial discovery of ATI's adaptive trilinear filtering algorithm. Read on for an in-depth discussion about the hottest graphics issues of the day.

Introductions

Your title is Technology Marketing Manager. What does your role at ATI entail?

Nalasco: I go over the specs of our new products and look at all the new technologies that we're introducing, try to figure out which are the most interesting, and then come up with ways to explain them both internally and externally. For all the technologies like SMARTSHADER and SMOOTHVISION, I work out what is included in them and come up with ideas on how to demonstrate them. I also try to keep track of upcoming trends in the industry, and to feed some of the input we get from developers, from our users, and things like that back into what features we incorporate in future products.

How long have you been with ATI?

Nalasco: A total of almost six years now. I did about a year and a half of ASIC design work, and the rest of the time I've spent in marketing.

What was your background before you joined ATI?

Nalasco: I'm trained as an electrical engineer, so I've done various roles doing hardware and software work. I did some work in the telecom industry. I also have an MBA, a business degree, so after doing that I came back to ATI and got into marketing.

About the Radeon X800 cards

Did the Radeon X800 Pro ship on May 4 as originally planned? We've heard some rumors of delays.

Nalasco: No, it did actually ship on the fourth. As far as I know, within a day of that time—I don't know how it exactly worked out to the hour—but you were able to purchase them from at least Best Buy, I think, and I'm not sure which other retailers. Certainly, online, it may have taken a bit of time to filter to all the different stores, but we do believe you could actually purchase them on the fourth, so we were able to hit that date.

Have you all developed any plans for a $299 card yet? Seems like you might want to meet the competition there.

Nalasco: We're still investigating the possibilities of releasing a lower cost version of the X800 series, but at this stage we still don't have any announcements to make. We're still looking at the market and how sales go, and we'll have to determine at a later point if it makes sense to produce a product at that price point.

Will we see production Radeon X800 cards with dual DVI ports?

Nalasco: I can't confirm that at this point. There's certainly no reason why such a board couldn't be produced. I'm not personally aware of anyone who's shipping one as of yet, but I would not be surprised at all if you see one of those eventually, or in the near future.

Why leave this feature off of a card that costs over $300?

Nalasco: Well, it always comes back to the market for these products. We do repeatedly look at this, because it's interesting that it comes up in conversation more often than we would expect considering the total size of the market of people that have multiple DVI displays. I know our original thought was that as flat panels took off in popularity, and we did foresee that quite a while ago, there would be a large rise in demand for DVI boards. But I guess what we maybe didn't foresee was that a lot of the lower cost and mainstream flat panels would largely still be shipping with VGA inputs. And it is changing.

You know, DVI is becoming a more common feature on a wider range of flat panels, certainly, and I think as a result of that you're going to see a trend towards more DVI ports on cards. But again, it just comes down to the size of the market. It's not just us who looks at this; it's also our add-in board partners. They have to look at the economics of producing this card versus how many they're going to sell, and when it makes sense for them, they will produce it. Certainly, in order to do that—in order to support more than one DVI display—you have to, currently, add an extra component on to the board, and that does add additional cost. So you have to weigh the fact that you're going to be pushing that cost on to people who are not necessarily going to use it versus the potential marketing benefits of having it.

How far will X800 technology extend down the market in your coming generation of products? Will we see the new memory controller and longer shader programs in the $199 price range? Below?

Nalasco: I think you will see it transition in the same way it's occurred in all of our previous generations of products. Eventually, we're going to get the technology that's in X800 all the way down to all aspects of our product line, in probably about the same kind of time frame that we've done it in the past. So it starts off in the high end, goes into the mainstream, eventually makes it into the value segment and mobile segment, and even makes it eventually into the integrated segment. So really, no change in our strategy this time around.

Your competitor has committed to extending its latest GPU technology and features across its entire product range fairly soon. Wouldn't features like longer shader lengths and 3Dc compression benefit low-end graphics cards, as well?

Nalasco: Again, in time, we believe that there will be the demand for that, but where you see the initial demand for some of these more advanced features is always in the higher end products, where you have the early adopters, the people who are willing to pay a little bit extra to get at these things. One of our philosophies is that we don't like putting extra features into a chip where we don't think they're going to benefit the people who are going to buy a product in that market. We didn't put DirectX 9 into chips where the performance would be so low that it wasn't really going to be useful in any real application, and, you know, it's the same kind of philosophy this time around. When we think we can produce a product with adequate performance that allows you to actually take advantage of some of these new features, then that's when we'll add the features into that product. But if you try to introduce them too early, you're basically adding additional cost to the product that's not providing a benefit to the end user, and that's just something you want to avoid.

We've heard that the Radeon X800 is 160 million transistors and the GeForce 6800 is 222 million. Our own die size comparison showed about a 10% difference between the two chips. If one counted transistors by adding up all possible transistors on the chip like NVIDIA does, what would the X800's transistor count be?

Nalasco: One thing about counting transistors is that every time you see a transistor count on a chip, it's almost certainly a rough estimate. The reason is that there's no simple and straightforward way to just go and count all the transistors on a large ASIC. Usually, what you can at least get is a fairly accurate gate count. That can be generated by the tools when you're creating the chip. And from the gate count, you can make some assumptions to generate a reasonable estimate of the number of transistors by guessing at the number of transistors per gate. But of course, different gates in different places in the chip are going to use different numbers of transistors, and then there's the caches and things which have their own sort of unique counts of transistors. So what happens is that anything that you generate ends up being an estimate based on certain assumptions that you're making, and really, the transistor count figure is much less important than, as you mentioned, the actual die size that you end up with.
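The estimation process Nalasco describes can be sketched as a simple back-of-the-envelope calculation. The figures below are purely hypothetical examples, not ATI's or NVIDIA's actual numbers:

```python
# Sketch of estimating a chip's transistor count from its gate count,
# as described above. All figures here are hypothetical illustrations.

def estimate_transistors(gate_count, transistors_per_gate, cache_transistors):
    """Estimate total transistors: synthesized logic gates times an
    assumed transistors-per-gate factor, plus caches/SRAM, which have
    their own separately tallied transistor counts."""
    return gate_count * transistors_per_gate + cache_transistors

# Hypothetical example: 30 million logic gates at an assumed average of
# 4 transistors per gate, plus 20 million transistors of on-chip SRAM.
estimate = estimate_transistors(30_000_000, 4, 20_000_000)
print(estimate)  # 140000000
```

The point of the sketch is that the result swings with the assumed transistors-per-gate figure, which is why quoted transistor counts from different vendors are rough estimates rather than directly comparable numbers.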

And as you did say, whatever the transistor figures that are being quoted might indicate, our die size is approximately ten to fifteen percent smaller, we believe, than the GeForce 6800 series. So when it comes down to it, we believe we've been able to achieve better performance and better image quality with a smaller number of transistors, and that's really probably the most significant thing to take away from those marketing figures.

We noticed that the Radeon X800 cards consume less power sitting at the Windows desktop than the Radeon 9800 XT. Are you guys doing any clock speed throttling at idle on the X800?

Nalasco: No, there's no clock speed throttling, although there is the dynamic fan speed capability. I can't say for sure exactly what the source of that difference is, but that's certainly one possibility. We have multiple fan settings and those are tied to the load that the chip is under, so we do take advantage of that in the X800 series.

We've noticed great progress in ATI's Catalyst software suite over the past couple of years, with the notable exception of HydraVision. Your competition seems way ahead of you in dual-monitor support and desktop management. Any plans for an update to HydraVision, and if so, when?

Nalasco: Well, we are always improving the driver, and as you know, we're on a path to come out with a new version of our driver, an updated version of our driver, monthly. That will continue, and we do have plans for a fairly major update of our Catalyst software suite in the near future. Definitely look forward to that. HydraVision, being a part of the Catalyst software suite, will definitely be among the features that we are working on improving.

Will we see an HDTV tuner in the next generation of All-In-Wonder cards?

Nalasco: We are certainly cognizant of the appeal of HDTV and the broad adoption that it's gaining, and we're not turning a blind eye to that. Without saying anything explicit, I would say "stay tuned."