AMD’s Radeon HD 5670 graphics card

With its DirectX 11 graphics cards now populating the middle and upper echelons of the market—and last year’s supply issues largely behind it—AMD is now proceeding into the low-end arena. Today, we’re getting to see the first product of that expansion: the Radeon HD 5670, which should start cropping up in e-tail stocks soon for about $99.

To keep the price tag in the double digits, AMD has taken the world’s smallest scalpel and carefully sliced away some of the pixel-crunching resources from the bigger and more powerful GPUs in Radeon HD 5700, 5800, and 5900-series cards. That means the newcomer follows a similar recipe to the rest of its brethren, delivering DirectX 11 support and multi-display capabilities in a freshly minted 40-nm chip. You’re just getting it in a smaller, fun-sized portion.

Is fun-sized still tasty, or has AMD removed too much of the icing compared to other DX11 Radeons? That’s what we’re about to find out.

The ents are going to war!

AMD has nicknamed the Radeon HD 5670’s GPU “Redwood,” keeping with the coniferous naming scheme of its DirectX 11 Evergreen family. Under the hood (or bark, rather), Redwood contains half the execution resources of Juniper, the GPU in Radeon HD 5700-series cards, with the same 128-bit memory interface. Seeing as Juniper itself has half the execution resources and half the memory interface width of the Cypress chip that powers 5800- and 5900-series offerings, one could say Redwood is a quarter Cypress with a double-wide memory interface. But that’d be oversimplifying things just a tad. Here’s a human-readable overview of Redwood’s guts:

A block diagram of Redwood. Source: AMD.

Let’s see… We have five SIMD units, each containing 80 ALUs and tied to one texture unit capable of sampling and filtering four texels per clock. That gives us 400 ALUs (or stream processors) and 20 texels/clock, down from 800 SPs and 40 texels/clock on the Radeon HD 5770. AMD has also removed two of the ROP units, leaving Redwood capable of cranking out eight pixels each clock cycle—half as many as the 5770. Again, though, both the 5670 and its bigger brother have the same memory interface setup: 128 bits wide and compatible with 4Gbps GDDR5 memory.
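
Just to make those numbers concrete, here's a quick back-of-the-envelope calculation (a sketch in Python, assuming the 5670's 775MHz reference core clock and 4Gbps effective GDDR5 transfer rate) showing how Redwood's peak throughput figures fall out of the unit counts above:

    # Rough peak-rate math for the Radeon HD 5670 (Redwood), assuming a
    # 775MHz core clock and 4Gbps effective GDDR5 on a 128-bit bus.
    core_clock_ghz = 0.775
    simds, alus_per_simd = 5, 80
    texels_per_clock = 5 * 4      # one four-texel texture unit per SIMD
    pixels_per_clock = 8          # two render back-ends, four pixels each
    mem_bus_bits, mem_gbps = 128, 4.0

    stream_processors = simds * alus_per_simd          # 400 ALUs
    gflops = stream_processors * 2 * core_clock_ghz    # multiply-add = 2 flops -> ~620
    gtexels_s = texels_per_clock * core_clock_ghz      # ~15.5 Gtexels/s
    gpixels_s = pixels_per_clock * core_clock_ghz      # ~6.2 Gpixels/s
    bandwidth_gb_s = mem_bus_bits / 8 * mem_gbps       # 64 GB/s

    print(stream_processors, gflops, gtexels_s, gpixels_s, bandwidth_gb_s)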

All of this hedge trimming has left AMD with a small (we’ll look at die sizes soon), cheap-to-manufacture GPU that’s also quite power-efficient. According to AMD, the 5670 will draw just 14W at idle and 61W under a typical load. That means no PCI Express power connectors and hopefully low noise levels, despite the spartan-looking single-slot cooler.

Otherwise, as far as we can tell, Redwood has the same DirectX 11 capabilities and hardware features as Juniper. AMD advertises Eyefinity support, too, but the maximum number of supported displays is limited to four, and the reference card will only connect up to three displays.

The Radeon HD 5670 is but the first member of a whole budget DX11 lineup from AMD. For users with really tight budgets, the company plans to follow up in February with the Radeon HD 5500 and 5400 series. The former will cram Redwood chips and 128-bit memory interfaces into sub-50W thermal envelopes, while the 5400 series will be based on the Cedar GPU, whose memory interface is only 64 bits wide. AMD tells us the Radeon HD 5450 will have a low-profile form factor and enough brawn to run Tom Clancy’s H.A.W.X. on a three-display setup.

Considering the recent supply problems AMD has faced because of poor 40-nm yields at TSMC, one might be rightfully concerned about the availability of these products. We brought the subject up with AMD, which replied that it expects “multiple tens of thousands” of Radeon HD 5670 units to be available for the launch, followed by “similar quantities every week.” New 40-nm Radeons are seeing massive demand, AMD added, but 40-nm supply is now meeting the company’s expectations.

On the other side of the fence

Nvidia still doesn’t have any DirectX 11 products out, so if you want to get technical, the Radeon HD 5670 lacks direct competition for the time being. Out in the real world, though, Nvidia currently has two products buzzing close to the same $99 price point.

From a hardware standpoint, the GeForce GT 240 looks like the most direct competitor: it has a newish 40-nm graphics processor, GDDR5 memory, and DirectX 10.1 support. The GeForce GT 240’s GT215 graphics chip also packs 96 stream processors, two ROP units that can output eight pixels per clock, the ability to filter 32 texels per clock, and a 128-bit memory interface that can handle up to 1GB of memory. Default iterations of the card run with a 550MHz core speed, 1340MHz shader clock, and 3400MT/s GDDR5 memory.

For this round of tests, we’ve stacked up the Radeon HD 5670 against Zotac’s GeForce GT 240 512MB AMP! Edition, which comes out of the box with a non-negligible factory “overclock”—its GPU, shaders, and memory run at 600MHz, 1460MHz, and 4,000MT/s, respectively. This card currently sells for $98.99 before shipping at Newegg, so it’s taking the new Radeon head-on.

That’s not the whole story, though. Shortly after we finished running our benchmarks, Nvidia told us it had cut the price of its GeForce 9800 GT to $99 and pulled the vanilla GeForce GT 240 to $89. That means, the firm argued, that the GeForce 9800 GT is the most direct competitor to the 5670. We’re skeptical of that claim. As you can see in the pictures above and will see on the next page, the GeForce GT 240 and the Radeon HD 5670 have quite a bit in common, including 40-nm GPUs of similar sizes, 128-bit memory interfaces connected to GDDR5 memory, and no need for an external power input. The GeForce 9800 GT is a much older graphics card based on a much larger, 55-nm graphics chip; it has a wider memory interface coupled with slower GDDR3 RAM, a larger cooler and PCB, and generally requires an auxiliary power input. Also, the 9800 GT supports only DX10, while the GeForce GT 240 supports DX10.1 and the 5670 supports DX11. These three cards might briefly overlap in price here at the end of the 9800 GT’s run, but they are not truly comparable offerings, neither in capability nor in cost to produce.

Even if we did buy Nvidia’s argument, we didn’t have time to go back and add the 9800 GT to our comparison, since we only had two days to test and put this article together.

We have included the 9800 GT in our specs table, should you wish to see how it compares on paper. Let us direct your attention to the GT 240 and 5670, though, for the main event.

                         Peak pixel   Peak bilinear     Peak bilinear FP16   Peak memory   Peak shader arithmetic (GFLOPS)
                         fill rate    texel filtering   texel filtering      bandwidth     Single-issue   Dual-issue
                         (Gpixels/s)  rate (Gtexels/s)  rate (Gtexels/s)     (GB/s)
GeForce 9500 GT          4.4          8.8               4.4                  25.6          90             134
GeForce 9600 GT          10.4         20.8              10.4                 57.6          208            312
GeForce 9800 GT          9.6          33.6              16.8                 57.6          336            504
GeForce GT 240           4.4          17.6              8.8                  54.4          257            386
Zotac GeForce GT 240     4.8          19.2              9.6                  64.0          280            420
Radeon HD 4670           6.0          24.0              12.0                 32.0          480
Radeon HD 4770           12.0         24.0              12.0                 51.2          960
Radeon HD 5670           6.2          15.5              7.8                  64.0          620
Radeon HD 5750           11.2         25.2              12.6                 73.6          1008
Radeon HD 5770           13.6         34.0              17.0                 76.8          1360

The Radeon HD 5670 and the GeForce GT 240 match up pretty closely to one another on paper. The new Radeon leads in peak theoretical shader power and memory bandwidth, but the GT 240 has the edge in texture filtering capacity. Those of you who have been following the GPU architectures from these two firms in recent generations won’t be surprised to see such a split. Of course, how these things work out in practice doesn’t always match the expectations set by such numbers.
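
For the curious, the GeForce entries in the table can be reproduced the same way (a sketch in Python; the two GFLOPS columns reflect the usual convention of counting a multiply-add as two flops and a multiply-add plus the co-issued multiply as three):

    # Peak theoretical rates for the stock and Zotac GeForce GT 240 (GT215),
    # derived from the unit counts and clocks quoted earlier.
    def gt240_peaks(core_mhz, shader_mhz, mem_mts):
        sps, tmus, pixels_per_clock, bus_bits = 96, 32, 8, 128
        return {
            "Gpixels/s": pixels_per_clock * core_mhz / 1000,
            "Gtexels/s": tmus * core_mhz / 1000,
            "FP16 Gtexels/s": tmus * core_mhz / 2000,            # FP16 filtering at half rate
            "GB/s": bus_bits / 8 * mem_mts / 1000,
            "GFLOPS single-issue": sps * 2 * shader_mhz / 1000,  # MAD
            "GFLOPS dual-issue": sps * 3 * shader_mhz / 1000,    # MAD + MUL
        }

    print(gt240_peaks(550, 1340, 3400))   # stock GT 240
    print(gt240_peaks(600, 1460, 4000))   # Zotac AMP! Edition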

Who’s got the smallest chip?

           Estimated transistor   Approximate die   Fabrication
           count (millions)       size (mm²)        process node
G92b       754                    256               55-nm TSMC
GT215      727                    144               40-nm TSMC
Redwood    627                    104               40-nm TSMC
Juniper    1040                   166               40-nm TSMC
Cypress    2150                   334               40-nm TSMC

Please note that the numbers in the table are somewhat approximate, since they’re culled from various sources. Below are pictures of the various GPUs sized up, again approximately, next to a quarter for reference.

As you can see below, Redwood and the GT215 aren’t too far off in terms of die size and transistor count (although Redwood is the leaner of the two). Cutting computing resources by half has enabled AMD to reduce the transistor count by about 413 million compared to Juniper, resulting in a 37% smaller die. Redwood ended up even smaller than AMD’s previous $99 GPU, the 133-mm² RV740.
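
A quick sanity check on that arithmetic, using the approximate figures from the table above (Python):

    # Redwood vs. Juniper, from the approximate table figures.
    juniper_transistors_m, juniper_die_mm2 = 1040, 166
    redwood_transistors_m, redwood_die_mm2 = 627, 104

    print(juniper_transistors_m - redwood_transistors_m)            # ~413 million fewer transistors
    print(round((1 - redwood_die_mm2 / juniper_die_mm2) * 100))     # ~37% smaller die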

GT215 and Redwood in a staring contest

The 9800 GT’s G92b chip

The GT215

Redwood

Juniper

Cypress

Our testing methods

As ever, we did our best to deliver clean benchmark numbers. Tests were run at least three times, and we’ve reported the median result.

Our test systems were configured like so:

Processor                 Core i7-965 Extreme 3.2GHz
System bus                QPI 6.4 GT/s (3.2GHz)
Motherboard               Gigabyte EX58-UD5
BIOS revision             F7
North bridge              X58 IOH
South bridge              ICH10R
Chipset drivers           INF update 9.1.1.1015, Matrix Storage Manager 8.9.0.1023
Memory size               6GB (3 DIMMs)
Memory type               Corsair Dominator TR3X6G1600C8D DDR3 SDRAM at 1333MHz
CAS latency (CL)          8
RAS to CAS delay (tRCD)   8
RAS precharge (tRP)       8
Cycle time (tRAS)         24
Command rate              2T
Audio                     Integrated ICH10R/ALC889A with Realtek 6.0.1.5919 drivers
Graphics                  AMD Radeon HD 5670 512MB with Catalyst 8.69-091211a-093208E drivers
                          Zotac GeForce GT 240 512MB AMP! Edition with ForceWare 195.81 beta drivers
Hard drive                WD Caviar SE16 320GB SATA
Power supply              PC Power & Cooling Silencer 750 Watt
OS                        Windows 7 Ultimate x64 Edition RTM
OS updates                DirectX March 2009 update

Thanks to Corsair for providing us with memory for our testing. Their quality, service, and support are easily superior to what you get with no-name DIMMs.

Our test systems were powered by PC Power & Cooling Silencer 750W power supply units. The Silencer 750W was a runaway Editor’s Choice winner in our epic 11-way power supply roundup, so it seemed like a fitting choice for our test rigs.

Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests.

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Call of Duty: Modern Warfare 2

To test this game, we played through the first 60 seconds of the “Wolverines!” level while recording frame rates with FRAPS. We tried to play pretty much the same way each time, but doing things manually like this will naturally involve some variance, so we conducted five test sessions per GPU config. We then reported the median of the average and minimum frame rate values from all five runs. The frame-by-frame results come from a single, representative test session.
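
In case the aggregation isn't clear, here's roughly how the five FRAPS sessions per card get boiled down to the reported numbers (a sketch in Python with made-up frame-rate values for illustration):

    # Reduce five manual FRAPS runs to one reported average and minimum FPS,
    # as described above: take the median of each statistic across the runs.
    from statistics import median

    # Hypothetical (average FPS, minimum FPS) pairs from five playthroughs.
    runs = [(58.2, 41.0), (61.5, 39.0), (59.8, 43.0), (57.1, 40.0), (60.4, 42.0)]

    reported_avg = median(avg for avg, _ in runs)
    reported_min = median(low for _, low in runs)
    print(reported_avg, reported_min)   # 59.8 41.0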

We had all of MW2‘s image quality settings maxed out, with 4X antialiasing enabled, as well.

The newest Radeon looks to be off to a promising start in Modern Warfare 2, handily beating our hot-clocked GeForce GT 240 and delivering very playable frame rates at 1680×1050 with all the eye candy on. Definitely not bad for a $99 GPU.

Borderlands

We tested Gearbox’s post-apocalyptic role-playing shooter by using the game’s built-in performance test. We tested at 1366×768, 1680×1050, and 1920×1080 resolutions with all of the in-game quality options at their max, recording three playthroughs for each resolution. We couldn’t enable antialiasing, because the game’s Unreal Engine doesn’t support it.

Incidentally, we should probably point out a little snag we ran into with our lowest test resolution. 1366×768 is quickly taking over the realm of low-cost 16:9 displays, but Nvidia’s drivers don’t seem to support that resolution properly—our GeForce GT 240 ran Borderlands and our other games at 1360×768, instead. Those six missing columns of pixels probably don’t make much of a difference (we are, after all, looking at a minuscule 0.4% drop from 1.049 to 1.044 megapixels), but do keep it in mind. If the difference threatens to cause your OCD to flare up, just ignore those results and concentrate on the higher resolutions, instead.
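
If you want to check our math on that 0.4% figure, it's a one-liner (Python):

    # 1366x768 vs. the 1360x768 the GT 240 actually rendered in our tests.
    full, reduced = 1366 * 768, 1360 * 768        # 1,049,088 vs. 1,044,480 pixels
    print(f"{(full - reduced) / full:.2%}")       # ~0.44% fewer pixels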

The GeForce GT 240 pulls ahead here, scoring a small but solid victory over the Radeon HD 5670 at our two lowest resolutions.

Things get a little bit more complicated at 1920×1080: the GeForce GT 240 somehow produced much lower minimum frame rates than its competitor, despite a higher average. You’ll probably want to turn the resolution down to avoid choppiness in fast-paced firefights with either of these cards.

Left 4 Dead 2

In Left 4 Dead 2, we got our data by recording and playing back a custom timedemo composed of several minutes of gameplay.

The 5670 gets back on top here, and by quite a big margin.

To tell you the truth, though, even 51 FPS should be plenty smooth for zombie-killing frenzies. AMD’s card might let you crank the resolution up to 2560×1600, but we don’t imagine too many folks capable of affording a $1,000+ 30″ monitor will buy a $99 graphics card to start with.

DiRT 2

This excellent new racer packs a nicely scriptable performance test. We tested at the game’s “high” quality presets with 4X antialiasing in both DirectX 9 and DirectX 11 modes (DiRT 2 appears to lack a DX10 mode). For our DirectX 11 testing, we enabled tessellation on the crowd and water.

Chalk up another win for the AMD camp. On the Radeon, pushing the resolution from 1680×1050 to 1920×1080 barely impacts frame rates and keeps the minimum FPS at a very respectable 33. You can also enable DX11 effects, although that induces a sizable performance hit. The tessellated crowd probably deserves most of the blame there.

Power consumption

We measured total system power consumption at the wall socket using an Extech power analyzer model 380803. The monitor was plugged into a separate outlet, so its power draw was not part of our measurement. The cards were plugged into a motherboard on an open test bench.

The idle measurements were taken at the Windows desktop with the Aero theme enabled. The cards were tested under load running Left 4 Dead at a 2560×1600 resolution. We have a broader set of results here because we’ve included those from our Radeon HD 5970 review, which were themselves largely composed of data from our Radeon HD 5700-series review. Although we used older drivers for most of the cards in that review, we don’t expect that to affect power consumption, noise, or GPU temperatures substantially.

Interestingly, while the Radeon HD 5670 has a smaller GPU, it draws a teeny bit more power than our factory “overclocked” GeForce GT 240. Still, this is a very close race.

Noise levels

We measured noise levels on our test system, sitting on an open test bench, using an Extech model 407738 digital sound level meter. The meter was mounted on a tripod approximately 8″ from the test system at a height even with the top of the video card. We used the OSHA-standard weighting and speed for these measurements.

You can think of these noise level measurements much like our system power consumption tests, because the entire system’s noise levels were measured. Of course, noise levels will vary greatly in the real world along with the acoustic properties of the PC enclosure used, whether the enclosure provides adequate cooling to avoid a card’s highest fan speeds, placement of the enclosure in the room, and a whole range of other variables. These results should give a reasonably good picture of comparative fan noise, though.

Under load, the Radeon HD 5670 is the quietest contestant in our test bench by far. (The Radeon HD 5870 has kept that crown in the idle test.) Meanwhile, the Zotac GeForce GT 240’s fan seems to spin away at the same speed regardless of what you’re doing with your computer, producing quite a bit more noise in the process.

GPU temperatures

We used GPU-Z to log temperatures during our load testing. In the case of multi-GPU setups, we recorded temperatures on the primary card.

Ah, so those blissfully low load noise levels on the Radeon come with a trade-off. The 5670 sees its temperature shoot past the GeForce GT 240 and even the Radeon HD 5770 under load, settling at a toasty 71°C. We’ve seen far worse, though, especially from AMD’s last generation of cards.

Conclusions

AMD has succeeded in bringing DirectX 11 under the $100 mark with the Radeon HD 5670, and for the money, this is a very capable little graphics card. Aside perhaps from Borderlands, the 5670 ran all of our games smoothly at 1680×1050 with antialiasing and detail levels cranked up. Gamers on tight budgets shouldn’t require much more performance than that. Also, because it’s based on AMD’s latest architecture, this newcomer may perform better than its predecessors in GPU-compute applications—another selling point that could gain importance in the not-too-distant future.

Compared to the GeForce GT 240, the most similar design Nvidia offers right now, the new Radeon looks to be faster overall. Sometimes by quite a bit.

Nvidia’s recent pricing moves do put the Radeon HD 5670 up against the GeForce 9800 GT, as well. As we noted earlier, the 9800 GT likely offers higher overall performance, but it has the downside of being older, larger, hungrier for power—and, since it’s a 55-nm, DX10-only part, probably not long for this world.

If you want higher performance without having to compromise on the feature set, we would direct you to another Radeon instead. For a little over $50 more, AMD offers the Radeon HD 5770 1GB, a considerably faster product capable of running every new game out there at a 1080p resolution. Not a bad trade up for about the price of a game. That price difference might make the 5770 too big a leap for the most cash-strapped among us, and casual gamers probably shouldn’t bother. Still, anyone with a 1080p monitor really ought to consider stepping up.

As a side note, the relative strength of the Radeon HD 5670 bodes well for AMD’s new, notebook-bound Mobility Radeon HD 5700 and 5600 GPUs. AMD based those parts on the same Redwood chip, although it lowered speeds slightly to 650MHz for the GPU and 3.2Gbps for the memory. Nevertheless, Redwood’s mobile incarnation seems like it could deliver great gaming performance at notebook-friendly resolutions, hopefully without forcing users to lug around a big, bulky laptop.
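
Assuming the mobile parts keep the full Redwood configuration (400 stream processors, 128-bit memory) and only the clocks quoted above change, their peak figures shouldn't fall far behind the desktop 5670's (a rough sketch; actual mobile SKU configurations may differ):

    # Rough peak-rate estimate for a mobile Redwood at 650MHz core / 3.2Gbps memory,
    # assuming the full 400-SP, 128-bit desktop configuration.
    core_ghz, mem_gbps = 0.650, 3.2
    print(400 * 2 * core_ghz)     # ~520 GFLOPS, vs. ~620 for the desktop 5670
    print(20 * core_ghz)          # ~13 Gtexels/s, vs. 15.5
    print(128 / 8 * mem_gbps)     # 51.2 GB/s, vs. 64.0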

Comments closed
    • Dagwood
    • 10 years ago

    You could have added a 9600GT to the review, if memory serves me, you have done a review with that card. Similar price and it does not need an external power connector. And since you have not done a review of the low power 9800GT, this review would have been a good time to do one. The GT240 is a disappointment, it is priced in the 9800GT range but falls short of even 9600GT performance.

    I also question your conclusion that the GT240 is beaten by the 5670. Your own numbers do not agree with this conclusion. Sure, it clobbered the 240 when frame rates were in the 100’s, but when frame rates are near the 30’s the GT240 wins. This is really where the rubber meets the road.

    And lastly, you have continually hyped DX11 while nearly ignoring the “performance hit”. Most other review sites are quick to point out that if you want DX11 eye candy you are going to need an upper end card. Very few people are going to enable DX11 features if it means the frame rates drop into the 20’s. It is identical to GPU physics in terms of trading off frame rates for eye candy, but for some reason TR sees GPU physics as a gimmick and DX11 as a must have. It does not add up in my mind, unless flipmode is doing your reviews for you.

    • Vrock
    • 10 years ago

    I would have liked to have seen the 5670 compared to a 4670 and/or a 4830. The decision for me is not whether to purchase the 5670 or GTX 240, it’s whether to upgrade at all from the previous generation. I think showing how the previous generation’s performance stacks up should be a mandatory item, especially when reviewing budget cards (which are, of course, for budget-conscious people).

    • Palek
    • 10 years ago

    Damage/Cyril, I assume the Radeon 5670 card you benchmarked is a reference design from AMD, correct? (Sorry if I’m asking a question that is answered in the article, I did not spot any relevant info.)

    The reason I’m asking is 5670 cards have started to pop up here in Tokyo but so far none of them use the single-slot cooler seen on the card you tested. Instead all cards appear to be using the same dual-slot cooler that showed up on 4770 cards (or one that looks like it). It would be great to find out if the single-slot design will be mass-produced by any card vendors. (I have a Shuttle case that will not take dual-slot cards…)

    FYI, available 5670 cards are:

    Sapphire (1GB and 512MB):
    http://www.sapphiretech.com/presentation/product/?cid=&psn=0001&gid=3&sgid=915
    Powercolor (1GB and 512MB): http://www.powercolor.com/Global/products_layer_3.asp?ModelID=661
    Kuroutoshikou (local brand), 1GB: http://kuroutoshikou.com/modules/display/?iid=1468
    512MB: http://kuroutoshikou.com/modules/display/?iid=1469

    • puppetworx
    • 10 years ago

    Disappointing review which I mostly skipped through. I’d love to see a second go at this one.

    I personally want to find out how it performs in comparison with the rest of the Radeon HD 5000 line. Disappointingly, the 5750/5770 were tested using different games, so no direct comparison can be made.

    • anotherengineer
    • 10 years ago

    Is TR going to add to this review with comparisons to the 4670 card also??

    edit: http://www.techpowerup.com/reviews/HIS/Radeon_HD_5670_IceQ/

    • Shinare
    • 10 years ago

    I love the miserly space consumption of the card but wish it was dual DVI to allow my dual dvi monitor setup to work.

      • cataphract
      • 10 years ago

      It does support dual DVI; you just need an HDMI-DVI converter.

        • MadManOriginal
        • 10 years ago

        Yeah I think people get fooled by the wealth of output ports on the reference 5700 and 5800 cards. It can only do TWO TMDS ports (DVI or HDMI) at a time plus ONE DisplayPort.

        • BlackStar
        • 10 years ago

        Judging from my 4850, many (all?) vendors will include an HDMI->DVI adapter in the box, so that’s not an issue.

    • clone
    • 10 years ago

    to me the real question now is will Fermi have a DX11 lineup or will Nvidia coast on older G200 and 9xx hardware beneath its flagship lineup?

    if Nvidia doesn’t release a full lineup like ATI has and only has 2 or 3 DX11 supporting cards…. then Fermi will be much ado about nothing.

    when Fermi is finally released, hopefully ATI will drop prices, and suddenly a 5670 selling for $70 or less (like the 4670 has on occasion) will be a compelling little piece of hardware. Never mind that the 5750 and 5770 will be bouncing around $100, the 5830 and 5850 will bounce around sub-$200, and the 5870 will sit a little above $200 – $250.00

    that is a really compelling lineup and all of it from top to absolute bottom offers playable DX11 frames at really decent resolutions……. none of it will be DX11 “in name only”.

    Nvidia has to offer something to counter this and can’t just pretend that having the performance crown….. if they get it (5890) will be enough.

    Nvidia really needs to flush out the junk and get back in the game.

      • MadManOriginal
      • 10 years ago

      The thing is if Fermi architecture is high-end only then AMD will have little reason to drop prices on anything but perhaps the 5800s which also just happen to be the only real performance+ improvement cards in the 5k series. It seems that the price wars and the much greater price/performance we saw with the 4k series across the board may have been a fluke.

        • clone
        • 10 years ago

        l[

          • MadManOriginal
          • 10 years ago

          Umm, look at prices. I thought it was clear enough throughout my posts that is what I was basing comparisons on, at least in other posts, but if you think a $160 card is the replacement for a ~$100 card that’s an interesting proposal. Sure, the 5770 is a replacement for the 4770 if you just think ‘replacement’ means ‘same last 3 digits’ but their MSRPs not to mention street prices tell a whole other story. Going by MSRP and street prices the 5670 is actually the ‘replacement’ for the 4770.

            • Bombadil
            • 10 years ago

            Disregarding price, it looks like an update to the 4670–the 128-bit GDDR5 is overkill. It is actually lower on the performance spectrum than the 4670 was when it was launched. It is more like the 3650/2600/etc. Apple and other OEMs will love these for the $50 or so they will pay for them.

            • clone
            • 10 years ago

            you are coming from the position that prices will never fall.

            history is against you on that one.

            if you can’t see past the reality that prices will adjust, we’re done.

            that said I’m not sure I could buy any non DX11 parts today…… I’d hold off on 5670 and buy it later but if I had to buy a card today between a 4770 and a 5670…. that would be a tough call.

            I’d probably go 5670 for the DX11 support and, hopefully, driver improvements.

            • MadManOriginal
            • 10 years ago

            So you’re saying wait 6-12 months for possible price drops to see what’s a replacement for what? I fail to see the logic in that. And prices don’t ‘always drop’ – the 4770 is actually a perfect example of that. Just face it, the 5700 is not really a replacement for the 4770 and saying it is just to make up a comparison that shows better performance doesn’t make it so.

        • LawrenceofArabia
        • 10 years ago

        l[

      • Game_boy
      • 10 years ago

      They’ve launched GeForce 3xx for desktop and mobile. Mobile up to 360M I believe. That tells me that Fermi will fill the high-end only (as 360/380), until sufficient time has passed for there to be a 400 series.

    • liquidsquid
    • 10 years ago

    HTPC, perfect match IMHO, especially if a fan-less version is produced!

    • flip-mode
    • 10 years ago

    I can only imagine the horror and panic that must be consuming the corporate peeps at Nvidia. This is truly, truly the worst year they have had since the year of the 5800 Ultra. AMD has a COMPLETE DX11 lineup including mobile and Nvidia has yet to launch a single DX11 card. Not only that, but most of the available Nvidia cards are two-year-old designs. After they launch “Fermi”, how long till they roll out the rest of the lineup?

      • MadManOriginal
      • 10 years ago

      But if you only buy a new card when the old one isn’t fast enough the other guy having DX11 doesn’t matter does it? 😉

        • flip-mode
        • 10 years ago

        I don’t understand what you’re saying.

          • ImSpartacus
          • 10 years ago

          He’s talking about the general public.

          “My PC is slow!”

          *Goes to Best Buy*

          “I’ll sell you this $500 gaming PC with a 1GB graphics card!”

          “Oh my, I bet that is a lot faster than what I have now!”

          • MadManOriginal
          • 10 years ago

          It was slightly in jest but meant as a reply to your point #1 in post #44, you just didn’t mention the ‘NV no DX11’ in that post.

      • deruberhanyok
      • 10 years ago

      I’ve wondered that myself. “GT200” was supposed to cover their whole product line but as far as I know even the GT240 used for comparison here is based on the older G92/G80 silicon.

      So will the same thing happen for “GT300” aka Fermi? Two crazy expensive video cards while the rest of the product line continues to be recycled G92 hardware? I hope not… it wouldn’t make for very good competition in the mainstream price ranges.

        • BlackStar
        • 10 years ago

        It’s already happened. Check the 340 and 360 specs…

          • MadManOriginal
          • 10 years ago

          What specs where? The only 300 series card I know of officially is the GT310 (rebadged G210) and haven’t heard even solid rumors of other 300 series other than the highend.

            • thecoldanddarkone
            • 10 years ago

            He’s talking about mobiles.

            http://www.nvidia.com/object/geforce_m_series.html
            The only nice thing about this, they created a chart for easy comparing.

            • MadManOriginal
            • 10 years ago

            Well that’s hardly useful at all since mobile is never the same specs as desktop anyway, never mind that NV’s mobile renaming is even more atrocious than their desktop renaming.

            • BlackStar
            • 10 years ago

            Well, Ati’s 5xx0 mobile series is pretty close to their desktop parts in 3d capabilities (slower, of course. Also disregarding the 51×0 chips which are 4xx0 left-overs).

            On the other hand, the “new”, “high-end” 360 by Nvidia is a G92 re-rename, supporting DX10 (I hope their specs are mistaken here, but they really say DX10 rather than DX10.1…) Much as I love Nvidia, Ati has beaten them pretty thoroughly this round.

            • thecoldanddarkone
            • 10 years ago

            ignore this, I have no idea. Someone needs to do better at cut and paste.

        • Meadows
        • 10 years ago

        The chip is called GF100 (Fermi), or something along those lines. Also, if you’re talking about the GT 240 as a card, then do insert a space.

        Just to be clear.

        The GT 240 may be based on a fundamentally older design, but you can’t argue that it doesn’t work well regardless. After all, for how long have pipelines been used on videocards? Nobody said “oh my gosh this is technology that’s half a decade old” because it worked. NVidia also bothered to add GDDR5 and Dx10.1 to that chip at the very least, so it earns a small thumbs up here.

          • poulpy
          • 10 years ago

          Nope. It’s “pinkie up” max for this kind of innovations in 2010..

            • Meadows
            • 10 years ago

            When you look at the most popular games, their cards still provide you with more than enough. ATI only doubled the icing, but you still get a complete cake with either side.

            • poulpy
            • 10 years ago

            Nice cake analogy but off topic. Point is they’re just cruising off the back of a very strong 2 year old architecture almost untouched.
            Hence no thumbs up whatsoever.

          • grantmeaname
          • 10 years ago

          Nope, the GPU core for the GTX 260/280 is not called the GF100. Nice reading comprehension though!

            • Meadows
            • 10 years ago

            Wrong again. It’s called GT200.
            Fermi is called GF100.

            Nice attempt though! Idiot.

            • grantmeaname
            • 10 years ago

            That would explain the presence of the word r[<*[

            • Meadows
            • 10 years ago

            Nothing explains the fact why you called Fermi “GT300”.

            • MadManOriginal
            • 10 years ago

            Yeah, nothing except for the fact that he never actually called it that.

            • flip-mode
            • 10 years ago

            Read 66 then 78 then 104 then 105. Point goes to Meadows. deruberhanyok called Fermi GT 300 and Meadows was correcting him.

            • MadManOriginal
            • 10 years ago

            No, the point doesn’t go to Meadows. You guys need to pay more attention to the little yellow name next to posts, you know, the one that tells you who said what. :p

            • poulpy
            • 10 years ago

            Yeah I was going to say I didn’t see how it made sense for Meadows to misread /[

            • Meadows
            • 10 years ago

            Like I have time to read names. [/inexpensive excuses] At any rate, I was accused of bad reading comprehension by him first off (#78), which, if you observe correctly, was pretty hypocritical and stupid not least of all, so at that point I just made quick connections and started replying “as if…”.

            • MadManOriginal
            • 10 years ago

            Not reading names is pretty bad reading comprehension.

            http://youlose.ytmnd.com/

            • Meadows
            • 10 years ago

            Lack of comprehension doesn’t follow from lack of reading itself.

            • MadManOriginal
            • 10 years ago

            Whatever makes you feel better man.

    • derFunkenstein
    • 10 years ago

    On page 2:

    l[

      • Austin
      • 10 years ago

      ;) I wondered about that too.

      • mczak
      • 10 years ago

      Looks like this has been fixed. However, a 37% difference is a bit more than “similar die size”. The GT216 would be much more similar in die size…

    • Helmore
    • 10 years ago

    Nice review, but there is one thing I’m missing.
    What are the exact clock speeds of the 5670?

    I’m able to look it up myself, but it’s always nice to have a chart included in the review with all the specs of the different cards available at a single glance.

    • Hattig
    • 10 years ago

    Thanks for the review. There could have been more comparison cards, but the limited time for testing clearly put paid to that.

    I agree that it’s nice to see a spread of cards:

    – historical cards (for upgraders) – how much will these beat an 8600GTS for example.
    – cards +/- 20% of the price just to see if spending that little more is worthwhile.

    I agree that for $50 more, the 5770 is a good catch.

    Also a $100 card should be reviewed within a matching system (i.e., a mid-range system here), but I can understand trying to remove the system influence from the benchmark results as much as possible by over-specifying the hardware.

      • Anonymous Coward
      • 10 years ago

      I’d like to see an article where they went through a whole big range of cards, as far back as DX-9 and PCI-E can take them. Not every model, mind you, just some interesting ones. It would be interesting to see just how much progress actually has been made.

      The other day my brother’s old 3ghz P4 died (GF-7600). He replaced it with a Core i7 860 (R-4650). I have very little idea what that means for performance. Maybe the CPU is typically twice as fast single threaded, and the GPU probably more than that.

      I think that TR needs to do a “decade in review” review, or some other interesting time period.

        • tay
        • 10 years ago

        I like this idea, but it would be a tremendous amount of work.

    • jokinin
    • 10 years ago

    I don’t quite get why a rather low end graphics card is reviewed on such a high end computer.
    I’d rather have a review on a mainstream computer and comparing it against previous generation mainstream graphics cards.
    Besides, you can still buy some 4850’s for the same price, and 4870’s aren’t far off.

      • reactorfuel
      • 10 years ago

      This always comes up, and it’s always the same answer: if you’re trying to see how well a video card performs, you want the video card to be the limiting factor. A review where every card gets 15 fps, because the processor is holding them all back to that level, doesn’t tell you anything about the video cards.

      Since performance is almost always a matter of finding the bottleneck, you can look at a review of midrange CPUs (done with an ultra-high-end video card) and a review of midrange video cards (done with ultra-high-end CPUs). The slower of the two will determine your performance. Putting two reviews together this way is ultimately a lot more informative than one test with all-midrange hardware, because it lets you figure out any combination you’d like.

        • jokinin
        • 10 years ago

        While I see the point in your explanation, do you really think that a Core i3 or a Core 2 Duo would hold back this graphics card’s performance?
        I don’t think so.
        I think that it would be better to benchmark it on those kinds of machines too, to see if it’s a worthy upgrade on mid-range machines or not.
        And from this review I can’t get that kind of information.

          • grantmeaname
          • 10 years ago

          if there was a different config for the high end and low end GPUs it would require twice as much hardware at the benchmarking sweatshop without giving us any new information. In fact, it would be worse, because in reviews with a wide spread of cards (most GPU reviews), you couldn’t compare cards to each other unless they were built around the same system. On top of that, you wouldn’t know if low framerates were due to CPU, GPU, RAM…

          What good would it do? I can’t even see any benefits that it has, other than that you “gain” the inability to see the GPU’s raw performance.

            • MadManOriginal
            • 10 years ago

            This subject gets brought up in nearly every video card review I’ve seen for the last 10+ years. You’d think people would have learned by now.

            Anyhow you do have a point regarding upgrading older systems but what you’re looking for is a CPU review that has some gaming tests. Those will typically use a very fast graphics card for the flip-side of the reason video cards tests use very fast CPUs. So you have a CPU test that shows framerates with a certain CPU, non-GPU limited, then you cross-reference that with a GPU test, non-CPU limited. The lower number is what you’d expect pairing a given CPU with a given GPU – the ‘weakest link’ between the two is what determines performance.

            I wish more sites would set up their review information in database form so people could do such things without having to pore through articles.

          • travbrad
          • 10 years ago

          If a Core i3/2 don’t hold back the performance of the video card, then what is the point of testing with it? They are already using a CPU which doesn’t hold back the performance of the card.

          I do see how it would be nice to have a low-end card tested on a low-end computer, but as others have mentioned you can figure out pretty well exactly what kind of performance you’ll be getting if you also read the CPU reviews. They only have so much time. They can’t test a million different configurations.

        • Freon
        • 10 years ago

        I think if they had infinite review time it would be preferable to benchmark low-end cards on a more representative system.

        We very well could imagine a scenario where Card A for $Y is faster than Card B $Z on a top end system, they are about equal on a lower end system, and $Z<$Y. If the conclusion to pay $Y (>$Z) for Card A is based on that better performance, it is a poor conclusion since it would be money wasted for someone who didn’t have that top end system.

        This isn’t an awful method as is–I don’t have a huge problem with it, but I definitely see some practical limits on the conclusions you can draw. If one were to decide to use a lower end system to benchmark lower end cards, you still have to decide what exactly that is. It throws a huge wrench into being able to cross-compare a wide range of cards as well. Lots of decisions have to be made on what segment things are “supposed” to fit into and such. Not a problem with a clear solution, unfortunately.

        I think TR has tended to pick one standard high end system and stick with it as long as possible, and that method has its own advantages for comparing numbers across multiple reviews over time.

      • OneArmedScissor
      • 10 years ago

      Even low end CPUs are generally about 3 GHz now, quad-cores included. A low end CPU is not going to hold this card back.

    • Palek
    • 10 years ago

    Some more constructive criticism / whining:

    Since many people (including myself) will be looking at this card for Home Theater use, it would have been nice to see CPU usage, temperature and noise figures for Blu-ray / DVD playback. Also, GPU-accelerated video encoding (say, MPEG-2 VOB to H.264) performance tests could be a useful addition for future GPU reviews.

      • jinjuku
      • 10 years ago

      I agree 100%. The Radeon 5K family presents the first time that HTPC enthusiasts have an all-in-one, cost-effective solution for both HD video and now HD audio (bitstreamed, not LPCM).

      Just want to see what the 53XX series will do for HTPC at ~$50.

        • khands
        • 10 years ago

        Agreed, when it comes out I might have to recommend a fanless one to my dad for his HTPC build.

    • wira020
    • 10 years ago

    I would like to suggest reviewers to also include cards which are closest priced ( 1 higher priced and 1 lower ) in addition to the direct competitor of the card.. i hope 4 is not too much … because usually when i look for new something, i’d also consider if i add another $10 to my budget and get product B instead, will it be more than worth the extra money?… or if i buy the $15 cheaper product, will i miss out anything?… will it still satisfy my purpose?

    Thanks for the reviews btw…

    • fpsduck
    • 10 years ago

    Borderlands patch 1.2.0 is already released.
    But it might be a bad news for those who love Bloodwings.

    http://gbxforums.gearboxsoftware.com/showpost.php?p=1755756&postcount=1

    • BooTs
    • 10 years ago

    May need to drop Borderlands from the benchmarking line-up. I love the game, but it has serious issues. The PC support is (pathetically) limited from Gearbox, and the game isn’t nearly optimized in the same way for ATI cards as for Nvidia. “The way it’s meant to be played” is fine but the game underperforms for ATI cards compared to similar titles.

      • pot
      • 10 years ago

      The same thing happened with Dirt 2, except for Nvidia. Codemasters took out SLI support after ATI paid them to add DX11. The Dirt 2 DEMO has SLI support but then the delay comes when ATI pays them to add DX11 and all of a sudden, retail doesn’t support SLI. But if you add the demo .exe SLI is functional, very shady. I understand why Nvidia and ATI work with game dev’s to optimize for their cards but it seems to hurt the consumer sometimes.

        • BooTs
        • 10 years ago

        Wow. That’s a new level of ridiculous. “How about I pay you some money to un-do your work?”

          • BlackStar
          • 10 years ago

          Nope, not new. Remember Assassin’s Creed? Nvidia paid them to remove DX10.1 support after the game was released.

            • BooTs
            • 10 years ago

            You’re shattering my world.

      • clone
      • 10 years ago

      I’ve played Borderlands…. it’s not nearly so good looking compared to the performance demands.

      on a side note I liked it but not over the long run….. too much rinse and repeat.

      great idea just needs more bot comment variety and …. not longer missions ….. but something was missing I don’t know after 4 hours it got to be dull and I haven’t played it since.

      story wasn’t compelling, didn’t care to see it through to the end.

    • obarthelemy
    • 10 years ago

    to reviewers:

    When I’m looking for a graphics card, my question is not “what is the best card I can get for between $98 and $99”.

    it is:
    1- are the new cards within my budget (typically $100) that much better than my current one (which is usually 2-3-4 generations/years old) ?

    2- how much should I spend ? would spending $50 , $100, $150, $200 more give me significantly better results with my games (which are rarely the latest and greatest) ?

    3- What are the extra benefits or risks of upgrading ?

    The present review does not really answer any of these questions.

      • moritzgedig
      • 10 years ago

      1.) almost always – yes
      2.) unless you are discontent with the processing power of your current card or want a new feature, you don’t need to ask the question. If you want more processing power or new features, a new card will almost guarantee it. At the moment hardware is strong compared to the demand of the new software. Unless you have a screen with high resolution (>1280×1024) all but the cheapest will satisfy (8xAF 4xAA).
      3.) features and quality that don’t need the comparison you are asking for (to old hardware). Also guaranteed: extra environmental pollution.
      Next time you buy a (new) game you might want a more powerful GPU with DX10.1, but until then, why bother?

      • flip-mode
      • 10 years ago

      I think #23 is an absurd comment.

      1) You upgrade when your current card isn’t fast enough, not just when new cards are faster.

      2) You should spend whatever you decide to spend – you want someone else to tell you how much to spend? Spend what you can afford; get the fastest card you can afford.

      3) Benefits / risks of upgrading? If you’re asking such a basic question then you need additional reading beyond a video card review. This is a video card review, not an “idiot’s complete guide to video card upgrades”.

        • derFunkenstein
        • 10 years ago

        I have to admit, though, it’s weird to see only 2 cards listed in the benchmarks. Surely results for a couple other cards could be recycled from other reviews for convenience.

          • FuturePastNow
          • 10 years ago

          Anandtech’s review threw a lot of other cards into its charts, including a 3870 and 8800GT, for comparison to older cards.

          • flip-mode
          • 10 years ago

          It seems weird, but it is, IMO, totally consistent with what I believe to be TR’s modus operandi: a disdain for low-end cards. Scott is not the least bit interested in these cards and reviews them more out of a sense of duty than anything else and I’d be tempted to bet money that he gave real consideration to the thought of not reviewing the card at all.

            • nexxcat
            • 10 years ago

            He does mention “we had to get this out in 2 days”. It’s also possible that the card was loaned to them by AMD, on the condition that they get the review out in 2 days. I doubt they’d put that short a deadline on themselves if they were doing their own scheduling.

            • BooTs
            • 10 years ago

            That timeline would probably be when they got the card to test and when the NDA was lifted.

            • flip-mode
            • 10 years ago

            Ah, I didn’t read that part. So maybe it is not out of disdain for the card then, but because of the available time.

            • derFunkenstein
            • 10 years ago

            True, but he’s got a wealth of benchmarks on other cards from other reviews that could have just been added. I wouldn’t complain I know that much. Having to flip between reviews to see how cards perform on the same benchmarks is a pain. :p

            • BooTs
            • 10 years ago

            Agreed. A low-budget review for a low-budget card. Makes sense to me.

            • flip-mode
            • 10 years ago

            I’m sure gamers with low budgets would still like to see as comprehensive of a review as possible.

            • FuturePastNow
            • 10 years ago

            People with low budgets especially need comparisons to cards from 2-3 generations back, because they don’t upgrade often.

            • derFunkenstein
            • 10 years ago

            That implies that he’s just phoning it in. In that case, why make a review at all?

    • ssidbroadcast
    • 10 years ago

    Hm. Not a bad choice for the type of guy that wants to play DiRT2 from his couch, all on a 720p Hi def television.

    • DrDillyBar
    • 10 years ago

    Dx11 @ $99 is the catcher.

      • Freon
      • 10 years ago

      Did you miss the performance hit for enabling DX11 features? It’s not fast enough to run DX11! Sure it is possible, if you don’t mind a pretty slideshow.

      This has been a repeating pattern for years. The first slew of value and even mainstream cards for a new DX version are never really fast enough to use the new features. Great for checkbox wars, not so much for consumers.

        • Meadows
        • 10 years ago

        Slideshow? It’s not nearly as bad. The people who buy these probably don’t have a need for antialias and don’t even run 1920×1200.

        • grantmeaname
        • 10 years ago

        the minimum framerate was 25. that’s not a slideshow.

          • Freon
          • 10 years ago

          25fps is what I call a slideshow. Maybe ok for playing Bejeweled, but not an action or racing game.

            • moritzgedig
            • 10 years ago

            25fps is more than a movie at the cinema.
            I don’t call cinemas slideshows.

            • Vrock
            • 10 years ago

            Comparing a movie’s framerate to a video game’s framerate is comparing apples to oranges. And in the United States, movies are filmed and played back at 24 fps.

            • flip-mode
            • 10 years ago

            “25fps is more than a movie at the cinema.” (#136)

            • Vrock
            • 10 years ago

            Yeah, I misread it and thought he was saying movies were filmed at 25 fps. You got me! I FAIL!!!!!! ZOMG! I’m sure this is a great moment for you! Way to go.

            • flip-mode
            • 10 years ago

            No biggie; happens all the time.

            • Vrock
            • 10 years ago

            Well, I wouldn’t say you’re a douchebag /[

            • Freon
            • 10 years ago

            Is this supposed to be a troll or are you that ignorant?

            • flip-mode
            • 10 years ago

            “Troll” is such an overused term; it clearly was not a troll. He was making a point. This is all just opinion – what you consider a “slideshow” compared to what others do – so try to remember that.

            • MadManOriginal
            • 10 years ago

            Besides which, the movie framerate versus video game framerate debate has been done to death. They shouldn’t be compared as if they’re the same – movie is more ‘natural’ and has inherent motion blur, for example. For games it also depends upon the genre: you wouldn’t want to play most any shooter at 24FPS, but an RTS would be ok.

            • flip-mode
            • 10 years ago

            Agreed. Movies and games are different. I can even watch a stuttering game, but I can’t very well play a stuttering game.

            • Freon
            • 10 years ago

            If you can give me a game at 24fps that has been down sampled from infinite, we can start to have a reasonable conversation about this.

            Or we can show a film with lots of fast action for long periods of time (20-30 minutes at least) captured with a ridiculously fast shutter speed (let’s say, at least 1/1000th second max). You can bring the puke bucket and the Advil.

            Making a point? You’re giving him way too much credit. There’s no point here.

            • flip-mode
            • 10 years ago

            You’re free to counter the point (as you did), but a point was still made. Don’t just whip out the “troll” term because someone differs with your opinion and happens to back it up with a weak point.

            • grantmeaname
            • 10 years ago

            troll!

            In all seriousness though, everyone just loves that word this last two weeks or so… it’s getting old.

            • poulpy
            • 10 years ago

            Amen to that. I’d also add that “25fps = Slideshow” sounds more like a bait than an opinion to me..
            Won’t be silky smooth and I’d rather play at 40/50fps but slideshow is just ludicrous. More like 10/15fps.

            Google had this to say:
            http://www.gametrailers.com/user-movie/60-fps-vs-24-fps-a-comparison/85283
            Edit: streaming probably capped @30fps, file downloadable here: http://kimpix.net/2006/12/03/60fps-vs-24fps/

            • Meadows
            • 10 years ago

            Photography doesn’t downsample “infinite”, because in physics, there is no “infinite”.

            Leave that to the theoretical maths dudes.

    • Xenolith
    • 10 years ago

    Any chance of this going fanless, or low profile?

      • thecoldanddarkone
      • 10 years ago

      Well we have fanless 9800 gt’s, so I’m going to say yes.

      • OneArmedScissor
      • 10 years ago

      There’s a fanless 5750. There were even fanless 4850s.

      The problem is that fanless cards almost always cost a lot more if they’re in the $100 or higher price range. 🙁

      However, you could likely turn the fan down to the point where it’s inaudible, even if the case is right next to you. It’s not exactly in dire need of a significant air flow.

        • thecoldanddarkone
        • 10 years ago

        I totally forgot about those.

      • Anonymous Coward
      • 10 years ago

      How about fanless, and compatible with the classic Shuttle form factor? Sadly that sort of thing is trending down-market pretty badly.

      • adisor19
      • 10 years ago

      Yes, this needs to be done. It would be an amazing card for an HTPC.

      Adi

        • OneArmedScissor
        • 10 years ago

        Why would you want to add a power gobbling video card to a HTPC?

        Granted, if you also play games on it, that starts to make some sense, but only if your TV is 720p.

    • JustAnEngineer
    • 10 years ago

    I again bemoan TR’s repeated use of an overclocked card from one manufacturer against a stock-clocked card from the other. Worse still, the overclocked card is again not labeled as such on the performance charts. 🙁

      • asdsa
      • 10 years ago

      Yep, not really the fair scenario I’m used to seeing in TR, but the HD 5670 still takes the crown. I hope this site is not starting to be another Tom’s, where they stack overclocked Nvidia cards against reference Radeons with old drivers all the time.

      • Meadows
      • 10 years ago

      And that card still lost, so it’s not all bad.

      • ztrand
      • 10 years ago

      I disagree. From a consumer’s viewpoint the internal workings of a card are pretty irrelevant. As long as features are more or less equal, what matters is price vs. performance. In this case the price of the overclocked GF is equal to the Radeon (they differ in features, but that’s not due to the overclock). The average consumer barely knows what a silicon wafer is, so the greater engineering beauty of the Radeon is probably lost on them 🙂

      edit: I thought about it some more, maybe you are right. An uninformed consumer might not notice the card in the review is overclocked, go to the store and buy a non-overclocked one (maybe it’s even cheaper) and be disappointed.

      • wira020
      • 10 years ago

      I think it is fair since it’s most comparable in term of price… thats what most people care about when they want to buy something… which is the better architecture is often the question we want answers.. but in reality, when making decision of what to buy it comes back to budget…

        • JustAnEngineer
        • 10 years ago

        1. It will be very easy for a casual reader to be linked directly to one of the pages with the performance charts without closely reading the early introductory page that mentions that one of them is a cherry-picked overclocked card and not the run-of-the-mill stock card that you would expect from the series labels on the charts.

        2. The review provides no information about the performance of the more plentiful stock-clocked cards that are more likely to be available where the prospective consumer is shopping.

        3. Many times, the cherry-picked over-clocked card that TR reviews does not go down in price like the stock-clocked cards do. Six months from now, there may be great deals available on stock-clocked cards, but the super overclocked card may still be near its current price. The stock-clocked cards have more tendency to maintain price-competitiveness with the competitor’s stock-clocked cards.

      • derFunkenstein
      • 10 years ago

      I’d rather have something representative of what you can buy for the same dollars than a theoretical match-up.

      • SecretMaster
      • 10 years ago

      I agree as well. However my understanding is that the company sends out sample test cards to reviewers, such as Damage. I’d imagine they don’t really have a choice in the type of card they receive (stock or OC). Or at least, that is how I have envisioned the whole thing.

    • Suspenders
    • 10 years ago

    Looks like ATI is still firing on all cylinders.

    And with Nvidia still practically nowhere in sight.

      • ClickClick5
      • 10 years ago

      Nvidia is still leaking oil into cylinder number two and five on a V6.
      ATI brought out their turbo V8…

    • MadManOriginal
    • 10 years ago

    Ok, I was wrong-ish in previous front-page comments, partly because I'd thought the 4670 had 400 shaders at first, but this card seems to get more than a 25% boost over the 4670. Sadly, I had to go to other sites to see benchmarks versus more than one card *cough*, but overall it looks fair, at least roughly matching other $100 price-point cards. I do wish there wasn't this 'match what's available at the price point' thing going on, similar to the 5700s (at launch at least), but it is better than I'd suspected.

      • OneArmedScissor
      • 10 years ago

      The 4670 had hardly any memory bandwidth. This has plenty. That’s what really sets it apart, although it is a bit more powerful of a GPU.

      This is still a mind boggling disappointment to me, though. I thought it was going to use less power. It uses MORE than the 4670!

      That figures. So much for GDDR5’s “higher efficiency.” That was a farce in the beginning and after significant improvements, it’s still a farce. That apparently goes out the window real quick when you make it do twice the work.

    • JustAnEngineer
    • 10 years ago

    Single-digit prices or double-digit prices? If they were just $9 apiece, I'd probably try CrossFireX.

      • wira020
      • 10 years ago

      Lols…keep on wishing…

    • insulin_junkie72
    • 10 years ago

    Thanks for the review!

    • thecoldanddarkone
    • 10 years ago

    Not having the 9800 GT was a mistake from the very beginning. Not only do they have lower-power ones now, but they have been $90-110 for a while.

    edit
    No 4670 either…

      • Meadows
      • 10 years ago

      They had limited time for the review.

        • thecoldanddarkone
        • 10 years ago

        I know that, but how useful is the review? This review only tells me two things: it's faster than the GT 240, and it has good thermal/power usage.

          • Meadows
          • 10 years ago

          Isn’t that all one needs to know?

            • OneArmedScissor
            • 10 years ago

            No, because it really doesn’t have good power use. Therein lies the problem with limited comparisons…

            • insulin_junkie72
            • 10 years ago

            It seems that TSMC’s problems may have helped to extend the lifespans of both the 9800GT and 4850s (I thought the 4850 was done, but ATI told Anandtech they’re still being produced), and the 5670 is the third best of those three ~$100 cards for gaming performance, so the GT 240 isn’t the only still-in-production competitor.

            Anand pretty much concluded that the 5670 [late EDIT: is too slow] to actually play DX11 games, so that really isn’t a point in the card’s favor.

            • OneArmedScissor
            • 10 years ago

            Plenty of people may be considering an upgrade from the previous generation, as well.

            I can understand reviews being limited in comparisons because of time constraints, to keep things in perspective, and to keep the quality of results consistent, but there's a point of significant diminishing returns…

            • wira020
            • 10 years ago

            I don't mind if the article comes out late if it clears up stuff other sites don't care to investigate… instead of publishing things that hundreds of other reviewers have also done… but I usually trust TR more… just saying, it's better in my view if it's late but fully baked…

            • swaaye
            • 10 years ago

            I also think the GT 240 is a rather unpopular misfit of a card. It's not what people would be upgrading from, that's for sure. I'm not sure what reason there is to own it. Perhaps there's no reason to own a 5670 either, though, considering the other options at the price point.

            The DX10.1 GT 2xx series has been one big disappointment, and I suppose this review further shows that. It reminds me of how lame the X700 was compared to the 6600GT. Now NVIDIA is on the other side.

            • BoBzeBuilder
            • 10 years ago

            Which cards are you talking about?

            • swaaye
            • 10 years ago

            The new DX10.1 GT 2xx cards.

            • OneArmedScissor
            • 10 years ago

            I actually put a 6600GT back in my computer with a P35 board the other day and it cut down the power use quite a bit. Still handy, after all those years.

            That thing gets outrageously hot if you turn the little jet engine fan down from 100%, though. The “heatsink” is just a few aluminum fins enclosing the fan, blocking air from the memory chips lol.

            The biggest difference with newer cards is reference cooler designs, if you ask me.

            • Meadows
            • 10 years ago

            Other than the fact they’re all over five times more powerful than your 6600 GT.

            • OneArmedScissor
            • 10 years ago

            Not all of them, Captain Exaggeratory Rape.

            • Meadows
            • 10 years ago

            Yes, all of them.

            • insulin_junkie72
            • 10 years ago

            If these ( http://www.newegg.com/Product/Product.aspx?Item=N82E16814131331 ) ever make it down to the $109 price range they were expected to, the 5670 will be even more screwed unless its price drops.

            • MadManOriginal
            • 10 years ago

            By ATi’s handling of APIs I take it you mean Linux. So does that mean anything for those who don’t use Linux?

            As for the NV stuff, it’s cool to have it on GPU just for the power draw and geek factor but I’m curious if you or anyone knows how much CPU power is actually needed to do those same functions? I understand it’s nice to get a cheap and low power CPU and team it up with a GPU that can do certain things but I’m just curious if a more basic GPU and more powerful CPU might not be as usable. So are settings available in software to use the CPU for those functions?

            • insulin_junkie72
            • 10 years ago

            Not just Linux. For example, if you're using a program that talks to the CUDA API, you can access a lot of the video-hardware features you normally get during playback (the resizing, de-interlacing, etc.), which, depending on what you're doing, can save a ton of time since it avoids bottlenecks.

            HD de-interlacing, for example. The dedicated hardware on a GPU can make mincemeat of it compared to my Phenom II 955. On a 1080i OTA source, it’s twice as fast for me to run it on the GPU as opposed to the CPU.

            Since it’s rarely the only thing I’m doing to the source, even with multi-threading, having the CPU do it causes a slowdown in my whole script.

            Now if Nvidia could just take the hardware inverse telecine they have in their driver and expose it in the CUDA API, I'd be in heaven…

            While some commercial programs may be able to talk to ATI hardware, most folks are using open-source (or low-cost, small company/single developer type) stuff.
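
            For anyone curious what that per-pixel work looks like when it isn't running on the dedicated video engine, here is a minimal, hypothetical CUDA sketch of a bob deinterlacer for an 8-bit luma plane. It is only an illustration of the kind of math being offloaded; it is not the VP4 hardware path or the CUDA video API described above, and the kernel and buffer names are made up for the example.

            // bob_deinterlace.cu - illustrative sketch only (hypothetical names).
            // Rebuilds the missing scan lines of a top-field frame by averaging
            // the lines above and below; top-field lines pass through untouched.
            #include <cuda_runtime.h>
            #include <cstdio>

            __global__ void bobDeinterlaceTopField(const unsigned char* src,
                                                   unsigned char* dst,
                                                   int width, int height, int pitch)
            {
                int x = blockIdx.x * blockDim.x + threadIdx.x;
                int y = blockIdx.y * blockDim.y + threadIdx.y;
                if (x >= width || y >= height) return;

                if ((y & 1) == 0) {
                    // Even (top-field) lines are kept as-is.
                    dst[y * pitch + x] = src[y * pitch + x];
                } else {
                    // Odd lines are interpolated from their vertical neighbors.
                    int above = src[(y - 1) * pitch + x];
                    int below = (y + 1 < height) ? src[(y + 1) * pitch + x] : above;
                    dst[y * pitch + x] = (unsigned char)((above + below + 1) / 2);
                }
            }

            int main()
            {
                const int width = 1920, height = 1080, pitch = width;  // 1080i luma plane
                unsigned char *src, *dst;
                cudaMalloc(&src, pitch * height);
                cudaMalloc(&dst, pitch * height);
                cudaMemset(src, 128, pitch * height);                  // dummy grey frame

                dim3 block(16, 16);
                dim3 grid((width + block.x - 1) / block.x,
                          (height + block.y - 1) / block.y);
                bobDeinterlaceTopField<<<grid, block>>>(src, dst, width, height, pitch);
                cudaDeviceSynchronize();
                printf("kernel status: %s\n", cudaGetErrorString(cudaGetLastError()));

                cudaFree(src);
                cudaFree(dst);
                return 0;
            }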

            • MadManOriginal
            • 10 years ago

            So wait, are you talking about encoding or decoding (playback)? You mention playback in the first paragraph but then go on to say it 'saves a ton of time' – for playback, it's either fast enough or it's not; there is no 'saving time.'

            The second two paragraphs sound like encoding to me.

            • insulin_junkie72
            • 10 years ago

            Encoding.

            The point I was making was that with the CUDA API, for encoding purposes you can access many of the same hardware-based goodies that normally get used on playback.

            • swaaye
            • 10 years ago

            I’ve used Donald Graft’s CUDA-based H.264/VC1/MPEG2 avisynth source filter, which does resize too, on my old 8600GT. It is neat but not that big of an improvement compared to doing it all in software because there are other bottlenecks.

            • insulin_junkie72
            • 10 years ago

            I think NVtools is certainly worth the $15 if you encode a lot, particularly with HD sources. It may depend on one’s filter chain, but deinterlacing was my major bottleneck.

            (IVTCing is the only other thing I can’t currently work around, since you’re cruising through single-threaded-ville when you have to do it.)

            You can do some of the same things with the free MediaCoder, but it's a lot less flexible, and the last time I tried, it could only be used if you were doing the actual encoding on the GPU (obviously not recommended unless you're willing to throw a whole lot of bitrate at it).

            • MadManOriginal
            • 10 years ago

            Then perhaps the point about the GT240 is that it’s pretty weak overall for a CUDA card. It brings low power/performance but nothing special in absolute performance. Is there something that special about VP4 where it actually performs the things you do a lot better than an otherwise more powerful VP3 card?

            In any case, the post you first replied to was comparing the GT 240 and the HD 5670, and it said a person wouldn't be upgrading from one to the other; that's why more cards to compare would have been useful.

            • insulin_junkie72
            • 10 years ago

            The only VP3 desktop card is a later rev 8400GS. I don’t think that’s a more powerful card 😛

            Everything else from Nvidia other than the GT 210/220/240 is a VP2 card, regardless of price point.

            VP4 over VP2 adds full hardware acceleration of everything (AVC/VC-1/DivX-XviD/MPEG2), plus some other improvements here and there (resizing, etc.).

            I could have dropped down to a GT 220, I suppose, but the end price difference was very small when I bought, and I wanted 96 vs. 48 shaders for the one FFT filter I occasionally run that is shader-based (as opposed to the other things that run on the dedicated video hardware chip).

            • MadManOriginal
            • 10 years ago

            Ok, but you didn't answer my question. Specifically for encoding, because that's where it matters: does VP4 do things that previous ones absolutely can't, or things they just do slower (maybe per shader)?

            • insulin_junkie72
            • 10 years ago

            I thought I did.

            It's primarily speed. For the most part, VP2 cards can do the same things, but they can't do everything on the dedicated chip, where it's more efficient to do so.

            I did get a chance to run some things on my 8800GT before I got rid of it, and the slower GT240 does get me better framerates for the things I do.

            Well, VP4 cards do natively support the codec that 3D Blu-rays are going to be encoded in this year, but that's not exactly important right now.

            NVIDIA doesn't exactly spell out what changed too clearly ("We've done this, this, and this!"); going through the VDPAU docs helped a bit.

      • BoBzeBuilder
      • 10 years ago

      Agreed. I wish TR had delayed the article by a day and included more cards in the mix.

        • tfp
        • 10 years ago

        Or included the other cards, just like other sites did. I expect they had the same amount of time to test.

      • Anonymous Coward
      • 10 years ago

      Yeah I would have liked to see the R4670 in there.
