AMD to launch Radeon HD 4000 series in June?

So far, we’ve heard next to nothing from AMD about next-generation desktop graphics cards beyond the existing Radeon HD 3000 series. The company made no mention of any next-gen graphics products due this year in roadmap slides at its 2007 Financial Analyst Day last December, prompting some to believe it delayed the products and planned to keep offering only 3000-series cards throughout 2008.

According to German site Hartware.de, that may not be so. The site says it has received a roadmap for so-called Radeon HD 4000-series graphics cards from a reader. This roadmap shows seven upcoming cards, dubbed Radeon HD 4450, Radeon HD 4470, Radeon HD 4650, Radeon HD 4670, Radeon HD 4850, Radeon HD 4870, and Radeon HD 4870 X2, and it suggests they’ll be out before long.

The first two of the seven will supposedly have 40 stream processors each, while the 4600 series will have 240, the 4800 series will have 480, and the 4870 X2 will sport a total of 960 SPs. Graphics core clock speeds for the 4400 models aren’t listed, but other models will apparently run between 800MHz and 1050MHz. As for memory specs, those will range from 128MB of DDR2 with a 128-bit bus on the 4450 to 1GB of GDDR5 with a 256-bit bus on the 4870, and double the 4870’s specs for the 4870 X2. Overall, Hartware.de says we can expect the 4870 to offer maximum floating point processing power of just over one teraFLOPS—roughly twice that of the existing 3870.
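
As a rough sanity check on that teraFLOPS figure, here's a quick back-of-the-envelope calculation, assuming ATI keeps counting two floating-point operations (one multiply-add) per stream processor per clock, as it does for the current 3870, and taking the top of the quoted clock range for the 4870:

```python
# Peak shader throughput estimate: stream processors x 2 ops per clock (MAD) x core clock.
# The 4870 numbers below are the rumored specs; the 3870 numbers are its shipping specs.
def peak_gflops(stream_processors: int, core_clock_mhz: float) -> float:
    return stream_processors * 2 * core_clock_mhz / 1000.0  # GFLOPS

print(peak_gflops(480, 1050))  # rumored HD 4870: ~1008 GFLOPS, just over one teraFLOPS
print(peak_gflops(320, 775))   # existing HD 3870: ~496 GFLOPS, roughly half
```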

The burning question on everyone’s lips now is probably, "When will these cards be out?" Hartware.de warns that its information is unofficial and should therefore be taken with a grain of salt, but it says its reader’s roadmap pegs the release date at some time in June. No word yet on which models will become available first, though. (Thanks to TR reader KB for the tip.)

Comments closed
    • danny e.
    • 12 years ago

    did a longer check on all games at the link.. and used only the 1920×1200 scores. % diffs below (g92 gts over 3870)

    +60% ... Battlefield 2142
    +37% ... BioShock
    -91% ... Call of Juarez (score one for ATI! .. the only one)
    +74% ... Call of Duty
    +13% ... Crysis
    +42% ... ET: Quake Wars
    +52% ... HL2
    +102% ... STALKER
    +5% ... UT3
    +23% ... Lost Planet
    +68% ... Tomb Raider
    +21% ... Hellgate
    +80% ... Gothic 3
    +13% ... Oblivion
    +60% ... Company of Heroes
    +113% ... World in Conflict
    —————————————–
    AVERAGE = +42%

    did not include Command & Conquer since it's capped at 30 fps.

    hmmm seems i remember a number around 40% being mentioned before

      • pogsnet
      • 12 years ago
        • zgirl
        • 12 years ago

        we were not talking about 2 GPU cards. We were talking about single GPUs. But two 8800GTs in SLI will beat it. And that will end when Nvidia releases its X2 part.

        What we are trying to say is that, single GPU against single GPU, ATI is still behind in the game.

    • danny e.
    • 12 years ago

    [http://www.xbitlabs.com/articles/video/display/ati-radeon-hd3870-x2_12.html#sect1]

    wow.. i'm impressed that you care so little about your reputation that you'd link to a site that makes you look even more crazy. perhaps you linked to that particular game so we'd think you were smart? no.. no.. it just made you look like even more of a blinded fanboy. so, just because it's fun publicly humiliating you.. Xbit's scores, all 1920x1200 except Crysis @ 1280x1024 to more closely match TR's:
    ---------------------------
    Battlefield 2142: +60%
    Bioshock: +37%
    COD: +74%
    Crysis: +73% (yikes) .. the 1920x1200 is better
    ET QW: +42%
    HL2: +52%
    Stalker: +102%
    UT 3: +5%

    hmmm.. so you wanted us to see the 5% diff on UT3 according to that website and not look at any other games? you do realize that when i said "average of 40%" .. that actually means an average performance gain? well.. in that little short list of games we have an average of 56% .. that's even WORSE than the TR games showed. unbelievable.

    the saddest part about this is that i'm actually a fan of ATI .. have only ever owned one NVidia card.. and currently only own ATI. I guess being a true fanboy is really a whole different mess.

    • Fighterpilot
    • 12 years ago
      • zgirl
      • 12 years ago

      His math is right even on your link. Unless of course you are referring to the X2 card. Yeah nothing like comparing a dual GPU card to a single GPU card.

      At least Danny e. was doing a fair comparison.

      Stop thinking the X2 is the single fastest card when in fact it is a dual GPU solution. It should only be compared to other dual GPU solutions.

        • Fighterpilot
        • 12 years ago

        Duh, the results showed 3870 v GTS …… (the X2 results are irrelevant)
        His math is clearly shown there to be crap.
        Stop telling people to stop thinking things, genius… your posts are worthless.

          • danny e.
          • 12 years ago

          you really like making yourself look the fool, don't you.. you thought that link would prove your point? it makes you look even dumber than i already knew.

          • zgirl
          • 12 years ago

          My posts are worthless? I'm not the moron who posted a link that invalidated his own claim. Any idiot with half a brain can see upon casual inspection that those numbers closely match TR's own numbers. And thanks to Danny e.'s due diligence, we can see that the numbers are in fact even worse.

          One graph bar isn't going to help you here. Keep thinking you actually know something, genius, 'cause it's obvious that you don't. And I'll stop telling people to stop thinking things when you actually start thinking.

    • danny e.
    • 12 years ago

    for the blind folks that are bad at math. looking at the 1920×1200 scores.. with the exception of crysis where i look at 1280×800..

    G92 GTS % > HD 3870
    COD: +72%
    HL2: + 60%
    ET Q: +28%
    UT 3: +34%
    Crysis: +25%

    wait.. now what does that average out to? 43%.. and the thing that helped ATI was their very much improved Crysis scores.
    —————-
    for those of you who don't know how these things work…
    COD scores:
    G92 GTS = 47.2
    HD 3870 = 27.4

    difference = 19.8 fps

    19.8 fps is 72% of 27.4. …. so, alas.. in COD the GTS is 72% faster than the HD 3870.

    just trying to clear up things for the slower folk.
    —————-
    also of note.. i did this very rapidly..
    the scores i posted in the thread i linked to earlier were done more carefully… and reported more precisely.
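
    Spelling that arithmetic out as a small illustrative Python snippet, using the COD numbers above (the other games' raw FPS scores aren't listed in this comment):

    ```python
    # Percent advantage of card A over card B: (a - b) / b * 100.
    def percent_faster(a_fps: float, b_fps: float) -> float:
        return (a_fps - b_fps) / b_fps * 100

    gts, hd3870 = 47.2, 27.4                   # Call of Duty scores quoted above
    print(round(percent_faster(gts, hd3870)))  # 72 -> the GTS is ~72% faster in COD

    # Averaging the per-game percentages from the short list above:
    diffs = [72, 60, 28, 34, 25]               # COD, HL2, ET:QW, UT3, Crysis
    print(sum(diffs) / len(diffs))             # 43.8, the ~43% average quoted
    ```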

      • MadManOriginal
      • 12 years ago

      Nice cards, nevermind my pre-edit post :p

        • danny e.
        • 12 years ago

        no. you must be using fighterpilot math.

        72% of the GTS would be 33.984 fps.

    • crabjokeman
    • 12 years ago

    If they keep going like this, pretty soon they’ll be at Radeon 7000 or 9000. This is a clear violation of the space-time continuum. I demand they change the naming scheme.

      • evermore
      • 12 years ago

      They did. It’ll be the Radeon HD 7000. Clearly completely different. (And I pity the fools who’ll be sold actual Radeon 7000 cards as if they were the latest technology.)

      Actually, maybe by that point they’ll change to Radeon BD 7000 now that Blu-Ray has won. But of course, being AMD, the BD will in no way actually be referring to Blu-Ray Disc, it will mean something totally different and they don’t know why people think otherwise, just because it’s optimized for playing Blu-Ray video.

      (The above is a lame joke about Performance Ratings, thank you.)

    • PRIME1
    • 12 years ago

    4200ti?

    • evermore
    • 12 years ago

    Geez, another generation already? The 3000 series isn’t even mainstream yet.

    What’s with the huge leap between the low-end and mid-range? 40 SPs to 240 to 480? Six times the SPs from low to mid, then just twice as many for the high?

      • bjm
      • 12 years ago

      I always saw the whole 3800 series as just next-generation in name only. Hopefully the HD 4000 series will be the beginning of a new level of performance. We haven’t really turned any pages since the 8800 GTX was released way back in Nov. 2006.

        • willyolio
        • 12 years ago

        i hope it's truly next-gen, as in "runs DX10/10.1 as smoothly as DX9"

        the massive performance hit of this gen's DX10 implementation makes their DX10 compatibility just about worthless.

          • marvelous
          • 12 years ago

          Nothing has changed. This will be faster. Current cards run DX10 just fine. The only thing that doesn't is Crysis, and Crysis is more than just DX10: it's the level of graphical detail put into the game that makes it much slower, not the fact that it's DX10.

        • 0g1
        • 12 years ago

        Yeah, from AMD's own slides for the 3800 series it goes something like "same performance as the 2900XT with half the power usage". It's basically just an optimized 2900XT: 53 million fewer transistors, a 256-bit bus instead of 512-bit, 55nm instead of 80nm, and less than half the die size.

        The 4870 will have 50% more shader power, twice as many texture mapping units, and GDDR5 memory almost twice as fast as GDDR4. It seems as though AMD has been holding back with the 3800 and even the 2900 to get its foundations for the 4000 series just right.

        • moritzgedig
        • 12 years ago

        the 3800 was indeed just a better 2900.
        I find it interesting that AMD/ATI is putting out so many new parts in such a short time.
        The X1000 series was around for such a long time compared to the 2x00/38x0 series, or has my sense of time failed me again?
        I wonder whether the 4000 series will be something new or just the 38x0 with more of the same plus a new type of RAM.

      • Stefan
      • 12 years ago

      I do see a market for low-end cards (which can give you Aero goodness, but not much else). However, there is no real need for low-midrange cards, which are too weak for current games but overpowered for simply running Aero. So to me such a move would make sense.

    • 0g1
    • 12 years ago

    Hmm, a 256-bit bus, same as the 3870 …. I assume this bus width would allow for a higher memory clock frequency while giving up very little performance (as you can see by comparing the 3870 to the 2900). Does a 2200 MHz memory speed mean GDDR5-4400?

    nVidia was rumored to be prepping a card with 1 TFLOP capacity too, and it should be released a few months earlier than June. nVidia should use a ring bus memory architecture so they can move on from GDDR3. Even if they don't, they should still have the most powerful card if it has 1 TFLOP, over twice the number of TMUs, and almost twice the number of ROPs. To get to 1 TFLOP, they'd need about 192 stream processors at 2600 MHz, or 256 at 1950 MHz.

    Another thing I am not sure about is how AMD can get the core to run at 1050 MHz. They must have separated the stream processors from the core somewhat like the 8800 series. This thing should be over 1 billion transistors if it scales from the 3870's 666 million.

    The nVidia card would be over 1.1 billion with 192 stream processors and 1.5 billion with 256 SPs… can't see 256 happening though!
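
    For concreteness, here's a quick sketch of peak memory bandwidth under the two possible readings of that 2200 MHz figure (illustrative numbers only, nothing here is a confirmed spec):

    ```python
    # Peak memory bandwidth: bus width in bits / 8 bits-per-byte * transfer rate in MT/s.
    def peak_bandwidth_gbs(bus_width_bits: int, transfer_rate_mtps: float) -> float:
        return bus_width_bits / 8 * transfer_rate_mtps / 1000  # GB/s

    # 256-bit bus from the rumored 4870 spec:
    print(peak_bandwidth_gbs(256, 2200))  # if 2200 is already the effective rate: 70.4 GB/s
    print(peak_bandwidth_gbs(256, 4400))  # if 2200 MHz gets doubled by DDR signaling: 140.8 GB/s
    ```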

      • BoBzeBuilder
      • 12 years ago

      No. A 256-bit interface is half as fast as a 512-bit one, so its performance is disappointing. The only upside is that it's cheaper. And 2200MHz memory translates to 11000MHz since it's GDDR5.

        • Nitrodist
        • 12 years ago

        I assume you mean 1100mhz, not 11000.

        Yeah, most likely it’s 1100mhz, 2200mhz effective.

        • 0g1
        • 12 years ago

        The 3870 is 256 bit and its performance isn’t disappointing compared to the 2900. How can GDDR5-2200 translate to 1100 mhz when GDDR4 is already faster than this?

      • marvelous
      • 12 years ago

      Yes, 2200MHz is DDR, so 4400MHz effective. I think GDDR5 might have higher latency though.

        • Saribro
        • 12 years ago

        Nono, feet back on the ground. It’s 1100Mhz <-> 2200MTransfers/s :D.

          • marvelous
          • 12 years ago


    • ew
    • 12 years ago

    Can all you idiots please learn to use the Reply button? It helps tremendously with skipping over inane arguments.

      • danny e.
      • 12 years ago

      irony? bah!

        • Tycho
        • 11 years ago

        The ironic thing is most of the pointless inane arguments are being fueled by you.

    • 2_tyma
    • 12 years ago

    and the million dollar question is……. how does it perform in crysis??

      • danny e.
      • 12 years ago

      well, it would need to be twice as fast as the HD 3870 to perform well..
      here's hoping. I've been putting off my video card upgrade for way too long.. it's almost time for an entirely new system build.

      • marvelous
      • 12 years ago

        It's going to help some, but no way is it going to play Crysis with max detail at reasonable resolutions.

        • Tycho
        • 11 years ago

        Geforce 10 anyone?

    • ssidbroadcast
    • 12 years ago

    Um, what’s the codename behind the GPU? Would it be another r6 ?

    • danny e.
    • 12 years ago

    ugh why i even bother trying to correct blatant fanboys is a wonder… but here goes:

    current newegg prices.

    the G92 GTS = $290
    the HD 3870 = $220

    this gives the following price/performance ratios (higher is better):
    G92 GTS 4.90
    HD 3870 4.58

    So, no matter how you argue it.. the G92 GTS is still a better performer for the price.
    ————–
    As for the argument about non-released items.. we're all just guessing. What I was pointing out was EVEN IF the HD 4870 were only the exact same core except for more SPs.. it would be roughly 50% faster. I don't care what some website says about what they think performance would be. 480 is 50% more than 320… not 100% more.

    Now, perhaps the core is significantly improved.. or the clock speed goes up.. that could change things.. but that's not what i said in my first post..
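
    To make the price/performance comparison above explicit, here's a minimal sketch of the math behind it. The post doesn't say which benchmark aggregate produced 4.90 and 4.58, so the performance index values below are hypothetical, chosen only to reproduce the posted ratios:

    ```python
    # Price/performance = aggregate performance index / street price in USD.
    def perf_per_dollar(perf_index: float, price_usd: float) -> float:
        return perf_index / price_usd

    cards = {
        "G92 GTS": (1421.0, 290.0),  # hypothetical index, Newegg price from the post
        "HD 3870": (1008.0, 220.0),  # hypothetical index, Newegg price from the post
    }

    for name, (index, price) in cards.items():
        print(f"{name}: {perf_per_dollar(index, price):.2f}")
    # G92 GTS: 4.90, HD 3870: 4.58 -> the GTS delivers more performance per dollar
    ```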

    • Fighterpilot
    • 12 years ago

    G92 GTS is roughly $100 more expensive than the 3870 cards and consequently is in a different class.
    Presuming similar price differences in the next-gen series, it is a spurious argument to claim "it would barely beat the GTS".
    As the article speculates… "over 1 TFLOP of floating point processing for the 4870"… that would give it roughly double (100%) the horsepower of the 3870… not a 50% increase…

    • danny e.
    • 12 years ago

    even if the RV770 was the exact same thing as the RV670 with more cores, it would still be a 50% performance boost… not too shabby… but also not enough to dominate anything. The G92 GTS is approx 40% faster than the HD3870, so if the HD 4870 was just 50% faster .. it would barely beat the G92 GTS.

    still good though.. especially if the price is right.
    If ATI can continue to improve the drivers for the X2 versions.. they might be on to something.

      • pogsnet
      • 12 years ago
        • toyota
        • 12 years ago

        it's all in the wording. 100% would mean equal, not twice the performance.

        • danny e.
        • 12 years ago

        480 is 50% more than 320, not 100% more. would you perhaps be a fighterpilot?

          • Furen
          • 12 years ago

          You also have to look at clock speeds. 96 SIMD arrays (what ATI calls 480 shader processors) means that the core is 50% wider (with 2x the TMUs, by the way). Now, take into account that clock speeds may be hiked some 35% (from 775ish to 1050ish), and then you have a 50% wider core at a 35% higher clock which, roughly, would give us 203% of the 3870's performance.

          This, of course, is the reason why this GPU would NEED GDDR5: doing twice the work should require twice the memory bandwidth. GDDR4 was a pretty big flop because memory clocks just didn't ramp up all that well (or rather, GDDR3 ramped up too well).

          I am a bit doubtful ATI can hit those power consumption targets but if it can then it should have a pretty compelling product on its hands.
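
          As a back-of-the-envelope check on that estimate (a naive model that assumes performance scales linearly with both shader width and clock, which real workloads rarely do):

          ```python
          # Naive throughput scaling: relative shader width * relative core clock.
          width_factor = 480 / 320    # 480 SPs vs. the 3870's 320 -> 1.5x wider
          clock_factor = 1050 / 775   # rumored ~1050 MHz vs. the 3870's ~775 MHz -> ~1.35x
          print(width_factor * clock_factor)  # ~2.03, i.e. roughly 203% of 3870 performance
          ```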

      • 0g1
      • 12 years ago

      I wouldn't call it 40% faster. Sometimes it's 5% … sometimes it's 70%. On average, I'd say more like 25%. Depends on the game and whether it uses a lot of texturing fill rate — AMD is about 1/3 of nVidia in bandwidth there.

      AMD calculates the FLOPS as MADs per second, so just GHz * 2 * SPs = over 2x the FLOPS of the 3870. It should dominate the GTS even though it has half the number of TMUs.

        • danny e.
        • 12 years ago

        around 40% for the average of 5 relatively new games.. easier to quote an average than to list the differences across many different apps/games.

      • shank15217
      • 12 years ago

      The G92 GTS isn't 40% faster than the 3870 on average; cite your information.

        • green
        • 12 years ago

        i take it you didn’t bother reading all the posts here?
        otherwise you would have seen #11

          • shank15217
          • 12 years ago

          he cited his own analysis, and I don't see a 40% performance difference between the 3870 and the G92 in the latest Tech Report review.

            • danny e.
            • 12 years ago

            then you’re blind or really bad at math.

            • Tycho
            • 11 years ago

            well, blind people can still read, not sure about you though.

    • BoBzeBuilder
    • 12 years ago

    I highly doubt the next generation of cards is going GDDR5. I'm guessing those specs are nothing but speculation.

    • VILLAIN_xx
    • 12 years ago

    GDDR5? damn, already!?

      • alex666
      • 12 years ago

      Nice link Marvelous. They really go into full detail on this. If AMD could get these cards out on schedule, they would really be back in the video card game.

        • SPOOFE
        • 12 years ago

        Me likey the power consumption and price information, if it turns out to be accurate.

      • 0g1
      • 12 years ago

      Nice one. One thing that doesn't make sense from that article is "ATI's future is value-oriented and if you're longing for that 1 billion transistor megachip, you can stop doing it right now. ATI isn't into that."

      If the 4870 has 50% more SPs than the 3870 and the 3870 has 666 million transistors, doesn't that make 1 billion?

        • evermore
        • 12 years ago

        SPs in each generation don’t necessarily have the same number of transistors, plus a large part of the chip is not made up of transistors dedicated to the SPs. Then cache, memory controller circuits, etc. all may vary in the number of transistors.

          • 0g1
          • 12 years ago

          Yes, good point, I shouldn't assume so much. Still, I think it's possible for AMD to make a billion-transistor chip even if they are focusing on the budget sector.

      • My Johnson
      • 12 years ago

      They are finally increasing the TMUs and ROPs to GeForce-equivalent counts or so? About time.

        • 0g1
        • 12 years ago

        Pity the G92 GTS is already at 64 TMUs, double the "increased" 4870's count. The TMUs of nVidia and AMD aren't really comparable though, because they process textures at different rates. Maybe AMD has improved its TMUs even more this generation.

        You'd hope so anyway, given their relatively poor performance with 16x aniso compared to nVidia. Also, AMD currently dominates nVidia in shader benchmarks while nVidia dominates AMD in multitexturing benchmarks, leading one to assume that the TMUs are AMD's greatest weakness. AMD was betting on games making heavy use of shaders and not so much on textures. Looks like that was the wrong bet….

        I think doubling the TMUs while 'only' increasing the SPs by 50% is a step in the right direction. And of course, almost doubling the memory bandwidth will help A LOT with texturing and 16x aniso.
