Nvidia hard-launches its GeForce RTX 2070 graphics card

Nvidia's GeForce RTX 2070 graphics card launches this morning. First, the bad news: we don't have an RTX 2070 in the TR labs for testing. We're working to obtain one for an in-depth review as soon as we can, but my test bench (and attention) is presently occupied by Intel's Core i9-9900K and a raft of other CPUs. Stay tuned for more details on those chips soon.

A block diagram of the TU106 GPU. Source: Nvidia

With the bad news out of the way, here's what we know so far about the RTX 2070. The smallest Turing sibling introduces a third chip built on Nvidia's latest architecture: TU106. From the top, this chip uses three Turing graphics processing clusters (or GPCs), each with 12 streaming multiprocessors (SMs), for a total of 36 graphics-processing "cores." Each of those SMs has 64 shader ALUs, or CUDA "cores," for a total of 2304 such resources. TU106 has 144 texture units, 64 ROPs, and a 256-bit bus to 8 GB of GDDR6 memory running at 14 Gbps. Here's how that stacks up against some of today's most popular graphics cards:

| Card | Boost clock (MHz) | ROP pixels/clock | INT8/FP16 textures/clock | Shader processors | Memory path (bits) | Memory bandwidth | Memory size |
| --- | --- | --- | --- | --- | --- | --- | --- |
| RX Vega 56 | 1471 | 64 | 224/112 | 3584 | 2048 | 410 GB/s | 8 GB |
| GTX 1070 | 1683 | 64 | 120/120 | 1920 | 256 | 259 GB/s | 8 GB |
| RTX 2070 FE | 1710 | 64 | 144/144 | 2304 | 256 | 448 GB/s | 8 GB |
| GTX 1080 | 1733 | 64 | 160/160 | 2560 | 256 | 320 GB/s | 8 GB |
| RX Vega 64 | 1546 | 64 | 256/128 | 4096 | 2048 | 484 GB/s | 8 GB |
| RTX 2080 FE | 1800 | 64 | 184/184 | 2944 | 256 | 448 GB/s | 8 GB |
| GTX 1080 Ti | 1582 | 88 | 224/224? | 3584 | 352 | 484 GB/s | 11 GB |
| RTX 2080 Ti FE | 1635 | 88 | 272/272 | 4352 | 352 | 616 GB/s | 11 GB |
| Titan Xp | 1582 | 96 | 240/240 | 3840 | 384 | 547 GB/s | 12 GB |
| Titan V | 1455 | 96 | 320/320 | 5120 | 3072 | 653 GB/s | 12 GB |
| Card | Peak pixel fill rate (Gpixels/s) | Peak bilinear filtering INT8/FP16 (Gtexels/s) | Peak rasterization rate (Gtris/s) | Peak FP32 shader arithmetic rate (TFLOPS) |
| --- | --- | --- | --- | --- |
| RX Vega 56 | 94 | 330/165 | 5.9 | 10.5 |
| GTX 1070 | 108 | 202/202 | 5.0 | 7.0 |
| RTX 2070 FE | 109 | 246/246 | 5.1 | 7.9 |
| GTX 1080 | 111 | 277/277 | 6.9 | 8.9 |
| RX Vega 64 | 99 | 396/198 | 6.2 | 12.7 |
| RTX 2080 | 115 | 331/331 | 10.8 | 10.6 |
| GTX 1080 Ti | 139 | 354/354 | 9.5 | 11.3 |
| RTX 2080 Ti | 144 | 445/445 | 9.8 | 14.2 |
| Titan Xp | 152 | 380/380 | 9.5 | 12.1 |
| Titan V | 140 | 466/466 | 8.7 | 16.0 |
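
For readers who like to see where those peak figures come from, here's a minimal sketch of the usual back-of-the-envelope math (functional units multiplied by boost clock). The specs are lifted straight from the first table; the script is purely illustrative and isn't drawn from Nvidia's or TR's own tooling.

```python
# Peak-rate math for a few of the cards above. TU106's 2304 ALUs are themselves
# 3 GPCs x 12 SMs x 64 ALUs per SM, per the architecture description.
specs = {
    # name: (boost clock in MHz, ROPs, texture units, shader ALUs)
    "GTX 1080":    (1733, 64, 160, 2560),
    "RTX 2070 FE": (1710, 64, 144, 2304),
    "RTX 2080 FE": (1800, 64, 184, 2944),
}

for name, (mhz, rops, tmus, alus) in specs.items():
    ghz = mhz / 1000
    pixel_fill = rops * ghz               # Gpixels/s
    filtering = tmus * ghz                # Gtexels/s (INT8 bilinear)
    fp32 = 2 * alus * ghz / 1000          # TFLOPS; a fused multiply-add counts as 2 FLOPs
    print(f"{name}: {pixel_fill:.0f} Gpixels/s, {filtering:.0f} Gtexels/s, {fp32:.1f} TFLOPS")
```

Running that reproduces the RTX 2070 FE's 109-Gpixel/s, 246-Gtexel/s, and 7.9-TFLOPS entries; the rasterization column works the same way, but with GPC count (three for TU106) in place of functional units.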

Other Turing architectural changes might not show up in our tables, but they're still important. The RTX 2070 has twice as much L2 cache as the GTX 1070, at 4 MB versus 2 MB, and the total size of its register files (distributed across its SMs) has ballooned to 9.216 MB, versus 3.840 MB on the GTX 1070. Keeping more data close to the execution units that can digest it is one reliable way to improve performance, and TU106 can certainly claim a much greater endowment of on-chip storage than its Pascal predecessor.
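
Those register-file totals fall straight out of the SM counts if you assume the commonly quoted 256-KB register file per SM for both Pascal and Turing (a figure from Nvidia's architecture documentation, not from our tables):

```python
# Register-file totals as SM count x 256 KB per SM, quoted the way the article does
# (1 MB = 1000 KB). The 256-KB-per-SM figure is an assumption from Nvidia's whitepapers.
KB_PER_SM = 256

sm_counts = {"RTX 2070 (TU106)": 36, "GTX 1070 (cut-down GP104)": 15}

for card, sms in sm_counts.items():
    print(f"{card}: {sms * KB_PER_SM / 1000:.3f} MB of register files")
# -> 9.216 MB and 3.840 MB, matching the figures above
```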

Like its bigger Turing siblings, the RTX 2070 also devotes some resources to accelerating ray-tracing and deep-learning operations. The RTX 2070 has 288 Turing tensor cores (eight per SM) and 36 Turing RT cores (one per SM). Compare that to the 368 tensor cores and 46 RT cores of the slightly-cut-down TU104 chip that powers the RTX 2080, or the 544 tensor cores and 68 RT cores of the modestly-gelded TU102 chip that runs the RTX 2080 Ti.
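
Those counts all follow the same per-SM pattern: eight tensor cores and one RT core for every enabled SM. The quick sketch below works backward from the figures above to the enabled-SM counts of the cut-down TU104 and TU102 parts; those SM counts are derived here rather than quoted by Nvidia.

```python
# Eight tensor cores and one RT core per enabled SM, per the counts quoted above.
tensor_cores = {"RTX 2070 (TU106)": 288, "RTX 2080 (TU104)": 368, "RTX 2080 Ti (TU102)": 544}

for card, tensors in tensor_cores.items():
    sms = tensors // 8                    # eight tensor cores per SM
    print(f"{card}: {sms} SMs enabled, {sms} RT cores")
```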

Since we don't have any real-world applications that use these processing resources yet, it's hard to say how the RTX 2070 stacks up in the ray-tracing department beyond the obvious point that it will be less proficient at those tasks than its larger siblings. For what it's worth, Nvidia says the RTX 2070 Founders Edition can perform 45 RTX tera-OPS (a measurement of the performance potential of hybrid rendering with Turing GPUs), versus 60 for the RTX 2080 FE and 78 for the RTX 2080 Ti FE. Until we have applications in hand that can take advantage of those resources, we'll hold off judgment on just how useful the ray-tracing features in the RTX 2070 will be.

The RTX 2070 Founders Edition

Since the RTX 2070 is fabricated on TSMC's 12-nm FFN process, its 10.8 billion transistors don't benefit from much, if any, of an areal shrink versus the 16-nm FinFET process used to make Pascal. Consequently, the TU106 die is 445 mm², compared to 314 mm² for the GP104 chip that powered the GTX 1070 and GTX 1080. The RTX 2070's board power is up versus the GTX 1070, as well, at 175 W for the "reference" spec and 185 W for the "factory-overclocked" Founders Edition. The GTX 1070 needed 150 W to do its thing.
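
To put that lack of an areal shrink in rough numbers: TU106's transistor count and die size come from the paragraph above, while GP104's roughly 7.2 billion transistors is the commonly cited figure for that chip and is our assumption, not something Nvidia restated for this launch.

```python
# Rough transistor-density comparison. GP104's 7.2-billion-transistor count is an
# assumed, commonly cited figure; the rest comes from the article text.
chips = {
    "TU106 (12-nm FFN)":    {"transistors_b": 10.8, "area_mm2": 445},
    "GP104 (16-nm FinFET)": {"transistors_b": 7.2,  "area_mm2": 314},
}

for name, c in chips.items():
    density = c["transistors_b"] * 1000 / c["area_mm2"]   # millions of transistors per mm^2
    print(f"{name}: {density:.1f} Mtransistors/mm^2")
# TU106 comes out only a few percent denser than GP104, hence the much larger die.
```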

From a pool of 25 games and synthetic benchmarks, Nvidia's testing labs expect that the RTX 2070 will deliver a median 33% better performance for 2560×1440 SDR gaming over the GTX 1070 and a median 35% improvement for HDR gaming. If the RTX 2070 delivered that improvement for the same price as the GTX 1070 before it, we would be shouting from the rooftops about it. As with all Turing cards, though, Nvidia is charging more for that performance increase. The company suggests partner cards will start at $499, and the RTX 2070 Founders Edition will land for $599. That's versus $379 for partner GTX 1070 cards or $449 for the Founders Edition when those pixel-pushers launched. Ultimately, we'll need to put an RTX 2070 through its paces in the TR labs to see how it stacks up.
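
As a quick sanity check on that pricing gripe, here's a back-of-the-envelope comparison of launch prices against Nvidia's own median performance claim. Since the 33% uplift figure comes from Nvidia's labs rather than independent testing, treat the result as a best case.

```python
# Launch-price versus claimed-performance ratio, using only the numbers quoted above.
perf_uplift = 1.33   # Nvidia's median claim for 2560x1440 SDR gaming vs. the GTX 1070

launch_prices = {
    "partner cards":    {"GTX 1070": 379, "RTX 2070": 499},
    "Founders Edition": {"GTX 1070": 449, "RTX 2070": 599},
}

for tier, price in launch_prices.items():
    price_uplift = price["RTX 2070"] / price["GTX 1070"]
    # A ratio above 1.0 means the price rose faster than the claimed performance.
    print(f"{tier}: price x{price_uplift:.2f}, claimed perf x{perf_uplift:.2f}, "
          f"cost per frame x{price_uplift / perf_uplift:.2f}")
```

By that math, the cost per frame barely moves at either tier, which is exactly why the performance gain alone isn't cause for celebration.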

Comments closed
    • Voldenuit
    • 1 year ago

    Techpowerup has an article on the performance hit of ray-tracing (albeit from the perspective of a single developer and engine)

    [url<]https://www.techpowerup.com/248649/remedy-shows-the-preliminary-cost-of-nvidia-rtx-ray-tracing-effects-in-performance[/url<]

    • ronch
    • 1 year ago

    Guys will this give me a better gaming experience over my HD7770 in Space Quest 3?

      • Redocbew
      • 1 year ago

      No.

    • DavidC1
    • 1 year ago

    When the reference edition, oh-sorry-founder’s-edition, is $599, few if any third-party manufacturers are going to price theirs lower than that.

    The reference card is once again one of the loudest-running, hottest-running cards. The fancy naming change doesn’t change that.

    A third-party manufacturer has to be very generous or stupid to price its cards lower than the reference card. Founder’s Edition cards are an extremely sleazy way for Nvidia to raise prices across the board. Partners can’t be happy about that, either.

    I don’t see 7 nm changing things either. New processes may be more expensive, but constant record-high revenue quarter after quarter tells me that’s a fraction of the price increases. Those who forget that are looking at technology over human behavior.

    • NovusBogus
    • 1 year ago

    I’m kind of interested in this as a Cyberpunk 2077 ready replacement for my GTX 960, but at 500 bucks it’s gonna have to be a heck of a card to make me really care.

    • derFunkenstein
    • 1 year ago

    Of course the pricey halo products get sent to TR but when it comes to cards that the masses can start to afford, Nvidia drops the ball. Boooooo

    • freebird
    • 1 year ago

    Hard to get excited about these cards for the price they are asking.

    I guess I’ll be waiting for high-end 7nm GPUs in late 2019 or early 2020 at this pace…
    cuz it sounds like 2019 Navi is going to be a mainstream part and RTX 20×0 are too expensive compared to the GTX 1070s & water cooled Vega 56s I already have. So I guess I should put $50/month in a jar until early 2020? Or sell some spare 1070s….

      • DancinJack
      • 1 year ago

      Quite confused as to how you think 2019-2020 7nm GPUs will be cheaper than this…

        • Voldenuit
        • 1 year ago

        1. More expensive process

        but may be balanced by:

        2. Smaller dies = more chips per wafer
        3. Amortized development cost
        4. Larger market (players updating from 10xx series)

        and of course:

        5. AMD having 7nm parts to compete in the marketplace (although they probably won’t be competitive in the enthusiast segment).

          • DancinJack
          • 1 year ago

          I just don’t think you’re going to get Nvidia high-end products on 7nm for cheaper than this. AMD will have nothing to compete just like now. There isn’t a whole ton of difference from what we have now unless Nvidia makes some pretty serious architectural changes IMO.

          Happy to have all your possible outcomes be proven true, I just don’t think they will at this point. Nvidia is gonna pack more RT and Tensor cores into their next GPUs.

            • Voldenuit
            • 1 year ago

            >I just don’t think you’re going to get Nvidia high-end products on 7nm for cheaper than this.

            Yeah, you’re probably right.

            On the other hand, nvidia isn’t getting any of my money if their next enthusiast card is more than $600. I’m perfectly happy to sit out a couple generations until Cyberpunk 2077 is out, anyway.

        • NarwhaleAu
        • 1 year ago

        It’s not that hard to understand.

        Nvidia bumps prices another $100+ for each segment. GTX 3080 Ti is now up near $1400. GTX 3070 is now $600. 35% margin. Intel, who hired the former head of AMD’s graphics group and announced they will produce a card in 2020 that they are working on NOW, sees that FAT margin and decides to compete. AMD, also seeing that fat margin, only has to produce something that isn’t completely junk to fill in that $600 void. That’s how competition works – there is so much profit in the business right now that Intel is looking at it and thinking “we can play in that space and make better margin than our CPUs”.

        Nvidia, now with a reputation for gouging customers, is left with plenty of room to reduce that margin. They haven’t exactly needed to compete lean, so I’d be willing to bet there is a lot of corporate fat they can trim. They could also reduce their margin to 25% and still be hugely profitable. They are left with the option of selling a small number of cards above the $600 price point, or protecting their market share and reducing prices to compete. Guess which one they choose?

    • Krogoth
    • 1 year ago

    Here comes the Geforce 3 Ti200 2.0…….

      • christos_thski
      • 1 year ago

      It will get worse before it gets better.

      If 2070 is the 3 Ti200, expect the 2060 to be the equivalent to Geforce “4” MX420…

      • derFunkenstein
      • 1 year ago

      Ti 200 could be OC’d to Ti 500 levels. No amount of OC is going to overcome how much the 1070 trails the 1080 in rendering resources.

        • Krogoth
        • 1 year ago

        Only if you got a “golden sample” GF3 Ti200 that was artificially binned. GF3 Ti200 overclocking typically got you to somewhere between regular GF3 performance and a little beyond it without resorting to volt-modding.

    • Chrispy_
    • 1 year ago

    Until a real-world game with RTX support arrives on the scene, the 2000-series are more expensive and more power-hungry than their 1000-series competition.

    Don’t forget, reviews of the 2000-series are comparing performance per dollar against the list price of the 1000-series founder’s edition, but realistically the OEM 1000-series cards are both faster and cheaper, with deep discounts at the moment. Pretty much every e-tailer has a $400-$425 GTX 1080 on offer and you can bet that it’ll be a while before we see 2070 cards for less than $599.

    RTX 2070:
    50% more expensive, maybe 20% faster, but you do get that cheaty 1440p upscaling to ‘4K’ and RTX features that may or may not provide playable framerates when those games eventually add full support for them.

      • Voldenuit
      • 1 year ago

      >Until a real-world game with RTX support arrives on the scene, the 2000-series are more expensive and more power-hungry than their 1000-series competition.

      The power consumption of Turing seems to be in line with Pascal in terms of fps/W… for now.

      But currently all the games tested are using only about half the silicon on the Turing cards (the RT cores + Tensor cores together use up about as much space as the traditional CUDA cores). When we start stressing the cards with RT + denoising + raster loaded games, power consumption can only go up.

      • Krogoth
      • 1 year ago

      The Turing family is built for async compute and DX12 features, and it will begin to distance itself from its Pascal predecessors with future content down the road.

      Ironically, GCN parts will end up benefiting from that shift too, even old-fangled Tahiti, while Pascal, Maxwell, and Kepler will struggle with DX12-era features.

      The only complaints are really about price points, but that’s down to a lack of competitive pressure and price wars. Nvidia is going to milk the performance gaming GPU market for all it’s worth until somebody provides competitive options (the ball is in Intel’s/AMD’s court).

      • Acidicheartburn
      • 1 year ago

      I’m not sure where you’re getting that the 1000 series cards are faster. If you look at Anandtech’s review of the 2070, it beats the 1080. Not sure how that equates to 1000 series being faster.

      I suppose if you’re comparing price/performance then yeah, it’s not really an improvement considering the MSRP of the 2070FE has gone up accordingly.

        • Airmantharp
        • 1 year ago

        At most 5-10 watts more power hungry, [url=https://hardforum.com/threads/msi-geforce-rtx-2070-gaming-z-performance-review-h.1969719/page-9#post-1043887175<]according to Brent Justice over at [H]ard|OCP[/url<].

        • ptsant
        • 1 year ago

        Maybe he means that the aftermarket 1000-series models that you can buy are faster than the reference 1000-series FE cards that are often used for the comparisons. On the other hand, nothing stops you from buying an aftermarket 20×0 and, unless the OC potential of 20×0 is much better, this should come down to the same.

        The difference in ACTUAL prices (not MSRP) is much more significant. Where I live, 2080 is at least $200 more expensive than the 1080 Ti.

        • Chrispy_
        • 1 year ago

        I’m not sure where you’re getting that I said the 1000 series cards are faster.

        Read my post. I specifically say that the 2070 is “maybe 20% faster” and in this case we’re talking about the comparison with a 1080.

        Edit:
        Oh wait, I think I know where you’re coming from:
        [quote<]list price of the 1000-series founder’s edition, but realistically the OEM 1000-series cards are both faster and cheaper[/quote<]
        That’s me saying that the MSI, Asus, EVGA 1080 cards are faster than the 1080 founders edition - they are typically 150MHz or more faster than the founders edition.

      • PixelArmy
      • 1 year ago

      [quote<]it’ll be a while before we see 2070 cards for less than $599.[/quote<]
      Just on launch day...
      [url<]https://www.newegg.com/Product/ProductList.aspx?Submit=ENE&DEPA=0&Order=BESTMATCH&Description=rtx+2070&ignorear=0&N=-1&isNodeId=1[/url<]
      All the $499 (MSRP) ones are on back-order, but there are several at $549 that can be added to cart.
      Edit: bleh, Newegg search results acting up... here are a few direct links:
      [url<]https://www.newegg.com/Product/Product.aspx?Item=N82E16814932089&cm_re=2070-_-14-932-089-_-Product[/url<]
      [url<]https://www.newegg.com/Product/Product.aspx?Item=14-487-412&utm_medium=Email&utm_source=GD101718&cm_mmc=EMC-GD101718-_-landing-_-Item-_-14-487-412[/url<]
      [url<]https://www.newegg.com/Product/Product.aspx?Item=N82E16814137363&cm_re=rtx_2070-_-14-137-363-_-Product[/url<]
      [url<]https://www.newegg.com/Product/Product.aspx?Item=N82E16814487411&cm_re=rtx_2070-_-14-487-411-_-Product[/url<]
      MSRP! [url<]https://www.newegg.com/Product/Product.aspx?Item=N82E16814932091&cm_re=rtx_2070-_-14-932-091-_-Product[/url<]

        • Voldenuit
        • 1 year ago

        Yep, the 2070 is a more mainstream part than the 2080 and 2080 Ti, and manufacturers know that this market segment is more price sensitive than the people who *have* to have the fastest card.

          • ptsant
          • 1 year ago

          Good point. The highest end will always sell even at $2000 or $3000, especially if market availability is limited. Some people just don’t care.

          Down at the $300-500 range buyer behavior is different but I guess nVidia will give us the 2060 to play with.

    • RickyTick
    • 1 year ago

    I’m sure this is a nice upgrade from anything less than a GTX980, but I’m still feeling good about my $369 purchase of a GTX1070 a year and a half ago.

      • Platedslicer
      • 1 year ago

      And before my current 1070, I had a 7970 GHz from 2012. Boy did that one earn its keep!

        • Spunjji
        • 1 year ago

        Nice! I had a similarly lengthy experience at the lower performance level with Pitcairn. Would be nice to have AMD back at that level of competition.

    • Gastec
    • 1 year ago

    If a well-established tech review site like The Tech Report doesn’t have a sample, am I to understand that the RTX 2070 is just a paper launch?

      • Krogoth
      • 1 year ago

      The 2070 isn’t a paper launch. There should be a fair amount of silicon, since it is the binned version of the TU104 chip. The only thing that would hold back inventory would be GDDR6 availability.

      Edit: D’oh, it is a fully enabled TU106 chip, which should still be easier to fab en masse than TU104 chip(s).

        • Jeff Kampman
        • 1 year ago

        It’s a separate chip entirely, not a binned/cut-down part.

          • Krogoth
          • 1 year ago

          Interesting, it looks like Nvidia hasn’t even bothered to release a binned version of TU104 yet. They are probably waiting to launch a “2070 Ti” with it, then.

        • jihadjoe
        • 1 year ago

        I’m surprised it’s fully enabled!

        2304 shaders seemed like a rather odd amount and hinted at a 9/10-enabled chip. I was sure there was room on the chip for a 2070 Ti with 2560.

          • techguy
          • 1 year ago

          2304 / 64 = 36

          That’s a pretty standard number of SMs for an “upper-mid/lower-high” tier graphics card.

      • flptrnkng
      • 1 year ago

      I haven’t looked too far, but any links to buy one of these Hard Launched cards today?

      Amazon/Newegg ? Don’t see one.

      Nvidia’s store? Notify Me button.

      I did see an Ebay link, though. Edit: Sold Out.

        • PixelArmy
        • 1 year ago

        Technically the launch date was actually the 17th (a day after this article and your comment).
        As of now just type “rtx 2070” into newegg…

        [url<]https://www.newegg.com/Product/ProductList.aspx?Submit=ENE&N=-1%208000&IsNodeId=1&Description=rtx%202070&bop=And&Order=PRICE&PageSize=36[/url<]

    • techguy
    • 1 year ago

    Given just how little the RTX 2080 moved the performance curve relative to Pascal, I was pleasantly surprised to see 2070 solidly outperform the 1080. At least NV is smart enough to realize a 2070 == 1080 move would have been a disaster.

      • benedict
      • 1 year ago

      Since when is 5% advantage considered solid?

        • techguy
        • 1 year ago

        [url<]https://www.hardocp.com/article/2018/10/14/msi_geforce_rtx_2070_gaming_z_performance_review/12[/url<] Try to keep up

          • wierdo
          • 1 year ago

          “We talked about potential “it” factor and bang-for-the buck above. The GeForce RTX 2070 definitely has the “it” factor. The RTX 2070 out-performs the GeForce GTX 1080 in all of our tested games. However, it does need to be a highly clocked factory overclock to do so”

          nVidia has been sending overclocked 2070 and 2080 GPUs to reviewers so far, so it’s difficult to say. The actual OEM versions will usually come at stock speeds.

            • K-L-Waster
            • 1 year ago

            Rly?

            There’s loads of cards from all the majors with “factory overclocks” applied — that’s both for 10 series NVidia and for AMD cards.

            Stock clocks are typically the exception rather than the rule in GPUs.

          • benedict
          • 1 year ago

          Hardocp is comparing a heavily overclocked 2070 to a standard 1080. The 2070 has a 10% advantage in those tests.
          Try to read properly.

            • Klimax
            • 1 year ago

            Incorrect. They are comparing cards from the same vendor and the same branding, which, as it happens, run at the same clocks (including under boost).

      • Krogoth
      • 1 year ago

      It is firmly in RX Vega 64 land, with lower power consumption under load. The 2070 is overpriced for now (the 1080 Ti is faster at a similar price point) until GP104 stocks dwindle and lesser Turing SKUs come along.

    • Pville_Piper
    • 1 year ago

    “That’s versus $379 for partner GTX 1070 cards or $499 for the Founders Edition. ”

    $449 for the 1070 Founders Edition…

    • DPete27
    • 1 year ago

    When performance is on par with the GTX1080, we should compare price against the GTX1080. Nvidia has no reason to change the price/performance curve.

      • Pville_Piper
      • 1 year ago

      Thank Red Team for that…

        • Pville_Piper
        • 1 year ago

        -6… Poor little Red Team fanboys can’t handle the truth that the only reason NVidia can get away with the pricing they charge is because the Red Team has nothing to compete with.

        “One thing has become clear, Radeon RX Vega 64 (even factory overclocked) is falling behind quite harshly with the level of performance factory overclocked GeForce RTX 2070 is producing.”

        [url<]https://www.hardocp.com/article/2018/10/14/msi_geforce_rtx_2070_gaming_z_performance_review/12[/url<]

      • Freon
      • 1 year ago

      We need to see both the 1080 and 1080 Ti. Street price of a 1080 Ti isn’t much more than the $599 2070 FE, and 1080 is solidly lower.

      I imagine the 2070 is going to sit squarely between the top two Pascal cards on the price/performance line if we look at actual street prices.

      Yay for ~27 months of progress in silicon technology? At least there is DLSS…

      • XTF
      • 1 year ago

      They don’t?

        • Freon
        • 1 year ago

        Turing has moved up market and is really offering no more price/performance than Pascal unless you count DLSS or ray tracing.

        DLSS has some potential from what I’ve seen so far, though it still takes some (minimal?) effort from developers to enable.

          • Voldenuit
          • 1 year ago

          >DLSS has some potential from what I’ve seen so far, though it still take some (minimal?) effort from developers to enable.

          My take with DLSS is:

          1. It’s a brittle implementation; it requires nvidia to commit cluster time to training every game and to package the learning results in driver blobs. That also means that game support will be extremely limited.
          2. It’s more computationally expensive to run DLSS @ 1440p (pretending to be 4K) than 1440p + TAA (but it looks better).
          3. DLSS and ray tracing compete for resources, and we’re not seeing any games that have both, at least for now. Ray tracing right now is limited in rays per pixel and relies on denoising to produce acceptable results, and that denoising uses the same tensor cores DLSS does.

          Re: 3. This means that even Ray Tracing requires AI training at nvidia’s end to establish ‘Ground Truth’ reference images. It’s very much not a plug-and-play system and ties developers deeply to nvidia. That has worrying implications for the future.

            • Freon
            • 1 year ago

            You can train various super-resolution models on a home computer overnight with something like a 1080 or 1080 Ti. Also once the first model is trained, subsequent models can take that as a starting point, drastically reducing further training time. I think you’re WAY overestimating here.

            The only trick is they may swap out training data based on content. I.e. several models per game, swapped out with simple feed forward AI (i.e. hard written rules). Like cutscene, map 1, map 2, forest, etc. Still doesn’t seem like that big of a deal even if a game has 10 models. If they had 200, ok, finally room for concern, but that would also be a lot of dev work to figure out how to swap models.

            There are several super-resolution models on github you can download and try out yourself.

            • Voldenuit
            • 1 year ago

            Well, good luck getting nvidia to open up their APIs and tools for home user AI training, and the same goes for their drivers for DLSS inferencing.

            We can access the Tensor cores with CUDA, but the community will probably have to write their training software from scratch, as I doubt nvidia is willing to share.

            • Freon
            • 1 year ago

            There’s zero reason for a home user to train DLSS themselves. You completely missed the point. You claimed “cluster time” like it was some substantial knock. I’m claiming whoever does the training can likely do it on the equivalent of a high-end consumer PC in a day. If I had to guess, Nvidia will help devs implement DLSS on Nvidia’s own dime. Even if it’s run on some cluster, the cost is literal dimes in electricity. You are being ridiculously bombastic in your initial claims.

            Nvidia is hardly trying to obfuscate how to use their tensorcores. They have dev guides and even rewrote ResNet for mixed precision to utilize the FP16 tensorcores.

            [url<]https://devblogs.nvidia.com/mixed-precision-resnet-50-tensor-cores/[/url<]
            They have an amazingly strong incentive to make it as easy as possible to use tensor cores and make DLSS as easy to implement as possible. You seem to think they’re actively trying to make it harder. You’re insane.

      • K-L-Waster
      • 1 year ago

      Their shareholders likely beg to differ.

      Consumers of course are not obligated to like it. Nor are they obligated to buy.

      • Laykun
      • 1 year ago

      Does that mean comparing DXR performance as well? 😉

      • jihadjoe
      • 1 year ago

      Turing chips are big! The TU106 chip in the 2070 is 445 mm², which is close to 1080 Ti size.

      Price performance might not be moving, but Nvidia won’t be earning any more money from Turing than they did for Pascal. The BOM alone guarantees Nvidia will make less money per GPU sold, and that’s before all the developer support they have to do in order to get game studios to support RTX features.

    • NTMBK
    • 1 year ago

    The RTX 2070 - for when you want to raytrace at 720p.

      • Tirk
      • 1 year ago

      Great post, although I’d even wager this: the RTX 2080 - for when you want to raytrace at 720p.

      The RTX 2070 might be a sub-720p raytracing card.

        • Freon
        • 1 year ago

        Right, the 2080 Ti seemed to be struggling at 1080p. We can only hope that there is a ton of optimization yet to do for RT.

      • RtFusion
      • 1 year ago

      But it’s REAL-TIME RAYTRACING!!!! at 720p…. and only for lighting/shadow effects…

      • ZZZTOPZZZ
      • 1 year ago

      Ray tracing at any speed or resolution: I get the feeling that the Nvidia 20xx boards are here a good 2 or maybe 3 years ahead of those games you want so badly to ray trace.

      • lycium
      • 1 year ago

      Let’s not forget that the first major 3D acceleration chip, the S3 Virge, was widely lampooned as a “graphics decelerator”, and here we are today with insanely powerful rasterising GPUs.

      You have to start somewhere, and all respect to Nvidia for their risky first jump into the fray.

        • Klimax
        • 1 year ago

        And that’s not mentioning NV1…

          • Liron
          • 1 year ago

          I remember the IGN article explaining in great detail why the GeForce 256 was a doomed effort. Nvidia showed lots of graphs where the GeForce is faster, but that’s only when using HW T&L, they said. Now the new 3DFX card comes out and it does not have HW T&L, but it has a much faster fill rate. That is what matters in today’s games, and that is why 3DFX’s proposition will triumph and the GeForce is doomed to be nothing more than a passing curiosity.

            • DoomGuy64
            • 1 year ago

            That’s true, aside from 3dfx’s poor business decisions that ended up causing them to sell the company.

            GeForce 256 was an overpriced brute force card only useful for 32-bit color. Even without Kyro, you could get by with a TnT2 or voodoo2 in 16-bit all the way until the GF4/R8500. There weren’t any games that needed GeForce 1-2 to play. Nvidia however won by selling these cards at ridiculous prices because they had better hardware, and 3dfx mismanaged every card after the voodoo2.

            To top it off, if the Geforce 1-2 were overpriced brute-force cards, the Voodoo5 was a double-down on that stupidity which barely added any new functionality. 3dfx could and should have skipped [i<]all[/i<] of their 3-5 cards and coasted by with Voodoo2 until Rampage was ready. That would have saved them from going under.

            [url<]http://www.thedodgegarage.com/3dfx/rampage.htm[/url<]

            Now here we are with the Geforce 256 (RTX) all over again, waiting for the next Kyro to bring efficiency back to gaming. Nvidia isn’t interested in efficiency because that doesn’t make them money. Their business model is based on overcharging for inefficient new features that beat competitors to market but aren’t useful until the second or third generation.

            That said, raytracing is pretty difficult to pull off with any decent level of efficiency, so I don’t know how anyone is going to make it feasible. It’ll take years to reach mid-range, or be playable at higher resolutions. Neither the Geforce 256 nor RTX are bad cards for existing games. It is a working business model that disrupts competitors. The only downside is that it is not consumer friendly, and the new features aren’t worth the price premium.

      • psuedonymous
      • 1 year ago

      Joking aside, RT sample rate does not need to equal final output sample rate (or even equal raster sample rate). Games can be tuned for more complex RT effects at a lower sample rate, or reduce the complexity to increase the sample rate, or [i<]both at once[/i<] (as in ATAA where samples are clustered along edges).

      • Concupiscence
      • 1 year ago

      Admittedly, if you want antialiasing at a relatively low performance hit, DLSS is pretty nice. If I were in the market for a card in this performance bracket I’d consider snapping one up, but there’s not enough distance between the 2070 and the 1070 Ti I picked up late last year to remotely justify it.
