Rumor: GK110 coming in $899 GeForce Titan

Nvidia’s GK110 GPU, otherwise known as Big Kepler, may soon be coming to a consumer-oriented graphics card. The chip has thus far been reserved for enterprise-grade Tesla products. However, Sweclockers quotes multiple sources as saying the top rung in the Kepler ladder will power a GeForce Titan card due to be released in March. According to the site, the card will cost an eye-popping $899 and offer 85% of the performance of the GeForce GTX 690.

The GTX 690 is this generation’s SLI-on-a-stick; it features dual GK104 GPUs from the GeForce GTX 680 and costs about a grand. Given that context, offering 85% of the performance for 90% of the price doesn’t sound all that unreasonable; that works out to roughly 94% of the 690’s performance per dollar, delivered by a single GPU.

Although the GeForce Titan’s specifications are unknown, it’s worth noting that the GK110 implementation in Nvidia’s flagship Tesla K20X isn’t a full-fat affair. In that product, one of GK110’s 15 SMX units is disabled, and the clock speed is only 732MHz. GK104 is clocked at 915-1006MHz in the GeForce GTX 680 and 690, although that chip has only eight SMX units. GK110 also enjoys a much wider memory interface than GK104: 384 bits versus 256.
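For a rough sense of what that wider interface buys, here is a minimal back-of-the-envelope sketch. The Titan’s actual memory clock is unknown, so the GTX 680’s 6 Gbps effective GDDR5 rate is assumed purely for illustration.

```cpp
#include <cstdio>

// Peak memory bandwidth = (bus width in bytes) x effective data rate per pin.
// The GeForce Titan's memory clock is unknown; 6 Gbps (the GTX 680's
// effective GDDR5 rate) is assumed here purely for illustration.
int main() {
    const double gbps = 6.0;                     // assumed data rate per pin
    const double gk104_bw = (256 / 8.0) * gbps;  // 256-bit bus -> 192 GB/s
    const double gk110_bw = (384 / 8.0) * gbps;  // 384-bit bus -> 288 GB/s
    printf("GK104, 256-bit: %.0f GB/s\n", gk104_bw);
    printf("GK110, 384-bit: %.0f GB/s\n", gk110_bw);
    return 0;
}
```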

Ultra-high-end graphics cards may have little appeal to budget-conscious enthusiasts, but everything we’ve heard suggests that these kinds of products routinely sell out. Halo products are all about image, too, and Nvidia will want to have something to counter the 8000-series Radeons rumored to be just around the corner.

Comments closed
    • LukeCWM
    • 7 years ago

    Last I heard, two-GPU SLI still suffers quite a bit from micro-stutter. If this offers 85% of the average FPS of the 690 at 90% of the price while using only a single GPU, likely doing away with most of the micro-stutter, it could have a clear advantage over SLI and SLI-on-a-stick for those looking for more performance than $500 can buy.

    • Silus
    • 7 years ago

    Makes sense to release this in the GeForce market. Although initially designed for the HPC market, it still has lots of components specifically used for graphics-intensive tasks, like TMUs and ROPs, so why not put them to use?
    The price is, of course, outrageous, but it’s been usual for ultra high-end products to cost this much. It certainly won’t appeal to those looking for a sweet spot, since ultra high-end is about bragging rights and little more.

      • BestJinjo
      • 7 years ago

      You're saying it's the normal course of business when a high-end single GPU costs $900? No, the normal course of business is that, over time, faster flagship cards launch at the $499-649 price levels we're used to. This is the opposite of what's usual. It actually sounds like a money grab, like the 6800 UE, 8800 Ultra, and 7800 GTX 512MB were. All of those cards were soon replaced by cards offering 90% of their performance at half the price, and they suffered severe depreciation. Sure, there are some people who will buy a high-end NV GPU even if it costs $2,000. That doesn't make it a smart decision when most PC games in 2013 will still be console ports, while 20nm Maxwell/Volcanic Islands are likely launching by Q2 2014 and will pwn this thing for $500.

      Chances are that once the PS4/720 launch in Q4 2013/Q1 2014, they will have next-gen titles like Watch Dogs and Star Wars 1313. It is those games that will push our cards to the limit, not games like Tomb Raider, Bioshock Infinite, Dead Space 3, etc. I don't expect we'll see the first wave of next-generation graphics until Q4 2013/Q1 2014 at the earliest, just in time for 20nm GPUs.

        • Silus
        • 7 years ago

        First of all, this is a rumor. We don’t know if it will be released or, if it is indeed released, whether it will cost that much.
        Second, all I said is that it’s been usual (as in recent years) for ultra high-end products to carry these prices, and that ultra high-end products are geared towards a niche; they’re about bragging rights. It has nothing to do with “smart decisions” and it never has. And it certainly has nothing to do with consoles or console ports, since upgrading hardware is the appeal of the PC market. People CAN upgrade their hardware, be it expensive or not, to run their games at max settings, putting consoles in their place in terms of graphics fidelity.

        Also, if your upgrade path is determined by “I’m going to wait because in a few months there will be something new” you’ll never upgrade.

    • DeadOfKnight
    • 7 years ago

    Big Kepler has the most beautiful die shots I have ever seen.

    Sandy Bridge-E is a close second, followed by either Fermi or Vishera.

    • ronch
    • 7 years ago

    At this price, it should come autographed by Jensen himself.

      • Prestige Worldwide
      • 7 years ago

      “I never asked for this.”

      • HisDivineOrder
      • 7 years ago

      If it has 85% of the 690 with 90% of the price, I don’t think that’s really out of line, since it’ll have none of the problems that SLI inherently brings and that the 690 suffers from.

      Assuming the performance is there, of course.

        • ronch
        • 7 years ago

        [quote<]If it has 85% of the 690 with 90% of the price[/quote<] Er... can you please say that again?

      • Arclight
      • 7 years ago

      Would his fingerprint be acceptable instead?

        • ronch
        • 7 years ago

        Unfortunately, no. It could get mixed up with all the other fingerprints on the thing.

    • End User
    • 7 years ago

    Must we always worry about what the budget-conscious enthusiasts think?

      • One Sick Puppy
      • 7 years ago

      Amen. Damn lower classes – who cares about ’em?

    • Krogoth
    • 7 years ago

    Not surprised at $899 as the starting price point.

    It seems like the yields on GK110 aren’t so hot, just like its GF100 predecessor. Nvidia learned the hard way why it isn’t a good idea to start off with a flagship chip that is a PITA to produce in bulk and offers only 20-35% more gaming performance than the mid-range part. It is better to allocate the small number of good chips they can get to customers who are willing to pay more for GPGPU performance (Quadro/Tesla). GF104s outsold GF100 units by a good margin; they delivered 60-70% of the performance at a more reasonable price point, since Nvidia was getting far better yields with them. This time, Nvidia played it safe by launching its more profitable (via volume) mid-range parts first in the gaming market, which is why we saw GK104-based units before GK110 hit the gaming market.

      • chuckula
      • 7 years ago

      Krogoth: Not surprised since at least December 7, 1941, when he came out of his bungalow in Hawaii and was like: Why are you all running around like this? I’m not surprised the Japanese are bombing!

    • tyr2
    • 7 years ago

    Graphics … who cares? Full-throttle CUDA, a full complement of GDDR5, a “reference design” for waterblocking, and I’m in for four, as a starter. LGA-2011 dual sockets might start making some sense, finally!

      • Deanjo
      • 7 years ago

      [quote<]LGA-2011 dual sockets might start making some sense, finally![/quote<] They already made sense for some CPU-dependent uses. For a gamer, it still wouldn't make any sense, however.

        • moose17145
        • 7 years ago

        Perhaps not… but… if I had the money… I would have a high-end dual 8-core Xeon machine as my primary gaming rig. Just once in my life I would like to build my over-the-top dream machine with no compromises. After I get that, I will happily come back down to Earth.

        I would also get a really nice telescope and mount. Nothing too big, as I would want it to be portable, but still of high quality. Something like a nice Schmidt-Cassegrain with an 8″-10″ aperture, I am thinking. Maybe a 12″ if I could find one that’s still portable enough, but that’s REALLY going to be pushing the limits for something I would want to haul to the top of a dark, secluded hilltop.

        My two hobbies that bring me joy in life: building computers for myself and studying astronomy. Sadly, I have hardly had any time in the last several years to actually get out and observe the skies. One day I will have my dream computer and telescope, though…

          • Joe Miller
          • 7 years ago

          So I want to drive a Maserati one day, too. Not going to happen.

    • michael_d
    • 7 years ago

    I read this story on Xbitlabs. It states that this GPU is extremely complex and partners will not be allowed to tinker with it. They will receive ready-made boards and will just put their stickers on them.

    I wonder what AMD has in store for us.

      • indeego
      • 7 years ago

      More: Stickers and marketing.
      Less: Technology.

        • Arclight
        • 7 years ago

        …since the 4000 series they have been quite competitive, so why would you say that?

        AMD’s next gen will come sooner than Nvidia’s. A dual-GPU card sporting two 8950 chips could easily outperform the Titan and, if priced right, could outsell it as well among people who want the ultimate performance from a single card.

          • HisDivineOrder
          • 7 years ago

          Amazing that you already know the performance level of the 8950 AND the fact that AMD can put them onto the same board reliably, a feat they cannot even do currently with the similarly spec’ed 7950.

          Tell me what lotto numbers to pick. I want to be a bajillionaire!

            • Arclight
            • 7 years ago

            Well, I’m not clairvoyant, but could you please remember this discussion for me and link it in the comments section of the future HD 8950 review? My memory isn’t what it used to be, but I’d like to think that my powers of deduction are still brilliant.

        • Prestige Worldwide
        • 7 years ago

        Technically, there’s actually more technology in this chip than in the GK104, at least from a quantity perspective.

      • ronch
      • 7 years ago

      Better drivers, hopefully.

    • DeadOfKnight
    • 7 years ago

    So this is the true successor to the GTX 580?

      • Deanjo
      • 7 years ago

      Yes, had the GK104 fallen short of matching the 7870, we probably would have seen the GK110 released in consumer form a bit earlier, but it is pretty clear that Nvidia had a monster in waiting that could thwart AMD’s next-gen cards. This allowed Nvidia to sell the early GK110’s to the high profit market before needing to allocate some of them to the consumer market, while at the same time allowing them to continue plugging away on finalizing Maxwell development.

        • BestJinjo
        • 7 years ago

        “This allowed Nvidia to sell the early GK110’s to the high profit market before needing to allocate some of them to the consumer market”

        Great theory. I agree for the most part, but I think NV couldn’t realistically launch GK110 in 2012 without hurting their business profitability/disrupting wafer allocation. NV publicly announced that they had 150,000 Tesla K10/K20/K20X pre-orders from corporate customers around March 2012, but they would only start shipping those by November 2012. Even if HD7970 beat GTX680 by 20%, it’s highly unlikely that NV would have launched GK110 in reasonable volume in the consumer market in 2012 anyway. First, it took NV until at least late October/early November to start delivering just a couple thousand of these chips to customers like Oak Ridge. If yields were great and NV wasn’t wafer constrained, why did they make their highly valued corporate customers wait until November to start getting GK110 chips? Second, NV had to delay the launch of sub-$300 desktop Kepler cards below GTX660Ti by nearly 6 months because they were wafer constrained and had to meet contractual obligations for 300+ Kepler mobile dGPU design wins. So even their own sub-$300 desktop Kepler GPUs had to be delayed.

        While in theory GK110 probably could have launched in very low volume on the desktop in 2012, in practice it was never going to happen even if GTX680 had been 20% slower than HD7970. It worked out perfectly for NV because they were able to remain competitive in the consumer GPU space and make a killing on Tesla chips in the corporate HPC space. However, if AMD had delivered HD7970GE right away instead of a 925MHz HD7970, NV would have been caught with no response. Even when HD7970GE beat the GTX680 in June 2012, NV didn’t even bother releasing a faster-clocked 680.

        If Titan is going to deliver 85% of the performance of the GTX690, Maxwell 20nm must be a real beastly GPU 🙂

    • bfar
    • 7 years ago

    $900 seems a bit much for last year’s tech. This is Nvidia’s way of dumping chips on the market that didn’t make the Tesla grade. 8800 Ultra, anyone? Those were always listed but never in stock.

    • Bensam123
    • 7 years ago

    And now Nvidia fans will realize that their ‘high-end offering’ was simply a lobotomized version of this card.

    While it makes sense that ‘now is the time’ to release it, right before the 8xxx series hits, and it should offer them breathing room until the 7xx series comes out, I can’t imagine why they would release an entirely new product, the 690, when they had something like this in reserve. This may be nothing more than just another rumor. That breathing room would be limited to an ultra-expensive card unless they simply drop it down to 680 levels as soon as the 8xxx series comes out and push the rest of their products down a rung in the ladder.

    So I suppose this is only believable if it is planned to eventually be pushed to $400 levels. I would still argue that the benefits of releasing this chip now are rather lost, long after the high-end GPU battles have been waged for this generation.

      • Airmantharp
      • 7 years ago

      If anyone was paying any attention, the ‘GK104’ label would have given that away before the product was released. Hell, I was disappointed by the rumors, and then disappointed again when they turned out to be true. And I still bought a GTX670, because it was the better product for my money, knowing that it was only a ‘half-Kepler’.

      I think Nvidia’s strategy is quite clear and reasonable: they had such an excellent product with the Kepler generation that they were able to beat AMD’s large silicon with their mid-range product, while simultaneously winning massive contracts for big-iron solutions. The success in the consumer market may or may not have been planned, but it definitely worked out for Nvidia.

        • Bensam123
        • 7 years ago

        I don’t know about that; they still could’ve designed desktop solutions using GK110 and made server solutions with the same chip… They have with past generations, just the same as AMD has done with their server chips.

        Why would they want to segment performance between servers and desktops when they can use other methods to do so (like CAD and DCC acceleration)? It doesn’t make sense when they could have their cake and eat it too.

          • Krogoth
          • 7 years ago

          Yields are the problem. Nvidia learned the hard way with GF100. They would rather have the limited supply of good GK110 chips go into the more lucrative HPC and enterprise markets than be wasted on gaming platforms when their mid-range silicon is up to the task.

            • Bensam123
            • 7 years ago

            You could say that about any chip. The GK110 isn’t produced on a smaller process. I don’t know why they’d have such trouble producing a GK110 over a GK104…

            Why would they choose to release such a product now, when they already have the 690 occupying the niche this would replace (and with supposedly better yields, since it actually exists)?

    • cynan
    • 7 years ago

    [quote<]Halo products are all about image, too, and Nvidia will want to have something to counter the 8000-series Radeons rumored to be just around the corner.[/quote<] This makes sense to me. Why else release a $900 card when you already have a similarly performing part for $1000? But they still won't sell many more of them than they have the 6990. And if the HD 8970 offers even remotely comparable performance (not that a big jump in performance is expected with the HD 8000 series), that $900 price will drop fast. At least AMD will have better justification for releasing a $600-plus single-GPU flagship than they did when the HD 7970 first appeared at $550. And that's not a good thing for consumers.

    • jdaven
    • 7 years ago

    AMD never released an official 7990, AFAIK. Now Nvidia is releasing the GeForce Titan. I wonder if both companies have decided to stop offering dual-chip cards. I thought dual-chip cards were meant to save space and spread out the heat by putting two chips some distance apart. How is Nvidia offering what is essentially a GTX 690 in the space of a single chip without needing a huge heatsink and fan?

      • d0g_p00p
      • 7 years ago

      Have you seen the Tesla K20? It does not need a huge HSF setup to work. I am guessing that the Titan will ship with a similar setup.

        • Silus
        • 7 years ago

        The GeForce GK110-based product should have higher frequencies than the K20, so it should also need a “bigger” HSF, but definitely not as “big” as those 7990 abominations that AMD’s partners have made, which are triple-slot and something like 12-13 inches long…

      • jihadjoe
      • 7 years ago

      Titan is almost exactly the same die size as the GTX 580 (521 mm² vs 520 mm²). Nvidia just has to use the same level of cooling as on the old 580s and Titan will be fine.

    • R2P2
    • 7 years ago

    The flagship has a unit disabled? Are they shipping *anything* with a full GK110?

      • Forge
      • 7 years ago

      You misread. Nobody knows what this hypothetical “GeForce Titan” is bringing. The point made was that we may never see a full-fat GK110 at full speed, since even the ultra-high-cost Kepler Teslas have an SMX disabled, and those clock just below 750MHz.

      In other words, even if GF Titan brings the world’s first fully enabled GK110, it may be doing so at 900MHz, or even much, much less.

      Edit: Later comments make it clear I’m looking at V1.1 of the article. My apologies.

      Holy mother of God, I just realized, the only folks who would really see this as a huge win over the GTX 690…. Are the folks who would want to run 3 or 4 of them.

      Sick excess. I like it.

        • Deanjo
        • 7 years ago

        [quote<]Holy mother of God, I just realized, the only folks who would really see this as a huge win over the GTX 690.... Are the folks who would want to run 3 or 4 of them.[/quote<] Or anyone who does not want to worry about microstuttering or having to wait for an optimized SLI profile for their application.

      • Firestarter
      • 7 years ago

      It could be that the yields on fully functional (15 SMX units) GK110 dies are not high enough to bother binning for it.

        • DeadOfKnight
        • 7 years ago

        No, this is something we’re likely to see more and more of as transistor sizes decrease and core counts increase. Google “dark silicon”. I doubt it has anything to do with yields, although that’s always a challenge for them, it seems.

        More than likely, they disabled a unit because they could get more performance by disabling it and setting a higher clock speed than by leaving it enabled and clocking it lower. They simply can’t push that much power through a big chip like this, not with transistors as small as they are. Still, I expect these would probably be lower-binned offerings than the Tesla chips.

        For comparison, look at Intel’s Sandy Bridge-E processors. The consumer grade has two cores disabled, and that’s even at $1000. There’s a reason Ivy-E is so far behind. However, maybe 3D transistor tech and low leakage current will help with this problem for getting to 22nm, I have no idea. I was under the impression, however, that their 22nm process was tuned for lower power envelopes.

        Looking forward, smaller chips like GK104 and LGA 1155 quad cores are always going to be what’s targeted at PC enthusiasts. Eventually we probably won’t be seeing too much of an increase in transistor budget even with a die shrink. They’ll do their best to keep the performance increases coming, but you can see them all aligning their roadmaps towards power efficiency rather than performance for a very good reason. The mainstream going mobile is only part of the story.

          • Krogoth
          • 7 years ago

          Thermals and power consumption are the reason for “dark silicon”.

          8-core Sandy Bridge-E chips do exist in Xeon flavor, but they operate at lower clock speeds than their 6-core counterparts. The reason? To keep thermals and power consumption somewhat sane. You can get 8-core Sandy Bridge-E silicon at the same clock speed as a 6-core counterpart, but it will require high-end air cooling or liquid cooling to keep it from throttling when fully loaded. That kind of cooling is a big no in the enterprise market and has too many liabilities for the mainstream market, which is why it only exists in aftermarket solutions.

          The same applies to GK110 silicon. I’m willing to bet that the $899 “GK110 Titan” will have a healthy appetite for power when loaded, but it will not be as bad as the 480; probably a little lower than the 580.

            • kcarlile
            • 7 years ago

            The K20 appears to need only two 6-pin connectors, like a GTX 680, and that’s a GK110.

            • DeadOfKnight
            • 7 years ago

            The K20 is an even further cut-down part, and it also has to be passively cooled in server racks. The GeForce cards do have a chance of having all units enabled, with more power and cooling available to them, but I wouldn’t count on it.

            • jihadjoe
            • 7 years ago

            I don’t think those are meant to be completely passive.

    • Vrock
    • 7 years ago

    So when do the new consoles come out again?

    • Dissonance
    • 7 years ago

    Due to some questionable Google translating, this story initially attributed Tesla K20X specifications to the GeForce Titan. The post has since been updated.

    • kcarlile
    • 7 years ago

    Just bought a K20 for prototyping remote virtualization. Let me tell you, this would be a LOT more appealing option at 1/4 of the price, since I’m not doing computation on the card–and since we’ll probably be buying a bunch of them once we go into production.

    EDIT: visualization, not virtualization. I’m not a complete idiot. Yeesh.

    • CoWBoY
    • 7 years ago

    Would love to see some “Mercury Playback Engine” benchmarks of this card, given how popular the GTX 680 is with the Adobe Creative Suite. [url<]http://ppbm5.com/DB-PPBM5-2.php[/url<]

    • ish718
    • 7 years ago

    Brings back memories of the 8800 Ultra Crysis days

      • Arclight
      • 7 years ago

      When is Crysis 3 launching?

        • indeego
        • 7 years ago

        When it runs Crysis.

        • BestJinjo
        • 7 years ago

        February 19, 2013.

          • Arclight
          • 7 years ago

          So I’d say the launch of the card will be just in the nick of time. When PCs go up in smoke trying to run Crysis 3 at 60 fps, when all those cards fail to deliver, when all hope is almost lost, a Titan will rise to slay the beast.

            • BestJinjo
            • 7 years ago

            Prepare to be disappointed, then. Titan is only expected to deliver 85% of the GTX690’s performance, and the GTX690 is getting destroyed by Crysis 3 maxed out:

            [url<]http://gamegpu.ru/images/stories/Test_GPU/Action/Crysis%203%20MP%20Alpha/test/crysis%203%20vh%201920.png[/url<]
            [url<]http://gamegpu.ru/images/stories/Test_GPU/Action/Crysis%203%20MP%20Alpha/test/crysis%203%20vh%202560.png[/url<]

            The chance of Titan maxing out Crysis 3 at 60 fps is practically 0%. Crysis 3 has tessellated vegetation, dynamic area lights, volumetric fog/effects, and all kinds of punishing DX11 graphical enhancements. I bet it'll take 3-4 years before any single GPU can 'max' that game.

            • Airmantharp
            • 7 years ago

            Still trying to figure out why people use the Crysis series as a point of comparison. It’s worse than 3DMark.

            • Arclight
            • 7 years ago

            Because unlike 3DMark, you can actually play it. Also, it’s not just the performance hit; the game looks amazing when maxed out. Even compared with recent games, it still seems superior from a graphical point of view.

            • Arclight
            • 7 years ago

            With some of the graphical settings dialed back a bit, a touch of OC, some pixie-dust magic in the drivers, and some engine optimizations, I could easily see the Titan giving 50+ fps at 1080p.

    • alphadogg
    • 7 years ago

    I would think “reasonableness” would *begin* at having 90% performance at 85% of the price.

      • BestJinjo
      • 7 years ago

      People ripped the HD7970 apart for launching at $550 when it offered 20% more performance than the $450 GTX580 (even though AMD just went back to ATI’s historical pricing from when ATI had flagship cards), and with a bit of overclocking matched the $700 HD6990/GTX590.

      Now NV is trying to raise the single high-end GPU pricing from GTX680’s $500 to $900 and justify it by comparing it to the GTX690? In retrospect, how would we feel if ATI launched HD5870 for prices similar to MSRP of HD4870X2/GTX295 because HD5870 traded blows with those cards?

      [url<]http://www.techspot.com/review/198-ati-radeon-hd-5870-review/page15.html[/url<]

      What happened to the days when, over time, we used to get more performance at a similar price level, or a similar level of performance at a lower price? NV offered more performance without increasing prices this dramatically when moving from the 8800GTX --> GTX280 --> GTX480/580 --> 680. It seems NV is willing to offer us more performance over the GTX680 but wants to charge $400 extra to get it. It appears we the consumers are footing the bill for more expensive 28nm wafers, while NV sure is smiling right now: "Gross margin widened to 52.9% from 52.2%."

      [url<]http://www.marketwatch.com/story/nvidia-earnings-up-17-on-strong-demand-2012-11-08[/url<]

      As we approach a year since the GTX680's launch, I expect more performance at a similar price, not to pay $400 more to get it. It doesn't really matter that the GTX690 costs $1000 at retail right now, because on the price/performance curve that GPU is no longer really worth $1000 come March 2013. Tech is supposed to get faster and/or cheaper over time. This is definitely not how things have worked for more than a decade in the GPU space. If you look at prices of the GTX680, they have barely dropped since launch; I am still seeing GTX680s going for $440-450 on Newegg. By March 2013, we should be able to buy GTX680-level performance for $349 in the form of a GTX770/760Ti, etc. The next flagship should offer more performance at $499-549, maybe at the $649 level of the GTX280.

      I realize that all of these cards will sell out, but with the way things are going, does that mean next-generation flagships will be $800-900 cards and $400-500 cards will be the new "mid-range"? I hope not. NV already raised mid-range pricing from the GTX460 768MB/1GB at $199-229 and the GTX560Ti at $249, positioning new mid-range cards like the GTX660Ti at $299. AMD abandoned the price/performance of its HD4000 to 6000 parts. I am seeing price increases all across the board from both NV and AMD. I am not liking the direction the GPU industry is heading, to be honest. $900 for a flagship single GPU is steep.

        • Arclight
        • 7 years ago

        I’m thinking they are doing this to make some quick cash before the HD 8000 series launches. Who knows, it might be powerful enough to match the 8970, or dare I say it, maybe beat it? We have no rumors yet regarding AMD’s upcoming top performer, and since we know about the GK110, I’m thinking AMD might do an “accidental spill” of information. Man, I can’t wait to read the reviews.

          • BestJinjo
          • 7 years ago

          At 85% of the performance of the GTX690, this is almost certainly faster than the HD8970. AMD is only rumored to increase die size to 410-420 mm² with a 5.1-billion-transistor chip. I can’t see the HD8970 increasing performance by more than 15-20% over the HD7970GE due to TDP/die size issues, and I can’t see how they’ll match a 7-billion-transistor Kepler that also gets a much-needed 50% boost in the memory bandwidth that held back GK104.

          Even if the HD8970 is 20-25% slower, if NV prices this chip at $899, is AMD going to raise the HD8970 to $599-649? You see my point? Slowly, AMD and NV seem determined to raise prices on us. I wouldn’t mind if games actually started looking next-generation. 2012 was a terrible year for graphics/ports. We got even more unoptimized console ports with crippling DX11 effects (Hitman Absolution, Sleeping Dogs, Sniper Elite V2), very poorly optimized games (Assassin’s Creed 3, NFS:MW), or just ugly-looking console ports (Black Ops 2).

          During the tail end of the PS360 console generation, so many PC games ended up being shoddy, unoptimized ports with graphics that still can’t easily top Crysis 1, yet GPU demands continue to grow (sloppy programming?). Instead of encouraging GPU upgrades by lowering prices, or at least keeping them steady, NV and AMD are raising them. Interesting. I might skip this generation entirely, then, and wait until things normalize at 20nm. I can easily see a $499-549 Maxwell delivering performance similar to the Titan by the end of Q1/early Q2 2014.

          Hopefully Crysis 3 and Metro Last Light surprise me.

        • Deanjo
        • 7 years ago

        [quote<]People ripped the HD7970 apart for launching at $550 when it offered 20% more performance than the $450 GTX580[/quote<] It was nowhere close to giving 20% more performance over the GTX-580. Might I also suggest reading up on Tech Report's review as well: [url<]https://techreport.com/review/22192/amd-radeon-hd-7970-graphics-processor[/url<]

          • BestJinjo
          • 7 years ago

          Nowhere near 20% more than GTX580? It’s more like it’s at least 20% faster. If you can’t do mathematics, please refer to reviews that have a sliding graph scale that does it for you:

          [url<]http://www.computerbase.de/artikel/grafikkarten/2011/test-amd-radeon-hd-7970/10/[/url<]

          The lead only got larger with more modern games - 29% at 1080P and 36% at 1600P:

          [url<]http://www.techpowerup.com/reviews/VTX3D/Radeon_HD_7870_XT_Black/28.html[/url<]

          Take overclocking into account and the HD7970 is 49-59% faster at 1080P-1600P:

          [url<]http://www.techpowerup.com/reviews/HIS/HD_7970_X_Turbo/28.html[/url<]

          Plus 3GB of VRAM for Skyrim mods. Not to mention the HD7970 OC is still the fastest single GPU this generation, even against a GTX680 OC. So: 20% more performance over the 580 right off the bat, at least 50% more with overclocking (more at 1440/1600P), extra VRAM for mods, and the 7970 OC still remains the fastest single GPU in the hands of enthusiasts a year after release… All that for $100 extra, and people moaned and cried. Let's not forget bitcoin mining, which ensured the HD7970 has already paid for itself for every early adopter who took advantage of this perk. In hindsight, the HD7970 for $549 was a very good deal if you know how to overclock and can take 10 minutes to set up bitcoins. If you don't, it was still 20% faster than the 580 for $100 more. Last time I checked, the GTX680 was barely 10% faster than the 670 for $100 more when it launched. Just sayin'.

            • Deanjo
            • 7 years ago

            Take a look at your own links:

            [url<]http://www.techpowerup.com/reviews/VTX3D/Radeon_HD_7870_XT_Black/28.html[/url<]

            The GTX-580 beat the plain-Jane reference 7870 in every test. Nvm, reading comprehension issue here. For some reason I thought you were comparing the 7870; long day, eyes are tired.

        • HisDivineOrder
        • 7 years ago

        You’re comparing the 690 to prior-generation cards. If this really is the real Big Kepler, then it is the same-generation chip as the 680/670 that the 690 is based on. That’s why it works better than your examples.

        This just sounds like a way for nVidia to dump some GK110 inventory (probably the defective ones) while also decisively crushing AMD’s hopes and dreams of having the single-card performance crown. Meanwhile, the halo product can help sell the more standard 680s and 670s while making them both look much cheaper and more reasonable by comparison.

        It’s a win, win, and a win for nVidia. I can’t see why they WOULDN’T want to do it. Moreover, assuming the performance is there, they’ll be pissin’ all over AMD’s hushpuppies right around the rumored launch of the 8xxx series.

        It really reminds me of the 8800 Ultra launch. The 8800 came out, ruled for a long while, and then, after yields improved, they released the Ultra. Then they rapidly replaced the entire 88xx series with the 9xxx series, except the 8800 Ultra. Imagine if they continued to do this: every year they release the highest of the high end for the last-gen chip while immediately preparing the next-gen variant for a few months after the fact, staggering the releases every three months (similar to the 680, 690, 670, 660 Ti, and 660 launches from last year).

        So Big Kepler 699 Titan, three months later, 770/780, three months later, 760 Ti, three months later, 760/750. Zip on to next year and around this time, Big Kepler’s replacement comes. Every year, they own the high end with a halo product and every year they have updates coming at increasingly more friendly price points with better tech with more optimal dieshrinks.

        Kinda like Sandy Bridge-E which continues to be Intel’s high end while IB has been out forever. And Haswell seems destined to come before IB-E replaces SB-E.

    • CampinCarl
    • 7 years ago

    I’m more interested in this as an alternative to buying a $5000 GK110-based Quadro. Assuming there’s no finagling going on with CUDA support for these things, they should be very awesome in the computational-accelerator category.

      • codedivine
      • 7 years ago

      Previously, Nvidia has crippled the fp64 support in GeForce cards compared to the Teslas and Quadros. For example, the Tesla C2050 and GTX 480 were both based on the same silicon, but fp64 support was cut down in the GTX 480 for product segmentation purposes. I am guessing the same will happen with the Titan.
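      For a rough sense of how much that segmentation costs, here is a minimal sketch of the peak-FP64 arithmetic using the commonly cited ratios (1/2 of FP32 on the Tesla C2050 vs. 1/8 on the GTX 480; 1/3 on Tesla-grade GK110 vs. 1/24 on GK104 GeForce cards). The Titan's own clocks and FP64 ratio are unknown, so these numbers are illustrative only, not a prediction for that card.

      ```cpp
      #include <cstdio>

      // Peak GFLOPS = cores * 2 (fused multiply-add) * shader clock in GHz * FP64 ratio.
      // The ratios below are the commonly cited ones; nothing here is specific to
      // the rumored GeForce Titan, whose clocks and FP64 ratio are unknown.
      static double fp64_gflops(int cores, double clock_ghz, double fp64_ratio) {
          return cores * 2.0 * clock_ghz * fp64_ratio;
      }

      int main() {
          printf("Tesla C2050 (448 @ 1.15GHz, 1/2):    ~%.0f GFLOPS\n", fp64_gflops(448, 1.15, 1.0 / 2.0));
          printf("GTX 480     (480 @ 1.401GHz, 1/8):   ~%.0f GFLOPS\n", fp64_gflops(480, 1.401, 1.0 / 8.0));
          printf("Tesla K20X  (2688 @ 0.732GHz, 1/3):  ~%.0f GFLOPS\n", fp64_gflops(2688, 0.732, 1.0 / 3.0));
          printf("GTX 680     (1536 @ 1.006GHz, 1/24): ~%.0f GFLOPS\n", fp64_gflops(1536, 1.006, 1.0 / 24.0));
          return 0;
      }
      ```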

        • thesmileman
        • 7 years ago

        Though I don’t want it to be crippled, I will still grab it as a test/dev device at the office for lots of developers, and then we will deploy and run final tests on Teslas, because it is a huge cost savings. Likely it will have 6GB or 8GB of memory, which is needed for a number of tasks.

      • cobalt
      • 7 years ago

      In the past, things like ECC and double-precision support have been missing from the GeForce versions of the Teslas, so I wouldn’t be surprised to see those missing here. But otherwise I’d agree — if this comes out with some of the interesting CUDA compute capability 3.5 features, this could be a much better chip for computing than, say, the 680.
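      If it does ship as a compute capability 3.5 part, the stock CUDA runtime device query is enough to tell it apart from GK104 boards, which report 3.0. A minimal sketch using the standard cudaGetDeviceProperties call; nothing Titan-specific is assumed here:

      ```cpp
      #include <cstdio>
      #include <cuda_runtime.h>

      int main() {
          int count = 0;
          cudaGetDeviceCount(&count);
          for (int dev = 0; dev < count; ++dev) {
              cudaDeviceProp prop;
              cudaGetDeviceProperties(&prop, dev);
              printf("Device %d: %s, compute capability %d.%d\n",
                     dev, prop.name, prop.major, prop.minor);
              // GK110 reports 3.5 (dynamic parallelism, Hyper-Q); GK104 reports 3.0.
              if (prop.major > 3 || (prop.major == 3 && prop.minor >= 5))
                  printf("  GK110-class compute features available\n");
          }
          return 0;
      }
      ```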

        • CampinCarl
        • 7 years ago

        The thing, to me at least, is that even if they do cut down the double-precision compute support by 25% or whatever, it will STILL have way more DP compute power than a single GK104 chip. ECC is another beast altogether, but typically not all that necessary in most compute use cases.

          • BestJinjo
          • 7 years ago

          What program are you actually using that needs DP compute power? If you don’t need ECC, have you considered getting the Asus Matrix Platinum HD7970 and overclocking it to 1300MHz? That chip will likely deliver way higher DP performance than this GK110. You’ll be looking at > 1.3 TFLOPS of DP on the 7970 @ 1300MHz. Or does this program not support AMD cards, or run very slowly on them?

          • Deanjo
          • 7 years ago

          [quote<] ECC is another beast altogether, but typically not all that necessary in most compute use cases.[/quote<] Whoa, compute is EXACTLY where it matters, especially when you are talking about DP use cases. Data corruption is intolerable for scientific and financial work. Sorry, but you are flat out wrong with that statement.

            • Namarrgon
            • 7 years ago

            There are plenty of other high-compute fields that don’t need ECC. Rendering and image processing, for example – a bad pixel can easily be tolerated (and re-rendered if need be), and low costs are important when you’re setting up a farm of hundreds.

            • Deanjo
            • 7 years ago

            This is true for workloads like that, but then again you are not going to be concerned about DP performance for those at all. SP performance is all you care about for those use cases, and a GK104 would be more than enough.

            • Namarrgon
            • 7 years ago

            True enough. There are certainly some fields that require full double precision but can also tolerate occasional errors – large-array sensor processing, perhaps – but I imagine not nearly as many.

    • dpaus
    • 7 years ago

    [quote<]offering 85% of the performance for 90% of the price doesn't sound all that unreasonable[/quote<] No, but it does seem pointless from a SKU perspective. Unless you're one of those who think that many Nvidia product offerings are a bit SKU'd. Not me, of course, but there are some.
