GeForce Titan Z review leaks, card coming in Q2

Where in the world is Nvidia’s Titan Z graphics card? The dual-GK110 monster was revealed in March, but we haven’t heard much about it since—at least not officially. Rumors abound that the card is delayed or possibly even canceled altogether. The reason? The $2,999 Titan Z is apparently slower than AMD’s dual-Hawaii Radeon R9 295 X2, which sells for a mere $1,500.

As it turns out, there’s some evidence to support that claim. A member of the Linus Tech Tips forums has posted images from a review of the Titan Z published by Hong Kong computer magazine E-Zone. My browser’s translation engine can’t make any sense of the Chinese characters in the images, but the numbers in the performance table are easy to understand. According to E-Zone, the Titan Z is slower than the Radeon in 3DMark, Battlefield 4, and Sleeping Dogs. The GeForce actually comes out ahead in Batman: Arkham Origins and Tomb Raider, though.

Interestingly, E-Zone indicates that the Titan Z consumes about 60W less than the Radeon under load. The GeForce GPUs reportedly run 20°C hotter than their Hawaii counterparts, too, likely due to cooling differences. The Titan relies on traditional air cooling, while the Radeon is strapped to a closed-loop liquid cooler.

When asked about the cancellation rumors, Nvidia told Gamers Nexus that the Titan Z is still on track for a Q2 release. The second quarter usually ends in June, but Nvidia’s fiscal schedule lags behind the normal calendar by two months (the company’s first fiscal quarter ended April 27), so we could be waiting a while for the Titan Z to make its official debut.

Even if the dual-chip Titan isn’t the fastest gaming card around, its double-precision compute prowess should be a hit with folks who use GPUs for general-purpose number crunching. We should also note that E-Zone’s benchmark scores appear to be in frames per second. Our latency-focused methods exposed some troublesome frame time spikes in the Radeon R9 295 X2 that don’t show up in FPS averages. We ran into multi-GPU scaling issues with the card, as well. Despite what the initial numbers say, the Titan Z may yet provide a better overall gaming experience. Thanks to Videocardz for the tip.

Comments closed
    • eloj
    • 6 years ago

    So how many of these do you think they’re going to manufacture? Probably some small multiple of a thousand, right?

    These exist only for PR, nothing else. Even with the high-end margins these will bring, there won’t be enough product sold for it to make /any/ difference to the bottom line.

    Given this, the amount of space these types of products are given is annoying. Just ignore them and they’ll hopefully go away. Waste of time and energy for all parties involved.

    • Bensam123
    • 6 years ago

    Shit guys, 60W… AMD totally got piz-pwned there! Who would buy a 295 X2 when the Titan Z is so much more efficient!

      • JohnC
      • 6 years ago

      Are you trying to be sarcastic there? Why? I don’t see anyone willing to buy this redundant, overpriced device…

        • Bensam123
        • 6 years ago

        That was the point, brah. Perhaps there’s a bit more to it, too.

    • Tech Savy
    • 6 years ago

    If the Titan Z is a better overall experience in gaming considering frame pacing, I doubt anyone would argue that it is a $1500.00 better experience. I imagine that they will be building a more efficient cooler and dropping the price quite a bit.

    They will need a convincingly better card; otherwise it might be better to take the loss on the dual-GPU release this round and scrap it, unless they can still make a killing in other markets that utilize GPU tech.

      • UnfriendlyFire
      • 6 years ago

      AMD is catching up on the micro-stutters though, so Nvidia’s advantage isn’t going to be as big as it was back when AMD launched the 7990.

    • Chrispy_
    • 6 years ago

    I have never seen the point of these:

    If you need to use two of these for “quad-SLI” because three Titans aren’t enough in triple-SLI, then you have some serious problems that one extra GPU isn’t going to magically fix.

    (unless that problem is having more money than sense)

      • peaceandflowers
      • 6 years ago

      Just marketing, I presume. Apparently being able to say “we have the world’s fastest card” has some value for these companies.

        • Chrispy_
        • 6 years ago

        The “halo effect”

        These days, Nvidia’s halo product is Maxwell: fast enough for most people, and extremely cool, quiet, compact, and flexible.
        It’s super cheap to make, given its tiny die size, though those savings aren’t being passed down to buyers yet…

    • HisDivineOrder
    • 6 years ago

    I was never the target audience for either of these cards (Titan Z or 295 X2), so…

    Maybe I’m getting old. I’ve just got no feelings about the overpriced cards today except…

    …wow, they’re both insanely overpriced. One more so than the other.

    • SomeOtherGeek
    • 6 years ago

    Aww, man! I will have to like wait forever for my 4 Titan Zs. So looking forward to wasting my money. I guess I’ll just cancel the whole thing and buy a GT610 and be happy.

    • gmskking
    • 6 years ago

    Very overpriced video cards…no thanks.

    • alientorni
    • 6 years ago

    how did this happen?
    how is it that a $1500 msrp is considered a good price, and a $3000 msrp is even being considered?
    oh nvidia, what have you done.

    • albundy
    • 6 years ago

    it’s cards like this that hinder progress. nvidia sucks for wasting effort and R&D funds on this.

      • Airmantharp
      • 6 years ago

      This is the easy stuff; the R&D needed to put this card out isn’t even a drop in the bucket compared to the R&D needed to port a GPU to a new process, let alone what’s needed to design and fab a brand new architecture.

    • LordVTP
    • 6 years ago

    As someone who ran 2x 295s and is still rocking 2x 590s… I really don’t see myself going this route. Time to pony up for a proper 4-way setup next time!

    • alientorni
    • 6 years ago

    purportedly “better overall gaming experience” for double the price. c’mon, don’t try to excuse this insult to consumers.

      • Airmantharp
      • 6 years ago

      Those that have known ATi/AMD long enough find wisdom in Geoff’s words.

        • alientorni
        • 6 years ago

        it has already been seen how the 295 X2 performs on frame times, and it isn’t any worse than its single-GPU counterparts, so there’s nothing more to be seen or found.

    • UnfriendlyFire
    • 6 years ago

    If Nvidia had launched the $3000 GPU in fall 2013, OR if the crypto-coin mining craze had continued long enough to send the 295 X2’s price skyrocketing, OR if the 295 X2’s cooling was terrible, they could’ve gotten away with it.

      • JustAnEngineer
      • 6 years ago

      …if it hadn’t been for those meddling kids!
      http://www.youtube.com/watch?v=wTTxDWZcbxI

    • hellstrider
    • 6 years ago

    it’s not the drivers; they are probably trying to implement a closed-loop cooling system, since the thing is running too hot

      • JohnC
      • 6 years ago

      I doubt they’ll do that, considering photos of the air cooler design were already posted… They are just desperately trying to compensate for reduced clock speeds through further driver optimizations, which takes time.

    • JohnC
    • 6 years ago

    Eh… They should just cancel it altogether. Just because AMD still has one of those gimmicky 2-GPU cards, and it was kind of a “traditional” thing to do in the past, does not automatically mean that you should still try to produce these things. Very few people will be willing to buy this even for “bragging rights” (it is affordable to me, but I see no reason why I should get it), which is mostly about benchmark numbers, and you’ll get more out of SLI’ing a couple of overclocked Titan Blacks, especially if you get the ones with water-cooling blocks pre-installed.

      • NarwhaleAu
      • 6 years ago

      I’m not sure what profit margin there is on a $2,500 card… but they keep making them so someone must be buying!

        • JohnC
        • 6 years ago

        These particular 2-GPU models aren’t really about profit (or any practical usage at all); they’re more about the “e-penis measuring contest” between AMD and Nvidia. They don’t even care whether these sell at all.

    • Prestige Worldwide
    • 6 years ago

    Irrelevant card is irrelevant.

    If you want that much GPU power, just buy 2 Titan Blacks and enjoy the $1000 you just saved.

      • superjawes
      • 6 years ago

      People buy these types of products for power? I thought they were only for bragging rights.

      • alientorni
      • 6 years ago

      or a R9 295 x2 and enjoy the $1500 you just saved.

        • Klimax
        • 6 years ago

        Not if you need FP performance. (Even the latest Radeons got crippled…) For FP, Nvidia is usually faster, and another thing to consider: IIRC there are some SW packages which don’t recognize regular Radeons but do recognize Titans.

        And that’s not to mention OpenGL or CUDA…

          • peaceandflowers
          • 6 years ago

          By FP performance, I presume you mean DP FP? The cards should be close in SP FP, which I think is more relevant for consumers (if at all).

      • Deanjo
      • 6 years ago

      There are advantages to the single card route for computing purposes.

        • Waco
        • 6 years ago

        Not when they have separate memory spaces…

          • Deanjo
          • 6 years ago

          Yes, there is an advantage there even then, with UVA. Plus you are not having to deal with bottlenecks of the system bus (such as having one card in an x16 slot and another in an x8). In addition, the performance states are going to stay synchronized between the two GPUs, and you only have to deal with one preemption wait state instead of one per card.
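
          For reference, a hypothetical CUDA sketch of the peer-to-peer copy path described above; the device IDs, buffer size, and variable names are purely illustrative:

          ```c
          // Illustrative only: with unified virtual addressing (UVA), one process can
          // address both GPUs on a dual-GPU board and copy between them directly,
          // without staging the data in host memory.
          #include <cuda_runtime.h>
          #include <stdio.h>

          int main(void) {
              int canAccess = 0;
              cudaDeviceCanAccessPeer(&canAccess, 0, 1);   // can device 0 reach device 1's memory?
              if (canAccess) {
                  cudaSetDevice(0);
                  cudaDeviceEnablePeerAccess(1, 0);        // map device 1's memory into device 0's context
              }

              const size_t bytes = 64 << 20;               // 64 MB buffer, purely illustrative
              float *onGpu1 = NULL, *onGpu0 = NULL;
              cudaSetDevice(1); cudaMalloc((void **)&onGpu1, bytes);
              cudaSetDevice(0); cudaMalloc((void **)&onGpu0, bytes);

              // Device-to-device copy; with P2P enabled this never touches system RAM.
              cudaMemcpyPeer(onGpu0, 0, onGpu1, 1, bytes);
              cudaDeviceSynchronize();

              printf("peer access %s\n", canAccess ? "enabled" : "unavailable (copy staged through host)");
              cudaFree(onGpu0);
              cudaSetDevice(1); cudaFree(onGpu1);
              return 0;
          }
          ```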

            • Laykun
            • 6 years ago

            If you’re buying two Titan Blacks, then you’ve probably got a motherboard with two x16 slots. That’s also assuming x8 PCIe Gen3 is a bottleneck, which I highly doubt.

            • Deanjo
            • 6 years ago

            That all depends on the computation and data sets you are using. Even a PCIe 2.0 x16 slot, which has the same bandwidth as a PCIe 3.0 x8 slot, does become a bottleneck on some loads. Remember, when copying from one GPU to another, you are only as fast as the slowest negotiated link.

            • Laykun
            • 6 years ago

            Why on earth would you copy between GPUs? Accelerating workloads on a GPU is primarily supposed to be non-data-dependent; if your threads depend on data from other threads on the GPU, then you lose the entire advantage of doing work through GPU processing.

            I know people who do research on large-volume data (3D textures, CT scan volumes) with CUDA, and the bandwidth of the PCIe bus is never an issue.

            • Waco
            • 6 years ago

            Bandwidth and latency are often an issue for data crunching. You may get a 50x+ speedup from doing the compute on the card, but if you spend time copying the data to and from the card, your overall speedup might only be 2x (this is doubly true if your dataset doesn’t fit in GPU memory).
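
            A back-of-the-envelope illustration of that point, with made-up numbers (a 50x kernel speedup mostly eaten by PCIe transfer time):

            ```c
            // Hypothetical figures only: the kernel is 50x faster than the CPU,
            // but the data copies over PCIe dominate the wall-clock time.
            #include <stdio.h>

            int main(void) {
                double cpu_time    = 100.0;            // seconds to crunch the data on the CPU
                double gpu_kernel  = cpu_time / 50.0;  // 50x faster on the GPU: 2 s
                double pcie_copies = 45.0;             // time spent moving data to/from the card

                double gpu_total = gpu_kernel + pcie_copies;                 // 47 s wall time
                printf("overall speedup: %.1fx\n", cpu_time / gpu_total);    // ~2.1x
                return 0;
            }
            ```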

            • Waco
            • 6 years ago

            And this is related to the problem at hand…how?

            1. Copying between GPUs is extremely rare as it implies communication…something GPUs are bad at if you care about latency (and you do if you’re communicating).
            2. What do you think happens if you talk to both GPUs on the card at the same time? Why do you think that going through another PCIe switch is better than splitting the CPU lanes up between 2 dedicated GPUs?
            3. Why am I arguing with you when you clearly don’t know what you’re talking about?

            • Waco
            • 6 years ago

            Right, because two cards on a PCIe switch is totally better than two cards with dedicated bandwidth direct from the CPU with less latency. /sarcasm

            There’s zero advantage to these for compute *unless* you are trying to put as many GPU cores into a single box as you can… and at that point there’s no reason not to build a box with a PCIe expander, since you don’t care about bandwidth anyway.

            • Deanjo
            • 6 years ago

            “Right, because two cards on a PCIe switch is totally better than two cards with dedicated bandwidth direct from the CPU with less latency. /sarcasm”

            With CUDA you do not need to copy back to the CPU; copying back to the CPU adds another copy step which is not needed.

            • Waco
            • 6 years ago

            I’m not sure what this has to do with what I said (nor am I sure what you mean by “copy back to CPU”).

            Memory access (copy to/from GPU) goes across the PCIe links, through the CPU, and into main memory.

            Adding an extra switch in there induces latency which is undesirable when offload latency is already an issue with many workloads.

        • JohnC
        • 6 years ago

        Are there? Interesting… I’d like to see proper test results of this thing vs. 2 Titan Blacks for computing purposes.

          • Waco
          • 6 years ago

          No, no there aren’t. Problem solved. 🙂

            • JohnC
            • 6 years ago

            So it was another case of “armchair theorycrafting”? That’s a pity 😉

            • Waco
            • 6 years ago

            Pretty much. His comments sound reasonable but only if you don’t understand the subject at hand.

        • UnfriendlyFire
        • 6 years ago

        If you’re going to buy consumer cards for server computing stuff… that’s dangerous.

        I recall someone on serverfault.com who tried to use consumer motherboards without ECC for mission-critical tasks. Boy, was his thread interesting when he started begging for help.

    • oldDummy
    • 6 years ago

    I would like one of these.
    Can’t justify it at this price.
    ha, guess that isn’t too unusual.
