Personal computing discussed

Moderators: renee, morphine, SecretSquirrel

 
Tirk
Gerbil
Topic Author
Posts: 58
Joined: Sat Sep 06, 2014 8:57 am

Nvidia Pascal the holy trifecta?

Sat Jun 20, 2015 8:36 pm

So I'm reading GPU articles and, without fail, even when the article isn't about Nvidia hardware, there are people in the comments saying the holy grail will arrive with the advent of Pascal. They say not to buy anything until its release, and that with its new architecture, 16nm process node, and probable use of HBM, it will be heaven on earth.

I have a problem with this theory. Most tech companies stagger the debut of new tech advances so as not to stumble by implementing too many new features at once that could fail to work properly. Even Intel, with some of the most advanced process tech, does not have the gumption to implement more than two major advancements at any one time (i.e., the reason for its tick-tock cadence). Why, then, do people assume Pascal will be the holy trinity of implementations and execute all of this flawlessly?

A new architecture is hard enough, but also moving to a 16nm node that's intrinsically a different design beast with FinFETs, and executing HBM, which by itself requires a complete design shift from GDDR5, leads me to seriously doubt these predictions for Pascal. I would expect it to be heavily delayed, to get one of these advancements wrong, or to be a highly expensive halo product out of most people's budget. And, by the way, if Pascal is only a halo product, does that mean most of Nvidia's GPUs next year will be re-brands still using GDDR5? Am I naive not to believe a tech company can implement all of that in one generation flawlessly? Do any of you have examples where a trifecta of major advancements was implemented correctly and on time in one generation of product?

Discuss and thank you for your insights.
 
VincentHanna
Gerbil
Posts: 63
Joined: Mon Dec 22, 2014 10:40 am

Re: Nvidia Pascal the holy trifecta?

Sat Jun 20, 2015 8:47 pm

I assume that gamer fanboy psychology and rumor-millonimics aren't adequate answers for the chatter you've been seeing?

That said, tick-tock is about die shrinks, not new tech: implement a new, smaller lithographic process, then refine the design in the next revision. Generally speaking, when Intel does add new tech, they add it to a chipset rather than to the silicon itself, and they almost never upgrade the software/firmware. That isn't really analogous to AMD/Nvidia and what they do, even if they sometimes follow a similar cadence.

If I were to guess, having no context for the statement, I'd look at the general state of the PC/gaming scene and say we are probably about due for a convergence. High-bandwidth memory is becoming "the thing," and so are larger memory packages. Both are probably due to the direction MSFT/Sony went with their new consoles. Devs make games for consoles. Consoles have 8GB of GDDR5 (well, the good one does). Ergo, a $500-$650 top-of-the-(consumer)-line chip ought to have at least that. We've already seen a 12GB chip out of Nvidia, and chances are at some point they are going to up the ante for the big-number boner effect.

DX12 is coming out too, and almost every DX revision has benefited from specialized subprocessing of some kind (the same will likely be true of Vulkan and OpenCL). 4K is coming into its own as well, and we are already kissing 4K power at the high end. Usually that means in a couple of years it will trickle down to most people's comfort zone in the (?)60-Ti range. Pascal is also, if I recall correctly, going to be on a much smaller node (they are reducing the 28nm node nearly by half), and for all I know they might even be incorporating USB 3.0 for monitor support. LOTS of stuff going on right now.

All that said, I generally agree with "them." I'd rather wait until my last-gen cards stop fulfilling my needs (I see no need to continually upgrade without good reason), then wait for the "best" chip of a generation (meaning a revised version of an already beefy process) to come along and wow me before I upgrade and go all-in. So far my dual GTX 580s have served me well in that regard.
Last edited by VincentHanna on Sat Jun 20, 2015 9:55 pm, edited 3 times in total.
 
Tirk
Gerbil
Topic Author
Posts: 58
Joined: Sat Sep 06, 2014 8:57 am

Re: Nvidia Pascal the holy trifecta?

Sat Jun 20, 2015 8:55 pm

Oh, that definitely could be it, but I thought I'd post this to see if there were any other reasons.
 
NovusBogus
Graphmaster Gerbil
Posts: 1408
Joined: Sun Jan 06, 2013 12:37 am

Re: Nvidia Pascal the holy trifecta?

Sat Jun 20, 2015 9:03 pm

I'm sure Pascal will be excellent, but it must be said that marketers and fanboys have trumpeted virtually every new GPU architecture as the holy grail of gaming glory that will usher in a thousand years of that brand's cosmic majesty. If I were in the market for a new GPU--which, as a matter of fact, I am--I'd continue to do what I've done for a long time and get something that's good for the next 2-3 years, at which point there will be a very nice upgrade at a similar price point. Eventually the GPU curve will slow down à la CPUs, but it'll take longer, since massive parallelism actually counts for something here. I may buy slightly less card than I had originally intended, but that's more due to the current uneasy state of the $250 card than promises of the Next Big Thing.

As for the merits of Pascal...we know that Nvidia can work some serious magic with optimization and efficiency, and the expected lower heat output of a 16nm node would give them some serious headroom. It will also make it more difficult to overclock, and generally lead them farther down the 'brainiac' design path with increased reliance on up-to-date drivers to achieve acceptable performance (as we saw with Witcher 3). HBM is a more complicated beast, and I don't think it will offer quite the advantages that its proponents claim when put to the test. And, as you note, new technologies rarely get done right on the first attempt.
 
JustAnEngineer
Gerbil God
Posts: 19673
Joined: Sat Jan 26, 2002 7:00 pm
Location: The Heart of Dixie

Re: Nvidia Pascal the holy trifecta?

Sat Jun 20, 2015 9:12 pm

There are compensated NVidia shills (part of an astroturfing campaign) posting in on-line comment sections. Couple that with the psychology of the fanboy, and you can expect this sort of behavior.

If you started out evaluating A and you thought that it was pretty good, then you looked at B and you thought that it was also pretty good, you made an objective evaluation of each thing on its own merits. However, once you put A and B side-by-side and you are forced to choose which one is better, you automatically mentally downgrade the one that you selected against. Even though you previously thought that both A and B were good, once you have selected against B, your future evaluations will down-rate B as "bad".

Once someone has invested in a ridiculously-expensive graphics card, they do not want to admit that they might have made a wrong decision. Narcissistic millennials are even more unwilling to admit to errors than previous generations were. This provides plenty of emotionally-charged confirmation bias.

Lastly, the anonymity of the internet allows sociopathic behaviors that would be less likely in a real-life environment, in accordance with Penny Arcade's G.I.F.T.
· R7-5800X, Liquid Freezer II 280, RoG Strix X570-E, 64GiB PC4-28800, Suprim Liquid RTX4090, 2TB SX8200Pro +4TB S860 +NAS, Define 7 Compact, Super Flower SF-1000F14TP, S3220DGF +32UD99, FC900R OE, DeathAdder2
 
UnfriendlyFire
Gerbil Team Leader
Posts: 285
Joined: Sat Aug 03, 2013 7:28 am

Re: Nvidia Pascal the holy trifecta?

Sat Jun 20, 2015 9:45 pm

The only time a major design win occurs is when the competitor(s) fumble.

Ex:

-AMD's Athlons during Intel's Pentium 4 train wreck
-Intel's Sandy Bridge (still going strong after nearly 5 years) during AMD's Bulldozer train wreck

-Silicon Graphics got demolished by Linux and Windows in the workstation/server OS area, and by Intel in the CPU area. And of course ATI and Nvidia were a significant threat in the graphics area. A major failure occurred when they tried to switch from their own CPUs to the infamous Itanium architecture.

And I'm sure there are examples in GPU history as well, where ATI/AMD and Nvidia fumbled. 3dfx never recovered.
 
derFunkenstein
Gerbil God
Posts: 25427
Joined: Fri Feb 21, 2003 9:13 pm
Location: Comin' to you directly from the Mothership

Re: Nvidia Pascal the holy trifecta?

Sat Jun 20, 2015 10:11 pm

A company is only as good as their *next* product. You can't sit back and rest on your laurels because someone else is gunning for you. For AMD that's the Fury family. For nVidia that's Pascal, farther into the future.
I do not understand what I do. For what I want to do I do not do, but what I hate I do.
Twittering away the day at @TVsBen
 
chuckula
Minister of Gerbil Affairs
Posts: 2109
Joined: Wed Jan 23, 2008 9:18 pm
Location: Probably where I don't belong.

Re: Nvidia Pascal the holy trifecta?

Sat Jun 20, 2015 10:37 pm

Tirk wrote:
So I'm reading GPU articles and, in the comments without fail even if it is not about Nvidia hardware, there are people saying how the holy grail will come with the advent of Pascal.


Who said that exactly? And why do you think they are wrong when we all know that you think the same thing about Fury (and we've seen that Fury is good, but not any "holy grail" either).

You are confusing people being more excited about Pascal than Maxwell with some sort of "holy grail" that exists only in your head. It's pretty obvious why Pascal is a bigger deal than, say, the GTX 980 Ti, and for that matter why the next-generation "Arctic Islands" AMD parts are a bigger deal than Fury: for the first time since 2011, there will be a die shrink in the GPU world.

The move to HBM2 is a nice side feature, but as I've been saying over and over again: memory cannot increase the inherent capability of a GPU. Memory can sure cripple a GPU if it's not fast enough or there isn't enough of it, but superfast memory ain't going to push a GPU beyond its nominal performance envelope.
4770K @ 4.7 GHz; 32GB DDR3-2133; Officially RX-560... that's right AMD you shills!; 512GB 840 Pro (2x); Fractal Define XL-R2; NZXT Kraken-X60
--Many thanks to the TR Forum for advice in getting it built.
 
Tirk
Gerbil
Topic Author
Posts: 58
Joined: Sat Sep 06, 2014 8:57 am

Re: Nvidia Pascal the holy trifecta?

Sat Jun 20, 2015 11:24 pm

Of course Fury is not a holy trifecta; don't put words in my mouth. Fury implements one major improvement, not three at once. It has been suggested that 28nm has hurt both AMD's and Nvidia's ability to create more powerful chips, as GM200 and Fiji are very close to the maximum size of a chip made on 28nm. GM200 succeeds by focusing on changes to the architecture while leaving the memory and fabrication the same. Fury "could" succeed by improving the memory. If you want to talk about Fury, make your own post, Chuckula; don't hijack mine.

Has any GPU or CPU generation shipped successfully, and on time, while grappling with this many major improvements in one iteration? It seems like a train wreck waiting to happen if any of those things falls through, e.g. the ongoing negotiations over chip fab agreements: http://www.kitguru.net/components/graph ... ring-deal/ . Even if Nvidia produces the chips on TSMC's 16nm FinFET, which has already seen major delays, how much can we expect from a fab process that has yet to produce viable production yields?

Is it viable, then, for people to promote waiting for a product that is trying to grapple with all three of those advancements? Maybe Pascal won't, and will drop one of them to improve its chances of success. Which would people prefer they focus on? 16nm? A new architecture? A new memory standard? At this point I doubt it will have all three unless it's delayed.

But be my guest: join the original question and name some products that successfully grappled with all three at once.
 
Airmantharp
Emperor Gerbilius I
Posts: 6192
Joined: Fri Oct 15, 2004 10:41 pm

Re: Nvidia Pascal the holy trifecta?

Sat Jun 20, 2015 11:42 pm

I gotta wonder why Samsung isn't interested in fabbing these chips for AMD and Nvidia. I'm sure there's at least one very good reason, like the process not being suited for large, high-power designs (they mostly make mobile stuff), but Samsung seems like the only company that's actually giving Intel a run for their money in the die-shrink race.
 
Tirk
Gerbil
Topic Author
Posts: 58
Joined: Sat Sep 06, 2014 8:57 am

Re: Nvidia Pascal the holy trifecta?

Sun Jun 21, 2015 12:01 am

BTW, to use your own logic: a die shrink is a nice side feature, but as I've been saying over and over again... a die shrink cannot increase the inherent capability of a GPU design. A larger die can sure cripple a GPU design if it cannot be built larger because of the lithography limitations of a node, but super-small dies aren't going to push a GPU design beyond its nominal performance envelope.

Have we seen massive performance improvements with Intel's constant die shrinks? No, they've been incremental, because a die shrink has certain trade-offs, not all of which are good. We haven't seen very many high-power FinFET chips, not even from Intel. The shrinks have helped Intel continually increase efficiency per watt, which is great, but high-power chips have generally lagged behind on process nodes this small until the process matures. The same would be true of AMD's CPUs if they magically had access to Intel's process nodes: it would definitely increase the efficiency of the Bulldozer design, but it wouldn't change the design's inherent limitations. They need to change the design to capture the advantages of the node they moved to in the previous generation. Hence Intel's tick-tock cadence, which has served them very well in capturing most of the CPU market.

It doesn't seem to me that Nvidia should be immune to the difficulties other manufacturers have to deal with. It would be very amazing if they were, though. Infinite advancement implementations FTW.
 
jihadjoe
Gerbil Elite
Posts: 835
Joined: Mon Dec 06, 2010 11:34 am

Re: Nvidia Pascal the holy trifecta?

Sun Jun 21, 2015 3:56 am

Tirk wrote:
Have we seen massive performance improvements with Intel's constant die shrinks?


On the server side, yes actually.

Haswell-EP has way more cores than Sandy Bridge-EP ever had, and those small IPC improvements add up and get multiplied by the number of cores.

I understand that the desktop hasn't seen much performance benefit from Intel's process advantage, but that's because desktop computing is inherently not very parallelizable. Graphics, on the other hand, more closely resembles a server or HPC workload, and more transistors are always welcome.

Of course this is not to say that implementing a new architecture on a new process won't be a challenge. Nvidia got burned by that back with the GTX480, which is why they now introduce smaller chips first before going in with their big guns. I expect they'd do the same once 14nm is ready. Maybe the 750ti and mobile parts first, then GP104, with big Pascal likely reserved for the second generation refresh.
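To put rough numbers on the "IPC gains multiplied by core count" point: aggregate throughput scales with core count times per-core speed. In the toy model below the core counts are the real top-end EP parts, but the ~10% IPC uplift is an illustrative guess, not a measured figure:

```python
# Toy model: aggregate throughput ~ core count x per-core speed.
# Core counts are the top-end EP parts; the ~10% IPC uplift is an
# illustrative guess, not a measured figure.

def relative_throughput(cores: int, ipc_factor: float) -> float:
    """Throughput relative to a single Sandy Bridge-class core."""
    return cores * ipc_factor

sandy_ep = relative_throughput(8, 1.00)     # Sandy Bridge-EP topped out at 8 cores
haswell_ep = relative_throughput(18, 1.10)  # Haswell-EP: up to 18 cores

print(f"{haswell_ep / sandy_ep:.2f}x")  # 2.48x aggregate throughput
```

Even a modest per-core gain compounds with a big jump in core count, which is why the server side sees the shrink pay off while the four-core desktop mostly doesn't.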
 
f0d
Gerbil XP
Posts: 422
Joined: Sun Jan 23, 2011 3:07 pm
Location: austrALIEN

Re: Nvidia Pascal the holy trifecta?

Sun Jun 21, 2015 7:30 am

jihadjoe wrote:
Tirk wrote:
Have we seen massive performance improvements with Intel's constant die shrinks?


On the server side, yes actually.

Haswell-EP has way more cores than Sandy Bridge-EP ever had, and those small IPC improvements add up and get multiplied by the number of cores.

I understand that the desktop hasn't seen much performance benefit from Intel's process advantage, but that's because desktop computing is inherently not very parallelizable. Graphics, on the other hand more closely resembles a server or HPC workload and more transistors are always welcome.


Exactly.
Graphics processing is massively parallel, and they can take advantage of as many shader processing engines (or virtually any other part of a GPU) as they can multiply.
If you asked a GPU designer whether they could take advantage of double or quadruple the number of transistors, you'd watch them grin and say "yes."

In this sense CPUs are nothing like GPUs: the only real way to add performance to the normal CPU workloads we use every day (not talking server or HPC workloads) is to improve IPC or clock speed, and both are difficult.
Graphics, on the other hand, pretty much just has to copy/paste in more shader clusters and TAADAA, faster GPU (OK, maybe not that easy, but damn close).

Also on the subject: a new node/process almost always gets a new GPU design. Can you remember the last time they used an old GPU design on a new process? GTX 6XX was 28nm and GTX 4XX was 40nm; maybe back when they used the 55nm half-node was the last time they didn't introduce a new GPU design to go along with it.

And the last part of the trifecta is HBM, which I don't think has anything to do with it; adding it is just part of the design process of a GPU.

So out of the three things the OP mentions, two are normal operating procedure and one is more of a design choice than a problem.

All of what I have said applies to AMD as well as Nvidia: both of their first GPUs on the next process node (16nm or 14nm, whatever it will be) will be a big performance jump just because they can cram more engines into their designs.
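A toy sketch of that "copy/paste more shader clusters" scaling (all numbers here are made up for illustration, assuming a perfectly parallel workload):

```python
import math

# Toy model of an embarrassingly parallel shading workload: each shader
# cluster shades its own slice of the pixels, so frame time shrinks
# almost linearly as clusters are copy/pasted in.

def frame_cycles(pixels: int, cycles_per_pixel: int, clusters: int) -> int:
    """Cycles to shade one frame with `clusters` independent shader clusters."""
    pixels_per_cluster = math.ceil(pixels / clusters)
    return pixels_per_cluster * cycles_per_pixel

base = frame_cycles(1920 * 1080, 4, 16)     # hypothetical 16-cluster GPU
doubled = frame_cycles(1920 * 1080, 4, 32)  # same design with clusters doubled

print(base / doubled)  # 2.0 in this ideal model: double the clusters, half the frame time
```

Real GPUs don't scale perfectly (scheduling, bandwidth, and fixed-function hardware get in the way), but for graphics the ideal case is far closer to reality than it is for serial CPU workloads.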
Last edited by f0d on Sun Jun 21, 2015 8:42 am, edited 2 times in total.
 
UnfriendlyFire
Gerbil Team Leader
Posts: 285
Joined: Sat Aug 03, 2013 7:28 am

Re: Nvidia Pascal the holy trifecta?

Sun Jun 21, 2015 7:54 am

Die-shrinks would allow a GPU to consume less power (typically), which is a major benefit in the mobile area.

On a side note, if you looked at Intel's IGP designs, they went from 12 "execution units" (HD 3000) to 24 for i5-5200U (HD 5500) and 48 for i5-5250U (HD 6000).
 
Tirk
Gerbil
Topic Author
Posts: 58
Joined: Sat Sep 06, 2014 8:57 am

Re: Nvidia Pascal the holy trifecta?

Sun Jun 21, 2015 8:34 am

One difference, I think, is that if I recall correctly Nvidia was not the first to produce high-powered chips on any of those nodes, so the issues were already known.

I very much agree that a smaller process node can increase the amount of space available to grow the design of the chip, which is why in the same post I mentioned how 28nm has limited both AMD and Nvidia. I hope both companies are able to move to a smaller node next year to increase the amount of parallel processing they can design into their chips. If that were the only thing Nvidia had to wrap its resources around next year, it would be easy to see how they could deliver a good solution. The problem I see is tackling three major advancements on Pascal at once. I could even accept successfully tackling two, but three seems overly ambitious. The first line of my post about die shrinks was a jab at Chuckula's narrow-minded viewpoint on shifting to HBM; I'm sorry if it caused confusion by suggesting that I discount a company moving to a smaller die size.
 
Tirk
Gerbil
Topic Author
Posts: 58
Joined: Sat Sep 06, 2014 8:57 am

Re: Nvidia Pascal the holy trifecta?

Sun Jun 21, 2015 8:45 am

On the power note, a smaller node will indeed usually allow a GPU to consume less power. But would the higher density of the node (heat dissipation is more difficult because the chip is smaller) and its tighter power limits also hinder the GPU from reaching the power levels necessary to make a viable chip? After all, many have noted that Samsung's 14LPE and TSMC's 20nm are not suitable for high-powered chips like PC GPUs.

I am wary of these sub-28nm nodes because of how ill-suited the current TSMC 20nm and Samsung 14LPE processes are for large GPU designs. Are we setting ourselves up for another node disappointment?
 
Tirk
Gerbil
Topic Author
Posts: 58
Joined: Sat Sep 06, 2014 8:57 am

Re: Nvidia Pascal the holy trifecta?

Sun Jun 21, 2015 8:56 am

I don't think adding HBM is as simple as lumping it in as just another part of designing a chip, or we would have more products with it than Fiji this year. It requires completely changing the way the GPU is connected to its memory. That doesn't seem like a simple side note to me.
 
UnfriendlyFire
Gerbil Team Leader
Posts: 285
Joined: Sat Aug 03, 2013 7:28 am

Re: Nvidia Pascal the holy trifecta?

Sun Jun 21, 2015 9:22 am

Tirk wrote:
On the power note, a smaller node will indeed usually allow a GPU to consume less power. But would the higher density of the node (heat dissipation is more difficult because the chip is smaller) and its tighter power limits also hinder the GPU from reaching the power levels necessary to make a viable chip? After all, many have noted that Samsung's 14LPE and TSMC's 20nm are not suitable for high-powered chips like PC GPUs.

I am wary of these sub-28nm nodes because of how ill-suited the current TSMC 20nm and Samsung 14LPE processes are for large GPU designs. Are we setting ourselves up for another node disappointment?


The 20nm nodes were no good for GPUs because they were specifically meant for tablets and smartphones, aka low power consumption.

Take a look at Kaveri as an example. The desktop versions were only barely better than Richland because they lost clock rates, while the mobile versions gained several hundred MHz on top of the architectural improvements over Richland.

The 20nm GPUs might've been doable for the laptop market, but I guess AMD and Nvidia deemed it wasn't worth the effort.
 
Prestige Worldwide
Gerbil Elite
Posts: 765
Joined: Mon Nov 09, 2009 10:57 pm

Re: Nvidia Pascal the holy trifecta?

Sun Jun 21, 2015 9:38 am

The 980ti is too expensive in Canada (over $800) thanks to our dollar tanking over the last year or two.

For that reason I will be waiting for Pascal's GP104 release to upgrade from my 970.
8700k@5GHz, Custom Water Loop | ASRock Fatal1ty Gaming K6 | 32GB DDR4 3200 CL16
RTX 3080 | LG 27GL850 144Hz | WD SN750 1TB| MX500 1TB | 2x2TB HDD | Win 10 Pro x64
X-Fi Titanium Fatal1ty Pro | Sennheiser HD555 | Seasonic SSR-850FX | Fractal Arc Midi R2
 
the
Gerbil Elite
Posts: 941
Joined: Tue Jun 29, 2010 2:26 am

Re: Nvidia Pascal the holy trifecta?

Sun Jun 21, 2015 10:40 am

Tirk wrote:
A new architecture is hard enough; but also using a 16nm node that's intrinsically a different design beast with finfetts, and executing HBM which by itself requires a completely different design shift from gddr5 leads me to highly doubt their predictions of Pascal. I would think that it will either be heavily delayed, not implement one of these advancements correctly, or be a highly expensive halo product out of most people's budget. Which, btw, if Pascal is only a halo product will that mean most of Nvidia's gpus next year are re-brands and still have to be using gddr5? Am I naive to not believe a tech company can implement all of that at one generation flawlessly? Do any of you have examples of where a trifecta of major advancements were all implemented correctly and on time in one generation of product?



Oh, nVidia did delay a new architecture but it wasn't Pascal, it was Volta. Pascal was created to fill in the gap created by Volta's delays.

Volta was initially to be nVidia's first GPU family with stacked memory when it was placed on nVidia's roadmap in 2013. A year later, the roadmap radically changed as Maxwell lost a few features and Volta was pushed back a year. New in the 2014 roadmap was Pascal which fits in between Maxwell and Volta. Pascal got a few of the features that were dropped from Maxwell (unified memory, on-die ARM CPU) and a few features from Volta (stacked memory, NVlink).

Much of this change is likely connected to TSMC's 20 nm process being mainly suited to mobile parts rather than high performance. Maxwell was initially to have some chips on this 20 nm process but had to settle for 28 nm. The 'big' GM200 chip found in the Titan X isn't actually the high-end Maxwell design but rather what nVidia specced out for the midrange chip; it simply became the big chip by being forced to the 28 nm production node instead of the 20 nm design originally planned. Before Pascal arrives, we will likely see the real big Maxwell chip on a 14/16 nm FinFET process later this year/early next year. It would not surprise me if consumers never receive this 14/16 nm FinFET Maxwell chip, as it'll be a stopgap solution for the HPC market.

What we will see next year in Pascal is a return to nVidia's original roadmap. Using 14/16 nm FinFET, the big Pascal chip will have the room to include plenty of double-precision units. NVLink, unified memory, and stacked memory are also a go for the big chip. The low-end and midrange Pascal chips will focus more on single- and half-precision datatypes. The real question is whether nVidia will use stacked memory in their consumer chips. The big Pascal chip is aimed squarely at HPC applications: a concept board shown off at GTC earlier this year fits into a proprietary socket instead of a PCIe slot. It is unclear if the big Pascal chip will find its way onto a PCIe card for consumers. nVidia could also pull off a surprise by including some stacked memory in their midrange product so that every segment sees a nice bandwidth increase. The low-end chips will likely stay on GDDR5 or DDR4 due to cost.

I'd personally predict that the midrange and low-end Pascal chips arrive first, which follows nVidia's pattern for the Kepler and Maxwell generations. The big Pascal chip should arrive in late 2016/early 2017. nVidia does not want a repeat of the GTX 480, where they attempted to move to a new process node with their big chip first.
Dual Opteron 6376, 96 GB DDR3, Asus KGPE-D16, GTX 970
Mac Pro Dual Xeon E5645, 48 GB DDR3, GTX 770
Core i7 [email protected] Ghz, 32 GB DDR3, GA-X79-UP5-Wifi
Core i7 [email protected] Ghz, 16 GB DDR3, GTX 970, GA-X68XP-UD4
 
the
Gerbil Elite
Posts: 941
Joined: Tue Jun 29, 2010 2:26 am

Re: Nvidia Pascal the holy trifecta?

Sun Jun 21, 2015 10:46 am

Airmantharp wrote:
I gotta wonder why Samsung isn't interested in fabbing these chips for AMD and Nvidia. I'm sure there's at least one very good reason, like the process not being suited for large, high-power designs (they mostly make mobile stuff), but Samsung seems like the only company that's actually giving Intel a run for their money in the die-shrink race.


There have been recent rumors that both nVidia and AMD are looking at Samsung for 16/14 nm FinFET production.
Dual Opteron 6376, 96 GB DDR3, Asus KGPE-D16, GTX 970
Mac Pro Dual Xeon E5645, 48 GB DDR3, GTX 770
Core i7 [email protected] Ghz, 32 GB DDR3, GA-X79-UP5-Wifi
Core i7 [email protected] Ghz, 16 GB DDR3, GTX 970, GA-X68XP-UD4
 
Chrispy_
Maximum Gerbil
Posts: 4670
Joined: Fri Apr 09, 2004 3:49 pm
Location: Europe, most frequently London.

Re: Nvidia Pascal the holy trifecta?

Sun Jun 21, 2015 10:49 am

Maybe I'm way too cynical, but this debate looks far too much like a turfing thread from someone with 10 total posts, 7 of which are in his own turfing thread. Even if that was not the intention, it certainly looks like bait from my sofa.

Fiji's not even out. It looks like it's going to be about the same as a 980Ti, which is disappointing for competition reasons, but priced appropriately it will keep Nvidia in check. The 28nm process is the limiting factor here: AMD have gone with supercharging the memory technology to push the GCN architecture into competition, whilst Nvidia dialled back and cut out yet more GPGPU capability to selectively tweak Maxwell for current game engines.

It's two sides of the same 28nm coin: Maxwell sacrifices capabilities to optimise for a specific balance of GPU resources common in today's games, while Fiji's GCN cores divide resources and capabilities more evenly but need HBM steroid injections to keep up in those (many) instances where the game is just better suited to Maxwell.

I'm waiting for Fiji reviews, more specifically, reviews of Air-cooled Fiji products. I'm expecting any GCN product to be hotter and hungrier than the competing Maxwell product, but AMD have two cards to play, as always:

  1. AMD will price their cards to be competitive based on their performance, regardless of cost to produce or the originally intended market position. If Fiji was supposed to be a Titan X killer and it's only a 980Ti, then they'll undercut the 980Ti.
  2. If you plan on owning an AMD card, history has proven that AMD cards age better. The 7970 and GTX680 were a close match at launch, but constant updates to GCN drivers make the 7970 a venerable performer today. Nvidia abandon their architectures like a fickle child once they have a new one, so even AAA games like Shadow of Mordor are a mess on older Nvidia cards. Same is true for The Witcher 3, if the forum whining is to be believed.
Congratulations, you've noticed that this year's signature is based on outdated internet memes; CLICK HERE NOW to experience this unforgettable phenomenon. This sentence is just filler and as irrelevant as my signature.
 
l33t-g4m3r
Minister of Gerbil Affairs
Posts: 2059
Joined: Mon Dec 29, 2003 2:54 am

Re: Nvidia Pascal the holy trifecta?

Sun Jun 21, 2015 2:48 pm

I don't think speculation matters at all, and it's quite pointless to discuss until the actual launch. If you want to play games today, Pascal doesn't matter; what is available today matters. We'll find out how Pascal performs when it comes out, but it really isn't relevant until then.
 
JustAnEngineer
Gerbil God
Posts: 19673
Joined: Sat Jan 26, 2002 7:00 pm
Location: The Heart of Dixie

Re: Nvidia Pascal the holy trifecta?

Sun Jun 21, 2015 4:22 pm

I expect that Pascal will be a good GPU family for gaming. However, that's next year, and the future is not certain.

Let's spin the clock back to 13 years ago. In July 2002, ATI introduced a new family of graphics chips starting with the high-end Radeon 9700Pro. It outperformed NVidia's best GeForce4 Ti4600 at existing games and it supported DirectX 9.0, while NVidia's products did not. Smart gamers bought a Radeon 9700Pro right away (or actually a month later when they were available in volume). NVidia fanboys and shills said "wait for NVidia's NV30 chip that is due out in just a few months. It is so much better." Eight months later, NVidia brought out the GeForceFX 5800 and FX 5700 - which were a flop. When 3DMark03 showed that DirectX 9 performance of the GeForceFX series was terrible, instead of NVidia making their product better, NVidia shills and fanboys mounted an organized effort to discredit the benchmark. When actual DirectX 9 games arrived on the market, the games showed that the DirectX 9 performance of the GeForceFX series was terrible, just as 3DMark03 had shown.
· R7-5800X, Liquid Freezer II 280, RoG Strix X570-E, 64GiB PC4-28800, Suprim Liquid RTX4090, 2TB SX8200Pro +4TB S860 +NAS, Define 7 Compact, Super Flower SF-1000F14TP, S3220DGF +32UD99, FC900R OE, DeathAdder2
 
Tirk
Gerbil
Topic Author
Posts: 58
Joined: Sat Sep 06, 2014 8:57 am

Re: Nvidia Pascal the holy trifecta?

Sun Jun 21, 2015 9:11 pm

Chrispy_ wrote:
Maybe I'm way too cynical, but this debate looks far too much like a turfing thread from someone with 10 total posts, 7 of which are in his own turfing thread. Even if the intention was not, it certainly looks like bait from my sofa.

Fiji's not even out. It looks like it's going to be about the same as a 980Ti, which is disappointing for competition reasons, but priced appropriately it will keep Nvidia in check. The 28nm process is the limiting factor here and AMD have gone with supercharging the memory technology to push GCN architecture into competition whilst Nvidia dialled back and cut out yet more GpGPU capabilities to selectively tweak Maxwell for current game engines.

It's two sides of the same 28nm coin: Maxwell sacrifices capabilities to optimise for the specific balance of GPU resources common in today's games, while Fiji's GCN cores divide resources and capabilities more evenly but need HBM steroid injections to keep up in those (many) instances where a game is just better suited to Maxwell.

I'm waiting for Fiji reviews, more specifically, reviews of Air-cooled Fiji products. I'm expecting any GCN product to be hotter and hungrier than the competing Maxwell product, but AMD have two cards to play, as always:

  1. AMD will price their cards to be competitive based on their performance, regardless of cost to produce or the originally intended market position. If Fiji was supposed to be a Titan X killer and it's only a 980Ti, then they'll undercut the 980Ti.
  2. If you plan on owning an AMD card, history has proven that AMD cards age better. The 7970 and GTX680 were a close match at launch, but constant updates to GCN drivers make the 7970 a venerable performer today. Nvidia abandon their architectures like a fickle child once they have a new one, so even AAA games like Shadow of Mordor are a mess on older Nvidia cards. Same is true for The Witcher 3, if the forum whining is to be believed.


Sorry if it looks like bait; that was not my intention. But if I'm going to start somewhere, I might as well engage with a trending subject: waiting for Pascal, and whether those expectations are warranted given everything Pascal would have to get right to meet them. There are some interesting and quite varied insights about Pascal so far, which is what I was hoping for when I started the thread. It does seem like an awfully long wait for something with so many what-ifs, especially when people are already antsy waiting for Fiji, which will retail in only a few days now.

As to your note about Fiji being a supposed Titan X killer, I don't see the need, as Nvidia already made a Titan X killer with the 980 Ti. I'd feel bad for the purchasers of the Titan X if most of them didn't already sound so pretentious ;-) It'd be suicide for Fiji to be priced anywhere near the Titan X when the 980 Ti can be had for far cheaper and with almost the same performance. Just look at TechReport's review: http://techreport.com/review/28356/nvid ... eviewed/13 . If Fiji reviews come out showing it trading blows with the 980 Ti, then it'd obviously also be trading blows with the Titan X; in TR's review the 980 Ti was within 0-4 fps of the Titan X. And as mentioned in the recent podcast, it is highly doubtful that AMD would be selling Fiji at a loss at the 980 Ti's price. AMD could try to gouge its customers with a Titan X price tag, but I don't think that would be a good way to win back market share. It does puzzle me, however, that Nvidia seems to get a pass from consumers for releasing the 980 Ti so soon after the Titan X, seemingly spitting on their own customers' Titan X purchases. Being competitive with the 980 Ti's performance and price only seals the Titan X's horrible value proposition.

Enough about that; this thread was made to talk about Pascal expectations. Some of you pointed out the likelihood of Pascal being more limited, or delayed, due to various constraints. That would most likely leave Nvidia's lineup still mostly made up of rebrands and GDDR5-based chips. Would that poison the well, as some have said of AMD's 300 series? Or is a halo product like Fiji or Pascal enough to keep consumer sentiment positive?
 
Krogoth
Emperor Gerbilius I
Posts: 6049
Joined: Tue Apr 15, 2003 3:20 pm
Location: somewhere on Core Prime
Contact:

Re: Nvidia Pascal the holy trifecta?

Sun Jun 21, 2015 10:44 pm

Not even close.

The days of massive performance and feature bumps are long over. Silicon is running out of room, and the dang chips are so complex that it takes man-years of R&D to make a viable product, let alone a marketable one.

I expect Pascal to be Nvidia's next major architecture change since Fermi, but it isn't going to make 4K gaming at 120FPS+ possible. There simply isn't enough transistor budget to go around. Pascal will probably do a bunch of interesting stuff and be a far more capable GPGPU than its predecessors.
Last edited by Krogoth on Tue Jun 23, 2015 12:53 am, edited 1 time in total.
Gigabyte X670 AORUS-ELITE AX, Raphael 7950X, 2x16GiB of G.Skill TRIDENT DDR5-5600, Sapphire RX 6900XT, Seasonic GX-850 and Fractal Define 7 (W)
Ivy Bridge 3570K, 2x4GiB of G.Skill RIPSAW DDR3-1600, Gigabyte Z77X-UD3H, Corsair CX-750M V2, and PC-7B
 
ultima_trev
Gerbil XP
Posts: 363
Joined: Sat Mar 27, 2010 11:14 am
Contact:

Re: Nvidia Pascal the holy trifecta?

Mon Jun 22, 2015 3:07 am

I wonder if nV will continue the trend of bringing their mid-range chip to market first before unleashing the halo chip. I would guess GP204 could be up to 30% faster than Titan X and Fury X with 16 GB of HBM2, and nV would be "generous" in charging us $1,000 for this mid-range product to pay for their 14nm and HBM2 adoption costs. Then when GP200 comes, at nearly double the speed of anything on the market today, $2,000 high-end cards will finally be a reality. It will make us long for the days of $830 8800 Ultras or $700 7800 GTX 512s, which we considered stupidly overpriced in their day.
Ryzen 7 1800X - Corsair H60i - GA AB350 Gaming - 32GB DDR4 2933 at 16,16,16,36 - GTX 1080 at 1924 / 5264 (undervolted) - 250GB WD Blue SSD - 2TB Toshiba 7200rpm HDD
 
BlackDove
Gerbil Elite
Posts: 694
Joined: Sat Oct 19, 2013 11:41 pm

Re: Nvidia Pascal the holy trifecta?

Mon Jun 22, 2015 3:53 am

What no one seems to realize is that the large GPU from Nvidia has to compete with other accelerators that are being used in supercomputers.

The big GP100 chip is very similar in performance and overall design to Intel's Knights Landing, which is what it will be competing with. Both are around 3 TFLOPS double precision and 6 TFLOPS single precision.
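For anyone curious where figures like "3 TFLOPS DP / 6 TFLOPS SP" come from, they're just theoretical peaks: execution units x clock x FLOPs issued per unit per cycle. Here's a back-of-the-envelope sketch in Python; the unit count, clock, and 1:2 DP:SP ratio below are illustrative assumptions, not actual GP100 or Knights Landing specs (which aren't public yet).

```python
def peak_tflops(units, clock_ghz, flops_per_unit_per_cycle):
    """Theoretical peak throughput in TFLOPS.

    units: number of execution units (e.g. shader cores)
    clock_ghz: core clock in GHz
    flops_per_unit_per_cycle: FLOPs each unit issues per cycle
                              (2 for a fused multiply-add)
    """
    return units * clock_ghz * flops_per_unit_per_cycle / 1000.0

# Hypothetical chip: 3000 units at 1 GHz, one FMA (2 FLOPs) per unit per cycle.
sp = peak_tflops(3000, 1.0, 2)   # single precision
dp = sp / 2                      # assuming a 1:2 DP:SP rate
print(sp, dp)                    # 6.0 TFLOPS SP, 3.0 TFLOPS DP
```

Real-world sustained throughput is of course lower; memory bandwidth usually limits you well before you hit these peaks, which is exactly why HBM matters to both vendors.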

The other thing no one seems to realize is that Intel and Nvidia are both building pre-exascale computers and memory bandwidth is their focus.

GP100 should make a good consumer GPU, and it's the first big change for Nvidia in a long time; it should be the next GK110. These chips aren't designed to be consumer GPUs, but then, neither was GK110.
 
the
Gerbil Elite
Posts: 941
Joined: Tue Jun 29, 2010 2:26 am

Re: Nvidia Pascal the holy trifecta?

Mon Jun 22, 2015 5:58 am

Where is everyone getting their expectations for the Pascal generation of chips? Is anything out there not speculation? (Granted, a lot of what has been speculated makes sense; it just needs to be labeled as such.)
Dual Opteron 6376, 96 GB DDR3, Asus KGPE-D16, GTX 970
Mac Pro Dual Xeon E5645, 48 GB DDR3, GTX 770
Core i7 [email protected] Ghz, 32 GB DDR3, GA-X79-UP5-Wifi
Core i7 [email protected] Ghz, 16 GB DDR3, GTX 970, GA-X68XP-UD4
 
southrncomfortjm
Gerbil Elite
Posts: 574
Joined: Mon Nov 12, 2012 7:57 pm

Re: Nvidia Pascal the holy trifecta?

Mon Jun 22, 2015 6:03 am

Has there actually been any indication that Pascal will use HBM? HBM seems too new to be incorporated into non-AMD GPUs coming out this soon, especially since HBM is AMD's baby. Based on that, I think we'll see the first Pascal cards using GDDR5 and follow-on cards using HBM, at least once HBM gets past its 4GB capacity limit.

Either way, since I game from my couch on a 60-inch 1080p TV, I think I'll stick with my GTX 760 for at least two more years, which is about how long I expect it to take 65-inch 4K OLEDs to reach near-mainstream prices (sub-$3,000).
Gaming: i5-3570k/Z77/212 Evo/Corsair 500R/16GB 1600 CL8/RX 480 8GB/840 250gb, EVO 500gb, SG 3tb/Tachyon 650w/Win10