I nabbed one. Looking forward to a much-needed upgrade from my Titan Xm for 4K gaming.
I'm thinking about programming my own stuff, but I rarely use doubles in my own code. Actually, I hate floating point in general because it's so wonky. But I'd think that 32-bit floats, with a range from roughly 2^-126 up to nearly 2^128 and ~7 decimal digits of precision, are enough for most cases.
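To put a rough number on "enough for most cases": float32 only has a 24-bit significand, which is where the ~7 digits comes from. A minimal C++ sketch (my own illustration, not anything fancy) of where that runs out:

```cpp
#include <cstdio>

int main() {
    float f = 16777216.0f;        // 2^24: last point where consecutive integers are exact in float32
    printf("%.1f\n", f + 1.0f);   // prints 16777216.0 -- the +1 is rounded away
    printf("%.1f\n", f + 2.0f);   // prints 16777218.0 -- float spacing at this scale is 2
    return 0;
}
```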
About the only thing I use doubles for, in the little bit of programming I do, is storing large lists of output values from random number generators (to make sure I get the same result with a constant seed).
Determinism, btw, isn't affected by single vs. double precision. Rounding errors accumulate (and can compound) in any floating-point computation, whatever the width. 64-bit integers are best for determinism, but if you want deterministic floats... going from single to double won't help at all!
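Quick sanity check in C++ (just a sketch of the point, not anything from a real codebase): the order-dependence that breaks determinism is there in double exactly as in float; the wider type only shrinks the error, it doesn't remove it.

```cpp
#include <cstdio>

int main() {
    double a = 1e16, b = -1e16, c = 1.0;
    printf("(a+b)+c = %g\n", (a + b) + c);  // prints 1
    printf("a+(b+c) = %g\n", a + (b + c));  // prints 0 -- c is absorbed by the huge b
    return 0;
}
```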
Instead, determinism comes from recognizing that floating-point operations are NOT associative: (A + B) + C can give a different result than A + (B + C) in the general case, so you have to pin down the order of evaluation. The easiest way to do that is to sort your numbers before performing any series of calculations. If you have a list of numbers [5, 1, 7, 2, 3], summing them as ((((1+2)+3)+5)+7) ensures determinism in floating-point settings (see the sketch below). Sorting from smallest to largest magnitude also helps keep the accumulated rounding error down.
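Here's a minimal C++ sketch of that sorting trick (the helper name is made up for illustration): sort by magnitude, then accumulate left to right, and any shuffle of the same inputs produces the same bits.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

// Sort by absolute value, then accumulate left to right.
// Any permutation of the same inputs now yields the exact same result.
float deterministic_sum(std::vector<float> v) {
    std::sort(v.begin(), v.end(),
              [](float a, float b) { return std::fabs(a) < std::fabs(b); });
    float sum = 0.0f;
    for (float x : v) sum += x;
    return sum;
}

int main() {
    // Two shuffles of the same list: same sorted order, same bit-for-bit sum.
    printf("%g\n", deterministic_sum({5.0f, 1.0f, 7.0f, 2.0f, 3.0f}));
    printf("%g\n", deterministic_sum({7.0f, 3.0f, 5.0f, 2.0f, 1.0f}));
    return 0;
}
```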
There are other ways to achieve bitwise determinism, but the sorting approach is both educational and practically useful.
If that's too much effort, then just represent your decimals as 64-bit fixed-point integers. Pretend every 64-bit integer X is a number in [0, 1) with value X / 2^64. It works surprisingly well in my experience, and it avoids the floating-point associativity / cancellation issues entirely.
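A rough C++ sketch of that fixed-point idea (names are my own, and it assumes values stay in [0, 1)): the stored integer X means X / 2^64, and addition is plain integer addition, which is exact and order-independent.

```cpp
#include <cstdint>
#include <cstdio>

// Interpret a uint64_t X as the fraction X / 2^64. Unsigned addition wraps
// mod 2^64, which is exactly arithmetic mod 1 on the represented values.
using fixed64 = uint64_t;

fixed64 from_double(double d) {   // assumes 0 <= d < 1
    return static_cast<fixed64>(d * 18446744073709551616.0);   // d * 2^64
}

double to_double(fixed64 x) {
    return static_cast<double>(x) / 18446744073709551616.0;    // x / 2^64
}

int main() {
    fixed64 a = from_double(0.25);
    fixed64 b = from_double(0.125);
    fixed64 sum = a + b;          // integer add: associative, no rounding, no order dependence
    printf("%.17g\n", to_double(sum));   // 0.375 exactly
    return 0;
}
```

The catch is range: everything has to fit in [0, 1) (or you reserve some high bits for an integer part), and multiplying two of these needs a 128-bit intermediate product.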