I do some work these days with commercial software that leverages GPUs. It's photogrammetry, actually, if anyone's interested.
Anyway, this leads me to test various system configurations, so I'm now in possession of:
EVGA GeForce GTX 1080Ti SC Black
MSI Radeon RX Vega 56
EVGA GeForce GTX 980Ti Classified
EVGA GeForce GTX 1050 (I think this is the SuperClocked one, but who cares, really)
I've only got two host machines so far and have been swapping the GPUs in and out like Jenga blocks. I've also got a Sonnet 550W TB3 external GPU enclosure I gotta find time to tinker with (so I can theoretically also leverage the Kaby Lake in my TB3 MacBook), but that's neither here nor there. One thing that bugs me: when I run a benchmark to confirm the drivers have been set up kosher, most programs only ever put load on GPU1. For example, I've got the Vega and the 1050 in machine no. 2, and the Vega shows up as GPU2. I just want to tell Windows to swap them around so I can run FurMark or Superposition on it, but they always run on the 1050.
At the end of the day, though, it's pretty clear that until software I care about really starts to leverage multi-GPU, there's no point sticking several of them inside a single box.
I suppose if I'm going to tinker with both OpenCL and CUDA code it'd be useful to have a green and a red card in the box, but that's pretty much it. Benchies and such mostly cater to the basic use case, and things like SLI/CrossFire need identical cards to pair up.
I just don't want to physically swap the positions of the cards to change which one Windows sees as GPU1. I'd settle for setting it in the BIOS or in Windows, and I'm happy to reboot; I'm just not happy having to physically reseat both cards just to run the same benchie on the other one!
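For CUDA-based tools, at least, there is a software workaround: the CUDA runtime honors the CUDA_VISIBLE_DEVICES environment variable, which controls which physical cards a CUDA app can see and in what order, so "1" makes the second card show up as device 0. Fair warning: this only affects CUDA applications, not DirectX/OpenGL benchies like FurMark or Superposition. A minimal sketch (the benchmark binary name is made up):

```python
import os
import subprocess

def pinned_env(device_index, base=None):
    """Build an environment where only the chosen GPU is visible to CUDA.

    CUDA enumerates whatever CUDA_VISIBLE_DEVICES lists, starting at
    device 0, so "1" makes the second physical card appear as device 0.
    """
    env = dict(os.environ if base is None else base)
    env["CUDA_VISIBLE_DEVICES"] = str(device_index)
    return env

def run_on_gpu(cmd, device_index):
    """Launch a CUDA-based tool pinned to a single card.

    Has no effect on DirectX/OpenGL applications, which pick their GPU
    through the graphics driver instead of the CUDA runtime.
    """
    return subprocess.run(cmd, env=pinned_env(device_index))

# Hypothetical usage -- the binary name is made up:
# run_on_gpu(["my_cuda_benchmark.exe"], 1)
```

Doesn't solve the GPU1 problem for the graphics-API benchmarks, but it at least lets you sanity-check each card from your own CUDA code without touching the hardware.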
All Google ever gives me is a bunch of threads full of people confused about how Optimus works.