BIGADV eventually to go away

Come join the... uh... er... fold.

Moderators: just brew it!, farmpuma

BIGADV eventually to go away

Posted on Sat Feb 15, 2014 2:14 pm

I just noticed an announcement from January.

1. Beginning May 1st, 2014 BIGADV will require a minimum of 24 cores.
2. On Jan 31st, 2015, BIGADV ends.

I don't run BA work units now, but have thought about building a big server with lots of CPU cores for graphic rendering, and just putting it to work 24/7 for BAs when I'm not doing any rendering. Now with this announcement, it seems that if I did build such a machine this year, my bonus-benefit would eventually go away.

------------------
Questions below in the context of an example system loosely fitting this description:

A 2P Intel system with two 10-core CPUs and let's say two dual-GPU cards (such as the not-yet announced GTX 690 or R9 290 X2; each with GPUs running separately, not in Crossfire or whatever), resulting in something like this...

1 CPU folding slot with 10 cores × 2 CPUs × 2 (HT) = 40 threads
4 separate GPU slots
------------------

Question 1: Will massively multi-CPU-slot/multi-CPU-core server systems still be worthwhile to run with regular "A4" CPU Work Units?

Question 2a: For folding, does increasing the number of CPU cores eventually reach diminishing returns with "regular Work Units"?

Question 2b: If yes to 2a above, can virtualization be used to "virtually split" the CPU folding slot so as to accept more than one A4 CPU work unit, so that the machine can process more in parallel and overcome the point of diminishing returns? For example, running a Linux or Windows guest under Hyper-V, VirtualBox, or VMware alongside the host Windows system and setting aside some cores in one or more guest OS instances for CPU-only folding?

Example: Aforementioned 40 cores split into 20 cores for a Windows CPU folding unit and two 10-core units each running in a virtual Linux guest under Hyper-V, for a total of three CPU units...

Question 2c: If yes to 2a and 2b above, can I use workload balancing to allow the 3D graphic rendering program(s) to dynamically grab CPU cores from the Linux folders and give them back when the rendering project has completed?

Note: X17 Work Units do not commandeer/serialize a CPU core like X16 WUs did, and more than 95% of my WUs are X17, so I did all my math without removing any CPU cores from the total available.
BIF
Gerbil Jedi
Gold subscriber
 
 
Posts: 1600
Joined: Tue May 25, 2004 7:41 pm

Re: BIGADV eventually to go away

Posted on Sat Feb 15, 2014 2:38 pm

You were there when farmpuma brought that up. :P

And it all came from the big controversy that bumped the BIGADV requirement to 24 and then 32 cores in really quick succession. All the negativity just pushed PG over the edge; they said "screw it" and gave up. The fact of the matter is, donors come and go. With a more connected world and simply more PCs (yes, in overall count, not sales) compared to 10 years ago, there are simply more nodes available for all kinds of DC projects. And just like in entertainment and other areas of life, we have more competition for our attention. I for one am not going to proclaim the sky is falling when there are more DC projects spun up competing for our cycles. More variety of research only benefits mankind. So I think all this animosity towards people "defecting" is pretty silly.
The Model M is not for the faint of heart. You either like them or hate them.

Gerbils unite! Fold for UnitedGerbilNation, team 2630.
Flying Fox
Gerbil God
 
Posts: 24492
Joined: Mon May 24, 2004 2:19 am

Re: BIGADV eventually to go away

Posted on Sat Feb 15, 2014 3:27 pm

RE Question 2b: This was done way back around 2008-2009, when the Q6600 was in its heyday. For best performance, folders would run two Linux VMs inside Windows Vista (each running a dual-core-configured SMP client) and still see much better performance than running the Windows SMP client (I was one of them). Some even ran 4 VMs with single-thread clients.

Things have changed greatly since then; in particular, Pande Group eventually rewrote the Windows SMP codebase to fix this issue. There's no longer any advantage to splitting up cores, because the projects will take longer and earn decreased early-return bonuses. The entire point of the Bigadv many-core rigs was that they could chew through a single WU so fast as to achieve a super-high early-return bonus.
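
For the curious, the published points formula is what makes turnaround time matter so much: the quick-return bonus multiplies base points by the square root of how far ahead of the deadline the WU comes back. A sketch (the base points, k factor, and timings below are made-up illustrative numbers, not any real project's values):

```python
import math

def wu_points(base_points, k_factor, deadline_days, elapsed_days):
    """Folding@home quick-return bonus: points scale with the square
    root of (k * deadline / elapsed), floored at the base value."""
    bonus = math.sqrt(k_factor * deadline_days / elapsed_days)
    return base_points * max(1.0, bonus)

# Halving turnaround raises points per WU by sqrt(2)...
slow = wu_points(8000, 26.4, 6.0, 4.0)
fast = wu_points(8000, 26.4, 6.0, 2.0)
print(fast / slow)  # ~1.414

# ...and you also finish twice as many WUs per day, so PPD climbs
# superlinearly -- which is why many-core rigs chased Bigadv.
```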

Regarding Question 1: That's debatable. A single 780 Ti can achieve >200,000 PPD if it gets Core 17 projects, so several of them in one system would cost much less than a fast 4P rig and would give more PPD once the Bigadv projects cease. A second thing to consider is that Maxwell will supposedly offer a huge jump in compute performance over Kepler. Even a small boost from 880 cards would likely have a huge impact on Core 17 PPD numbers, given what even a minor OC does to a 780's PPD figures. (Some reports show as much as 300K for a single heavily OC'd 780 Ti.)
Kougar
Gerbil XP
 
Posts: 358
Joined: Tue Dec 02, 2008 2:12 am
Location: Texas

Re: BIGADV eventually to go away

Posted on Sat Feb 15, 2014 4:11 pm

Kougar wrote:Things have changed greatly since then; in particular, Pande Group eventually rewrote the Windows SMP codebase to fix this issue. There's no longer any advantage to splitting up cores, because the projects will take longer and earn decreased early-return bonuses. The entire point of the Bigadv many-core rigs was that they could chew through a single WU so fast as to achieve a super-high early-return bonus.

The problem is that if you have a >8-core system running just one instance of non-Bigadv WinSMP, you don't get projects that use all the cores anyway. So in that case you need to run multiple instances to use all the cores. The thing that people got pissed off about is that even with multiple SMP WUs running, it still won't beat the BIGADV WUs. The scaling is not linear. And for the people who built those server-grade monsters for dedicated folding, the costs to buy and run become "not worth it" for some.
The Model M is not for the faint of heart. You either like them or hate them.

Gerbils unite! Fold for UnitedGerbilNation, team 2630.
Flying Fox
Gerbil God
 
Posts: 24492
Joined: Mon May 24, 2004 2:19 am

Re: BIGADV eventually to go away

Posted on Sat Feb 15, 2014 6:52 pm

Flying Fox wrote:You were there when farmpuma brought that up. :P


Yeah, I suppose I was, but my comment was only about the T-shirt. It's possible that I was merely impressed with myself for how clever I thought I was being... :oops:

Kougar wrote:RE Question 2b: This was done way back around 2008-2009, when the Q6600 was in its heyday. For best performance, folders would run two Linux VMs inside Windows Vista (each running a dual-core-configured SMP client) and still see much better performance than running the Windows SMP client (I was one of them). Some even ran 4 VMs with single-thread clients...


Interesting; this was precisely my thinking!

Flying Fox wrote:The problem is that if you have a >8-core system running just one instance of non-Bigadv WinSMP, you don't get projects that use all the cores anyway. So in that case you need to run multiple instances to use all the cores. The thing that people got pissed off about is that even with multiple SMP WUs running, it still won't beat the BIGADV WUs. The scaling is not linear. And for the people who built those server-grade monsters for dedicated folding, the costs to buy and run become "not worth it" for some.


Well, my current 6-core rig is using all twelve threads all the time. Maybe not all at 100%, but even the least-loaded ones are usually close to 80%, and they are all participating in the CPU folding process.

I guess I'm just curious whether there is a point of diminishing returns with regard to available bonus points and the number of CPU cores available to work on a single task.
BIF
Gerbil Jedi
Gold subscriber
 
 
Posts: 1600
Joined: Tue May 25, 2004 7:41 pm

Re: BIGADV eventually to go away

Posted on Sun Feb 16, 2014 2:47 pm

This was a huge controversy over at the Folding Forums. First off, we all know points don't mean anything other than as a way of tracking progress. But the consensus of the huge conversation was that it was not worth the money spent on systems and electricity to run a Bigadv machine on the regular CPU WUs. The point return is terrible compared to BA or GPU WUs.

The Pande Group tried to ignore the issue. When that didn't work, they made some comments that did not really say anything. They finally acknowledged there is a huge issue with the way they communicate with the donors and promised to fix it. They have said the same thing every two years for the past six or eight years. Whenever someone made a post pointing out the flaws or lack of information in the Pande Group posts, the forum moderators censored or deleted such posts and banned a lot of people. This naturally brought on another round of complaints that just got ignored. A lot of people from across all the teams, but mainly the top 10 teams, got fed up and shut their systems down or moved over to running projects on BOINC, where their multi-core systems were of great use. :o

As a donor, it's just downright arrogant to me when a project I am donating my computer time and electricity to ignores my concerns and a lot of others'. Then to resort to censorship and banning people when they do not drop the subject tells me all I need to know about the project leaders' attitude concerning donors. Then add in the Pande Group's ever-persistent fanboys/disciples insisting anything Pande does is all to the good and we are all ignorant fools for not falling into line. Yes, I was called a fool and a troublemaker in a PM from an F@H forum mod because I asked for a clearer answer than we got from PG over the BA situation. I no longer have access to that PM or much of anything else on the F@H forums, so I can't produce evidence; take my word for it or not. Does not matter to me since I am no longer folding. :evil:

There are several huge posts on several team forums about the issue. About halfway through them you will start to see the discussion about censorship and banning pop up. If you're curious, go check out the forums for the EVGA, [H]ardOCP, AMD Zone, and many other teams.

I don't like to have my concerns ignored. Nor do I like to be censored and banned when I don't agree with someone and try to keep a civil debate going. I stopped folding in January and moved my two systems over to BOINC projects. There are plenty of medical research projects to be done via the BOINC client. I am running GPUGRID; it uses the GPU to run projects studying proteins much like F@H does. I am also running POEM@Home, which also studies proteins and their relation to several diseases. There are many more projects you can choose from in the medical sciences, along with mathematics and astronomy if that's more to your liking. Oh, and no, it's not all about finding aliens or asteroids like some around here seem to think. :roll:

Here are some links for those interested in what was and is still being said about the whole situation on other forums. Get comfy, because there is a lot of reading there.

http://forums.evga.com/tm.aspx?m=2079048
http://www.amdzone.com/phpbb3/viewtopic.php?f=521&t=140030
http://hardforum.com/showthread.php?t=1797336
Khali
Gerbil XP
 
Posts: 336
Joined: Thu Dec 20, 2012 5:31 pm

Re: BIGADV eventually to go away

Posted on Sun Feb 16, 2014 5:39 pm

K, I've read up a little on this, and it really does seem like there's a management problem going on over at Stanford. It's a shame that the simplest fix seems to be the one step the powers-that-be don't want to take: bring in an effective manager. Adding only a part-time community rep is a half-measure at best.

Reading through the links (and a few unrelated topics over there) took me off on a wild mental tangent and brought up some questions that I haven't really found answers to:

1) Exactly how big are the individual WUs, and how much work (calculations/sec) gets done on each individual unit?
2) How much RAM ends up being used on the graphics card, if you're doing GPU folding?
3) Why does GPU folding end up using ~500Mb/sec of PCI-E bandwidth? I saw that figure in a post on FF regarding someone wanting to run 6 GPUs in a single system with an ASRock MB.
4) Regarding using non-PC devices (like bitcoin miners), I understand that FPGAs and ASICs like the ones Butterfly Labs makes/bought are unsuitable, due to the extreme difference in what types of work they do and are optimized for. I read a little about Curecoin, and was wondering why someone couldn't create an FPGA or ASIC that DOES accelerate the type of work GROMACS does. Why couldn't someone gang a bunch of Raspberry Pis together (as has been done for other purposes) and use those for the grunt work (besides the fact that F@H doesn't have the resources to devote to creating a Raspberry Pi F@H client)? Like either 1 WU per Pi, or some type of "pre-processor" to split up/distribute/recombine chunks of WUs amongst all the Pis simultaneously.
5) The F@H Wikipedia page has some conflicting statements on it (I know...), like "GPU hardware is difficult to use for non-graphics tasks and usually requires significant algorithm restructuring and an advanced understanding of the underlying architecture" and then goes right to "...OpenMM-based GPU simulations do not require significant modification but achieve performance nearly equal to hand-tuned GPU code, and greatly outperform CPU implementations". Is that suggesting that someone (like Tim Sweeney or Carmack) could be helpful to Stanford in getting more performance out of GPUs than is currently occurring, or is OpenMM as good as it gets for now?
Hz so good
Gerbil Elite
 
Posts: 711
Joined: Wed Dec 04, 2013 5:08 pm

Re: BIGADV eventually to go away

Posted on Sun Feb 16, 2014 6:29 pm

Flying Fox wrote:The problem is that if you have >8 core systems and running just one instance of non-Bigadv WinSMP, you don't get projects that use all the cores anyway. So in that case you need to run multiple instances to use all the cores. The thing that people got pissed off about, is that with even multiple SMP WUs working, it still won't beat the BIGADV WUs. The scale is not linear. And for people who build those server grade monsters for dedicated folding, the costs to buy and run become "not worth it" for some.


Of course the scaling wasn't linear; it wasn't designed or intended to be. Bigadv was a bonus system designed exclusively for high-end servers when it came out in 2009. It was never intended for consumers to be buying into the Bigadv platform, and each time they did and got burned on the updated hardware requirements, there was a ton of fallout. It was meant to be a top-10% thing, but that part got ignored and quickly buried under the rush for points (not pointing fingers; I myself folded Bigadv projects on an OC'd 980X until that no longer met the requirements). I'm glad they will be doing away with it. So many people have gotten upset over Bigadv that they've quit F@H entirely or moved on to alternate distributed computing projects.

Just remember, when Bigadv first came out in 2009, the top-end chips were still quad-core! Bigadv was intended for servers, so it would be expected that as servers began adopting multiple 6+ core chips, the Bigadv project would adopt a matching enterprise update cadence, something even most Bigadv users can't afford to keep up with. That, and there were problems with quad-core users desperate for the extra PPD running Bigadv anyway, something Pande Group could only stop by upping the processor requirements. Back then a fast-clocked i7 920 was only barely able to meet the early-bonus deadline, and that assumed 24/7 folding time. Most didn't have it and missed the ERB completely, which undermined what Bigadv was intended for.

Didn't know that about WinSMP; I thought it was capable of handling a 12-core/24-thread system with one instance. At least the v7 client is designed to run multiple CPU + GPU projects concurrently; it's just a little more complex to set up initially. I still remember the original dual-core SMP client: it assumed it was the only one on the system, so running two of them together could corrupt things without some editing and directory modifications. :(

BIF wrote:I guess I'm just curious whether there is a point of diminishing returns with regard to available bonus points and the number of CPU cores available to work on a single task.


Going from a quad- to a hex-core adds 50% more cores, but it's hard to tell if it delivers 50% more PPD. Results are all over the place even when comparing within the same project number. There are even older processors outperforming newer ones at higher clocks on the same project, so it looks like either the info is inaccurate or some systems were not optimized, had load issues, or weren't actually running 24/7.

I would venture to say GPUs still offer better scaling. Every GPU added contributes its full production as long as there's an empty core/thread to feed it. The nice thing about GPUs is that they stay fairly consistent PPD-wise as the machines age, with a nice uptick as the software eventually becomes able to fully load the card and better-optimized cores are developed, such as Core 17. GPUs are not as reliant on early-return bonuses as processors have been.
Kougar
Gerbil XP
 
Posts: 358
Joined: Tue Dec 02, 2008 2:12 am
Location: Texas

Re: BIGADV eventually to go away

Posted on Sun Feb 16, 2014 7:00 pm

Hz so good wrote:1) Exactly how big are the individual WUs, and how much work (calculations/sec) gets done on each individual unit?


Downloaded projects are under 20 MB in size, usually much smaller I believe. Finished projects for upload are around 50-100 MB if I remember correctly; it's been a while since I monitored my net traffic for it.

Hz so good wrote:2) How much RAM ends up being used on the graphics card, if you're doing GPU folding?


Assuming GPU-Z can measure this accurately (I have no idea whether it does), my VRAM usage went from 381 MB to 444 MB when I turned on my GTX 480 GPU client. F@H has always been designed to have a small download/upload footprint, to minimize the load on donors' networks as well as the infrastructure required to support the F@H project as a whole.

Hz so good wrote:3) Why does GPU folding end up using ~500Mb/sec of PCI-E bandwidth? I saw that figure in a post on FF regarding someone wanting to run 6 GPUs in a single system with an ASRock MB.


It doesn't. I've heard this perpetuated in a few places, but it's not true. The compute work is mostly self-contained within the card, and cards don't share workloads. :P I'm sure there's a ton of data being crunched, but I highly doubt it's being sent over the PCIe bus. If it were, where would it be going? The GPU+CPU working directories are not even 500 MB in size combined, and F@H uses less than 200 MB of RAM per project.

Hz so good wrote:4) Regarding using non-PC devices (like bitcoin miners), I understand that FPGAs and ASICs like the ones Butterfly Labs makes/bought are unsuitable, due to the extreme difference in what types of work they do and are optimized for. I read a little about Curecoin, and was wondering why someone couldn't create an FPGA or ASIC that DOES accelerate the type of work GROMACS does. Why couldn't someone gang a bunch of Raspberry Pis together (as has been done for other purposes) and use those for the grunt work (besides the fact that F@H doesn't have the resources to devote to creating a Raspberry Pi F@H client)? Like either 1 WU per Pi, or some type of "pre-processor" to split up/distribute/recombine chunks of WUs amongst all the Pis simultaneously.


Any sort of dedicated hardware would require dedicated development specifically for it, which is why the ASICs being developed for cryptocoins are not even inter-compatible. Even though bitcoin and litecoin ASICs only differ in the hash function used and are doing 99% of the same work, engineers that reported on them stated the ASICs are still not interchangeable between currencies. My guess is that a GROMACS-optimized ASIC wouldn't be specific enough to the variety of projects F@H utilizes to offer any benefits; however, I am not an engineer, so that's just my uneducated guess. Building for GROMACS would be too general a target, and because F@H's own implementation of OpenMM is ever-changing, dedicated ASICs may become obsolete once new cores with new versions of OpenMM are released.

Hz so good wrote:5) The F@H Wikipedia page has some conflicting statements on it (I know...), like "GPU hardware is difficult to use for non-graphics tasks and usually requires significant algorithm restructuring and an advanced understanding of the underlying architecture" and then goes right to "...OpenMM-based GPU simulations do not require significant modification but achieve performance nearly equal to hand-tuned GPU code, and greatly outperform CPU implementations". Is that suggesting that someone (like Tim Sweeney or Carmack) could be helpful to Stanford in getting more performance out of GPUs than is currently occurring, or is OpenMM as good as it gets for now?


Those statements don't directly conflict, although I would dispute just how optimized OpenMM is compared to "hand-tuned GPU code". Core 15 and Core 17 both use OpenMM, but optimization of the OpenMM code, and of how F@H uses it, greatly increased the performance of Core 17 projects on recent NVIDIA GPUs. https://folding.stanford.edu/home/sneak ... u-core-17/
Kougar
Gerbil XP
 
Posts: 358
Joined: Tue Dec 02, 2008 2:12 am
Location: Texas

Re: BIGADV eventually to go away

Posted on Mon Feb 17, 2014 7:35 am

Kougar wrote:
Hz so good wrote:Regarding using non-PC devices (like bitcoin miners), I understand that FPGAs and ASICs like the ones Butterfly Labs makes/bought are unsuitable, due to the extreme difference in what types of work they do and are optimized for. I read a little about Curecoin, and was wondering why someone couldn't create an FPGA or ASIC that DOES accelerate the type of work GROMACS does. Why couldn't someone gang a bunch of Raspberry Pis together (as has been done for other purposes) and use those for the grunt work (besides the fact that F@H doesn't have the resources to devote to creating a Raspberry Pi F@H client)? Like either 1 WU per Pi, or some type of "pre-processor" to split up/distribute/recombine chunks of WUs amongst all the Pis simultaneously.
Any sort of dedicated hardware would require dedicated development specifically for it, which is why the ASICs being developed for cryptocoins are not even inter-compatible. Even though bitcoin and litecoin ASICs only differ in the hash function used and are doing 99% of the same work, engineers that reported on them stated the ASICs are still not interchangeable between currencies. My guess is that a GROMACS-optimized ASIC wouldn't be specific enough to the variety of projects F@H utilizes to offer any benefits; however, I am not an engineer, so that's just my uneducated guess. Building for GROMACS would be too general a target, and because F@H's own implementation of OpenMM is ever-changing, dedicated ASICs may become obsolete once new cores with new versions of OpenMM are released.


The FPGAs and ASICs that mine bitcoin are just doing SHA-256 hashes on a massive scale. That's a standard algorithm that does one thing, and it was likely designed/selected to minimize the cost of a hardware implementation.
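
A toy illustration of that fixed-function loop (illustrative only: real Bitcoin headers have a defined 80-byte layout and a full 256-bit target, not the leading-zero-bytes shortcut used here):

```python
import hashlib

def double_sha256(data: bytes) -> bytes:
    # Bitcoin hashes block headers twice through SHA-256.
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def mine(header: bytes, zero_bytes: int) -> int:
    """Brute-force a nonce until the hash starts with enough zero
    bytes -- the single hardwired operation a mining ASIC repeats."""
    nonce = 0
    target = b"\x00" * zero_bytes
    while True:
        digest = double_sha256(header + nonce.to_bytes(4, "little"))
        if digest.startswith(target):
            return nonce
        nonce += 1

# A 1-byte target is found after ~256 attempts on average.
print(mine(b"example header", 1))
```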

Litecoin, on the other hand, uses scrypt, which is a key derivation function that was explicitly designed to minimize the feasibility of FPGA/ASIC implementations by heavily relying on a time-memory trade-off. In other words, without sufficient memory available, computation time increases greatly. Since implementing memory chews up considerable amounts of gate/die space, scrypt makes FPGA/ASIC non-competitive. Forgoing that memory makes them very slow, as they essentially have to repeatedly recompute things, but implementing that memory means that a lot of parallelism is lost because you don't have the space. Thus the resultant design is still uneconomical/slow. That is why everyone mines litecoin with GPUs, as they have fast memory as part of their design (and they use AMD GPUs in particular because of their integer performance).
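
To put rough numbers on that trade-off: scrypt's working set is about 128 * r * N bytes per hashing instance, and Python's standard library exposes the function directly. The sketch below uses Litecoin's published parameters (N=1024, r=1, p=1); the memory figure is the usual rule-of-thumb estimate, not an exact die-area cost:

```python
import hashlib

def scrypt_memory_bytes(n: int, r: int) -> int:
    # Rule of thumb: scrypt's scratch array needs ~128 * r * N bytes.
    return 128 * r * n

print(scrypt_memory_bytes(n=1024, r=1))  # 131072 bytes = 128 KiB per instance

# Deriving one scrypt hash with Litecoin-style parameters (Litecoin
# reportedly feeds the block header in as both password and salt):
key = hashlib.scrypt(b"block header", salt=b"block header",
                     n=1024, r=1, p=1, dklen=32)
print(len(key))  # 32
```

That 128 KiB per parallel instance is exactly the die-space problem described above: every extra hashing pipeline on the chip has to carry its own scratch memory.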

That should give you adequate background to understand why there aren't any ASICs for Pande Group's work. For one thing, the algorithm may not lend itself well to FPGAs/ASICs because of its memory usage or other considerations.

But, even more importantly: once you've implemented an ASIC, that's what you've got. Forever. An ASIC to do GROMACS, or any other type of work unit, would mean that Pande could never change that work unit. If they find a bug, an improvement, or simply want to adjust it to look for something similar or slightly different, oh well. Too late. :( That's what the S in ASIC means: specific. *Very* specific. That isn't a problem for something like bitcoin, because SHA-256 is a very specific and very standard algorithm that will never change, by design. It can't, because then people's hashes wouldn't match and bitcoin wouldn't work. :wink:

FPGAs can be reprogrammed, but they are generally considerably slower. You would also need someone very conversant in an HDL, and you'd need a product designed to be reprogrammed by the end consumer. I don't know enough to say how common that is, but I imagine it isn't very common at all.

As to the Raspberry Pi, what's the point? The performance isn't there. It'd be a waste of effort and resources to develop the software and interconnects to turn them into a mini-"super"computer.
Glorious
Darth Gerbil
Gold subscriber
 
 
Posts: 7877
Joined: Tue Aug 27, 2002 6:35 pm

Re: BIGADV eventually to go away

Posted on Mon Feb 17, 2014 8:57 am

Kougar wrote:Going from a quad- to a hex-core adds 50% more cores, but it's hard to tell if it delivers 50% more PPD...

...I would venture to say GPUs still offer better scaling. Every GPU added contributes its full production as long as there's an empty core/thread to feed it. The nice thing about GPUs is that they stay fairly consistent PPD-wise as the machines age, with a nice uptick as the software eventually becomes able to fully load the card and better-optimized cores are developed, such as Core 17. GPUs are not as reliant on early-return bonuses as processors have been.


I think it's safe to say that 50% more cores probably does not deliver 50% more PPD, because parallelism while working on one unit does indeed have a point of diminishing returns; I just don't know where that point is or how quickly it gets bad.

I should clarify that I want to build a high-core system anyway for the graphic rendering which would benefit from more cores. The folding just runs when I'm not making stuff. 8) I just wanted to know if I could mitigate that point of diminishing returns by use of virtualization for some of the CPU folding, or if it's maybe not even worth worrying about with my example 40 CPU-core system.
BIF
Gerbil Jedi
Gold subscriber
 
 
Posts: 1600
Joined: Tue May 25, 2004 7:41 pm

Re: BIGADV eventually to go away

Posted on Mon Feb 17, 2014 2:31 pm

BIF wrote:I think it's safe to say that 50% more cores probably does not deliver 50% more PPD, because parallelism while working on one unit does indeed have a point of diminishing returns; I just don't know where that point is or how quickly it gets bad.


It does, but the early-return bonuses offset it to some extent. With the original i7 920 vs. i7 980X Bigadv comparison, it would have been worth it purely because of the ERB benefit. These days I'm honestly not sure, because comparing even the same project is nigh impossible except in a general/broad overview.

BIF wrote:I should clarify that I want to build a high-core system anyway for the graphic rendering which would benefit from more cores. The folding just runs when I'm not making stuff. 8) I just wanted to know if I could mitigate that point of diminishing returns by use of virtualization for some of the CPU folding, or if it's maybe not even worth worrying about with my example 40 CPU-core system.


There's zero point in virtualization. The regular v7 client can run multiple SMP projects, and it is supposed to automatically handle core affinity. Just determine the max number of threads one SMP client can fully utilize, then manually set each client to that number of threads, adding enough clients to max out the system. The v7 client will be able to pause/start them all for simple management.
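
A minimal sketch of that setup in the v7 client's config.xml: two 20-thread CPU slots on a 40-thread box instead of one big slot. (The tag names follow v7 conventions as I remember them; verify against your installed client's documentation before relying on this.)

```xml
<config>
  <!-- Split a 40-thread machine into two 20-thread CPU folding slots. -->
  <slot id='0' type='CPU'>
    <cpus v='20'/>
  </slot>
  <slot id='1' type='CPU'>
    <cpus v='20'/>
  </slot>
</config>
```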
Kougar
Gerbil XP
 
Posts: 358
Joined: Tue Dec 02, 2008 2:12 am
Location: Texas

Re: BIGADV eventually to go away

Posted on Mon Feb 17, 2014 9:23 pm

Kougar wrote:There's zero point in virtualization. The regular v7 client can run multiple SMP projects, and it is supposed to automatically handle core affinity. Just determine the max number of threads one SMP client can fully utilize, then manually set each client to that number of threads, adding enough clients to max out the system. The v7 client will be able to pause/start them all for simple management.


Holy cow, just for grins I added a CPU client, left it set to -1 affinity, saved my config, and poof, the client downloaded a second CPU work unit! Eeeek! I didn't know it would do that.

It's currently running a 7515 (a3) and a 9006 (a4) in addition to the X17 that's running in the GPU slot.
BIF
Gerbil Jedi
Gold subscriber
 
 
Posts: 1600
Joined: Tue May 25, 2004 7:41 pm

