 
Holdolin
Gerbil In Training
Topic Author
Posts: 4
Joined: Sun Aug 08, 2010 11:42 am

Low PPD on diskless farm

Thu Aug 26, 2010 1:51 pm

Hello all,

Thanks to the wonderfully easy guide Notfred wrote about building a diskless farm, I set one up. I made 4 nodes from parts left behind by my ever-raging upgrade addiction. As it stands now, the 4 nodes consist of 1 i7 930, 1 PII 1055, 1 PII 955, and 1 PII 940. Each of these systems did ~6-10K PPD when it was folding in a box on its own.

Having put them in the diskless farm, the PII 940 and 955 are down to ~600 PPD, and the 1055 is at ~5000 PPD. Before I came here buggin' y'all, I took a look at my setup and noticed that the -verbosity 9 flag is not in the system. Could that cause the dramatic drop in PPD? If so, how would I put it in? If that's not the problem, any ideas on what it might be? Thank you for any/all help :)
 
Evaders99
Gerbil First Class
Posts: 154
Joined: Fri May 16, 2008 10:48 am

Re: Low PPD on diskless farm

Thu Aug 26, 2010 2:52 pm

Did you use the -smp parameter?
 
Holdolin
Gerbil In Training
Topic Author
Posts: 4
Joined: Sun Aug 08, 2010 11:42 am

Re: Low PPD on diskless farm

Thu Aug 26, 2010 2:53 pm

Yes, the .cfg file is set to smp=8.
 
Holdolin
Gerbil In Training
Topic Author
Posts: 4
Joined: Sun Aug 08, 2010 11:42 am

Re: Low PPD on diskless farm

Thu Aug 26, 2010 10:02 pm

OK, I set SMP to 2 in the .cfg file, and that helped a lot. Now, would there be an advantage to setting it to 4 (the smallest number of cores on any of my machines), aside from fewer instances running?
 
Ragnar Dan
Gerbil Elder
Posts: 5380
Joined: Sun Jan 20, 2002 7:00 pm

Re: Low PPD on diskless farm

Thu Aug 26, 2010 11:38 pm

Hi, and welcome to the TR folding forum.

With the way the SMP Work Units are given bonus points based on how quickly they're returned to Stanford by the client machines, it's probably optimal to put the correct number of cores per machine in each of their config files.
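(For reference, the quick-return bonus works out to roughly the following; this is a sketch of Stanford's formula as generally described, where k and the deadline are per-WU values assigned by the project:

  \text{final points} = \text{base points} \times \max\left(1,\ \sqrt{k \cdot \tfrac{\text{deadline}}{\text{time taken}}}\right)

The faster a WU is returned relative to its deadline, the bigger the multiplier, which is why giving an instance fewer cores than the machine actually has can cost a disproportionate amount of PPD.)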

Your output still seems low to me, given that my i7 930, overclocked to 3.8 GHz for the time being, produces up to nearly 30,000 points per day on some WU's. You probably aren't running it with the -bigadv flag in your config file. That's definitely very helpful for point (and science) production, thus the large bonuses. But it means the machine will work on a WU for quite a long time, which means it should probably have a backup scheme of some sort. My i7 machine had been taking about 2 days and ~10 hours per WU for the last several, but for some reason the one I just turned in Thursday (earlier today) and the one it's currently folding look like they're only going to be worth ~60,000 points, which is barely over 20,000 points per day. The one machine I have that's running notfred's client uses a 4 GiB USB flash drive to backup its data every once in a while (probably every 10 minutes, but I forget off hand). Anyway, normally I'd been getting over 70,000 points when I turn in one of the -bigadv WU's, and hopefully that will happen again. But that's my problem, not yours. :wink:

(And I hope you're folding for TR's team 2630. :D)
 
Holdolin
Gerbil In Training
Topic Author
Posts: 4
Joined: Sun Aug 08, 2010 11:42 am

Re: Low PPD on diskless farm

Fri Aug 27, 2010 5:40 pm

Ragnar Dan wrote:
Hi, and welcome to the TR folding forum.

With the way the SMP Work Units are given bonus points based on how quickly they're returned to Stanford by the client machines, it's probably optimal to put the correct number of cores per machine in each of their config files.

Your output still seems low to me, given that my i7 930, overclocked to 3.8 GHz for the time being, produces up to nearly 30,000 points per day on some WU's. You probably aren't running it with the -bigadv flag in your config file. That's definitely very helpful for point (and science) production, thus the large bonuses. But it means the machine will work on a WU for quite a long time, which means it should probably have a backup scheme of some sort. My i7 machine had been taking about 2 days and ~10 hours per WU for the last several, but for some reason the one I just turned in Thursday (earlier today) and the one it's currently folding look like they're only going to be worth ~60,000 points, which is barely over 20,000 points per day. The one machine I have that's running notfred's client uses a 4 GiB USB flash drive to backup its data every once in a while (probably every 10 minutes, but I forget off hand). Anyway, normally I'd been getting over 70,000 points when I turn in one of the -bigadv WU's, and hopefully that will happen again. But that's my problem, not yours. :wink:

(And I hope you're folding for TR's team 2630. :D)


Thanks for the info. How do I set -bigadv in the .cfg? Also, how would I set the SMP value to the correct number of cores? My farm currently has 2 quad-core CPUs, 1 hex-core, and the i7. I'll post my .cfg file below for clarification as to exactly how I have my host set up. Again, thanks for the help :)

# PXELINUX boot menu; the DEFAULT line picks which LABEL the nodes boot.
DEFAULT fold64
TIMEOUT 150
PROMPT 0
DISPLAY fold.txt

# 32-bit client (not the default)
LABEL fold
KERNEL kernel32
APPEND initrd=initrd USER=notfred2630 TEAM=2630 PASSKEY= BIG=big MEM= ADVMETHODS=yes SMPCPUS=4 BACKUP=15 REBOOT=enabled INSTALL=yes BENCHMARK=no BLANK=0 SAMBA=2 GROUP=DISKLESS PROXY_HOST= PROXY_PORT= PROXY_USER= PROXY_PASS= INTF= IP= MASK= GATEWAY= DNS= TFTP= SHELL=yes

# 64-bit client; this is the entry actually booted (note SMPCPUS=2, per the post above)
LABEL fold64
KERNEL kernel64
APPEND initrd=initrd USER=Holdolin TEAM=***** PASSKEY=**** BIG=big MEM= ADVMETHODS=yes SMPCPUS=2 BACKUP=5 REBOOT=enabled INSTALL=yes BENCHMARK=no BLANK=0 SAMBA=2 GROUP=DISKLESS PROXY_HOST= PROXY_PORT= PROXY_USER= PROXY_PASS= INTF= IP= MASK= GATEWAY= DNS= TFTP= SHELL=yes

# Benchmark entries (BENCHMARK=yes), 32- and 64-bit
LABEL benchmark
KERNEL kernel32
APPEND initrd=initrd USER=notfred2630 TEAM=2630 PASSKEY= BIG=big MEM= ADVMETHODS=yes SMPCPUS=4 BACKUP=15 REBOOT=enabled INSTALL=yes BENCHMARK=yes BLANK=0 SAMBA=2 GROUP=DISKLESS PROXY_HOST= PROXY_PORT= PROXY_USER= PROXY_PASS= INTF= IP= MASK= GATEWAY= DNS= TFTP= SHELL=yes

LABEL benchmark64
KERNEL kernel64
APPEND initrd=initrd USER=notfred2630 TEAM=2630 PASSKEY= BIG=big MEM= ADVMETHODS=yes SMPCPUS=4 BACKUP=15 REBOOT=enabled INSTALL=yes BENCHMARK=yes BLANK=0 SAMBA=2 GROUP=DISKLESS PROXY_HOST= PROXY_PORT= PROXY_USER= PROXY_PASS= INTF= IP= MASK= GATEWAY= DNS= TFTP= SHELL=yes
 
notfred
Maximum Gerbil
Posts: 4610
Joined: Tue Aug 10, 2004 10:10 am
Location: Ottawa, Canada

Re: Low PPD on diskless farm

Fri Aug 27, 2010 7:31 pm

There's no easy way to set bigadv on my diskless stuff currently; you would need to run with the shell prompt enabled, stop the current client, and restart it with -bigadv by hand. I thought Stanford was pulling the bigadv WUs from the Linux boxes and giving them to Windows these days anyway.
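(For the curious, the by-hand restart would look something like the sketch below. It assumes the client binary is named fah6 and lives under /etc/folding/1, as described later in this thread; paths and flags are worth double-checking against your own image first.

  # at a node's shell prompt, as root
  killall fah6                           # stop the running client
  cd /etc/folding/1                      # first (or only) folding instance
  ./fah6 -smp 8 -bigadv -verbosity 9 &   # restart by hand with bigadv

The -smp count should match the machine, e.g. 8 for an i7 930 with Hyper-Threading enabled.)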
 
Ragnar Dan
Gerbil Elder
Posts: 5380
Joined: Sun Jan 20, 2002 7:00 pm

Re: Low PPD on diskless farm

Tue Aug 31, 2010 12:12 pm

notfred wrote:
There's no easy way to set bigadv on my diskless stuff currently; you would need to run with the shell prompt enabled, stop the current client, and restart it with -bigadv by hand. I thought Stanford was pulling the bigadv WUs from the Linux boxes and giving them to Windows these days anyway.

Sorry I haven't replied sooner. As notfred points out, you can't run the -bigadv work units on Linux for the time being. I forgot about the change, which is supposed to be temporary while they fix a bug, but even so you should still be able to get better output using notfred's software. It should detect that you've got multiple cores and automatically run with the -smp switch. You choose the number of cores it uses per instance on the setup page before downloading his software.

What you put in the previous post looks like the PXE boot config, which isn't the config file I meant. Getting to the file you want and changing it takes a few steps.

You have to have enabled a login shell. Then you log in as the username "root" and "cd" to "/etc/folding". From there it varies depending on your setup. If you have a quad running one instance of folding on all 4 cores, the list command, "ls", will show a directory under there called "1". Change into it by typing "cd 1" and you should be where you need to be. If you have a 6-core machine, I'm not sure what notfred's will do, but it may have an instance for 4 cores and one for the remaining 2; presumably the latter would be located under /etc/folding/2. If that's the case, I think it would make killing the running folding processes a bit more complicated.

Anyway, you have to stop the folding client before you edit the client.cfg file (which is the config file I meant in my prior post); otherwise the folding client will overwrite your changes the next time it updates the file with the current count of WUs completed. To stop it, type "killall fah6", and then you can edit client.cfg. The problem is, I don't know the easiest editor for you to use, and hopefully someone else here recalls what's available. The one I use is "vi", but it dates from the days when low-bandwidth terminal connections to Unix machines were common; it uses one-letter commands and is in general a bit of a pain to pick up when easier options exist. I think "nano" may be available, and it's commonly recommended for new *nix users, but I'm not sure. Anyway, under the [settings] heading, at the bottom of the listed options, you can add the line "extra_parms=-verbosity 9". You can add more than one parameter there, each separated by a space. The whole procedure is sketched below.
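(Putting those steps together, a session on one of the nodes would look roughly like this; it's a sketch that assumes a single instance under /etc/folding/1 and that nano is present on the image:

  # log in at the node's console as root, then:
  cd /etc/folding/1    # config directory for the first folding instance
  killall fah6         # stop the client BEFORE editing client.cfg
  nano client.cfg      # or vi if nano isn't available

  # in client.cfg, under the [settings] heading, add:
  #   extra_parms=-verbosity 9
  # multiple parameters go on the same line, separated by spaces

The client should then be restarted, or the node rebooted, so it picks up the new settings.)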

Until notfred tells us how it divides things up, and hopefully the best way to stop the folding clients individually, I'd leave your 6-core 1055 alone for the time being.
 
notfred
Maximum Gerbil
Posts: 4610
Joined: Tue Aug 10, 2004 10:10 am
Location: Ottawa, Canada

Re: Low PPD on diskless farm

Tue Aug 31, 2010 1:22 pm

I did have the -verbosity 9 flag on the command line, but then there was a version of the Stanford code that crashed with it enabled, so I had to pull the parameter out.

For the CPUs, it starts one instance of folding per SMPCPUS cores. In the case of a 6-core processor with SMPCPUS=2 you will get 3 instances, each using 2 CPUs; with SMPCPUS=4 you will get 2 instances, the first theoretically using 4 CPUs and the second theoretically using the remaining 2. But I don't bind processes to CPUs, so they will all end up sharing equally.
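(In other words, the instance count works out to roughly ceil(cores / SMPCPUS), with the last instance getting whatever cores remain. Applied to the machines in this thread, and assuming the i7 930 shows 8 logical CPUs with Hyper-Threading on, that gives:

  6 cores (PII 1055),    SMPCPUS=2  ->  3 instances x 2 CPUs
  6 cores,               SMPCPUS=4  ->  2 instances (4 + 2 CPUs)
  4 cores (PII 940/955), SMPCPUS=4  ->  1 instance x 4 CPUs
  8 CPUs (i7 930, HT),   SMPCPUS=4  ->  2 instances x 4 CPUs

Only the 6-core cases are from notfred's description; the others extrapolate the same rule.)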
