
 
runlinux
Gerbil
Posts: 11
Joined: Wed Feb 27, 2008 1:29 pm

Re: New release of diskless folding suite

Wed Feb 27, 2008 9:30 pm

I believe I am, as I have benchmark options upon boot.

I noticed the problem started around the time I moved my fan closer to the nodes to help keep them cool, so last night I moved it further away and I hope that fixes it.

I tried building my own suite from NF's code to set up 1 client per 2 CPUs, but I guess I don't have everything installed on my rig to get all the way through the "make all".

It was worth a shot; if it works for me, I can tell NF that it works just fine.
 
theMASS
Gerbil First Class
Posts: 132
Joined: Thu Sep 27, 2007 3:24 am

Re: New release of diskless folding suite

Wed Feb 27, 2008 9:49 pm

runlinux wrote:
I believe I am, as I have benchmark options upon boot.

I noticed the problem started around the time I moved my fan closer to the nodes to help keep them cool, so last night I moved it further away and I hope that fixes it.

I tried building my own suite from NF's code to set up 1 client per 2 CPUs, but I guess I don't have everything installed on my rig to get all the way through the "make all".

It was worth a shot; if it works for me, I can tell NF that it works just fine.


That's strange... I actually put an 80mm fan INSIDE my nodes, sitting ON the PCI slots, to cool the NB.

The client-per-2-cores setup, based on my VM'd setups, should be a monster on a Q6600 @ 3.3GHz; with the right WUs, nearly 5000 PPD!
 
runlinux
Gerbil
Posts: 11
Joined: Wed Feb 27, 2008 1:29 pm

Re: New release of diskless folding suite

Wed Feb 27, 2008 9:52 pm

Maybe I should add that it was a BIG box fan cooling the nodes... EMR FTL...

I can't wait till he can get around to working on this.

BTW, I'm more than willing to help with this in any way. I have a bit of Linux knowledge and quite a few quad cores to test things on...
 
Flying Fox
Gerbil God
Posts: 25690
Joined: Mon May 24, 2004 2:19 am
Contact:

Re: New release of diskless folding suite

Wed Feb 27, 2008 10:21 pm

I recommend you join up on SourceForge to submit patches and stuff if you are interested.
The Model M is not for the faint of heart. You either like them or hate them.

Gerbils unite! Fold for UnitedGerbilNation, team 2630.
 
theMASS
Gerbil First Class
Posts: 132
Joined: Thu Sep 27, 2007 3:24 am

Re: New release of diskless folding suite

Wed Feb 27, 2008 10:27 pm

runlinux wrote:
Maybe I should add that it was a BIG box fan cooling the nodes... EMR FTL...

I can't wait till he can get around to working on this.

BTW, I'm more than willing to help with this in any way. I have a bit of Linux knowledge and quite a few quad cores to test things on...


I figured you meant a big fan ;) I don't think it's the issue though...

I have almost no Linux knowledge; actually, that's how I stumbled upon the CD in the first place :oops:

If you want an ISO of the previous version, which works flawlessly (for me at least), PM me.

Flying Fox wrote:
I recommend you join up on SourceForge to submit patches and stuff if you are interested.


Agreed. NF has added most of my requests posted there and it keeps the project organized.
 
runlinux
Gerbil
Posts: 11
Joined: Wed Feb 27, 2008 1:29 pm

Re: New release of diskless folding suite

Wed Feb 27, 2008 11:09 pm

Awesome news, NF and fellow folders:

It's very easy to change the settings to get 2 instances of folding running on a quad.

I did it on mine and have successfully got 6 clients going across my 3 nodes.

I won't go into detail as this is NF's work, but for his sake, it is an easy task; it took me about 3 minutes to get it working. He just needs the time to get around to working on it, and time isn't too easy to come by these days...

EDIT:

I registered over at SourceForge and added a request for a kill link if the system hangs. Now that I know how I can work on this, I bet I could add that in myself and give him the work later to include in his releases.
 
jeffry55
Grand Gerbil Poohbah
Posts: 3181
Joined: Sat Oct 30, 2004 4:38 pm
Location: Menlo Park - just down the street from the F@H Servers!
Contact:

Re: New release of diskless folding suite

Thu Feb 28, 2008 10:34 am

Thanks for the Team Spirit RunLinux!! :D We appreciate anything you do to help NotFred. 8)
Join UGN's Drive to the Top!
UnitedGerbilNation wants you!!
 
notfred
Maximum Gerbil
Topic Author
Posts: 4610
Joined: Tue Aug 10, 2004 10:10 am
Location: Ottawa, Canada

Re: New release of diskless folding suite

Thu Feb 28, 2008 12:46 pm

runlinux wrote:
It's very easy to change the settings to get 2 instances of folding running on a quad.

I did it on mine and have successfully got 6 clients going across my 3 nodes.

I won't go into detail as this is NF's work, but for his sake, it is an easy task; it took me about 3 minutes to get it working. He just needs the time to get around to working on it, and time isn't too easy to come by these days...

I don't mind people posting details; for that matter, here they are:
In the init script, if it is an SMP client, there are a couple of places where it does /4 to go from CPU count to folding-client count. For a quick hack those can be changed to /2. For the real fix, it should be made available to the end user, and that means making it a parameter in the init script with validation and defaulting like the other params have, adding the HTML to the CD generator page, and adding the parameter to the isolinux.cfg file.
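
To make that concrete, the parameter handling might look roughly like this (the smp_per name and the variables here are only illustrative, not the actual ones in the init script):

    # Read a hypothetical smp_per=N option from the kernel command line
    # (i.e. from isolinux.cfg), validate it, and default to 4.
    smp_per=`sed -n 's/.*smp_per=\([0-9]*\).*/\1/p' /proc/cmdline`
    case "$smp_per" in
        2|4) ;;              # accept only 2 or 4 CPUs per client
        *)   smp_per=4 ;;    # anything else falls back to the old behaviour
    esac
    num_cpus=`grep -c ^processor /proc/cpuinfo`
    num_clients=$((num_cpus / smp_per))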

The SMP per 2 CPUs change is quick and easy; I'm more bothered about why the hang check has broken, and I want that working in the next release. It's no good working on 2 WUs instead of 1 if they don't upload!
 
theMASS
Gerbil First Class
Posts: 132
Joined: Thu Sep 27, 2007 3:24 am

Re: New release of diskless folding suite

Thu Feb 28, 2008 5:11 pm

notfred wrote:
I'm more bothered about why the hang check has broken, and I want that working in the next release. It's no good working on 2 WUs instead of 1 if they don't upload!


Maybe not the most elegant idea... but how about going back to the previous version and dropping the benchmark stuff?

Are you guys using the benchmark stuff?
 
Flying Fox
Gerbil God
Posts: 25690
Joined: Mon May 24, 2004 2:19 am
Contact:

Re: New release of diskless folding suite

Thu Feb 28, 2008 6:15 pm

theMASS wrote:
Are you guys using the benchmark stuff?

Of course we are. ;)
The Model M is not for the faint of heart. You either like them or hate them.

Gerbils unite! Fold for UnitedGerbilNation, team 2630.
 
EvilAlchemist
Gerbil
Posts: 28
Joined: Tue Jan 29, 2008 1:54 am

Re: New release of diskless folding suite

Thu Feb 28, 2008 9:23 pm

notfred wrote:
I'm more bothered about why the hang check has broken, and I want that working in the next release. It's no good working on 2 WUs instead of 1 if they don't upload!


I have not had any hangs in the last 7 days.
I have bound all my diskless folders' MAC addresses to specific IPs (using a Linksys WRT300N, DHCP Reservations tab).
I also changed the DHCP lease time to 5 days.

Thanks for all your hard work, notfred.
I would rather have the hang check fixed than the /2 SMP switch.
 
theMASS
Gerbil First Class
Posts: 132
Joined: Thu Sep 27, 2007 3:24 am

Re: New release of diskless folding suite

Thu Feb 28, 2008 10:26 pm

EvilAlchemist wrote:
notfred wrote:
I'm more bothered about why the hang check has broken, and I want that working in the next release. It's no good working on 2 WUs instead of 1 if they don't upload!


I have not had any hangs in the last 7 days.
I have bound all my diskless folders' MAC addresses to specific IPs (using a Linksys WRT300N, DHCP Reservations tab).
I also changed the DHCP lease time to 5 days.

Thanks for all your hard work, notfred.
I would rather have the hang check fixed than the /2 SMP switch.


I used to bind all my MAC addresses, but on my last few boxes I haven't. The only one that hangs, though, is bound to a specific address and is currently the only one running the latest nf version. It seems to hang in stages... good for a few days... then hangs for 4-5 WUs... then OK? Even without manually binding IPs to MACs, they always pick up the same address.

I'll try running the new software in a Virtual Machine tonight and see what happens. VMware Server seems to eliminate network problems.
 
runlinux
Gerbil
Posts: 11
Joined: Wed Feb 27, 2008 1:29 pm

Re: New release of diskless folding suite

Fri Feb 29, 2008 12:34 am

Well, I made a few changes to the check_hang.sh script. I'm gonna let it run for a few days and see if it catches any hangs.

If it finds FINISHED_UNIT at the end of the file, it waits 10 minutes and then checks for it again.
If it is still there after 10 minutes (assuming we hung), it does all the normal stuff that was there: it looks for the processes and kills them.

I think the issue may have been a change in the log output with the new client.

NF had it looking for FINISHED_UNIT or CoreStatus, and if only FINISHED_UNIT was there after 5 minutes, it would kill the cores.

I have found that whenever CoreStatus is in the log file, it means the client actually finished, hasn't crashed, and then continues. Only a FINISHED_UNIT means it's hung and it's gonna sit there.

I'll post back later to report my findings.

EDIT:

One thing I am finding with 2 SMP clients is that I am getting a few client-core communication errors, 0x0 and 0x1. This only seems to be happening on one of my nodes, and it has 1GB of RAM. I think more RAM may solve this, as I know SMP folding uses RAM, and if it's not there, with no swap file, the client may die.

Maybe I should put a USB drive in them...
 
notfred
Maximum Gerbil
Topic Author
Posts: 4610
Joined: Tue Aug 10, 2004 10:10 am
Location: Ottawa, Canada

Re: New release of diskless folding suite

Fri Feb 29, 2008 11:25 am

Be careful about just checking for stuff at the end of the logfile, because you get the auto-upload text in there every so often; that's why I have that complicated grep in there. With the old client you would get FINISHED_UNIT and then, if it didn't hang, it would do CoreStatus. My first attempt just looked for FINISHED_UNIT at the end, and the auto-upload text sometimes got in after that, which meant it didn't pick up that the client was hung.
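
To illustrate the difference (just a sketch; FAHlog.txt stands in for the real per-instance log path):

    # Naive check: breaks whenever auto-upload lines land after FINISHED_UNIT.
    tail -n 1 FAHlog.txt | grep -q FINISHED_UNIT && echo "looks hung"

    # Safer: look only at FINISHED_UNIT/CoreStatus lines and see which marker
    # came last. A trailing FINISHED_UNIT with no CoreStatus after it suggests
    # a hang, regardless of any upload chatter in between.
    grep -E 'FINISHED_UNIT|CoreStatus' FAHlog.txt | tail -n 1 | grep -q FINISHED_UNIT && echo "looks hung"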

I'll take a look at my logs tonight and double-check what the new client does with FINISHED_UNIT and CoreStatus, and maybe make some changes based on that. Thanks for pointing out it may be different!
 
runlinux
Gerbil
Posts: 11
Joined: Wed Feb 27, 2008 1:29 pm

Re: New release of diskless folding suite

Fri Feb 29, 2008 1:54 pm

[18:19:44] Completed 10000000 out of 10000000 steps  (100 percent)
[18:19:45] Writing final coordinates.
[18:19:45] Past main M.D. loop
[18:19:45] Will end MPI now
[18:20:44]
[18:20:44] Finished Work Unit:
[18:20:44] - Reading up to 232416 from "work/wudata_07.arc": Read 232416
[18:20:44] - Reading up to 13720960 from "work/wudata_07.xtc": Read 13720960
[18:20:45] goefile size: 0
[18:20:45] logfile size: 265850
[18:20:45] Leaving Run
[18:20:48] - Writing 14619582 bytes of core data to disk...
[18:20:48]   ... Done.
[18:20:48] - Shutting down core
[18:20:48]
[18:20:48] Folding@home Core Shutdown: FINISHED_UNIT
[18:35:47] CoreStatus = 64 (100)
[18:35:47] Unit 7 finished with 72 percent of time to deadline remaining.
[18:35:47] Updated performance fraction: 0.819928
[18:35:47] Sending work to server


[18:35:47] + Attempting to send results
[18:35:47] - Reading file work/wuresults_07.dat from core
[18:35:47]   (Read 14619582 bytes from disk)
[18:35:47] Connecting to http://171.64.65.63:8080/
[18:39:49] Posted data.
[18:39:49] Initial: 0000; - Uploaded at ~58 kB/s
[18:39:50] - Averaged speed for that direction ~58 kB/s
[18:39:50] + Results successfully sent
[18:39:50] Thank you for your contribution to Folding@Home.
[18:39:50] + Starting local stats count at 1
[18:43:54] - Warning: Could not delete all work unit files (7): Core returned invalid code
[18:43:54] Trying to send all finished work units
[18:43:54] + No unsent completed units remaining.
[18:43:54] - Preparing to get new work unit...
[18:43:54] + Attempting to get work packet
[18:43:54] - Will indicate memory of 1000 MB
[18:43:54] - Detect CPU. Vendor: GenuineIntel, Family: 6, Model: 15, Stepping: 11
[18:43:54] - Connecting to assignment server
[18:43:54] Connecting to http://assign.stanford.edu:8080/
[18:43:54] Posted data.
[18:43:54] Initial: 40AB; - Successful: assigned to (171.64.65.64).
[18:43:54] + News From Folding@Home: Welcome to Folding@Home
[18:43:54] Loaded queue successfully.
[18:43:54] Connecting to http://171.64.65.64:8080/
[18:43:57] Posted data.
[18:43:57] Initial: 0000; - Receiving payload (expected size: 2965944)
[18:44:02] - Downloaded at ~579 kB/s
[18:44:02] - Averaged speed for that direction ~398 kB/s
[18:44:02] + Received work.
[18:44:02] Trying to send all finished work units
[18:44:02] + No unsent completed units remaining.
[18:44:02] + Closed connections
[18:44:02]
[18:44:02] + Processing work unit
[18:44:02] Core required: FahCore_a1.exe
[18:44:02] Core found.
[18:44:02] Working on Unit 08 [February 29 18:44:02]
[18:44:02] + Working ...
[18:44:02] - Calling './mpiexec -np 4 -host 127.0.0.1 ./FahCore_a1.exe -dir work/ -suffix 08 -checkpoint 15 -forceasm -verbose -lifeline 506 -version 601'

[18:44:02]
[18:44:02] *------------------------------*
[18:44:02] Folding@Home Gromacs SMP Core
[18:44:02] Version 1.74 (November 27, 2006)
[18:44:02]
[18:44:02] Preparing to commence simulation
[18:44:02] - Ensuring status. Please wait.
[18:44:19] - Assembly optimizations manually forced on.
[18:44:19] - Not checking prior termination.
[18:44:19] - Expanded 2965432 -> 15213631 (decompressed 513.0 percent)
[18:44:20] - Starting from initial work packet
[18:44:20]
[18:44:20] Project: 2653 (Run 18, Clone 194, Gen 69)
[18:44:20]
[18:44:20] Assembly optimizations on if available.
[18:44:20] Entering M.D.
[18:44:26] Rejecting checkpoint
[18:44:26] Protein: Protein in POPCExtra SSE boost OK.
[18:44:26]
[18:44:26] Extra SSE boost OK.
[18:44:27] Writing local files
[18:44:27] Completed 0 out of 500000 steps  (0 percent)
[18:50:46]
[18:50:46] Folding@home Core Shutdown: INTERRUPTED
[18:50:50] CoreStatus = 66 (102)
[18:50:50] + Shutdown requested by user. Exiting.***** Got a SIGTERM signal (15)
[18:50:50] Killing all core threads

Folding@Home Client Shutdown.



Well, the script worked... but a little too late... lol. Guess I gotta reboot the client; at least it turned in a unit!

EDIT:

I played around with a backup log file and the check_hang script and made these changes; I think they will work and not just kill everything this time...
This time it also looks for CoreStatus, and if it's there (we actually ended the right way), it won't kill the cores like my last check did... :(

    grep -E 'FINISHED_UNIT' /etc/folding/$instance/FAHlog.txt | tail -n 1 | grep -q FINISHED_UNIT
    if [ $? -eq 0 ]
    then
        # Saw FINISHED_UNIT, so give the client 10 minutes to print CoreStatus and upload
        sleep 600
        grep -E 'FINISHED_UNIT|CoreStatus' /etc/folding/$instance/FAHlog.txt | tail -n 1 | grep -q FINISHED_UNIT
        if [ $? -eq 0 ]
        then
            # Still FINISHED_UNIT with no CoreStatus after it - assume a hang
            : # Do the killing code here (the ":" no-op keeps the empty branch valid shell)
        fi
    fi



EDIT x2:

Well, after the change, I have yet to see it kill the client like it did last time, so it's good for now...

I just have to wait for a client to hang to see how it reacts... *waits for WUs to finish...*
 
notfred
Maximum Gerbil
Topic Author
Posts: 4610
Joined: Tue Aug 10, 2004 10:10 am
Location: Ottawa, Canada

Re: New release of diskless folding suite

Fri Feb 29, 2008 9:57 pm

OK, I just found (and fixed) the bug in the hang check.

It actually has nothing to do with the newer version of the client or the logfile; the bug is present in the 11th January version as well, although it would not have been in the 28th November version. It crept in when I fixed "1853661 - Hang check kills all on > 4 processors" and changed from just doing a killall on the cores to walking the /proc tree, looking for the right cores, and killing them individually. The buggy code is:

    for procdir in `find /proc -name '[1-9]*' | awk '/\/proc\/[1-9]*$/ {print $0}'`

The problem is that the awk pattern only matches process IDs made up entirely of the digits 1-9 (no zeros). I was just debugging and my 4 cores were 677-680; it only found the first three of them because 680 has a 0 in it. The fix is to change it to:

    for procdir in `find /proc -name '[0-9]*' | awk '/\/proc\/[0-9]*$/ {print $0}'`

This is why it will have worked for some time and then failed to do the hang check: it depends on what the process IDs happen to be. I suspect that the last version just happened to end up with process IDs containing a 0 more often, and that's why the hang check was failing.
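
For context, the loop around that line works roughly like this (a sketch only; the exact matching and kill logic in the script may differ):

    # Walk /proc, keep only the numeric (PID) directories, and kill any
    # FahCore process found there individually rather than via killall.
    for procdir in `find /proc -name '[0-9]*' | awk '/\/proc\/[0-9]*$/ {print $0}'`
    do
        if grep -q FahCore "$procdir/cmdline" 2>/dev/null
        then
            kill -9 `basename $procdir`
        fi
    done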

I want to address "1903637 - Add kill link to homepage" and "1870815 - Allow SMP per 2 CPUs" before I release a new version, but those should be relatively simple. Depending on how busy I am over the next few days, I should get something out relatively soon.
 
runlinux
Gerbil
Posts: 11
Joined: Wed Feb 27, 2008 1:29 pm

Re: New release of diskless folding suite

Fri Feb 29, 2008 10:14 pm

Sweet news; that makes sense. I'll change my code so I don't go and screw up any more WUs... lol

I also added a bit more time to wait for the system to clean up before killing the cores. On my latest unit, it took almost 11 minutes for the client to exit normally.
 
notfred
Maximum Gerbil
Topic Author
Posts: 4610
Joined: Tue Aug 10, 2004 10:10 am
Location: Ottawa, Canada

Re: New release of diskless folding suite

Sat Mar 01, 2008 10:46 pm

OK, new version is out:

1 March 08: Fix a bug in the hang check - wasn't killing cores with a 0 in the PID. Fix 1903637 - Add kill link to homepage. Fix 1870815 - Allow SMP per 2 CPUs.

Come and get it!
 
theMASS
Gerbil First Class
Posts: 132
Joined: Thu Sep 27, 2007 3:24 am

Re: New release of diskless folding suite

Sun Mar 02, 2008 3:43 am

notfred wrote:
OK, new version is out:

1 March 08: Fix a bug in the hang check - wasn't killing cores with a 0 in the PID. Fix 1903637 - Add kill link to homepage. Fix 1870815 - Allow SMP per 2 CPUs.

Come and get it!


Just booted up the SMP per 2 cores version. I'll start the "standard" SMP per 4 in a couple of hours when the next box finishes its current WU. My initial concern for performance with the SMP/2 version is that 4 cores are reported to Stanford, so the WUs received are the ones generally assigned to 4-core boxes. In comparison, when running 2 SMP clients in VMs only 2 cores are reported and different WUs are assigned.

I only have a few percent complete so far, but running 2 clients in VMware Server appears to result in considerably higher PPD. The 2 WUs will complete before the preferred deadlines, but for the relatively modest increase in PPD I think it's better to run 1 SMP client on 4 cores natively or 2 SMP clients in VMs.

*EDIT: The increase is about 10% PPD, which I guess isn't modest. But it's ~25% in VMware Server. It's a WU assignment issue based on the recognized hardware. The Windows client isn't currently affected because the same WUs are assigned (for the most part) to both 2- and 4-core boxes.

I may be RAM-bound on this box; it's only running a single 1GB stick. I'll bump it up to 2GB dual channel and see if that makes a difference. With a single client, dual channel has no advantage: I have 2 boxes with the same hardware, and as a test I ran one with 2GB dual channel and one with 1GB single channel; over the last month either machine could be slightly faster or slower on the same WU, frame to frame.
 
GTX
Gerbil
Posts: 18
Joined: Sun Jun 11, 2006 9:39 pm

Re: New release of diskless folding suite

Sun Mar 02, 2008 12:29 pm

Thanks for your time and effort, notfred!! :D
 
EvilAlchemist
Gerbil
Posts: 28
Joined: Tue Jan 29, 2008 1:54 am

Re: New release of diskless folding suite

Fri Mar 07, 2008 5:12 pm

theMASS, any results on SMP/2 vs SMP/4 on the system with more than 1GB...?


notfred, the new/fixed hang check is awesome. Not a single hang since the release date. Thanks!
 
theMASS
Gerbil First Class
Posts: 132
Joined: Thu Sep 27, 2007 3:24 am

Re: New release of diskless folding suite

Fri Mar 07, 2008 6:03 pm

EvilAlchemist wrote:
theMASS, any results on SMP/2 vs SMP/4 on the system with more than 1GB...?


notfred, the new/fixed hang check is awesome. Not a single hang since the release date. Thanks!


I keep putting it off... but I'll add the second stick tonight. I almost did it last night, but I had to burn 30 DVDs of a school play for the kids, and I know if I had done it last night... it would have been one of those "Damn, why did I screw with a perfect box?" nights ;)
 
runlinux
Gerbil
Posts: 11
Joined: Wed Feb 27, 2008 1:29 pm

Re: New release of diskless folding suite

Sun Mar 09, 2008 1:43 pm

On my box that had 2GB of RAM, it ran just fine with 2x SMP going.

It went from about 3000 PPD at 3GHz (1x SMP) to almost 4400 PPD at times at the same speed. It really helped out the points.

But I noticed that on my boxes with only 1GB of RAM I would randomly get Long 1-4 errors, and the client would crash and nuke the WU.
 
theMASS
Gerbil First Class
Posts: 132
Joined: Thu Sep 27, 2007 3:24 am

Re: New release of diskless folding suite

Mon Mar 10, 2008 2:40 am

runlinux wrote:
On my box that had 2GB of RAM, it ran just fine with 2x SMP going.

It went from about 3000 PPD at 3GHz (1x SMP) to almost 4400 PPD at times at the same speed. It really helped out the points.

But I noticed that on my boxes with only 1GB of RAM I would randomly get Long 1-4 errors, and the client would crash and nuke the WU.


After a few days of running with 2GB I get basically the same results as with 1GB.

With 2 SMP clients I get ~3950 PPD vs. ~3350 PPD with 1 SMP client. The only problem I had was that the 256MB USB stick was full. The box normally runs headless, so I would never have noticed if I hadn't added the second stick; when I hooked up a monitor to check the BIOS settings I saw the error in the console output.

@runlinux: what WUs are you getting?

I get about 4300-4400 PPD @ 3GHz when running 2 SMP clients in VMware Server, but not when running natively. The VMware clients get 26XX WUs while the native clients get 30XX WUs. On the rare occasion I get a 26XX WU on a quad with 1 SMP client, PPD jumps about 10%.

Strange how my 1 SMP does better than yours and my 2 SMP does worse? I've only tried 2 SMP natively on one box... I guess I should try it on another and see what happens. I would have predicted PPD for 2 SMP to be in the range you are seeing, based on the VMware clients and Windows 2 SMP testing with the affinity changer.

I tend to run my RAM with loose timings; maybe I should try tightening them up and see if that helps. What speed is your RAM running at?

I'd love to see what 2 SMP clients do on my box @ 3.6GHz; I've already seen over 3800 PPD with 1 SMP (and at a reasonable 155W total system draw)!

UPDATE:
I had to see what the 3.6GHz system would do with the 2x SMP setup... I cranked the RAM up to 1000MHz 4-4-4-10 tRD=5: 2 x 2653 WUs = 4962 PPD :D ...so close to 5000 PPD
Last edited by theMASS on Sun Mar 23, 2008 4:47 am, edited 2 times in total.
 
theMASS
Gerbil First Class
Posts: 132
Joined: Thu Sep 27, 2007 3:24 am

Re: New release of diskless folding suite

Mon Mar 10, 2008 5:11 am

I'm running the 2 SMP client setup on a Q6600 @ 3.3GHz with 2GB RAM, and I got two 2653 WUs. Strange, since it's the first time I've received a 26XX WU with the 2 SMP setup in over a week. Early results are showing ~4500 PPD vs. ~3600 PPD for 1 SMP with a 2653.

I'll let it run a few days and see what happens with different WUs...
 
theMASS
Gerbil First Class
Posts: 132
Joined: Thu Sep 27, 2007 3:24 am

Re: New release of diskless folding suite

Tue Mar 11, 2008 9:53 pm

Well... I'm finding that my results are very WU-dependent with the 2x SMP setup.

If the WUs are both 26XX series, the PPD gain is ~30% vs. 1 SMP with a 30XX series WU and ~20% vs. 1 SMP with a 26XX WU.

With two 30XX WUs, the gain is ~15% vs. 1 SMP with a 30XX WU and ~11% vs. 1 SMP with a 26XX WU.

1GB RAM and 2GB RAM provided the same results.

On the two boxes I ran the 2x SMP setup on, I had crashes; they were all recoverable, but the same machines have been stable for several months with the 1 SMP setup.

Depending on how often you monitor your machines, the gain from 2 SMP clients may or may not translate into higher PPD; if you don't catch a crash within a couple of hours, 1 SMP will result in better PPD.

EDIT: With 2x SMP, one 30XX and one 26XX WU give similar results to two 26XX WUs.
 
notfred
Maximum Gerbil
Topic Author
Posts: 4610
Joined: Tue Aug 10, 2004 10:10 am
Location: Ottawa, Canada

Re: New release of diskless folding suite

Fri Mar 14, 2008 8:02 pm

OK, another new release out:
  • Upgrade to new kernel version (support more network cards), glibc and busybox.
  • Fix 1881099 - Add folding directory link to web interface.
  • Fix 1880850 - Optional enable of screen blanking.
  • Fix 1914624 - Configurable Workgroup.
  • Fix 1914625 - Disabling of nmbd and smbd.
The fixes were ones I could throw in quickly whilst uprevving the core stuff. Let me know what I broke this time :wink:

Plans for next release include:
  • 1853657 - Add support for proxy web access
  • 1853837 - "Official" VMware Appliance
  • 1853668 - Make options selectable / changeable at boot screen.
 
bollix47
Gerbil In Training
Posts: 9
Joined: Wed Mar 26, 2008 1:51 am

Re: New release of diskless folding suite

Wed Mar 26, 2008 1:56 am

There has been a new beta release of the SMP client.

http://www.stanford.edu/group/pandegrou ... -Linux.tgz

One of my diskless computers shut down overnight, and now it will not start up again because it can't find the old beta file.

Is there any way that this can be fixed so that a new kernel doesn't have to be created every time Stanford releases a new beta/release? Perhaps a text file with the various client links or the client names in it that we could modify when this happens?
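
For example, something along these lines (the file name, variable, and URL below are made up purely to illustrate the idea; they are not part of the suite):

    # Hypothetical /etc/folding/clients.txt on the boot media, e.g.:
    #   smp_client_url=http://example.org/new-SMP-beta-Linux.tgz
    # The init script could source it and download whatever it points at,
    # instead of using a hardcoded client file name:
    . /etc/folding/clients.txt
    wget -O /etc/folding/smp-client.tgz "$smp_client_url"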

Thanks for any guidance.
 
Flying Fox
Gerbil God
Posts: 25690
Joined: Mon May 24, 2004 2:19 am
Contact:

Re: New release of diskless folding suite

Wed Mar 26, 2008 7:50 am

bollix47 wrote:
There has been a new beta release of the SMP client.

http://www.stanford.edu/group/pandegrou ... -Linux.tgz

One of my diskless computers shut down overnight, and now it will not start up again because it can't find the old beta file.

Is there any way that this can be fixed so that a new kernel doesn't have to be created every time Stanford releases a new beta/release? Perhaps a text file with the various client links or the client names in it that we could modify when this happens?

Thanks for any guidance.

That sucks. I would have thought they would let the old client expire a little later instead of right away. :evil:

AFAIK we have no idea what name Stanford will use for their next beta, so it is hard not to hardcode it. I see a few possibilities:
  1. Stanford uses the same file name for all their betas, which will confuse the hell out of people
  2. Similar to the above, but using HTTP redirection to serve different files from the same URL (as if Stanford will listen to us :roll:)
  3. Someone hosts a "tracker" that the diskless client goes to for the redirection (you get uptime, bandwidth, and scalability issues, the onus is on us regular users, and never mind whether Stanford would get antsy about this)
The Model M is not for the faint of heart. You either like them or hate them.

Gerbils unite! Fold for UnitedGerbilNation, team 2630.
 
notfred
Maximum Gerbil
Topic Author
Posts: 4610
Joined: Tue Aug 10, 2004 10:10 am
Location: Ottawa, Canada

Re: New release of diskless folding suite

Wed Mar 26, 2008 8:44 am

I think it's just one of the "joys" of running beta software. Hopefully Stanford will eventually make the client a full release, and then the URL will stop changing.

I'll try to get an update out later tonight.
