OF21 not running when submitted to queue in ROCKS cluster |
April 27, 2012, 08:32 | #1
OF21 not running when submitted to queue in ROCKS cluster
Senior Member
Hi all,
We have managed to install OpenFOAM 2.1.0 on a ROCKS cluster, and I can run it either when logged into the front-end or when I ssh to one of the compute nodes and launch it locally from there. Now I want to be able to 'qsub' to any compute node from the front-end. I have created a 'machines' file and a 'run.sh' script; I have pasted them below in case someone would be so kind as to take a look at them and guide me. I have read the User's Guide and searched the forum threads. Thanks!

*** machines file ***
Code:
all.q@compute-1-6.local cpu=4

*** run.sh ***
Code:
#!/bin/bash
#
#$ -cwd
#$ -j y
#$ -S /bin/bash
#$ -pe * 4

export PATH=$PATH:/opt/openmpi/bin

# GENERAL FORMAT:
# mpirun --hostfile <machines> -np <nProcs> <foamExec> <otherArgs> -parallel > log &
mpirun --hostfile machines -np 4 icoFoam -parallel > log &
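For reference, a minimal sketch of how a script like this is normally handed to SGE from the front-end, assuming it is saved as run.sh inside the case directory (nothing cluster-specific is assumed here):
Code:
# from the case directory on the front-end
qsub run.sh        # hand the script to the SGE scheduler
qstat              # check whether/where the job is running
qdel <job-id>      # remove the job from the queue if something goes wrong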
April 27, 2012, 11:01 | #2
Retired Super Moderator
Bruno Santos
Join Date: Mar 2009
Location: Lisbon, Portugal
Posts: 10,981
Blog Entries: 45
Rep Power: 128
Hi aerospain,
If you had used OpenFOAM's foamJob script before, you would know that you can use the foamExec script for launching remotely. The last line of your script should look something like this:
Code:
mpirun --hostfile machines -np 4 `which foamExec` icoFoam -parallel > log &
Or, with the full path spelled out:
Code:
mpirun --hostfile machines -np 4 /path/to/OpenFOAM-2.1.0/bin/foamExec icoFoam -parallel > log &
Best regards, Bruno
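For comparison, foamJob itself can drive the same kind of parallel run once the OpenFOAM environment is sourced; a sketch, assuming the case has already been decomposed and numberOfSubdomains is set in system/decomposeParDict:
Code:
# foamJob reads the number of subdomains from system/decomposeParDict
# and internally calls mpirun with foamExec, writing output to a log file
foamJob -parallel icoFoam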
May 3, 2012, 07:15 | #3
Senior Member
Hello Bruno,
Sorry for taking so long to reply. I have tried your advice without any success. I found the path to the foamExec script and included it in my run.sh file. BTW, I have used foamJob to run parallel jobs on my personal workstation and didn't need the foamExec script. One last question: if I log into the node from my front-end and launch the job in parallel there, is that exactly the same as 'qsub'mitting it? Thanks for your time and help! Carlos
May 3, 2012, 16:59 | #4
Retired Super Moderator
Bruno Santos
Join Date: Mar 2009
Location: Lisbon, Portugal
Posts: 10,981
Blog Entries: 45
Rep Power: 128
Hi Carlos,
Well, after searching here on the forum, I picked up on the following two threads:
Code:
#!/bin/bash
#
#$ -cwd
#$ -j y
#$ -S /bin/bash
#$ -pe * 4
#$ -v MPI_BUFFER_SIZE=200000000

# Activate OpenFOAM's environment
source /opt/OpenFOAM/OpenFOAM-2.1.0/etc/bashrc
# or
# . /opt/OpenFOAM/OpenFOAM-2.1.0/etc/bashrc

# Shouldn't be necessary, since OpenFOAM should already have this defined
#export PATH=$PATH:/opt/openmpi/bin

# GENERAL FORMAT:
# mpirun --hostfile <machines> -np <nProcs> <foamExec> <otherArgs> -parallel > log &
mpirun --hostfile machines -np 4 icoFoam -parallel > log &

Best regards, Bruno
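As an aside, and only if the cluster's Open MPI was built with Grid Engine support (something the thread does not confirm), mpirun can pick up the SGE-granted slots on its own, so the hand-written machines file can usually be dropped. A minimal sketch under that assumption, keeping the same paths as above:
Code:
#!/bin/bash
#$ -cwd
#$ -j y
#$ -S /bin/bash
#$ -pe * 4

# OpenFOAM environment, as in the script above
source /opt/OpenFOAM/OpenFOAM-2.1.0/etc/bashrc

# $NSLOTS is set by SGE to the number of granted slots; with gridengine
# support compiled into Open MPI, no --hostfile is needed
mpirun -np $NSLOTS `which foamExec` icoFoam -parallel > log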
May 3, 2012, 18:15 | #5
Senior Member
Thank you Bruno,
I had found the first link a few days ago, but thought it was too much for my needs since they mention the BUFFER_SIZE limitation. I had not found the second one; thank you for your time. I also found the following thread after submitting my reply earlier today: http://www.cfd-online.com/Forums/ope...s-cluster.html I'm going to try your suggestions. Kindest regards, Carlos
May 7, 2012, 04:41 | #6
Senior Member
Hi Bruno,
Thanks a lot for your help! I have solved the problem and can now submit jobs from the ROCKS front-end to any node of my liking. I could always ssh into any of those nodes and launch a parallel job locally, but our administrators don't like that behaviour, since they would not be able to see who is using the nodes via 'qstat'. My next step is to understand how to send a job to more than one node; in the meanwhile I will leave my scripts pasted in this message in case they help anyone. Judging from the typo in my machines file, I'm assuming that file could be avoided altogether when 'qsub'ing the run.sh file. I'll test it later today. Cheers!

*** run.sh ***
Code:
#!/bin/bash
#
#$ -cwd
#$ -j y
#$ -S /bin/bash
#$ -pe * 12
#$ -v MPI_BUFFER_SIZE=200000000

# ACTIVATE OPENFOAM ENVIRONMENT
# source /share/apps/centFOAM/OpenFOAM/OpenFOAM-2.1.0/etc/bashrc
# or
. /share/apps/centFOAM/OpenFOAM/OpenFOAM-2.1.0/etc/bashrc

# Shouldn't be necessary, since OpenFOAM should already have this defined
export PATH=$PATH:/opt/openmpi/bin

# GENERAL FORMAT:
# mpirun --hostfile <machines> -np <nProcs> <foamExec> <otherArgs> -parallel > log &
mpirun --hostfile machines -np 12 foamExec simpleFoam -parallel > log

*** machines ***
Code:
compute-1-8.local cpu=4
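One possible way to handle more than one node (a sketch based on standard SGE behaviour, not something tested in this thread): build the machines file from $PE_HOSTFILE, which SGE fills with the granted hosts and slot counts, instead of hard-coding a single node:
Code:
#!/bin/bash
#$ -cwd
#$ -j y
#$ -S /bin/bash
#$ -pe * 12

# $PE_HOSTFILE lists one granted host per line: "hostname slots queue processor-range";
# turn it into an Open MPI hostfile of the form "hostname cpu=slots"
awk '{print $1" cpu="$2}' "$PE_HOSTFILE" > machines

# OpenFOAM environment, same path as above
. /share/apps/centFOAM/OpenFOAM/OpenFOAM-2.1.0/etc/bashrc

# $NSLOTS is the total number of slots SGE granted across all hosts
mpirun --hostfile machines -np $NSLOTS foamExec simpleFoam -parallel > log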