Home > Forums > Software User Forums > OpenFOAM > OpenFOAM Installation

OF21 not running when submitted to queue in ROCKS cluster

April 27, 2012, 08:32   #1
aerospain
Senior Member
Join Date: Sep 2009
Location: Madrid, Spain
Posts: 149
Hi all,

We have managed to install OpenFOAM 2.1.0 on a ROCKS cluster, and I can run it either from the front-end or by ssh-ing into one of the compute nodes and launching it locally from there.

Now I want to be able to 'qsub' jobs to any compute node from the front-end. I have created a 'machines' file and a 'run.sh' script, which I have pasted below in case someone would be so kind as to take a look at them and guide me. I have already read the User Guide and searched the forum threads.

Thanks!

***********************
*** machines file ***
***********************
all.q@compute-1-6.local cpu=4

***********************
*** run.sh ***
***********************

#!/bin/bash
#
#$ -cwd
#$ -j y
#$ -S /bin/bash
#$ -pe * 4

export PATH=$PATH:/opt/openmpi/bin

# GENERAL FORMAT:
# mpirun --hostfile <machines> -np <nProcs> <foamExec> <otherArgs> -parallel > log &

mpirun --hostfile machines -np 4 icoFoam -parallel > log &
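For reference, this is roughly how I then submit and monitor it from the front-end (the queue instance name is the one from my machines file above; adapt as needed):

```shell
# Submit the job script to the SGE queue instance named in the machines file
qsub -q all.q@compute-1-6.local run.sh

# Watch the job's state from the front-end
qstat
```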

April 27, 2012, 11:01   #2
Bruno Santos (wyldckat)
Retired Super Moderator
Join Date: Mar 2009
Location: Lisbon, Portugal
Posts: 10,981
Hi aerospain,

If you had used OpenFOAM's foamJob script before, you would know that you can use the foamExec script for launching solvers remotely.

The last line on your script should look something like this:
Code:
mpirun --hostfile machines -np 4 `which foamExec` icoFoam -parallel > log &
Or:
Code:
mpirun --hostfile machines -np 4 /path/to/OpenFOAM-2.1.0/bin/foamExec icoFoam -parallel > log &
Searching in this forum for "qsub" would probably pop up other results as well.
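In essence, all foamExec does is load the OpenFOAM environment before handing over to the solver, so that the processes mpirun starts on remote nodes get a usable environment. A minimal, self-contained sketch of that idea (the file names and paths below are made up for illustration; this is not the real OpenFOAM script):

```shell
#!/bin/bash
# Illustrative sketch of the foamExec idea: a wrapper that sources an
# environment file and then exec's whatever command it was given.

# Stand-in for OpenFOAM-2.1.0/etc/bashrc:
cat > /tmp/fake_foam_bashrc <<'EOF'
export FOAM_DEMO=loaded
EOF

# Stand-in for foamExec:
cat > /tmp/fake_foamExec <<'EOF'
#!/bin/bash
. /tmp/fake_foam_bashrc   # source the environment first
exec "$@"                 # then replace ourselves with the requested command
EOF
chmod +x /tmp/fake_foamExec

# The "solver" sees the environment set up by the wrapper:
/tmp/fake_foamExec printenv FOAM_DEMO   # prints "loaded"
```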

Best regards,
Bruno

May 3, 2012, 07:15   #3
aerospain
Senior Member
Join Date: Sep 2009
Location: Madrid, Spain
Posts: 149
Hello Bruno,

Sorry for taking so long to reply. I have tried your advice without any success. I found the path to the foamExec script and included it in my run.sh file.

BTW, I have used foamJob to run jobs in my personal workstation in parallel and didn't need the foamExec script.

One last question: if I log into a node from the front-end and launch the job in parallel there, is that exactly the same as submitting it with 'qsub'?

Thanks for your time and help!

Carlos

May 3, 2012, 16:59   #4
Bruno Santos (wyldckat)
Retired Super Moderator
Join Date: Mar 2009
Location: Lisbon, Portugal
Posts: 10,981
Hi Carlos,

Well, after searching here on the forum, I picked up on the following two threads:
So, basing myself more on the 2nd link, your script might have to look something like this:
Code:
#!/bin/bash
#
#$ -cwd
#$ -j y
#$ -S /bin/bash
#$ -pe * 4
#$ -v MPI_BUFFER_SIZE=200000000

# Activate OpenFOAM's environment
source /opt/OpenFOAM/OpenFOAM-2.1.0/etc/bashrc
# or:
#. /opt/OpenFOAM/OpenFOAM-2.1.0/etc/bashrc

# Shouldn't be necessary, since OpenFOAM's environment should already define this
#export PATH=$PATH:/opt/openmpi/bin

# GENERAL FORMAT:
# mpirun --hostfile <machines> -np <nProcs> <foamExec> <otherArgs> -parallel > log &

mpirun --hostfile machines -np 4 icoFoam -parallel > log &
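One more detail worth checking: the script above backgrounds mpirun with a trailing '&', so the job script exits immediately and the queue system may consider the job finished (or kill it) while the solver is still running. Either drop the '&' or add a 'wait' after it. A tiny self-contained demonstration of the difference:

```shell
#!/bin/bash
# Why a trailing '&' matters in a queued job script: a backgrounded
# command keeps running after the script itself would have exited.
rm -f /tmp/bg_demo.txt
( sleep 1; echo finished > /tmp/bg_demo.txt ) &   # backgrounded "solver"
wait   # without this, the script ends before the background job writes the file
cat /tmp/bg_demo.txt   # prints "finished"
```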

Best regards,
Bruno

May 3, 2012, 18:15   #5
aerospain
Senior Member
Join Date: Sep 2009
Location: Madrid, Spain
Posts: 149
Thank you Bruno,

I had found the first link a few days ago, but thought it was more than I needed since they mention the MPI_BUFFER_SIZE limitation. I had not found the second one; thank you for your time.

I have also found the following one after submitting my reply earlier today:
http://www.cfd-online.com/Forums/ope...s-cluster.html

I'm gonna try your suggestions.

Kindest regards,
Carlos

May 7, 2012, 04:41   #6
aerospain
Senior Member
Join Date: Sep 2009
Location: Madrid, Spain
Posts: 149
Hi Bruno,

Thanks a lot for your help! I have solved the problem and can now submit jobs from the ROCKS front-end to any node of my liking. I could always ssh into any of those nodes and launch a parallel job locally, but our administrators don't like that behaviour, since then they cannot see who is using the nodes through 'qstat'.

My next step is to understand how to send jobs to more than one node; in the meantime I will leave my scripts pasted in this message in case they help anyone.

From the typo in my machines file, I'm assuming this file could be avoided altogether when 'qsub'-ing the 'run.sh' file. I'll test it later today.

cheers!

*****
run.sh
*****
#!/bin/bash
#
#$ -cwd
#$ -j y
#$ -S /bin/bash
#$ -pe * 12
#$ -v MPI_BUFFER_SIZE=200000000

# ACTIVATE OPENFOAM ENVIRONMENT
# source /share/apps/centFOAM/OpenFOAM/OpenFOAM-2.1.0/etc/bashrc
# or
. /share/apps/centFOAM/OpenFOAM/OpenFOAM-2.1.0/etc/bashrc

# Shouldn't be necessary, since OpenFOAM's environment should already define this
export PATH=$PATH:/opt/openmpi/bin

# GENERAL FORMAT:
# mpirun --hostfile <machines> -np <nProcs> <foamExec> <otherArgs> -parallel > log &

mpirun --hostfile machines -np 12 foamExec simpleFoam -parallel > log

*****
machines
*****
compute-1-8.local cpu=4
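For the multi-node step, two directions that might work (untested here, node names are examples): extend the machines file with one line per host, or, if this Open MPI build has SGE (gridengine) support compiled in, skip the hostfile entirely and let mpirun pick up the granted slots itself:

```shell
# Extending the machines file would just mean one line per host,
# following the same syntax as above:
#
#   compute-1-8.local cpu=4
#   compute-1-9.local cpu=4
#
# Alternatively, an SGE-aware Open MPI detects the granted hosts on its
# own inside a qsub script, so no --hostfile is needed; NSLOTS is set
# by SGE to the number of slots requested with '-pe':
mpirun -np $NSLOTS simpleFoam -parallel > log
```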



