|
September 30, 2011, 17:27 |
OpenFOAM 2.0.1 interFoam
|
#1 |
New Member
Join Date: Sep 2011
Posts: 5
Rep Power: 15 |
I managed to compile OpenFOAM 2.0.1 on Linux (RHEL 4.8), and after running decomposePar on the damBreak tutorial I'm trying to run interFoam -parallel through mpirun, as described in the documentation.
I'm getting the following error: Code:
--> FOAM FATAL ERROR:
bool IPstream::init(int& argc, char**& argv) : attempt to run parallel on 1 processor

    From function UPstream::init(int& argc, char**& argv)
    in file UPstream.C at line 80.

FOAM aborting
decomposePar generated four "processorN" directories. We use Intel MPI on an HPC cluster for mpirun. Any help is greatly appreciated.
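For reference, this error typically appears when interFoam is started with -parallel but MPI launches only a single rank. A minimal interactive launch of a 4-way damBreak decomposition, assuming the mpirun on the PATH is the one OpenFOAM was built against and that it accepts the usual -np flag, would look like this sketch: Code:
# Minimal sketch (not the cluster workflow): run interactively from the case
# directory; decomposePar is a serial utility and writes processor0..processor3.
cd damBreak
decomposePar
mpirun -np 4 interFoam -parallel > log.interFoam 2>&1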
|
September 30, 2011, 17:51 |
|
#2 |
Senior Member
Bernhard
Join Date: Sep 2009
Location: Delft
Posts: 790
Rep Power: 22 |
Can you post the exact command you used for mpirun? How did you define the distribution among the CPUs?
|
|
September 30, 2011, 18:07 |
|
#3 |
New Member
Join Date: Sep 2011
Posts: 5
Rep Power: 15 |
We use qsub to submit jobs, which is the standard way on our cluster. It allocates free nodes based on availability and resources required.
However, I did try running the command without using qsub: Code:
mpirun mpd.hosts -np 4 interFoam -parallel > test.log &
The line "nodes=1:ppn=8" requests one compute node with eight processors. I'm not quite sure how to define the distribution among CPUs. I ran decomposePar and it generated four processor directories. This is the submission script: Code:
#!/bin/bash
## PBS job submission settings:
##PBS -N CS5
#PBS -l nodes=1:ppn=8
#PBS -l walltime=1:00:00
#PBS -W x=NACCESSPOLICY:SINGLEJOB
#PBS -m ae
#PBS -M email
#PBS -j oe
#PBS -e exec.err
#PBS -o exec.log
mpirun interFoam -parallel
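As a quick diagnostic sketch (hostname is only a stand-in for interFoam), the following shows how many ranks mpirun actually starts: Code:
# If this prints only one hostname line, mpirun is launching a single rank,
# which is exactly the condition the FOAM FATAL ERROR in post #1 complains about.
mpirun -np 4 hostname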
|
October 1, 2011, 18:06 |
|
#4 |
Retired Super Moderator
Bruno Santos
Join Date: Mar 2009
Location: Lisbon, Portugal
Posts: 10,981
Blog Entries: 45
Rep Power: 128 |
Greetings to both!
@wnowak1: Might I suggest searching in Google: Code:
site:cfd-online.com/Forums qsub openfoam
Best regards, Bruno
|
|
October 3, 2011, 03:02 |
|
#5 |
Senior Member
Bernhard
Join Date: Sep 2009
Location: Delft
Posts: 790
Rep Power: 22 |
In my qsub script to run in parallel, I use
-pe mpi_shm 4
Here 4 is the number of CPUs, and mpi_shm forces shared-memory usage, basically restricting the run to one node. Do you use the system mpirun or the ThirdParty mpirun? In the latter case, as far as I remember, you have to compile it with GridEngine support. Good luck!
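For reference, a minimal GridEngine submission script built around that option might look like the sketch below; the parallel environment name mpi_shm is site-specific, the job name is only an example, and $NSLOTS is filled in by GridEngine from the -pe request: Code:
#!/bin/sh
# Hypothetical GridEngine script; the PE name mpi_shm is site-specific
# and NSLOTS is set by GridEngine from the -pe request below.
#$ -N damBreak
#$ -cwd
#$ -j y
#$ -pe mpi_shm 4
mpirun -np $NSLOTS interFoam -parallel > log.interFoam 2>&1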
|
October 3, 2011, 03:37 |
|
#6 | |
Senior Member
Anton Kidess
Join Date: May 2009
Location: Germany
Posts: 1,377
Rep Power: 30 |
Did mpirun without the queuing system work or not? It's not clear from your post. You tell OpenFOAM how many CPUs to use in system/decomposeParDict (which is clearly set to 4 in your case). Then you ask for eight cores (ppn=8), which is a waste, but not the issue at hand right now. I believe what you are missing in the PBS script is the "-np" option you used when you tried without PBS.
- Anton
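For reference, the CPU count Anton refers to is the numberOfSubdomains entry in system/decomposeParDict. A minimal sketch for a 4-way split follows; the simple method and its coefficients are illustrative and not taken from the thread: Code:
# Write a minimal 4-way decomposition dictionary (illustrative values)
cat > system/decomposeParDict <<'EOF'
FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    object      decomposeParDict;
}

numberOfSubdomains 4;      // must match the -np value given to mpirun

method          simple;

simpleCoeffs
{
    n           (2 2 1);   // 2 x 2 x 1 block split
    delta       0.001;
}
EOF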
|
||
October 3, 2011, 03:39 |
|
#7 |
Senior Member
Anton Kidess
Join Date: May 2009
Location: Germany
Posts: 1,377
Rep Power: 30 |
||
October 5, 2011, 11:47 |
|
#8 |
Senior Member
Kent Wardle
Join Date: Mar 2009
Location: Illinois, USA
Posts: 219
Rep Power: 21 |
Hi,
Depending on which MPI implementation you are using, you need to add a few things to your mpirun call. First, the qsub flag "-l nodes=1:ppn=8" tells qsub how to schedule the job, but mpirun does not get this information unless you pass it along. For example, here is a qsub script I use: Code:
#!/bin/sh
#PBS -l nodes=25:ppn=8
#PBS -l walltime=72:00:00
#PBS -j oe
#PBS -N jobname
##PBS -W depend=afterany:699146
cd ${PBS_O_WORKDIR}
NN=`cat ${PBS_NODEFILE} | wc -l`
echo $NN
cat ${PBS_NODEFILE} > nodes
mpirun -machinefile ${PBS_NODEFILE} -np $NN interFoam -parallel > jobname-$NN.out
exit 0
Hope this helps! -Kent
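As a usage sketch, a script like the one above would be submitted and followed roughly as below; the script filename is hypothetical, and with nodes=25:ppn=8 the node file gives NN=200: Code:
qsub run_interFoam.pbs     # submit the job script (hypothetical filename)
qstat -u $USER             # check that the job is queued or running
tail -f jobname-200.out    # follow the solver output once the job starts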
|
October 5, 2011, 17:09 |
|
#9 | |
New Member
Join Date: Sep 2011
Posts: 5
Rep Power: 15 |
Quote:
|
||
October 5, 2011, 17:12 |
|
#10 | |
New Member
Join Date: Sep 2011
Posts: 5
Rep Power: 15 |
Quote:
Perhaps this has something to do with it: when I run decomposePar, it creates the eight processor directories, but the output of decomposePar shows nProcs : 1. Code:
$ decomposePar
/*---------------------------------------------------------------------------*\
| =========                 |                                                 |
| \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox           |
|  \\    /   O peration     | Version:  2.0.1                                 |
|   \\  /    A nd           | Web:      www.OpenFOAM.com                      |
|    \\/     M anipulation  |                                                 |
\*---------------------------------------------------------------------------*/
Build  : 2.0.1
Exec   : decomposePar
Date   : Oct 05 2011
Time   : 15:11:59
Host   : host1
PID    : 15665
Case   : 2.0.1/damBreak
nProcs : 1
Does the nProcs have anything to do with this?
||
October 5, 2011, 17:19 |
|
#11 |
Senior Member
Kent Wardle
Join Date: Mar 2009
Location: Illinois, USA
Posts: 219
Rep Power: 21 |
Wait, is that the complete output of decomposePar? You should see it break up the mesh, reporting the number of cells and face patches for each processor, and the last lines should be the field transfers to each of processors 0 through 7.
Note that decomposePar itself is NOT a parallel application; run it in serial. Scratch that: I see you did say it creates the eight processor directories, so you must have run it correctly. So did you have the "-np 8" flag in your mpirun command?
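One way to keep the mpirun rank count consistent with the decomposition is to derive it from the processor directories themselves, as in this sketch (it assumes the command runs inside a PBS job, so PBS_NODEFILE is set): Code:
# Launch one MPI rank per processor* directory written by decomposePar
NPROCS=$(ls -d processor[0-9]* | wc -l)
mpirun -machinefile $PBS_NODEFILE -np $NPROCS interFoam -parallel > log.interFoam 2>&1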
|
October 5, 2011, 17:50 |
|
#12 | |
New Member
Join Date: Sep 2011
Posts: 5
Rep Power: 15 |
Quote:
|
||
|
|