Problems running in parallel - Pstream not available |
September 28, 2013, 10:27 | #1
hadi abdollahzadeh | Member | Join Date: Aug 2012 | Location: Iran-yasouj | Posts: 59
Hi,
I can run my case on a single core without any error, but when I run it with mpirun I get this error:
Quote:
September 28, 2013, 11:40 | #2
Bruno Santos | Retired Super Moderator | Join Date: Mar 2009 | Location: Lisbon, Portugal | Posts: 10,981
Hi Hadi,
It looks like you have an incomplete system installation. Which Linux distribution are you using, and which installation instructions did you follow?
Best regards, Bruno
September 29, 2013, 08:55 | #3
hadi abdollahzadeh
OpenFOAM 2.1.1 on openSUSE 12.3. I used these installation instructions:
Quote:
September 29, 2013, 09:07 | #4
Bruno Santos
Hi Hadi,
What do the following commands give you?
Code:
echo $WM_MPLIB
echo $FOAM_MPI_LIBBIN
mpirun --version
Best regards, Bruno
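For anyone hitting the same error, two extra checks usually help narrow this down. This is only a sketch; it assumes the stock OpenFOAM 2.1.x environment variables $FOAM_LIBBIN and $FOAM_MPI are set by etc/bashrc:
Code:
# Which mpirun is actually first in the PATH, and what does it resolve to?
which mpirun
readlink -f "$(which mpirun)"

# Was the MPI-enabled Pstream library built at all?
# (in OpenFOAM 2.x it normally lives under $FOAM_LIBBIN/$FOAM_MPI)
ls -l "$FOAM_LIBBIN/$FOAM_MPI/libPstream.so"
If the last command finds nothing, a Pstream-related error in parallel runs would not be surprising, since the solver cannot load the parallel communications library.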
September 30, 2013, 09:15 | #5
hadi abdollahzadeh
This is the result I get:
Code:
hadi@172-15-2-53:~> echo $WM_MPLIB
SYSTEMOPENMPI
hadi@172-15-2-53:~> echo $FOAM_MPI_LIBBIN

hadi@172-15-2-53:~> mpirun --version
-----------------------------------------------------------------------------
Synopsis:  mpirun [options] <app>
           mpirun [options] <where> <program> [<prog args>]

Description:  Start an MPI application in LAM/MPI.

Notes:
  [options]    Zero or more of the options listed below
  <app>        LAM/MPI appschema
  <where>      List of LAM nodes and/or CPUs (examples below)
  <program>    Must be a LAM/MPI program that either invokes MPI_INIT or has
               exactly one of its children invoke MPI_INIT
  <prog args>  Optional list of command line arguments to <program>

Options:
  -c <num>       Run <num> copies of <program> (same as -np)
  -client <rank> <host>:<port>
                 Run IMPI job; connect to the IMPI server <host> at port
                 <port> as IMPI client number <rank>
  -D             Change current working directory of new processes to the
                 directory where the executable resides
  -f             Do not open stdio descriptors
  -ger           Turn on GER mode
  -h             Print this help message
  -l             Force line-buffered output
  -lamd          Use LAM daemon (LAMD) mode (opposite of -c2c)
  -nger          Turn off GER mode
  -np <num>      Run <num> copies of <program> (same as -c)
  -nx            Don't export LAM_MPI_* environment variables
  -O             Universe is homogeneous
  -pty / -npty   Use/don't use pseudo terminals when stdout is a tty
  -s <nodeid>    Load <program> from node <nodeid>
  -sigs / -nsigs Catch/don't catch signals in MPI application
  -ssi <n> <arg> Set environment variable LAM_MPI_SSI_<n>=<arg>
  -toff          Enable tracing with generation initially off
  -ton, -t       Enable tracing with generation initially on
  -tv            Launch processes under TotalView Debugger
  -v             Be verbose
  -w / -nw       Wait/don't wait for application to complete
  -wd <dir>      Change current working directory of new processes to <dir>
  -x <envlist>   Export environment vars in <envlist>

Nodes:   n<list>, e.g., n0-3,5
CPUS:    c<list>, e.g., c0-3,5
Extras:  h (local node), o (origin node), N (all nodes), C (all CPUs)

Examples:
  mpirun n0-7 prog1
    Executes "prog1" on nodes 0 through 7.

  mpirun -lamd -x FOO=bar,DISPLAY N prog2
    Executes "prog2" on all nodes using the LAMD RPI. In the environment of
    each process, set FOO to the value "bar", and set DISPLAY to the current
    value.

  mpirun n0 N prog3
    Run "prog3" on node 0, *and* all nodes. This executes *2* copies on n0.

  mpirun C prog4 arg1 arg2
    Run "prog4" on each available CPU with command line arguments of "arg1"
    and "arg2". If each node has a CPU count of 1, the "C" is equivalent to
    "N". If at least one node has a CPU count greater than 1, LAM will run
    neighboring ranks of MPI_COMM_WORLD on that node. For example, if node 0
    has a CPU count of 4 and node 1 has a CPU count of 2, "prog4" will have
    MPI_COMM_WORLD ranks 0 through 3 on n0, and ranks 4 and 5 on n1.

  mpirun c0 C prog5
    Similar to the "prog3" example above, this runs "prog5" on CPU 0 *and* on
    each available CPU. This executes *2* copies on the node where CPU 0 is
    (i.e., n0). This is probably not a useful use of the "C" notation; it is
    only shown here for an example.

Defaults: -c2c -w -pty -nger -nsigs
-----------------------------------------------------------------------------
Code:
# Sample .bashrc for SuSE Linux
# Copyright (c) SuSE GmbH Nuernberg
#
# There are 3 different types of shells in bash: the login shell, normal shell
# and interactive shell. Login shells read ~/.profile and interactive shells
# read ~/.bashrc; in our setup, /etc/profile sources ~/.bashrc - thus all
# settings made here will also take effect in a login shell.
#
# NOTE: It is recommended to make language settings in ~/.profile rather than
# here, since multilingual X sessions would not work properly if LANG is over-
# ridden in every subshell.

# Some applications read the EDITOR variable to determine your favourite text
# editor. So uncomment the line below and enter the editor of your choice :-)
#export EDITOR=/usr/bin/vim
#export EDITOR=/usr/bin/mcedit

# For some news readers it makes sense to specify the NEWSSERVER variable here
#export NEWSSERVER=your.news.server

# If you want to use a Palm device with Linux, uncomment the two lines below.
# For some (older) Palm Pilots, you might need to set a lower baud rate
# e.g. 57600 or 38400; lowest is 9600 (very slow!)
#
#export PILOTPORT=/dev/pilot
#export PILOTRATE=115200

test -s ~/.alias && . ~/.alias || true

source /home/hadi/OpenFOAM/OpenFOAM-2.1.1/etc/bashrc WM_NCOMPPROCS=7 WM_MPLIB=SYSTEMOPENMPI
source /home/hadi/OpenFOAM/OpenFOAM-2.1.1/etc/bashrc WM_NCOMPPROCS=7 WM_MPLIB=SYSTEMOPENMPI
export PATH=${PATH}:${HOME}/bin:/opt/intel/bin:/opt/mpich2-install/bin:/home/moosaie/programs/ParaView-3.14.1-Linux-64bit/bin/:/usr/bin/mpicc
export LD_LIBRARY_PATH=/usr/lib/mpi/gcc/openmpi/lib
source /opt/intel/bin/compilervars.sh intel64
source /home/hadi/OpenFOAM/OpenFOAM-2.1.1/etc/bashrc WM_NCOMPPROCS=7 WM_MPLIB=SYSTEMOPENMPI
source /home/hadi/OpenFOAM/OpenFOAM-2.1.1/etc/bashrc WM_NCOMPPROCS=7 WM_MPLIB=SYSTEMOPENMPI

Last edited by wyldckat; October 2, 2013 at 09:05. Reason: Added [CODE][/CODE]
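On openSUSE, a quick way to see where the mpirun that is being picked up actually comes from (assuming it was installed through the package manager rather than built from source) is to ask rpm:
Code:
# Report which installed package owns the mpirun found in the PATH
rpm -qf "$(which mpirun)"
If rpm reports that the file is not owned by any package, it is almost certainly a from-source install, for example under /opt or in the home directory.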
October 2, 2013, 09:06 | #6
Bruno Santos
Quick answer: the problem is that your system's MPI is actually MPICH2. To get OpenFOAM working with it, follow the instructions given here: http://www.cfd-online.com/Forums/ope...tml#post383090 (post #9).
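A quick way to double-check which MPI family a given launcher belongs to is to ask it for its version; the exact strings vary between releases, so take the comments below as rough guidance only:
Code:
# The MPI families identify themselves quite differently:
#   Open MPI : reports something like "mpirun (Open MPI) 1.x"
#   MPICH2   : its Hydra launcher reports "HYDRA build details: ..."
#   LAM/MPI  : its mpirun has no --version flag and just prints its usage
#              text, which is exactly what happened in post #5 above
mpirun --version 2>&1 | head -n 5
mpiexec --version 2>&1 | head -n 5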
October 6, 2013, 08:59 | #7
hadi abdollahzadeh
I don't know how to uninstall MPICH, and I checked this:
Code:
hadi@:~> ls -l 'which mpich'
ls: cannot access which mpich: No such file or directory
hadi@:~> ls -l `which mpich`
which: no mpich in (/home/hadi/OpenFOAM/ThirdParty-2.1.1/platforms/linux64Gcc/paraview-3.12.0/bin:/home/hadi/OpenFOAM/hadi-2.1.1/platforms/linux64GccDPOpt/bin:/home/hadi/OpenFOAM/site/2.1.1/platforms/linux64GccDPOpt/bin:/home/hadi/OpenFOAM/OpenFOAM-2.1.1/platforms/linux64GccDPOpt/bin:/home/hadi/OpenFOAM/OpenFOAM-2.1.1/bin:/home/hadi/OpenFOAM/OpenFOAM-2.1.1/wmake:/opt/intel/composer_xe_2011_sp1.11.339/bin/intel64:/home/hadi/bin:/usr/local/bin:/usr/bin:/bin:/usr/bin/X11:/usr/X11R6/bin:/usr/games:/opt/kde3/bin:/opt/intel/bin:/opt/mpich2-install/bin:/home/moosaie/programs/ParaView-3.14.1-Linux-64bit/bin/:/usr/bin/mpicc:/opt/intel/composer_xe_2011_sp1.11.339/mpirt/bin/intel64)
total 83448
drwxr-xr-x 2 hadi users     4096 Sep 14 14:22 bin
drwxr-xr-x 2 hadi users     4096 Aug 17 07:36 Desktop
drwxr-xr-x 2 hadi users     4096 Aug 17 07:36 Documents
drwxr-xr-x 2 hadi users     4096 Sep 20 12:12 Downloads
drwxr-xr-x 3 hadi users     4096 Aug 31 12:14 mpich-install
drwxr-xr-x 2 hadi users     4096 Aug 17 07:36 Music
drwxr-xr-x 4 hadi users     4096 Sep  4 07:29 OpenFOAM
-rw-r--r-- 1 hadi root  30709473 Aug 17 07:36 OpenFOAM-2.1.1.tgz
drwxr-xr-x 2 hadi users     4096 Aug 17 07:36 Pictures
drwxr-xr-x 2 hadi users     4096 Aug 17 07:36 Public
drwxr-xr-x 2 hadi users     4096 Aug 17 07:35 public_html
drwxr-xr-x 2 hadi users     4096 Aug 17 07:36 Templates
-rw-r--r-- 1 hadi users      256 Aug 30 08:38 test1.f90
-rw-r--r-- 1 hadi users       97 Aug 30 08:41 test.cpp
-rw-r--r-- 1 hadi users       36 Aug 25 14:05 test.f90
-rw-r--r-- 1 hadi root  54677441 Aug 17 07:36 ThirdParty-2.1.1.tgz
drwxr-xr-x 2 hadi users     4096 Aug 17 07:36 Videos
Code:
hadi@:~> ls -l `which mpicc`
-rwxr-xr-x 1 root root 31376 Jan 27 2013 /usr/bin/mpicc
Code:
#export PILOTPORT=/dev/pilot
#export PILOTRATE=115200

test -s ~/.alias && . ~/.alias || true

source /home/hadi/OpenFOAM/OpenFOAM-2.1.1/etc/bashrc WM_NCOMPPROCS=7 WM_MPLIB=SYSTEMOPENMPI
source /home/hadi/OpenFOAM/OpenFOAM-2.1.1/etc/bashrc WM_NCOMPPROCS=7 WM_MPLIB=SYSTEMOPENMPI
#export PATH=${PATH}:${HOME}/bin:/opt/intel/bin:/opt/mpich2-install/bin:/home/moosaie/programs/ParaView-3.14.1-Linux-64bit/bin/:/usr/bin/mpicc
export PATH=${PATH}:${HOME}/bin:/opt/intel/bin:/home/moosaie/programs/ParaView-3.14.1-Linux-64bit/bin/:/usr/bin/mpicc
export LD_LIBRARY_PATH=/usr/lib/mpi/gcc/openmpi/lib
source /opt/intel/bin/compilervars.sh intel64
source /home/hadi/OpenFOAM/OpenFOAM-2.1.1/etc/bashrc WM_NCOMPPROCS=7 WM_MPLIB=SYSTEMOPENMPI
source /home/hadi/OpenFOAM/OpenFOAM-2.1.1/etc/bashrc WM_NCOMPPROCS=7 WM_MPLIB=SYSTEMOPENMPI

Last edited by wyldckat; October 6, 2013 at 12:17. Reason: Changed [QUOTE] to [CODE][/CODE]
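As a side note, ls -l 'which mpich' looks for a file literally named "which mpich"; it is the backquotes (or $(...)) that substitute the output of which into the ls command. There is also no binary called "mpich": the launchers are named mpirun and mpiexec. A small sketch of checks that make more sense here, using the directory names taken from the PATH and home listing above:
Code:
# Correct command substitution: list the file that `which` finds
ls -l "$(which mpirun)"
ls -l "$(which mpiexec)"

# See what the from-source MPICH2 install directories actually contain
ls /opt/mpich2-install/bin
ls ~/mpich-install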
October 6, 2013, 12:21 | #8
Bruno Santos
Hi Hadi,
Now I'm the one who is confused about what exactly you want to do here. OK, first let's do some clean-up:
Now, let's try to get something clear: which MPI toolboxes do you have installed on your system, and which one do you want to use?
Best regards, Bruno
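For what it's worth, most of the clean-up boils down to making sure the OpenFOAM environment is sourced exactly once in ~/.bashrc. A minimal sketch, reusing only the paths already shown in this thread; whether the Intel and ParaView lines are still needed is for Hadi to decide:
Code:
# OpenFOAM 2.1.1 environment - source it only once
source /home/hadi/OpenFOAM/OpenFOAM-2.1.1/etc/bashrc WM_NCOMPPROCS=7 WM_MPLIB=SYSTEMOPENMPI

# Intel compiler environment (only if the Fortran code still needs it)
source /opt/intel/bin/compilervars.sh intel64

# One PATH extension line; note that /usr/bin/mpicc is a file, not a
# directory, so it does not belong in the PATH at all
export PATH=${PATH}:${HOME}/bin:/opt/intel/bin:/home/moosaie/programs/ParaView-3.14.1-Linux-64bit/bin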
October 7, 2013, 09:32 | #9
hadi abdollahzadeh
I changed it to this:
Quote:
I use mpirun, and I don't understand what you mean by "what MPI toolboxes do you have installed in your system and which one do you want to use?".

Last edited by dark lancer; October 7, 2013 at 17:28.
October 7, 2013, 17:50 | #10
Bruno Santos
Hi Hadi,
Quote:
My question is which MPI toolbox you want to use, because you cannot have all three versions working at the same time in the same shell environment. I suggest that you study the following blog post, as well as the links it provides: Advanced tips for working with the OpenFOAM shell environment
Best regards, Bruno
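One common approach along those lines (not necessarily word-for-word what the blog post describes) is to wrap each environment in a shell alias, so that nothing is loaded until you ask for it in a given terminal:
Code:
# Put this in ~/.bashrc instead of sourcing the OpenFOAM environment directly;
# then type "of211" in a terminal whenever OpenFOAM + Open MPI is needed
alias of211='source $HOME/OpenFOAM/OpenFOAM-2.1.1/etc/bashrc WM_NCOMPPROCS=7 WM_MPLIB=SYSTEMOPENMPI'
This keeps the OpenFOAM, Intel and MPICH2 settings from stepping on each other, because each terminal only ever has the environment you explicitly loaded.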
October 8, 2013, 04:39 | #11
hadi abdollahzadeh
Hi Bruno, and thanks for all your help.
I want to use Open MPI with OpenFOAM, because I think Open MPI is the one set up for OpenFOAM. Intel MPI, I think (but I'm not sure), is used by the Intel Fortran that exists on the system. We also have a Fortran code that we run in parallel with MPICH2, and it now seems I have to uninstall MPICH2 so that Open MPI works, but I don't know how to uninstall MPICH2. One more question: is this case only a test?
Quote:

Last edited by dark lancer; October 8, 2013 at 09:21.
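Regarding the uninstall question: an MPICH2 built from source (which is what the /opt/mpich2-install and ~/mpich-install directories suggest) is not known to the package manager, so removing it is essentially a matter of deleting its install prefix and the PATH entries that point at it. A cautious sketch, assuming those directories contain nothing but the MPICH2 builds:
Code:
# Check the contents first - do not delete anything you still need!
ls /opt/mpich2-install
ls ~/mpich-install

# Remove the source-built MPICH2 installs
sudo rm -rf /opt/mpich2-install
rm -rf ~/mpich-install

# Finally, remove the /opt/mpich2-install/bin entry from the PATH line in ~/.bashrc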
October 13, 2013, 06:09 | #12
Bruno Santos
Hi Hadi,
I haven't tested the following steps, but they should work:
Best regards, Bruno
October 13, 2013, 08:40 | #13
hadi abdollahzadeh
This is my .bashrc. Don't I need to change the red line, or add a line for MPICH?
Quote:
October 13, 2013, 08:48 | #14
Bruno Santos
Hi Hadi,
Well, at least it looks like you're getting closer to solving this. The problem here is that the MPICH2 service - which is responsible for coordinating the communication between the parallel processes - is not running. How do you launch the FORTRAN programs that were built with MPICH2 in parallel?
Best regards, Bruno
PS: Please follow the instructions given in the second link in my signature, namely on how to post code and application output here on the forum. The point is that the correct way is to use [CODE], not [QUOTE].
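For reference, older MPICH2 builds that use the MPD process manager need the daemon running before mpiexec will start anything. A hedged sketch, assuming an MPD-based MPICH2 (Hydra-based builds do not need a daemon at all, and the program name below is just a placeholder):
Code:
# Start the MPD daemon on the local machine and confirm it is up
mpd --daemon
mpdtrace

# Then launch the MPICH2-built program as usual
mpiexec -np 4 ./my_fortran_program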
October 13, 2013, 15:13 | #15
Bruno Santos
Hi Hadi,
OK, I noticed that you updated your post. Try this:
Code:
echo export WM_MPLIB=MPICH > $WM_PROJECT_DIR/etc/prefs.sh
Best regards, Bruno
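After writing prefs.sh, the OpenFOAM environment normally has to be reloaded and the Pstream library rebuilt before the change takes effect. A sketch assuming the stock 2.1.1 source tree:
Code:
# Reload the OpenFOAM environment so the new WM_MPLIB=MPICH is picked up
source $HOME/OpenFOAM/OpenFOAM-2.1.1/etc/bashrc

# Depending on where MPICH2 is installed, etc/config/settings.sh may also
# need its paths adjusted before this step

# Rebuild the parallel communications library against the newly selected MPI
cd $WM_PROJECT_DIR/src/Pstream
./Allwmake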