December 27, 2012, 22:56 |
MPI problems
|
#1 |
Senior Member
Dongyue Li
Join Date: Jun 2012
Location: Beijing, China
Posts: 849
Rep Power: 18 |
Hi Foamers.
Following http://www.openfoam.org/download/git.php, I compiled OF 2.1.x and ParaView with no errors at all, and solvers run successfully in serial. But I want to run in parallel, and when I tried, I got these errors: Code:
cfd@cfd:~/OpenFOAM/tutorials/multiphase/interFoam/laminar/damBreak$ mpirun -np 4 interFoam -parallel
[cfd:08509] [[INVALID],INVALID] ORTE_ERROR_LOG: Not found in file runtime/orte_init.c at line 121
--------------------------------------------------------------------------
Sorry! You were supposed to get help about:
    orte_init:startup:internal-failure
But I couldn't open the help file:
    /home/cfd/OpenFOAM/ThirdParty-2.1.x/platforms/linux64Gcc/openmpi-1.5.3/share/openmpi/help-orte-runtime: No such file or directory. Sorry!
--------------------------------------------------------------------------
[cfd:08509] [[INVALID],INVALID] ORTE_ERROR_LOG: Not found in file orterun.c at line 572
cfd@cfd:~/OpenFOAM/tutorials/multiphase/interFoam/laminar/damBreak$
Code:
cfd@cfd:~/OpenFOAM/tutorials/multiphase/interFoam/laminar/damBreak$ mpirun
--------------------------------------------------------------------------
Sorry! You were supposed to get help about:
    orterun:usage
But I couldn't open the help file:
    /home/cfd/OpenFOAM/ThirdParty-2.1.x/platforms/linux64Gcc/openmpi-1.5.3/share/openmpi/help-orterun.txt: No such file or directory. Sorry!
--------------------------------------------------------------------------
cfd@cfd:~/OpenFOAM/tutorials/multiphase/interFoam/laminar/damBreak$
Code:
cfd@cfd:~/OpenFOAM/tutorials/multiphase/interFoam/laminar/damBreak$ sudo mpirun
[sudo] password for cfd:
mpirun (Open MPI) 1.5.3

Usage: mpirun [OPTION]...  [PROGRAM]...
Start the given program using Open RTE

 -am <arg0>               Aggregate MCA parameter set file list
 --app <arg0>             Provide an appfile; ignore all other command line options
 -bind-to-board|--bind-to-board   Whether to bind processes to specific boards (meaningless on 1 board/node)
 -bind-to-core|--bind-to-core     Whether to bind processes to specific cores
 -bind-to-none|--bind-to-none     Do not bind processes to cores or sockets (default)
 -bind-to-socket|--bind-to-socket Whether to bind processes to sockets
 -byboard|--byboard       Whether to assign processes round-robin by board (equivalent to bynode if only 1 board/node)
 -bycore|--bycore         Alias for byslot
 -bynode|--bynode         Whether to assign processes round-robin by node
 -byslot|--byslot         Whether to assign processes round-robin by slot (the default)
 -bysocket|--bysocket     Whether to assign processes round-robin by socket
 -c|-np|--np <arg0>       Number of processes to run
 -cf|--cartofile <arg0>   Provide a cartography file
 -cpu-set|--cpu-set <arg0>        Comma-separated list of ranges specifying logical cpus allocated to this job [default: none]
 -cpus-per-proc|--cpus-per-proc <arg0>  Number of cpus to use for each process [default=1]
 -cpus-per-rank|--cpus-per-rank <arg0>  Synonym for cpus-per-proc
 -d|-debug-devel|--debug-devel    Enable debugging of OpenRTE
 -debug|--debug           Invoke the user-level debugger indicated by the orte_base_user_debugger MCA parameter
 -debug-daemons|--debug-daemons   Enable debugging of any OpenRTE daemons used by this application
 -debug-daemons-file|--debug-daemons-file  Enable debugging of any OpenRTE daemons used by this application, storing output in files
 -debugger|--debugger <arg0>      Sequence of debuggers to search for when "--debug" is used
 -default-hostfile|--default-hostfile <arg0>  Provide a default hostfile
 -display-allocation|--display-allocation    Display the allocation being used by this job
 -display-devel-allocation|--display-devel-allocation  Display a detailed list (mostly intended for developers) of the allocation being used by this job
 -display-devel-map|--display-devel-map      Display a detailed process map (mostly intended for developers) just before launch
 -display-map|--display-map       Display the process map just before launch
 -do-not-launch|--do-not-launch   Perform all necessary operations to prepare to launch the application, but do not actually launch it
 -do-not-resolve|--do-not-resolve Do not attempt to resolve interfaces
 -gmca|--gmca <arg0> <arg1>       Pass global MCA parameters that are applicable to all contexts (arg0 is the parameter name; arg1 is the parameter value)
 -h|--help                This help message
 -H|-host|--host <arg0>   List of hosts to invoke processes on
 --hetero                 Indicates that multiple app_contexts are being provided that are a mix of 32/64 bit binaries
 -hostfile|--hostfile <arg0>      Provide a hostfile
 -launch-agent|--launch-agent <arg0>  Command used to start processes on remote nodes (default: orted)
 -leave-session-attached|--leave-session-attached  Enable debugging of OpenRTE
 -loadbalance|--loadbalance       Balance total number of procs across all allocated nodes
 -machinefile|--machinefile <arg0>  Provide a hostfile
 -mca|--mca <arg0> <arg1>         Pass context-specific MCA parameters; they are considered global if --gmca is not used and only one context is specified (arg0 is the parameter name; arg1 is the parameter value)
 -n|--n <arg0>            Number of processes to run
 -nolocal|--nolocal       Do not run any MPI applications on the local node
 -nooversubscribe|--nooversubscribe  Nodes are not to be oversubscribed, even if the system supports such operation
 --noprefix               Disable automatic --prefix behavior
 -nperboard|--nperboard <arg0>    Launch n processes per board on all allocated nodes
 -npernode|--npernode <arg0>      Launch n processes per node on all allocated nodes
 -npersocket|--npersocket <arg0>  Launch n processes per socket on all allocated nodes
 -num-boards|--num-boards <arg0>  Number of processor boards/node (1-256) [default: 1]
 -num-cores|--num-cores <arg0>    Number of cores/socket (1-256) [default: 1]
 -num-sockets|--num-sockets <arg0>  Number of sockets/board (1-256) [default: 1]
 -ompi-server|--ompi-server <arg0>  Specify the URI of the Open MPI server, or the name of the file (specified as file:filename) that contains that info
 -output-filename|--output-filename <arg0>  Redirect output from application processes into filename.rank
 -path|--path <arg0>      PATH to be used to look for executables to start processes
 -pernode|--pernode       Launch one process per available node on the specified number of nodes [no -np => use all allocated nodes]
 --prefix <arg0>          Prefix where Open MPI is installed on remote nodes
 --preload-files <arg0>   Preload the comma separated list of files to the remote machines current working directory before starting the remote process.
 --preload-files-dest-dir <arg0>  The destination directory to use in conjunction with --preload-files. By default the absolute and relative paths provided by --preload-files are used.
 -q|--quiet               Suppress helpful messages
 -report-bindings|--report-bindings  Whether to report process bindings to stderr
 -report-events|--report-events <arg0>  Report events to a tool listening at the specified URI
 -report-pid|--report-pid <arg0>  Printout pid on stdout [-], stderr [+], or a file [anything else]
 -report-uri|--report-uri <arg0>  Printout URI on stdout [-], stderr [+], or a file [anything else]
 -rf|--rankfile <arg0>    Provide a rankfile file
 -s|--preload-binary      Preload the binary on the remote machine before starting the remote process.
 -server-wait-time|--server-wait-time <arg0>  Time in seconds to wait for ompi-server (default: 10 sec)
 -show-progress|--show-progress   Output a brief periodic report on launch progress
 -slot-list|--slot-list <arg0>    List of processor IDs to bind MPI processes to (e.g., used in conjunction with rank files)
 -stdin|--stdin <arg0>    Specify procs to receive stdin [rank, all, none] (default: 0, indicating rank 0)
 -stride|--stride <arg0>  When binding multiple cores to a rank, the step size to use between cores [default: 1]
 -tag-output|--tag-output Tag all output with [job,rank]
 -timestamp-output|--timestamp-output  Timestamp all application process output
 -tmpdir|--tmpdir <arg0>  Set the root for the session directory tree for orterun ONLY
 -tv|--tv                 Deprecated backwards compatibility flag; synonym for "--debug"
 -use-regexp|--use-regexp Use regular expressions for launch
 -v|--verbose             Be verbose
 -V|--version             Print version and exit
 -wait-for-server|--wait-for-server  If ompi-server is not already running, wait until it is detected (default: false)
 -wd|--wd <arg0>          Synonym for --wdir
 -wdir|--wdir <arg0>      Set the working directory of the started processes
 -x <arg0>                Export an environment variable, optionally specifying a value (e.g., "-x foo" exports the environment variable foo and takes its value from the current environment; "-x foo=bar" exports the environment variable name foo and sets its value to "bar" in the started processes)
 -xml|--xml               Provide all output in XML format
 -xml-file|--xml-file <arg0>  Provide all output in XML format to the specified file
 -xterm|--xterm <arg0>    Create a new xterm window and display output from the specified ranks there

Report bugs to http://www.open-mpi.org/community/help/
cfd@cfd:~/OpenFOAM/tutorials/multiphase/interFoam/laminar/damBreak$
Code:
cfd@cfd:~/OpenFOAM/tutorials/multiphase/interFoam/laminar/damBreak$ sudo su
root@cfd:/home/cfd/OpenFOAM/tutorials/multiphase/interFoam/laminar/damBreak# mpirun -np 4 interFoam -parallel
--------------------------------------------------------------------------
mpirun was unable to launch the specified application as it could not find an executable:

Executable: interFoam
Node: cfd

while attempting to start process rank 0.
--------------------------------------------------------------------------
4 total processes failed to start
root@cfd:/home/cfd/OpenFOAM/tutorials/multiphase/interFoam/laminar/damBreak# |
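The last trace above also shows a second, separate problem: a root shell opened with `sudo su` does not source the user's ~/.bashrc, so the OpenFOAM environment (and with it the solver directory on the PATH) is gone, and mpirun cannot find interFoam. A minimal sketch of the effect, using `env -i` (an empty environment) to stand in for root's stripped environment:

```shell
# A shell without the user's OpenFOAM environment cannot resolve interFoam,
# which is exactly the "could not find an executable" failure above.
# 'env -i' starts a shell with an empty environment to demonstrate this.
found=$(env -i sh -c 'command -v interFoam || echo "not on PATH"')
echo "interFoam: $found"
```

Running the solver as the normal user, in a shell where the OpenFOAM bashrc has been sourced, avoids this entirely.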
|
December 28, 2012, 01:36 |
|
#3 |
Senior Member
Dongyue Li
Join Date: Jun 2012
Location: Beijing, China
Posts: 849
Rep Power: 18 |
December 28, 2012, 04:40 |
|
#5 |
Senior Member
Dongyue Li
Join Date: Jun 2012
Location: Beijing, China
Posts: 849
Rep Power: 18 |
Quote:
Code:
numberOfSubdomains 4;

method          simple;

simpleCoeffs
{
    n               ( 2 2 1 );
    delta           0.001;
}

hierarchicalCoeffs
{
    n               ( 1 1 1 );
    delta           0.001;
    order           xyz;
}

manualCoeffs
{
    dataFile        "";
}

distributed     no;

roots           ( );

// ************************************************************************* //
I think it's tough to install openmpi by myself. |
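For reference, a decomposeParDict like the one above pairs with the usual OpenFOAM parallel workflow, sketched here with hedged commands (standard tutorial usage, run as the normal user, never as root):

```shell
# Usual damBreak parallel run (assumes a sourced OpenFOAM environment):
#   decomposePar                      # split the case per decomposeParDict
#   mpirun -np 4 interFoam -parallel  # -np must equal numberOfSubdomains
#   reconstructPar                    # merge processor* results afterwards
# Consistency check: with the 'simple' method, the product of the
# coefficients in n must equal numberOfSubdomains (here 2 * 2 * 1 = 4).
n_x=2; n_y=2; n_z=1
numberOfSubdomains=4
if [ $((n_x * n_y * n_z)) -eq "$numberOfSubdomains" ]; then
    echo "decomposition consistent"
else
    echo "n does not match numberOfSubdomains" >&2
fi
```

If the product and numberOfSubdomains disagree, decomposePar itself will refuse to run, so this check is worth doing before launching mpirun.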
December 28, 2012, 04:56 |
|
#6 |
Senior Member
Dongyue Li
Join Date: Jun 2012
Location: Beijing, China
Posts: 849
Rep Power: 18 |
I followed this page strictly: http://www.openfoam.org/download/git.php
But I can NOT install openmpi correctly. It's driving me crazy and I'm upset!!! |
|
December 28, 2012, 05:16 |
|
#8 |
Senior Member
Dongyue Li
Join Date: Jun 2012
Location: Beijing, China
Posts: 849
Rep Power: 18 |
Quote:
Code:
cfd@cfd:~/OpenFOAM/tutorials/multiphase/interFoam/laminar/damBreak$ mpirun -np 4 interFoam -parallel
[cfd:08509] [[INVALID],INVALID] ORTE_ERROR_LOG: Not found in file runtime/orte_init.c at line 121
--------------------------------------------------------------------------
Sorry! You were supposed to get help about:
    orte_init:startup:internal-failure
But I couldn't open the help file:
    /home/cfd/OpenFOAM/ThirdParty-2.1.x/platforms/linux64Gcc/openmpi-1.5.3/share/openmpi/help-orte-runtime: No such file or directory. Sorry!
--------------------------------------------------------------------------
[cfd:08509] [[INVALID],INVALID] ORTE_ERROR_LOG: Not found in file orterun.c at line 572
cfd@cfd:~/OpenFOAM/tutorials/multiphase/interFoam/laminar/damBreak$ |
December 28, 2012, 05:17 |
|
#9 |
Senior Member
Laurence R. McGlashan
Join Date: Mar 2009
Posts: 370
Rep Power: 23 |
What is the result of:
Code:
ls $WM_THIRD_PARTY_DIR/platforms/linux64Gcc/openmpi-1.5.3/share/openmpi/
And also post the output of the Allwmake script that you ran in $WM_THIRD_PARTY_DIR. Code:
cd $WM_THIRD_PARTY_DIR
./Allwmake > log.make 2>&1 &
P.S. Do not run stuff as root.
__________________
Laurence R. McGlashan :: Website |
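A quick way to answer Laurence's first question is to test directly for the directory that the error messages point at. A hedged sketch, with the path taken from the error trace above and `$HOME` standing in for /home/cfd:

```shell
# Check whether the ThirdParty OpenMPI install tree actually exists;
# the missing help-file errors above suggest it was never built/installed.
MPI_DIR="$HOME/OpenFOAM/ThirdParty-2.1.x/platforms/linux64Gcc/openmpi-1.5.3"
if [ -d "$MPI_DIR/share/openmpi" ]; then
    status="present"
else
    status="missing"
fi
echo "OpenMPI help files: $status ($MPI_DIR)"
```

If the directory is missing, mpirun itself may still run (as the help output above shows), but it is not the MPI that OpenFOAM was compiled against, which matches the ORTE_ERROR_LOG failures.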
|
December 28, 2012, 05:28 |
|
#10 |
Senior Member
Dongyue Li
Join Date: Jun 2012
Location: Beijing, China
Posts: 849
Rep Power: 18 |
Quote:
No, it does not exist. Code:
cfd@cfd:~/OpenFOAM/ThirdParty-2.1.x/platforms/linux64Gcc$ ls
paraview-3.12.0
I had tried to install OpenMPI manually:
cd OpenFOAM/ThirdParty-2.1.x/openmpi1.5.3
./configure
make all install
Then I set my .bashrc, but it does not work; actually, I don't know how to set the .bashrc. I searched the internet... Forget it! I will just reinstall my Ubuntu and recompile OpenFOAM. I am totally pissed off! I am running interFoam on a mesh with 3,000,000 cells; serial is too slow and I can't endure that. Now I am upset and need a rest. Anyway, thank you guys very much. I will update my problem later, after I reinstall my Ubuntu and OpenFOAM. |
December 28, 2012, 05:38 |
|
#11 |
Senior Member
Laurence R. McGlashan
Join Date: Mar 2009
Posts: 370
Rep Power: 23 |
Quote:
Unpack the openmpi sources into $WM_THIRD_PARTY_DIR and then just run the Allwmake script in $WM_THIRD_PARTY_DIR. That will install openmpi for you.
__________________
Laurence R. McGlashan :: Website |
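A hedged sketch of that suggestion (version number taken from the thread). One detail worth checking: the unpacked directory name should be openmpi-1.5.3 with a dash, matching what the ThirdParty build scripts look for, not openmpi1.5.3 as in the manual attempt earlier:

```shell
# Steps sketched from the advice above (run with OpenFOAM env sourced):
#   cd $WM_THIRD_PARTY_DIR
#   tar xzf openmpi-1.5.3.tar.gz    # unpack directly inside ThirdParty
#   ./Allwmake > log.make 2>&1 &    # rebuild; this builds OpenMPI too
# The unpacked directory name normally matches the tarball basename:
tarball="openmpi-1.5.3.tar.gz"
srcdir="${tarball%.tar.gz}"
echo "expected source directory: \$WM_THIRD_PARTY_DIR/$srcdir"
```

Allwmake then installs OpenMPI under $WM_THIRD_PARTY_DIR/platforms/linux64Gcc/openmpi-1.5.3, the exact path the error messages earlier in the thread complain is missing.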
December 28, 2012, 05:40 |
|
#12 |
Senior Member
|
Check this link:
http://auriza.site40.net/notes/mpi/o...on-ubuntu-904/ Try installing it from the command prompt. Best of luck! |