|
December 26, 2020, 07:38 |
mpirun suddenly doesn't work
|
#1 |
Member
Join Date: Feb 2018
Posts: 58
Rep Power: 8 |
Hello,
my mpirun on Ubuntu 20.04 doesn't work anymore. I used it a few weeks ago and it worked fine, but when I tried it a few minutes ago it failed. I decompose the case first and then I run the Code:
mpirun -np 4 interFoam -parallel
Then this message appears. Code:
kai@Kai-Desktop:~/OpenFOAM/kai-7/run/tutorials_of/multiphase/interFoam/laminar/damBreak_stl_II/damBreak$ mpirun -np 4 interFoam -parallel
--------------------------------------------------------------------------
It looks like MPI_INIT failed for some reason; your parallel process is
likely to abort. There are many reasons that a parallel process can
fail during MPI_INIT; some of which are due to configuration or environment
problems. This failure appears to be an internal failure; here's some
additional information (which may only be relevant to an Open MPI
developer):

  ompi_mpi_init: ompi_rte_init failed
  --> Returned "(null)" (-43) instead of "Success" (0)
--------------------------------------------------------------------------
(the block above is printed once for each of the four processes)
*** An error occurred in MPI_Init_thread
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
***    and potentially your MPI job)
[Kai-Desktop:3304] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
(the abort message above is repeated for processes 3305, 3306 and 3307)
--------------------------------------------------------------------------
Primary job terminated normally, but 1 process returned a non-zero exit code.
Per user-direction, the job has been aborted.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun detected that one or more processes exited with non-zero status,
thus causing the job to be terminated. The first process to do so was:

  Process name: [[48186,1],0]
  Exit code: 1
--------------------------------------------------------------------------
kai@Kai-Desktop:~/OpenFOAM/kai-7/run/tutorials_of/multiphase/interFoam/laminar/damBreak_stl_II/damBreak$
Does anyone know what to do to get it working again?

Best regards
Kai |
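The ompi_rte_init failure above often comes from a mismatch between the Open MPI that OpenFOAM was built against and the Open MPI found at run time (for example after system updates), which is consistent with the fixes reported later in this thread. A quick sanity-check sketch (not from the original post; it assumes the OpenFOAM-7 environment from etc/bashrc is sourced so that interFoam, $FOAM_MPI and the rest resolve). Code:
# Compare the launcher on the PATH with the MPI the solver is linked against
which mpirun                          # which mpirun is actually picked up
mpirun --version                      # its Open MPI version
echo $FOAM_MPI                        # MPI flavour OpenFOAM expects (e.g. openmpi-system)
ldd $(which interFoam) | grep -i mpi  # libmpi the solver binary is linked against
If the library reported by ldd and the mpirun on the PATH come from different Open MPI installations or versions, that mismatch is a plausible cause of the MPI_INIT failure.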
|
December 28, 2020, 02:33 |
Update
|
#2 |
Member
Join Date: Feb 2018
Posts: 58
Rep Power: 8 |
Good morning,
I'm trying to fix this error by myself and I have a little update. The Open MPI version I use is: Code:
kai@Kai-Desktop:~/Dokumente$ mpirun --version
mpirun (Open MPI) 4.0.3
I also tested MPI with this small hello-world test program. Code:
#include <mpi.h>
#include <stdio.h>

int main(int argc, char** argv) {
    // Initialize the MPI environment
    MPI_Init(NULL, NULL);

    // Get the number of processes
    int world_size;
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    // Get the rank of the process
    int world_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    // Get the name of the processor
    char processor_name[MPI_MAX_PROCESSOR_NAME];
    int name_len;
    MPI_Get_processor_name(processor_name, &name_len);

    // Print off a hello world message
    printf("Hello world from processor %s, rank %d out of %d processors\n",
           processor_name, world_rank, world_size);

    // Finalize the MPI environment.
    MPI_Finalize();
}
After I compile and run it: Code:
kai@Kai-Desktop:~/Dokumente$ mpirun -np 4 ./hello_world -parallel
Hello world from processor Kai-Desktop, rank 0 out of 4 processors
Hello world from processor Kai-Desktop, rank 1 out of 4 processors
Hello world from processor Kai-Desktop, rank 2 out of 4 processors
Hello world from processor Kai-Desktop, rank 3 out of 4 processors
So MPI itself works on my computer, doesn't it? Why does OpenFOAM not work with it?

Best regards
Kai |
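For completeness, this is roughly how the test program would be compiled and run; the file name hello_world.c is an assumption, while mpicc is Open MPI's standard C compiler wrapper. Code:
# Sketch only: assumes the listing above is saved as hello_world.c
mpicc hello_world.c -o hello_world
mpirun -np 4 ./hello_world
A successful run only shows that the system Open MPI is healthy on its own; it does not prove that OpenFOAM's solvers were linked against that same Open MPI.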
|
December 29, 2020, 07:26 |
|
#3 |
Member
Join Date: Feb 2018
Posts: 58
Rep Power: 8 |
Update:
I installed OpenFOAM 8 and mpirun works again. I don't know why this error still occurs in v7.
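With two OpenFOAM versions installed side by side, it can be worth confirming which one a given shell actually uses. A generic sketch (assumes one version's etc/bashrc has been sourced, which sets WM_PROJECT_VERSION). Code:
# Check which OpenFOAM version and which launcher the current shell picks up
echo $WM_PROJECT_VERSION   # e.g. 7 or 8
which interFoam            # solver from that installation
which mpirun               # mpirun that will be used to launch it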
|
March 18, 2021, 15:25 |
|
#4 |
New Member
Evren Yilmaz Yakin
Join Date: Feb 2016
Location: Ankara, Turkey
Posts: 27
Rep Power: 10 |
Hi,
I have the same error now. It started after the upgrade from 18.04 to 20.04. SOLVED: I uninstalled MPI and reinstalled it, and that fixed it.
__________________
Best Regards,
Evren

Last edited by evrenykn; March 19, 2021 at 02:03. Reason: Solved |
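The post does not say which packages were removed. On Ubuntu 20.04 the system Open MPI normally comes from the openmpi-bin, libopenmpi3 and libopenmpi-dev packages, so a reinstall along the following lines is one possible reading of the fix (a sketch, not the poster's exact commands). Code:
# Sketch only: reinstall the standard Ubuntu Open MPI packages
sudo apt-get install --reinstall openmpi-bin libopenmpi3 libopenmpi-dev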
|
March 24, 2021, 14:07 |
|
#5 |
Member
Join Date: Feb 2018
Posts: 58
Rep Power: 8 |
Hey evrenykn,
which packages did you uninstall?

Best regards
Kai |
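The thread never records an answer to this question. One generic way to see which Open MPI packages are installed on a Debian/Ubuntu system (a sketch, not evrenykn's actual steps). Code:
# List installed Open MPI related packages
dpkg -l | grep -i openmpi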
|
November 23, 2021, 23:10 |
|
#6 |
New Member
Jin Zhang
Join Date: May 2018
Location: Germany
Posts: 15
Rep Power: 8 |
||
November 24, 2021, 11:58 |
|
#7 |
Member
Join Date: Feb 2018
Posts: 58
Rep Power: 8 |
Hey Jin,
yes, you can update your OpenFOAM version to 8 or 9, or update your Linux operating system and mpirun. After an update it works flawlessly again.

Best regards
Kai |
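A sketch of the upgrade route via the OpenFOAM Foundation's Ubuntu packages; the package name (openfoam8 vs openfoam9), the repository setup and the install prefix /opt/openfoam8 are assumptions that should be checked against the current instructions on openfoam.org. Code:
# Assumes the OpenFOAM Foundation apt repository has already been added
# following the instructions on openfoam.org
sudo apt-get update
sudo apt-get install openfoam8
# Use the packaged version's environment in the current shell
source /opt/openfoam8/etc/bashrc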
|