|
August 23, 2018, 15:15 |
|
#21 |
New Member
Faraz
Join Date: Mar 2018
Posts: 25
Rep Power: 8 |
You have to edit this to match your installation; it was not meant to be copy/pasted. Read it again.
|
|
August 24, 2018, 08:22 |
|
#22 |
New Member
Rishab.G.Hombal
Join Date: Aug 2018
Posts: 20
Rep Power: 8 |
Hi,
When I type "mpirun -np 6 -hostfile machines simpleFoam -parallel", this is what I get:

[mpiuser-HP-ProDesk-400-G2-MT-TPM-DP:20848] [[33493,0],0] usock_peer_send_blocking: send() to socket 41 failed: Broken pipe (32)
[mpiuser-HP-ProDesk-400-G2-MT-TPM-DP:20848] [[33493,0],0] ORTE_ERROR_LOG: Unreachable in file oob_usock_connection.c at line 316
[mpiuser-HP-ProDesk-400-G2-MT-TPM-DP:20848] [[33493,0],0]-[[33493,1],0] usock_peer_accept: usock_peer_send_connect_ack failed
--------------------------------------------------------------------------
mpirun was unable to find the specified executable file, and therefore did not launch the job. This error was first reported for process rank 3; it may have occurred for other processes as well.
NOTE: A common cause for this error is misspelling a mpirun command line parameter option (remember that mpirun interprets the first unrecognized command line token as the executable).
Node: client1
Executable: /opt/openfoam6/platforms/linux64GccDPInt32Opt/bin/simpleFoam
--------------------------------------------------------------------------
3 total processes failed to start

When I type "mpirun -np 6 -hostfile machines foamJob simpleFoam -parallel", this is what I get:

Application : simpleFoam
Executing: /opt/openfoam6/platforms/linux64GccDPInt32Opt/bin/simpleFoam -parallel > log 2>&1 &
Application : simpleFoam
Application : simpleFoam
Executing: /opt/openfoam6/platforms/linux64GccDPInt32Opt/bin/simpleFoam -parallel > log 2>&1 &
Executing: /opt/openfoam6/platforms/linux64GccDPInt32Opt/bin/simpleFoam -parallel > log 2>&1 &
--------------------------------------------------------------------------
mpirun was unable to find the specified executable file, and therefore did not launch the job. This error was first reported for process rank 3; it may have occurred for other processes as well.
NOTE: A common cause for this error is misspelling a mpirun command line parameter option (remember that mpirun interprets the first unrecognized command line token as the executable).
Node: client1
Executable: /opt/openfoam6/bin/foamJob
--------------------------------------------------------------------------
3 total processes failed to start

Along with a log file which says:

--------------------------------------------------------------------------
It looks like MPI_INIT failed for some reason; your parallel process is likely to abort. There are many reasons that a parallel process can fail during MPI_INIT; some of which are due to configuration or environment problems. This failure appears to be an internal failure; here's some additional information (which may only be relevant to an Open MPI developer):
ompi_mpi_init: ompi_rte_init failed
--> Returned "(null)" (-43) instead of "Success" (0)
--------------------------------------------------------------------------
*** An error occurred in MPI_Init_thread
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
*** and potentially your MPI job)
[mpiuser-HP-ProDesk-400-G2-MT-TPM-DP:20554] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed! |
|
August 24, 2018, 09:40 |
|
#23 |
New Member
Rishab.G.Hombal
Join Date: Aug 2018
Posts: 20
Rep Power: 8 |
Yes, I did customise it. When I type "source /opt/openfoam6/etc/bashrc" nothing happens; I don't get any output.
Then when I type "mpirun -np 6 -hostfile machines simpleFoam -parallel", this is what I get:

[mpiuser-HP-ProDesk-400-G2-MT-TPM-DP:03141] [[57312,0],0] usock_peer_send_blocking: send() to socket 39 failed: Broken pipe (32)
[mpiuser-HP-ProDesk-400-G2-MT-TPM-DP:03141] [[57312,0],0] ORTE_ERROR_LOG: Unreachable in file oob_usock_connection.c at line 316
[mpiuser-HP-ProDesk-400-G2-MT-TPM-DP:03141] [[57312,0],0]-[[57312,1],0] usock_peer_accept: usock_peer_send_connect_ack failed
--------------------------------------------------------------------------
mpirun was unable to find the specified executable file, and therefore did not launch the job. This error was first reported for process rank 3; it may have occurred for other processes as well.
NOTE: A common cause for this error is misspelling a mpirun command line parameter option (remember that mpirun interprets the first unrecognized command line token as the executable).
Node: client
Executable: /opt/openfoam6/platforms/linux64GccDPInt32Opt/bin/simpleFoam |
|
August 24, 2018, 10:21 |
|
#24 |
New Member
Faraz
Join Date: Mar 2018
Posts: 25
Rep Power: 8 |
For example, here's what the line looks like in my bashrc file:

source /opt/apps/OpenFOAM/OpenFOAM-v1712/etc/bashrc |
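A quick way to check whether that environment is actually picked up on the remote node (a sketch, assuming the slave is named client1 as in your logs) is to run the same kind of non-interactive shell that mpirun uses:

# If this prints nothing, ~/.bashrc is not sourcing the OpenFOAM
# environment for non-interactive shells on that node:
ssh client1 'which simpleFoam'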
|
August 24, 2018, 11:07 |
|
#25 |
New Member
Rishab.G.Hombal
Join Date: Aug 2018
Posts: 20
Rep Power: 8 |
Hi,
Yes, the line is there in the bashrc file on both nodes, i.e. master and client. In my case the line is:

source /opt/openfoam6/etc/bashrc |
|
August 24, 2018, 11:47 |
|
#26 |
New Member
Faraz
Join Date: Mar 2018
Posts: 25
Rep Power: 8 |
Rank 3 is the process that starts on the slave, so probably some paths are different on the slave node. Is /opt an NFS share, or did you install it separately on both machines?
At this point I would remove the installation from the slave and just make /opt an NFS share. |
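For reference, a minimal NFS setup might look like the sketch below (assumptions: Ubuntu-style packages and hostnames master/client1 as in your logs; adjust paths and export options to your installation):

# On the master (NFS server): export /opt to the slave
sudo apt install nfs-kernel-server
echo '/opt client1(ro,sync,no_subtree_check)' | sudo tee -a /etc/exports
sudo exportfs -ra

# On the slave, after removing its local copy of /opt/openfoam6:
sudo apt install nfs-common
sudo mount master:/opt /opt

That way both nodes see identical binaries under identical paths.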
|
August 25, 2018, 01:37 |
|
#27 |
Member
Join Date: Nov 2014
Posts: 92
Rep Power: 12 |
|
August 25, 2018, 08:11 |
|
#28 |
New Member
Rishab.G.Hombal
Join Date: Aug 2018
Posts: 20
Rep Power: 8 |
Hi feacluster,
/opt is not an NFS share. I have not installed NFS; I installed OpenFOAM separately inside /opt on each machine. |
|
August 25, 2018, 08:14 |
|
#29 |
New Member
Rishab.G.Hombal
Join Date: Aug 2018
Posts: 20
Rep Power: 8 |
Hi hokhay,
Yes, I placed the source line in the first line of the bashrc file, and also in /etc/profile and in ~/.profile. This did the trick. Thanks a lot for your support, feacluster and hokhay! You guys are my heroes! |
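For anyone hitting the same problem: on Ubuntu the default ~/.bashrc returns early for non-interactive shells, so anything sourced below that test never runs when mpirun/ssh starts remote processes. Putting the source line above it is what fixes this; a minimal sketch of the resulting ~/.bashrc:

# OpenFOAM environment first, before the interactivity test:
source /opt/openfoam6/etc/bashrc

# The stock Ubuntu ~/.bashrc stops here for non-interactive shells,
# so everything below this point is skipped by mpirun/ssh:
case $- in
    *i*) ;;
      *) return;;
esac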
|
August 25, 2018, 08:21 |
|
#30 |
New Member
Rishab.G.Hombal
Join Date: Aug 2018
Posts: 20
Rep Power: 8 |
Hi,
The solver is running, but I ran out of memory in just 10 iterations! Can anyone explain why this is happening? My case has approximately 7 million cells, and my computer configuration is: 4 GB RAM, 500 GB HDD, and a 4-core i5 processor running at 3.8 GHz (i.e. four physical cores and two logical cores). I get a dialogue box after 10 iterations saying simpleFoam stopped unexpectedly because it ran out of memory. If I have to increase memory, what should I upgrade in my computers? My decomposeParDict file has the distributed option set to "no"; is this causing the problem? |
|
August 25, 2018, 15:41 |
|
#31 |
Member
Join Date: Nov 2014
Posts: 92
Rep Power: 12 |
I think it's simply too little RAM; 7 million cells is not a small number. My rough guess is that with 4 GB of RAM you may be able to run a 2-million-cell simulation.
To be honest, your PC is not up to the job for practical CFD. A 7-million-cell simulation would take at least 2 days to complete, depending on the convergence rate. You need a new PC with at least 16 GB of RAM. |
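A commonly quoted rule of thumb (rough and solver-dependent, so treat it as an assumption) is on the order of 1 GB of RAM per million cells for a steady incompressible solver like simpleFoam:

7 million cells x ~1 GB per million ≈ 7 GB needed, versus 4 GB installed
4 GB installed => roughly a 2-4 million cell ceiling

which is consistent with the crash after a few iterations.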
|
August 25, 2018, 18:54 |
|
#32 |
New Member
Rishab.G.Hombal
Join Date: Aug 2018
Posts: 20
Rep Power: 8 |
Hi,
Can you please explain a way to calculate these things, maybe not exactly but roughly? This is just a coarse mesh: I will significantly refine the mesh for grid independence, which will increase the number of cells, and in the future I will also do some fluid-structure interaction simulations. So if I need to upgrade to a new computer I have to decide the specs, and I am open to buying a server or setting up another cluster, whichever gives better performance. All I know is that once the count drops below about 50k cells per processor, running OpenFOAM in parallel stops paying off. My target is to solve 30 million cells in less than 2 hours. The university I am studying at is willing to fund the setup, so please advise; your inputs and suggestions are highly appreciated.
Regards,
Rishab |
|
August 26, 2018, 00:28 |
|
#33 |
Member
Join Date: Nov 2014
Posts: 92
Rep Power: 12 |
I don't think there is any way to calculate this; it is just from my experience.
For your reference, I am running a steady-state car aerodynamics simulation with 35 million cells on 12 server computers with a total of 192 cores, and this configuration takes about 18 hours to run 10,000 iterations. They are 6-year-old servers with E5-2650 CPUs; the new AMD EPYC CPUs could easily double the performance. To finish a 30-million-cell simulation in 2 hours, I guess you may need more powerful servers than what I have, and your simulation needs to converge in fewer iterations. It is really case dependent. |
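A back-of-envelope estimate from those figures (assuming roughly linear scaling and a similar iteration count, which is optimistic):

35e6 cells x 10,000 iterations / (192 cores x 18 h) ≈ 1.0e8 cell-iterations per core-hour
30e6 cells x 10,000 iterations in 2 h => 1.5e11 per hour / 1.0e8 ≈ 1,500 cores

so hitting the 2-hour target on comparable hardware would need on the order of 1,500 cores, or correspondingly fewer iterations to converge.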
|
August 26, 2018, 08:12 |
|
#34 |
New Member
Rishab.G.Hombal
Join Date: Aug 2018
Posts: 20
Rep Power: 8 |
So when you say server PC, what exactly do you mean? Do you literally mean a server, or is it a PC? Either way, can you please tell me the specs: how many memory slots for a 16-core CPU, and so on?
|
|
August 26, 2018, 15:02 |
|
#35 |
Member
Join Date: Nov 2014
Posts: 92
Rep Power: 12 |
I mean a server. The one I am using is a PowerEdge R620. It is a dual-CPU machine with a total of 16 RAM slots; you can find the spec on the Dell website. Also, OpenFOAM is memory-intensive software, which means memory bandwidth has a larger impact on performance than the CPU.
I suggest you read the following paper: https://www.researchgate.net/publica...and_Don'ts |
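If you want to compare candidate machines on memory bandwidth specifically, the STREAM benchmark is the usual yardstick; a minimal sketch (assuming gcc with OpenMP is available):

# Fetch, build and run STREAM, the de-facto memory bandwidth benchmark:
wget https://www.cs.virginia.edu/stream/FTP/Code/stream.c
gcc -O3 -fopenmp -DSTREAM_ARRAY_SIZE=80000000 stream.c -o stream
./stream    # compare the Triad figure (MB/s) across machines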
|
August 27, 2018, 14:34 |
|
#36 |
New Member
Faraz
Join Date: Mar 2018
Posts: 25
Rep Power: 8 |
For example, these are the relevant lines in my bashrc file:

source /opt/intel/compilers_and_libraries/linux/mpi/intel64/bin/mpivars.sh
source /opt/apps/OpenFOAM/OpenFOAM-v1712/etc/bashrc
export LD_LIBRARY_PATH=/opt/apps/intel:$LD_LIBRARY_PATH
export I_MPI_FABRICS=shm:dapl
export I_MPI_DAPL_PROVIDER=ofa-v2-ib0
export I_MPI_DYNAMIC_CONNECTION=0
|
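(Note: the I_MPI_* exports above apply to an Intel MPI build of OpenFOAM v1712. The stock OpenFOAM 6 packages are built against Open MPI, as the ORTE messages in the logs earlier in this thread show, so in that setup only the source line is needed.)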
|