December 16, 2020, 17:15
Parallel computation on cluster
#1
New Member
Hamed Hoorijani
Join Date: May 2019
Location: Gent, Belgium
Posts: 23
Rep Power: 7
Hi, I'm trying to run parallel computations on a cluster using OpenFOAM 7, but when I switch my cases from a single CPU core to multiple cores, the solution runs much more slowly. I tested the OpenFOAM tutorials as well, and they were also slow in parallel.
I think it might be related to MPI and OpenFOAM's dependencies, because I installed only the flex package manually and skipped the other required libraries. I'm using the cluster without root privileges or internet access, so I have to install most packages manually. I would appreciate any help in finding and fixing the problem. Here is some information about the OS and the installed packages:

$ cat /etc/os-release
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"

$ mpirun --version
mpirun (Open MPI) 2.1.1

Report bugs to http://www.open-mpi.org/community/help/

$ flex --version
flex 2.5.39

The settings of the bashrc file:

#- Compiler location:
#    WM_COMPILER_TYPE= system | ThirdParty (OpenFOAM)
export WM_COMPILER_TYPE=system

#- Compiler:
#    WM_COMPILER = Gcc | Gcc48 ... Gcc62 | Clang | Icc
export WM_COMPILER=Gcc
unset WM_COMPILER_ARCH WM_COMPILER_LIB_ARCH

#- Label size:
#    WM_LABEL_SIZE = 32 | 64
export WM_LABEL_SIZE=32

#- Optimised, debug, profiling:
#    WM_COMPILE_OPTION = Opt | Debug | Prof
export WM_COMPILE_OPTION=Opt

#- MPI implementation:
#    WM_MPLIB = SYSTEMOPENMPI | OPENMPI | SYSTEMMPI | MPICH | MPICH-GM | HPMPI
#               | MPI | FJMPI | QSMPI | SGIMPI | INTELMPI
export WM_MPLIB=SYSTEMOPENMPI

I should also say that OpenFOAM compiled and everything works, but the runs are slower compared to my laptop. I couldn't find a guide for installing OpenFOAM on CentOS without root privileges or internet access. I'll be grateful if someone can guide me through the installation or help me find the problem.
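A first check for this kind of slowdown is whether the run is oversubscribed (more MPI ranks than cores actually allocated to you) and whether the mpirun on the PATH is the same Open MPI that OpenFOAM was linked against. A minimal sketch; `FOAM_LIBBIN` and `FOAM_MPI` are the standard OpenFOAM environment variables, and the guards just skip steps when MPI or the OpenFOAM environment is not available:

```shell
#!/bin/sh
# How many cores does this shell actually see?
echo "available cores: $(nproc)"

# Which Open MPI will launch the ranks? (skipped if none on PATH)
if command -v mpirun >/dev/null 2>&1; then
    command -v mpirun
    mpirun --version | head -n 1
fi

# If an OpenFOAM environment is sourced, check which MPI library
# libPstream.so was linked against; skipped otherwise.
if [ -n "$FOAM_LIBBIN" ] && [ -n "$FOAM_MPI" ]; then
    ldd "$FOAM_LIBBIN/$FOAM_MPI/libPstream.so" | grep -i mpi
fi
```

With Open MPI, `mpirun --report-bindings -np 5 interFoam -parallel` additionally prints which cores each rank is bound to, which makes oversubscription or all ranks landing on one core easy to spot.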
December 17, 2020, 07:28
Log files for the first time-step

#2
New Member
Hamed Hoorijani
Join Date: May 2019
Location: Gent, Belgium
Posts: 23
Rep Power: 7
$ mpirun -np 5 interFoam -parallel
/*---------------------------------------------------------------------------*\
  =========                 |
  \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox
   \\    /   O peration     | Website:  https://openfoam.org
    \\  /    A nd           | Version:  7
     \\/     M anipulation  |
\*---------------------------------------------------------------------------*/
Build  : 7
Exec   : interFoam -parallel
Date   : Dec 17 2020
Time   : 14:28:30
Host   : "cschpc.localdomain"
PID    : 62216
I/O    : uncollated
Case   : /home/hoorijani/Tests/microfluidic
nProcs : 5
Slaves : 4
(
"cschpc.localdomain.62217"
"cschpc.localdomain.62218"
"cschpc.localdomain.62219"
"cschpc.localdomain.62220"
)
Pstream initialized with:
    floatTransfer      : 0
    nProcsSimpleSum    : 0
    commsType          : nonBlocking
    polling iterations : 0
sigFpe : Enabling floating point exception trapping (FOAM_SIGFPE).
fileModificationChecking : Monitoring run-time modified files using timeStampMaster (fileModificationSkew 10)
allowSystemOperations : Allowing user-supplied system call operations

// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //
Create time

Create mesh for time = 0

Selecting dynamicFvMesh dynamicRefineFvMesh

PIMPLE: No convergence criteria found
PIMPLE: Operating solver in transient mode with 1 outer corrector
PIMPLE: Operating solver in PISO mode

Reading field p_rgh
Reading field U
Reading/calculating face flux field phi
Reading transportProperties

Selecting incompressible transport model Newtonian
Selecting incompressible transport model Newtonian
Selecting turbulence model type laminar
Selecting laminar stress model Stokes

Reading g
Reading hRef
Calculating field g.h

No MRF models present
No finite volume options present

GAMG:  Solving for pcorr, Initial residual = 1, Final residual = 9.1650704e-05, No Iterations 23
time step continuity errors : sum local = 1.4094045e-08, global = 1.7497141e-10, cumulative = 1.7497141e-10
Constructing face velocity Uf

Courant Number mean: 0.068398968 max: 0.33986911

Starting time loop

Courant Number mean: 0.068398968 max: 0.33986911
Interface Courant Number mean: 0 max: 0
deltaT = 0.00011111111
Time = 0.000111111

Selected 0 cells for refinement out of 140000.
Selected 0 split points out of a possible 0.
MULES: Solving for alpha.oil
Phase-1 volume fraction = 0.99998714  Min(alpha.oil) = 0  Max(alpha.oil) = 1.0000002
MULES: Solving for alpha.oil
Phase-1 volume fraction = 0.99997428  Min(alpha.oil) = 0  Max(alpha.oil) = 1.0000003
MULES: Solving for alpha.oil
Phase-1 volume fraction = 0.99996142  Min(alpha.oil) = 0  Max(alpha.oil) = 1.0000005
GAMG:  Solving for p_rgh, Initial residual = 1, Final residual = 0.0097215824, No Iterations 12
time step continuity errors : sum local = 1.7565826e-06, global = 5.9014695e-09, cumulative = 6.0764409e-09
GAMG:  Solving for p_rgh, Initial residual = 0.00097938896, Final residual = 7.1056775e-09, No Iterations 29
time step continuity errors : sum local = 1.5112455e-09, global = 1.1240562e-12, cumulative = 6.077565e-09
[ ExecutionTime = 99.31 s  ClockTime = 802 s ]

Courant Number mean: 0.079858565 max: 0.41126431
Interface Courant Number mean: 4.8897762e-05 max: 0.17519377
deltaT = 0.00011111111
Time = 0.000222222

And the same case run in serial:

$ interFoam
/*---------------------------------------------------------------------------*\
  =========                 |
  \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox
   \\    /   O peration     | Website:  https://openfoam.org
    \\  /    A nd           | Version:  7
     \\/     M anipulation  |
\*---------------------------------------------------------------------------*/
Build  : 7
Exec   : interFoam
Date   : Dec 17 2020
Time   : 14:46:34
Host   : "cschpc.localdomain"
PID    : 6637
I/O    : uncollated
Case   : /home/hoorijani/Tests/microfluidic
nProcs : 1
sigFpe : Enabling floating point exception trapping (FOAM_SIGFPE).
fileModificationChecking : Monitoring run-time modified files using timeStampMaster (fileModificationSkew 10)
allowSystemOperations : Allowing user-supplied system call operations

// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //
Create time

Create mesh for time = 0

Selecting dynamicFvMesh dynamicRefineFvMesh

PIMPLE: No convergence criteria found
PIMPLE: Operating solver in transient mode with 1 outer corrector
PIMPLE: Operating solver in PISO mode

Reading field p_rgh
Reading field U
Reading/calculating face flux field phi
Reading transportProperties

Selecting incompressible transport model Newtonian
Selecting incompressible transport model Newtonian
Selecting turbulence model type laminar
Selecting laminar stress model Stokes

Reading g
Reading hRef
Calculating field g.h

No MRF models present
No finite volume options present

GAMG:  Solving for pcorr, Initial residual = 1, Final residual = 9.7545158e-05, No Iterations 25
time step continuity errors : sum local = 1.5000494e-08, global = 3.4034405e-09, cumulative = 3.4034405e-09
Constructing face velocity Uf

Courant Number mean: 0.06839863 max: 0.33986913

Starting time loop

Courant Number mean: 0.06839863 max: 0.33986913
Interface Courant Number mean: 0 max: 0
deltaT = 0.00011111111
Time = 0.000111111

Selected 0 cells for refinement out of 140000.
Selected 0 split points out of a possible 0.
MULES: Solving for alpha.oil
Phase-1 volume fraction = 0.99998714  Min(alpha.oil) = 0  Max(alpha.oil) = 1.0000007
MULES: Solving for alpha.oil
Phase-1 volume fraction = 0.99997428  Min(alpha.oil) = 0  Max(alpha.oil) = 1.0000014
MULES: Solving for alpha.oil
Phase-1 volume fraction = 0.99996142  Min(alpha.oil) = 0  Max(alpha.oil) = 1.000002
GAMG:  Solving for p_rgh, Initial residual = 1, Final residual = 0.0076261294, No Iterations 13
time step continuity errors : sum local = 1.3779574e-06, global = 2.579493e-07, cumulative = 2.6135274e-07
GAMG:  Solving for p_rgh, Initial residual = 0.00097614287, Final residual = 8.93966e-09, No Iterations 30
time step continuity errors : sum local = 1.9007518e-09, global = 4.2154046e-10, cumulative = 2.6177428e-07
[ ExecutionTime = 4.55 s  ClockTime = 38 s ]

As can be seen, the simulation on multiple cores took much longer than on a single core. I don't know whether this is related to MPI or something else. I tested multiple cases, and the result was the same. I'll be grateful for any help.
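Putting the two logs side by side makes the symptom concrete. A small sketch using only the timings quoted above (in OpenFOAM's log output, ExecutionTime is CPU time and ClockTime is elapsed wall time):

```shell
#!/bin/sh
# Timings from the two first-time-step logs above:
#   serial:   ExecutionTime = 4.55 s    ClockTime = 38 s
#   parallel: ExecutionTime = 99.31 s   ClockTime = 802 s
awk 'BEGIN {
    # Wall-clock speedup of the 5-rank run; > 1 would mean parallel is faster
    printf "wall-clock speedup on 5 ranks: %.3f\n", 38 / 802
    # ClockTime much larger than ExecutionTime means the process spends most
    # of its wall time waiting (I/O, swapping, or sharing cores), not computing
    printf "serial   clock/exec ratio: %.1f\n", 38 / 4.55
    printf "parallel clock/exec ratio: %.1f\n", 802 / 99.31
}'
```

The 5-rank run is roughly 21x slower in wall time, and both runs show ClockTime about 8x ExecutionTime, so even the serial run is mostly waiting rather than computing. That pattern points at a heavily loaded node or slow filesystem in addition to any MPI misconfiguration.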
Tags
centos 7, cluster computing, dependencies, mpi errors, parallel computation
Similar Threads
Thread | Thread Starter | Forum | Replies | Last Post |
Parallel Computation Troubles | climenha | SU2 | 3 | May 16, 2016 10:40 |
Explicitly filtered LES | saeedi | Main CFD Forum | 16 | October 14, 2015 12:58 |
Problem with parallel computation (case inviscid onera M6) | Combas | SU2 | 11 | January 30, 2014 02:20 |
Serial UDF is working for parallel computation also | Tanjina | Fluent UDF and Scheme Programming | 0 | December 26, 2013 19:24 |
Computation in parallel | zou_mo | OpenFOAM Running, Solving & CFD | 2 | June 29, 2005 12:08 |