December 15, 2018, 03:57 |
MPI & shared memory
#1 |
Senior Member
Join Date: Sep 2015
Location: Singapore
Posts: 102
Rep Power: 11 |
Dear Foamers,
I would like to know if the following is possible. Say that I am running a case in parallel and all the cores are on the same node: is it possible to declare shared memory on the heap that is visible to all the cores? Specifically, if each processor creates a field as shown below, can that field instead live in memory that all the cores on the node can access? Code:
scalarField* fieldPtr(new scalarField(n));
Has anyone implemented something like this before? If so, how would one go about doing it? USV
December 30, 2018, 06:30 |
#2 |
Senior Member
Mark Olesen
Join Date: Mar 2009
Location: https://olesenm.github.io/
Posts: 1,715
Rep Power: 40 |
Currently there is no DMA or RDMA wrapping in OpenFOAM. You will have to create your own MPI communicators, access windows, etc.
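A bare-bones sketch of that route (plain MPI-3 shared-memory windows, nothing OpenFOAM-specific; the field size and names below are only illustrative): Code:
#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv)
{
    MPI_Init(&argc, &argv);

    // Communicator containing only the ranks that share this node's memory
    MPI_Comm nodeComm;
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0, MPI_INFO_NULL, &nodeComm);

    int nodeRank;
    MPI_Comm_rank(nodeComm, &nodeRank);

    // Node-rank 0 allocates the shared block, the others allocate zero bytes
    const MPI_Aint n = 1000;
    MPI_Aint localBytes = (nodeRank == 0 ? n*sizeof(double) : 0);
    double* field = nullptr;
    MPI_Win win;
    MPI_Win_allocate_shared(localBytes, sizeof(double), MPI_INFO_NULL, nodeComm, &field, &win);

    // The other ranks query the base address of rank 0's segment
    if (nodeRank != 0)
    {
        MPI_Aint sz;
        int dispUnit;
        MPI_Win_shared_query(win, 0, &sz, &dispUnit, &field);
    }

    // Crude synchronisation around the shared access
    MPI_Win_fence(0, win);
    if (nodeRank == 0)
    {
        for (MPI_Aint i = 0; i < n; ++i)
        {
            field[i] = double(i);
        }
    }
    MPI_Win_fence(0, win);

    printf("node rank %d sees field[10] = %g\n", nodeRank, field[10]);

    MPI_Win_free(&win);
    MPI_Comm_free(&nodeComm);
    MPI_Finalize();
    return 0;
}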
December 31, 2018, 00:47 |
MPI/OpenMP Hybrid Programming in OpenFOAM
#3 |
Senior Member
Join Date: Sep 2015
Location: Singapore
Posts: 102
Rep Power: 11 |
Thank you, Mark.
After a little scouring of the Internet, I came to the same conclusion. However, there is a simple but limited alternative, which is to use OpenMP. Since I wrote my own schemes and solver, I was able to incorporate quite a bit of OpenMP parallelism into the code. For those trying to use existing solvers/schemes, unfortunately, this won't help much unless you rewrite the schemes with OpenMP pragmas. To compile with OpenMP, add the '-fopenmp' flag in the file '$WM_PROJECT_DIR/wmake/rules/linux64Gcc/c++Opt', so that it reads: Code:
$ cat $WM_PROJECT_DIR/wmake/rules/linux64Gcc/c++Opt
c++DBUG =
c++OPT  = -O2 -fopenmp
In your solver/scheme, you may need to include "omp.h" for the pragmas to work. After this, you're pretty much set. You can parallelize loops as follows: Code:
#pragma omp parallel for
forAll(neighbour, celli)
{
    ...
}
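If the loop accumulates into a single value, it also needs a reduction clause or the threads will race. A standalone sketch of that (plain C++ containers instead of OpenFOAM fields, purely illustrative): Code:
#include <omp.h>
#include <vector>
#include <cstdio>

int main()
{
    const int nCells = 1000000;
    std::vector<double> V(nCells, 2.0), phi(nCells, 1.0);

    double total = 0.0;

    // Without reduction(+:total) every thread would write to 'total' concurrently
    #pragma omp parallel for reduction(+:total)
    for (int celli = 0; celli < nCells; ++celli)
    {
        total += V[celli]*phi[celli];
    }

    printf("total = %g using up to %d threads\n", total, omp_get_max_threads());
    return 0;
}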
The hybrid job is then launched by setting the thread count and mapping the MPI ranks accordingly, e.g.: Code:
export OMP_NUM_THREADS=6
mpirun -np 8 --map-by ppr:1:numa:pe=6 solver -parallel
A word of caution though: this may not run any faster (in fact, it ran much slower in many cases) unless a significant portion of the code (i.e. the heavy-duty loops) is parallelized and the OpenMP overhead is kept small. Usually, the benefits start showing at higher core counts, when MPI traffic starts to dominate. In other cases, I think the built-in MPI alone is more efficient. Lastly, I am no expert in these areas, just an amateur, so there could be things I am missing and better ways to go about it. Feel free to correct my mistakes and suggest better approaches... Cheers, USV
December 31, 2018, 05:06 |
#4 |
Senior Member
Mark Olesen
Join Date: Mar 2009
Location: https://olesenm.github.io/
Posts: 1,715
Rep Power: 40 |
Quote:
Take a look at the cfmesh integration for examples of using these defines, as well as various openmp directives. Note that it is also good practice (I think) to guard your openmp pragmas with ifdef/endif so that you can rapidly enable/disable these. Sometimes debugging mpi + openmp can be rather "challenging".
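For illustration, a guarded pragma could look something like this (just a sketch; USE_OMP here is whatever define you choose to pass via your Make/options): Code:
// Sketch of the ifdef/endif guard pattern; everything besides USE_OMP is illustrative
#ifdef USE_OMP
#include <omp.h>
#endif

#include <vector>
#include <cstdio>

int main()
{
    std::vector<double> field(1000, 0.0);

    #ifdef USE_OMP
    #pragma omp parallel for
    #endif
    for (int i = 0; i < static_cast<int>(field.size()); ++i)
    {
        field[i] = 2.0*i;
    }

#ifdef USE_OMP
    printf("built with OpenMP, max threads = %d\n", omp_get_max_threads());
#else
    printf("built without OpenMP\n");
#endif

    return 0;
}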
December 31, 2018, 05:15 |
#5 |
Senior Member
Mark Olesen
Join Date: Mar 2009
Location: https://olesenm.github.io/
Posts: 1,715
Rep Power: 40 |
Quote:
https://www.ixpug.org/images/docs/IX...g-OpenFOAM.pdf |
December 31, 2018, 20:02 |
#6 |
Senior Member
Yan Zhang
Join Date: May 2014
Posts: 120
Rep Power: 12 |
Hi
I'm also interested in this issue. I want to ask whether it is possible to create a shared class whose member variables take up a lot of memory. PS: For OpenMP in OpenFOAM, I've found a GitHub repository.
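Something like the following is what I have in mind (just a sketch with made-up names, wrapping the MPI-3 shared-window calls mentioned above in a class and assuming all ranks are on one node): Code:
#include <mpi.h>
#include <cstdio>

// Hypothetical wrapper: the heavy storage lives in a node-shared MPI window,
// so every rank on the node sees the same data instead of holding its own copy
class SharedScalarField
{
    MPI_Comm nodeComm_;
    MPI_Win win_;
    double* data_;

public:
    explicit SharedScalarField(MPI_Aint n)
    {
        MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0, MPI_INFO_NULL, &nodeComm_);

        int nodeRank;
        MPI_Comm_rank(nodeComm_, &nodeRank);

        // Only node-rank 0 provides the storage; the others map it
        MPI_Aint localBytes = (nodeRank == 0 ? n*sizeof(double) : 0);
        MPI_Win_allocate_shared(localBytes, sizeof(double), MPI_INFO_NULL, nodeComm_, &data_, &win_);

        if (nodeRank != 0)
        {
            MPI_Aint sz;
            int dispUnit;
            MPI_Win_shared_query(win_, 0, &sz, &dispUnit, &data_);
        }
    }

    ~SharedScalarField()
    {
        MPI_Win_free(&win_);
        MPI_Comm_free(&nodeComm_);
    }

    double& operator[](MPI_Aint i) { return data_[i]; }

    void sync() { MPI_Win_fence(0, win_); }  // crude barrier-style synchronisation
};

int main(int argc, char** argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    {
        SharedScalarField field(1000);

        field.sync();
        if (rank == 0) field[42] = 3.14;                  // written by one rank ...
        field.sync();
        printf("rank %d reads %g\n", rank, field[42]);    // ... seen by all ranks on the node
    }

    MPI_Finalize();
    return 0;
}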
__________________
https://openfoam.top |
January 2, 2019, 09:00 |
#7 |
Senior Member
Join Date: Sep 2015
Location: Singapore
Posts: 102
Rep Power: 11 |
Hello Mark,
Quote:
By the way, when the code is not compiled with OpenMP support (i.e. without -fopenmp), the relevant pragmas are simply ignored by the compiler. This happens with both GCC and ICC; I don't use Clang, though. So, I guess there is no need for guards. Quote:
USV
January 2, 2019, 10:05 |
#8 |
Senior Member
Mark Olesen
Join Date: Mar 2009
Location: https://olesenm.github.io/
Posts: 1,715
Rep Power: 40 |
Quote:
The simplest example is applications/test/openmp/Make/options (in 1712 and later). If you check the corresponding source file (Test-openmp.C) you'll perhaps see what I mean about the guards. As a minimum, you need a guard around the include <omp.h> statement. After that you can decide to use any of the following approaches:
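For example, one way of wiring it up in an application's Make/options is shown below (a sketch only; the -I/-l entries stand in for whatever your solver already lists, and newer versions may provide dedicated wmake variables for this): Code:
# Comment these two OpenMP flags out (and recompile) to disable it again
EXE_INC = \
    -fopenmp -DUSE_OMP \
    -I$(LIB_SRC)/finiteVolume/lnInclude

EXE_LIBS = \
    -fopenmp \
    -lfiniteVolume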
The only reason I suggest the USE_OMP guard is to let you explicitly disable openmp for benchmarking and debugging as required by changing the Make/options entry. If you don't need this for benchmarking, debugging etc, no worries. Quote:
Tags |
mpi, shared memory |