November 6, 2008, 15:23 |
Implementation of MPI
|
#1 |
Guest
Posts: n/a
|
Hi all, I am working on a 3D FVM-based CFD code. It runs on a staggered grid, and I need to parallelize it. I have some experience with PVM, but this time I have to use MPI. With PVM, we used a master code to spawn slaves, and each slave then ran the same code with different grid info. To be more clear, the slaves had the sequential code that we used for serial runs, plus some extra PVM commands for information exchange. On the whole, the slaves communicated with their neighbours as well as with the master at user-defined time steps. This is actually a classical approach.
My question is whether I can use the same master/slave strategy with MPI. I also have a Poisson solver parallelized with MPI, but I could not see any slave spawning there, or the same code being run on each slave. Anyway, I am a little confused. I appreciate any comments, references, or small examples you can give me. Thanks |
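To make the communication pattern concrete, here is the kind of exchange step I have in mind for each slave, written as a rough, untested MPI sketch for a 1D slab decomposition; the array names, the message size of 100, and the tag are just placeholders:

      program exchange_sketch
      include 'mpif.h'
      integer ierr, my_id, nprocs, left, right
      integer status(MPI_STATUS_SIZE)
      double precision plane_out(100), plane_in(100)

      call MPI_INIT(ierr)
      call MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, my_id, ierr)

c     neighbours for a 1D slab decomposition; the end slabs have
c     no neighbour on one side, so they point to MPI_PROC_NULL
      left  = my_id - 1
      right = my_id + 1
      if (my_id .eq. 0)          left  = MPI_PROC_NULL
      if (my_id .eq. nprocs - 1) right = MPI_PROC_NULL

      plane_out = dble(my_id)

c     send my boundary plane to the right neighbour and receive the
c     corresponding plane from the left neighbour in a single call
      call MPI_SENDRECV(plane_out, 100, MPI_DOUBLE_PRECISION,
     &                  right, 1,
     &                  plane_in, 100, MPI_DOUBLE_PRECISION,
     &                  left, 1, MPI_COMM_WORLD, status, ierr)

      call MPI_FINALIZE(ierr)
      end

Using MPI_PROC_NULL for the end processes makes the exchange a no-op there, so no extra IF tests are needed at the domain boundaries.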
|
November 6, 2008, 16:29 |
Re: Implementation of MPI
|
#2 |
Guest
Posts: n/a
|
In the most elementary approach to MPI, all the processes in a given communication group run the same (instruction) code with different data (grids, initial conditions), i.e. the SPMD (single program, multiple data) approach. In this approach a single executable is launched as multiple processes by the "mpirun" script that is typically included in most MPI installations:

   mpif90 -o myprog.exe myprog.f
   mpirun -np 6 myprog.exe

I believe there are provisions in MPI for launching multiple (differing) executables that then communicate with each other. However, I stayed with the elementary approach and simply used IF blocks to implement the master/slave distinction when needed. Typically, I chose processor 0 (zero) as the master. For example, the master sending messages to the slaves in a single source code:

      call MPI_INIT(...)
      call MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs)
      call MPI_COMM_RANK(MPI_COMM_WORLD, my_id)

      .... serial code ....

      if (my_id .eq. 0) then
         do iproc = 1, nprocs-1
            call MPI_ISEND(....)
         end do
      else
         do iproc = 1, nprocs-1
            call MPI_IRECV(....)
         end do
      end if

      .... more serial code ....

      call MPI_FINALIZE()
      stop
      end

I would check the online docs for the exact syntax of the MPI calls, because my memory is fuzzy on those. |
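On the "multiple (differing) executables" point: MPI-2 added MPI_COMM_SPAWN, which is probably the closest analogue to PVM's spawn, and most mpirun/mpiexec implementations also accept a colon-separated MPMD command line (something like mpirun -np 1 master.exe : -np 5 slave.exe, though the exact flags vary by implementation). A rough, untested sketch of a master spawning slaves, where the executable name and the count of 4 are just placeholders:

      program spawn_sketch
      include 'mpif.h'
      integer ierr, intercomm
      integer errcodes(4)

      call MPI_INIT(ierr)

c     launch 4 copies of a separate slave executable (MPI-2 feature);
c     'slave.exe' is just a placeholder name for this sketch
      call MPI_COMM_SPAWN('slave.exe', MPI_ARGV_NULL, 4,
     &                    MPI_INFO_NULL, 0, MPI_COMM_SELF,
     &                    intercomm, errcodes, ierr)

c     the master and the spawned slaves can now communicate over
c     the intercommunicator 'intercomm'
      call MPI_FINALIZE(ierr)
      end

The intercommunicator returned by MPI_COMM_SPAWN is what the master would then use in its sends and receives to the spawned slaves.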
|
November 6, 2008, 20:07 |
Re: Implementation of MPI
|
#3 |
Guest
Posts: n/a
|
Oops! I made a mistake. There should not have been a DO loop in the ELSE portion of the IF block. It should just be an MPI_IRECV(...) call there. Sorry for the confusion.
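Putting it together with that correction, a more complete (but still simplified and untested) version of the master/slave block might look like this; the array name work, the message size n, and the request bookkeeping are only illustrative:

      program master_slave_sketch
      include 'mpif.h'
      integer n, maxp
      parameter (n = 100, maxp = 64)
      integer ierr, my_id, nprocs, iproc
      integer req(maxp), stats(MPI_STATUS_SIZE, maxp)
      integer status(MPI_STATUS_SIZE)
      double precision work(n)

      call MPI_INIT(ierr)
      call MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, my_id, ierr)

      work = dble(my_id)

      if (my_id .eq. 0) then
c        master posts one non-blocking send per slave
c        (assumes nprocs-1 does not exceed maxp)
         do iproc = 1, nprocs - 1
            call MPI_ISEND(work, n, MPI_DOUBLE_PRECISION, iproc, 0,
     &                     MPI_COMM_WORLD, req(iproc), ierr)
         end do
c        and waits for all of them to complete
         call MPI_WAITALL(nprocs - 1, req, stats, ierr)
      else
c        each slave posts a single receive from the master (rank 0)
         call MPI_IRECV(work, n, MPI_DOUBLE_PRECISION, 0, 0,
     &                  MPI_COMM_WORLD, req(1), ierr)
         call MPI_WAIT(req(1), status, ierr)
      end if

      call MPI_FINALIZE(ierr)
      end

The MPI_WAITALL/MPI_WAIT calls are needed before the work array is reused; plain blocking MPI_SEND/MPI_RECV would also do the job here and is simpler if there is nothing useful to overlap with the communication.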
|
|
November 7, 2008, 03:14 |
Re: Implementation of MPI
|
#4 |
Guest
Posts: n/a
|
Thanks for the quick reply, Ananda. I think I understand it better now. I will practice a bit in light of your suggestions. Hopefully it will work. I'll let you know. Thanks a lot.
|
|
May 17, 2009, 12:29 |
please help me
|
#5 |
New Member
noureddine
Join Date: May 2009
Posts: 4
Rep Power: 17 |
Hello!
I have the same problem. That is, I have a code written in Fortran for a turbulent 3D incompressible flow using MPICH2 (MPI-2). Can you help me move forward with this work? Thanks |
|
May 17, 2009, 12:31 |
please help me
|
#6 |
New Member
noureddine
Join Date: May 2009
Posts: 4
Rep Power: 17 |
Hello!
I have the same problem. That is, I have a code written in Fortran for a turbulent 3D incompressible flow using MPICH2 (MPI-2). Can you help me move forward with this work? If you want, I can send you my code. Thanks |
|