
Parallel Fortran CFD development: mpi_send/mpi_recv vs mpi_bcast

March 19, 2023, 17:35   #1
implicit_some (Huzafa)
New Member
Join Date: Mar 2023
Posts: 2
I am learning how to use Open MPI in order to develop my own finite-difference CFD solver in Fortran 90.

I have read in a few tutorials and pages online that the mpi_send and mpi_recv routines need to be called in order to pass data from the "edge" cells of one mesh partition to the adjacent mesh partition.

What is stopping me from just using mpi_bcast? Couldn't I simply broadcast the entire mesh data from the root process to all processes and then carry out the same numerical procedure on every process, only changing the minimum/maximum cell indices that each process handles?

Is this generally a less efficient approach? Looking at the material online, it appears that nobody uses it, and I am wondering why. Is there a situation in which it is not possible and I have to use mpi_send/mpi_recv?
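
To make the idea concrete, this is roughly what I have in mind (just a sketch with made-up array names and sizes, not my actual code):

Code:
! Sketch of the "broadcast everything" idea
program bcast_idea
  use mpi
  implicit none
  integer, parameter :: n = 1000              ! total number of cells
  integer :: rank, nprocs, ierr, i, ilo, ihi
  real(8) :: u(n), unew(n)

  call mpi_init(ierr)
  call mpi_comm_rank(mpi_comm_world, rank, ierr)
  call mpi_comm_size(mpi_comm_world, nprocs, ierr)

  if (rank == 0) u = 1.0d0                    ! root initialises the whole field

  ! every rank receives the ENTIRE field
  call mpi_bcast(u, n, mpi_double_precision, 0, mpi_comm_world, ierr)
  unew = u

  ! each rank only updates its own range of interior cells
  ilo = max(2,   rank*n/nprocs + 1)
  ihi = min(n-1, (rank+1)*n/nprocs)
  do i = ilo, ihi
     unew(i) = 0.5d0*(u(i-1) + u(i+1))        ! some finite-difference update
  end do

  call mpi_finalize(ierr)
end program bcast_idea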

March 19, 2023, 18:10   #2
flotus1 (Alex)
Super Moderator
Join Date: Jun 2012
Location: Germany
Posts: 3,427
The most important goal of an MPI communication scheme, right after correctness, is low overhead.
In general, you want to exchange as little data as possible, between as few processes as possible. Sending and receiving data costs time. If your solver spends a lot of time on MPI communication, you will get little to no speedup when increasing the number of processes.

BCAST is bad here because it sends data to all other processes, even those that don't need it, which wastes valuable time.
Broadcasting the ENTIRE mesh data is even worse, because each process then receives mostly data it does not need at all.
Doing it this way will lead to a slowdown instead of a speedup, on top of using tons of memory on every process.
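
For comparison, here is a minimal sketch of what a send/recv-based halo exchange looks like for a 1D decomposition with one ghost cell per side (names and sizes are made up, a real solver would of course generalize this). Each rank only talks to its two direct neighbours, and only transfers a couple of cells per neighbour, no matter how large the global mesh is:

Code:
! Minimal 1D halo exchange sketch: each rank owns nloc cells plus one
! ghost cell on each side; only the ghost cells are exchanged.
program halo_exchange
  use mpi
  implicit none
  integer, parameter :: nloc = 100
  integer :: rank, nprocs, ierr, left, right, i
  real(8) :: u(0:nloc+1), unew(1:nloc)

  call mpi_init(ierr)
  call mpi_comm_rank(mpi_comm_world, rank, ierr)
  call mpi_comm_size(mpi_comm_world, nprocs, ierr)

  ! neighbours; MPI_PROC_NULL turns the exchange into a no-op at the domain ends
  left  = rank - 1; if (left  < 0)       left  = mpi_proc_null
  right = rank + 1; if (right >= nprocs) right = mpi_proc_null

  u = real(rank, 8)                            ! dummy data owned by this rank

  ! send my last interior cell to the right neighbour,
  ! receive the left neighbour's last interior cell into my left ghost cell
  call mpi_sendrecv(u(nloc),   1, mpi_double_precision, right, 0, &
                    u(0),      1, mpi_double_precision, left,  0, &
                    mpi_comm_world, mpi_status_ignore, ierr)
  ! and the same in the opposite direction
  call mpi_sendrecv(u(1),      1, mpi_double_precision, left,  1, &
                    u(nloc+1), 1, mpi_double_precision, right, 1, &
                    mpi_comm_world, mpi_status_ignore, ierr)

  ! with the ghost cells filled, the stencil can be applied to all interior cells
  do i = 1, nloc
     unew(i) = 0.5d0*(u(i-1) + u(i+1))
  end do

  call mpi_finalize(ierr)
end program halo_exchange

With this pattern the data volume exchanged per rank stays constant as you add processes, whereas a broadcast of the full mesh grows with the global problem size.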
naffrancois and implicit_some like this.

Last edited by flotus1; March 20, 2023 at 04:08.


Tags
fortran 90, mpi, parallel

