Sending a large buffer between processes (with Pstream?)
January 11, 2016, 13:50
Sending a large buffer between processes (with Pstream?)
#1
Member
Join Date: Aug 2015
Posts: 37
Rep Power: 11
I'm implementing an algorithm which requires that I trade large amounts of data between neighbouring processes at each time step - something on the order of thousands of doubles. My understanding is that the ideal way of doing this is with one large transfer, as this should be much faster than many small ones. In MPI, I would build up a buffer on the sending process and then transfer the data all at once. In OpenFOAM, most things seem to be done using Pstream, but I haven't found documentation explaining the extent to which Pstream is buffered.
For my interest, I'm wondering how buffered Pstream actually is, and what the recommended way is to send a large chunk of data in a single transfer.
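To illustrate the "one large transfer" idea I have in mind: in raw MPI I would serialize everything into one contiguous buffer and hand that to a single send call. A minimal plain-C++ sketch of just the packing step (no MPI calls here, and the helper names are my own invention):

```cpp
#include <cstring>
#include <vector>

// Hypothetical helper: flatten a set of doubles into one contiguous byte
// buffer, i.e. what the sender would pass to a single MPI_Send.
std::vector<char> packDoubles(const std::vector<double>& values)
{
    std::vector<char> buf(values.size()*sizeof(double));
    std::memcpy(buf.data(), values.data(), buf.size());
    return buf;
}

// Hypothetical helper for the receiving side: recover the doubles from the
// raw bytes of one large message.
std::vector<double> unpackDoubles(const std::vector<char>& buf)
{
    std::vector<double> values(buf.size()/sizeof(double));
    std::memcpy(values.data(), buf.data(), buf.size());
    return values;
}
```

The point is that one message of `n*sizeof(double)` bytes avoids the per-message latency of `n` small sends.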
January 11, 2016, 14:34
#2
Senior Member
Matvey Kraposhin
Join Date: Mar 2009
Location: Moscow, Russian Federation
Posts: 355
Rep Power: 21
You must use the functions IPstream::read and OPstream::write. For example, to read data on the master from slave jSlave:

Code:
scalarList arrayToRead(10);
label nPoints = arrayToRead.size();

IPstream::read
(
    Pstream::scheduled,
    jSlave,
    reinterpret_cast<char*>(&arrayToRead[0]),
    nPoints*sizeof(scalar),
    UPstream::msgType(),
    UPstream::worldComm
);

and to write data from a slave to the master:

Code:
scalarList arrayToWrite(10);
label nPoints = arrayToWrite.size();

OPstream::write
(
    Pstream::scheduled,
    Pstream::masterNo(),
    reinterpret_cast<char*>(&arrayToWrite[0]),
    nPoints*sizeof(scalar),
    UPstream::msgType(),
    UPstream::worldComm
);
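Putting the two calls together, a paired usage might look roughly like the following. This is only a sketch based on the calls above (it compiles only inside an OpenFOAM application, and the byte count passed to read and write must match on both ends):

```cpp
// Sketch: every slave sends its list to the master in one message.
scalarList data(10);

if (Pstream::master())
{
    // Slaves are ranks 1..nProcs-1 when the master is rank 0
    for (label jSlave = 1; jSlave < Pstream::nProcs(); jSlave++)
    {
        IPstream::read
        (
            Pstream::scheduled,
            jSlave,
            reinterpret_cast<char*>(&data[0]),
            data.size()*sizeof(scalar),
            UPstream::msgType(),
            UPstream::worldComm
        );
        // ... use the data received from jSlave ...
    }
}
else
{
    OPstream::write
    (
        Pstream::scheduled,
        Pstream::masterNo(),
        reinterpret_cast<char*>(&data[0]),
        data.size()*sizeof(scalar),
        UPstream::msgType(),
        UPstream::worldComm
    );
}
```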
__________________
MDPI Fluids (Q2) special issue for OSS software: https://www.mdpi.com/journal/fluids/..._modelling_OSS
GitHub: https://github.com/unicfdlab
Linkedin: https://linkedin.com/in/matvey-kraposhin-413869163
RG: https://www.researchgate.net/profile/Matvey_Kraposhin
Last edited by mkraposhin; January 11, 2016 at 16:35.
January 11, 2016, 15:52
#3
Member
Join Date: Aug 2015
Posts: 37
Rep Power: 11
Thanks for the prompt reply, mkraposhin!
This approach looks good to me. It actually seems better than what I was originally suggesting, because it eliminates the unnecessary step of copying the data from my external buffer into the buffer of a Pstream object before transferring. I do have a couple of questions about your solution.
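For reference, the copy I was worried about is what (if I understand the implementation correctly) the stream-operator style does under the hood: the list is serialized into the stream's internal buffer first and only sent afterwards. A sketch of that style, again only meaningful inside an OpenFOAM application:

```cpp
// Stream-operator style: arrayToWrite is copied into the OPstream's
// internal buffer, and the message is sent when the stream goes out
// of scope. Convenient, but it adds one extra copy of the data.
{
    OPstream toMaster(Pstream::scheduled, Pstream::masterNo());
    toMaster << arrayToWrite;
}
```

The direct OPstream::write call you showed transmits straight from the list's own storage instead.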
January 11, 2016, 16:35
#4
Senior Member
Matvey Kraposhin
Join Date: Mar 2009
Location: Moscow, Russian Federation
Posts: 355
Rep Power: 21
I'll ask him for an explanation tomorrow. If you find the answer before I do, could you be so kind as to enlighten me?
January 11, 2016, 20:55
#5
Member
Join Date: Aug 2015
Posts: 37
Rep Power: 11
Thanks!
Do you happen to know whether there is a readWrite() function for the case where I want to fully trade the data (master sends to the slave and the slave sends to the master in one call)?
Tags
pstream