
MPI send and receive of non primitive elements


August 25, 2009, 15:35   #1
cosimo bianchini (Member)

I have a question regarding the use of the MPI_Send and MPI_Recv family of functions.
I need to share a list of non-primitive elements between processors (let's say a List<face>, to fix ideas).
What I usually use to pass data between processors is a structure of this kind, with otherGlobalFaces being a List or a Field:

Field<Type> otherGlobalFaces;

MPI_Recv
(
    reinterpret_cast<char*>(otherGlobalFaces.begin()),
    otherGlobalFaces.byteSize(),
    MPI_PACKED,
    Pstream::procID(procI),
    tag,
    MPI_COMM_WORLD,
    &status
);
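
For reference, the matching send side of this pattern would look roughly as follows (a sketch only, not taken from the post; procI and tag denote the destination processor and message tag, and myField stands for the local primitive Field being sent):

// sketch (untested): send side for a primitive Field<Type>; the field data
// lies contiguously in memory, so the raw buffer can be shipped directly
MPI_Send
(
    reinterpret_cast<char*>(myField.begin()),
    myField.byteSize(),
    MPI_PACKED,
    Pstream::procID(procI),   // destination processor
    tag,                      // same tag as the matching MPI_Recv
    MPI_COMM_WORLD
);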


This structure works fine when otherGlobalFaces is a Field of scalars, vectors, tensors or any other primitive type.
If, however, I declare otherGlobalFaces as a List<face>, I am not able to pass the data from one processor to the other, even though I replace the buffer-size entry with the proper value (equal to the buffer size of the corresponding send).
Do you have any idea on how to solve this?

As a workaround, since my faces belong to a cuttingPlane and are all triangles, I cast the faces into vectors and the algorithm does work, but I am not really satisfied with this option: I am not sure how the cuttingPlane faces are constructed once the cut points are found. Can you please confirm that the faces of a cutting plane are always triangles and, if not, under which assumptions this is true?
Thanks a lot for helping,
Cosimo
__________________
Cosimo Bianchini

Ergon Research s.r.l.
Via Panciatichi, 92
50127 Florence - ITALY
Tel: +39 055 0763716
Mob: +39 320 9460153
e-mail: cosimo.bianchini@ergonresearch.it
URL: www.ergonresearch.it

August 25, 2009, 16:58   #2
Bernhard Gschaider (Assistant Moderator)

Quote:
Originally Posted by cosimobianchini
I have a question regarding the use of the MPI_Send and MPI_Recv family of functions.
Any particular reason why you're not using the OPstream/IPstream classes for transmitting the data?

Bernhard

August 26, 2009, 05:08   #3
cosimo bianchini (Member)

In this particular case, no, but I do not think that would solve the problem, since apparently the Pstream read and write do just the same things I'm doing. Do you agree?
The direct use of MPI functions comes from the need to specify a msgType, or better a message tag, so as to allow multiple communications between the same processors without mixing up the data.
This piece of code also dates back to 1.3 where, if I'm not mistaken, it was not possible to easily select the communication type for each send/receive operation (I might be wrong; I do not have the source with me, so I'm going from memory).
Thanks a lot,
Cosimo

August 26, 2009, 12:55   #4
Sandeep Menon (Senior Member)

I've had trouble with this too. It stems from the fact that faceLists are compound data types which include header information (like the size of the entire faceList and the size of each face, in addition to the point labels), so the sizes of these elements need to be included as well. However, it isn't entirely clear how they should be sent, and I've run into segFaults all the time. I've resorted to packing them into regular arrays to get the job done, but, as you said, it isn't a very elegant solution.
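
As an illustration of this packing approach, here is a minimal sketch (not code from the post; the helper name packFaces is made up): the faceList is flattened into a single labelList that can be sent as one contiguous buffer.

// sketch (untested): flatten a faceList into a labelList of the form
// [nFaces, size_0, labels_0..., size_1, labels_1..., ...]
labelList packFaces(const faceList& faces)
{
    label n = 1;                       // one slot for the face count
    forAll(faces, facei)
    {
        n += 1 + faces[facei].size();  // size entry plus the point labels
    }

    labelList buf(n);
    label i = 0;
    buf[i++] = faces.size();
    forAll(faces, facei)
    {
        buf[i++] = faces[facei].size();
        forAll(faces[facei], pointi)
        {
            buf[i++] = faces[facei][pointi];
        }
    }
    return buf;
}
// unpacking on the receiving side reverses the walk: read the face count,
// then for each face read its size followed by that many point labels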
__________________
Sandeep Menon
University of Massachusetts Amherst
https://github.com/smenon

August 26, 2009, 16:25   #5
Bernhard Gschaider (Assistant Moderator)

Quote:
Originally Posted by cosimobianchini
In this particular case, no, but I do not think that would solve the problem, since apparently the Pstream read and write do just the same things I'm doing. Do you agree?
The direct use of MPI functions comes from the need to specify a msgType, or better a message tag, so as to allow multiple communications between the same processors without mixing up the data.
This piece of code also dates back to 1.3 where, if I'm not mistaken, it was not possible to easily select the communication type for each send/receive operation (I might be wrong; I do not have the source with me, so I'm going from memory).
Thanks a lot,
Cosimo
I'm not 100% sure whether a List<face> lies sequentially in memory, and if that is not the case the low-level MPI calls won't work.
(Think about it: the size of a face is not defined a priori, so if you replace a face of size 3 at position N with a face of size 4, all the following faces would have to be shifted in memory.)

What I meant with the streams was writing the Field into the stream and reading it from another Pstream on the target processor. Between the processors it would be an ASCII stream, which means some overhead (printing, parsing), but I'd first try to get the algorithm right and then worry about optimization.
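
A minimal sketch of this suggestion (untested), assuming the OpenFOAM-1.x OPstream/IPstream constructors with blocking communication; localFaces and otherGlobalFaces are placeholder names:

// sending side: serialise the faceList through the parallel stream;
// the buffer is flushed when toProc goes out of scope
{
    OPstream toProc(Pstream::blocking, procI);
    toProc << localFaces;
}

// receiving side: read the faceList back from the stream
{
    IPstream fromProc(Pstream::blocking, procI);
    faceList otherGlobalFaces(fromProc);
}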

August 27, 2009, 18:44   #6
Mattijs Janssens (Senior Member)

Hi Cosimo,

you could try packing using OstringStream (note: haven't tried this). Something like:

OstringStream os;
os <<otherGlobalFaces;
string contents(os.str());
MPI_Send( contents.begin(), contents.size());
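
The post does not show the receiving side; one possible sketch (untested), assuming the characters are sent as MPI_CHAR with a matching tag, is to probe for the message length, receive into a character buffer, wrap it in an IStringStream and read the faceList back:

// sketch: find out how many characters are incoming
MPI_Status status;
int nChars;
MPI_Probe(Pstream::procID(procI), tag, MPI_COMM_WORLD, &status);
MPI_Get_count(&status, MPI_CHAR, &nChars);

// receive the raw characters
List<char> buf(nChars);
MPI_Recv
(
    buf.begin(),
    nChars,
    MPI_CHAR,
    Pstream::procID(procI),
    tag,
    MPI_COMM_WORLD,
    &status
);

// rebuild the faceList from the serialised representation
IStringStream is(string(buf.begin(), nChars));
faceList otherGlobalFaces(is);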

August 28, 2009, 12:56   #7
cosimo bianchini (Member)

Thanks Mattijs, I tried your piece of code (there was a typo; for further reference, here is the corrected version):
OStringStream os;
os << otherGlobalFaces;
string contents(os.str());
MPI_Send(contents.begin(), contents.size());

but it is not clear to me how you then receive it into an IStringStream and, especially, how you cast it back to a faceList.


Thank you Bernhard for the hint. I also believe the problem is related to the receiving field, where the size of each face is in fact unknown, so even when assigning the correct buffer size it is not possible to tell which piece of the stream belongs to which face. Anyhow, I was not able to receive the field through a Pstream and cast it back to a faceList either.

Anyhow, I solved the problem, at the cost of a little extra communication, using gatherList and scatterList, which seem to work with non-primitive lists too (I looked in the code searching for the trick, but I was not able to find it).
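
For reference, a minimal sketch (not from the post) of that approach applied to the original faceList problem, with localFaces as a placeholder for the faces held by this processor:

// sketch (untested): gather every processor's faces onto all processors;
// gatherList/scatterList serialise each entry through the parallel streams,
// so variable-sized faces are handled transparently
List<faceList> allFaces(Pstream::nProcs());
allFaces[Pstream::myProcNo()] = localFaces;

Pstream::gatherList(allFaces);   // master collects every processor's list
Pstream::scatterList(allFaces);  // and redistributes it to all processors

// allFaces[procI] now holds the faces contributed by processor procI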

Another doubt arises for the same utility, still related to parallel communication: which of the two methods below, which look equivalent to me, is actually more efficient?

List<vector> globalPoints(globalPointSize, pTraits<vector>::zero);
forAll(cut_.points(), pointi)
{
    globalPoints[processorStartPointPosition_ + pointi] = cut_.points()[pointi];
}
reduce(globalPoints, sumOp<Field<vector> >());

or

List<List<point> > globalPointsList(Pstream::nProcs());
globalPointsList[Pstream::myProcNo()] = localPoints;

Pstream::gatherList(globalPointsList);
Pstream::scatterList(globalPointsList);

List<point> globalPoints = ListListOps::combine<List<point> >
(
    globalPointsList,
    accessOp<List<point> >()
);

It seems to me that this second method implies less inter-processor communication, but I'm not so sure about the combine method.

Thanks a lot again,
Cosimo
