
[snappyHexMesh] SnappyHexMesh in parallel openmpi


October 14, 2008, 08:18
SnappyHexMesh in parallel openmpi
  #1
Member
 
Niklas Wikstrom
Join Date: Mar 2009
Posts: 86
Lately I have run into the following problem several times. It is repeatable with the same case on two different hardware architectures and with both the icc and gcc compilers:

During a shell refinement iteration (>1) an MPI error occurs:

[dagobah:01576] *** An error occurred in MPI_Bsend
[dagobah:01576] *** on communicator MPI_COMM_WORLD
[dagobah:01576] *** MPI_ERR_BUFFER: invalid buffer pointer
[dagobah:01576] *** MPI_ERRORS_ARE_FATAL (goodbye)


Here is the complete case
snappyHexMesh-coarse.tgz

To run:

blockMesh
decomposePar
foamJob -p -s snappyHexMesh


I do not know if this should be regarded as a bug, or if it's only me...

Cheers
Niklas

October 14, 2008, 08:24
It's just you
  #2
Super Moderator
 
Niklas Nordin
Join Date: Mar 2009
Location: Stockholm, Sweden
Posts: 693
It's just you

OK I also get that error.

Niklas
(maybe it's a username issue)

October 16, 2008, 04:05
Have you tried 1.5.x?
  #3
Senior Member
 
Mattijs Janssens
Join Date: Mar 2009
Posts: 1,419
Have you tried 1.5.x? If it does not work in that one, please report it as a bug.

October 17, 2008, 05:35
I am running a recent pull of 1.5.x
  #4
Member
 
Niklas Wikstrom
Join Date: Mar 2009
Posts: 86
I am running a recent pull of 1.5.x. Reporting the bug!

Thanks for testing and for the great suggestions, Niklas! I actually changed my IRL name to Bob the Builder and now everything works fine! :-)

Thanks Niklas and Mattijs

October 21, 2008, 18:38
Hi, I also face this error
  #5
Member
 
mohd mojab
Join Date: Mar 2009
Posts: 31
Hi

I also face this error in a snappyHexMesh parallel run:

*** An error occurred in MPI_Bsend
*** on communicator MPI_COMM_WORLD
*** MPI_ERR_BUFFER: invalid buffer pointer
*** MPI_ERRORS_ARE_FATAL (goodbye)

Would you tell me how and where I can change my name according to what Niklas said?

Thank you
mou

November 18, 2008, 12:31
Hi, I have the same problem
  #6
New Member
 
Attila Schwarczkopf
Join Date: Mar 2009
Location: Edinburgh / London / Budapest
Posts: 12
Hi,

I have the same problem that you described above, in connection with parallel meshing (blockMesh -> decomposePar -> snappyHexMesh, in package version 1.5).

My geometry was built up from several STL files, let's say 20. If I use only 19 parts, everything works fine and I have no problem. But when I use all 20, I get that [MPI_ERRORS_ARE_FATAL...] message and snappyHexMesh crashes again and again.
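
For reference, the surfaces go into the geometry section of my snappyHexMeshDict roughly as sketched below (the file and surface names here are just placeholders, not my actual parts):

geometry
{
    part01.stl
    {
        type triSurfaceMesh;
        name part01;
    }
    part02.stl
    {
        type triSurfaceMesh;
        name part02;
    }
    // ... and so on, up to the 20th STL
}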

I tried dividing the task between the processors in many different ways, with different memory settings, etc. I checked the user names, as you suggested, and ran the meshing process as different users and as root. Sorry, nothing has helped. I double-checked the STL files, too, and tried different combinations; the result is always the same: whenever all of them are included, the meshing crashes.

Do you have any good ideas? Is it a bug, or is the problem in MPI itself?


Thanks in advance,
Schwarczi

November 18, 2008, 15:39
Make sure your MPI_BUFFER_SIZE is plenty big
  #7
Senior Member
 
Mattijs Janssens
Join Date: Mar 2009
Posts: 1,419
Make sure your MPI_BUFFER_SIZE is plenty big, 200000000 or larger. Also check on your nodes that you are not running out of memory altogether.
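
For example, something along these lines before launching the parallel run (a minimal sketch; in a standard installation OpenFOAM picks MPI_BUFFER_SIZE up from the environment, typically set in etc/settings.sh, but where it is set may differ on your system):

export MPI_BUFFER_SIZE=200000000

blockMesh
decomposePar
foamJob -p -s snappyHexMesh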

November 24, 2008, 10:52
Mattijs, thank you very much
  #8
New Member
 
Attila Schwarczkopf
Join Date: Mar 2009
Location: Edinburgh / London / Budapest
Posts: 12
Mattijs,

Thank you very much, your advice was absolutely useful. Increasing MPI_BUFFER_SIZE solved the [MPI_ERRORS_ARE_FATAL...] problem described in my last post.

Thanks,
Sch.
