June 19, 2019, 11:41 |
[OF 5.0] patches externalCoupled in parallel
#1
Senior Member
Gerry Kan
Join Date: May 2016
Posts: 373
Rep Power: 11
Howdy people:
I am seeing something pretty strange with the externalCoupled boundary condition lately, and I am not sure why. In this case, the externalCoupled BC code is slightly modified so that it does not check the lock file, as the data are already present and the solver need not wait for them. Returning the result from OpenFOAM back to the external solver is unimportant at the moment.

Here is the sequence of commands I used to generate the mesh and link the boundaries to the external data:

Code:
blockMesh
createExternalCoupledPatchGeometry T
decomposePar

The attached figures show the result:

1: Single core (normal)
2: Parallel with 16 cores (abnormal)

I am not sure what I have done incorrectly; perhaps someone has ideas on how I could solve this problem.

Thank you very much in advance, Gerry.

P.S. - I realized after the fact that this should have been in the "user" section. My apologies in advance.

Last edited by Gerry Kan; June 26, 2019 at 11:09.
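For context, the coupling exchanges plain-text files through the comms directory. If I remember the OF 5 tutorial layout correctly, the file OpenFOAM reads back is comms/coupleGroup/data.in, with one line of refValue, refGrad, and valueFraction per face (valueFraction = 1 reduces the mixed condition to a fixed value). A pre-generated file would then look roughly like this; the exact file name, patch grouping, and column layout depend on the case setup, so check the externalCoupledMixed documentation for your version:

Code:
# Patch: hot
303.0 0 1
303.0 0 1
...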
June 20, 2019, 12:57 |
Minimal reproducible example
#2
Senior Member
Gerry Kan
Join Date: May 2016
Posts: 373
Rep Power: 11
Folks:
I tinkered with this issue today and managed to reproduce it with a small example (at least on my cluster) based on the externalCoupledCavity tutorial. I replaced the external solver (externalSolver) to introduce spatial and temporal variations of the temperature on the hot and cold surfaces. Everything else remains unchanged. Again, in the serial run (1 core) the temperature is mapped correctly, as expected. The problem appears when it is run in parallel, where the patch data are all mixed up.

The curious can pick up the test case from this link: 20190620-externalCoupledCavity.tar.gz?dl=0

To reproduce: the Allrun, Allrun-parallel, and Allclean scripts automate the whole process. The corresponding solver output can be viewed in ParaView after running reconstructPar (for the parallel cases) and foamToVTK. (For those who know how to do this, sorry for the reiteration.)

The following three figures show the spatial temperature distribution on the "hot" surface after iteration 1 (note also the difference in the T distribution between the two parallel runs):

1) Serial run (normal)
2) Parallel run with 4 cores
3) Parallel run with 16 cores

The range of the temperatures suggests that the mix-up is localized within the hot boundary. This is known because I configured the boundary temperatures such that the maximum T of the cold wall is lower than the minimum T of the hot wall.

At this point I believe this is a bug in OpenFOAM, unless, of course, the externalCoupled boundary condition is meant to be run only in serial, though I doubt this was the intent. However, while browsing through the externalCoupled-related issues, I did not see any pertaining to parallelization in either OF 5 or OF 6, so I assume this has not been resolved or addressed.

Thanks again, Gerry.

Last edited by Gerry Kan; June 21, 2019 at 05:46.
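For the record, here is a rough Python 3 sketch of the kind of replacement solver I mean. The file names (comms/coupleGroup/data.out and data.in), the "# Patch:" section headers, and the three-column refValue refGrad valueFraction format follow my reading of the OF 5 tutorial; the sinusoidal profile, the amplitudes, and the use of the face index in place of real face centres are simplifications for illustration, not the exact script from the archive:

Code:
#!/usr/bin/env python3
# Toy external solver for the externalCoupled BC (illustration only).
# Assumes the externalCoupledCavity layout: OpenFOAM writes
# comms/coupleGroup/data.out; the external side answers with
# comms/coupleGroup/data.in, one "refValue refGrad valueFraction"
# line per face. Lock-file handshaking is omitted for brevity.
import math
import re
import sys

DATA = "comms/coupleGroup/data"
BASE_T = {"hot": 310.0, "cold": 290.0}  # mean patch temperatures [K]
AMPLITUDE = 2.0                         # variation amplitude [K]

def read_face_counts(path):
    """Return {patchName: nFaces} parsed from the .out file,
    assuming one data line per face under each '# Patch:' header."""
    counts = {}
    patch = None
    with open(path) as f:
        for line in f:
            m = re.match(r"#\s*Patch:\s*(\S+)", line)
            if m:
                patch = m.group(1)
                counts[patch] = 0
            elif patch and line.strip() and not line.startswith("#"):
                counts[patch] += 1
    return counts

def write_data_in(counts, t):
    """Write a fixed-value (valueFraction = 1) temperature per face.
    The face index stands in for a coordinate here; a real solver
    would use the face centres from createExternalCoupledPatchGeometry."""
    with open(DATA + ".in", "w") as f:
        for patch, n in counts.items():
            f.write("# Patch: %s\n" % patch)
            for i in range(n):
                T = (BASE_T.get(patch, 300.0)
                     + AMPLITUDE
                     * math.sin(2.0 * math.pi * i / max(n, 1))
                     * math.cos(0.1 * t))
                f.write("%.4f 0 1\n" % T)

if __name__ == "__main__":
    t = float(sys.argv[1]) if len(sys.argv) > 1 else 0.0
    write_data_in(read_face_counts(DATA + ".out"), t)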
June 27, 2019, 03:58 |
Problem solved
#3
Senior Member
Gerry Kan
Join Date: May 2016
Posts: 373
Rep Power: 11
Folks:
Problem solved! The externalCoupledCavity tutorial and documentation kind of imply that you always run createExternalCoupledPatchGeometry first, and that decomposePar then picks up the coupled patches. However, you need to run createExternalCoupledPatchGeometry in parallel after decomposePar, both on the same number of processors. Then the external boundary data are mapped correctly.

Here is a modified version of the externalCoupledCavity tutorial to reflect these changes. Note that I have rewritten the externalSolver in Python 3 to allow for both temporal and spatial boundary temperature variations.

Hope that helps, Gerry.
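In other words, the working sequence for a parallel run is as sketched below (shown with mpirun and a placeholder <solver> for whatever solver your case uses; the tutorial's Allrun-parallel script uses the runParallel helper instead):

Code:
blockMesh
decomposePar
mpirun -np 16 createExternalCoupledPatchGeometry T -parallel
mpirun -np 16 <solver> -parallel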
Tags |
externalcoupled, parallel, patches |