[mesh manipulation] redistributePar does not interpolate properly on the processor boundaries
|
May 28, 2013, 17:05 |
redistributePar does not interpolate properly on the processor boundaries
|
#1 |
New Member
Matteo Cerminara
Join Date: Feb 2012
Posts: 15
Rep Power: 14 |
Hello,
here are my initial and boundary conditions for the pressure: Code:
/*--------------------------------*- C++ -*----------------------------------*\
| =========                 |                                                 |
| \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox           |
|  \\    /   O peration     | Version:  2.0.0                                 |
|   \\  /    A nd           | Web:      www.OpenFOAM.com                      |
|    \\/     M anipulation  |                                                 |
\*---------------------------------------------------------------------------*/
FoamFile
{
    version     2.0;
    format      ascii;
    class       volScalarField;
    location    "0";
    object      p;
}
// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //

dimensions      [1 -1 -2 0 0 0 0];

internalField   uniform 101325;

boundaryField
{
    inlet
    {
        type            fixedValue;
        value           uniform 101325;
    }
    wall
    {
        type            fixedValue;
        value           uniform 101325;
    }
    vertical
    {
        type            totalPressure;
        U               U;
        p0              uniform 101325;
        rho             rho;
        psi             none;
        gamma           1.4;
        value           uniform 101325;
    }
    top
    {
        type            totalPressure;
        U               U;
        p0              uniform 101325;
        rho             rho;
        psi             none;
        gamma           1.4;
        value           uniform 101325;
    }
}

// ************************************************************************* //
Code:
/*--------------------------------*- C++ -*----------------------------------*\
| =========                 |                                                 |
| \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox           |
|  \\    /   O peration     | Version:  2.0.1                                 |
|   \\  /    A nd           | Web:      www.OpenFOAM.com                      |
|    \\/     M anipulation  |                                                 |
\*---------------------------------------------------------------------------*/
FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    object      decomposeParDict;
}
// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //

numberOfSubdomains 2;

method          simple;
//method          scotch;

simpleCoeffs
{
    n               (1 1 2);
    delta           0.001;
}

hierarchicalCoeffs
{
    n               (3 2 1);
    delta           0.001;
    order           xyz;
}

manualCoeffs
{
    dataFile        "cellDecomposition";
}

// ************************************************************************* //
Code:
let "Nm1 = N - 1"
for i in $( seq 0 1 $Nm1 )
do
    mkdir processor$i
done
mkdir processor0/{0,constant}
cp -r constant/polyMesh processor0/constant/
cp 0/* processor0/0/

mpirun -np $N redistributePar -parallel -overwrite
Is there someone who understands, or can guess, what's happening? Thanks!

Matteo

PS: I do not want to use decomposePar because I need to do all the pre-processing in parallel (of course, on a bigger number of processors) |
|
May 28, 2013, 18:37 |
|
#2 |
Retired Super Moderator
Bruno Santos
Join Date: Mar 2009
Location: Lisbon, Portugal
Posts: 10,982
Blog Entries: 45
Rep Power: 128 |
Greetings Matteo and welcome to the forum!
Well, even if you don't want to use decomposePar, you still have to use decomposePar. This is because redistributePar needs the sub-domain data to be present; otherwise it will do some very crazy stuff... I even wonder why it didn't crash in the first place.

As for pre-processing in parallel, it depends on what you really want to do. Many of OpenFOAM's pre-processing applications will work with the "-parallel" option, along with mpirun. The only reason I can see for you to do this is if your mesh is too large to be decomposed on a single machine. Of course, the question then is: how was the mesh generated in the first place?

Either way, if you can provide some more information about the workflow you need to achieve, it'll be easier to give you some good directions on how to proceed.

Best regards, Bruno
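This advice can be sketched as a short script. A minimal sketch, assuming a sourced OpenFOAM environment and a prepared case directory; the `run` helper and the dry-run guard are illustrative additions (not part of OpenFOAM), so by default the commands are only printed rather than executed:

```shell
#!/bin/sh
# Sketch: run decomposePar first so that redistributePar has valid
# sub-domain data to start from, then redistribute onto N processors.
# DRY_RUN=1 (the default) only prints each command; set DRY_RUN=0 to
# actually execute inside an OpenFOAM case.
N=4
DRY_RUN=${DRY_RUN:-1}

run() {
    echo "+ $*"
    [ "$DRY_RUN" = "1" ] || "$@"
}

# Initial decomposition: numberOfSubdomains in system/decomposeParDict
# can be as small as the machine's memory allows (e.g. 2)
run decomposePar

# After raising numberOfSubdomains to N in system/decomposeParDict:
run mpirun -np "$N" redistributePar -parallel -overwrite
```

Running it once in dry-run mode is a cheap way to review the exact commands before committing to a large parallel job.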
__________________
|
|
May 28, 2013, 20:34 |
|
#3 |
New Member
Matteo Cerminara
Join Date: Feb 2012
Posts: 15
Rep Power: 14 |
Greetings Bruno, and many thanks for the quick reply!!!
This forum is an irreplaceable resource! About the first problem: I get the same result if I start by decomposing the case onto two processors and then redistribute it onto 4. Here is the output image. I used the following commands: Code:
decomposePar                                         # on two processors
# modify the decomposeParDict dictionary, then:
mpirun -np 4 redistributePar -parallel -overwrite    # on 4 processors
Code:
/*---------------------------------------------------------------------------*\
  =========                 |
  \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox
   \\    /   O peration     |
    \\  /    A nd           | Copyright (C) 2011 OpenFOAM Foundation
     \\/     M anipulation  |
-------------------------------------------------------------------------------
License
    This file is part of OpenFOAM.

    OpenFOAM is free software: you can redistribute it and/or modify it
    under the terms of the GNU General Public License as published by
    the Free Software Foundation, either version 3 of the License, or
    (at your option) any later version.

    OpenFOAM is distributed in the hope that it will be useful, but WITHOUT
    ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
    FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
    for more details.

    You should have received a copy of the GNU General Public License
    along with OpenFOAM.  If not, see <http://www.gnu.org/licenses/>.

Application
    redistributePar

Description
    Redistributes existing decomposed mesh and fields according to the current
    settings in the decomposeParDict file.

    Must be run on maximum number of source and destination processors.
    Balances mesh and writes new mesh to new time directory.

    Can also work like decomposePar:
    \verbatim
        # Create empty processor directories (have to exist for argList)
        mkdir processor0 .. mkdir processorN
        # Copy undecomposed polyMesh
        cp -r constant processor0
        # Distribute
        mpirun -np ddd redistributePar -parallel
    \endverbatim
\*---------------------------------------------------------------------------*/
The test case I am using here has a practically isotropic and orthogonal mesh with 36x36x72 cells. Going forward, I will try to explain my workflow, even if it is a little complicated. The issue comes up because I have as much RAM as I want for parallel applications, but not for serial ones.
In principle, I would like to take a coarse test case and:
- redistribute the coarse mesh over a bigger number N of processors; I'm using mpirun -np $N redistributePar -parallel -overwrite
- refine the coarse mesh; I'm using mpirun -np $N refineMesh -parallel -overwrite
- map the nonuniform fields of the coarse mesh onto the finer one; I'm using mapFields -consistent -parallelTarget -sourceTime 1e-08 . because I use the non-decomposed case as the source.

Executing these steps, I found some problems; the first one is the subject of this thread. Then:
- I need to use mapFields because my pressure field is not uniform, in both the internalField and boundaryField entries. But it does not seem to act on either the p0 entry of the totalPressure boundary condition or the inletValue entry of the inletOutlet condition for the temperature field (while it does work on the value entries).
- I found a way to use mapFields with a decomposed source too (using the flag -parallelSource), but I was not able to find a way to run it in parallel.

I thank you in advance for any hint or suggestion you could give me. Best regards, Matteo
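The three steps above can be sketched as one script. A minimal sketch under the thread's own assumptions (a decomposed OpenFOAM case with decomposeParDict requesting N sub-domains); the `run` helper and dry-run guard are illustrative, not OpenFOAM utilities, so by default the commands are only echoed:

```shell
#!/bin/sh
# Sketch of the redistribute -> refine -> map workflow described above.
# DRY_RUN=1 (the default) only prints each command; set DRY_RUN=0 to
# execute inside a prepared OpenFOAM case.
N=4
DRY_RUN=${DRY_RUN:-1}

run() {
    echo "+ $*"
    [ "$DRY_RUN" = "1" ] || "$@"
}

# 1. Redistribute the coarse decomposed case over N processors
run mpirun -np "$N" redistributePar -parallel -overwrite

# 2. Refine the mesh in parallel
run mpirun -np "$N" refineMesh -parallel -overwrite

# 3. Map fields from the serial coarse source case onto the
#    decomposed, refined target
run mapFields -consistent -parallelTarget -sourceTime 1e-08 .
```

Note that step 3 runs mapFields itself serially (only the target is decomposed), which is exactly the limitation discussed later in the thread.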
|
May 29, 2013, 18:01 |
|
#4 |
Retired Super Moderator
Bruno Santos
Join Date: Mar 2009
Location: Lisbon, Portugal
Posts: 10,982
Blog Entries: 45
Rep Power: 128 |
Hi Matteo,
I had forgotten about that header... I saw it several months ago and it had slipped my mind. Although I thought that decomposePar did some more magic, even if we were to decompose to a single processor folder... OK, there are only two things I can think of right now:
Bruno
__________________
|
|
June 3, 2013, 12:34 |
Test case for refining and mapping in parallel -- with bug
|
#5 |
New Member
Matteo Cerminara
Join Date: Feb 2012
Posts: 15
Rep Power: 14 |
Hi Bruno,
in the end I found a little time to tidy up my test case and create one for the forum. Here it is: turbulentInletCFD.zip. Inside you can find a bash script that performs the steps I would like to do. I would be grateful to anyone who can help me solve the problem described in the posts above. Matteo
|
June 16, 2013, 16:59 |
|
#6 |
Retired Super Moderator
Bruno Santos
Join Date: Mar 2009
Location: Lisbon, Portugal
Posts: 10,982
Blog Entries: 45
Rep Power: 128 |
Hi Matteo,
Sorry for taking so long to look into this, but I finally figured out what's going on. Actually, I detected a couple of bugs in redistributePar, thanks to your test case! OK, let's look at the issues one at a time:
If you do not want to, or can't, report this for some reason (time?), please give me permission to report it for you. As for a workaround in the meantime, it's simple: rely on changeDictionary to restore things to normal after redistributing the mesh+fields: Code:
echo "redistributing..."
mpirun -np $N redistributePar -parallel -overwrite > logRed 2>&1

echo "restoring initial 0 fields to the new decomposition..."
mpirun -np $N changeDictionary -parallel > logChg 2>&1

echo "refining..."
mpirun -np $N refineMesh -parallel -overwrite > logRef 2>&1

echo "mapping..."
mapFields -consistent -parallelTarget -sourceTime 1e-08 . > logMap 2>&1
Attached is the fixed case. Best regards, Bruno
__________________
|
|
June 17, 2013, 10:09 |
|
#7 |
New Member
Matteo Cerminara
Join Date: Feb 2012
Posts: 15
Rep Power: 14 |
Hi Bruno,
thanks for the fixed case!!! I will try it as soon as possible and tell you how it works! About the bug reporting: I have never tried to submit a bug to http://www.openfoam.org/bugs/ so, if it will not take you too long, please feel free to submit it. I will learn how to do it from your report! Otherwise, I will try myself! Best regards, Matteo
|
June 17, 2013, 19:46 |
|
#8 |
Retired Super Moderator
Bruno Santos
Join Date: Mar 2009
Location: Lisbon, Portugal
Posts: 10,982
Blog Entries: 45
Rep Power: 128 |
Hi Matteo,
It's going to be a long week for me. I'll look into submitting it during the next weekend. Best regards, Bruno
__________________
|
|
October 13, 2013, 17:06 |
|
#9 |
Member
Dan Kokron
Join Date: Dec 2012
Posts: 33
Rep Power: 13 |
Bruno,
Did these bugs get reported/resolved? I don't see anything related in Mantis. Thanks, Dan
|
October 14, 2013, 17:43 |
|
#10 |
Retired Super Moderator
Bruno Santos
Join Date: Mar 2009
Location: Lisbon, Portugal
Posts: 10,982
Blog Entries: 45
Rep Power: 128 |
Hi Dan,
Unfortunately I haven't had the time yet to properly report this bug, especially because I haven't managed to reproduce it with a simpler test case. But it's still on my to-do list. And I didn't want to provide this complicated test case, since it would make it harder for them to ascertain where the problem really is. So Dan, if you have a simpler test case where this bug can be reproduced, feel free to report it! Best regards, Bruno
__________________
|
|
February 16, 2014, 15:53 |
|
#11 |
Retired Super Moderator
Bruno Santos
Join Date: Mar 2009
Location: Lisbon, Portugal
Posts: 10,982
Blog Entries: 45
Rep Power: 128 |
Greetings to all!
OK, I've done a really quick test with the original case that Matteo provided, and I believe this issue has been fixed in OpenFOAM 2.2.x, thanks to this bug report: http://www.openfoam.org/mantisbt/view.php?id=1130 If anyone can double-check this, please let us know whether it is truly fixed! Best regards, Bruno
__________________
|