strange processor boundary behavior with linearUpwindV |
November 4, 2011, 20:42 | #1
strange processor boundary behavior with linearUpwindV
New Member
Austin Kimbrell
Join Date: Feb 2011
Location: Tennessee, USA
Posts: 8
Rep Power: 15
Code used is a recent version of OpenFOAM-1.6-ext.

I observed the following behavior when running a test case in parallel; see the attached image. The case is a point vortex with p = 0 boundaries and U boundaries set to zeroGradient. I am running the standard icoFoam solver on 3 processors. For discretization schemes I use the default Gauss linear everywhere except div(phi,U), for which I use Gauss linearUpwindV Gauss linear. My max Courant number is ~0.3 and the grid resolution is reasonable.

The quantity shown is vorticity magnitude. You can see that when the scale is minimized, data is not being shared properly across the processor boundaries: the boundaries effectively show up in the solution when they should be invisible. I have run this case previously in serial and never seen anything like this. Furthermore, most other schemes for the convection term do not show this behavior. I have observed something similar with SFCDV, but it dissipates shortly after initialization.

Is this a known problem with the linearUpwindV scheme? I have also tried pure linearUpwind and did not see this problem at all.

Last edited by akimbrell; November 4, 2011 at 20:42. Reason: identified OpenFOAM version
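For reference, the fvSchemes setup described above would look roughly like this (a minimal sketch assuming the 1.6-ext syntax, where linearUpwindV takes the gradient scheme inline; only the div(phi,U) entry deviates from the default):
Code:
divSchemes
{
    default         Gauss linear;
    div(phi,U)      Gauss linearUpwindV Gauss linear;
}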
November 5, 2011, 13:45 | #2
Senior Member
Alberto Passalacqua
Join Date: Mar 2009
Location: Ames, Iowa, United States
Posts: 1,912
Rep Power: 36
This is typically due to a missing update of the BCs. If you can reproduce it in 2.0.x, please report it as a bug on http://www.openfoam.com/mantisbt/main_page.php.
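A fix of this kind usually means explicitly refreshing a field's boundary values, including the processor (coupled) patches, after the field is modified inside the solver loop. A minimal sketch (the field name U is just an example, not taken from any specific solver in this thread):
Code:
// After updating U in a custom solver, refresh its boundary values so
// the processor patches stay consistent with the internal field:
U.correctBoundaryConditions();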
__________________
Alberto Passalacqua
GeekoCFD - A free distribution based on openSUSE 64 bit with CFD tools, including OpenFOAM. Available in both physical and virtual formats (current status: http://albertopassalacqua.com/?p=1541)
OpenQBMM - An open-source implementation of quadrature-based moment methods.
To obtain more accurate answers, please specify the version of OpenFOAM you are using.
February 11, 2013, 15:56 | #3
Member
David Hora
Join Date: Mar 2009
Location: Zürich, Switzerland
Posts: 63
Rep Power: 17
November 6, 2013, 04:53 | #4
Senior Member
saeideh mohamadi
Join Date: Aug 2012
Posts: 229
Rep Power: 15
Quote:
Would you please tell me how linearUpwind can be defined in OpenFOAM-1.6-ext? For example, linearUpwind in OpenFOAM 2.2.0 is defined this way: div(phi,U) Gauss linearUpwindV grad(U); I want to know how I should define it in OpenFOAM-1.6-ext. Thank you very much
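Side by side, the two syntaxes look like this (the 1.6-ext line is the form used in post #1 of this thread, where the gradient scheme is given inline instead of referenced by name; adapt it to your own case):
Code:
// OpenFOAM 2.2.0: the gradient is referenced by name
div(phi,U)      Gauss linearUpwindV grad(U);

// OpenFOAM-1.6-ext: the gradient scheme is given inline
div(phi,U)      Gauss linearUpwindV Gauss linear;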
November 9, 2013, 14:16 | #5
Retired Super Moderator
Bruno Santos
Join Date: Mar 2009
Location: Lisbon, Portugal
Posts: 10,982
Blog Entries: 45
Rep Power: 128
Greetings s.m,

You can find tutorials that use this scheme by running:
Code:
grep -R "linearUpwindV" $FOAM_TUTORIALS/
Bruno
May 18, 2014, 21:04 | #6
compressibleInterFoam
Member
Join Date: Sep 2013
Posts: 46
Rep Power: 13
Hi!

I have worked for several months on the simulation of cavitation bubbles. I started with the compressibleInterFoam solver and modified it, but the problem also remains with the original one(s) (versions 2.1.1, 2.2.1, 2.3.0): when at least one of the processor patches crosses the interface, the velocity field is not computed correctly. You observe numerical fragments parallel to the patch, as in the following picture: https://www.dropbox.com/s/a4cr4qpyoqjhue2/bug.png

The only version I found able to deal with the decomposition is the openfoam-extend version. Has anyone observed this? Is this the same error as discussed here?

Furthermore, I discovered that even without decomposing, a 2D axisymmetric case overestimates the velocity in the interface cell directly at the axis. So if you place a bubble that will collapse onto the axis, it will unphysically get thinner at the axis. This also seems to be no problem in the extend version 3.0 (with exactly the same input files).

Regards, Max

Last edited by ma-tri-x; May 18, 2014 at 21:08. Reason: forgot sth
May 20, 2014, 07:04 | #7
Member
David Hora
Join Date: Mar 2009
Location: Zürich, Switzerland
Posts: 63
Rep Power: 17
Hi Max,

Do you see the numerical fragments only with linearUpwindV, or also with other schemes? I think it would be good to report this bug. Do you have a simple case that demonstrates your problems?

Regards, David
May 20, 2014, 13:13 | #8
I don't use linearUpwind
Member
Join Date: Sep 2013
Posts: 46
Rep Power: 13
Hi David!

I don't use linearUpwind. My fvSchemes looks like:
Code:
ddtSchemes
{
    default         Euler;
}

gradSchemes
{
    default         Gauss linear;
}

divSchemes
{
    default          Gauss vanLeer;
    div(phirb,alpha) Gauss interfaceCompression 1;
}

laplacianSchemes
{
    default         Gauss linear corrected;
}

interpolationSchemes
{
    default         linear;
}

snGradSchemes
{
    default         none;
    // default      Gauss skewCorrected linear;
    snGrad(pd)      limited 0.5;
    snGrad(rho)     limited 0.5;
    snGrad(alpha1)  limited 0.5;
    snGrad(p_rgh)   limited 0.8;
    snGrad(p)       limited 0.8;
}

fluxRequired
{
    default         none;
    p_rgh;
    p;
    pcorr;
    alpha1;
}
In August I will be able to prepare a simple case... I think you will immediately see the symmetry-axis effect when you set up a bubble with very low pressure on the axis of an axisymmetric mesh. In the very first timestep, when the bubble wall starts to accelerate, you find that the two cells where the interface hits the axis accelerate faster. It is hidden in the following timesteps because I think you cannot set a threshold on the U-field magnitude in ParaView (the two cells are only slightly faster, and the range of the U magnitude over the mesh is much larger). Unfortunately I cannot show you this, because I didn't save a screenshot.

But I can show you another freaky example: once I decomposed with scotch, and some processor boundaries were parallel to the bubble wall for some time. When the bubble passed the processor patch, droplets were formed. Ah yes, here's an attachment: the left picture is at t=8.18393e-5, the right at t=8.33299e-5. In between, the bubble collapsed further and the interface (= "bubble wall") passed the scotch processor patch.

Last edited by ma-tri-x; May 20, 2014 at 13:17. Reason: better explanation
October 17, 2015, 16:57 | #9
Solved
Member
Join Date: Sep 2013
Posts: 46
Rep Power: 13
Quote:
After years of calculating in parallel with OpenFOAM of several versions, I finally found the bug: have a look at your
Code:
controlDict
If adjustTimeStep is enabled there, each processor adjusts deltaT from its own Courant number. Then go to your solver's sources:
Code:
cd $WM_PROJECT_DIR/applications/solvers/path-to-solver
and open
Code:
CourantNo.H
There you will find the line
Code:
CoNum = max(SfUfbyDelta/mesh.magSf()).value()*deltaT;
This maximum is not reduced across the processors, so each processor computes its own CoNum and hence its own deltaT. Replace the line with
Code:
CoNum = max(SfUfbyDelta/mesh.magSf()).value()*deltaT;
reduce(CoNum,minOp<scalar>());
Now run
Code:
./Allwmake
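For anyone applying this today, here is the idea as a compact sketch (same variable names as the foam-extend snippet above; note that the post uses minOp, while maxOp<scalar>() is arguably the safer reduction, since the globally largest Courant number then dictates the smallest deltaT on every processor):
Code:
// Local maximum face Courant number on this processor only:
CoNum = max(SfUfbyDelta/mesh.magSf()).value()*deltaT;

// Synchronise across processors so adjustTimeStep derives the same
// deltaT everywhere; with maxOp every rank respects the largest Co.
reduce(CoNum, maxOp<scalar>());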
October 17, 2015, 17:17 | #10
Retired Super Moderator
Bruno Santos
Join Date: Mar 2009
Location: Lisbon, Portugal
Posts: 10,982
Blog Entries: 45
Rep Power: 128
Hi ma-tri-x,

Many thanks for sharing this solution! I've taken a look into how the latest OpenFOAM versions deal with this: they calculate the Courant number differently and keep the value consistent across all processors. I've checked, and this issue currently only occurs in foam-extend (and their old OpenFOAM 1.6-ext). Please report this at their bug tracker here: http://sourceforge.net/p/openfoam-ex...extendrelease/

Best regards, Bruno

Edit: Since 7 days have passed, I've reported this here, so that it wouldn't be lost: https://sourceforge.net/p/openfoam-e...ndrelease/295/

Last edited by wyldckat; October 24, 2015 at 13:54. Reason: see "Edit:" and fixed typo
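For comparison, the 2.x-era CourantNo.H computes the cell Courant number with gMax, which already performs the parallel reduction (paraphrased from memory, so treat it as a sketch rather than a verbatim quote):
Code:
scalar CoNum = 0.0;

if (mesh.nInternalFaces())
{
    // Per-cell sum of face flux magnitudes:
    scalarField sumPhi(fvc::surfaceSum(mag(phi))().internalField());

    // gMax reduces over all processors, so every rank sees one value:
    CoNum = 0.5*gMax(sumPhi/mesh.V().field())*runTime.deltaTValue();
}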
November 3, 2015, 00:44 | #11
New Member
anand sudhi
Join Date: Sep 2015
Posts: 16
Rep Power: 11
Hi,

It would be a huge help if you could tell me about the extra modification done to linearUpwindV to account for the direction of the field. I understand there is a new variable 'maxCorr' in the code, but I am not able to understand its implementation. What is the purpose of this modification? Also, does this modification make linearUpwindV bounded without the need to specify limited gradients? Any comments will be very much appreciated.

Thank you, Anand
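Since this question never got a direct answer in the thread: paraphrased from memory of the 2.x linearUpwindV.C sources (so double-check against your own version), maxCorr bounds the explicit gradient correction by the upwind-weighted difference of the neighbouring cell values, which is what makes the V-scheme sensitive to the direction of the field:
Code:
// Per internal face, inside linearUpwindV<Type>::correction()
// (a sketch from memory, not a verbatim copy):
const label own = owner[facei];
const label nei = neighbour[facei];

vector maxCorr;

if (faceFlux[facei] > 0)
{
    // Upwind cell is the owner; the correction may not exceed the
    // non-upwind fraction of the cell-to-cell difference.
    maxCorr = (1.0 - w[facei])*(vf[nei] - vf[own]);
    sfCorr[facei] = (Cf[facei] - C[own]) & gradVf[own];
}
else
{
    maxCorr = w[facei]*(vf[own] - vf[nei]);
    sfCorr[facei] = (Cf[facei] - C[nei]) & gradVf[nei];
}

// Limit the explicit correction against maxCorr: drop it when it
// opposes the local variation, scale it back when it overshoots.
// This bounds the correction itself, so limited gradients are not
// strictly required, though they can still help.
scalar sfCorrs = magSqr(sfCorr[facei]);
scalar maxCorrs = sfCorr[facei] & maxCorr;

if (sfCorrs > 0)
{
    if (maxCorrs < 0)
    {
        sfCorr[facei] = vector::zero;
    }
    else if (sfCorrs > maxCorrs)
    {
        sfCorr[facei] *= maxCorrs/(sfCorrs + VSMALL);
    }
}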
November 4, 2015, 10:20 | #12
Member
Join Date: Sep 2013
Posts: 46
Rep Power: 13
Quote:
Sorry, I didn't receive an email that you replied to my post. Thanks a lot for reporting this "bug"!

Strangely enough, I only observed this in some special cases. In other cases the time steps were synchronized, but it may have occurred that some cores just waited for the others to complete intermediate time steps, and afterwards these intermediate ones were deleted again. It seems that deep inside the source code something is not working properly (because the same version of openmpi sometimes works and sometimes this bug occurs).

@Anand: I don't have a clue about the upwind thing. You must ask someone from the previous posts. I suggested that it doesn't have to do with linearUpwind, since I didn't use that one and got the bug as well... at least it looked alike.

Kind regards, Max
April 12, 2016, 06:49 | #13
New Member
Carles
Join Date: Aug 2015
Posts: 4
Rep Power: 11
Hi everyone!

I'm having a similar problem with the processor boundary conditions while trying to simulate a trombe wall with chtMultiRegionFoam on 6 processors. The pictures show the velocity in the air region (inside the building and surrounding the trombe wall). The first one is the initial condition, mapped from a chtMultiRegionSimpleFoam parallel run. In the second one, the strange behaviour at the processor boundaries shows up.

It seemed that this was a problem with the 1.6-ext version, but I'm using 2.4, and I have not seen the inconsistency in the time directories. The tutorial multiRegionHeater runs fine in parallel. In my case this problem only occurs in the transient run and in the fluid region. Any idea? Thanks!
July 26, 2017, 17:46 | #14
Member
Alexander Nekris
Join Date: Feb 2015
Location: France
Posts: 32
Rep Power: 11
Hello everybody!

I have to revive this old thread. I've observed a very strange behavior at the processor boundaries. I work with foam-extend-4.0 and use a kind of mix between reactingFoam and sonicFoam to simulate chemical kinetics at supersonic velocities. My test case is a 2D wedge in a supersonic flow. I heat the gas locally on the surface of the wedge, which leads to a decomposition of molecular nitrogen into atomic nitrogen.

When I run my test case serially, everything is fine: I get my dissociation of N2 into N at the location where I heat the gas. But after I switched from serial to parallel (here 4 processors), the N mass fraction started moving along the processor boundary; see the strip on the left side of the attached image. What you see is the upper side of the wedge where I heat the gas. For visualizing, I reduced the maximum of the color scale.

I tried to update the mass fractions after the Yi-equation with Yi.correctBoundaryConditions(). I tried to change the div and laplacian schemes, which changed the behaviour somewhat, but it was still there. When I deactivate the convective and diffusive terms in the Yi-equation, this phenomenon does not occur, which suggests the problem is with the div and laplacian schemes. Correct? I don't use adjustTimeStep but a fixed time step. My div scheme in the Yi-equation is Gauss limitedLinear01 1 and my laplacian scheme is Gauss linear uncorrected, as sketched below.

Does anybody know what happens with my mass fractions? Has anyone observed the same error? Any suggestion or advice is welcome!

Regards, Alex
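In fvSchemes terms, the entries described above would look roughly like this (the exact div entry name depends on the solver; div(phi,Yi_h) is only an assumption based on the reactingFoam family):
Code:
divSchemes
{
    div(phi,Yi_h)   Gauss limitedLinear01 1;  // entry name assumed
}

laplacianSchemes
{
    default         Gauss linear uncorrected;
}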
July 31, 2017, 13:14 | #15
Member
Alexander Nekris
Join Date: Feb 2015
Location: France
Posts: 32
Rep Power: 11
Hello everyone,

I guess I solved the problem, but I don't know why it works. I simulate supersonic flows at very high velocities, meaning that at a deltaT of about 5.0e-9 I have a max Courant number of about 0.2, so I have to work with very small deltaT values. I use the following solver settings for my Y-equation:
Code:
solver          BiCGStab;
preconditioner  DILU;
tolerance       1e-08;
relTol          0;
If I loosen the tolerance to 1e-07, the strange behavior at the processor boundaries disappears! I don't know why that happens. Anyhow, the proper combination of deltaT and tolerance is needed. Does anybody know why this happens?

Regards, Alex

Last edited by Neka; August 1, 2017 at 11:07.
August 1, 2017, 11:20 | #16
Member
Alexander Nekris
Join Date: Feb 2015
Location: France
Posts: 32
Rep Power: 11
Ok, it turns out I haven't solved the problem!

When I loosen the tolerance from 1e-08 to 1e-07, the strange behavior at the processor boundaries disappears, but simultaneously the Y-equation stops convecting, meaning the mass fractions are frozen and don't move anymore. I probably forgot to mention that my mass fractions of N (atomic nitrogen) are very small (e.g. Y_N = 1.0e-08), so I am working with very small numbers here. Any suggestion or advice is welcome!
March 25, 2018, 10:05 | #17
Member
Kristjan
Join Date: Apr 2017
Location: Slovenia
Posts: 36
Rep Power: 9
Does this commit resolve the issue? https://github.com/OpenFOAM/OpenFOAM...aefb62bb47c0c2

I've been looking at this source, but I didn't get my head around the problem. Could someone explain? https://cpp.openfoam.org/v3/a08663_source.html

I'm using a modified compressibleInterFoam from OpenFOAM v3.0.1 in parallel. The Courant number peaks by orders of magnitude in one of the sub-domains, the time step is adjustable and limited by maxCo, and a solution discontinuity appeared over the processor boundaries. I'm guessing I should copy $(WM_PROJECT_DIR)/src/finiteVolume/lnInclude/CourantNo.H to my solver dir, make the change and include it under a different name?
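That workflow would look roughly like this (a sketch only; the solver path and the include line are assumptions based on the question above):
Code:
# Copy the stock header next to the solver sources under a new name,
# so the modified copy is picked up instead of the lnInclude one:
cd path-to-your-solver
cp $WM_PROJECT_DIR/src/finiteVolume/lnInclude/CourantNo.H myCourantNo.H

# Edit myCourantNo.H as required, change the solver's
#   #include "CourantNo.H"
# to
#   #include "myCourantNo.H"
# in the solver's main .C file, and recompile:
wmake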
March 26, 2018, 17:29 | #18
Retired Super Moderator
Bruno Santos
Join Date: Mar 2009
Location: Lisbon, Portugal
Posts: 10,982
Blog Entries: 45
Rep Power: 128
Quick answers:
Quote:
Quote:
Quote:
March 26, 2018, 19:11 | #19
Member
Kristjan
Join Date: Apr 2017
Location: Slovenia
Posts: 36
Rep Power: 9
All sub-domains contain internal faces. I'll spend some more time on this and come back if I find anything useful. Thank you for your time!
April 2, 2018, 08:42 | #20
Member
Kristjan
Join Date: Apr 2017
Location: Slovenia
Posts: 36
Rep Power: 9
Quote: