
pressure eq. "converges" after a few time steps

February 7, 2011, 10:08   #41
Felix L. (FelixL)
Quote:
Originally Posted by vkrastev View Post
Hi Felix,
I've tried both PCG and GAMG as solvers for p, but my experience goes in a different direction than yours... In particular, if the relTol parameter for p is quite low (I usually use something like 10^-04/10^-05), the GAMG solver is much faster than the PCG one. Can you explain for which cases you found the opposite behaviour?
Thanks

V.
Hello, V,


These were simple aerodynamic cases (2D, incompressible, RANS), mostly on hexa meshes. The case where I looked deeper into the performance of the linear solvers was a ground-effect study of two interacting airfoils. The speedup I obtained with PCG was impressive: I was able to reduce the simulation time from 3 days (finest grid) with GAMG to less than 12 hours with PCG, with otherwise the same settings.

Why do you use such low relative tolerances? Is there any particular reason behind that?


Greetings,
Felix.

February 7, 2011, 10:08   #42
Franco Marra (francescomarra)
Quote:
Hi Franco, and thanks for your comment. How can I set the maximum iteration number? Is there a parameter missing from my fvSolution where I can set it?
Hi Maddalena,

It should be possible by adding, in fvSolution, the following keyword:
Code:
maxIter   300;
For instance:
Code:
solvers
{
    p
    {
        solver          PCG;
        preconditioner  DIC;
        tolerance       1e-06;
        relTol          0.1;
        maxIter         300;
    }
...
1000 iterations is the default value.

Regards,
Franco

February 7, 2011, 10:19   #43
Travis Carrigan (tcarrigan)
Just curious, have you tried using leastSquares for the gradScheme?

I did some 2D calculations for a NACA airfoil using both structured and unstructured grids. I too suffered convergence issues when running the calculation for the unstructured case. However, switching the gradScheme to a cellLimited leastSquares happened to solve the problem.

Let me know if this works.
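For reference, a minimal fvSchemes fragment for this kind of switch (values are illustrative, not taken from the case files above; the limiter coefficient runs from 0, no limiting, to 1, full limiting):
Code:
gradSchemes
{
    default         cellLimited leastSquares 1;
    grad(U)         cellLimited leastSquares 1;
    grad(p)         cellLimited leastSquares 1;
}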

February 7, 2011, 10:58   #44
maddalena
OK, here are the first results using everything you suggested above. I have not reached steady state, but at least I have a sort of solution now.
Everything goes fine until some bounding of epsilon and k appears. This prevents my pressure residual from getting below 0.1. How can I solve it? By using lower relaxation factors on them?
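For reference, under-relaxation factors live in fvSolution; a minimal sketch with purely illustrative values (not the actual case settings) could look like this:
Code:
relaxationFactors
{
    p               0.2;    // pressure usually relaxed most strongly
    U               0.5;
    k               0.3;    // reduce if k keeps getting bounded
    epsilon         0.3;    // reduce if epsilon keeps getting bounded
}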

Quote:
Originally Posted by tcarrigan View Post
I did some 2D calculations for a NACA airfoil using both structured and unstructured grids. I too suffered convergence issues when running the calculation for the unstructured case. However, switching the gradScheme to a cellLimited leastSquares happened to solve the problem.
Travis, what is the advantage of cellLimited leastSquares for the gradScheme?

mad
Attached Images
File Type: jpg residuals.jpg (33.0 KB, 246 views)
Attached Files
File Type: txt fvSchemes.txt (1.8 KB, 136 views)
File Type: txt fvSolution.txt (1.8 KB, 91 views)

February 7, 2011, 11:11   #45
Vesselin Krastev (vkrastev)
Quote:
Originally Posted by FelixL View Post
(see post #41 above)
Well, your results are really interesting... However, my cases are slightly different (also incompressible RANS, but 3D, with a single object placed near a solid ground and with tetra/prism meshes), and I have observed that even for not-so-low relTol values (about 10^-02) GAMG does a much faster job than PCG. About the relTol values, I have to admit that I haven't made any systematic study of an optimum value, but since the combination of a smooth solver (for U and the turbulent quantities) and GAMG (for p) allows me to decrease them without much additional cost, I prefer to keep them a little lower than probably necessary (10^-02/10^-03 for U and the turbulent quantities, 10^-04/10^-05 for p).
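For reference, a minimal fvSolution sketch of this kind of combination (illustrative values only, not the actual settings being discussed; the regex key groups the transported fields):
Code:
solvers
{
    p
    {
        solver          GAMG;
        smoother        GaussSeidel;
        agglomerator    faceAreaPair;
        nCellsInCoarsestLevel 1000;
        mergeLevels     1;
        cacheAgglomeration true;
        tolerance       1e-07;
        relTol          1e-04;   // the "low" relTol for p mentioned above
    }

    "(U|k|epsilon)"
    {
        solver          smoothSolver;
        smoother        GaussSeidel;
        nSweeps         1;
        tolerance       1e-07;
        relTol          1e-02;
    }
}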

Best Regards

V.

February 8, 2011, 01:02   #46
Arjun (arjun)
Quote:
Originally Posted by vkrastev View Post
and then I've tried it for my runs: with domains of a few millions of cells (1.5 up to 5 millions) I can tell you that passing from 1000 to 50 cells does not have any effect on the solver efficiency, but this doesn't prove whether such a criterion is universally correct or not...
There is some confusion here. I think you mean to say that you did not observe any efficiency change in the CFD solution, NOT in the matrix solver.

It is quite possible that you do not observe any efficiency change. I will try to explain the reason.

First, the matrix solver is sensitive to the direct-solve size and the number of equations. Why is that? It is related to the main reason why multigrid algorithms work in the first place. A simple rule of thumb is: the larger the number of equations handled by the direct solver, the faster the convergence the multigrid will show. (It might sometimes work against it, but it rarely does in a properly implemented multigrid code.)

To show this, here is an example. The mesh is 60 x 60 x 60 and the equation is a Poisson equation. We will use smoothed aggregation multigrid (of Vanek et al.). (I haven't used my additive correction multigrid code for a long time, so I will not waste time searching for it.)

Here is how the multigrid levels are generated in this case.

ncells = 216000

Size [ 0 ] = 216000
Size [ 1 ] = 25939
Size [ 2 ] = 611
Size [ 3 ] = 10
Size [ 4 ] = 1

Max AMG levels = 4


For this problem the initial residual is 1000.

Here is how the convergence went:

Res start = 1000
[1 ] Res = 1652.59 ratio 0.605112
[2 ] Res = 633.144 ratio 1.57942
[3 ] Res = 254.718 ratio 3.9259
[4 ] Res = 110.734 ratio 9.03068
[5 ] Res = 51.5134 ratio 19.4124
[6 ] Res = 25.3378 ratio 39.4668
[7 ] Res = 13.0345 ratio 76.7195
[8 ] Res = 6.96472 ratio 143.581
[9 ] Res = 3.84651 ratio 259.976
[10 ] Res = 2.18456 ratio 457.759
[11 ] Res = 1.26968 ratio 787.598
[12 ] Res = 0.751132 ratio 1331.32
[13 ] Res = 0.449974 ratio 2222.35
[14 ] Res = 0.271864 ratio 3678.31
[15 ] Res = 0.165201 ratio 6053.23
[16 ] Res = 0.100779 ratio 9922.68
[17 ] Res = 0.0616454 ratio 16221.8
[18 ] Res = 0.03778 ratio 26469
[19 ] Res = 0.0231871 ratio 43127.5
[20 ] Res = 0.0142468 ratio 70191.2
[21 ] Res = 0.00876183 ratio 114131
[22 ] Res = 0.00539288 ratio 185430
[23 ] Res = 0.00332168 ratio 301052
[24 ] Res = 0.00204727 ratio 488455
[25 ] Res = 0.0012626 ratio 792017
[26 ] Res = 0.00077915 ratio 1.28345e+006


It took 26 iterations to drop the error by a factor of 1.28345e+006.

Now let's fix level 2 as the direct solve, i.e. the level with 611 equations will be solved directly. This is how the convergence went in this case:

Res start = 1000
[1 ] Res = 1625.02 ratio 0.615378
[2 ] Res = 512.776 ratio 1.95017
[3 ] Res = 167.994 ratio 5.95259
[4 ] Res = 57.2169 ratio 17.4774
[5 ] Res = 20.0184 ratio 49.9541
[6 ] Res = 7.11943 ratio 140.461
[7 ] Res = 2.56062 ratio 390.53
[8 ] Res = 0.928676 ratio 1076.8
[9 ] Res = 0.338887 ratio 2950.83
[10 ] Res = 0.124259 ratio 8047.74
[11 ] Res = 0.0457355 ratio 21864.9
[12 ] Res = 0.0168867 ratio 59218.1
[13 ] Res = 0.00625179 ratio 159954
[14 ] Res = 0.00232032 ratio 430975
[15 ] Res = 0.00086324 ratio 1.15843e+006


You see, by increasing the direct-solve size I could do the same thing in 15 iterations.

So there are two things:
(a) The direct solve takes time.
(b) By increasing the direct-solve size you can speed up convergence.

A good choice of direct-solve size is one where the time saved by faster convergence is larger than the time spent in the direct solve. Sometimes the two effects simply cancel each other out.

This is the reason why you might not have noticed any efficiency change. If you really want to observe the change, try setting the direct-solve size to something very large, say 100000 or so.
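In OpenFOAM terms, the knob being varied here (the 1000 vs 50 cells mentioned above) is nCellsInCoarsestLevel in the GAMG settings. A minimal sketch with illustrative values follows; note that GAMG only smooths the coarsest level unless a direct coarsest-level solve is enabled (where your version supports the directSolveCoarsest switch):
Code:
p
{
    solver          GAMG;
    smoother        GaussSeidel;
    agglomerator    faceAreaPair;
    cacheAgglomeration true;
    mergeLevels     1;
    nCellsInCoarsestLevel 1000;   // the 1000-vs-50 setting; try a much larger value to see the effect
    directSolveCoarsest false;    // set true to solve the coarsest level directly, if supported
    tolerance       1e-06;
    relTol          0.01;
}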

February 8, 2011, 01:20   #47
Arjun (arjun)
Quote:
Originally Posted by FelixL View Post
Another thing: have you tried a different solver for p than GAMG? I recently had the experience that PCG can be a lot faster than GAMG for certain cases. Which may sound a bit weird given all the benefits of multigrid methods, but it's just my experience. Give it a shot, if you haven't already!


Greetings,
Felix.
I think your observations are not out of line; they are pretty much correct. For some cases, when the matrix sizes are small enough, CG-based solvers CAN be faster than some multigrid solvers.

The main issue is that "multigrid" is a single word BUT it represents a whole world of matrix solvers. Some multigrids have issues, and that's why a lot of research is going on in this area. But some of the modern multigrid solvers are really very impressive.

A good read would be this:

http://neumann.math.tufts.edu/~scott/research/aSA2.pdf

just to see which direction we are heading in.

February 8, 2011, 02:32   #48
Dr. Alexander Vakhrushev (makaveli_lcf)
maddalena

one reason for your residuals not going very low can be that your solution is of a transient nature.

BTW, what is the reason for setting different non-orthogonal corrections (limited 0.5 and limited 0.33) for k and epsilon?

From my point of view, I would first try to converge everything with first-order schemes, and then switch to the "second-order" linearUpwind.
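For reference, a minimal divSchemes fragment illustrating that strategy (a sketch with assumed field names; the exact linearUpwind argument differs between OpenFOAM versions, older ones expecting a gradient scheme such as Gauss linear, newer ones a named gradient like grad(U)):
Code:
divSchemes
{
    default             none;

    // start-up: first order, very robust
    div(phi,U)          Gauss upwind;
    div(phi,k)          Gauss upwind;
    div(phi,epsilon)    Gauss upwind;

    // once converged, switch the momentum convection to "second order":
    // div(phi,U)       Gauss linearUpwind Gauss linear;
}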
__________________
Best regards,

Dr. Alexander VAKHRUSHEV

Christian Doppler Laboratory for "Metallurgical Applications of Magnetohydrodynamics"

Simulation and Modelling of Metallurgical Processes
Department of Metallurgy
University of Leoben

http://smmp.unileoben.ac.at

February 8, 2011, 03:38   #49
maddalena
Good morning!
Quote:
Originally Posted by makaveli_lcf View Post
one reason for your residuals not going very low can be that your solution is of a transient nature.
I do not think the solution is transient. There are lots of vortices forming in the geometry, due to the complex flow path, but all in all I believe the case is steady state. Last night the simulation ran without problems until time 1700, and this morning I had an acceptable velocity and pressure field, although the pressure residual (the highest one) did not go below 0.02. At least I have a solution now!
Quote:
Originally Posted by makaveli_lcf View Post
BTW, what is the reason for setting different non-orthogonal corrections (limited 0.5 and limited 0.33) for k and epsilon? From my point of view, I would first try to converge everything with first-order schemes, and then switch to the "second-order" linearUpwind.
This is something that was suggested to me some time ago, and since it seemed to be working, I never changed it back.
My task for today is to try to reduce the pressure residual; the first step can be to use first order everywhere on the laplacian terms, and maybe to try leastSquares on grad(U), as suggested by Travis.
Thanks for your support, I will keep you informed!

mad

February 8, 2011, 04:44   #50
Dr. Alexander Vakhrushev (makaveli_lcf)
maddalena

did you try potentialFoam on your setup? Is it converging to the required accuracy?

February 8, 2011, 04:54   #51
maddalena
Quote:
Originally Posted by makaveli_lcf View Post
did you try potentialFoam on your setup? Is it converging to the required accuracy?
No, I did not. The simulation I am running uses a slightly modified version of simpleFoam, for which potentialFoam gives no results. However, all the observations we made yesterday apply unchanged, because the modified version of simpleFoam is used only in the first steps of the simulation.

mad

February 8, 2011, 05:05   #52
Dr. Alexander Vakhrushev (makaveli_lcf)
I mentioned potentialFoam because it gives some indication of how many non-orthogonal corrections you need and provides a good first approximation to start from.
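For reference, a minimal sketch of the fvSolution entries potentialFoam typically uses (illustrative values; depending on the OpenFOAM version the non-orthogonal corrector count is read from the SIMPLE or a potentialFlow subdictionary):
Code:
solvers
{
    p
    {
        solver          PCG;
        preconditioner  DIC;
        tolerance       1e-06;
        relTol          0;
    }
}

SIMPLE
{
    nNonOrthogonalCorrectors 10;   // watch how many corrections p needs to converge
}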

February 8, 2011, 06:00   #53
Vesselin Krastev (vkrastev)
Quote:
Originally Posted by arjun View Post
(see post #46 above)
Really interesting explanation! However, when I say that passing from 1000 to 50 cells in the coarsest level (which, following your post, means lowering the size of the direct-solution step) did not show any significant change in efficiency, it is because of two factors. The first is the time required to reach a given convergence criterion, which by itself is not sufficient to separate the two competing effects introduced above (fewer iterations vs. additional time spent on the direct solve). The second seems much less ambiguous: the number of GAMG iterations reported by the code to reach the convergence criterion remains the same in both cases. So, if the time required is the same and the number of iterations doesn't change, maybe we can guess that the size and complexity of my cases make them quite insensitive to such a change (from 1000 to 50 or vice versa). Please correct me if I'm missing something else.

Best Regards

V.

February 8, 2011, 06:22   #54
Felix L. (FelixL)
Quote:
Originally Posted by arjun View Post
(see post #47 above)

Hello, Arjun,


thanks for the text recommendation, I will have a look into it.

Yeah, I know MG methods are a complex topic and there are many different directions evolving at the moment. I was working with the DLR TAU code (a code from the German aerospace center, used both in research and industry) and the multigrid method used there was really, really helpful for saving resources.

I'm pretty sure GAMG won't remain the only MG option in OF, so I'm very much looking forward to the upcoming updates.


Greetings,
Felix.

February 8, 2011, 10:19   #55
Dr. Alexander Vakhrushev (makaveli_lcf)
maddalena

I hope I have found the reason for your poor pressure residuals.
Look at the residual levels of my pressure-equation test with:

1. laplacian corrected
2. laplacian limited 0.5

1200cells_corrected_linear_100iter.png
1200cells_limited_0.5_linear_100iter.png
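For reference, the two discretizations being compared correspond to fvSchemes entries like these (a sketch; snGradSchemes is usually kept consistent with the laplacian correction):
Code:
laplacianSchemes
{
    default         Gauss linear corrected;     // full explicit non-orthogonal correction
    // default      Gauss linear limited 0.5;   // limited correction, more robust but residuals may stall
}

snGradSchemes
{
    default         corrected;
    // default      limited 0.5;
}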

February 8, 2011, 10:30   #56
Vesselin Krastev (vkrastev)
Quote:
Originally Posted by makaveli_lcf View Post
(see post #55 above)
This is really, really interesting...

February 8, 2011, 11:00   #57
maddalena
Quote:
Originally Posted by makaveli_lcf View Post
(see post #55 above)
I hope I can get there one day...

February 8, 2011, 12:53   #58
Felix L. (FelixL)
Hello, all,


I can reproduce this behaviour with one of my aerodynamic cases using different laplacian schemes and otherwise the same settings (see the attachments).

It has to be noted that the aerodynamic coefficients differ by at most 0.1%, so this incomplete convergence of the pressure, though not really good-looking, seems to have only a minor influence on the result (at least for my simple 2D case). But I was able to get the residual of p below 1e-3 for the limited laplacian case, so maybe that is already accurate enough; I can't tell. A deeper investigation would be interesting.


Greetings,
Felix.
Attached Images
File Type: png corrected.png (27.8 KB, 288 views)
File Type: png limited.png (31.1 KB, 278 views)

February 9, 2011, 03:01   #59
Dr. Alexander Vakhrushev (makaveli_lcf)
maddalena

I got the same behaviour of the pressure residuals when changing gradSchemes from the linear (or leastSquares) gradient scheme to its limited version (cell/face(MD)Limited).

Another observation of mine was that the non-limited leastSquares gradient scheme gave a smoother and more physical solution, while the solution from the linear scheme was distorted by the skewed cells in the unstructured part of the grid.

So if you are using limited versions of the pressure laplacian and gradient discretization, an order of 10^-2 - 10^-3 might be normal for your pressure residuals. On the one hand, by introducing limiting you obtain a more physically correct solution thanks to its boundedness; on the other hand, it results in convergence issues.

You can read more about accuracy/convergence in the so-called "Gamma paper": http://powerlab.fsb.hr/ped/kturbo/Op...GammaPaper.pdf

PS. By the way, Maddalena, thank you for raising this topic, it helped me to discover some important points for myself.
PS1. I hope you understood that I suggested trying the non-limited scheme versions to achieve the desired convergence criterion.
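For reference, a sketch of the gradient-scheme variants mentioned here (illustrative entries only; the limiter coefficient runs from 0, no limiting, to 1, full limiting):
Code:
gradSchemes
{
    // non-limited: better residual convergence, smoother solution
    default         leastSquares;

    // limited variants: bounded and more robust on skewed cells,
    // but the pressure residual may stall around 1e-2 .. 1e-3
    // default      cellLimited leastSquares 1;
    // default      faceMDLimited Gauss linear 0.5;
}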

February 9, 2011, 04:19   #60
Thank you!
maddalena
From my point of view, this is one of the most interesting threads about schemes and convergence on the OF forum! All the suggestions have been demonstrated thoroughly. Indeed, they have not been "limited" to a mere "do this because I know it works": people have shown that what they say is true, for specific reasons, with specific test cases.
Of course, the thread remains open for similar contributions in the future, hoping that the discussion stays at the same level.

Thank you, FOAMers!

maddalena


Tags
convergence issues, pipe flow, simplefoam

