Although the time step has been reduced, the simulation is not converging

February 1, 2017, 04:37   #1
Senior Member
 
Hector Redal
Join Date: Aug 2010
Location: Madrid, Spain
Posts: 243
Hi,

I am trying to perform a grid-convergence analysis (refining the mesh) for an algorithm that I have implemented.
The problem I am simulating is a flow past a circular cylinder.

I am using several meshes:
(1) Coarse Grid -> 2546 nodes, 4918 tria elements
(2) Medium Grid -> 7294 nodes, 14334 tria elements
(3) Fine Grid -> 19716 nodes, 38887 tria elements
(4) Very Fine Grid -> 84442 nodes, 167973 tria elements.

Everything works fine for grids (1), (2) and (3), and I have managed to get solutions quite close to the reference results, with an error of less than 2% on all three grids.
But when I try to simulate the flow on grid (4), after a few steps of the transient simulation (approximately 50-60 time steps) the velocity field grows rapidly and the solution starts to diverge.
I have reduced the time step a lot, but there has been no improvement. Indeed, if I reduce the time step even further (1e-7), the solution diverges even earlier.

I have tried both large and small time steps, but I have not found a way to make the algorithm work.

I have even tried a very small value for the tolerance of the linear system solve (1e-20), but there has been no improvement.

Any suggestion on why this is not working?

Best regards,
Hector.

February 1, 2017, 04:56   #2
Senior Member
 
Filippo Maria Denaro
Join Date: Jul 2010
Posts: 6,896
Quote: Originally Posted by HectorRedal (post #1)

It seems a numerical instability issue, but I want to ask: does the solution definitely blow up, or does it simply start to oscillate? Plot the total kinetic energy versus time.
A reduced time step should drive you into the stability region; if that does not happen, you could have a bug in your code.
Are you sure that no other settings change when you change the grid resolution?
Are you sure the problem is not in the triangles generated for grid (4)?
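
A minimal sketch of such a kinetic-energy monitor, assuming nodal velocity arrays and a lumped nodal area from the FEM assembly (all names are illustrative, not the poster's actual code):

Code:
#include <cstddef>
#include <cstdio>
#include <vector>

// Total kinetic energy from nodal velocities, with a lumped nodal
// area standing in for the diagonal of the mass matrix.
double totalKineticEnergy(const std::vector<double>& u,
                          const std::vector<double>& v,
                          const std::vector<double>& area)
{
    double ke = 0.0;
    for (std::size_t i = 0; i < u.size(); ++i)
        ke += 0.5 * area[i] * (u[i] * u[i] + v[i] * v[i]);
    return ke;
}

int main()
{
    std::vector<double> u = {1.0, 0.5}, v = {0.0, 0.2}, area = {0.01, 0.02};
    // Log one value per time step and plot t vs. ke: a bounded,
    // oscillating curve suggests vortex shedding, while monotonic
    // growth signals a numerical instability.
    std::printf("ke = %g\n", totalKineticEnergy(u, v, area));
}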

February 1, 2017, 10:02   #3
Senior Member
 
Hector Redal
Join Date: Aug 2010
Location: Madrid, Spain
Posts: 243
Quote: Originally Posted by FMDenaro (post #2)
The solution does not oscillate; it blows up.
I'm afraid there is some bug in the code, because when changing the grid resolution no other settings are changed (the non-dimensional parameters are kept constant).

I can try to generate another grid, to rule out any problem with the grid itself. That is also one of my concerns: that there is an error in the grid or in the boundary conditions assigned to its nodes.
The point is that the more nodes there are, the greater the chance of making an error when setting the boundary conditions.
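
With ~84,000 nodes a manual audit is hopeless, but this kind of check can be automated. A sketch, with hypothetical containers standing in for the real mesh and boundary-condition data structures:

Code:
#include <cstdio>
#include <map>
#include <set>
#include <utility>
#include <vector>

// Verify that every node on a boundary edge carries a boundary
// condition, and that no interior node does. Illustrative only.
enum class BCType { None, Inlet, Outlet, Wall };

int main()
{
    std::vector<std::pair<int, int>> boundaryEdges = {{0, 1}, {1, 2}}; // read from the mesh file
    std::map<int, BCType> bc = {{0, BCType::Inlet}, {1, BCType::Inlet}, {2, BCType::Wall}};

    std::set<int> boundaryNodes;
    for (const auto& e : boundaryEdges) {
        boundaryNodes.insert(e.first);
        boundaryNodes.insert(e.second);
    }

    int missing = 0;
    for (int n : boundaryNodes)
        if (bc.count(n) == 0 || bc[n] == BCType::None) {
            std::printf("boundary node %d has no condition\n", n);
            ++missing;
        }
    for (const auto& kv : bc)
        if (boundaryNodes.count(kv.first) == 0)
            std::printf("interior node %d unexpectedly has a condition\n", kv.first);

    std::printf("%d boundary nodes missing a condition\n", missing);
}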

February 1, 2017, 22:29   #4
Senior Member
 
Arjun
Join Date: Mar 2009
Location: Nuremberg, Germany
Posts: 1,290
Quote: Originally Posted by HectorRedal (post #3)

It is possibly not a bug, but a problem for which no good solution is known.
There is a similar thread here too:

Time convergence study problems, very small time steps

Unfortunately, there is nothing a user can do to fix it.

I have faced a similar problem in VOF calculations, where reducing the time step finally blows the solution up rather than helping it converge.

February 2, 2017, 04:37   #5
Senior Member
 
Filippo Maria Denaro
Join Date: Jul 2010
Posts: 6,896
Quote: Originally Posted by arjun (post #4)

If I understand the original question correctly, the problem here is different: the solution blows up on grid (4), the finest grid, for every value of the time step, not just for small values.

February 3, 2017, 12:20   #6
Senior Member
 
Hector Redal
Join Date: Aug 2010
Location: Madrid, Spain
Posts: 243
Quote: Originally Posted by FMDenaro (post #5)
Yes, you are right.
The solution blows up only on grid (4), the finest one.
On every other grid, the solution works fine.

I am going to generate another grid, intermediate between (3) and (4), and check whether the algorithm works on it.

February 4, 2017, 07:59   #7
Senior Member
 
Hector Redal
Join Date: Aug 2010
Location: Madrid, Spain
Posts: 243
When solving the system A x = b, I am using a tolerance of 1e-10. I don't know whether this tolerance is too loose, and whether that is what is causing the algorithm to blow up.

Do you think it is better to use, for example, tol = 1e-20 when the number of nodes increases a lot?
I am afraid the solution time would then grow enormously.

February 4, 2017, 09:12   #8
Senior Member
 
Filippo Maria Denaro
Join Date: Jul 2010
Posts: 6,896
Quote: Originally Posted by HectorRedal (post #7)
The accuracy of an iterative method is relevant and must be considered carefully during a grid refinement... Are you checking the norm of the residual |A.x - b|?
If you are using the non-dimensional form of the equations, a tolerance of 10^-8 should be sufficient. Otherwise, you should normalize the residual by its value at the zeroth iteration.
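
A sketch of the normalized check in Eigen-style types, since that library comes up later in the thread (the function and r0norm are illustrative, not any solver's actual internals):

Code:
#include <Eigen/Sparse>

// Stopping test on the residual normalized by its value at the
// zeroth iteration, so the tolerance is independent of the scaling
// of b. Usage: compute r0norm once from the initial guess, then
// call this after each iteration.
bool converged(const Eigen::SparseMatrix<double>& A,
               const Eigen::VectorXd& x,
               const Eigen::VectorXd& b,
               double r0norm, double tol)
{
    const double r = (A * x - b).norm();  // absolute residual |A.x - b|
    return r / r0norm <= tol;             // relative to the zeroth-iteration residual
}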

February 4, 2017, 09:42   #9
Senior Member
 
Arjun
Join Date: Mar 2009
Location: Nuremberg, Germany
Posts: 1,290
Quote: Originally Posted by FMDenaro (post #5)

Thank you, you are right, I misread it.

February 4, 2017, 09:46   #10
Senior Member
 
Arjun
Join Date: Mar 2009
Location: Nuremberg, Germany
Posts: 1,290
Quote: Originally Posted by HectorRedal (post #7)

It all depends on the flow solver algorithm. If you are solving with a pressure projection method, then yes, it seems that needs a much tighter tolerance. If you are solving with a pressure-correction algorithm, then no; 1e-10 is actually extra tight in my opinion.
(I have run 3-billion-cell calculations where I did not converge the linear system beyond a 1e-4 tolerance.)

Usually, the finer the mesh, the more stable it is, so I would try to find the skewness in this mesh; it seems there might be one or two elements making it blow up (one bad element is enough, actually).

February 4, 2017, 10:29   #11
Senior Member
 
Hector Redal
Join Date: Aug 2010
Location: Madrid, Spain
Posts: 243
Quote: Originally Posted by FMDenaro (post #8)
I am using the Eigen library to compute the solution of the A.x = b system.
To my knowledge, the tolerance applies to the relative residual norm |A.x - b| / |b|.
I am trying to solve it using a 10^-20 tolerance.
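
For what it's worth, Eigen's iterative solvers can report what they actually achieved rather than what was requested; in double precision the relative residual cannot go much below ~1e-16, so 1e-20 is never reachable. A sketch (assuming BiCGSTAB on the assembled system, not the poster's actual code):

Code:
#include <Eigen/Sparse>
#include <Eigen/IterativeLinearSolvers>
#include <cstdio>

// Solve A.x = b and report the achieved relative residual,
// iteration count, and convergence status.
void solveAndReport(const Eigen::SparseMatrix<double>& A,
                    const Eigen::VectorXd& b)
{
    Eigen::BiCGSTAB<Eigen::SparseMatrix<double>> solver;
    solver.setTolerance(1e-10);     // applies to |A.x - b| / |b|
    solver.setMaxIterations(2000);
    solver.compute(A);
    Eigen::VectorXd x = solver.solve(b);
    std::printf("iterations = %ld, achieved |Ax-b|/|b| = %g, converged = %s\n",
                static_cast<long>(solver.iterations()), solver.error(),
                solver.info() == Eigen::Success ? "yes" : "no");
}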

February 4, 2017, 10:31   #12
Senior Member
 
Hector Redal
Join Date: Aug 2010
Location: Madrid, Spain
Posts: 243
Quote: Originally Posted by arjun (post #10)
3-billion-cell calculations, that is amazing!
The mesh I am using has 160,000 tria elements, which is definitely far coarser than the one you tested.
If only I were able to solve it...

February 4, 2017, 10:42   #13
Senior Member
 
Hector Redal
Join Date: Aug 2010
Location: Madrid, Spain
Posts: 243
How can I detect skewed cells?
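
One common measure is the equiangle skew reported by several meshers: 0 for an equilateral triangle, approaching 1 as the element degenerates into a sliver. A sketch for a single triangle (the coordinates and the idea of flagging elements above ~0.85 are illustrative):

Code:
#include <algorithm>
#include <cmath>
#include <cstdio>

// Equiangle skew for a triangle with vertices (ax,ay), (bx,by), (cx,cy):
// max deviation of the angles from the ideal 60 degrees, normalized.
double equiangleSkew(double ax, double ay, double bx, double by,
                     double cx, double cy)
{
    const double PI = std::acos(-1.0);
    auto angle = [](double ux, double uy, double vx, double vy) {
        return std::acos((ux * vx + uy * vy) /
                         (std::hypot(ux, uy) * std::hypot(vx, vy)));
    };
    const double A = angle(bx - ax, by - ay, cx - ax, cy - ay);
    const double B = angle(ax - bx, ay - by, cx - bx, cy - by);
    const double C = PI - A - B;
    const double amax = std::max({A, B, C});
    const double amin = std::min({A, B, C});
    const double e = PI / 3.0;  // ideal angle for a triangle: 60 degrees
    return std::max((amax - e) / (PI - e), (e - amin) / e);
}

int main()
{
    // Loop this over all elements of grid (4) and flag outliers.
    std::printf("near-sliver:  %.3f\n", equiangleSkew(0, 0, 1, 0, 0.5, 0.01));
    std::printf("equilateral:  %.3f\n", equiangleSkew(0, 0, 1, 0, 0.5, std::sqrt(3.0) / 2));
}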

February 4, 2017, 13:25   #14
Senior Member
 
Filippo Maria Denaro
Join Date: Jul 2010
Posts: 6,896
Quote: Originally Posted by HectorRedal (post #11)

You cannot reach that tolerance: in double precision the relative residual cannot go much below about 1e-16.

February 5, 2017, 05:42   #15
Senior Member
 
Hector Redal
Join Date: Aug 2010
Location: Madrid, Spain
Posts: 243
Quote: Originally Posted by FMDenaro (post #14)
That is the tolerance I set before calling the solve method of the Eigen library (1e-20).
So far, I have not received any notification or error from the solve() method stating that this tolerance was not reached, so I understood it to be, in some sense, the expected error of the returned solution.

By the way, I have managed to make the simulation run.
I have increased delta t (the time step), and now the simulation does not blow up.

Before, I was trying these time steps:
delta t = 1e-5
delta t = 1e-6
delta t = 1e-7

The simulation does not work with any of those time step values. After a number of time iterations, more or less 30-40, the simulation blew up.

Now I am running the simulation using delta t = 1e-4, and it appears to work. I have performed more than 10000 iterations, and the simulation has not blown up yet.
Using delta t = 1e-3, the simulation blows up, since the time step then violates the Courant (CFL) condition.

So I am afraid the problem is related to the issue arjun pointed to earlier (Time convergence study problems, very small time steps).
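
For reference, a sketch of the convective CFL estimate behind that last observation, assuming per-element size and speed arrays (the data layout is hypothetical):

Code:
#include <algorithm>
#include <cstddef>
#include <cstdio>
#include <vector>

// Largest stable time step under a convective CFL limit:
// dt_max = CFL * h / |u|, minimized over all elements.
double maxStableDt(const std::vector<double>& h,      // local element size
                   const std::vector<double>& speed,  // |u| per element
                   double targetCFL)
{
    double dt = 1e30;
    for (std::size_t e = 0; e < h.size(); ++e)
        if (speed[e] > 0.0)
            dt = std::min(dt, targetCFL * h[e] / speed[e]);
    return dt;
}

int main()
{
    std::vector<double> h = {0.01, 0.004}, u = {1.0, 2.5};
    std::printf("dt_max = %g\n", maxStableDt(h, u, 0.8)); // ~1.28e-3 here
}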

February 5, 2017, 07:24   #16
Senior Member
 
Filippo Maria Denaro
Join Date: Jul 2010
Posts: 6,896
Hector, from my experience I would still suspect a bug in the code, and the only way to assess that is by checking the accuracy slope on an analytical test case while keeping the ratio dt/h constant. This is mandatory...
What are your time and space discretizations? In that test you will discover whether the accuracy slope corresponds to the theoretical one for your method. If the slope deviates, you can be sure of a bug.
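
A sketch of the bookkeeping for that slope check; the grid sizes and error norms below are placeholders, not real results:

Code:
#include <cmath>
#include <cstdio>

int main()
{
    // Errors vs. an exact solution on successively refined grids,
    // with dt/h held fixed between runs.
    double h[]   = {0.08, 0.04, 0.02};
    double err[] = {4.1e-3, 1.0e-3, 2.6e-4};  // e.g. L2 error norms
    for (int i = 0; i + 1 < 3; ++i) {
        // Observed order between consecutive grids.
        double p = std::log(err[i] / err[i + 1]) / std::log(h[i] / h[i + 1]);
        std::printf("observed order between h=%g and h=%g: %.2f\n",
                    h[i], h[i + 1], p);
    }
}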

In a more general framework, the local truncation error (LTE) is a function of h and dt. When you fix h (that is, when you work on one grid), part of the LTE remains constant while dt is diminished. Below a certain small dt, the part of the LTE that depends only on dt can be disregarded, and you see only the effect of the spatial discretization, even for smaller time steps. A blow-up in that limit would mean that the remaining part causes a negative numerical diffusion. I am not aware of schemes with such behaviour...
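
In symbols (a sketch, with p and q the formal orders of the time and space discretizations and generic constants):

\tau(h,\Delta t) = C_t\,\Delta t^{p} + C_x\,h^{q} + \text{h.o.t.},
\qquad
\lim_{\Delta t \to 0} \tau(h,\Delta t) = C_x\,h^{q} \quad (h\ \text{fixed}),

so on a fixed grid the error cannot drop below the spatial term, and a blow-up in that limit would point at that term acting as a destabilizing (negative-diffusion) contribution.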

February 5, 2017, 12:20   #17
Senior Member
 
Julio Mendez
Join Date: Apr 2009
Location: Fairburn, GA. USA
Posts: 290
I faced the same issue, but my problem was different: my solution blew up when dx was diminished, and I have not found the reason yet. I would recommend interpolating the fine-grid solution onto the finest grid as a starting point. You did not mention anything about the model you are using: LES or under-resolved DNS? I have seen that at some point you start capturing more of the relevant physical structures, which need to be damped; I once read that this is common for the von Karman vortex street phenomenon. In commercial solvers and meshers you can measure mesh distortion; check the skewness.

February 6, 2017, 05:52   #18
Senior Member
 
Arjun
Join Date: Mar 2009
Location: Nuremberg, Germany
Posts: 1,290
If you are now able to get it to run with a larger time step, then understand that your solution is now smoother, i.e. more diffusive.

At this moment it is hard to know whether this is due to a bug or whether the scheme is supposed to behave this way.

Anyway, I would now suggest running the same calculation with first-order upwind and then reducing the time step. Since the upwind scheme also has a diffusive error by nature, it should be stable again and allow you a smaller time step.

Also, may I ask which solver we are talking about? Is it a commercial code?

February 6, 2017, 05:59   #19
Senior Member
 
Filippo Maria Denaro
Join Date: Jul 2010
Posts: 6,896
All those suggestions are right; however, I STRONGLY suggest using an analytical test case (e.g. the Taylor solution) and performing the refinement study. This is the ONLY way to assess the presence of a bug.
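
For reference, the 2D Taylor(-Green) vortex has a closed-form unsteady solution that can serve as the exact reference for such a study. A sketch (unit density, domain [0, 2*pi]^2, periodic boundaries; nu is the kinematic viscosity):

Code:
#include <cmath>
#include <cstdio>

// Exact 2D Taylor-Green solution: a decaying vortex array.
void taylorGreen(double x, double y, double t, double nu,
                 double& u, double& v, double& p)
{
    const double decay = std::exp(-2.0 * nu * t);
    u = -std::cos(x) * std::sin(y) * decay;
    v =  std::sin(x) * std::cos(y) * decay;
    p = -0.25 * (std::cos(2.0 * x) + std::cos(2.0 * y)) * decay * decay;
}

int main()
{
    double u, v, p;
    taylorGreen(1.0, 2.0, 0.5, 0.01, u, v, p);
    std::printf("u = %g, v = %g, p = %g\n", u, v, p);
    // Evaluate at every node, subtract from the computed field, and
    // take the L2 norm to get the error for the slope check above.
}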

February 6, 2017, 07:33   #20
Senior Member
 
Hector Redal
Join Date: Aug 2010
Location: Madrid, Spain
Posts: 243
Quote: Originally Posted by FMDenaro (post #16)
Hi Filippo,

First of all, I would like to thank you for your help and support on this issue.

I totally agree with your comments and suggestions.
The test you mention, estimating the accuracy slope, is something I did a couple of months ago. The estimated error slope was approximately 2, which matches the second-order accuracy of the Characteristic-Based Split (CBS) algorithm (according to its authors, Zienkiewicz et al.).

You couldn't have explained it better. Since the discretization I have implemented amounts to a Taylor-series expansion in two variables (the time step and the mesh size), if one of the deltas tends to zero, the remaining error with respect to the exact solution is caused only by the approximation in the other variable.

The point of your reasoning I do not fully agree with is that the problem is related to diffusion (artificial viscosity). Maybe the error is not in the computation of the viscous term of the algorithm but in some other term; that is something I have to investigate. The fact is that I have been facing this problem for more than a year and it is still unsolved, and I am not very confident I will be able to find it, either.
