|
August 17, 2005, 10:38 |
Implicit/Explicit explanation by Blazek
|
#1 |
Guest
Posts: n/a
|
Hi,
I'm reading Blazek's book on CFD, where he writes: "explicit schemes represent the best choice for certain unsteady applications when time scales are comparable to the spatial scales over the eigenvalue, i.e. the CFL number dictated by the physics is of order unity ... global physical phenomena evolve much slower than the solution changes locally ... necessary to integrate over long periods of time. In other cases, when physical time scales are large in comparison to the spatial scales divided by the eigenvalue, the CFL number can be of order 100 or 1000 without impairing accuracy; then an implicit scheme is more appropriate ..." I don't really understand what he's trying to say. Is he saying that if the event changes rapidly in a short time, one should use explicit, and otherwise implicit is better? Thanks |
|
August 17, 2005, 12:06 |
Re: Implicit/Explicit explanation by Blazek
|
#2 |
Guest
Posts: n/a
|
With explicit schemes you run into oscillations (and eventually blow-up) if your time step is too big. There is a criterion to follow in order to make sure that your time step is small enough. With implicit schemes you don't have that restriction on the time step size. That is my understanding.
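A minimal sketch of the oscillation/blow-up behavior described above, using the standard scalar model problem du/dt = -lam*u (this model equation and the numbers are illustrative choices, not from the thread):

```python
# Forward (explicit) Euler on the model problem du/dt = -lam*u, u(0) = 1.
# The exact solution decays monotonically; the numerical solution only
# does so when lam*dt < 1, oscillates in sign for 1 < lam*dt < 2, and
# blows up for lam*dt > 2.

def explicit_euler(lam, dt, steps):
    u = 1.0
    history = [u]
    for _ in range(steps):
        u = u + dt * (-lam * u)   # one explicit Euler step
        history.append(u)
    return history

lam = 10.0
stable      = explicit_euler(lam, dt=0.05, steps=40)  # lam*dt = 0.5: smooth decay
oscillating = explicit_euler(lam, dt=0.15, steps=40)  # lam*dt = 1.5: sign flips
unstable    = explicit_euler(lam, dt=0.25, steps=40)  # lam*dt = 2.5: grows
```

For systems, `lam` plays the role of the largest eigenvalue magnitude of the spatially discretized operator, which is where the CFL-type restriction comes from.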
|
|
August 17, 2005, 13:14 |
Re: Implicit/Explicit explanation by Blazek
|
#3 |
Guest
Posts: n/a
|
Let's distinguish between several time scales. First, there is a time scale associated with each term in the Navier-Stokes Equations (NSE). Next, there are time scales for the evolution of the integration variables. Oftentimes one has a situation where, for instance, reactive and diffusive time scales balance, so that mass fractions evolve on a slower time scale than either term alone would suggest. The integration variables evolve at time scales set by the unbalanced terms within the particular equation. Next, there are time scales introduced by the physical model. Physically, the fastest mean time scale for hydrocarbons in combustion is the time scale for a hydrogen atom to break off. I am told this average time scale is of order 100 picoseconds. Now, go look at the inverse eigenvalues of a heptane reaction mechanism: time scales of 10^(-17) seconds show up. Since we are solving the NSE, and these equations are only valid on time scales of several mean free times, both the 10^(-13) and 10^(-17) second time scales are superfluous to our equations. However, the numerics must still cope with them, because they are present.
What does this say about integration? You need to take time steps small enough to resolve the phenomena that you are interested in. You do not, however, need to step so slowly that all time scales are resolved. In ODE speak, a mode is reasonably resolved if |z| = |(eigenvalue of dF/dU)*(delta t)| is about 1. This implies that the step size equals the particular time scale of that mode. What you care about is the ratio of the fastest time scale present to the time scale of the fastest mode that is physically relevant to you. Explicit methods work well when the fastest scales in the problem need to be resolved. If there are modes that are much faster than the scales you need, then you bite the bullet and use an implicit integrator. The line that separates the two is a time scale ratio of roughly 100-1000. IMEX methods can be useful for dealing with excessively fast diffusive and/or reactive modes. Step size (error) controllers are very useful here. |
|
August 18, 2005, 07:31 |
Re: Implicit/Explicit explanation by Blazek
|
#4 |
Guest
Posts: n/a
|
Then, for explicit schemes, what is the criterion to follow in order to make sure that the time step is small enough?
Regards. |
|
August 18, 2005, 09:27 |
Re: Implicit/Explicit explanation by Blazek
|
#5 |
Guest
Posts: n/a
|
Well, from my experience, and with only fluid flow (no heat): after you discretize the NS equations you will get a nondimensional group, the diffusion number, equal to (deltaT*nu)/deltaY^2. You will see this number if you set up a simple 1-D, unsteady model and discretize the equations. For stability it has to be less than 0.5. Again, this is for simple cases; it might be different for a much more rigorous model.
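A small sketch of this criterion for the 1-D diffusion equation u_t = nu * u_yy, discretized with the explicit FTCS scheme (grid size and coefficients are illustrative choices):

```python
# FTCS (forward-time, centered-space) scheme for 1-D diffusion,
# u_t = nu * u_yy. The diffusion number d = nu*dt/dy**2 must satisfy
# d <= 0.5 for stability, as the post above states.
import numpy as np

def ftcs_diffusion(nu, dy, dt, steps, n=51):
    d = nu * dt / dy**2              # the diffusion number from the post
    u = np.zeros(n)
    u[n // 2] = 1.0                  # initial spike; ends held at zero
    for _ in range(steps):
        u[1:-1] = u[1:-1] + d * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    return d, u

d_ok,  u_ok  = ftcs_diffusion(nu=1.0, dy=0.1, dt=0.004, steps=200)  # d = 0.4
d_bad, u_bad = ftcs_diffusion(nu=1.0, dy=0.1, dt=0.006, steps=200)  # d = 0.6
```

With d = 0.4 the solution stays bounded and simply spreads out; with d = 0.6 the highest-frequency grid mode is amplified each step and the solution blows up.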
|
|
August 25, 2005, 02:10 |
Re: Implicit/Explicit explanation by Blazek
|
#6 |
Guest
Posts: n/a
|
Hello Sir,
I am a bit confused here. You are talking about two things: time scales and modes. Time scales are calculated from the eigenvalues of the A = (dF/dU) matrix; each eigenvalue corresponds to a time scale. These are the time scales present in the system. But you are also talking about the time scale corresponding to the fastest mode. What do you mean by "mode" here? Your guidance would be really helpful. Thanks & Regards, Tarun |
|
August 25, 2005, 22:05 |
Re: Implicit/Explicit explanation by Blazek
|
#7 |
Guest
Posts: n/a
|
Tarun,
You are concerned with my simultaneous use of "time scales" and "modes." Once you linearize your system, you rotate it to find the eigenvalues. The eigenvalues of dF/dU have units of inverse time, so the reciprocals of the eigenvalues have units of time. Now, what would you like to call the entity that is characterized by this time scale? I flippantly referred to it as a mode. Maybe you prefer something else.

If you are integrating 10 variables at 10^6 grid points then you are integrating 10^7 equations. This implies that there will be 10^7 eigenvalues of dF/dU. What you are concerned about are the ones whose time scales are too fast to have meaning for what you are interested in. If you know the fastest time scale characterizing the things you are interested in, then you should time step at that time scale or a tad less. Now, there may be some eigenvalues that are larger than the biggest one you care to resolve. The ratio (max. eig)/(max. resolved eig) is loosely the maximum value of z in your calculation. If you find that z_max >> 1, then your problem is stiff.

The parameter z is defined as z = eigenvalue*dt. Which eigenvalue? Which dt? The step size should be chosen small enough to resolve what you need to resolve, but no smaller. This means you intend to ignore "excessively" large eigenvalues, unless your integrator breaks. There are many choices for the eigenvalue: the biggest, any one residing in the right half of the complex plane, etc. Linear stability plots of ODE integrators plot the stability function as a function of z.

By the way, you are free to recast your semi-discretized Navier-Stokes equations in an additive sense like this: dU/dt = F_{conv} + F_{diff} + F_{react} + ... Strictly speaking, one cannot simultaneously diagonalize each of these terms, but since we ultimately make many simplifications here, we'll overlook this.
Hence, dU/dt = [dF_{conv}/dU]*U + [dF_{diff}/dU]*U + [dF_{react}/dU]*U + ... Now you can see the eigenvalues and time scales of each term individually. If you'd like, you may now design an N-additive Runge-Kutta method to integrate your equations with N right-hand-side terms. It's difficult to imagine the utility of an ODE method with N > 3.

Last thought: the only reasonable way to choose a time step that resolves what you need and no more is to use an error controller along with a method that is largely undeterred by large values of z. Methods like this are usually L-stable. The L comes from "left half plane," because L-stable methods have a stability function that vanishes as z -> infinity in the LHP. |
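A sketch of why an L-stable method tolerates |z| >> 1, using backward (implicit) Euler on the same model problem du/dt = -lam*u (numbers are illustrative):

```python
# Backward (implicit) Euler on du/dt = -lam*u. Its stability function
# R(z) = 1/(1 - z) tends to 0 as z -> -infinity, so the method damps
# stiff modes even when the step is far too large to resolve them.

def implicit_euler(lam, dt, steps):
    u = 1.0
    for _ in range(steps):
        # Implicit step: solve u_new = u + dt*(-lam*u_new)
        # => u_new = u / (1 + lam*dt); for a scalar this is trivial,
        # for a system it is a linear (or Newton) solve per step.
        u = u / (1.0 + lam * dt)
    return u

lam = 1000.0   # stiff mode, time scale 1 ms
dt = 0.1       # z = -100: far outside any explicit stability region
u_final = implicit_euler(lam, dt, steps=10)  # decays toward 0, no blow-up
```

Compare with the explicit Euler sketch earlier in the thread: at lam*dt = 100 the explicit method would amplify the solution by a factor of 99 per step, while the implicit method simply flattens the fast mode, which is the behavior you want when that mode is physically irrelevant.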
|
August 26, 2005, 14:12 |
Re: Implicit/Explicit explanation by Blazek
|
#8 |
Guest
Posts: n/a
|
Among all the (correct and confusing) details, I think these are the key statements:
"You need to take time steps small enough to resolve the phenomena that you are interested in." "Explicit methods work well when the fastest scales in the problem need to be resolved."

...but you also have to explain why, in simple terms! There are three issues at hand: accuracy, stability, and efficiency.

With explicit methods you need to resolve the smallest time scales, because otherwise they will not be stable (stability). That's the bottom line. Now you can say: if you're really interested in the smallest time scale, then it's fine to use an explicit method, because you would resolve that time scale anyway (for accuracy). But if you're not interested in it, then it is more efficient to use a method that allows you to disregard the smallest time scale and just shoot for whatever larger time scale you are interested in (for efficiency). That would be an implicit method, which requires more computational effort per iteration (= disadvantage in efficiency) but allows you to choose a larger time step, just resolving the interesting scale and any larger scales (= advantage in efficiency).

Depending on what time scales you are interested in, you can see how the trade-off between the disadvantage and the advantage of implicit methods plays out against the explicit approach. You're certainly not going to use implicit methods unless you're only interested in time scales far away from the smallest. That's what Blazek is saying. |
|
|
|