
Help to make LES simulation faster

December 2, 2013, 05:54  #1
Pj. (Luca), Member
Join Date: Mar 2013, Posts: 68
Hi everybody,

I'm trying to run a wind tunnel simulation with an LES turbulence model in pisoFoam. The domain is made of 17M cells: a channel with roughness blocks in the first 3/4 of its length and a building model in the last quarter.

Currently I'm solving with a time step of 0.0001 s (10,000 Hz) to keep maxCo below one; right now my maxCo is ~0.88.

The problem is that the mean Co is something like 0.007 and the simulation is running very slowly. With 252 CPUs it takes more or less 2.9 s per iteration, which means it takes 1 day to solve 1.25 seconds.

Since I need to simulate at least 2-3 minutes and I can't use more CPUs, what can I do?
I was thinking of increasing the time step, since the mean Co is way smaller than 1, but this would push maxCo above 1 at some points. If those points are in the roughness region (where I don't really care if the solution is wrong in some small portions), do you think the solution will still converge?

How can I know "where" the Courant number is bigger than 1?

Do you have ideas other than increasing the timestep? Maybe using pimpleFoam?

Thanks a lot. Regards,
Luca
December 2, 2013, 06:34  #2
Bernhard, Senior Member
Join Date: Sep 2009, Location: Delft, Posts: 790
It is a bit difficult to give any advice; could you maybe share your fvSchemes and a snippet of the log file?

Quote:
How can I know "where" the Courant number is bigger than 1?
Use the Co utility.
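For example, from the case directory (this is the 2.x-era post-processing utility; in recent OpenFOAM versions the equivalent is "postProcess -func CourantNo"):

Code:
# writes a volScalarField "Co" into each selected time directory;
# open it in paraFoam and threshold at Co > 1 to see where the limit bites
Co -latestTime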
December 2, 2013, 08:22  #3
haakon, Senior Member
Join Date: Dec 2011, Posts: 111
A small timestep and high resolution are in the nature of LES; it's unavoidable. You can of course adjust the mesh in problematic areas and perhaps end up with a slightly larger timestep, but in the end you cannot overcome the fact that LES is computationally intensive by nature. If you can only afford a RANS simulation, stick to that. Perhaps you can use a RANS model to generate a physically sane initial condition and save some time?

December 2, 2013, 08:55  #4
Lieven, Senior Member
Join Date: Dec 2011, Location: Leuven, Belgium, Posts: 299
I'm with haakon on this one. LES is by definition expensive, so don't expect to be able to do it cheaply; if you find a way, it will probably make you a very rich man ;-). I would even recommend reducing your time step further so that maxCo is also significantly smaller than 1.0 (for time-integration accuracy). I would not switch to pimpleFoam, since under-relaxation is quite unphysical in an LES context.

As I see it, there are two things you can do:
1. Adjust your mesh. If you can't afford a 17M-cell simulation, just don't do it. A converged solution on a 4M-cell mesh is probably better than a halfway-converged solution on 17M cells.
2. Use a cheaper LES turbulence model (if you can), e.g. dynamic models are more expensive to compute than the classical Smagorinsky model.

Using a RANS model to compute an initial field could help speed up convergence, but this is certainly not guaranteed.
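If you go that route, the usual approach is to run the RANS on the same or a coarser mesh and map its result onto the LES case as an initial field, e.g. (the source case path here is hypothetical; drop -consistent if the meshes or boundaries differ):

Code:
# map the latest RANS fields onto the current (LES) case as initial conditions
mapFields ../ransInitCase -consistent -sourceTime latestTime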

Cheers,

L
December 2, 2013, 09:34  #5
Pj. (Luca), Member
Hi Bernhard. Thank you for reading.

Here is my fvSchemes:
Code:
ddtSchemes
{
    default         backward;
}

gradSchemes
{
    default         Gauss linear;
    grad(p)         Gauss linear;
    grad(U)         Gauss linear;
}

divSchemes
{
    default         none;
    div(phi,U)      Gauss linear;
    div(phi,k)      Gauss limitedLinear 1;
    div(phi,B)      Gauss limitedLinear 1;
    div(phi,nuTilda) Gauss limitedLinear 1;
    div(B)          Gauss linear;
    div((nuEff*dev(T(grad(U))))) Gauss linear;
}

laplacianSchemes
{
    default         none;
    laplacian(nuEff,U) Gauss linear corrected;
    laplacian((1|A(U)),p) Gauss linear corrected;
    laplacian(DkEff,k) Gauss linear corrected;
    laplacian(DBEff,B) Gauss linear corrected;
    laplacian(DnuTildaEff,nuTilda) Gauss linear corrected;
}

interpolationSchemes
{
    default         linear;
    interpolate(U)  linear;
}

snGradSchemes
{
    default         corrected;
}

fluxRequired
{
    default         no;
    p               ;
}
and a piece of the log

Code:
Time = 1.2372

Courant Number mean: 0.00771093 max: 0.898045
DILUPBiCG:  Solving for Ux, Initial residual = 4.38996e-05, Final residual = 8.81599e-09, No Iterations 1
DILUPBiCG:  Solving for Uy, Initial residual = 0.0012287, Final residual = 2.04545e-07, No Iterations 1
DILUPBiCG:  Solving for Uz, Initial residual = 0.000869898, Final residual = 1.5147e-07, No Iterations 1
DICPCG:  Solving for p, Initial residual = 0.00549978, Final residual = 0.000262121, No Iterations 3
time step continuity errors : sum local = 2.52331e-10, global = -5.10701e-14, cumulative = 2.4203e-09
DICPCG:  Solving for p, Initial residual = 0.000364781, Final residual = 9.84024e-07, No Iterations 97
time step continuity errors : sum local = 9.47303e-13, global = -4.74805e-14, cumulative = 2.42025e-09
ExecutionTime = 84873 s  ClockTime = 85168 s

forceCoeffs output:
    Cm    = 2.37554
    Cd    = 39.8978
    Cl    = 64.6208
    Cl(f) = 34.6859
    Cl(r) = 29.9349

Time = 1.23725

Courant Number mean: 0.00771094 max: 0.89373
DILUPBiCG:  Solving for Ux, Initial residual = 4.39007e-05, Final residual = 8.81516e-09, No Iterations 1
DILUPBiCG:  Solving for Uy, Initial residual = 0.00122869, Final residual = 2.04562e-07, No Iterations 1
DILUPBiCG:  Solving for Uz, Initial residual = 0.0008699, Final residual = 1.51441e-07, No Iterations 1
DICPCG:  Solving for p, Initial residual = 0.0055296, Final residual = 0.000264223, No Iterations 3
time step continuity errors : sum local = 2.54366e-10, global = -5.4272e-14, cumulative = 2.4202e-09
DICPCG:  Solving for p, Initial residual = 0.000368202, Final residual = 9.85297e-07, No Iterations 115
time step continuity errors : sum local = 9.48546e-13, global = -5.09298e-14, cumulative = 2.42015e-09
ExecutionTime = 84875.5 s  ClockTime = 85170 s

forceCoeffs output:
    Cm    = 2.38226
    Cd    = 39.8857
    Cl    = 64.572
    Cl(f) = 34.6683
    Cl(r) = 29.9038

Time = 1.2373

Courant Number mean: 0.00771095 max: 0.889406
DILUPBiCG:  Solving for Ux, Initial residual = 4.39019e-05, Final residual = 8.81608e-09, No Iterations 1
DILUPBiCG:  Solving for Uy, Initial residual = 0.00122868, Final residual = 2.04589e-07, No Iterations 1
DILUPBiCG:  Solving for Uz, Initial residual = 0.000869913, Final residual = 1.51427e-07, No Iterations 1
DICPCG:  Solving for p, Initial residual = 0.00552333, Final residual = 0.000261467, No Iterations 3
time step continuity errors : sum local = 2.51721e-10, global = -5.30145e-14, cumulative = 2.4201e-09
DICPCG:  Solving for p, Initial residual = 0.00036565, Final residual = 9.99106e-07, No Iterations 94
time step continuity errors : sum local = 9.61889e-13, global = -5.46456e-14, cumulative = 2.42004e-09
ExecutionTime = 84877.9 s  ClockTime = 85172 s

forceCoeffs output:
    Cm    = 2.36583
    Cd    = 39.8604
    Cl    = 64.5751
    Cl(f) = 34.6534
    Cl(r) = 29.9217
At the moment I'm looking into pimpleFoam to get a more robust solution with bigger timesteps, but I can't find a proper description of how to set it up, so I'm trying more or less without a clue.
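For reference, this is roughly what I'm trying; the values below are my own guesses, not settings I have validated:

Code:
// system/controlDict
adjustTimeStep  yes;
maxCo           2;      // let the PIMPLE outer loops handle Co > 1

// system/fvSolution
PIMPLE
{
    nOuterCorrectors         2;  // outer SIMPLE-like loops per time step
    nCorrectors              2;  // PISO pressure corrections per outer loop
    nNonOrthogonalCorrectors 0;
}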

Thank you very much
December 2, 2013, 10:01  #6
Bernhard, Senior Member
Where I said fvSchemes I meant fvSolution, excuse me. Did you ever try solving the pressure equation with a GAMG solver? On 100-200 CPUs it is generally said to be more efficient at solving the pressure equation than PCG. You are using more processors than that, but it might still be worth experimenting a bit with these settings.
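Something along these lines as a starting point (the smoother and agglomeration settings below are common defaults, not values tuned for your case):

Code:
p
{
    solver                GAMG;
    smoother              GaussSeidel;
    tolerance             1e-06;
    relTol                0.05;
    nCellsInCoarsestLevel 100;   // size of the coarsest level; worth tuning
    cacheAgglomeration    true;
    agglomerator          faceAreaPair;
    mergeLevels           1;
}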
December 2, 2013, 10:15  #7
Pj. (Luca), Member
I know LES is computationally expensive; I don't expect to make it quick and easy. I was just asking if there is a way to make it a little faster. With some improvements I might solve the case two or four times faster, and that would be a big gain, even if the computation remains very expensive.

I will try to initialise the case with a RANS result or with a coarser mesh.
I will also try switching to GAMG and see if I find any improvement.

About the LES model: I already use Smagorinsky, so I can't use a cheaper one.

Lastly, there are many papers suggesting that RANS is not very good in my field of study (wind flow around a low-rise building), so I can't switch to that.

PS: this is my fvSolution:
Code:
solvers
{
    p
    {
        solver          PCG;
        preconditioner  DIC;
        tolerance       1e-06;
        relTol          0.05;
    }

    pFinal
    {
        solver          PCG;
        preconditioner  DIC;
        tolerance       1e-06;
        relTol          0;
    }

    U
    {
        solver          PBiCG;
        preconditioner  DILU;
        tolerance       1e-05;
        relTol          0;
    }

    k
    {
        solver          PBiCG;
        preconditioner  DILU;
        tolerance       1e-05;
        relTol          0;
    }

    nuTilda
    {
        solver          PBiCG;
        preconditioner  DILU;
        tolerance       1e-05;
        relTol          0;
    }
}

PISO
{
    nCorrectors     2;
    nNonOrthogonalCorrectors 0;
    pRefCell        0;
    pRefValue       0;
}
December 2, 2013, 11:40  #8
Lieven, Senior Member
Hi Pj.,

Two remarks about the fvSolution:
1. You should set the relTol of p to 0 as well.
2. Are you sure nNonOrthogonalCorrectors = 0 is enough for your mesh? If you have a fully orthogonal mesh you can set it to 0; if not, 0 is most likely too few...

Cheers,

L
December 3, 2013, 03:07  #9
Bernhard, Senior Member
Quote:
Originally Posted by Lieven View Post
1. You should set the relTol of p to 0 as well.
This does not make sense to me. Why would you want to solve an intermediate pressure to full convergence?
December 3, 2013, 04:41  #10
Lieven, Senior Member
Oh sorry, you're fully correct, Bernhard!
I wasn't paying attention to the pFinal entry :-D (the solver I'm using doesn't have it).

Cheers,

Lieven
December 3, 2013, 06:03  #11
Pj. (Luca), Member
Thanks everybody for your kind help.
I'm now running some tests on a smaller 2M-cell case to benchmark the solutions you proposed.

I know that LES is expensive; I'm not asking to run it on my laptop in a few hours. But in my lab we have already run 15-20M-cell cases that solved about 10-15 simulated seconds per day, so in a week we had our 1-2 minute simulation. I could accept running the simulation in 2 weeks, but not in 8.

The problem with this case is that I have some very small cells near the model, where the Courant number is 100 times bigger than everywhere else. To keep Co < 1 there, I have to use a timestep 100 times smaller, and this slows the whole simulation down a lot.
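Just to spell out the constraint I'm fighting (the numbers below are purely illustrative, not my actual mesh values):

Code:
% CFL condition and the resulting timestep limit
\mathrm{Co} = \frac{U \, \Delta t}{\Delta x}
\quad \Longrightarrow \quad
\Delta t_{\max} = \frac{\mathrm{Co}_{\mathrm{target}} \, \Delta x_{\min}}{U}
% e.g. U = 10 m/s, dx_min = 1 mm, Co_target = 1 gives dt_max = 1e-4 s;
% cells 100x smaller near the model force a timestep 100x smaller everywhere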

I was looking for the best way to solve this. Making the mesh coarser is of course an option, but it's not good for our purpose, so I was asking whether a better solution exists.

Thank you very much. I will post the results of the benchmark as soon as I have them.
December 4, 2013, 05:27  #12
Philipp (RodriguezFatz), Senior Member
Join Date: Jun 2011, Location: Germany, Posts: 1,297
Hi, you mentioned that you use LES because RANS doesn't work. Did you try the SAS model?

December 9, 2019, 07:33  #13
lonewanderer (Praharsha Reddy), New Member
Join Date: Dec 2019, Posts: 14
How can we convert an LES simulation to a RANS simulation?
December 10, 2019, 03:38  #14
cryabroad (Ruiyan Chen), Senior Member
Join Date: Jul 2016, Location: Hangzhou, China, Posts: 162
If other people have used the same number of cells as you and their cases run much faster than yours, I don't think that means much; it really depends on the details of the mesh. You mentioned very small cells near the model (I suppose you mean the walls of a building or something similar). Do you really need such a refined mesh there? What is the yPlus value? Obviously in LES you want a very refined mesh, especially near the wall, but if the wall regions are not that important, maybe a wall model is enough. Typically walls are crucial for internal flows but may not be that important for external flows. My experience is that the Spalding wall model (nutUSpaldingWallFunction) works really well.
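In 0/nut that is just an entry like this (the patch name here is made up for illustration):

Code:
// wall patch using Spalding's law-of-the-wall for nut
buildingWalls
{
    type            nutUSpaldingWallFunction;
    value           uniform 0;
}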

I always initiate my LES from an unsteady RANS simulation. For the unsteady RANS I use very large time steps, and because of this I sometimes have to switch the time scheme to first order (otherwise it easily diverges). I'm not sure that is the correct way of doing things, though. Note that sometimes this makes your case run faster and sometimes it does not; again, it depends on the actual problem.
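Switching to first order just means this in fvSchemes (for the initialisation run only; for the actual LES I would go back to backward):

Code:
ddtSchemes
{
    default         Euler;   // first-order implicit: more dissipative, more robust
}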
December 10, 2019, 05:13  #15
Santiago (Santiago Lopez Castano), Senior Member
Join Date: Nov 2012, Posts: 354
Quote:
Originally Posted by Pj. View Post
I know LES is computationally expensive. [...] I will also try to switch to GAMG and see if I find any improvements. [...]
For the inversion of symmetric matrices (e.g. the pressure Poisson equation) you can gain A LOT by using multigrid (GAMG) instead of PCG. It might be less robust, but since you are doing Smagorinsky LES (correctly, I assume), that should not be a problem, because you presumably have a structured grid anyway.