November 11, 2014, 05:39 |
|
#61 |
Senior Member
Philipp
Join Date: Jun 2011
Location: Germany
Posts: 1,297
Rep Power: 27 |
Hey jetfire, maybe we can speed this up a little. If you want that:
1) post some log output (one time step is enough; see the sketch below)
2) how do you decompose?
3) post your current solver settings (fvSolution)
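For point 1, a minimal sketch of grabbing the log from the shell (the solver is assumed to be rhoPimpleDyMFoam, as in this thread; the file name is illustrative):
Code:
# redirect the full solver output into a log file
rhoPimpleDyMFoam > log.rhoPimpleDyMFoam 2>&1

# then post only the last time step, e.g. the last ~60 lines
tail -n 60 log.rhoPimpleDyMFoam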
__________________
The skeleton ran out of shampoo in the shower. |
|
November 11, 2014, 05:58 |
|
#62 | |
Super Moderator
Tobias Holzmann
Join Date: Oct 2010
Location: Bad Wörishofen
Posts: 2,711
Blog Entries: 6
Rep Power: 52 |
Quote:
The density changes due to the pressure, too! Can you check which pressure you get in between your blades? I don't know which model you are using, but the ideal gas law is ρ = p/(R T), which means ρ is proportional to p.
As I expect, your rotor produces a strong partial vacuum -> very small densities! That is the reason (in my opinion) why your density is in that range. Therefore you should decrease your rhoMin to 0.1 or whatever.
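For reference, a minimal sketch of where the density limits live, using the OpenFOAM 2.3 dimensioned syntax in the PIMPLE dictionary of fvSolution (the rhoMax value is only an example):
Code:
PIMPLE
{
    // clip the density so the strong partial vacuum cannot drive
    // the thermodynamics out of bounds
    rhoMin      rhoMin [ 1 -3 0 0 0 ] 0.1;    // kg/m^3, decreased as suggested
    rhoMax      rhoMax [ 1 -3 0 0 0 ] 2.5;    // example upper bound
}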
__________________
Keep foaming, Tobias Holzmann |
||
November 11, 2014, 06:02 |
|
#63 |
Senior Member
Paritosh Vasava
Join Date: Oct 2012
Location: Lappeenranta, Finland
Posts: 732
Rep Power: 23 |
@Tobi: Thanks for the clarification.
|
|
November 11, 2014, 06:24 |
|
#64 |
Member
Abhijit
Join Date: Jul 2014
Posts: 75
Rep Power: 12 |
@Tobi,
Thanks for the tips to speed up my simulation; I will try implementing them and let you know the results. Can you message me your email ID? I will send my output log file to you. I cannot post it here as it exceeds the max file size. |
|
November 11, 2014, 06:29 |
|
#65 |
Super Moderator
Tobias Holzmann
Join Date: Oct 2010
Location: Bad Wörishofen
Posts: 2,711
Blog Entries: 6
Rep Power: 52 |
It's sufficient if you post only the last time step with all PIMPLE loops, as Philipp said. As you can see, we both had the same thoughts.
__________________
Keep foaming, Tobias Holzmann |
|
November 11, 2014, 06:36 |
|
#66 | |
Member
Abhijit
Join Date: Jul 2014
Posts: 75
Rep Power: 12 |
Quote:
1. Please find the output for two time steps in the attachments.
2. I am using hierarchical decomposition with 8 cores:
Code:
numberOfSubdomains  8;

method              hierarchical;

hierarchicalCoeffs
{
    n       (2 2 2);
    delta   0.001;
    order   xyz;
}
3. My current fvSolution:
Code:
/*--------------------------------*- C++ -*----------------------------------*\
| =========                 |                                                 |
| \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox           |
|  \\    /   O peration     | Version:  2.3.0                                 |
|   \\  /    A nd           | Web:      www.OpenFOAM.org                      |
|    \\/     M anipulation  |                                                 |
\*---------------------------------------------------------------------------*/
FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    location    "system";
    object      fvSolution;
}
// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //

solvers
{
    p
    {
        solver                  GAMG;
        smoother                GaussSeidel;
        cacheAgglomeration      on;
        agglomerator            faceAreaPair;
        nCellsInCoarsestLevel   100;
        mergeLevels             1;
        tolerance               1e-06;
        relTol                  0.01;
    }

    pFinal
    {
        $p;
        relTol          0;
    }

    pcorr
    {
        $p;
        tolerance       1e-2;
        relTol          0;
    }

    "(rho|U|h)"
    {
        solver          PBiCG;
        preconditioner  DILU;
        tolerance       1e-06;
        relTol          0.1;
    }

    "(rho|U|h)Final"
    {
        $U;
        relTol          0;
    }

    "(k|epsilon|omega)"
    {
        solver          PBiCG;
        preconditioner  DILU;
        tolerance       1e-10;
        relTol          0.1;
    }

    "(k|epsilon|omega)Final"
    {
        $k;
        relTol          0;
    }
}

PIMPLE
{
    momentumPredictor           yes;
    transonic                   no;
    nOuterCorrectors            100;
    nCorrectors                 2;
    nNonOrthogonalCorrectors    1;
    turbOnFinalIterOnly         false;

    rhoMin      rhoMin [ 1 -3 0 0 0 ] 0.1;
    rhoMax      rhoMax [ 1 -3 0 0 0 ] 2.5;

    residualControl
    {
        "(U|k|omega)"
        {
            tolerance   1e-05;
            relTol      0;
        }
        p
        {
            tolerance   1e-04;
            relTol      0;
        }
    }
}

relaxationFactors
{
    fields
    {
        p       0.3;
        pFinal  1;
    }
    equations
    {
        "(U|h|k|epsilon|omega)"      0.4;
        "(U|h|k|epsilon|omega)Final" 1;
    }
}

// ************************************************************************* // |
||
November 11, 2014, 06:40 |
|
#67 | |
Super Moderator
Tobias Holzmann
Join Date: Oct 2010
Location: Bad Wörishofen
Posts: 2,711
Blog Entries: 6
Rep Power: 52 |
Quote:
Please show us the output of your decomposition: Code:
decomposePar > log
fvSolution:
Code:
tolerance for U|h|rho -> 1e-9; relTol 0.05;
nCorrectors = 1
residual control for p -> 1e-5;
Code:
time step continuity errors : sum local = 4.538409924e-06, global = -4.522927178e-06, cumulative = -0.002405985855

Please check your logfile with pyFoam (pyFoamPlotWatcher). Additionally, you can check your mesh Courant number if you insert "checkMeshCourantNo true" into the PIMPLE dictionary of your fvSolution. I think it is possible to increase your maxCo to 3.
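A hedged sketch of the two checks mentioned above (the log file name is illustrative):
Code:
# plot the residuals of a (running) log file with PyFoam
pyFoamPlotWatcher.py log.rhoPimpleDyMFoam
and in the PIMPLE dictionary of fvSolution:
Code:
PIMPLE
{
    checkMeshCourantNo  true;    // additionally report the mesh Courant number
}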
__________________
Keep foaming, Tobias Holzmann |
||
November 11, 2014, 06:42 |
|
#68 |
Member
Abhijit
Join Date: Jul 2014
Posts: 75
Rep Power: 12 |
I have the output of decomposePar in my terminal; here it is:
Code:
/*---------------------------------------------------------------------------*\
| =========                 |                                                 |
| \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox           |
|  \\    /   O peration     | Version:  2.3.0                                 |
|   \\  /    A nd           | Web:      www.OpenFOAM.org                      |
|    \\/     M anipulation  |                                                 |
\*---------------------------------------------------------------------------*/
Build  : 2.3.0-f5222ca19ce6
Exec   : decomposePar
Date   : Nov 10 2014
Time   : 16:45:40
Host   : "EAT-Standalone"
PID    : 8606
Case   : /home/eatin/OpenFOAM/eatin-2.3.0/run/tutorials/TurboCharger/Trial_run2
nProcs : 1
sigFpe : Enabling floating point exception trapping (FOAM_SIGFPE).
fileModificationChecking : Monitoring run-time modified files using timeStampMaster
allowSystemOperations : Disallowing user-supplied system call operations

// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //
Create time

Decomposing mesh region0

Create mesh

Calculating distribution of cells
Selecting decompositionMethod hierarchical

Finished decomposition in 4.13 s

Calculating original mesh data

Distributing cells to processors

Distributing faces to processors

Distributing points to processors

Constructing processor meshes

Processor 0
    Number of cells = 828661
    Number of faces shared with processor 1 = 24114
    Number of faces shared with processor 2 = 10498
    Number of faces shared with processor 4 = 10485
    Number of processor patches = 3
    Number of processor faces = 45097
    Number of boundary faces = 67775

Processor 1
    Number of cells = 828661
    Number of faces shared with processor 0 = 24114
    Number of faces shared with processor 2 = 1298
    Number of faces shared with processor 3 = 6458
    Number of faces shared with processor 4 = 1879
    Number of faces shared with processor 5 = 6472
    Number of processor patches = 5
    Number of processor faces = 40221
    Number of boundary faces = 80623

Processor 2
    Number of cells = 828661
    Number of faces shared with processor 0 = 10498
    Number of faces shared with processor 1 = 1298
    Number of faces shared with processor 3 = 17656
    Number of faces shared with processor 4 = 4197
    Number of faces shared with processor 6 = 12647
    Number of faces shared with processor 7 = 715
    Number of processor patches = 6
    Number of processor faces = 47011
    Number of boundary faces = 62623

Processor 3
    Number of cells = 828661
    Number of faces shared with processor 1 = 6458
    Number of faces shared with processor 2 = 17656
    Number of faces shared with processor 7 = 6219
    Number of processor patches = 3
    Number of processor faces = 30333
    Number of boundary faces = 72641

Processor 4
    Number of cells = 828661
    Number of faces shared with processor 0 = 10485
    Number of faces shared with processor 1 = 1879
    Number of faces shared with processor 2 = 4197
    Number of faces shared with processor 5 = 15088
    Number of faces shared with processor 6 = 13676
    Number of processor patches = 5
    Number of processor faces = 45325
    Number of boundary faces = 81249

Processor 5
    Number of cells = 828661
    Number of faces shared with processor 1 = 6472
    Number of faces shared with processor 4 = 15088
    Number of faces shared with processor 6 = 2022
    Number of faces shared with processor 7 = 6375
    Number of processor patches = 4
    Number of processor faces = 29957
    Number of boundary faces = 78931

Processor 6
    Number of cells = 828661
    Number of faces shared with processor 2 = 12647
    Number of faces shared with processor 4 = 13676
    Number of faces shared with processor 5 = 2022
    Number of faces shared with processor 7 = 16593
    Number of processor patches = 4
    Number of processor faces = 44938
    Number of boundary faces = 70372

Processor 7
    Number of cells = 828661
    Number of faces shared with processor 2 = 715
    Number of faces shared with processor 3 = 6219
    Number of faces shared with processor 5 = 6375
    Number of faces shared with processor 6 = 16593
    Number of processor patches = 4
    Number of processor faces = 29902
    Number of boundary faces = 75444

Number of processor faces = 156392
Max number of cells = 828661 (0% above average 828661)
Max number of processor patches = 6 (41.17647059% above average 4.25)
Max number of faces between processors = 47011 (20.2388869% above average 39098)

Time = 0

Processor 0: field transfer
Processor 1: field transfer
Processor 2: field transfer
Processor 3: field transfer
Processor 4: field transfer
Processor 5: field transfer
Processor 6: field transfer
Processor 7: field transfer

End. |
|
November 11, 2014, 06:53 |
|
#69 |
Super Moderator
Tobias Holzmann
Join Date: Oct 2010
Location: Bad Wörishofen
Posts: 2,711
Blog Entries: 6
Rep Power: 52 |
Hi,
Code:
Processor 1
    Number of cells = 828661
    Number of faces shared with processor 0 = 24114
    Number of faces shared with processor 2 = 1298
    Number of faces shared with processor 3 = 6458
    Number of faces shared with processor 4 = 1879
    Number of faces shared with processor 5 = 6472
    Number of processor patches = 5
    Number of processor faces = 40221
    Number of boundary faces = 80623

Processor 2
    Number of cells = 828661
    Number of faces shared with processor 0 = 10498
    Number of faces shared with processor 1 = 1298
    Number of faces shared with processor 3 = 17656
    Number of faces shared with processor 4 = 4197
    Number of faces shared with processor 6 = 12647
    Number of faces shared with processor 7 = 715
    Number of processor patches = 6
    Number of processor faces = 47011
    Number of boundary faces = 62623

Processor 7
    Number of cells = 828661
    Number of faces shared with processor 2 = 715
    Number of faces shared with processor 3 = 6219
    Number of faces shared with processor 5 = 6375
    Number of faces shared with processor 6 = 16593
    Number of processor patches = 4
    Number of processor faces = 29902
    Number of boundary faces = 75444
Additionally, after decomposing: renumbering!
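A minimal sketch of that renumbering step (renumberMesh is the standard utility; 8 processors assumed, as in the decomposition above):
Code:
# renumber each processor mesh in place to reduce the matrix bandwidth
mpirun -np 8 renumberMesh -overwrite -parallel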
__________________
Keep foaming, Tobias Holzmann |
|
November 11, 2014, 07:03 |
|
#70 |
Member
Abhijit
Join Date: Jul 2014
Posts: 75
Rep Power: 12 |
@Tobi
Here is my complete domain: http://www.cfd-online.com/Forums/ope...tml#post515511 Since there is a large number of elements in all three directions, I have given '(2 2 2)' to split across the 8 cores. A similar decomposition method is used for the propeller tutorial, which is similar to my case. Please look at the domain and let me know if I have to change the decomposition approach or the subdomains. |
|
November 11, 2014, 07:41 |
|
#71 |
Senior Member
Philipp
Join Date: Jun 2011
Location: Germany
Posts: 1,297
Rep Power: 27 |
Jetfire, did you try any other decomposition method? I had a very simple pipe flow and thought it would be a clever idea to use "simple" decomposition. It showed a low number of shared faces and all that, but for some reason it was slower than just using "scotch" without any additional settings. You can simply try a few different methods and write down the execution times; for such long simulations it's worth experimenting a bit at the beginning. A minimal scotch setup is sketched below.
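A minimal decomposeParDict sketch for the scotch variant mentioned above (scotch needs no method-specific coefficients; 8 subdomains assumed, as in this case):
Code:
numberOfSubdomains  8;
method              scotch;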
Just a general question: why is every time step converged to insanity? I mean, do you really get different results for this kind of problem with 30 PIMPLE loops compared to, let's say, a 3 times smaller time step and the PISO algorithm (thus 1 outer iteration per time step)? Your Courant number is close to 1 anyway, so for PISO stability only a slightly smaller time step would be needed. All the LES guys use PISO... is your case that much different from what they are doing? Also: why does the first pressure corrector take 60 iterations? To me, this looks like something is going utterly wrong. Each linear solver call should not take more than a few iterations. Maybe this is again due to your solver, but can someone please elaborate on this?
__________________
The skeleton ran out of shampoo in the shower. |
|
November 11, 2014, 07:48 |
|
#72 |
Senior Member
Philipp
Join Date: Jun 2011
Location: Germany
Posts: 1,297
Rep Power: 27 |
Another point: did you try different settings for the GAMG solver? I did this for a case of mine and found that playing with "mergeLevels" decreased the simulation time (in my case "2" was best). Changing the pre-, post- and finest-sweep settings also changed a lot:
Code:
"(p|pFinal)" { solver GAMG; tolerance 1e-12; relTol 0.1; maxIter 100; smoother GaussSeidel; nPreSweeps 1; nPostSweeps 1; nFinestSweeps 2; cacheAgglomeration true; nCellsInCoarsestLevel 400; agglomerator faceAreaPair; mergeLevels 2; }
__________________
The skeleton ran out of shampoo in the shower. |
|
November 11, 2014, 08:23 |
|
#73 | |||
Super Moderator
Tobias Holzmann
Join Date: Oct 2010
Location: Bad Wörishofen
Posts: 2,711
Blog Entries: 6
Rep Power: 52 |
Quote:
Quote:
If you get 1000 iterations, something is wrong. In the pressure calculation it can occur (in my experience). But it is also possible that the BCs are wrong for that problem. However, I would first have to look at the calculation procedure in that solver, and there is no time for that now. Quote:
__________________
Keep foaming, Tobias Holzmann |
||||
November 11, 2014, 08:29 |
|
#74 | |
Senior Member
Philipp
Join Date: Jun 2011
Location: Germany
Posts: 1,297
Rep Power: 27 |
Quote:
Or is this just a matter of initialization? Does the number of PIMPLE loops decrease drastically after a few time steps?
__________________
The skeleton ran out of shampoo in the shower. |
||
November 11, 2014, 08:47 |
|
#75 | |||
Super Moderator
Tobias Holzmann
Join Date: Oct 2010
Location: Bad Wörishofen
Posts: 2,711
Blog Entries: 6
Rep Power: 52 |
Quote:
Quote:
Quote:
If there were no advantage of PIMPLE compared to PISO, nobody would use PIMPLE.
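For illustration, a hedged controlDict sketch of how a PIMPLE run exploits the larger stable Courant number (maxCo 3 as suggested earlier; maxDeltaT is an assumed cap, not taken from this case):
Code:
adjustTimeStep  yes;
maxCo           3;       // PIMPLE with outer correctors tolerates Co > 1
maxDeltaT       1e-3;    // illustrative upper bound on the time step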
__________________
Keep foaming, Tobias Holzmann |
||||
November 11, 2014, 08:56 |
|
#76 |
Senior Member
Philipp
Join Date: Jun 2011
Location: Germany
Posts: 1,297
Rep Power: 27 |
I have the feeling that he loses the advantage of PIMPLE if he iterates more often per time step than the total number of PISO steps he would need with a reduced time step. This is just an educated guess; I never tried it.
__________________
The skeleton ran out of shampoo in the shower. |
|
November 11, 2014, 09:56 |
|
#77 |
Super Moderator
Tobias Holzmann
Join Date: Oct 2010
Location: Bad Wörishofen
Posts: 2,711
Blog Entries: 6
Rep Power: 52 |
He could also try something like that:
I cannot test the case because I do not have it (:
__________________
Keep foaming, Tobias Holzmann |
|
November 11, 2014, 11:26 |
|
#78 |
Super Moderator
Tobias Holzmann
Join Date: Oct 2010
Location: Bad Wörishofen
Posts: 2,711
Blog Entries: 6
Rep Power: 52 |
Question: are you using the boundary conditions which you attached in this post: http://www.cfd-online.com/Forums/ope...tml#post516087
I am asking because of the U file; if it is like that, you have an error which could be the reason for the pcorr iterations. That could be possible. I will check whether I can find some papers about that.
__________________
Keep foaming, Tobias Holzmann |
|
November 12, 2014, 00:06 |
|
#79 |
Member
Abhijit
Join Date: Jul 2014
Posts: 75
Rep Power: 12 |
Hi,
Bad news: the simulation crashed after a few time steps, showing the same error as posted earlier. Check the output in the attachments.
|
|
November 12, 2014, 00:12 |
|
#80 | |
Member
Abhijit
Join Date: Jul 2014
Posts: 75
Rep Power: 12 |
Quote:
|
||