May 8, 2020, 15:36
Running Case in Parallel

#1
New Member
Join Date: Apr 2020
Posts: 19
Rep Power: 6
Hello,
I wanted to run the damBreak case in parallel on 6 processors, but when I execute my Allrun file the calculation finishes almost instantly, and afterwards there are no time directories and no processor0, processor1, ... directories in my case directory. First I changed the decomposeParDict like this:
Code:
/*--------------------------------*- C++ -*----------------------------------*\
| =========                 |                                                 |
| \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox           |
|  \\    /   O peration     | Version:  v1912                                 |
|   \\  /    A nd           | Website:  www.openfoam.com                      |
|    \\/     M anipulation  |                                                 |
\*---------------------------------------------------------------------------*/
FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    object      decomposeParDict;
}
// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //

numberOfSubdomains 6;

method          simple;

coeffs
{
    n           (3 2 1);
}

distributed     no;

roots           ( );

// ************************************************************************* //

My Allrun file looks like this:
Code:
#!/bin/sh
cd "${0%/*}" || exit                                # Run from this directory
. ${WM_PROJECT_DIR:?}/bin/tools/RunFunctions        # Tutorial run functions
#------------------------------------------------------------------------------

runApplication decomposePar -force

runApplication blockMesh

runParallel setFields

runParallel $(getApplication)

runApplication reconstructPar -newTimes

#------------------------------------------------------------------------------

And this is the output I get:
Code:
/mnt/c/users/typus/tutorials/multiphase/interfoam/laminar/damBreak/damBreak$ ./Allrun
Running decomposePar on /mnt/c/users/typus/tutorials/multiphase/interfoam/laminar/damBreak/damBreak
Running blockMesh on /mnt/c/users/typus/tutorials/multiphase/interfoam/laminar/damBreak/damBreak
Running setFields (6 processes) on /mnt/c/users/typus/tutorials/multiphase/interfoam/laminar/damBreak/damBreak
Running interFoam (6 processes) on /mnt/c/users/typus/tutorials/multiphase/interfoam/laminar/damBreak/damBreak
Running reconstructPar on /mnt/c/users/typus/tutorials/multiphase/interfoam/laminar/damBreak/damBreak

May 8, 2020, 15:48

#2
Senior Member
Lucky
Join Date: Apr 2011
Location: Orlando, FL USA
Posts: 5,761
Rep Power: 66
Some output from OpenFOAM would be helpful.
You need to run blockMesh before decomposePar. You should have processor directories if decomposePar ran, and if that was successful, you should have a 0 directory in each of the processor directories once setFields has run. You don't need to run setFields in parallel. Can't you just stick with the regular tutorial?
Code:
runApplication blockMesh

runApplication setFields

runApplication decomposePar

runParallel $(getApplication)

runApplication reconstructPar
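For reference, the whole Allrun would then look something like this (a sketch reusing the header from your own script; only the order of the steps changes):
Code:
#!/bin/sh
cd "${0%/*}" || exit                                # Run from this directory
. ${WM_PROJECT_DIR:?}/bin/tools/RunFunctions        # Tutorial run functions
#------------------------------------------------------------------------------

runApplication blockMesh                            # Mesh first, on the undecomposed case

runApplication setFields                            # Initialise the fields in serial

runApplication decomposePar                         # Now split mesh and fields into processor* dirs

runParallel $(getApplication)                       # Run the solver on all subdomains

runApplication reconstructPar                       # Merge the processor results back

#------------------------------------------------------------------------------
Also note that runApplication skips a step when its log file already exists, so clean out the old log.* files (or run ./Allclean) before rerunning, otherwise the script will "finish" quickly without doing anything.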

May 8, 2020, 16:00

#3
New Member
Join Date: Apr 2020
Posts: 19
Rep Power: 6
May 8, 2020, 17:59

#4
New Member
Join Date: Apr 2020
Posts: 19
Rep Power: 6
I have one more question.
Is there a way to optimize decomposePar? If you have, for example, 40 processors, you can split them up in different ways: n (10 1 4), n (5 4 2), and so on. Is there a way to find the best allocation, i.e. the one with the shortest simulation time?

May 8, 2020, 20:59

#5
Senior Member
Lucky
Join Date: Apr 2011
Location: Orlando, FL USA
Posts: 5,761
Rep Power: 66
You are doing a tutorial; why worry about these things...
The answer is no, there isn't. At least, there isn't a magic wand like you want. I can almost certainly guarantee that the time it takes you to find the decomposition with the fastest possible simulation time will be longer than the time it takes to run the case even with a bad mesh partitioning.

When you move on to doing productive simulations with more complex geometry, you'll likely stop using simple and move on to methods like scotch and metis. Even then, the time it takes to solve a given partition also depends on what is being solved in that partition. You could have multiple physics, for example, and then one partition ends up being solved much faster than another. Even with a single set of physics, the time it takes to solve the same partition depends on the flow itself. To complicate things further, the time it takes to solve each partition also depends on your hardware... There's an endless list of things you would need to optimize for to find the perfect decomposition.
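If you later do want a decomposition without hand-picking n, a minimal decomposeParDict using scotch might look like this (a sketch in the same v1912 dictionary syntax as your file; scotch balances the cell count per subdomain automatically and needs no coeffs):
Code:
FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    object      decomposeParDict;
}

numberOfSubdomains 40;

// scotch chooses the split itself; no n vector required
method          scotch;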

May 11, 2020, 04:43

#6
New Member
Join Date: Apr 2020
Posts: 19
Rep Power: 6