parallel run is slower than serial run (pimpleFoam) !!!

#1  mechy (Senior Member), August 13, 2013, 14:01
Hi all,
I have run a flow around a cylinder and a plate.
I have tested the case in both serial and parallel execution,
but the serial run (1 proc) is faster than the parallel run (2 procs).

How can I improve my parallel run?

Best Regards

#2  kwardle (Kent Wardle, Senior Member), August 13, 2013, 14:16
You have left out some important info; please see http://www.cfd-online.com/Forums/ope...-get-help.html

How big is your problem? Perhaps it is too small to benefit from parallel execution: if you are running a 2D problem with 5000 cells, I would not be surprised to see little or no speedup.
What kind of machine are you running on?
How did you execute the solver? (It should be "mpirun -np 2 solverName -parallel" for 2 procs on a local machine.)
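For completeness, a minimal sketch of the usual workflow for a local 2-processor run (assuming system/decomposeParDict is already set up for 2 subdomains; the log file name is only an example):
Code:
decomposePar                                   # split the mesh and fields into processor0/ and processor1/
mpirun -np 2 pimpleFoam -parallel > log 2>&1   # run the solver across both processors
reconstructPar                                 # merge the processor results back into the case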

#3  mechy (Senior Member), August 13, 2013, 15:36
Quote:
Originally Posted by kwardle View Post
How big is your problem? What kind of machine are you running on? How did you execute the solver?

Hi,

My problem has 180000 cells and it is a 2D problem.
I have run the command below:

mpirun -np 2 pimpleFoam -parallel

I am running on my laptop with an Intel Core 2 Duo CPU.

#4  mechy (Senior Member), August 30, 2013, 10:27
Any help will be appreciated.
This is the decomposePar output:

Code:
Build  : 1.6-ext-22f6e2e40a1e
Exec   : decomposePar -force
Date   : Aug 30 2013
Time   : 17:39:29
Host   : yas-VGN-FW370J
PID    : 2983
Case   : /home/yas/OpenFOAM/yas-2.1.1/run/icoFsiFoamPiezo_RUNS/OKMESHcyl_PiezoPLTioFsiFoamPiezoThesisOF16ext/parallel
nProcs : 1
SigFpe : Enabling floating point exception trapping (FOAM_SIGFPE).

// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //
Create time

Time = 0
Removing 2 existing processor directories
Create mesh for region region0

Calculating distribution of cells
Selecting decompositionMethod scotch

Finished decomposition in 0.47 s

Calculating original mesh data

Distributing cells to processors

Distributing faces to processors

Calculating processor boundary addressing

Distributing points to processors

Constructing processor meshes

Processor 0
    Number of cells = 92368
    Number of faces shared with processor 1 = 503
    Number of processor patches = 1
    Number of processor faces = 503
    Number of boundary faces = 185681

Processor 1
    Number of cells = 91392
    Number of faces shared with processor 0 = 503
    Number of processor patches = 1
    Number of processor faces = 503
    Number of boundary faces = 183769

Number of processor faces = 503
Max number of processor patches = 1
Max number of faces between processors = 503

Processor 0: field transfer
Processor 1: field transfer

End.
And this is the pimpleFoam output for the serial run (time = 45 s):

Code:
Courant Number mean: 0 max: 0.00165125113506 velocity magnitude: 7.125
deltaT = 1.19904076739e-06
Time = 1.19904076739e-06

DILUPBiCG:  Solving for Ux, Initial residual = 1, Final residual = 5.2997904141e-08, No Iterations 1
DILUPBiCG:  Solving for Uy, Initial residual = 1, Final residual = 3.92819713926e-07, No Iterations 1
GAMG:  Solving for p, Initial residual = 1, Final residual = 9.74149677961e-06, No Iterations 292
GAMG:  Solving for p, Initial residual = 0.00492661344204, Final residual = 9.92739792382e-06, No Iterations 62
GAMG:  Solving for p, Initial residual = 0.000438924562465, Final residual = 9.31995819834e-06, No Iterations 20
time step continuity errors : sum local = 1.08325394028e-08, global = 1.2131344075e-10, cumulative = 1.2131344075e-10
GAMG:  Solving for p, Initial residual = 0.000895746588392, Final residual = 9.54211427674e-06, No Iterations 20
GAMG:  Solving for p, Initial residual = 0.000113652119668, Final residual = 9.24182927039e-06, No Iterations 7
GAMG:  Solving for p, Initial residual = 3.1040845323e-05, Final residual = 7.79188809601e-06, No Iterations 3
time step continuity errors : sum local = 9.06826897434e-09, global = 3.05845168269e-10, cumulative = 4.27158609019e-10
DILUPBiCG:  Solving for omega, Initial residual = 0.00276574153542, Final residual = 1.17965829568e-06, No Iterations 2
DILUPBiCG:  Solving for k, Initial residual = 0.999999997864, Final residual = 2.04240307473e-06, No Iterations 3
DILUPBiCG:  Solving for Ux, Initial residual = 0.00328278643814, Final residual = 3.83577769127e-08, No Iterations 1
DILUPBiCG:  Solving for Uy, Initial residual = 0.0166301820008, Final residual = 3.20367439059e-07, No Iterations 1
GAMG:  Solving for p, Initial residual = 0.000108129763017, Final residual = 8.78361949654e-06, No Iterations 16
GAMG:  Solving for p, Initial residual = 7.66538334551e-05, Final residual = 8.24183510606e-06, No Iterations 5
GAMG:  Solving for p, Initial residual = 2.28286003354e-05, Final residual = 6.74306412704e-06, No Iterations 3
time step continuity errors : sum local = 7.86321574349e-09, global = 2.93093380319e-10, cumulative = 7.20251989338e-10
GAMG:  Solving for p, Initial residual = 1.17775650304e-05, Final residual = 7.26056758009e-06, No Iterations 1
GAMG:  Solving for p, Initial residual = 7.52125937243e-06, Final residual = 5.52916300329e-06, No Iterations 1
GAMG:  Solving for p, Initial residual = 5.88957082068e-06, Final residual = 4.3767204652e-06, No Iterations 1
time step continuity errors : sum local = 5.10374772441e-09, global = -2.03780605143e-10, cumulative = 5.16471384195e-10
DILUPBiCG:  Solving for omega, Initial residual = 0.000574220023228, Final residual = 6.14969308295e-08, No Iterations 1
bounding omega, min: -51.9537160506 max: 2783990.47407 average: 3215.46773736
DILUPBiCG:  Solving for k, Initial residual = 0.00126294320421, Final residual = 1.596491519e-07, No Iterations 1
smoothSolver:  Solving for Ux, Initial residual = 4.0069355444e-06, Final residual = 2.66756177809e-08, No Iterations 1
smoothSolver:  Solving for Uy, Initial residual = 2.81168384134e-05, Final residual = 2.22995109099e-07, No Iterations 1
GAMG:  Solving for p, Initial residual = 5.20087345873e-06, Final residual = 4.31188201265e-06, No Iterations 1
GAMG:  Solving for p, Initial residual = 4.43988865995e-06, Final residual = 3.56541527305e-06, No Iterations 1
GAMG:  Solving for p, Initial residual = 3.65422303439e-06, Final residual = 3.04594172644e-06, No Iterations 1
time step continuity errors : sum local = 3.55003853738e-09, global = 1.32507266627e-10, cumulative = 6.48978650822e-10
GAMG:  Solving for p, Initial residual = 3.11340357727e-06, Final residual = 2.66630235476e-06, No Iterations 1
GAMG:  Solving for p, Initial residual = 2.71542382035e-06, Final residual = 2.3583809352e-06, No Iterations 1
GAMG:  Solving for p, Initial residual = 2.39680524373e-06, Final residual = 9.82034988664e-09, No Iterations 114
time step continuity errors : sum local = 1.14455927733e-11, global = 1.35390645433e-14, cumulative = 6.48992189887e-10
DILUPBiCG:  Solving for omega, Initial residual = 0.571814369085, Final residual = 5.85579618545e-09, No Iterations 2
DILUPBiCG:  Solving for k, Initial residual = 0.0615167993133, Final residual = 1.51064699096e-07, No Iterations 1
ExecutionTime = 45.47 s  ClockTime = 45 s
And this is the pimpleFoam output for the parallel run (time = 123 s):

Code:
Courant Number mean: 0 max: 0.00165125113506 velocity magnitude: 7.125
deltaT = 1.19904076739e-06
Time = 1.19904076739e-06

DILUPBiCG:  Solving for Ux, Initial residual = 1, Final residual = 5.2997904141e-08, No Iterations 1
DILUPBiCG:  Solving for Uy, Initial residual = 1, Final residual = 3.92819713926e-07, No Iterations 1
GAMG:  Solving for p, Initial residual = 1, Final residual = 9.8940958311e-06, No Iterations 286
GAMG:  Solving for p, Initial residual = 0.00492661366781, Final residual = 9.69441981314e-06, No Iterations 63
GAMG:  Solving for p, Initial residual = 0.000438655867845, Final residual = 9.56808439715e-06, No Iterations 20
time step continuity errors : sum local = 1.11209229401e-08, global = 6.13442805401e-11, cumulative = 6.13442805401e-11
GAMG:  Solving for p, Initial residual = 0.000896181004493, Final residual = 9.94003026082e-06, No Iterations 19
GAMG:  Solving for p, Initial residual = 0.000114038891381, Final residual = 9.28235138726e-06, No Iterations 6
GAMG:  Solving for p, Initial residual = 3.1145114298e-05, Final residual = 8.00575569478e-06, No Iterations 3
time step continuity errors : sum local = 9.31716491533e-09, global = 3.3954703335e-10, cumulative = 4.0089131389e-10
DILUPBiCG:  Solving for omega, Initial residual = 0.00276574193747, Final residual = 1.18013315195e-06, No Iterations 2
DILUPBiCG:  Solving for k, Initial residual = 0.999999998834, Final residual = 2.04821821548e-06, No Iterations 3
DILUPBiCG:  Solving for Ux, Initial residual = 0.00328256439747, Final residual = 9.22262556873e-08, No Iterations 1
DILUPBiCG:  Solving for Uy, Initial residual = 0.0166023384362, Final residual = 3.69188648857e-07, No Iterations 1
GAMG:  Solving for p, Initial residual = 0.000108687134005, Final residual = 9.10996557958e-06, No Iterations 15
GAMG:  Solving for p, Initial residual = 7.68663677355e-05, Final residual = 8.51764028408e-06, No Iterations 5
GAMG:  Solving for p, Initial residual = 2.31392311457e-05, Final residual = 7.02587951765e-06, No Iterations 3
time step continuity errors : sum local = 8.19377779615e-09, global = -3.64619989004e-10, cumulative = 3.62713248856e-11
GAMG:  Solving for p, Initial residual = 1.20610790822e-05, Final residual = 7.23892901734e-06, No Iterations 1
GAMG:  Solving for p, Initial residual = 7.46956457073e-06, Final residual = 5.56488553551e-06, No Iterations 1
GAMG:  Solving for p, Initial residual = 5.92499120684e-06, Final residual = 4.42706445922e-06, No Iterations 1
time step continuity errors : sum local = 5.16293280986e-09, global = 2.15140122523e-10, cumulative = 2.51411447409e-10
DILUPBiCG:  Solving for omega, Initial residual = 0.000574220017969, Final residual = 6.47609273803e-08, No Iterations 1
bounding omega, min: -52.4826853551 max: 2783990.47407 average: 3215.47457647
DILUPBiCG:  Solving for k, Initial residual = 0.00126292944634, Final residual = 1.71053486661e-07, No Iterations 1
smoothSolver:  Solving for Ux, Initial residual = 4.01826769091e-06, Final residual = 2.67134825097e-08, No Iterations 1
smoothSolver:  Solving for Uy, Initial residual = 2.80798370421e-05, Final residual = 2.23039106906e-07, No Iterations 1
GAMG:  Solving for p, Initial residual = 5.25261897253e-06, Final residual = 4.27996390355e-06, No Iterations 1
GAMG:  Solving for p, Initial residual = 4.39468304143e-06, Final residual = 3.54945868384e-06, No Iterations 1
GAMG:  Solving for p, Initial residual = 3.63252709134e-06, Final residual = 3.00424092978e-06, No Iterations 1
time step continuity errors : sum local = 3.50234785006e-09, global = -1.19404746341e-10, cumulative = 1.32006701068e-10
GAMG:  Solving for p, Initial residual = 3.06460438258e-06, Final residual = 2.61585694502e-06, No Iterations 1
GAMG:  Solving for p, Initial residual = 2.6597484296e-06, Final residual = 2.31236874164e-06, No Iterations 1
GAMG:  Solving for p, Initial residual = 2.34525336265e-06, Final residual = 9.72102473968e-09, No Iterations 108
time step continuity errors : sum local = 1.13327805178e-11, global = -6.13248770843e-15, cumulative = 1.3200056858e-10
DILUPBiCG:  Solving for omega, Initial residual = 0.582577485522, Final residual = 5.74478651409e-09, No Iterations 2
DILUPBiCG:  Solving for k, Initial residual = 0.064033662544, Final residual = 1.52036103476e-07, No Iterations 1
ExecutionTime = 121.82 s  ClockTime = 123 s

#5  wyldckat (Bruno Santos, Retired Super Moderator), August 31, 2013, 13:14
Hi mechy,

This thread of yours has more information than what you sent me over private messages, so let's continue the discussion here.

OK, I see no reason why this would be a hardware-related limitation, since you are using a real machine with a dual-core processor.

Now, looking at your previous post, I saw this line and another similar to it, on both serial and parallel runs:
Quote:
Code:
bounding omega, min: -52.4826853551 max: 2783990.47407 average: 3215.47457647
The minimum value for the "omega" field is bad news, because if this omega field is the one for the k-omega turbulence modelling, then both "k" and "omega" should only have values above zero!

Therefore, I think the problem you are seeing is related to a limitation in either OpenFOAM or the MPI layer when handling invalid numbers, such as NaN and Inf.

So, first you should fix whichever problem your case has in the set-up of the boundaries or the mesh.

Best regards,
Bruno

#6  mechy (Senior Member), August 31, 2013, 15:07
Dear Bruno,

The results in the posts above are for a different test case.
Here I am testing a flow around a cylinder (laminar conditions),
and in the serial run the first time step takes 6 s, while in the parallel run it takes 15 s.
I have attached the test case.
The solver output is shown below.

decomposePar output:
Code:
Calculating distribution of cells
Selecting decompositionMethod simple

Finished decomposition in 0.12 s

Calculating original mesh data

Distributing cells to processors

Distributing faces to processors

Distributing points to processors

Constructing processor meshes

Processor 0
    Number of cells = 86400
    Number of faces shared with processor 1 = 539
    Number of processor patches = 1
    Number of processor faces = 539
    Number of boundary faces = 173805

Processor 1
    Number of cells = 86400
    Number of faces shared with processor 0 = 539
    Number of processor patches = 1
    Number of processor faces = 539
    Number of boundary faces = 173655

Number of processor faces = 539
Max number of cells = 86400 (0% above average 86400)
Max number of processor patches = 1 (0% above average 1)
Max number of faces between processors = 539 (0% above average 539)

Time = 0

Processor 0: field transfer
Processor 1: field transfer

End.

pimpleFoam -parallel (parallel run) output:
Code:

PIMPLE: no residual control data found. Calculations will employ 3 corrector loops


Starting time loop

Courant Number mean: 3.39655234767e-07 max: 0.000161747173295
deltaT = 1.19904076739e-06
Time = 1.19904076739e-06

PIMPLE: iteration 1
DILUPBiCG:  Solving for Ux, Initial residual = 1, Final residual = 6.96035197586e-10, No Iterations 1
DILUPBiCG:  Solving for Uy, Initial residual = 1, Final residual = 7.87886990224e-15, No Iterations 1
GAMG:  Solving for p, Initial residual = 1, Final residual = 9.71073975512e-06, No Iterations 30
time step continuity errors : sum local = 7.90960050679e-12, global = 9.2538434098e-18, cumulative = 9.2538434098e-18
PIMPLE: iteration 2
DILUPBiCG:  Solving for Ux, Initial residual = 6.90125739808e-05, Final residual = 7.59669009166e-11, No Iterations 1
DILUPBiCG:  Solving for Uy, Initial residual = 4.41526111253e-06, Final residual = 4.41526111253e-06, No Iterations 0
GAMG:  Solving for p, Initial residual = 0.00166409642772, Final residual = 9.35997918544e-06, No Iterations 9
time step continuity errors : sum local = 7.90865831711e-09, global = 1.85991682847e-14, cumulative = 1.86084221281e-14
PIMPLE: iteration 3
smoothSolver:  Solving for Ux, Initial residual = 1.59677155046e-05, Final residual = 8.70000414415e-09, No Iterations 1
smoothSolver:  Solving for Uy, Initial residual = 1.77933888357e-05, Final residual = 7.67220185903e-09, No Iterations 1
GAMG:  Solving for p, Initial residual = 0.000101226876292, Final residual = 9.63072794241e-09, No Iterations 22
time step continuity errors : sum local = 8.15010081926e-12, global = -8.61571179427e-17, cumulative = 1.85222650102e-14
ExecutionTime = 14.05 s  ClockTime = 15 s
pimpleFoam (serial run) output:
Code:
Starting time loop

Courant Number mean: 3.39655234767e-07 max: 0.000161747173295
deltaT = 1.19904076739e-06
Time = 1.19904076739e-06

PIMPLE: iteration 1
DILUPBiCG:  Solving for Ux, Initial residual = 1, Final residual = 2.30051737223e-15, No Iterations 1
DILUPBiCG:  Solving for Uy, Initial residual = 1, Final residual = 7.87886713367e-15, No Iterations 1
GAMG:  Solving for p, Initial residual = 1, Final residual = 5.69328754714e-06, No Iterations 19
time step continuity errors : sum local = 4.63730214029e-12, global = 4.80640009863e-17, cumulative = 4.80640009863e-17
PIMPLE: iteration 2
DILUPBiCG:  Solving for Ux, Initial residual = 6.89082529956e-05, Final residual = 5.13244948403e-12, No Iterations 1
DILUPBiCG:  Solving for Uy, Initial residual = 4.39821658069e-06, Final residual = 4.39821658069e-06, No Iterations 0
GAMG:  Solving for p, Initial residual = 0.00166385360517, Final residual = 9.97281656696e-06, No Iterations 9
time step continuity errors : sum local = 8.42646538031e-09, global = 1.65728690322e-12, cumulative = 1.65733496722e-12
PIMPLE: iteration 3
smoothSolver:  Solving for Ux, Initial residual = 1.59227489748e-05, Final residual = 8.68277817384e-09, No Iterations 1
smoothSolver:  Solving for Uy, Initial residual = 1.77504455575e-05, Final residual = 7.64446712243e-09, No Iterations 1
GAMG:  Solving for p, Initial residual = 0.000100775380721, Final residual = 5.89445176647e-09, No Iterations 15
time step continuity errors : sum local = 4.98821054588e-12, global = 5.34660543394e-18, cumulative = 1.65734031382e-12
ExecutionTime = 6.26 s  ClockTime = 6 s
Attached file: parallel.tar.gz (5.8 KB)

#7  wyldckat (Bruno Santos, Retired Super Moderator), August 31, 2013, 15:30
Hi mechy,

Ah, good thing you provided the case! The problem is that you had misconfigured the GAMG solver for the pressure. By using this:
Code:
nCellsInCoarsestLevel 20;
the execution time for me dropped from 15 s to 3 s in parallel, and the serial run went from 6 s to 4 s.
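For context, this setting lives in the p solver block of system/fvSolution; here is a minimal sketch of such a block (the surrounding entries are illustrative, not copied from the attached case):
Code:
p
{
    solver                 GAMG;
    smoother               GaussSeidel;
    agglomerator           faceAreaPair;
    nCellsInCoarsestLevel  20;       // the setting discussed above
    mergeLevels            1;
    cacheAgglomeration     true;
    tolerance              1e-06;
    relTol                 0.01;
}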

I used this post as a reference: http://www.cfd-online.com/Forums/ope...tml#post295166 (post #10).

Best regards,
Bruno

#8  mechy (Senior Member), August 31, 2013, 16:14
Dear Bruno,
Thanks so much for your answer. It works well for me too,
and now the parallel run is faster than the serial one.
In some threads I read that
Code:
 nCellsInCoarsestLevel
should be equal to the square root of the number of cells,
but now I think that is not correct.

Do you know the best value for nCellsInCoarsestLevel?
Does it have a fixed value for any number of mesh cells?


Also, in my turbulent runs the value of omega is of the order of 2e6.
Is that reasonable?

Best Regards

#9  wyldckat (Bruno Santos, Retired Super Moderator), August 31, 2013, 17:18
Hi mechy,

Quote:
Originally Posted by mechy View Post
In some threads I read that
Code:
 nCellsInCoarsestLevel
should be equal to the square root of the number of cells, but now I think that is not correct.

Do you know the best value for nCellsInCoarsestLevel?
Does it have a fixed value for any number of mesh cells?
I know I've read somewhere on the forum that the only way to be certain is to do some trial-and-error, since it might depend on the mesh distribution or on the equations being solved... but my memory might be playing tricks on me about the details. All I'm certain of is that we usually need to do some trial-and-error on a case-by-case basis.

Quote:
Originally Posted by mechy View Post
Also, in my turbulent runs the value of omega is of the order of 2e6.
Is that reasonable?
The k-omega and k-epsilon values usually depend on the kind of simulation you're doing, but 2e6 does seem to be a rather high value.
I usually suggest scaling down and/or simplifying your case first, to check whether things are working properly.

Best regards,
Bruno

#10  mechy (Senior Member), September 1, 2013, 01:58
Dear Bruno,
Thanks so much for your reply.

What is the maximum value of nCellsInCoarsestLevel that you have used in your problems?
And can you explain how I should do the trial-and-error to select the best value?


Best Regards

#11  wyldckat (Bruno Santos, Retired Super Moderator), September 1, 2013, 06:57
Hi mechy,

I rarely use GAMG.

What I mean by trial-and-error is what Martin said several years ago and I quote:
Quote:
Originally Posted by MartinB View Post
You can test it out: start your simulation, change the value after a few iterations and check the new iteration times. Then make another change and check again...
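A simple way to make that check (assuming the solver output is redirected to a log file, here called log.pimpleFoam as an example) is to watch how the ExecutionTime values grow between changes:
Code:
grep "ExecutionTime" log.pimpleFoam | tail -n 5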
For reference, the following threads are on this topic:
Best regards,
Bruno

#12  mechy (Senior Member), September 1, 2013, 10:27
Hi Bruno,
Thanks very much.


Is there any option in decomposePar that forces a whole boundary to fall on a single processor?
In other words, if I have a boundary named plate and this plate is in the middle
of the mesh, how can I keep all of the plate on one processor when running with 2 processors?


Best Regards

#13  wyldckat (Bruno Santos, Retired Super Moderator), September 1, 2013, 10:36
Look at the file "applications/utilities/parallelProcessing/decomposePar/decomposeParDict": https://github.com/OpenFOAM/OpenFOAM...composeParDict
It's the "preservePatches" option, but I think it only works for cyclic patches.
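For illustration only, a minimal decomposeParDict fragment using that option might look like the sketch below (the patch name plate is taken from the question above; as noted, the option may only be honoured for cyclic/coupled patches):
Code:
numberOfSubdomains  2;

method              scotch;

// request that the listed patches be kept together on one processor;
// in practice this may only work for cyclic-type (coupled) patches
preservePatches     (plate);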

#14  mechy (Senior Member), September 1, 2013, 11:10
Thanks for your answer.
I tested it, but it does not work for other patches.

Best Regards

#15  miladrakhsha (New Member), July 22, 2014, 16:51
Hi Bruno,
I believe I have the same problem, yet despite all my efforts and a couple of days of searching for a solution, I have not been able to figure it out.

I am working with a 3D mesh of 3 million cells and I am using the simpleFoam solver. A serial simulation of a 2D case (100,000 cells) of almost the same problem took very little time (about 2-3 hours for 2000-3000 iterations). The 3D case, however, proceeds disappointingly slowly: it has been running for 4 days and has only just reached the 1000th iteration.

I have attached my fvSolution, fvSchemes and decomposeParDict files to this post for more information. A small part of the simpleFoam log file follows. (Regarding your comment in this thread about negative omega: unfortunately that is not the problem in my case. I saw the same thing in the 2D case, where it did not cause a problem.)

Time = 1060

DILUPBiCG: Solving for Ux, Initial residual = 0.000158359, Final residual = 5.31028e-07, No Iterations 2
DILUPBiCG: Solving for Uy, Initial residual = 0.000194343, Final residual = 6.61693e-07, No Iterations 2
DILUPBiCG: Solving for Uz, Initial residual = 0.00132841, Final residual = 8.17812e-07, No Iterations 3
GAMG: Solving for p, Initial residual = 0.373173, Final residual = 0.0827443, No Iterations 1000
time step continuity errors : sum local = 0.00194834, global = 7.21998e-15, cumulative = -2.62773e-13
DILUPBiCG: Solving for omega, Initial residual = 0.00529106, Final residual = 1.44872e-05, No Iterations 1
bounding omega, min: -905.065 max: 52459.6 average: 61.2743
DILUPBiCG: Solving for k, Initial residual = 0.000462427, Final residual = 9.88307e-07, No Iterations 3
bounding k, min: -3.68267 max: 129.378 average: 0.447474
ExecutionTime = 337633 s ClockTime = 338023 s

forceCoeffs forceCoeffs output:
Cm = -0.232539
Cd = -0.156459
Cl = 0.0756161
Cl(f) = -0.194731
Cl(r) = 0.270347

Time = 1061

DILUPBiCG: Solving for Ux, Initial residual = 0.000168488, Final residual = 1.1647e-06, No Iterations 2
DILUPBiCG: Solving for Uy, Initial residual = 0.000200027, Final residual = 5.57576e-07, No Iterations 2
DILUPBiCG: Solving for Uz, Initial residual = 0.00151299, Final residual = 9.32168e-06, No Iterations 2
GAMG: Solving for p, Initial residual = 0.383116, Final residual = 0.0607197, No Iterations 1000
time step continuity errors : sum local = 0.00133519, global = -1.74997e-15, cumulative = -2.64523e-13
DILUPBiCG: Solving for omega, Initial residual = 0.0133203, Final residual = 1.47131e-05, No Iterations 1
bounding omega, min: -756.761 max: 59953.8 average: 61.2054
DILUPBiCG: Solving for k, Initial residual = 0.000462061, Final residual = 2.65904e-06, No Iterations 2
bounding k, min: -1.97769 max: 129.043 average: 0.447068
ExecutionTime = 338002 s ClockTime = 338392 s

forceCoeffs forceCoeffs output:
Cm = -0.244678
Cd = -0.160826
Cl = 0.0467811
Cl(f) = -0.221287
Cl(r) = 0.268068

I would really appreciate your help, or anybody else's, in advance.
Attached file: upload.zip (1.9 KB)

#16  miladrakhsha (New Member), July 22, 2014, 16:57
Also, I have always had a question for which I could not find an answer. It might seem a stupid question, but I have some experience with Fluent, and as I remember the speed-up in Fluent was very close to the number of processors I used. In OpenFOAM, however, it seems that for some reason the speed-up is far less than the number of processors.
I would be grateful if you could answer this question or suggest a useful link on the topic.

#17  mechy (Senior Member), July 28, 2014, 03:08
Hi,
Both the number of processors and nCellsInCoarsestLevel 500; are very high.

Set the following values (see the sketch below):
nCellsInCoarsestLevel: 20 to 50
numberOfSubdomains: the number of processor cores on your machine
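As an illustration only (the values are placeholders, not taken from the attached upload.zip), these two settings live in system/fvSolution and system/decomposeParDict respectively:
Code:
// system/fvSolution, inside the p solver block
nCellsInCoarsestLevel  20;    // suggested range: 20 to 50

// system/decomposeParDict
numberOfSubdomains     8;     // placeholder; set this to the number of physical cores on the machine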

#18  miladrakhsha (New Member), July 28, 2014, 17:40
Quote:
Originally Posted by mechy View Post
Set nCellsInCoarsestLevel to 20-50 and numberOfSubdomains to the number of processor cores on your machine.
Thank you for the response.
I have used nCellsInCoarsestLevel = sqrt(nCells), which you and Bruno mentioned in this topic. Also, when I decrease this parameter there is no obvious change in the speed of the pressure solver.
In addition, I am working with a machine with 32 CPUs, so the numberOfSubdomains I use is already less than the number of CPUs in my simulation.

Based on my experience, in my parallel simulations PCG (preconditioned conjugate gradient) does a better job than the GAMG solver, although GAMG is faster in serial simulations.
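For reference, a PCG set-up for the pressure equation in system/fvSolution might look like the sketch below (illustrative tolerances, not the values from the attached upload.zip):
Code:
p
{
    solver          PCG;
    preconditioner  DIC;      // diagonal incomplete Cholesky, the usual choice for the symmetric pressure matrix
    tolerance       1e-06;
    relTol          0.01;
}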

Thank you
Milad

#19  vrosalesCFD (Victor, New Member), August 17, 2016, 18:19 (Update)
I just had the same problem in OpenFOAM 4, but it was a silly mistake. I was typing

mpirun -np 10 pimpleFoam

when the correct command is

mpirun -np 10 pimpleFoam -parallel

I got confused because, with the first command line, the case still runs, just much more slowly (without -parallel, mpirun simply starts several independent serial copies of the solver), so you still see results and everything.
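One quick way to confirm that a run really is parallel (assuming the solver output is redirected to a file, here called log.pimpleFoam as an example) is to check the nProcs entry in the header the solver prints at start-up:
Code:
grep "nProcs" log.pimpleFoam    # a true parallel run should report nProcs : 10; a serial run reports nProcs : 1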

I hope this post saves some of you some time.

Cheers

Victor
