Parallel Computing decomposePar

December 14, 2011, 05:30 | #1
Member
Join Date: Jun 2011
Posts: 38
Rep Power: 15
Hey all!
I'm trying to do some parallel calculations, and I have access to several computers, each with 6 processors. For testing I'm running the motorBike tutorial. When I want to use 12 processors, how do I have to set "n" in hierarchicalCoeffs? Originally it is (3 2 1) <== for 6 processors. I read the user guide but couldn't find the right answer; e.g. (6 4 2) doesn't work. greets Christian
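For reference, a minimal sketch of the relevant system/decomposeParDict entries for 12 subdomains; the product of the n components must equal numberOfSubdomains, and the delta and order values shown are the usual tutorial defaults: Code:
numberOfSubdomains 12;

method          hierarchical;

hierarchicalCoeffs
{
    n       (3 2 2);   // 3*2*2 = 12, must match numberOfSubdomains
    delta   0.001;     // cell skew factor
    order   xyz;       // order of directions in which to split
}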

December 14, 2011, 07:09 | #2
Member
Tibo
Join Date: Jun 2011
Posts: 68
Rep Power: 15
Well, if you need 12 subdomains, (6 4 2) is obviously not going to work, as it leads to 6*4*2 = 48 subdomains. Use e.g. (3 4 1) instead.
I do not know what your conditions and geometry are, so make sure that your decomposition is meaningful: a (6 2 1) or even a (12 1 1) decomposition might turn out to be better suited. As for how to make your computer understand that it has to use processors on several computers, I unfortunately cannot help. It might work automatically as soon as the computers are connected to each other through the same server. Just give it a shot, I guess =D Tibo

December 14, 2011, 08:06 | #3
Senior Member
Roman Thiele
Join Date: Aug 2009
Location: Eindhoven, NL
Posts: 374
Rep Power: 21
Optionally you can then state a processor weight: for example, if you have different computers (different CPUs, ...), you can state that one processor has more weight than another.
To get this working on more than one computer, check out any MPI tutorial that explains how to set up automatic SSH login by sharing keys and so on.
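A sketch of the weighting Roman mentions, shown here for the scotch method, where processorWeights is an optional entry; the weights below are placeholders: Code:
method          scotch;

scotchCoeffs
{
    // ranks 2 and 3 sit on a faster machine, so give them twice the cells
    processorWeights (1 1 2 2);
}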
~roman

December 14, 2011, 08:07 | #4
Senior Member
Pablo Higuera
Join Date: Jan 2011
Location: Auckland
Posts: 627
Rep Power: 19
Hi,
I recommend using the scotch decomposition method if you are unsure about the suitability of hierarchical. If you encounter any problems running in parallel, you should check out the mpirun machinefile options. Best
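For reference, scotch needs no geometric coefficients at all, which is what makes it a safe default; a minimal system/decomposeParDict would be: Code:
numberOfSubdomains 12;

method          scotch;   // graph-based decomposition, no n vector to tune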

December 14, 2011, 09:44 | #5
Member
Join Date: Jun 2011
Posts: 38
Rep Power: 15
thanks for your replies!!
It worked so far! Except when I use more than 6 processors (no matter whether they are all on the same machine or split across several), the computer crashes. AND when I want to run potentialFoam in parallel, it always prints this error: Code:
Reading field p

[0] --> FOAM FATAL IO ERROR:
[0] keyword motorBike_frt-fairing:001%1 is undefined in dictionary "/home/lav09/prosch/OpenFOAM/prosch-2.0.1/run/motorBike/processor0/0/p::boundaryField"
[0]
[0] file: /home/lav09/prosch/OpenFOAM/prosch-2.0.1/run/motorBike/processor0/0/p::boundaryField from line 26 to line 53.
[0]
[0]     From function dictionary::subDict(const word& keyword) const
[0]     in file db/dictionary/dictionary.C at line 461.
[0]
FOAM parallel run exiting

Do I have to set some special options in decomposePar or snappyHexMesh? I tried everything I could see in the help files, but I always get the same error. greets
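A quick way to see the mismatch behind such a message is to compare the patch names in the decomposed mesh with the entries in the decomposed p field. A sketch, using the paths from the error above; the grep pattern is an assumption based on the motorBike_* patch naming: Code:
# patch names the decomposed mesh defines...
grep -o 'motorBike_[^ ;"]*' processor0/constant/polyMesh/boundary | sort -u
# ...versus the entries the decomposed p field actually provides
grep -o 'motorBike_[^ ;"]*' processor0/0/p | sort -u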

December 15, 2011, 17:27 | #6
Retired Super Moderator
Bruno Santos
Join Date: Mar 2009
Location: Lisbon, Portugal
Posts: 10,982
Blog Entries: 45
Rep Power: 128
Greetings to all!
@ChrisPro: Why do I have the feeling that you are trying to use the motorBike case from an older version of OpenFOAM on the newer 2.0.1!? That error message indicates that the mesh that was created has patches that are not present in the "0/p" field! If you are not using the motorBike case from 2.0.1, you had better compare your modified case with the one shipped in 2.0.1! If that is not the case, please give us a (short) step-by-step list of what you are doing, so it will be easier for us to test it ourselves, or at least diagnose what might be missing. Best regards, Bruno
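To make Bruno's point concrete: in the shipped tutorial, the "0/p" file covers all the mesher-generated patches with a single regular expression instead of naming each one. A shortened sketch; zeroGradient is the usual choice for p on walls, but check the shipped tutorial for the authoritative file: Code:
boundaryField
{
    // one regex entry matches every motorBike_* patch snappyHexMesh creates,
    // including the "motorBike_frt-fairing:001%1" from the error message
    "motorBike_.*"
    {
        type            zeroGradient;
    }
    // inlet, outlet, frontAndBack, ... omitted here
}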

December 25, 2011, 16:05 | #7
Member
Join Date: Jun 2011
Posts: 38
Rep Power: 15
hey! Sorry for the late answer.
I tried the same thing on several computers but always get the same error. I made sure I am using the newest version of OpenFOAM AND the tutorials. Here is what I do in the motorBike example:
1. Make a file called "machines" containing the computer name and the number of processors.
2. Run blockMesh.
3. decomposePar <== works
4. mpirun --hostfile machines -np 4 snappyHexMesh -overwrite -parallel > log & (here on a machine with 4 processors) <== works
5. mpirun --hostfile machines -np 4 potentialFoam -noFunctionObjects -writep -parallel > log &
This displays the error: Code:
[0] --> FOAM FATAL IO ERROR:
[0] keyword motorBike_frt-fairing:001%1 is undefined in dictionary "/home/christian/OpenFOAM/christian-2.1.0/run/Test/motorBike/processor0/0/p::boundaryField"
[0]
[0] file: /home/christian/OpenFOAM/christian-2.1.0/run/Test/motorBike/processor0/0/p::boundaryField from line 26 to line 53.
[0]
[0]     From function dictionary::subDict(const word& keyword) const
[0]     in file db/dictionary/dictionary.C at line 461.
[0]
FOAM parallel run exiting

[identical FATAL IO ERROR messages follow from ranks 2, 3 and 1, each for its own processorN/0/p file]

--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD with errorcode 1.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on exactly when Open MPI kills them.
--------------------------------------------------------------------------
mpirun has exited due to process rank 0 with PID 4825 on node ubuntu exiting without calling "finalize". This may have caused other processes in the application to be terminated by signals sent by mpirun (as reported here).
--------------------------------------------------------------------------
[ubuntu:04824] 3 more processes have sent help message help-mpi-api.txt / mpi-abort
[ubuntu:04824] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages

And here is the header of the log file: Code:
/*---------------------------------------------------------------------------*\
| =========                 |                                                 |
| \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox           |
|  \\    /   O peration     | Version:  2.1.0                                 |
|   \\  /    A nd           | Web:      www.OpenFOAM.org                      |
|    \\/     M anipulation  |                                                 |
\*---------------------------------------------------------------------------*/
Build  : 2.1.0-0bc225064152
Exec   : potentialFoam -noFunctionObjects -writep -parallel
Date   : Dec 25 2011
Time   : 21:56:27
Host   : "ubuntu"
PID    : 4825
Case   : /home/christian/OpenFOAM/christian-2.1.0/run/Test/motorBike
nProcs : 4
Slaves :
3
(
"ubuntu.4826"
"ubuntu.4827"
"ubuntu.4828"
)

Pstream initialized with:
    floatTransfer      : 0
    nProcsSimpleSum    : 0
    commsType          : nonBlocking
sigFpe : Enabling floating point exception trapping (FOAM_SIGFPE).
fileModificationChecking : Monitoring run-time modified files using timeStampMaster
allowSystemOperations : Disallowing user-supplied system call operations

// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //
Create time

Create mesh for time = 0

Reading field p

greets Christian

December 25, 2011, 20:53 | #8
Retired Super Moderator
Bruno Santos
Join Date: Mar 2009
Location: Lisbon, Portugal
Posts: 10,982
Blog Entries: 45
Rep Power: 128
Hi Christian,
OK, I managed to reproduce the same error you got, but I don't have time right now to figure out how to fix it. My suggestion is that you check the "turbineSiting" tutorial.
Bruno

December 29, 2011, 04:52 | #9
Member
Join Date: Jun 2011
Posts: 38
Rep Power: 15
I tried the tutorial, but even when I do not change anything in it, snappyHexMesh reports an error that the ptscotch library has to be installed.
But ptscotch isn't used when snappyHexMesh is executed! Code:
Shell refinement iteration 4
----------------------------

Marked for refinement due to refinement shells : 3818 cells.
Determined cells to refine in = 0.05 s
Selected for internal refinement : 4204 cells (out of 73632)
Edge intersection testing:
    Number of edges             : 334730
    Number of edges to retest   : 108544
    Number of intersected edges : 18230
Refined mesh in = 0.58 s
[1]
[1] --> FOAM FATAL ERROR:
[1] You are trying to use ptscotch but do not have the ptscotchDecomp library loaded.
This message is from the dummy ptscotchDecomp stub library instead.
Please install ptscotch and make sure that libptscotch.so is in your LD_LIBRARY_PATH.
The ptscotchDecomp library can then be built in $FOAM_SRC/parallel/decompose/ptscotchDecomp
[1]

December 29, 2011, 07:22 | #10
Retired Super Moderator
Bruno Santos
Join Date: Mar 2009
Location: Lisbon, Portugal
Posts: 10,982
Blog Entries: 45
Rep Power: 128
Yeah, that's one of those crazy bugs. I thought they had fixed it in 2.1.0, but I haven't confirmed this yet.
You might want to report this on the bug tracker: http://www.openfoam.com/mantisbt/ - I haven't reported it myself because I keep forgetting about it.
Last edited by wyldckat; December 29, 2011 at 17:16. Reason: incomplete sentence...

December 29, 2011, 12:23 | #11
Member
Join Date: Jun 2011
Posts: 38
Rep Power: 15
ok!
just reported it! |

December 30, 2011, 06:25 | #12
Retired Super Moderator
Bruno Santos
Join Date: Mar 2009
Location: Lisbon, Portugal
Posts: 10,982
Blog Entries: 45
Rep Power: 128
Hi Christian,
OK, I've managed to solve the problem with running motorBike in parallel, namely the problem you had in post #7. The attached case has only been tested with OpenFOAM 2.0.x. Basically, the problem is that decomposePar does not preserve the regular expressions that are present in the original boundary fields (p, U, k, omega, nut). So the solution I implemented resorts to changeDictionary to reinstate the missing boundaries, as well as to add ones that were missing from the decomposition and parallel meshing. It's not the perfect solution, but it's a start. As for the ptscotch problem, I've seen your report and I'm wondering which steps you took for installing OpenFOAM. Was it the Debian packages route? (this one: http://www.openfoam.org/download/ubuntu.php) Best regards, Bruno
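The gist of that workaround, sketched from Bruno's description rather than from his attached case: after decomposePar and parallel snappyHexMesh, run changeDictionary in parallel with a system/changeDictionaryDict that re-adds the regex boundary entries (an example dictionary follows in post #15): Code:
mpirun -np 4 changeDictionary -parallel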

December 30, 2011, 12:37 | #13
Member
Join Date: Jun 2011
Posts: 38
Rep Power: 15
Oh yeah, Bruno! You're my man!
I tested it on OpenFOAM 2.1.0 and it works!! Thanks a lot!! But how do you reconstruct the case? When I use reconstructPar, the following error message is displayed: Code:
Create time

Create mesh for time = 0

Time = 100
Reconstructing FV fields

--> FOAM FATAL ERROR:
Size of maps does not correspond to size of mesh for processor 0
faceProcAddressing : 1112 nFaces : 276200
cellProcAddressing : 320 nCell : 86070
boundaryProcAddressing : 7 nFaces : 74

    From function fvFieldReconstructor::fvFieldReconstructor
    (
        fvMesh&,
        const PtrList<fvMesh>&,
        const PtrList<labelIOList>&,
        const PtrList<labelIOList>&,
        const PtrList<labelIOList>&
    )
    in file fvFieldReconstructor.C at line 66.

FOAM exiting

I downloaded the ptscotch library from: http://packages.debian.org/sid/libptscotch-5.1
greets Christian

December 31, 2011, 11:33 | #14
Retired Super Moderator
Bruno Santos
Join Date: Mar 2009
Location: Lisbon, Portugal
Posts: 10,982
Blog Entries: 45
Rep Power: 128
Hi Christian,
The problem is rather simple when we think about it: since we generated the mesh in parallel, said mesh doesn't exist in reconstructed form! The following commands are meant to be placed at the end of Allrun: Code:
runApplication reconstructParMesh -mergeTol 1e-6 -constant
runApplication reconstructPar
You can then open the reconstructed case with: Code:
paraFoam -builtin
By the way, you might also be interested in the new tutorial available for 2.1.0: "incompressible/pisoFoam/les/motorBike/motorBike" - this is another variant of the "motorBike" case we know, but it is meshed in parallel on 8 cores with a higher mesh resolution, and it is quite the memory eater: 2.7 GiB of RAM weren't enough to feed this beast of a mesh!
edit: I forgot to ask - which version of Linux and which architecture are you using? For the architecture, you can check by running: Code:
uname -m
Best regards, Bruno
Last edited by wyldckat; December 31, 2011 at 11:37. Reason: see "edit:"

January 1, 2012, 05:51 | #15
Member
Join Date: Jun 2011
Posts: 38
Rep Power: 15
First: Happy New Year!!
oh, thanks a lot!!! That worked fine with your motorBike example. But what exactly is procBoundary? And why did you set the velocity for procBoundary in the changeDictionaryDict file?
I adapted this example to my case. It works and I get a good residual. But in my case I simulate the flow through a rather complex geometry where all boundaries are given by STL files, even the inlet and the outlet. I set the pressure at the inlet and outlet, and the velocity is calculated with "pressureDirectedInletVelocity". The pressure at the inlet is 10 Pa and at the outlet 0 Pa. When I calculate this the normal way, on a single processor only, everything is fine. But in parallel, the starting (time 0) boundary conditions are such that there is a pressure of 10 Pa at both the inlet and the outlet. The changeDictionaryDict has exactly the same settings as the individual files for p, U, ..., except for the additional procBoundary entries: Code:
dictionaryReplacement
{
    p
    {
        boundaryField
        {
            "wall_.*"        { type zeroGradient; }
            "inlet_.*"       { type fixedValue; value uniform 10; }
            "outlet_.*"      { type fixedValue; value uniform 0; }
            "procBoundary.*" { type processor; value uniform 0; }
        }
    }

    U
    {
        boundaryField
        {
            "inlet_.*"
            {
                type            pressureDirectedInletVelocity;
                value           uniform (0 0 0);
                inletDirection  uniform (0 -1 0);
            }
            "outlet_.*"      { type pressureInletOutletVelocity; value uniform (0 0 0); }
            "wall_.*"        { type fixedValue; value uniform (0 0 0); }
            "procBoundary.*" { type processor; value uniform (0 0 0); }
        }
    }

    omega
    {
        boundaryField
        {
            "wall_.*"        { type omegaWallFunction; value uniform 1.78; }
            "outlet_.*"      { type inletOutlet; inletValue uniform 1.78; value uniform 1.78; }
            "inlet_.*"       { type fixedValue; value uniform 1.78; }
            "procBoundary.*" { type processor; value uniform 1.78; }
        }
    }

    nut
    {
        boundaryField
        {
            "wall_.*"        { type nutkWallFunction; value uniform 0; }
            "inlet_.*"       { type calculated; value uniform 0; }
            "outlet_.*"      { type calculated; value uniform 0; }
            "procBoundary.*" { type processor; value uniform 0; }
        }
    }

    k
    {
        boundaryField
        {
            "wall_.*"        { type kqRWallFunction; value uniform 0.24; }
            "outlet_.*"      { type inletOutlet; inletValue uniform 0.24; value uniform 0.24; }
            "inlet_.*"       { type fixedValue; value uniform 0.24; }
            "procBoundary.*" { type processor; value uniform 0.24; }
        }
    }
}

// ************************************************************************* //

Yes, the pisoFoam motorBike example looks quite interesting, but I still get this damn ptscotch error we talked about a few posts back. Maybe I really did something wrong when installing OpenFOAM, but I exactly followed the instructions on openfoam.org for Ubuntu: http://www.openfoam.org/download/ubuntu.php
So I'm using Ubuntu 10.04 LTS on x86_64.
How did you install the ptscotch library? And where are the correct files to download?
regards Christian

January 1, 2012, 10:49 | #16
Retired Super Moderator
Bruno Santos
Join Date: Mar 2009
Location: Lisbon, Portugal
Posts: 10,982
Blog Entries: 45
Rep Power: 128
Hi Christian,
Happy New Year!
Oh, and these values between processors seem to be the same as the initial internal fields.
I'm checking this in a virtual machine with the same version of Ubuntu 10.04 x86_64 to confirm what's going on... I'll edit this post after I have results.
edit: preliminary tests indicated that the relevant package is only recommended. Said package still needs to be installed manually, at least for 10.04: Code:
sudo apt-get install libptscotch-dev
edit 2: OK, now I understand why you went to Debian for the package. And wouldn't you know it: since there is no "libptscotch-dev" for 10.04, there is no "libptscotchDecomp.so" for "openmpi-system"! Here are the commands for a possible proper installation:
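Bruno's original command list is missing here; what follows is only a guess at its gist, assuming the Debian package route from post #13 and the build directory named by the error message in post #9: Code:
# install the libptscotch-dev package fetched from Debian
sudo dpkg -i libptscotch-dev_*.deb
# then rebuild OpenFOAM's real ptscotchDecomp wrapper library
cd $FOAM_SRC/parallel/decompose/ptscotchDecomp
wmake libso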
Bruno
Last edited by wyldckat; January 1, 2012 at 11:44. Reason: see "edit:" and "edit 2:"

January 6, 2012, 15:45 | #17
Member
Join Date: Jun 2011
Posts: 38
Rep Power: 15
Hey!!
Yeah, I managed to install the ptscotch library, thank you!! Now openmpi works really well. I'm running everything through mpirun now, because there one has the option to use a "machines" file, in which all the computers and the CPUs to be used for the calculation are listed. That's what my Allrun file looks like now: Code:
#!/bin/sh
# Source tutorial run functions
. $WM_PROJECT_DIR/bin/tools/RunFunctions

# copy flange surface from resources folder
#cp $FOAM_TUTORIALS/resources/geometry/flange.stl.gz constant/triSurface/
cp -r 0.org 0 > /dev/null 2>&1

runApplication blockMesh
runApplication surfaceFeatureExtract -includedAngle 150 -writeObj constant/triSurface/Wall.stl wall
runApplication decomposePar

runApplication mpirun --hostfile machines -np 4 snappyHexMesh -overwrite -parallel
# rename the log file
mv log.mpirun log.snappyHexMesh

find . -type f -iname "*level*" -exec rm {} \;
ls -d processor* | xargs -i cp -r 0.org/* ./{}/0/ $1

# renumberMesh
runApplication mpirun --hostfile machines -np 4 renumberMesh -overwrite -parallel
mv log.mpirun log.renumberMesh

# potentialFoam
runApplication mpirun --hostfile machines -np 4 potentialFoam -initialiseUBCs -noFunctionObjects -parallel
mv log.mpirun log.potentialFoam

# simpleFoam
runApplication mpirun --hostfile machines -np 4 `getApplication` -parallel
mv log.mpirun log.simpleFoam

runApplication reconstructParMesh -mergeTol 1e-6 -constant
runApplication reconstructPar

Christian
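For completeness, an Open MPI hostfile like the "machines" file used above is just one line per node; the host names below are placeholders: Code:
# "machines" - Open MPI hostfile, one entry per computer
node1 slots=6
node2 slots=6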

April 9, 2012, 23:48 | #18
New Member
xiaoweii
Join Date: Mar 2012
Posts: 7
Rep Power: 14
Hi, Christian,
I have been working with snappyHexMesh and have some problems running the motorBike case. In the "OpenFOAM-2.1.0/tutorials/incompressible/pisoFoam/les/motorBike/motorBike" directory I didn't change anything. Then I ran the commands according to the Allrun file, in this order: "blockMesh", "cp system/decomposeParDict.hierarchical system/decomposeParDict", "decomposePar", "cp system/decomposeParDict.ptscotch system/decomposeParDict", "mpirun -np 8 snappyHexMesh -overwrite -parallel". It then just got stuck at:
Overall mesh bounding box : (-5 -4 0) (15 4 8)
Relative tolerance : 1e-06
Absolute matching distance : 2.29783e-05
In fact, I have tested many cases in OpenFOAM-2.1.0; all of them got stuck somewhere during their run, though at different points. Do you know what's wrong?
Best wishes, xiaow_g.

April 10, 2012, 05:16 | #19
Member
Join Date: Jun 2011
Posts: 38
Rep Power: 15
Hey!
When snappyHexMesh says something like "aborted" or "killed", then your memory is probably too low for that case. :/ greets christian

April 10, 2012, 17:48 | #20
Retired Super Moderator
Bruno Santos
Join Date: Mar 2009
Location: Lisbon, Portugal
Posts: 10,982
Blog Entries: 45
Rep Power: 128
Greetings to all!
@xiaow_g: I know I've had a similar problem, where snappyHexMesh kept running at 100% in parallel and wouldn't continue, but I can't remember what the cause was. It might be a memory issue like Chris said, but it could also be a bug you are triggering somehow. It would be useful to know the Linux distribution and architecture (uname -m) you are using, as well as how much RAM is available to your Linux installation (in case you are running it inside a virtual machine). The other possibility is that you need to upgrade to OpenFOAM 2.1.x. Several bugs have been fixed since 2.1.0 was released, and I vaguely remember having problems with snappyHexMesh in parallel with one version that wasn't from git... Best regards, Bruno
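A quick way to collect the details Bruno asks for (both are standard Linux commands): Code:
uname -m   # architecture, e.g. x86_64
free -m    # total and free RAM in MiB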