|
April 5, 2013, 11:36 |
OpenFOAM CPU Usage
|
#1 |
Senior Member
musaddeque hossein
Join Date: Mar 2009
Posts: 309
Rep Power: 18 |
Dear all:
I am running the sloshingTank2D case in interDyMFoam. The tank has 40,000 cells. I did this to see how the CPU is utilized by OpenFOAM. My computer has an Intel Core i3 M330 @ 2.13 GHz, which shows up as 4 CPUs. I am running Ubuntu 12.10 and OpenFOAM 2.2.0. When I check the processor performance, I see that one CPU is running at 99% while the others are below 10%. Is there any way to make OpenFOAM seek out and use all the CPUs? A screenshot of CPU usage while running OpenFOAM on this problem is attached. Any advice would be greatly appreciated. |
|
April 5, 2013, 11:40 |
|
#2 |
Senior Member
Anton Kidess
Join Date: May 2009
Location: Germany
Posts: 1,377
Rep Power: 30 |
Have you checked the user manual? http://www.openfoam.org/docs/user/ru...s-parallel.php
__________________
*On twitter @akidTwit *Spend as much time formulating your questions as you expect people to spend on their answer. |
|
April 5, 2013, 13:14 |
|
#3 | |
Senior Member
musaddeque hossein
Join Date: Mar 2009
Posts: 309
Rep Power: 18 |
Quote:
|
||
April 5, 2013, 15:36 |
|
#4 |
New Member
Eric
Join Date: Mar 2013
Posts: 22
Rep Power: 13 |
Hello,
Are you launching the solver with foamJob or mpirun? If not, that explains why only one core is used. |
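For reference, a minimal sketch of the two ways to launch a decomposed case in parallel (assuming 4 subdomains and the interDyMFoam solver from this thread; both commands are run from the case directory after decomposePar):

    mpirun -np 4 interDyMFoam -parallel > log 2>&1 &
    foamJob -s -p interDyMFoam

foamJob -p wraps the mpirun call for you and reads the number of subdomains from system/decomposeParDict; -s also echoes the output to the screen.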
|
April 5, 2013, 15:39 |
|
#5 | |
Senior Member
|
Quote:
Just set up decomposeParDict correctly for the number of processors you have and you are done. The rest is in the user's manual. Good luck.
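As an illustration only (not taken from this thread), a minimal system/decomposeParDict for four cores could look like this; the scotch method needs no method-specific coefficients:

    FoamFile
    {
        version     2.0;
        format      ascii;
        class       dictionary;
        object      decomposeParDict;
    }

    numberOfSubdomains  4;

    method              scotch;   // automatic decomposition, no coefficients required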
__________________
Learn OpenFOAM in Persian SFO (StarCCM+ FLUENT OpenFOAM) Project Team Member Complex Heat & Flow Simulation Research Group If you can't explain it simply, you don't understand it well enough. "Richard Feynman" |
||
April 5, 2013, 18:07 |
|
#6 |
Senior Member
musaddeque hossein
Join Date: Mar 2009
Posts: 309
Rep Power: 18 |
I have 2 processors, each with 2 CPUs. So should I set the number of subdomains to the number of processors (2) or to the number of CPUs (4)? Also, I have to run blockMesh and setFields before running mpirun, correct? Thanks.
|
|
April 5, 2013, 18:15 |
|
#7 | ||
Senior Member
|
Quote:
Quote:
Not sure about setFields, but blockMesh, yes, you must run it before mpirun.
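For what it's worth, the usual serial-preprocess-then-decompose order for a case like this is sketched below (solver name and subdomain count taken from this thread; adjust to your own case):

    blockMesh                                     # build the mesh (serial)
    setFields                                     # initialise the alpha1 field (serial)
    decomposePar                                  # split the case into processor* directories
    mpirun -np 4 interDyMFoam -parallel > log &   # run the solver on 4 cores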
__________________
Learn OpenFOAM in Persian SFO (StarCCM+ FLUENT OpenFOAM) Project Team Member Complex Heat & Flow Simulation Research Group If you can't explain it simply, you don't understand it well enough. "Richard Feynman" |
|||
April 5, 2013, 19:02 |
|
#8 |
New Member
Eric
Join Date: Mar 2013
Posts: 22
Rep Power: 13 |
I put a file named 'machines' in my case folder. In this file I write my configuration, i.e.
workstation cpu=2
where 'workstation' is the host name. Subdomains are based on cores, so in your case 4, I think. My normal procedure is:
0) set up the machines file
1) blockMesh / ideasUnvToFoam
2) decomposePar
3) foamJob -s -p solverName
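To make that concrete, a sketch of the machines file and the command sequence, following the user guide's format (the host name 'workstation' is a placeholder; for a run on a single machine the machines file can usually be omitted):

    # machines file: one host per line, cpu=N for multi-core hosts
    workstation cpu=4

then, from the case directory:

    blockMesh                     # or: ideasUnvToFoam <mesh.unv> for an imported mesh
    decomposePar
    foamJob -s -p interDyMFoam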
|
April 7, 2013, 18:56 |
Running OpenFoam over multiple CPU's on the same computer
|
#9 |
Senior Member
musaddeque hossein
Join Date: Mar 2009
Posts: 309
Rep Power: 18 |
Gentlemen:
Here is my attempt to run OpenFOAM over the 4 CPUs my computer has. I noted that the first error message says it cannot find processor0, but I thought OpenFOAM would automatically detect the number of CPUs. Was that assumption incorrect? Thanks for your help / advice.
_______________________________________________________________________

musa@ubuntu:~/OpenFOAM/musa-2.2.0/run/tutorials/multiphase/interDyMFoam/ras/sloshingTank2D$ mpirun -np 4 interDyMFoam -parallel >log &
[1] 4814
musa@ubuntu:~/OpenFOAM/musa-2.2.0/run/tutorials/multiphase/interDyMFoam/ras/sloshingTank2D$
[0]
[0]
[0] --> FOAM FATAL ERROR:
[0] interDyMFoam: cannot open case directory "/home/musa/OpenFOAM/musa-2.2.0/run/tutorials/multiphase/interDyMFoam/ras/sloshingTank2D/processor0"
[0]
[0] FOAM parallel run exiting
[0]
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD with errorcode 1.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on exactly when Open MPI kills them.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun has exited due to process rank 0 with PID 4815 on node ubuntu exiting without calling "finalize". This may have caused other processes in the application to be terminated by signals sent by mpirun (as reported here).
-------------------------------------------------------------------------- |
|
April 8, 2013, 00:03 |
|
#10 | |
Senior Member
musaddeque hossein
Join Date: Mar 2009
Posts: 309
Rep Power: 18 |
Quote:
_______________________________________________________________________

Decomposing mesh region0
Create mesh

Calculating distribution of cells
Selecting decompositionMethod hierarchical

--> FOAM FATAL ERROR:
Wrong number of processor divisions in geomDecomp:
Number of domains    : 4
Wanted decomposition : (2 2 2)

    From function geomDecomp::geomDecomp(const dictionary& decompositionDict)
    in file geomDecomp/geomDecomp.C at line 50.

FOAM exiting |
||
April 8, 2013, 04:01 |
|
#11 |
New Member
Eric
Join Date: Mar 2013
Posts: 22
Rep Power: 13 |
Could you upload your decomposeParDict file? That would make it easier to find the source of the problem. In any case, (2 2 2) is a decomposition for 8 processors, not for 4.
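For illustration (a sketch, not the poster's actual file): with the hierarchical method the product of the divisions must equal numberOfSubdomains, so for 4 cores on a 2D case something like n (2 2 1) is needed:

    numberOfSubdomains  4;

    method              hierarchical;

    hierarchicalCoeffs
    {
        n       (2 2 1);   // 2 x 2 x 1 = 4; the product must match numberOfSubdomains
        delta   0.001;
        order   xyz;
    }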
|
|
April 8, 2013, 04:17 |
|
#12 | |
Senior Member
|
Quote:
Also have a look at this: http://www.cfd-online.com/Forums/ope...tml#post189895
__________________
Learn OpenFOAM in Persian SFO (StarCCM+ FLUENT OpenFOAM) Project Team Member Complex Heat & Flow Simulation Research Group If you can't explain it simply, you don't understand it well enough. "Richard Feynman" |
||
April 9, 2013, 16:19 |
|
#13 |
Senior Member
musaddeque hossein
Join Date: Mar 2009
Posts: 309
Rep Power: 18 |
Thanks for your help. decomposePar works now. However, it fails at the end of the run with the following message:
----- I have deleted the preceding output to keep this to the point -----

Number of processor faces = 812
Max number of cells = 27540 (49.998% above average 18360.2)
Max number of processor patches = 2 (0% above average 2)
Max number of faces between processors = 408 (0.492611% above average 406)

Time = 0

--> FOAM FATAL IO ERROR:
Cannot find patchField entry for lowerWall

file: /home/musa/OpenFOAM/musa-2.2.0/run/tutorials/multiphase/interDyMFoam/ras/sloshingTank2D/0/alpha1.org-old.boundaryField from line 25 to line 33.

    From function GeometricField<Type, PatchField, GeoMesh>::GeometricBoundaryField::readField(const DimensionedField<Type, GeoMesh>&, const dictionary&)
    in file /home/opencfd/OpenFOAM/OpenFOAM-2.2.0/src/OpenFOAM/lnInclude/GeometricBoundaryField.C at line 154.

FOAM exiting
-----------------------------------------------------------------------------------------------------------------------

But my blockMeshDict file has the patches as shown below:

vertices
(
    (-0.05 -0.50 -0.35)    // Vertex back lower left corner   = 0
    (-0.05  0.50 -0.35)    // Vertex back lower right corner  = 1
    (-0.05  0.50  0.65)    // Vertex back upper right corner  = 2
    (-0.05 -0.50  0.65)    // Vertex back upper left corner   = 3
    ( 0.05 -0.50 -0.35)    // Vertex front lower left corner  = 4
    ( 0.05  0.50 -0.35)    // Vertex front lower right corner = 5
    ( 0.05  0.50  0.65)    // Vertex front upper right corner = 6
    ( 0.05 -0.50  0.65)    // Vertex front upper left corner  = 7
);

blocks
(
    // block0
    hex (0 1 2 3 4 5 6 7) (271 271 1) simpleGrading (1 1 1)
);

//patches
boundary
(
    lowerWall
    {
        type patch;
        faces
        (
            (0 1 5 4)
        );
    }
    rightWall
    {
        type patch;
        faces
        (
            (1 2 6 5)
        );
    }
    atmosphere
    {
        type patch;
        faces
        (
            (2 3 7 6)
        );
    }
    leftWall
    {
        type patch;
        faces
        (
            (0 4 7 3)
        );
    }
    frontAndBack
    {
        type Empty;
        faces
        (
            (4 5 6 7)
            (0 3 2 1)
        );
    }
);

Any comments / suggestions would be appreciated. Thank you. |
|
April 9, 2013, 17:06 |
|
#14 |
New Member
Eric
Join Date: Mar 2013
Posts: 22
Rep Power: 13 |
I wonder if you are missing the patch name in one of the U, p, ... files. Upload those files as well and we can check, in case you cannot find the source of the problem yourself.
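For orientation only: every field file in 0/ (alpha1, p or p_rgh, U, ...) must list all mesh patches in its boundaryField. A sketch of what that block could look like for the patch names in your blockMeshDict; the boundary condition types below are placeholders, not the ones from the actual sloshingTank2D tutorial:

    boundaryField
    {
        lowerWall
        {
            type            zeroGradient;
        }
        rightWall
        {
            type            zeroGradient;
        }
        leftWall
        {
            type            zeroGradient;
        }
        atmosphere
        {
            type            zeroGradient;   // placeholder; use the tutorial's condition here
        }
        frontAndBack
        {
            type            empty;
        }
    }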
|
|
April 9, 2013, 20:24 |
|
#15 | |
Senior Member
musaddeque hossein
Join Date: Mar 2009
Posts: 309
Rep Power: 18 |
Quote:
musa@ubuntu:~/OpenFOAM/musa-2.2.0/run/tutorials/multiphase/interDyMFoam/ras/sloshingTank2D$ mpirun -np 4 interDyMFoam -parallel >log &
[1] 2663
musa@ubuntu:~/OpenFOAM/musa-2.2.0/run/tutorials/multiphase/interDyMFoam/ras/sloshingTank2D$
[0]
[0]
[0] --> FOAM FATAL IO ERROR:
[0] cannot find file
[0]
[0] file: /home/musa/OpenFOAM/musa-2.2.0/run/tutorials/multiphase/interDyMFoam/ras/sloshingTank2D/processor0/0/alpha1 at line 0.
[0]
[0] From function regIOobject::readStream()
[0] in file db/regIOobject/regIOobjectRead.C at line 73.
[0] FOAM parallel run exiting
[0]
[2]
[2]
[2] --> FOAM FATAL IO ERROR:
[2] cannot find file
[2]
[2] file: /home/musa/OpenFOAM/musa-2.2.0/run/tutorials/multiphase/interDyMFoam/ras/sloshingTank2D/processor2/0/alpha1 at line 0.
[2]
[3]
[3]
[3] --> FOAM FATAL IO ERROR:
[3] cannot find file
[3]
[3] file: /home/musa/OpenFOAM/musa-2.2.0/run/tutorials/multiphase/interDyMFoam/ras/sloshingTank2D/processor3/0/alpha1 at line 0.
[3]
[3] From function regIOobject::readStream()
[3] in file db/regIOobject/regIOobjectRead.C at line 73.
[3] FOAM parallel run exiting
[3]
[2] From function regIOobject::readStream()
[2] in file db/regIOobject/regIOobjectRead.C at line 73.
[2] FOAM parallel run exiting
[2]
[1]
[1]
[1] --> FOAM FATAL IO ERROR:
[1] cannot find file
[1]
[1] file: /home/musa/OpenFOAM/musa-2.2.0/run/tutorials/multiphase/interDyMFoam/ras/sloshingTank2D/processor1/0/alpha1 at line 0.
[1]
[1] From function regIOobject::readStream()
[1] in file db/regIOobject/regIOobjectRead.C at line 73.
[1] FOAM parallel run exiting
[1]
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 2 in communicator MPI_COMM_WORLD with errorcode 1.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on exactly when Open MPI kills them.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun has exited due to process rank 3 with PID 2667 on node ubuntu exiting without calling "finalize". This may have caused other processes in the application to be terminated by signals sent by mpirun (as reported here).
--------------------------------------------------------------------------
[ubuntu:02663] 3 more processes have sent help message help-mpi-api.txt / mpi-abort
[ubuntu:02663] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help
musa@ubuntu:~/OpenFOAM/musa-2.2.0/run/tutorials/multiphase/interDyMFoam/ras/sloshingTank2D$ |
||
April 9, 2013, 20:51 |
|
#16 |
Senior Member
musaddeque hossein
Join Date: Mar 2009
Posts: 309
Rep Power: 18 |
If there are 4 CPUs, are the MPI error messages requiring that there be an alpha1 file in each processor folder?
|
|
April 9, 2013, 22:05 |
i3 multicore performance
|
#17 |
Senior Member
Jose Rey
Join Date: Oct 2012
Posts: 134
Rep Power: 18 |
Does your computer have:
1. Two separate dual-core i3 processors?, or
2. One dual-core i3 processor with hyperthreading (aka multithreading)?

If you have case #1 you should see 8 logical CPUs; if you have case #2 you should see 4 logical CPUs. Only 50% of the logical CPUs are full physical cores able to crunch data at their fullest; for some jobs, however, the other 50% could give you an edge. What I am saying is that your model might finish faster decomposed for two CPUs than for four. It might also explain your original observation of two CPUs working harder than the other two.

This is a good post that touches on the subject of hyperthreading: http://www.cfd-online.com/Forums/ope...processor.html

Last edited by JR22; April 9, 2013 at 22:59. |
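If it helps, on Ubuntu you can check the physical vs. logical core count directly with standard Linux tools (nothing OpenFOAM-specific here):

    lscpu | grep -E 'Socket|Core|Thread'   # sockets, cores per socket, threads per core
    nproc                                  # number of logical CPUs the OS sees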
|
April 10, 2013, 02:06 |
|
#18 | |
New Member
Eric
Join Date: Mar 2013
Posts: 22
Rep Power: 13 |
Quote:
I have never used the solver you are using, but I cannot imagine that you need the .org files. If the setup has changed, you might also need to run decomposePar -force to update the processor folders. In general, I would say that each processor directory needs a full set of field files.
I ran the interDyMFoam sloshingTank2D (ras) tutorial using ./Allrun, aborted it when the solver started, then ran decomposePar and finally foamJob -s -p interDyMFoam. |
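In command form, that test was roughly the following (run from the sloshingTank2D case directory; Allrun here is the tutorial's own script):

    ./Allrun                      # sets up and starts the tutorial; abort (Ctrl+C) once the solver is running
    decomposePar                  # split the prepared case into processor* directories
    foamJob -s -p interDyMFoam    # run the solver in parallel, echoing output to the screen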
||
April 10, 2013, 04:52 |
|
#19 |
New Member
Håkon Bartnes Line
Join Date: Mar 2013
Posts: 27
Rep Power: 14 |
Have you done the damBreak tutorial described in the user guide? It's a pretty good step-by-step walkthrough of setting up a parallel run. It's even multiphase, so it uses the variable alpha.
By the way, I think you forgot to rename the file called "alpha1.org" to "alpha1" before decomposing. The ".org"-ending doesn't do anything, it's just to keep a backup of the alpha1 file that's not modified by the setFields application. This is also described in the damBreak tutorial. |
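In practice that means the 0 directory should contain a plain alpha1 before decomposing; a sketch of the usual damBreak-style sequence (exact file names may differ between OpenFOAM versions):

    cp 0/alpha1.org 0/alpha1                      # restore a clean alpha1 from the backup copy
    setFields                                     # set the initial liquid region in alpha1
    decomposePar -force                           # re-decompose, overwriting old processor* directories
    mpirun -np 4 interDyMFoam -parallel > log &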
|
April 10, 2013, 21:54 |
|
#20 |
Senior Member
musaddeque hossein
Join Date: Mar 2009
Posts: 309
Rep Power: 18 |
To all the forum members who put their time and effort into helping me out and responding to my posts -- a heartfelt thank you. Parallel processing finally worked. The reasons it was not working are as follows:

1. decomposePar reads all the files in the "0" folder. So it not only read the alpha1, p and U files that I had modified, but also the original copies which I had saved under names like "old" or "original". The error was coming from decomposePar reading those original files after reading the files I had modified.

2. The ./Allclean script kept deleting the alpha1 and alpha1.org files with the "rm" command, so I commented out that line.

3. I didn't realize that in decomposeParDict you should keep only the method you want to use and comment out or delete the other options.

After I took care of these items, the parallel processing works very well and the system monitor shows all 4 CPUs running at 99%-100%. |
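For anyone hitting the same issue, a sketch of cleaning up the 0 directory so decomposePar only sees the live field files (the backup.0 folder name is arbitrary; adjust the glob to the names you actually used for your copies):

    mkdir -p backup.0
    mv 0/*old* backup.0/        # move any renamed backup copies out of 0/
    ls 0/                       # should now list only the live field files (alpha1, p_rgh, U, ...)
    decomposePar -force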
|
|
|