
decomposePar no field transfer

Old   June 20, 2014, 06:59
Question: decomposePar no field transfer
  #1
New Member
 
Jean-Pierre
Join Date: May 2014
Posts: 9
Hi everyone,

I have problems using decomposePar after running snappyHexMesh (sHM). Everything is fine after snappy, but problems occur when I try to decompose the mesh. I prepare my cases on my laptop (4 processors, 8 GB) before sending them to a cluster (48 processors). The cases I prepare on my laptop are identical except for the refinement levels in sHM and the number of processors in decomposeParDict; I do this to have a "lighter" case and to make sure I haven't made any silly mistake before sending the case to the cluster. First, I had some problems decomposing on my own computer: the processorN directories contained only constant, so there was no field transfer. I fixed that using "decomposePar -latestTime".
What I don't understand is that on the cluster the processorN directories have no 0 directory at all (I run sHM with -overwrite), even when I use "decomposePar -latestTime".
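
For reference, one common way to make sure decomposePar finds fields at the selected time is to keep the initial field files in a backup directory and restore them after meshing. The sketch below assumes such a backup directory named 0.orig (a hypothetical name, not taken from this case) and shows only the rough sequence, not the exact commands used here:
Code:
# Sketch: remesh in place, then restore the initial fields before decomposing.
# Assumes the untouched initial fields are kept in 0.orig (hypothetical name).
blockMesh                       # base mesh
snappyHexMesh -overwrite        # write the snapped mesh into constant/polyMesh
rm -rf 0                        # discard any stale or incomplete 0 directory
cp -r 0.orig 0                  # restore the initial fields so Time = 0 has data
decomposePar -force             # -force first removes existing processor* directories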

Here is my decomposePar log:
Code:
/*---------------------------------------------------------------------------*\
| =========                 |                                                 |
| \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox           |
|  \\    /   O peration     | Version:  2.2.1                                 |
|   \\  /    A nd           | Web:      www.OpenFOAM.org                      |
|    \\/     M anipulation  |                                                 |
\*---------------------------------------------------------------------------*/
Build  : 
Exec   : decomposePar -latestTime
Date   : 
Time   : 
Host   : 
PID    : 
Case   : 
nProcs : 1
sigFpe : Enabling floating point exception trapping (FOAM_SIGFPE).
fileModificationChecking : Monitoring run-time modified files using timeStampMaster
allowSystemOperations : Disallowing user-supplied system call operations

// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //
Create time



Decomposing mesh region0

Create mesh

Calculating distribution of cells
Selecting decompositionMethod scotch

Finished decomposition in 61.54 s

Calculating original mesh data

Distributing cells to processors

Distributing faces to processors

Distributing points to processors

Constructing processor meshes

Processor 0
    Number of cells = 156676
    Number of faces shared with processor 1 = 1463
    Number of faces shared with processor 3 = 2235
    Number of faces shared with processor 4 = 99
    Number of faces shared with processor 5 = 1753
    Number of faces shared with processor 7 = 2890
    Number of faces shared with processor 12 = 1392
    Number of faces shared with processor 13 = 2912
    Number of faces shared with processor 15 = 28
    Number of faces shared with processor 29 = 1119
    Number of faces shared with processor 30 = 43
    Number of faces shared with processor 33 = 752
    Number of faces shared with processor 43 = 1663
    Number of faces shared with processor 44 = 277
    Number of processor patches = 13
    Number of processor faces = 16626
    Number of boundary faces = 3973

Processor 1
    Number of cells = 155343
    Number of faces shared with processor 0 = 1463
    Number of faces shared with processor 2 = 4228
    Number of faces shared with processor 3 = 684
    Number of faces shared with processor 4 = 492
    Number of faces shared with processor 5 = 61
    Number of faces shared with processor 12 = 3211
    Number of faces shared with processor 15 = 732
    Number of faces shared with processor 17 = 113
    Number of processor patches = 8
    Number of processor faces = 10984
    Number of boundary faces = 5711

% [..] The same for every processor
Processor 47
    Number of cells = 154530
    Number of faces shared with processor 43 = 89
    Number of faces shared with processor 44 = 114
    Number of faces shared with processor 46 = 1413
    Number of processor patches = 3
    Number of processor faces = 1616
    Number of boundary faces = 62009

Number of processor faces = 250893
Max number of cells = 156913 (1.15258% above average 155125)
Max number of processor patches = 17 (102.985% above average 8.375)
Max number of faces between processors = 16626 (59.0415% above average 10453.9)

Time = 0

%it stops here...
Here is my decomposeParDict:
Code:
/*--------------------------------*- C++ -*----------------------------------*\
| =========                 |                                                 |
| \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox           |
|  \\    /   O peration     | Version:  2.3.0                                 |
|   \\  /    A nd           | Web:      www.OpenFOAM.org                      |
|    \\/     M anipulation  |                                                 |
\*---------------------------------------------------------------------------*/
FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    object      decomposeParDict;
}

// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //

numberOfSubdomains 48;//4

method          scotch;//simple


simpleCoeffs
{
    n               (2 2 1);
    delta           0.001;
}

hierarchicalCoeffs
{
    n               (1 1 1);
    delta           0.001;
    order           xyz;
}

manualCoeffs
{
    dataFile        "cellDecomposition";
}

metisCoeffs
{
    //n                   (5 1 1);
    //cellWeightsFile     "constant/cellWeightsFile";
}


// ************************************************************************* //
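As a side note, decomposePar only reads the coeffs sub-dictionary that matches the selected method, so with method scotch the simpleCoeffs, hierarchicalCoeffs, manualCoeffs and metisCoeffs blocks above are simply ignored. A minimal dictionary for this case could therefore look like the sketch below (same effective settings as above):
Code:
// Minimal decomposeParDict sketch for a 48-way scotch decomposition;
// scotch does not require a coeffs sub-dictionary.
FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    object      decomposeParDict;
}

numberOfSubdomains  48;

method              scotch;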
And here is the output from the cluster:
Code:
### auto-loading modules openmpi/1.6.3--gnu--4.7.2
### auto-loading modules gnu/4.7.2


--> FOAM FATAL IO ERROR: 
Cannot find patchField entry for train_CATIASTL

file: myCase/0/p.boundaryField from line 26 to line 64.

    From function GeometricField<Type, PatchField, GeoMesh>::GeometricBoundaryField::readField(const DimensionedField<Type, GeoMesh>&, const dictionary&)
    in file/prod/build/applications/openfoam/2.2.1-gnu-4.7.2/openmpi--1.6.3--gnu--4.7.2/BA_WORK/OpenFOAM-2.2.1/src/OpenFOAM/lnInclude/GeometricBoundaryField.C at line 198.

FOAM exiting

--------------------------------------------------------------------------
WARNING: It appears that your OpenFabrics subsystem is configured to only
allow registering part of your physical memory.  This can cause MPI jobs to
run with erratic performance, hang, and/or crash.

This may be caused by your OpenFabrics vendor limiting the amount of
physical memory that can be registered.  You should investigate the
relevant Linux kernel module parameters that control how much physical
memory can be registered, and increase them to allow registering all
physical memory on your machine.

See this Open MPI FAQ item for more information on these Linux kernel module
parameters:

    http://www.open-mpi.org/faq/?category=openfabrics#ib-locked-pages

  Local host:              node119
  Registerable memory:     32768 MiB
  Total memory:            48434 MiB

Your MPI job will continue, but may be behave poorly and/or hang.
--------------------------------------------------------------------------
[0] 
[0] 
[0] --> FOAM FATAL IO ERROR: 
[0] cannot find file
[0] 
[0] file: myCase/processor0/0/p at line 0.
[0] 
[0]     From function regIOobject::readStream()
[0]     in file db/regIOobject/regIOobjectRead.C at line 73.
[0] 
FOAM parallel run exiting
[0] 

%[...]
% The same for every processor
%[...]

[9] 
[9] 
[9] --> FOAM FATAL IO ERROR: 
[9] cannot find file
[9] 
[9] file: myCase/processor9/0/p at line 0.
[9] 
[9]     From function regIOobject::readStream()
[9]     in file db/regIOobject/regIOobjectRead.C at line 73.
[9] 
FOAM parallel run exiting
[9] 
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 11 in communicator MPI_COMM_WORLD 
with errorcode 1.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun has exited due to process rank 24 with PID 5991 on
node node128ib0 exiting improperly. There are two reasons this could occur:

1. this process did not call "init" before exiting, but others in
the job did. This can cause a job to hang indefinitely while it waits
for all processes to call "init". By rule, if one process calls "init",
then ALL processes must call "init" prior to termination.

2. this process called "init", but exited without calling "finalize".
By rule, all processes that call "init" MUST call "finalize" prior to
exiting or it will be considered an "abnormal termination"

This may have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
--------------------------------------------------------------------------
[node119:09393] 47 more processes have sent help message help-mpi-btl-openib.txt / reg mem limit low
[node119:09393] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
[node119:09393] 47 more processes have sent help message help-mpi-api.txt / mpi-abort


--> FOAM FATAL ERROR: 
No times selected

    From function reconstructPar
    in file reconstructPar.C at line 178.

FOAM exiting
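
Regarding the first fatal error, "Cannot find patchField entry for train_CATIASTL": it means that 0/p (and most likely the other 0/* fields) has no boundaryField entry for the patch that snappyHexMesh created from the STL surface. A sketch of what such an entry could look like is shown below; the zeroGradient type is only an assumption and should be replaced by whatever boundary condition is appropriate for this case:
Code:
// Sketch of a boundaryField block in 0/p covering the snappy-created patch.
// The boundary condition types here are assumptions, not taken from the case.
boundaryField
{
    train_CATIASTL          // patch generated by snappyHexMesh from the STL
    {
        type            zeroGradient;
    }

    // a regular-expression entry can catch several STL patches at once
    //"train_.*"
    //{
    //    type            zeroGradient;
    //}

    // ... existing inlet/outlet/wall entries ...
}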
If you need anything else, please just ask.
Thanks!

Old   April 14, 2021, 20:10
Default Same Problem
  #2
New Member
 
Join Date: Jun 2019
Location: United States
Posts: 15
Hello Jeanp,


I am experiencing the same problem: my decomposePar does not write the field data to the processor* folders. decomposePar runs until it reaches the "Time = 0" line (same as yours), and then I get the exact same errors.

Did you ever find a solution to this problem?

Thanks,
ryanc6

Old   April 15, 2021, 11:23
Default Today It Decided to Work
  #3
New Member
 
Join Date: Jun 2019
Location: United States
Posts: 15
This morning I was able to run decomposePar correctly. I made no changes to decomposeParDict or the other system files; I did, however, rerun my mesh generation (blockMesh together with refineMesh and snappyHexMesh). I did not change any of the mesh generation dicts, I simply cleaned everything and ran it again. checkMesh passed and everything looked good for both mesh generation runs, so I'm not sure what the problem was. I hope this helps anyone else with this problem.
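
For later readers, the clean-and-remesh sequence described above would look roughly like the sketch below; the clean-up commands are assumptions and depend on how the case is organised:
Code:
# Sketch of the remesh-from-scratch sequence (blockMesh + refineMesh + snappyHexMesh).
# The clean-up paths are assumptions; adjust them to the actual case layout.
rm -rf processor* constant/polyMesh    # remove the old decomposition and mesh
blockMesh                              # background mesh
refineMesh -overwrite                  # refine in place
snappyHexMesh -overwrite               # snap to the geometry in place
checkMesh                              # sanity check before decomposing
decomposePar                           # should now write the fields to processor*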

Old   June 18, 2022, 13:01
Default
  #4
New Member
 
Yanjun Tong
Join Date: Jul 2020
Posts: 17
I searched for this problem while the decompose process was stuck at "Time = 0" on the cluster. But after about 30 minutes it started printing "Processor *: field transfer" XD


Tags
cluster, decomposepar, field transfer

