
OpenFOAM extended 4.0 Error with Multinode Set-up

December 18, 2018, 18:55   #1
Wang Shuo (Liweix)
New Member | Join Date: Sep 2018 | Posts: 1
Hi everyone,


I am new to OpenFOAM and am trying to submit a multi-node job on a SLURM cluster. The case is decomposed into 60 processors with the following decomposeParDict file:
Code:
numberOfSubdomains 60;

method          scotch;//clusteredF;

clusteredFCoeffs
{
    method      simpleVan;//hierarchical;//

    numberOfSubdomains 40;

    simpleCoeffs
    {
        n               ( 1 4 1 );
        delta           0.001;
    }

    simpleVanCoeffs
    {
        n               ( 1 4 1 );
        delta           0.001;
    }

    hierarchicalCoeffs
    {
        n               ( 1 4 1 );
        delta           0.001;
        order           xyz;
    }
}

scotchCoeffs
{
//   processorWeights (1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1);
// processorWeights (1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1);
}

simpleCoeffs
{
    n               ( 1 4 1 );
    delta           0.001;
}

hierarchicalCoeffs
{
    n               ( 1 4 1 );
    delta           0.001;
    order           xyz;
}

metisCoeffs
{
    processorWeights ( 1 1 1 1 );
}

manualCoeffs
{
    dataFile        "";
}

distributed     no;

roots           ( );
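For reference, a quick sanity check of the decomposition (a minimal sketch, assuming the case directory is the current working directory) is to run decomposePar and count the processor directories:

Code:
# hedged check: the case should be split into 60 parts
decomposePar -force
ls -d processor* | wc -l    # should print 60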
My submission script looks like this:


Code:
#SBATCH --partition multinode -N 3

#SBATCH --ntasks-per-node=20

#SBATCH --mem=3000mb

#SBATCH --time=20:00

#SBATCH --job-name=simple2

#SBATCH --mail-type=all

#SBATCH --error=Error2

# unload all modules:

module purge
source /.../foam/foam-extend-4.0/etc/bashrc

module load compiler/gnu/7

mpirun -bind-to-core -bycore -report-bindings PFFoam -parallel >log
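Before looking at the solver, a minimal check of what SLURM actually allocated (a sketch, assuming the standard SLURM client tools; replace <jobid> with the id shown by squeue) would be:

Code:
squeue -j <jobid> -o "%.10i %.6D %.6C"                  # job id, node count, CPU count
scontrol show job <jobid> | grep -E 'NumNodes|NumCPUs'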
From squeue I could see that 3 nodes with 60 processors were allocated to my job; however, the job ends with the following error:
Code:
[0]
[0]
[0] --> FOAM FATAL ERROR:
[0] "/.../system/decomposeParDict" specifies 60 processors but job was started with 20 processors.
[0]
FOAM parallel run exiting
[0]
It is a strange error, since the allocated resources and the requested number of processors match. I also tried a single-node setup; it worked, but slowly because of the huge number of cells. This batch job worked perfectly before the cluster update, but OpenMPI 1.8.8 is no longer available on the cluster. I suspected an OpenMPI problem, so I compiled OpenFOAM's ThirdParty packages myself and checked ompi_info:

Code:
  Package: Open MPI blablabla.localdomain Distribution
  Open MPI: 1.8.8
  Open MPI repo revision: v1.8.7-20-g1d53995
   Open MPI release date: Aug 05, 2015
                Open RTE: 1.8.8
  Open RTE repo revision: v1.8.7-20-g1d53995
   Open RTE release date: Aug 05, 2015
                    OPAL: 1.8.8
      OPAL repo revision: v1.8.7-20-g1d53995
       OPAL release date: Aug 05, 2015
                 MPI API: 3.0
            Ident string: 1.8.8
                  Prefix: blablabla/foam/foam-extend-4.0/ThirdParty/packages/openmpi-1.8.8/platforms/linux64GccDPOpt
 Configured architecture: x86_64-pc-linux-gnu
          Configure host: blabla.localdomain
           Configured by: be8830
           Configured on: Fri Dec  7 17:37:45 CET 2018
          Configure host: blabla.localdomain
                Built by: be8830
                Built on: Fri Dec  7 17:43:38 CET 2018
              Built host: blablabla.localdomain
              C bindings: yes
            C++ bindings: yes
             Fort mpif.h: yes (all)
            Fort use mpi: yes (full: ignore TKR)
       Fort use mpi size: deprecated-ompi-info-value
        Fort use mpi_f08: yes
 Fort mpi_f08 compliance: The mpi_f08 module is available, but due to
                          limitations in the /opt/gcc/7/bin/gfortran
                          compiler, does not support the following: array
                          subsections, direct passthru (where possible) to
                          underlying Open MPI's C functionality
  Fort mpi_f08 subarrays: no
           Java bindings: no
  Wrapper compiler rpath: runpath
              C compiler: gcc
     C compiler absolute: /opt/gcc/7/bin/gcc
  C compiler family name: GNU
      C compiler version: 7.2.0
            C++ compiler: g++
   C++ compiler absolute: /opt/gcc/7/bin/g++
           Fort compiler: /opt/gcc/7/bin/gfortran
       Fort compiler abs: 
         Fort ignore TKR: yes (!GCC$ ATTRIBUTES NO_ARG_CHECK ::)
   Fort 08 assumed shape: yes
      Fort optional args: yes
          Fort INTERFACE: yes
    Fort ISO_FORTRAN_ENV: yes
       Fort STORAGE_SIZE: yes
      Fort BIND(C) (all): yes
      Fort ISO_C_BINDING: yes
 Fort SUBROUTINE BIND(C): yes
       Fort TYPE,BIND(C): yes
 Fort T,BIND(C,name="a"): yes
            Fort PRIVATE: yes
          Fort PROTECTED: yes
           Fort ABSTRACT: yes
       Fort ASYNCHRONOUS: yes
          Fort PROCEDURE: yes
           Fort C_FUNLOC: yes
 Fort f08 using wrappers: yes
         Fort MPI_SIZEOF: yes
             C profiling: yes
           C++ profiling: yes
   Fort mpif.h profiling: yes
  Fort use mpi profiling: yes
   Fort use mpi_f08 prof: yes
          C++ exceptions: no
          Thread support: posix (MPI_THREAD_MULTIPLE: no, OPAL support: yes,
                          OMPI progress: no, ORTE progress: yes, Event lib:
                          yes)
           Sparse Groups: no
  Internal debug support: no
  MPI interface warnings: yes
     MPI parameter check: runtime
Memory profiling support: no
Memory debugging support: no
              dl support: yes
   Heterogeneous support: no
 mpirun default --prefix: yes
         MPI I/O support: yes
       MPI_WTIME support: gettimeofday
     Symbol vis. support: yes
   Host topology support: yes
          MPI extensions: 
   FT Checkpoint support: no (checkpoint thread: no)
   C/R Enabled Debugging: no
     VampirTrace support: no
  MPI_MAX_PROCESSOR_NAME: 256
    MPI_MAX_ERROR_STRING: 256
     MPI_MAX_OBJECT_NAME: 64
        MPI_MAX_INFO_KEY: 36
        MPI_MAX_INFO_VAL: 256
       MPI_MAX_PORT_NAME: 1024
  MPI_MAX_DATAREP_STRING: 128
           MCA backtrace: execinfo (MCA v2.0, API v2.0, Component v1.8.8)
            MCA compress: gzip (MCA v2.0, API v2.0, Component v1.8.8)
            MCA compress: bzip (MCA v2.0, API v2.0, Component v1.8.8)
                 MCA crs: none (MCA v2.0, API v2.0, Component v1.8.8)
                  MCA db: print (MCA v2.0, API v1.0, Component v1.8.8)
                  MCA db: hash (MCA v2.0, API v1.0, Component v1.8.8)
                  MCA dl: dlopen (MCA v2.0, API v1.0, Component v1.8.8)
               MCA event: libevent2021 (MCA v2.0, API v2.0, Component v1.8.8)
               MCA hwloc: hwloc191 (MCA v2.0, API v2.0, Component v1.8.8)
                  MCA if: posix_ipv4 (MCA v2.0, API v2.0, Component v1.8.8)
                  MCA if: linux_ipv6 (MCA v2.0, API v2.0, Component v1.8.8)
         MCA installdirs: env (MCA v2.0, API v2.0, Component v1.8.8)
         MCA installdirs: config (MCA v2.0, API v2.0, Component v1.8.8)
              MCA memory: linux (MCA v2.0, API v2.0, Component v1.8.8)
               MCA pstat: linux (MCA v2.0, API v2.0, Component v1.8.8)
                 MCA sec: basic (MCA v2.0, API v1.0, Component v1.8.8)
               MCA shmem: posix (MCA v2.0, API v2.0, Component v1.8.8)
               MCA shmem: mmap (MCA v2.0, API v2.0, Component v1.8.8)
               MCA shmem: sysv (MCA v2.0, API v2.0, Component v1.8.8)
               MCA timer: linux (MCA v2.0, API v2.0, Component v1.8.8)
                 MCA dfs: app (MCA v2.0, API v1.0, Component v1.8.8)
                 MCA dfs: orted (MCA v2.0, API v1.0, Component v1.8.8)
                 MCA dfs: test (MCA v2.0, API v1.0, Component v1.8.8)
              MCA errmgr: default_app (MCA v2.0, API v3.0, Component v1.8.8)
              MCA errmgr: default_orted (MCA v2.0, API v3.0, Component
                          v1.8.8)
              MCA errmgr: default_hnp (MCA v2.0, API v3.0, Component v1.8.8)
              MCA errmgr: default_tool (MCA v2.0, API v3.0, Component v1.8.8)
                 MCA ess: tool (MCA v2.0, API v3.0, Component v1.8.8)
                 MCA ess: singleton (MCA v2.0, API v3.0, Component v1.8.8)
                 MCA ess: env (MCA v2.0, API v3.0, Component v1.8.8)
                 MCA ess: hnp (MCA v2.0, API v3.0, Component v1.8.8)
               MCA filem: raw (MCA v2.0, API v2.0, Component v1.8.8)
             MCA grpcomm: bad (MCA v2.0, API v2.0, Component v1.8.8)
                 MCA iof: mr_hnp (MCA v2.0, API v2.0, Component v1.8.8)
                 MCA iof: hnp (MCA v2.0, API v2.0, Component v1.8.8)
                 MCA iof: tool (MCA v2.0, API v2.0, Component v1.8.8)
                 MCA iof: mr_orted (MCA v2.0, API v2.0, Component v1.8.8)
                 MCA iof: orted (MCA v2.0, API v2.0, Component v1.8.8)
                MCA odls: default (MCA v2.0, API v2.0, Component v1.8.8)
                 MCA oob: tcp (MCA v2.0, API v2.0, Component v1.8.8)
                 MCA plm: rsh (MCA v2.0, API v2.0, Component v1.8.8)
                 MCA plm: isolated (MCA v2.0, API v2.0, Component v1.8.8)
                 MCA ras: loadleveler (MCA v2.0, API v2.0, Component v1.8.8)
                 MCA ras: simulator (MCA v2.0, API v2.0, Component v1.8.8)
               MCA rmaps: resilient (MCA v2.0, API v2.0, Component v1.8.8)
               MCA rmaps: rank_file (MCA v2.0, API v2.0, Component v1.8.8)
               MCA rmaps: seq (MCA v2.0, API v2.0, Component v1.8.8)
               MCA rmaps: ppr (MCA v2.0, API v2.0, Component v1.8.8)
               MCA rmaps: round_robin (MCA v2.0, API v2.0, Component v1.8.8)
               MCA rmaps: lama (MCA v2.0, API v2.0, Component v1.8.8)
               MCA rmaps: staged (MCA v2.0, API v2.0, Component v1.8.8)
               MCA rmaps: mindist (MCA v2.0, API v2.0, Component v1.8.8)
                 MCA rml: oob (MCA v2.0, API v2.0, Component v1.8.8)
              MCA routed: debruijn (MCA v2.0, API v2.0, Component v1.8.8)
              MCA routed: radix (MCA v2.0, API v2.0, Component v1.8.8)
              MCA routed: direct (MCA v2.0, API v2.0, Component v1.8.8)
              MCA routed: binomial (MCA v2.0, API v2.0, Component v1.8.8)
               MCA state: app (MCA v2.0, API v1.0, Component v1.8.8)
               MCA state: orted (MCA v2.0, API v1.0, Component v1.8.8)
               MCA state: hnp (MCA v2.0, API v1.0, Component v1.8.8)
               MCA state: staged_orted (MCA v2.0, API v1.0, Component v1.8.8)
               MCA state: tool (MCA v2.0, API v1.0, Component v1.8.8)
               MCA state: staged_hnp (MCA v2.0, API v1.0, Component v1.8.8)
               MCA state: novm (MCA v2.0, API v1.0, Component v1.8.8)
           MCA allocator: bucket (MCA v2.0, API v2.0, Component v1.8.8)
           MCA allocator: basic (MCA v2.0, API v2.0, Component v1.8.8)
                MCA bcol: ptpcoll (MCA v2.0, API v2.0, Component v1.8.8)
                MCA bcol: basesmuma (MCA v2.0, API v2.0, Component v1.8.8)
                 MCA bml: r2 (MCA v2.0, API v2.0, Component v1.8.8)
                 MCA btl: openib (MCA v2.0, API v2.0, Component v1.8.8)
                 MCA btl: tcp (MCA v2.0, API v2.0, Component v1.8.8)
                 MCA btl: vader (MCA v2.0, API v2.0, Component v1.8.8)
                 MCA btl: self (MCA v2.0, API v2.0, Component v1.8.8)
                 MCA btl: sm (MCA v2.0, API v2.0, Component v1.8.8)
                MCA coll: libnbc (MCA v2.0, API v2.0, Component v1.8.8)
                MCA coll: basic (MCA v2.0, API v2.0, Component v1.8.8)
                MCA coll: sm (MCA v2.0, API v2.0, Component v1.8.8)
                MCA coll: inter (MCA v2.0, API v2.0, Component v1.8.8)
                MCA coll: tuned (MCA v2.0, API v2.0, Component v1.8.8)
                MCA coll: self (MCA v2.0, API v2.0, Component v1.8.8)
                MCA coll: hierarch (MCA v2.0, API v2.0, Component v1.8.8)
                MCA coll: ml (MCA v2.0, API v2.0, Component v1.8.8)
                 MCA dpm: orte (MCA v2.0, API v2.0, Component v1.8.8)
                MCA fbtl: posix (MCA v2.0, API v2.0, Component v1.8.8)
               MCA fcoll: two_phase (MCA v2.0, API v2.0, Component v1.8.8)
               MCA fcoll: ylib (MCA v2.0, API v2.0, Component v1.8.8)
               MCA fcoll: static (MCA v2.0, API v2.0, Component v1.8.8)
               MCA fcoll: dynamic (MCA v2.0, API v2.0, Component v1.8.8)
               MCA fcoll: individual (MCA v2.0, API v2.0, Component v1.8.8)
                  MCA fs: ufs (MCA v2.0, API v2.0, Component v1.8.8)
                  MCA io: romio (MCA v2.0, API v2.0, Component v1.8.8)
                  MCA io: ompio (MCA v2.0, API v2.0, Component v1.8.8)
               MCA mpool: sm (MCA v2.0, API v2.0, Component v1.8.8)
               MCA mpool: grdma (MCA v2.0, API v2.0, Component v1.8.8)
                 MCA osc: rdma (MCA v2.0, API v3.0, Component v1.8.8)
                 MCA osc: sm (MCA v2.0, API v3.0, Component v1.8.8)
                 MCA pml: v (MCA v2.0, API v2.0, Component v1.8.8)
                 MCA pml: ob1 (MCA v2.0, API v2.0, Component v1.8.8)
                 MCA pml: bfo (MCA v2.0, API v2.0, Component v1.8.8)
                 MCA pml: cm (MCA v2.0, API v2.0, Component v1.8.8)
              MCA pubsub: orte (MCA v2.0, API v2.0, Component v1.8.8)
              MCA rcache: vma (MCA v2.0, API v2.0, Component v1.8.8)
                 MCA rte: orte (MCA v2.0, API v2.0, Component v1.8.8)
                MCA sbgp: basesmsocket (MCA v2.0, API v2.0, Component v1.8.8)
                MCA sbgp: p2p (MCA v2.0, API v2.0, Component v1.8.8)
                MCA sbgp: basesmuma (MCA v2.0, API v2.0, Component v1.8.8)
            MCA sharedfp: lockedfile (MCA v2.0, API v2.0, Component v1.8.8)
            MCA sharedfp: sm (MCA v2.0, API v2.0, Component v1.8.8)
            MCA sharedfp: individual (MCA v2.0, API v2.0, Component v1.8.8)
                MCA topo: basic (MCA v2.0, API v2.1, Component v1.8.8)
           MCA vprotocol: pessimist (MCA v2.0, API v2.0, Component v1.8.8)
It is exactly the same version that my OpenFOAM requires. I also tried this command in the submission script:
Code:
mpirun -np 60 PFoam -parallel > log_phasefieldf1oam
This works, but it is even slower than the 20-processor setup, which was not the case before.
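For what it is worth, a small diagnostic that could be run from the same submission script (a sketch, assuming Open MPI's mpirun and the usual SLURM variables are available inside the job) to see how many ranks actually start and which MPI is picked up:

Code:
echo "SLURM_NTASKS=$SLURM_NTASKS  SLURM_JOB_NUM_NODES=$SLURM_JOB_NUM_NODES"
which mpirun && ompi_info | grep "Open MPI:"             # confirm the MPI on the PATH
mpirun -np "$SLURM_NTASKS" hostname | sort | uniq -c     # expect 20 ranks per node, 60 in total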
I have omitted some names and addresses that are not important; I am sure they are correct. I would really appreciate any advice on this error.
Thanks in advance,
Liweix

February 18, 2020, 01:50   #2
Re: OpenFOAM extended 4.0 Error with Multinode Set-up
Yoshiaki SENDA (panda1100)
New Member | Join Date: Jun 2014 | Location: Kyoto, Japan | Posts: 3
I ran into what is probably the same problem, and I solved it with the following steps:


+ use the absolute path of the mpirun executable
+ pass the mpirun options manually
+ propagate all environment variables to all nodes with the -x option of mpirun; in my job script, EVAR contains the names of all environment variables to forward



My job script:


Code:
#!/bin/bash
#SBATCH -o slurm_log_%j.out
#SBATCH -e slurm_log_%j.err
#SBATCH --ntasks=48                  # Number of MPI tasks (i.e. processes)
#SBATCH --nodes=2                    # Maximum number of nodes to be allocated
#SBATCH --ntasks-per-node=24         # Maximum number of tasks on each node
#SBATCH --ntasks-per-socket=12        # Maximum number of tasks on each socket
#SBATCH --distribution=cyclic:cyclic

source /opt/foam/foam-extend-4.0/etc/bashrc
MACHINEFILE="nodes.$SLURM_JOB_ID"
srun -l /bin/hostname | sort -n | awk '{print $2}' > $MACHINEFILE
ENVLIST=/opt/foam/foam-extend-4.0/etc/env
EVAR=$(for e in $(cat $ENVLIST); do echo "-x $e " ; done)

cd $SLURM_SUBMIT_DIR

# Mesh-related operations
blockMesh

# Solving
decomposePar -force
MPIRUN=`which mpirun`
$MPIRUN -n $SLURM_NTASKS \
    $EVAR \
    -npernode $SLURM_NTASKS_PER_NODE \
    --machinefile $MACHINEFILE \
    -wdir $SLURM_SUBMIT_DIR \
    rhoPisoFoam -parallel
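For completeness, submitting the job and checking the generated machinefile could look like this (a sketch; run_job.sh is just a placeholder name for the script above, and 24 comes from --ntasks-per-node):

Code:
sbatch run_job.sh            # placeholder name for the job script above
squeue -u "$USER"            # wait until the job is running
sort nodes.* | uniq -c       # each allocated host should appear 24 times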

My support file (/opt/foam/foam-extend-4.0/etc/env), which lists the environment variables to pass to all nodes:


Code:
FOAM_TUTORIALS
OPENMPI_COMPILE_FLAGS
LD_LIBRARY_PATH
MPI_BUFFER_SIZE
WM_PROJECT_INST_DIR
WM_THIRD_PARTY_USE_LIBCCMIO_261
OPENMPI_LINK_FLAGS
FOAM_RUN
WM_THIRD_PARTY_DIR
SCOTCH_INCLUDE_DIR
WM_LDFLAGS
OPENMPI_BIN_DIR
PARAVIEW_BIN_DIR
WM_THIRD_PARTY_USE_HWLOC_1101
WM_THIRD_PARTY_USE_METIS_510
OPENMPI_INCLUDE_DIR
FOAM_LIB
METIS_INCLUDE_DIR
FOAM_APP
WM_CXXFLAGS
FOAM_UTILITIES
METIS_BIN_DIR
FOAM_APPBIN
HWLOC_BIN_DIR
WM_THIRD_PARTY_USE_PARMGRIDGEN_10
SCOTCH_BIN_DIR
WM_THIRD_PARTY_USE_SCOTCH_604
WM_PRECISION_OPTION
PARMGRIDGEN_LIB_DIR
FOAM_SOLVERS
MODULES_CMD
HWLOC_DIR
MESQUITE_INCLUDE_DIR
ENV
FOAM_DEV
WM_CC
PARMETIS_DIR
FOAM_USER_APPBIN
WM_THIRD_PARTY_USE_OPENMPI_188
PARMETIS_LIB_DIR
WM_PROJECT_USER_DIR
WM_OPTIONS
WM_LINK_LANGUAGE
PARMGRIDGEN_DIR
WM_OSTYPE
OPAL_PREFIX
WM_PROJECT
FOAM_LIBBIN
WM_THIRD_PARTY_USE_MESQUITE_212
MPI_ARCH_PATH
WM_CFLAGS
PARAVIEW_INCLUDE_DIR
MESQUITE_LIB_DIR
WM_ARCH
SCOTCH_DIR
MESQUITE_BIN_DIR
FOAM_SRC
PINC
PYFOAM_DIR
PYFOAM_SITE_DIR
FOAM_SITE_APPBIN
PARAVIEW_LIB_DIR
METIS_DIR
FOAM_TEST_HARNESS_DIR
PARMGRIDGEN_INCLUDE_DIR
MPI_HOME
WM_FORK
FOAM_SITE_LIBBIN
WM_COMPILER_LIB_ARCH
WM_COMPILER
WM_THIRD_PARTY_USE_PYFOAM_064
WM_DIR
WM_ARCH_OPTION
WM_PROJECT_VERSION
WM_MPLIB
FOAM_INST_DIR
WM_COMPILE_OPTION
PARAVIEW_VERSION
PYTHONPATH
FOAM_SITE_DIR
PLIBS
PARAVIEW_DIR
WM_CXX
WM_NCOMPPROCS
FOAM_USER_LIBBIN
WM_THIRD_PARTY_USE_PARMETIS_403
MODULEPATH
PV_PLUGIN_PATH
SCOTCH_LIB_DIR
PARMGRIDGEN_BIN_DIR
FOAM_JOB_DIR
WM_PROJECT_DIR
OPENMPI_DIR
PATH
PARMETIS_BIN_DIR
PARMETIS_INCLUDE_DIR
METIS_LIB_DIR
OPENMPI_LIB_DIR
WM_THIRD_PARTY_USE_PARAVIEW_440
MESQUITE_DIR
The expanded EVAR (the output of echo $EVAR) looks like this. It includes some unnecessary variables, but it works for me. I got the variable list by running printenv after sourcing /opt/foam/foam-extend-4.0/etc/bashrc; a sketch for regenerating it follows the listing below.



Code:
-x FOAM_TUTORIALS -x OPENMPI_COMPILE_FLAGS -x LD_LIBRARY_PATH -x MPI_BUFFER_SIZE -x WM_PROJECT_INST_DIR -x WM_THIRD_PARTY_USE_LIBCCMIO_261 -x OPENMPI_LINK_FLAGS -x FOAM_RUN -x WM_THIRD_PARTY_DIR -x SCOTCH_INCLUDE_DIR -x WM_LDFLAGS -x OPENMPI_BIN_DIR -x PARAVIEW_BIN_DIR -x WM_THIRD_PARTY_USE_HWLOC_1101 -x WM_THIRD_PARTY_USE_METIS_510 -x OPENMPI_INCLUDE_DIR -x FOAM_LIB -x METIS_INCLUDE_DIR -x FOAM_APP -x WM_CXXFLAGS -x FOAM_UTILITIES -x METIS_BIN_DIR -x FOAM_APPBIN -x HWLOC_BIN_DIR -x WM_THIRD_PARTY_USE_PARMGRIDGEN_10 -x SCOTCH_BIN_DIR -x WM_THIRD_PARTY_USE_SCOTCH_604 -x WM_PRECISION_OPTION -x PARMGRIDGEN_LIB_DIR -x FOAM_SOLVERS -x MODULES_CMD -x HWLOC_DIR -x MESQUITE_INCLUDE_DIR -x ENV -x FOAM_DEV -x WM_CC -x PARMETIS_DIR -x FOAM_USER_APPBIN -x WM_THIRD_PARTY_USE_OPENMPI_188 -x PARMETIS_LIB_DIR -x WM_PROJECT_USER_DIR -x WM_OPTIONS -x WM_LINK_LANGUAGE -x PARMGRIDGEN_DIR -x WM_OSTYPE -x OPAL_PREFIX -x WM_PROJECT -x FOAM_LIBBIN -x WM_THIRD_PARTY_USE_MESQUITE_212 -x MPI_ARCH_PATH -x WM_CFLAGS -x PARAVIEW_INCLUDE_DIR -x MESQUITE_LIB_DIR -x WM_ARCH -x SCOTCH_DIR -x MESQUITE_BIN_DIR -x FOAM_SRC -x PINC -x PYFOAM_DIR -x PYFOAM_SITE_DIR -x FOAM_SITE_APPBIN -x PARAVIEW_LIB_DIR -x METIS_DIR -x FOAM_TEST_HARNESS_DIR -x PARMGRIDGEN_INCLUDE_DIR -x MPI_HOME -x WM_FORK -x FOAM_SITE_LIBBIN -x WM_COMPILER_LIB_ARCH -x WM_COMPILER -x WM_THIRD_PARTY_USE_PYFOAM_064 -x WM_DIR -x WM_ARCH_OPTION -x WM_PROJECT_VERSION -x WM_MPLIB -x FOAM_INST_DIR -x WM_COMPILE_OPTION -x PARAVIEW_VERSION -x PYTHONPATH -x FOAM_SITE_DIR -x PLIBS -x PARAVIEW_DIR -x WM_CXX -x WM_NCOMPPROCS -x FOAM_USER_LIBBIN -x WM_THIRD_PARTY_USE_PARMETIS_403 -x MODULEPATH -x PV_PLUGIN_PATH -x SCOTCH_LIB_DIR -x PARMGRIDGEN_BIN_DIR -x FOAM_JOB_DIR -x WM_PROJECT_DIR -x OPENMPI_DIR -x PATH -x PARMETIS_BIN_DIR -x PARMETIS_INCLUDE_DIR -x METIS_LIB_DIR -x OPENMPI_LIB_DIR -x WM_THIRD_PARTY_USE_PARAVIEW_440 -x MESQUITE_DIR
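If you need to rebuild the list on another machine, a minimal sketch following the same printenv approach (assuming a bash shell and write access to the foam-extend etc directory) could be:

Code:
# hedged sketch: regenerate the variable-name list after sourcing the bashrc
source /opt/foam/foam-extend-4.0/etc/bashrc
printenv | cut -d= -f1 | sort > /opt/foam/foam-extend-4.0/etc/env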


Tags
multi processor, openfoam 4.0 extend, openmpi 1.8

