
Compiling OpenFOAM on hpc-fe.gbar.dtu.dk

Old   June 10, 2011, 04:41
Default Compiling OpenFOAM on hpc-fe.gbar.dtu.dk
  #1
New Member
 
Kasper Kærgaard
Join Date: May 2010
Posts: 5
Hi all,
I have successfully compiled OpenFOAM-1.7.1 on the DTU cluster hpc-fe.gbar.dtu.dk.
Follow the normal instructions for compiling OpenFOAM, but pay attention to the following points:

The default gcc version on the cluster is too old (most of OpenFOAM compiles, but some libraries will fail to build), so you must configure OpenFOAM to use gcc44 and g++44 instead of gcc and g++.

In OpenFOAM-1.7.1/etc/bashrc, change the values of WM_CC and WM_CXX to gcc44 and g++44, respectively.
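For reference, the edited assignments should end up looking roughly like this (treat it as a sketch; the exact form and location of these variables can differ between OpenFOAM versions):

export WM_CC=gcc44
export WM_CXX=g++44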

Further, you must edit the two files c and c++ in OpenFOAM-1.7.1/wmake/rules/linux64Gcc/ and change any occurrence of "gcc" and "g++" into "gcc44" and "g++44", respectively.
This information was found at:
http://consultancy.edvoncken.net/ind...lding_OpenFOAM
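If you prefer not to edit the rule files by hand, the same substitution can be scripted with sed. This is only a sketch, assuming stock rule files; keep backups and check the result before compiling:

cd $WM_PROJECT_DIR/wmake/rules/linux64Gcc
cp c c.orig && cp c++ c++.orig      # keep backups of the stock rules
sed -i 's/\bgcc\b/gcc44/g' c        # compiler name in the C rules
sed -i 's/g++/g++44/g' c++          # compiler name in the C++ rules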

Furthermore, you must configure OpenFOAM to use the system MPI instead of the one that ships with OpenFOAM.
First, tell OpenFOAM to use the system MPI: in OpenFOAM-1.7.1/etc/bashrc,
set the variable WM_MPLIB to SYSTEMOPENMPI. The line you edit ends up looking like this:
: ${WM_MPLIB:=SYSTEMOPENMPI}; export WM_MPLIB

Next, tell OpenFOAM where to find the system MPI. In etc/settings.sh, right after these lines:
SYSTEMOPENMPI)
# use the system installed openmpi, get library directory via mpicc
mpi_version=openmpi-system
add these two lines:
export MPI_HOME=/opt/SUNWhpc/HPC8.2.1c/gnu/
export MPI_ARCH_PATH=/opt/SUNWhpc/HPC8.2.1c/gnu/
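With those two exports added, the start of the SYSTEMOPENMPI branch in etc/settings.sh looks roughly like this (the indentation and the remaining lines of the branch are from a stock install and may differ slightly in your copy; the paths are specific to hpc-fe.gbar.dtu.dk):

SYSTEMOPENMPI)
    # use the system installed openmpi, get library directory via mpicc
    mpi_version=openmpi-system
    export MPI_HOME=/opt/SUNWhpc/HPC8.2.1c/gnu/
    export MPI_ARCH_PATH=/opt/SUNWhpc/HPC8.2.1c/gnu/
    # ... rest of the branch unchanged ...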

In order to use the system MPI you must call:

module load mpi

at the beginning of your session, i.e. either in an interactive session or in your run file.
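A quick sanity check after loading the module and sourcing the OpenFOAM environment is to confirm that the system mpirun is the one being picked up. This is just a sketch; the exact paths and versions reported depend on the cluster:

module load mpi
. $HOME/OpenFOAM/OpenFOAM-1.7.1/etc/bashrc
which mpirun          # should point at the system Open MPI, not the ThirdParty one
mpirun --version      # prints the Open MPI version
echo $MPI_ARCH_PATH   # should show the path set in etc/settings.sh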
My run file for submitting to the queue looks like this (I inherited it from NGJ):

#! /bin/sh
# <SUN Grid Engine>
# =======================
# standard start options, standardized job-name/output
#$ -S /bin/sh -cwd -j yes -N test1 -o log.run$JOB_ID
# =======================
# </SUN Grid Engine>
# <environ>
# ---------------------------------------------------------------------------
# : ${SGE_ROOT:=/opt/sge/sge6_2u5/}
# : ${SGE_CELL:=default}
# for i in $SGE_ROOT/$SGE_CELL/site/environ; do [ -f $i ] && echo "source $i" && . $i; done
# PATH=$HOME/bin:/home/ngja/OpenFOAM/ThirdParty/cmake-2.6.4/build/bin/:$PATH
FOAM_SIGFPE=1 # halts on NaN / floating point exceptions
module load mpi
. $HOME/OpenFOAM/OpenFOAM-1.7.1/etc/bashrc
# ---------------------------------------------------------------------------
# </environ>
APPLICATION=waveFoam17

WHICH_APP=`echo $APPLICATION | cut -f1 -d" "`; WHICH_APP=`which $WHICH_APP`;
caseName=${PWD##*/}
jobName=$caseName
# avoid generic names
case "$jobName" in
    foam | OpenFOAM )
        jobName=$(dirname $PWD)
        jobName=$(basename $jobName)
        ;;
esac

cat<<PRINT
(**) job $jobName
(**) pwd ${PWD/$HOME/~}
(II) job_id $JOB_ID
(II) queue ${QUEUE:-NULL}
(II) host ${HOSTNAME:-NULL}
(II) slots ${NSLOTS:-1}
(II) foam ${WM_PROJECT_VERSION}
(II) App ${WHICH_APP/$HOME/~}
PRINT

# parallel/serial
if [ "$NSLOTS" -gt 1 ]; then
    mpirun=mpirun
    case "$PE" in
        ompi)
            echo "(II) mpi $PE-${OPENMPI_VERSION}"
            # openmpi parameters
            ## mpirun="$mpirun --mca mpi_yield_when_idle 1"
            ;;
    esac
    mpirun="$mpirun --mca btl_openib_verbose 1 --mca btl ^tcp ${APPLICATION} -parallel"

    echo "(II) mpirun /usr/mpi/gcc/openmpi-1.4.1/bin/mpirun"
    echo "(II) cmd $mpirun"
    exec $mpirun
else
    echo "(II) cmd ${APPLICATION}"
    ${APPLICATION}
fi
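To actually submit the run file, something along these lines should work. The parallel environment name ("ompi") and the slot count are cluster-specific assumptions, and "runFile" stands for whatever you named the script above; check the available parallel environments with qconf -spl:

qsub -pe ompi 8 runFile   # parallel job requesting 8 slots
qsub runFile              # serial job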

Kind regards Kasper Kærgaard

Old   June 16, 2011, 02:33
Default
  #2
Senior Member
 
Balkrishna Patankar
Join Date: Mar 2009
Location: Pune
Posts: 123
Hi,
I am using OpenFOAM on an Intel cluster. I compiled it successfully with the OpenMPI that ships with OpenFOAM; however, it gave the following error on running a case:
Code:
cn052.2077ipath_wait_for_device: The /dev/ipath device failed to appear after 30.0 seconds: Connection timed out
cn052.2077PSM Could not find an InfiniPath Unit on device /dev/ipath (30s elapsed) (err=21)
--------------------------------------------------------------------------
PSM was unable to open an endpoint. Please make sure that the network link is
active on the node and the hardware is functioning. 

  Error: PSM Could not find an InfiniPath Unit
--------------------------------------------------------------------------
cn052.2081ipath_wait_for_device: The /dev/ipath device failed to appear after 30.0 seconds: Connection timed out
cn052.2081PSM Could not find an InfiniPath Unit on device /dev/ipath (30s elapsed) (err=21)
cn052.2079ipath_wait_for_device: The /dev/ipath device failed to appear after 30.0 seconds: Connection timed out
cn052.2079PSM Could not find an InfiniPath Unit on device /dev/ipath (30s elapsed) (err=21)
cn052.2078ipath_wait_for_device: The /dev/ipath device failed to appear after 30.0 seconds: Connection timed out
cn052.2078PSM Could not find an InfiniPath Unit on device /dev/ipath (30s elapsed) (err=21)
cn052.2084ipath_wait_for_device: The /dev/ipath device failed to appear after 30.0 seconds: Connection timed out
cn052.2084PSM Could not find an InfiniPath Unit on device /dev/ipath (30s elapsed) (err=21)
cn052.2083ipath_wait_for_device: The /dev/ipath device failed to appear after 30.0 seconds: Connection timed out
cn052.2083PSM Could not find an InfiniPath Unit on device /dev/ipath (30s elapsed) (err=21)
cn052.2088ipath_wait_for_device: The /dev/ipath device failed to appear after 30.0 seconds: Connection timed out
cn052.2088PSM Could not find an InfiniPath Unit on device /dev/ipath (30s elapsed) (err=21)
cn052.2080ipath_wait_for_device: The /dev/ipath device failed to appear after 30.0 seconds: Connection timed out
cn052.2080PSM Could not find an InfiniPath Unit on device /dev/ipath (30s elapsed) (err=21)
cn052.2086ipath_wait_for_device: The /dev/ipath device failed to appear after 30.0 seconds: Connection timed out
cn052.2086PSM Could not find an InfiniPath Unit on device /dev/ipath (30s elapsed) (err=21)
cn052.2087ipath_wait_for_device: The /dev/ipath device failed to appear after 30.0 seconds: Connection timed out
cn052.2087PSM Could not find an InfiniPath Unit on device /dev/ipath (30s elapsed) (err=21)
cn052.2085ipath_wait_for_device: The /dev/ipath device failed to appear after 30.0 seconds: Connection timed out
cn052.2085PSM Could not find an InfiniPath Unit on device /dev/ipath (30s elapsed) (err=21)
cn052.2082ipath_wait_for_device: The /dev/ipath device failed to appear after 30.0 seconds: Connection timed out
cn052.2082PSM Could not find an InfiniPath Unit on device /dev/ipath (30s elapsed) (err=21)
--------------------------------------------------------------------------
It looks like MPI_INIT failed for some reason; your parallel process is
likely to abort.  There are many reasons that a parallel process can
fail during MPI_INIT; some of which are due to configuration or environment
problems.  This failure appears to be an internal failure; here's some
additional information (which may only be relevant to an Open MPI
developer):

  PML add procs failed
  --> Returned "Error" (-1) instead of "Success" (0)
--------------------------------------------------------------------------
*** An error occurred in MPI_Init
*** before MPI was initialized
*** MPI_ERRORS_ARE_FATAL (your MPI job will now abort)
[cn052:2077] Abort before MPI_INIT completed successfully; not able to guarantee that all other processes were killed!
*** An error occurred in MPI_Init
*** before MPI was initialized
*** MPI_ERRORS_ARE_FATAL (your MPI job will now abort)
[cn052:2079] Abort before MPI_INIT completed successfully; not able to guarantee that all other processes were killed!
*** An error occurred in MPI_Init
*** before MPI was initialized
*** MPI_ERRORS_ARE_FATAL (your MPI job will now abort)
[cn052:2082] Abort before MPI_INIT completed successfully; not able to guarantee that all other processes were killed!
*** An error occurred in MPI_Init
*** before MPI was initialized
*** MPI_ERRORS_ARE_FATAL (your MPI job will now abort)
[cn052:2083] Abort before MPI_INIT completed successfully; not able to guarantee that all other processes were killed!
*** An error occurred in MPI_Init
*** before MPI was initialized
*** MPI_ERRORS_ARE_FATAL (your MPI job will now abort)
[cn052:2080] Abort before MPI_INIT completed successfully; not able to guarantee that all other processes were killed!
*** An error occurred in MPI_Init
*** before MPI was initialized
*** MPI_ERRORS_ARE_FATAL (your MPI job will now abort)
[cn052:2081] Abort before MPI_INIT completed successfully; not able to guarantee that all other processes were killed!
--------------------------------------------------------------------------
mpirun has exited due to process rank 0 with PID 2077 on
node cn052 exiting without calling "finalize". This may
have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
--------------------------------------------------------------------------
[cn052:02074] 23 more processes have sent help message help-mtl-psm.txt / unable to open endpoint
[cn052:02074] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
[cn052:02074] 7 more processes have sent help message help-mpi-runtime / mpi_init:startup:internal-failure
*** An error occurred in MPI_Init
*** before MPI was initialized
*** MPI_ERRORS_ARE_FATAL (your MPI job will now abort)
[cn052:2088] Abort before MPI_INIT completed successfully; not able to guarantee that all other processes were killed!
*** An error occurred in MPI_Init
*** before MPI was initialized
*** MPI_ERRORS_ARE_FATAL (your MPI job will now abort)
[cn052:2087] Abort before MPI_INIT completed successfully; not able to guarantee that all other processes were killed!
I decided to use the above procedure, but with the following changes:
I changed $WM_MPLIB in etc/bashrc in the OpenFOAM folder and added the two lines to etc/settings.sh as follows:
Code:
In etc/bashrc
${WM_MPLIB:=SYSTEMOPENMPI};  //etc/bashrc

In etc/settings.sh
export MPI_HOME=$I_MPI_ROOT
export MPI_ARCH_PATH=$I_MPI_ROOT
I get the following error on sourcing the ~/.bashrc file:
Code:
/usr/lib/gcc/x86_64-redhat-linux/4.1.2/../../../../lib64/crt1.o: In function `_start':
(.text+0x20): undefined reference to `main'
collect2: ld returned 1 exit status
/usr/lib/gcc/x86_64-redhat-linux/4.1.2/../../../../lib64/crt1.o: In function `_start':
(.text+0x20): undefined reference to `main'
collect2: ld returned 1 exit status
Another thing:
Code:
module load mpi
does not work, saying "module: command not found".

Could you help me out with sorting out the above problems?

Thanks,
Balkrishna
