CFD Online (www.cfd-online.com)
Forums > OpenFOAM > OpenFOAM Installation

OpenFOAM 1.7.x using Intel compiler and MVAPICH2
May 21, 2011, 11:00 | #1 | Kris (kpsl), New Member
Dear Foamers,

I've been searching the forum for days, but I can't seem to find a thread with the answer to my problem.

I am trying to compile the latest OpenFOAM 1.7.x (git repository) using the Intel compiler and MVAPICH2.

I have modified my bashrc to read:

Code:
: ${WM_COMPILER:=Icc}; export WM_COMPILER

: ${WM_MPLIB:=MPI-MVAPICH2}; export WM_MPLIB
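As a side note for anyone copying these lines: the `: ${VAR:=default}` idiom only assigns when the variable is unset (or empty), so a value already exported in the environment wins over the default. A minimal sketch:

```shell
# If the variable is already set, the := default is ignored:
WM_COMPILER=Gcc
: ${WM_COMPILER:=Icc}; export WM_COMPILER
echo $WM_COMPILER    # prints Gcc -- the preset value is kept

# If it is unset, the default kicks in:
unset WM_COMPILER
: ${WM_COMPILER:=Icc}; export WM_COMPILER
echo $WM_COMPILER    # prints Icc
```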
Also, I have added this to my settings.sh:

Code:
MPI-MVAPICH2)
export MPI_HOME=/sw/comm/mvapich2/1.5.0-intel
export MPI_ARCH_PATH=$MPI_HOME
_foamAddPath $MPI_ARCH_PATH/bin
_foamAddLib     $MPI_ARCH_PATH/lib
export FOAM_MPI_LIBBIN=$FOAM_LIBBIN/mvapich2
;;
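For readers unfamiliar with the helpers used above, here is a minimal sketch of what `_foamAddPath`/`_foamAddLib` do (behaviour assumed from the stock OpenFOAM etc/settings.sh, not copied from it): they prepend a directory to PATH or LD_LIBRARY_PATH so this particular MVAPICH2 install is found first.

```shell
# Assumed behaviour of the OpenFOAM helper functions: prepend to the
# search paths so the chosen MPI's wrappers and libraries take priority.
_foamAddPath() { export PATH="$1:$PATH"; }
_foamAddLib()  { export LD_LIBRARY_PATH="$1:$LD_LIBRARY_PATH"; }

MPI_ARCH_PATH=/sw/comm/mvapich2/1.5.0-intel
_foamAddPath "$MPI_ARCH_PATH/bin"
_foamAddLib  "$MPI_ARCH_PATH/lib"
```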
Furthermore, I created a file in wmake/rules/linux64Icc named mplibMPI-MVAPICH2 that contains:

Code:
PFLAGS     = -DMPICH_SKIP_MPICXX
PINC       = -I$(MPI_ARCH_PATH)/include
PLIBS      = -L$(MPI_ARCH_PATH)/lib -lmpich -lrt
I am not sure whether the contents of this file are correct, but it seems to work.

For the record, I used the Intel compilers (icc and ifort) version 12.0.0 and gcc 4.5.2.

I get many warnings during compilation, all of them similar to this one:

Code:
/home/kris/OpenFOAM/OpenFOAM-1.7.x/src/OpenFOAM/lnInclude/pTraits.H(71): warning #597: "Foam::pTraits<PrimitiveType>::operator Foam::symmTensor() const [with PrimitiveType=Foam::symmTensor]" will not be called for implicit or explicit conversions
          operator PrimitiveType() const
          ^
          detected during:
            instantiation of class "Foam::pTraits<PrimitiveType> [with PrimitiveType=Foam::symmTensor]" at line 82 of "/home/kris/OpenFOAM/OpenFOAM-1.7.x/src/OpenFOAM/lnInclude/dimensionedType.H"
            instantiation of class "Foam::dimensioned<Type> [with Type=Foam::symmTensor]" at line 182 of "/home/kris/OpenFOAM/OpenFOAM-1.7.x/src/OpenFOAM/lnInclude/transform.H"
Upon completion I notice that barely half the solvers have managed to compile.

Note that I have also compiled using GCC and MVAPICH2 and this worked perfectly. Thus, the problem must lie with the Intel compiler. Am I using the wrong version? Or is there something else I have missed?

Any help would be greatly appreciated.

Last edited by kpsl; May 21, 2011 at 11:19.

May 22, 2011, 03:59 | #2 | Bruno Santos (wyldckat), Retired Super Moderator
Greetings Kris,

The error you are getting is very similar to this one: http://www.openfoam.com/mantisbt/view.php?id=98&nbn=3
A couple of solutions are listed there, so you might want to give them a try!

Best regards,
Bruno

May 22, 2011, 05:45 | #3 | Kris (kpsl), New Member
Hi Bruno,

Thank you for your reply.

I saw that post; in fact, I copied and pasted the error from there, since I don't have access to my log files from home.

I noticed that the -xT warning no longer occurs in the latest 1.7.x release (it does occur a lot in 1.7.1). However, the compilation still gives the warning I originally posted many times.

I am not sure where to apply -std=c++0x. Can I do it globally or do I need to go through all of the wmake files? Sorry, my compiling skills are not that developed.
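For later readers with the same question: the global route appears to be the compiler rules file rather than the per-application Make files. A sketch only; the file location is taken from the wmake/rules/linux64Icc directory mentioned above, and the exact contents of the CC line are assumed, not copied from a real 1.7.x tree:

```make
# wmake/rules/linux64Icc/c++ (assumed location) defines the C++ command
# used by every wmake build; appending the flag there applies it globally.
# Keep whatever other flags are already on the CC line.
CC          = icpc -std=c++0x
```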

May 22, 2011, 05:57 | #4 | Bruno Santos (wyldckat), Retired Super Moderator
Hi Kris,

Mmm, I just remembered a Japanese blog that has a few more instructions: http://www.geocities.co.jp/SiliconVa..._compiler.html
You can run it through Google Translate to sort out a few more details: http://translate.google.com/translat...n&hl=&ie=UTF-8
The interesting detail on that blog is that icc 12 got beaten by gcc 4.5...

Best regards,
Bruno

May 22, 2011, 06:08 | #5 | Kris (kpsl), New Member
Hi Bruno,

Thank you for the great hint! I will try this first thing tomorrow.

Interesting that gcc beats icc. I guess I will compare the two myself and post my results ASAP.

Kind regards,
Kris

May 24, 2011, 05:14 | #6 | Kris (kpsl), New Member
So, I managed to compile OpenFOAM without problems using the Intel compiler and MVAPICH2 thanks to Bruno's hint.

It runs fine in serial mode but it doesn't seem to work in parallel on the cluster I am using.

Whenever I submit a job I get the message:

Code:
mpiexec: Warning: tasks 0-63 exited before completing MPI startup
No further explanation is given as to what caused the error.
I will ask my network admin if he knows a solution.
The strange thing is that when compiling with gcc and MVAPICH2 using the exact same MPI settings, everything works fine.
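One quick check in a situation like this is to confirm which MPI library a solver binary is actually dynamically linked against, since a binary built against the wrong (or no) mpich build is a common cause of silent startup failures. A sketch; the helper name is made up, and `$FOAM_APPBIN/rhoPimpleFoam` is just an example path:

```shell
# Hypothetical helper: list the mpich libraries a binary links against,
# or report that none are linked at all.
check_mpi_link() {
    ldd "$1" | grep -i mpich || echo "no MPI library linked"
}

# e.g. check_mpi_link $FOAM_APPBIN/rhoPimpleFoam
check_mpi_link /bin/ls    # prints "no MPI library linked"
```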

May 24, 2011, 06:13 | #7 | Kris (kpsl), New Member
Ok the results are in!

Benchmark done using the following setup:

16 nodes using 4 cores each = 64 CPUs
Mesh containing ~3.5 million cells
rhoPimpleFoam

Criterion: real time taken to reach 0.0002 s of simulated time.

Intel.Compiler/12.0.0 & MVAPICH2/1.5.0-intel
Run1: ExecutionTime = 88.49 s ClockTime = 90 s
Run2: ExecutionTime = 87.57 s ClockTime = 88 s
Run3: ExecutionTime = 87.9 s ClockTime = 89 s
Mean: ExecutionTime = 87.99 s ClockTime = 89 s


Gcc/4.5.2 & MVAPICH2/1.5.0-gcc
Run1: ExecutionTime = 83.33 s ClockTime = 87 s
Run2: ExecutionTime = 83.14 s ClockTime = 85 s
Run3: ExecutionTime = 82.78 s ClockTime = 84 s
Mean: ExecutionTime = 83.08 s ClockTime = 85.33 s


GCC clearly beats Intel's compiler!
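To put the gap in percent, the means above can be recomputed with plain awk (nothing OpenFOAM-specific, just the three ExecutionTime values from each build):

```shell
# Mean ExecutionTime of each build and the relative speedup of gcc.
icc=$(awk 'BEGIN{print (88.49+87.57+87.90)/3}')
gcc=$(awk 'BEGIN{print (83.33+83.14+82.78)/3}')
awk -v a="$icc" -v b="$gcc" \
    'BEGIN{printf "gcc build is %.1f%% faster than icc\n", (a-b)/b*100}'
# prints: gcc build is 5.9% faster than icc
```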

May 27, 2011, 16:05 | #8 | Bruno Santos (wyldckat), Retired Super Moderator
Hi Kris,

This is very nice indeed! But from both benchmarks, I still get the feeling that anything under 10 min isn't much of a benchmark with OpenFOAM, since what we save on computational power might otherwise get wasted on some other task, like file access, MPI communication and so on...

But still, extrapolating, a 24h00 run on icc gives a whopping 23h12 on gcc in the worst case! That's easily a meal or two!

Now the big question is: are both end results as correct as when built with the officially advised gcc 4.4 series? And how would a gcc 4.4 build fare against those other two!?

Oh well... some things are best left unknown

Best regards,
Bruno

August 12, 2011, 14:44 | #9 | Kris (kpsl), New Member
I took the liberty of benchmarking the Intel compiler against gcc once again, this time using the new OpenFOAM-2.0.x.

Mesh containing ~1 million cells.
rhoPimpleFoam


First a rather short run on one node. Data is written once at the end of the computation:

Intel.Compiler/12.0.4 & MVAPICH2/1.5.0-intel
1 Node, 8 CPUs, Intel Xeon Harpertown E5472
Run1: ExecutionTime = 3438.8 s ClockTime = 3449 s
Run2: ExecutionTime = 3440.75 s ClockTime = 3450 s
Run3: ExecutionTime = 3448.05 s ClockTime = 3458 s

Gcc/4.5.2 & MVAPICH2/1.5.0-gcc
1 Node, 8 CPUs, Intel Xeon Harpertown E5472
Run1: ExecutionTime = 3286.97 s ClockTime = 3297 s
Run2: ExecutionTime = 3288.28 s ClockTime = 3297 s
Run3: ExecutionTime = 3285.53 s ClockTime = 3294 s


And the same run on 8 nodes:

Intel.Compiler/12.0.4 & MVAPICH2/1.5.0-intel
8 Nodes, 4 CPUs each, Intel Xeon Gainestown X5570
Run1: ExecutionTime = 209.24 s ClockTime = 211 s
Run2: ExecutionTime = 208.58 s ClockTime = 209 s
Run3: ExecutionTime = 209.44 s ClockTime = 212 s

Gcc/4.5.2 & MVAPICH2/1.5.0-gcc
8 Nodes, 4 CPUs each, Intel Xeon Gainestown X5570
Run1: ExecutionTime = 177.08 s ClockTime = 178 s
Run2: ExecutionTime = 175.57 s ClockTime = 176 s
Run3: ExecutionTime = 177.66 s ClockTime = 181 s


Then I took Bruno's advice and let the computation run 5 times longer. Data is now written 5 times in total:

Intel.Compiler/12.0.4 & MVAPICH2/1.5.0-intel
8 Nodes, 4 CPUs each, Intel Xeon Gainestown X5570
Run1: ExecutionTime = 1099.89 s ClockTime = 1105 s
Run2: ExecutionTime = 1099.45 s ClockTime = 1103 s
Run3: ExecutionTime = 1096.52 s ClockTime = 1100 s

Gcc/4.5.2 & MVAPICH2/1.5.0-gcc
8 Nodes, 4 CPUs each, Intel Xeon Gainestown X5570
Run1: ExecutionTime = 931.13 s ClockTime = 936 s
Run2: ExecutionTime = 932.93 s ClockTime = 938 s
Run3: ExecutionTime = 930.48 s ClockTime = 936 s


As you can see, on the 8-node runs gcc beats Intel by about 15% every time, while the single-node run shows a smaller gap of roughly 5%. The multi-node advantage seems to be a lot more than with 1.7.x, although I must admit the "smaller" mesh may have something to do with it.
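For the record, the percentages can be recomputed directly from the run times above (saving of the gcc build relative to the icc build, per scenario):

```shell
# Relative ExecutionTime saving of the gcc build vs the icc build.
awk 'BEGIN{icc=(1099.89+1099.45+1096.52)/3; g=(931.13+932.93+930.48)/3;
           printf "8 nodes, long run:  gcc saves %.1f%%\n",(icc-g)/icc*100}'
awk 'BEGIN{icc=(209.24+208.58+209.44)/3;    g=(177.08+175.57+177.66)/3;
           printf "8 nodes, short run: gcc saves %.1f%%\n",(icc-g)/icc*100}'
awk 'BEGIN{icc=(3438.80+3440.75+3448.05)/3; g=(3286.97+3288.28+3285.53)/3;
           printf "1 node:             gcc saves %.1f%%\n",(icc-g)/icc*100}'
# prints 15.2%, 15.5% and 4.5% respectively
```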
