
SU2 Tutorial 2 Parallel Computation

March 31, 2014, 23:19   #1
Running ONERA M6 in parallel issues
Member
 
Carlos Alexandre Tomigawa Aguni
Join Date: Mar 2014
Posts: 40
Hello guys.
I've run the ONERA M6 case in serial successfully. Now I'm facing some issues trying to run it in parallel.

What should I do in order to parallelize SU2's 2nd tutorial?
What is parallel_computation.py actually doing?

I'm running on a cluster with 6 nodes, each with 2 Xeons.

I've run these tests:
Code:
/opt/su2/bin/parallel_computation.py -f inv_ONERAM6.cfg -p 12
/opt/su2mpich2/bin/parallel_computation.py -f inv_ONERAM6.cfg -p 12
These are my ./configure settings; I tried two MPI versions (MPICH 3.1 and Intel MPI 4.1) with icc/icpc 14.0.1.
Code:
./configure --prefix="/opt/su2mpich2" --with-Metis-lib="/opt/metis1/lib" \
    --with-Metis-include="/opt/metis1/include" --with-Metis-version=5 \
    --with-MPI="/opt/mpich2/bin/mpicxx"
Code:
./configure --prefix="/opt/su2" --with-Metis-lib="/opt/metis1/lib" \
    --with-Metis-include="/opt/metis1/include" --with-Metis-version=5 \
    --with-MPI="/opt/intel/impi/4.1.3.048/intel64/bin/mpicxx"
This is the output I get:
Code:
41 53.673333 -6.208682 -5.711006 0.286449 0.011889
41 53.693333 -6.208682 -5.711006 0.286449 0.011889
41 53.682143 -6.208682 -5.711006 0.286449 0.011889
41 53.572857 -6.208682 -5.711006 0.286449 0.011889
41 53.863571 -6.208682 -5.711006 0.286449 0.011889
41 53.754524 -6.208682 -5.711006 0.286449 0.011889
41 53.650000 -6.208682 -5.711006 0.286449 0.011889
41 53.890476 -6.208682 -5.711006 0.286449 0.011889
41 53.882857 -6.208682 -5.711006 0.286449 0.011889
41 53.901667 -6.208682 -5.711006 0.286449 0.011889
41 53.967381 -6.208682 -5.711006 0.286449 0.011889
41 53.824048 -6.208682 -5.711006 0.286449 0.011889
42 53.672791 -6.255845 -5.757245 0.286496 0.011876
42 53.678605 -6.255845 -5.757245 0.286496 0.011876
42 53.692791 -6.255845 -5.757245 0.286496 0.011876
42 53.572093 -6.255845 -5.757245 0.286496 0.011876
42 53.856279 -6.255845 -5.757245 0.286496 0.011876
42 53.745814 -6.255845 -5.757245 0.286496 0.011876
42 53.651628 -6.255845 -5.757245 0.286496 0.011876
42 53.879767 -6.255845 -5.757245 0.286496 0.011876
42 53.877209 -6.255845 -5.757245 0.286496 0.011876
42 53.894419 -6.255845 -5.757245 0.286496 0.011876
42 53.961628 -6.255845 -5.757245 0.286496 0.011876
42 53.823721 -6.255845 -5.757245 0.286496 0.011876
43 53.672955 -6.302153 -5.803464 0.286533 0.011862
43 53.675909 -6.302153 -5.803464 0.286533 0.011862
43 53.692273 -6.302153 -5.803464 0.286533 0.011862
43 53.571818 -6.302153 -5.803464 0.286533 0.011862
43 53.737727 -6.302153 -5.803464 0.286533 0.011862
43 53.847955 -6.302153 -5.803464 0.286533 0.011862
43 53.652500 -6.302153 -5.803464 0.286533 0.011862
43 53.876136 -6.302153 -5.803464 0.286533 0.011862
43 53.886364 -6.302153 -5.803464 0.286533 0.011862
43 53.876818 -6.302153 -5.803464 0.286533 0.011862
43 53.957045 -6.302153 -5.803464 0.286533 0.011862
43 53.822045 -6.302153 -5.803464 0.286533 0.011862

Note that each iteration is printed 12 times.

The same happens for any value passed to -p.

Note: I'm using MPI + METIS, not CGNS.

Thanks in advance!

Last edited by CrashLaker; April 4, 2014 at 17:28.

April 4, 2014, 17:17   #2
hlk
Senior Member
 
Heather Kline
Join Date: Jun 2013
Posts: 309
I have gotten this type of behavior when SU2 was not compiled with parallel tools. Go back to your configuration and check that the paths to the appropriate libraries are correct, and look through the configure output to make sure that SU2 is being compiled with MPI.
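For example, two quick checks along those lines (just a sketch; the /opt/su2 prefix is the one used earlier in this thread):
Code:
# Did configure actually detect MPI? The summary in config.log should say so.
grep -i "mpi" config.log
# Is the installed solver binary linked against an MPI library?
# If it was built without MPI, nothing MPI-related will show up here.
ldd /opt/su2/bin/SU2_CFD | grep -i mpi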

April 4, 2014, 17:35   #3
Member
 
Carlos Alexandre Tomigawa Aguni
Join Date: Mar 2014
Posts: 40
Quote:
Originally Posted by hlk
I have gotten this type of behavior when SU2 was not compiled with parallel tools. Go back to your configuration and check that the paths to the appropriate libraries are correct, and look through the configure output to make sure that SU2 is being compiled with MPI.
I've compiled it with many different configurations. The configure script even says it has support for MPI and METIS.

Do you still think this is an MPI problem rather than something else?

April 4, 2014, 17:50   #4
hlk
Senior Member
 
Heather Kline
Join Date: Jun 2013
Posts: 309
When you say that the configure script says it has MPI support, I assume you mean that there is a line in config.log like:
MPI support: yes

I see that you also commented on http://www.cfd-online.com/Forums/su2...ify-nodes.html

Since you are running on a cluster, as mentioned in the post linked above, you should also look into the cluster-specific requirements. Unfortunately, that is beyond my expertise, and you will need to talk to the cluster administrator or another user familiar with the specifics of your cluster.
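Just as a generic illustration of what that can involve (standard MPI usage, not SU2-specific; the hostnames below are placeholders, and many clusters require going through their batch scheduler instead of a hand-written host file):
Code:
# hosts.txt lists the compute nodes and how many ranks to start on each
cat > hosts.txt <<EOF
node01:6
node02:6
EOF
# MPICH's mpiexec takes a host file via -f; other MPIs use -hostfile or -machinefile.
# The same idea applies to whatever command parallel_computation.py ends up launching.
mpiexec -f hosts.txt -n 12 /opt/su2/bin/SU2_CFD inv_ONERAM6.cfg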

April 4, 2014, 18:27   #5
New Member
 
Santiago Padron
Join Date: May 2013
Posts: 17
Your original post said that you compiled without METIS. METIS is needed to run a parallel computation, which is probably why you are seeing the repeated output. I would recommend you run make clean and then compile again with METIS support.

As for your question of "What's parallel_computation.py actually doing?":
The parallel_computation.py script automatically handles the domain decomposition with SU2_DDC, the execution of SU2_CFD, and the merging of the decomposed files using SU2_SOL. This is described in more detail in Tutorial 6 - Turbulent ONERA M6.
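In other words, it automates roughly this sequence (only a sketch; the exact arguments and how the number of partitions gets passed differ between SU2 versions):
Code:
SU2_DDC inv_ONERAM6.cfg                  # decompose the domain into partitions
mpirun -np 12 SU2_CFD inv_ONERAM6.cfg    # run the solver, one MPI rank per partition
SU2_SOL inv_ONERAM6.cfg                  # merge the partitioned solution files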

April 5, 2014, 07:14   #6
New Member
 
Hexchain T.
Join Date: Apr 2014
Posts: 1
Hi,

Could you please explain how you compiled SU2 with Intel MPI? My attempt failed with several "mpi.h must be included before stdio.h" errors, and manually adding #undef SEEK_* makes it fail later on some MPI functions.
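For what it's worth, one workaround I've seen suggested for this SEEK_* clash is defining MPICH_IGNORE_CXX_SEEK at compile time (generic MPICH/Intel MPI advice, not SU2-specific, and I haven't confirmed it here; it also assumes the configure script honors CXXFLAGS):
Code:
./configure --prefix="/opt/su2" \
    --with-MPI="/opt/intel/impi/4.1.3.048/intel64/bin/mpicxx" \
    --with-Metis-lib="/opt/metis1/lib" --with-Metis-include="/opt/metis1/include" \
    --with-Metis-version=5 \
    CXXFLAGS="-DMPICH_IGNORE_CXX_SEEK"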

April 5, 2014, 14:39   #7
hlk
Senior Member
 
Heather Kline
Join Date: Jun 2013
Posts: 309
The general directions for compiling with MPI and Metis can be found near the bottom of the following page:
http://adl-public.stanford.edu/docs/...on+from+Source
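Condensed, those directions amount to something like the following (paths reused from earlier in this thread; the environment-variable names are as I recall them from the install page, so verify them there):
Code:
./configure --prefix="/opt/su2" \
    --with-MPI="/opt/mpich2/bin/mpicxx" \
    --with-Metis-lib="/opt/metis1/lib" \
    --with-Metis-include="/opt/metis1/include" \
    --with-Metis-version=5
make clean && make -j 4 && make install
# make the installed binaries and python scripts visible to your shell
export SU2_RUN="/opt/su2/bin"
export PATH=$SU2_RUN:$PATH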

April 5, 2014, 17:14   #8
Member
 
Carlos Alexandre Tomigawa Aguni
Join Date: Mar 2014
Posts: 40
Quote:
Originally Posted by Santiago Padron
Your original post said that you compiled without METIS. METIS is needed to run a parallel computation, which is probably why you are seeing the repeated output. I would recommend you run make clean and then compile again with METIS support.

As for your question of "What's parallel_computation.py actually doing?":
The parallel_computation.py script automatically handles the domain decomposition with SU2_DDC, the execution of SU2_CFD, and the merging of the decomposed files using SU2_SOL. This is described in more detail in Tutorial 6 - Turbulent ONERA M6.
Hello Santiago. Thanks for your reply. The very first attempt was without METIS, but now both METIS and MPI are installed.
Thanks for pointing me to the Turbulent ONERA M6 tutorial; I'll check it out for the in-depth details.

What should happen after running SU2_DDC? Should it create new mesh files according to the number of divisions specified?
I'm asking because after I run SU2_DDC it doesn't create anything. Do you think there's a problem there?
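This is the kind of quick check I mean (the partitioned-mesh file names are only a guess on my part and vary by SU2 version; the point is just that new files should appear after SU2_DDC):
Code:
ls -lt *.su2 | head    # nothing new shows up after running SU2_DDC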

Hexchain:
I was able to compile with Intel MPI without problems, but I've read some threads where people had to add an mpi.h include to 3 files (it was a thread on this forum, but I don't remember which one exactly).
