SU2 Quickstart case computation time increases in parallel

July 7, 2016, 22:57   #1
New Member
 
Wrik Mallik
Join Date: Nov 2013
Posts: 3
Hi,

I am running the SU2 Quickstart NACA0012 Mach=0.8 steady case on a 48-core Linux machine. I built SU2 from source for parallel execution using the appropriate ./configure options. OpenMPI is loaded, which provides mpicc and mpicxx.
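For reference, the build was along these lines; the exact ./configure flags shown below (--enable-mpi, --with-cc, --with-cxx) and the install prefix are only a sketch of a typical SU2 v4.x autotools build, not necessarily the exact options I used, so see the attached config.log for the actual configuration:

# Sketch of an MPI-enabled autotools build (flags typical of SU2 v4.x; check config.log for the actual options)
./configure --prefix=$HOME/SU2 --enable-mpi --with-cc=mpicc --with-cxx=mpicxx
make -j 8 && make install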

I am using the Python command:
parallel_computation.py -f inv_NACA0012.cfg -n 8

to use 8 cores, which takes ~1,500 seconds to converge. Using 2 cores takes ~170 seconds. The results, however, match for both cases.

I am not sure why using more cores takes more time. Is it because the problem is so simple that domain decomposition takes more time than the solve itself, so running in parallel is of no use here? I have attached the config.log and config_CFD.cfg files, which show that MPI and ParMETIS support are enabled and that the domain is indeed partitioned. Kindly provide some suggestions.

Wrik
Attached Files
File Type: zip SU2_config.zip (7.6 KB, 1 views)

July 20, 2016, 14:33   #2
hlk
Senior Member
 
Heather Kline
Join Date: Jun 2013
Posts: 309
Quote:
Originally Posted by wrik View Post
We generally recommend not going below 5,000 points per processor for parallel problems. This test case is very small, with only a little over 5,000 points total, so I would actually expect it to run fastest on a single core, or at least not much slower than on 2 cores. The reason is not only the domain decomposition; running in parallel also carries an additional cost for communicating information between the cores. For example, in this case with 8 cores, each core only has ~625 points to compute at each iteration, but then has to communicate with each of the 7 other processors before the next iteration.
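As a rough sanity check before choosing a core count, that rule of thumb is just simple arithmetic; the snippet below is only a sketch, assuming roughly 5,000 mesh points as described above (substitute the point count reported for your own mesh):

# Rule-of-thumb check (sketch): points per core for a given mesh size
MESH_POINTS=5000    # "a little over 5,000 points" for the Quickstart mesh; adjust for your case
for NP in 1 2 4 8; do
    echo "$NP cores -> about $((MESH_POINTS / NP)) points per core (aim for >= 5000)"
done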

July 20, 2016, 21:28   #3
New Member
 
Wrik Mallik
Join Date: Nov 2013
Posts: 3
Thanks for the suggestions! I have tried parallel runs on more complicated models and they work just fine.

Wrik
