
Parallel run of SU2 version 3.2.7


Old   January 9, 2015, 09:22
Parallel run of SU2 version 3.2.7
  #1
New Member
 
Eran Arad
Join Date: Jan 2015
Location: Israel
Posts: 15
Rep Power: 11
Arad is on a distinguished road
Hi
First I must say that the parallel run using version 3.2.7 looks much better. The merging of the multiple domain files and the creation of a single flow.dat file are more elegant and efficient.

However, I think something is still missing:

In many cases the run command requires the specification of special parameters (an RDMA protocol, a machinefile with a list of compute nodes, and so forth). I do not see how these parameters can be included when using parallel_computation.py.

Yes, one can run mpiexec -n N SU2_PRT to partition the mesh into N files as before, and then call mpiexec -n N SU2_CFD directly with any parameters. However, this way we lose the nice merging at the end of the run that the 3.2.7 parallel_computation.py provides.
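For reference, the manual two-step workflow described above would look roughly like this. The config file name and the machinefile name are illustrative, and the exact option spelling (-machinefile vs. --hostfile) depends on your MPI distribution:

```shell
# Step 1: partition the mesh into N domain files (here N=16),
# passing any MPI-specific options directly to mpiexec:
mpiexec -n 16 -machinefile hosts.txt SU2_PRT config.cfg

# Step 2: run the solver on the partitioned mesh with the
# same launcher options; no merging happens afterwards:
mpiexec -n 16 -machinefile hosts.txt SU2_CFD config.cfg
```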

Is there a way to get the best of both worlds?

Thanks,
Eran Arad

Old   January 11, 2015, 03:04
  #2
hlk
Senior Member
 
Heather Kline
Join Date: Jun 2013
Posts: 309
Rep Power: 14
hlk is on a distinguished road
Quote:
Originally Posted by Arad View Post
Is there a way to get the best of both worlds?
Thank you for your question.
Since these types of parameters are specific to the cluster being used, they are not set within the Python script. A workload manager such as Slurm (which one is available depends on the cluster you are using) can set up these parameters automatically. The system administrator of the cluster should be able to help you with this.
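As a sketch of what this looks like in practice, here is a hypothetical Slurm batch script (job name, node counts, and config file name are all illustrative). The point is that the scheduler supplies the node list, so no machinefile is needed on the launch line:

```shell
#!/bin/bash
# Illustrative Slurm job script: the scheduler allocates the
# compute nodes and exports them to the MPI launcher.
#SBATCH --job-name=su2_run
#SBATCH --nodes=2
#SBATCH --ntasks=16

# srun (or mpiexec, depending on the MPI/Slurm setup) inherits
# the allocated nodes automatically -- no -machinefile required:
srun SU2_CFD config.cfg
```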

For running on a PC rather than a cluster this won't be necessary: most of the time, either the Python script by itself, or the script launched via mpirun -n N parallel_computation.py, should work.
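On a single machine the invocation is short; the config file name below is illustrative (-f selects the config file and -n the number of partitions, as in the SU2 3.x run scripts):

```shell
# Run the full partition -> solve -> merge cycle on 8 cores;
# the script handles the mpirun call and the final merging:
parallel_computation.py -n 8 -f config.cfg
```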



