|
Parallel computation using periodic boundary conditions |
|
March 13, 2014, 18:28 |
Parallel computation using periodic boundary conditions
|
#1 |
New Member
AD
Join Date: Feb 2013
Posts: 7
Rep Power: 13 |
Hello,
I have been using SU2 with periodic boundary conditions and have run into a couple of problems. Can SU2 be used in parallel with periodic boundary conditions? The same case runs fine in serial, but in parallel an error occurs when writing the restart file.

It also seems that the partitioned grid did not keep the SEND_RECEIVE markers that ensure the communication between the periodic boundaries, and the PERIODIC_INDEX= 1 and PERIODIC_INDEX= 2 sections are all set to 0. Could you tell me if I missed a step for running such a case in parallel?

In addition, the restart file written by SU2 skips the halo cells and all the nodes on one of the periodic boundaries. This leads to a wrong Tecplot file, because the grid structure (which includes the halo cells and both periodic boundaries) and the number of nodes (the original nodes minus one periodic boundary) don't match.

I have used SU2 to solve other problems and it worked well otherwise. Thank you!
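For reference, the periodic pair in a case like this is declared in the configuration file roughly as below (the marker names and the transformation values are placeholders, not the actual ones from my mesh); the preprocessed grid then carries the SEND_RECEIVE markers and PERIODIC_INDEX sections mentioned above.

Code:
    % Placeholder names/values: the two faces are paired by a pure translation of
    % 1.0 in y (rotation center, rotation angles about x/y/z, then translation).
    MARKER_PERIODIC= ( periodic_1, periodic_2, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0 )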
|
March 17, 2014, 10:19 |
|
#2 |
New Member
AD
Join Date: Feb 2013
Posts: 7
Rep Power: 13 |
After many tests, it appears that the problem really comes from the SU2_DDC program, which overwrites the SEND_RECEIVE markers related to the periodic boundary conditions. Older versions of SU2, such as version 2.0.3, correctly partition the grid with the periodic BC information, and partitions generated with SU2 v2.0.3 can then be used with later versions. However, when running a computation in parallel with SU2 v3.0 and a periodic BC, there is no problem on 2 cores, but on more cores the following error occurs:

Code:
    Fatal error in MPI_Sendrecv: Message truncated, error stack:
    MPI_Sendrecv(230).................: MPI_Sendrecv(sbuf=0x10a5fd40, scount=2050,
        MPI_DOUBLE, dest=2, stag=0, rbuf=0x10a5ad30, rcount=2560, MPI_DOUBLE,
        src=5, rtag=0, MPI_COMM_WORLD, status=0x1) failed
    MPIDI_CH3U_Receive_data_found(129): Message from rank 5 and tag 0 truncated;
        21080 bytes received but buffer size is 20480

Does somebody have an idea about how to solve this problem? Thank you; it would be great if this feature were fixed in future versions, as it is a commonly used boundary condition.
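Reading the numbers in that stack: the receiving rank posted room for 2560 doubles (20480 bytes), while rank 5 sent 2635 doubles (21080 bytes), so the two sides of the halo exchange disagree on how many points they share, which is consistent with the periodic SEND_RECEIVE information being mangled during decomposition. The standalone sketch below (plain MPI, not SU2 code) reproduces the same failure mode with the same counts:

Code:
    // Standalone sketch (not SU2 code): two ranks disagree on the halo size,
    // so the receive buffer is smaller than the incoming message and MPI
    // aborts with "Message truncated" (MPI_ERR_TRUNCATE), as in the log above.
    #include <mpi.h>
    #include <vector>

    int main(int argc, char **argv) {
      MPI_Init(&argc, &argv);
      int rank;
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      if (rank < 2) {
        // Rank 0 thinks the halo has 2635 points; rank 1 only allocated 2560.
        const int my_count = (rank == 0) ? 2635 : 2560;
        std::vector<double> sbuf(my_count, 1.0), rbuf(my_count);

        const int partner = 1 - rank;
        // Rank 1 receives 2635 doubles into a 2560-double buffer -> fatal error.
        MPI_Sendrecv(sbuf.data(), my_count, MPI_DOUBLE, partner, 0,
                     rbuf.data(), my_count, MPI_DOUBLE, partner, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
      }

      MPI_Finalize();
      return 0;
    }

Compile with mpicxx and run on two or more ranks; rank 1 aborts with the same "Message truncated" error.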
|
March 27, 2014, 00:41 |
|
#3 |
Super Moderator
Thomas D. Economon
Join Date: Jan 2013
Location: Stanford, CA
Posts: 271
Rep Power: 14 |
Hi,
Thanks for the feedback on the periodic BCs. We are currently reconsidering the implementation of the periodic BCs as we begin tackling some issues related to a new high-performance strategy in the code. In the meantime, if you are interested in modifying the code to alleviate the issue, please let us know and we can point you in the right direction. The decomposition is controlled by the CDomainGeometry class within geometry_structure.cpp.

Cheers,
Tom
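To give a rough idea of what has to be preserved during decomposition (a generic sketch only, not the actual CDomainGeometry code), each SEND_RECEIVE vertex needs to carry both its donor point and its periodic transform index through the per-partition renumbering; if that index is dropped or zeroed, the rotation/translation information is lost exactly as described above.

Code:
    // Generic sketch, not SU2's implementation: a halo vertex needs the point
    // it maps to, the partner rank, and the periodic transform index, and all
    // three must survive the renumbering into each partition's local indices.
    #include <unordered_map>
    #include <vector>

    struct HaloVertex {
      long point_index;     // global index before partitioning, local after
      int  partner_rank;    // rank this vertex is sent to / received from
      int  periodic_index;  // 0 = plain halo, >0 = apply a periodic transform
    };

    // Renumber the halo vertices of one partition. Writing 0 into
    // periodic_index here would reproduce the symptom reported in post #1.
    std::vector<HaloVertex> RenumberHalos(
        const std::vector<HaloVertex>& halos,
        const std::unordered_map<long, long>& global_to_local) {
      std::vector<HaloVertex> local;
      for (const auto& h : halos) {
        const auto it = global_to_local.find(h.point_index);
        if (it == global_to_local.end()) continue;  // not in this partition
        HaloVertex out = h;                          // periodic_index kept
        out.point_index = it->second;                // global -> local index
        local.push_back(out);
      }
      return local;
    }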
|
|
|