merge_grid_su2.py overwrote original grid with zero size
January 29, 2013, 15:08 | #1
Member
Tom Jentink
Join Date: Jan 2013
Posts: 61
Rep Power: 13
I was testing things in the euler/naca0012 directory. I finally got a parallel run to work, and ended up with the partitioned grid files in the directory after the run. I tried using merge_grid_su2.py to combine them; it appeared to run fine with no errors, but it wrote a zero-size file. Has anyone else experienced this? I've been having Python issues, but I think most of them are sorted out at this point.
January 29, 2013, 16:37 | #2
Super Moderator
Thomas D. Economon
Join Date: Jan 2013
Location: Stanford, CA
Posts: 271
Rep Power: 14
Currently, when running a parallel solution, SU2_DDC partitions the original mesh and writes the requested number of sub-grids (gridname_1.su2, gridname_2.su2, etc.), but it also leaves the original mesh in the directory. While the solution files (.vtk or .plt) must be merged when the solver terminates (this is handled automatically by the parallel_computation.py script), there is no need to merge the partitioned mesh files.

However, if one is interested in performing the full design loop in parallel (flow solution -> adjoint solution -> gradient projection -> optimizer -> mesh deformation), then the meshes will be deformed in parallel between design cycles. These deformed partitions can then be merged at the end of the design process using the merge_grid_su2.py script, since together they make up a grid that is indeed different from the original.
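For illustration, after a partitioned run of the NACA 0012 case with 8 partitions, the working directory would contain something like the listing below (exact file names are illustrative and depend on the mesh name in your config):

mesh_NACA0012_inv.su2      <- original mesh, left in place by SU2_DDC
mesh_NACA0012_inv_1.su2    <- partition 1
mesh_NACA0012_inv_2.su2    <- partition 2
...
mesh_NACA0012_inv_8.su2    <- partition 8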
January 30, 2013, 10:20 | #3
Member
Tom Jentink
Join Date: Jan 2013
Posts: 61
Rep Power: 13
I was merely interested in using the merge capability to try things out with the 8 grids the partitioning process left behind. I should be able to use merge_grid_su2.py to do this, right? I ran it without errors, but ended up with a merged grid of zero size.
January 30, 2013, 10:23 | #4
Member
Tom Jentink
Join Date: Jan 2013
Posts: 61
Rep Power: 13
I haven't had much luck getting anything else working with the NACA 0012 test case except for the standard CFD run in parallel. So I was just playing around and thought I'd try to merge the 8 grids I got when running the parallel case.
February 1, 2013, 02:24 | #5
Super Moderator
Thomas D. Economon
Join Date: Jan 2013
Location: Stanford, CA
Posts: 271
Rep Power: 14
Hi Tom,

Just wanted to follow up on this. Sorry you're having trouble getting things working with the NACA 0012. Are there any other errors or specific problems you want to report?

As for the grid merging, you can indeed do this after a parallel run (although, as I mention above, it shouldn't be necessary since the original mesh is not removed during partitioning). The merge_grid_su2.py script is written to work inside the parallel_deformation.py script, and therefore it assumes that the file names of the original mesh and the newly deformed partitions are different (e.g. mesh_NACA0012_inv.su2 might be the original, while mesh_out_1.su2, mesh_out_2.su2, etc. might be the output from SU2_MDC in parallel).

To merge after a parallel run, you could make a separate copy of the original mesh with a different name, run the parallel computation, and then set MESH_FILENAME and MESH_OUT_FILENAME to the name of the copied original mesh and the root of the partitioned mesh files (without the "_*" suffix), respectively.

In short, merge_grid_su2.py expects the partitioned meshes to have a different root filename than the original mesh; if they are the same, the original mesh file will be overwritten with a zero-size file, as you noted. I hope this clears things up!
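As a concrete sketch of those two config entries for the NACA 0012 case (the name of the copied mesh below is just an example; use whatever name you gave your copy):

% Renamed copy of the original mesh, so the real original is never touched
MESH_FILENAME= mesh_NACA0012_inv_copy.su2
% Root name of the partitioned meshes left by the parallel run
% (i.e. mesh_NACA0012_inv_1.su2, mesh_NACA0012_inv_2.su2, ...)
MESH_OUT_FILENAME= mesh_NACA0012_inv.su2

The key point is simply that the two entries no longer share the same root filename.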
February 1, 2013, 11:08 | #6
Member
Tom Jentink
Join Date: Jan 2013
Posts: 61
Rep Power: 13
Thanks, I'll try that.

I found out, though, that my 'parallel' runs were really just my one job repeated 8 times (8 = ncpu), and it took about twice as long as the serial run. So now I need to figure out how to get the parallel job working properly on the cluster I'm using (mpich2_intel with a PBS queueing system).
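For anyone hitting the same symptom (N identical serial jobs instead of one parallel job), a minimal PBS submission along the following lines is a reasonable starting point. The config name, module name, and script flags below are assumptions, and the parallel_computation.py options have changed between SU2 versions, so check its help output and make sure the mpirun on your PATH is the same MPI build SU2 was compiled against:

#!/bin/bash
#PBS -N naca0012_parallel
#PBS -l nodes=1:ppn=8
#PBS -l walltime=01:00:00
#PBS -j oe

cd $PBS_O_WORKDIR

# Load the same mpich2/intel environment SU2 was built with; a mismatched
# mpirun is a common cause of each rank running its own serial copy.
# module load mpich2_intel   (module name is site-specific)

# parallel_computation.py typically launches mpirun itself; the partition
# count flag (-p here) may differ in other SU2 versions.
parallel_computation.py -f inv_NACA0012.cfg -p 8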