|
May 15, 2013, 11:34 |
SU2 running problem
|
#1 |
New Member
andrea ciarella
Join Date: Nov 2012
Posts: 5
Rep Power: 14 |
Hi,
I am trying to run SU2 (both the compressible and incompressible solvers), but I have a problem running it in parallel. My mesh is highly anisotropic and unstructured. The code performs really well in terms of accuracy on 1 core (ONERA M6 test case with y+ = 1), but if I try to run it in parallel it stops after the partitioning with no error message, just a generic MPI error. What should I do? Thanks, Andrea |
|
May 15, 2013, 12:10 |
|
#2 | |
Super Moderator
Francisco Palacios
Join Date: Jan 2013
Location: Long Beach, CA
Posts: 404
Rep Power: 15 |
Thanks for using SU2, Best Francisco |
May 16, 2013, 05:26 |
onera m6
|
#3 |
New Member
andrea ciarella
Join Date: Nov 2012
Posts: 5
Rep Power: 14 |
Thank you for your reply
My problem arises because I don't get any error message, just a generic MPI error. I can use SU2_DDC to generate the domain decomposition, but when I run SU2_CFD in parallel it just crashes before starting (this is with the compressible solver). In the picture you can see the problem with the incompressible solver: I can run the first iterations, then the simulation diverges. It seems that the problem is in the region where the partitions connect. Thank you, Andrea |
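For reference, the two-step parallel workflow mentioned above (SU2_DDC for partitioning, then SU2_CFD under MPI) would look roughly like the following. This is a sketch assuming an SU2 v2.x-era install; the exact binary and wrapper-script names may differ between versions:

```
# Partition the mesh; the number of partitions is typically read from the
# config file (e.g. a NUMBER_PART-style option in older SU2 versions).
SU2_DDC config.cfg

# Run the flow solver on the partitioned mesh, one MPI rank per partition:
mpirun -np 8 SU2_CFD config.cfg
```

The bundled Python wrapper (parallel_computation.py in SU2 v2.x) automates both steps plus the merge of the partitioned solution, which is often the easier way to rule out a mistake in the manual workflow.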
|
May 16, 2013, 12:07 |
|
#4 |
Senior Member
|
Hi Andrea,
Just a suggestion: why not try a different number of partitions, i.e. number of processors? Maybe first start your simulation on a single grid for a few iterations, and maybe use the explicit formulation initially. Which turbulence model do you use, or is it an Euler computation? A little more detail, perhaps a snapshot of your residual history, would be helpful. Hope this helps. |
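The single-grid, explicit startup suggested above can be expressed as a config fragment. This is a sketch using SU2 v2.x-style option names, which may differ in other versions (check the config_template.cfg shipped with your install):

```
% Disable multigrid: 0 levels = single grid
MGLEVEL= 0
% Explicit time integration for the startup phase
TIME_DISCRE_FLOW= RUNGE-KUTTA_EXPLICIT
% Run only a few iterations to check stability before committing to a long run
EXT_ITER= 100
```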
|
May 16, 2013, 12:32 |
|
#5 |
New Member
andrea ciarella
Join Date: Nov 2012
Posts: 5
Rep Power: 14 |
Thanks for the suggestion. My simulation is the ONERA M6 test case (SA turbulence model, M = 0.83 and Re = 11.7e6), using only a fine mesh with y+ = 1. If I run with just one processor I get really good results, but it takes ages. My problem comes when I try to use the same mesh for running in parallel: it either doesn't start or diverges. There is no residual history because there are only a few iterations.
For the incompressible case I generated a rectangular wing (NACA 6510) and tried to run at 3 degrees angle of attack. It seems that the flow separates a lot and the simulation is not converging. I am trying two new tests, and it seems that the incompressible cases are slowly converging, but I am not sure yet. My mesh is around 250 MB (1.8 million points) and I am running in single-grid mode. Andrea |
|
May 16, 2013, 16:02 |
|
#6 |
Senior Member
|
Hi Andrea,
I think you can also try small CFL numbers in combination with the Euler mode, and when the simulation stabilizes, change to the Navier-Stokes solver with the SA turbulence model. One more thing: which discretization scheme are you using, upwind or central? I would start with first-order upwind and later switch to second-order upwind. As suggested earlier, you can also try changing the number of processors for the parallel simulation. Good luck. |
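A conservative startup along these lines might look like the following config sketch. These are SU2 v2.x-style option names and may differ across versions (check config_template.cfg for your install); the second phase is shown commented out since it applies only after restarting from the stabilized Euler solution:

```
% Phase 1: inviscid solve, low CFL, first-order upwind
PHYSICAL_PROBLEM= EULER
CFL_NUMBER= 1.0
CONV_NUM_METHOD_FLOW= ROE-1ST_ORDER

% Phase 2 (after restart): switch on viscosity and the SA model,
% then raise the order and CFL once residuals settle
% PHYSICAL_PROBLEM= RANS
% KIND_TURB_MODEL= SA
% RESTART_SOL= YES
% CONV_NUM_METHOD_FLOW= ROE-2ND_ORDER
```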
|
Tags |
compressible solver, incompressible, onera m6, parallel computation, su2 |
|
|