November 4, 2009, 17:48
Parallel FVM on staggered grid
#1
New Member
Ertan Karaismail
Join Date: Apr 2009
Posts: 17
Rep Power: 17
Hi, I'm working on the parallelization of a 3D FVM code running on a structured, (backward) staggered grid. I use domain decomposition and MPI for parallelization and a fractional-step method for the pressure. I use SIP (Stone's strongly implicit procedure) as the solver for all variables. Data at the boundaries are exchanged via ghost cells, and I exchange the data at the end of each time step. The code runs well and spits out results, but they are not the same as the results from the serial code. There is a discrepancy, and the pressure at the domain boundaries doesn't look smooth.
I wonder if I have to do the information exchange at every inner solver iteration? I'm working on that right now, but I also have some doubts about the whole picture. I gather people prefer the multigrid method for solving the pressure; I'm not sure whether multigrid is a must for parallelization. I'd like to hear some ideas from experienced people. Any resource is also welcome.
Thanks,
Ertan
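P.S. To make the exchange concrete, the sketch below shows the kind of halo update I mean (C with MPI; the 1-D decomposition in x, i as the slowest-varying index, a halo width of 1, and the helper name exchange_ghosts are only illustrative assumptions, not my actual code). The open question is whether this should be called once per time step or once per inner SIP iteration.

Code:
#include <mpi.h>
#include <stdlib.h>

/* phi is stored as phi[i*ny*nz + j*nz + k] with i = 0..nx+1,
   where i = 0 and i = nx+1 are the ghost planes */
void exchange_ghosts(double *phi, int nx, int ny, int nz,
                     int left, int right, MPI_Comm comm)
{
    int plane = ny * nz;                          /* cells in one i-plane */

    /* shift right: send my last interior plane to 'right',
       receive my left neighbour's last interior plane into my left ghost */
    MPI_Sendrecv(&phi[nx * plane], plane, MPI_DOUBLE, right, 0,
                 &phi[0],          plane, MPI_DOUBLE, left,  0,
                 comm, MPI_STATUS_IGNORE);

    /* shift left: send my first interior plane to 'left',
       receive my right neighbour's first interior plane into my right ghost */
    MPI_Sendrecv(&phi[plane],            plane, MPI_DOUBLE, left,  1,
                 &phi[(nx + 1) * plane], plane, MPI_DOUBLE, right, 1,
                 comm, MPI_STATUS_IGNORE);
}

int main(int argc, char **argv)
{
    int rank, size, nx = 16, ny = 16, nz = 16;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* physical boundaries get MPI_PROC_NULL, so those transfers become no-ops */
    int left  = (rank == 0)        ? MPI_PROC_NULL : rank - 1;
    int right = (rank == size - 1) ? MPI_PROC_NULL : rank + 1;

    double *p = calloc((size_t)(nx + 2) * ny * nz, sizeof(double));
    exchange_ghosts(p, nx, ny, nz, left, right, MPI_COMM_WORLD);

    free(p);
    MPI_Finalize();
    return 0;
}

My worry is that if this update only happens once per time step, the inner SIP iterations on each subdomain work against stale neighbour values, which might explain why the pressure looks kinked at the interfaces.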
November 5, 2009, 10:06
#2
Senior Member
TWB
Join Date: Mar 2009
Posts: 414
Rep Power: 19
Hi,
Is it a must to use SIP? I suggest you use PETSc or HYPRE to solve the momentum and Poisson equations respectively. Of course, you'll need to take some time to learn them, but they should be quite fast. You'll have to update the values at each time step.
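Something along these lines would be a starting point. This is only a rough sketch of PETSc's KSP interface: a small 1-D Laplacian stands in for your pressure matrix, the right-hand side is a placeholder, and the calls follow recent PETSc releases (older ones differ slightly, e.g. KSPSetOperators).

Code:
#include <petscksp.h>

int main(int argc, char **argv)
{
    Mat      A;
    Vec      x, b;
    KSP      ksp;
    PetscInt i, n = 100, Istart, Iend;

    PetscInitialize(&argc, &argv, NULL, NULL);

    /* distributed matrix; a 1-D Laplacian as a placeholder for the
       pressure-Poisson operator */
    MatCreate(PETSC_COMM_WORLD, &A);
    MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n);
    MatSetFromOptions(A);
    MatSetUp(A);
    MatGetOwnershipRange(A, &Istart, &Iend);
    for (i = Istart; i < Iend; i++) {
        if (i > 0)     MatSetValue(A, i, i - 1, -1.0, INSERT_VALUES);
        if (i < n - 1) MatSetValue(A, i, i + 1, -1.0, INSERT_VALUES);
        MatSetValue(A, i, i, 2.0, INSERT_VALUES);
    }
    MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
    MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);

    MatCreateVecs(A, &x, &b);
    VecSet(b, 1.0);                  /* placeholder right-hand side */

    KSPCreate(PETSC_COMM_WORLD, &ksp);
    KSPSetOperators(ksp, A, A);
    KSPSetType(ksp, KSPCG);          /* CG suits the symmetric Poisson system */
    KSPSetFromOptions(ksp);          /* allows -ksp_type, -pc_type overrides */
    KSPSolve(ksp, b, x);

    KSPDestroy(&ksp);
    VecDestroy(&x);
    VecDestroy(&b);
    MatDestroy(&A);
    PetscFinalize();
    return 0;
}

If your PETSc build was configured with HYPRE, you can also try BoomerAMG as the preconditioner via -pc_type hypre on the command line, without changing the code.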
November 5, 2009, 12:25
#3
New Member
Ertan Karaismail
Join Date: Apr 2009
Posts: 17
Rep Power: 17
Thanks for the reply,
The code I've been working on can also use ICCG (incomplete-Cholesky-preconditioned conjugate gradient), Gauss-Seidel, CGSTAB, and ADI. The code looks pretty much like Peric's codes, which are available on his website. In one of his parallel codes (with PVM, though I don't think this matters), he seems to exchange data between domains at each inner iteration. I believe this brings in an extra computational burden. In my masters I worked on a finite difference code parallelized with PVM, and the boundary transfers were carried out at each time step. I don't understand why it is not working now.
One thing that comes to my mind is that the grid I use is staggered, and my data transfer algorithm might be wrong. Assuming that I decompose the domain in the x-direction, at the end of each subdomain I only have 1 ghost cell for the u-momentum cells, whereas there are 3 at the beginning of the subdomains. So I transfer 3 u-values from a given subdomain to the next one, and receive only 1 u-value in return. For the other variables (v, w, and P), since the cells are not staggered in the x-direction, I do a 2-by-2 transfer (2 values each of v, w, and P are sent and received). Does this algorithm make sense to you, or do you have any recommendation?
Thanks,
Ertan
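P.S. In case it clarifies the question, the sketch below is roughly what I mean by the u-exchange (not my actual code; I'm assuming i is the slowest-varying index, a contiguous i-plane of ny*nz values, 3 leading ghost planes at i = 0..2, the interior at i = 3..nxu+2, and 1 trailing ghost plane at i = nxu+3):

Code:
#include <mpi.h>

/* u is stored as u[i*ny*nz + j*nz + k]; left/right may be MPI_PROC_NULL
   at the physical boundaries */
void exchange_u(double *u, int nxu, int ny, int nz,
                int left, int right, MPI_Comm comm)
{
    int plane = ny * nz;

    /* shift right: my last 3 interior u-planes go to 'right';
       3 planes from 'left' land in my leading ghost planes i = 0..2 */
    MPI_Sendrecv(&u[nxu * plane],       3 * plane, MPI_DOUBLE, right, 2,
                 &u[0],                 3 * plane, MPI_DOUBLE, left,  2,
                 comm, MPI_STATUS_IGNORE);

    /* shift left: my first interior u-plane goes to 'left';
       1 plane from 'right' lands in my trailing ghost plane i = nxu+3 */
    MPI_Sendrecv(&u[3 * plane],         1 * plane, MPI_DOUBLE, left,  3,
                 &u[(nxu + 3) * plane], 1 * plane, MPI_DOUBLE, right, 3,
                 comm, MPI_STATUS_IGNORE);
}

The unequal send/receive counts are not a problem for MPI as long as both sides agree on them; my real doubt is whether the u-face shared by two subdomains ends up stored and updated consistently on both sides after the pressure correction.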