
steps required to transform single block structured FV code to multi-block

December 12, 2013, 11:00   #1
antoine_b (New Member, Join Date: Dec 2013, Posts: 4)
Hi,

Can anyone explain to me the different steps required to transform/extend a finite-volume single-block structured code into a finite-volume multi-block structured code?

December 12, 2013, 11:25   #2
cfdnewbie (Senior Member, Join Date: Mar 2010, Posts: 557)
Are you interested in conforming or non-conforming blocking?

December 12, 2013, 12:01   #3
antoine_b (New Member, Join Date: Dec 2013, Posts: 4)
Hi,

I am not sure I completely understand your question, but if I design a mesh, it will conform to/follow the boundary. Do you mean non-conforming in the sense of an immersed boundary, for instance?

December 12, 2013, 12:31   #4
cfdnewbie (Senior Member, Join Date: Mar 2010, Posts: 557)
No, what I meant was: Do the blocks have to be connected in a conforming way? No hanging nodes?

December 12, 2013, 12:59   #5
antoine_b (New Member, Join Date: Dec 2013, Posts: 4)
Ah, OK. I want to do conforming interfaces at first (no hanging nodes).

December 13, 2013, 05:11   #6
sbaffini (Paolo Lampitella, Senior Member, Join Date: Mar 2009, Location: Italy, Posts: 2,195)
I have no practical experience, so you might already be beyond the point I'm going to describe (in which case, sorry for the redundant post), but this is more or less what you should do/consider:

1) A first big difference comes from the basics of the main solver: is it parallel or serial? I assume it is serial, which possibly simplifies the description. So the point is how to go from single-block (SB) serial to multi-block (MB) serial. I'll come back to the parallel case later (by parallel I mean MPI; shared memory is essentially like serial).

2) The very nice thing about this problem is that the SB solver is almost all you need for the MB case (I assume you have some tool to produce your MB grids). Indeed, what are boundary conditions in the SB case become interface conditions from the adjacent blocks in the MB case. Roughly speaking, you do this by adding ghost cells at the interface boundaries of your blocks and exchanging information with the adjacent blocks during the iterations (I'll come back to this later).

3) So, I would summarize the first step like this: you create your different blocks; on every block a SB solver is running; on the interface boundaries between blocks you use as boundary conditions the values grabbed from the adjacent blocks and temporarily stored in ghost cells. However, you should use these values just as if the cells of interest (those near the interface boundary) were interior cells, i.e., apply the computational stencil for interior cells there. On real boundaries, of course, you keep using your real boundary conditions.

4) Ghost cells need to be exact replicas of the corresponding cells of the adjacent blocks, and you need as many ghost layers as required by the interior computational stencil (see the sketch below).
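
Just to make points 2)-4) concrete, a minimal sketch of the ghost-cell exchange across a conforming interface could look like the following (Python/NumPy, 2D scalar field, hypothetical array names, not code from any particular solver):

Code:
import numpy as np

def exchange_interface(phi_left, phi_right, ng):
    # phi_left, phi_right: cell-centred fields of two adjacent blocks, shape
    # (ni + 2*ng, nj + 2*ng), i.e. with ng ghost layers on every side.
    # The i-max face of the left block coincides with the i-min face of the
    # right block, with matching j-indexing (conforming, no hanging nodes).

    # last ng interior layers of the left block -> i-min ghosts of the right block
    phi_right[:ng, :] = phi_left[-2*ng:-ng, :]
    # first ng interior layers of the right block -> i-max ghosts of the left block
    phi_left[-ng:, :] = phi_right[ng:2*ng, :]

# usage: two blocks with 2 ghost layers each
ng = 2
phi_A = np.zeros((10 + 2*ng, 8 + 2*ng))
phi_B = np.ones((12 + 2*ng, 8 + 2*ng))
exchange_interface(phi_A, phi_B, ng)   # ghost layers of both blocks are now filled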

5) The main question now becomes how you visit the different blocks. In serial (or shared memory), you would probably have a main cycle iterating over the blocks, then solving within each block. For a fully explicit scheme this is not particularly problematic, and you possibly just have to consider how to treat hyperbolicity in the order in which you visit the blocks (I'm really ignorant here, I'm just guessing). For implicit schemes and general elliptic/parabolic problems there is (if I remember correctly) a whole new mathematical problem to consider, which goes under the name of Domain Decomposition Techniques (Schwarz preconditioning in this specific case); again, I'm quite ignorant here, but you can read Chapter 6 of

Canuto, Hussaini, Quarteroni, Zang: Spectral Methods. Evolution to Complex Geometries and Applications to Fluid Dynamics, Springer

for more information.

Basically, as I understand the matter, since you now have to solve an algebraic system, you will need to iterate multiple times among the blocks within each time step, in order to properly exchange the information at the interfaces during the inner iterations on the algebraic systems of the single blocks. How to alternate between iterations among blocks and inner iterations within each block, for each time step, is the main issue here, and I don't have experience with it.
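
As a minimal sketch of such a time step, assuming a Schwarz-like alternation between interface exchanges and inner iterations (the block-solver methods below are hypothetical names, not an existing API):

Code:
def advance_time_step(blocks, n_outer=50, n_inner=5, tol=1e-8):
    # blocks: list of single-block solver objects; fill_ghosts_from_neighbours,
    # inner_iterate and residual stand for "grab interface data from adjacent
    # blocks", "run a few sweeps of the SB implicit solver with ghost values
    # frozen" and "block residual", respectively.
    for outer in range(n_outer):
        # 1) refresh every block's ghost cells from the current neighbour solution
        for b in blocks:
            b.fill_ghosts_from_neighbours()
        # 2) a few inner iterations of each single-block implicit solve
        for b in blocks:
            b.inner_iterate(n_inner)
        # 3) coupled convergence check over all the blocks
        if max(b.residual() for b in blocks) < tol:
            break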

6) In serial (or shared-memory parallel) you don't really need ghost cells: you just need some mask, linked list (or whatever) that tells you how near-interface cells are connected across blocks. However, the ghost-cell method is useful because then you can easily move to the MPI-parallel case, at least for a first naive implementation. In that case, your blocks would no longer live in the same memory space; instead, you would distribute the blocks (whole blocks, at least one per processor) among the processors. Everything would work mostly in the same way, the main difference being that the blocks would be running in parallel and the grabbing of interface conditions has to consider two main things (a minimal MPI sketch follows below):

- which processor has the near-interface cells I need
- which MPI call to use to exchange the information


I'm sure there are more efficient/intelligent ways to parallelize everything, but this is certainly easier. If the original code is already parallel, honestly, I don't know of any simple way to do the job; actually, parallelization is usually the last step in code development. The main difficulty there would be that everything gets inverted: each processor would hold an equivalent part of all the blocks (a 1/Np fraction of the cells of each block, with Np processors), and each serial solver would actually have to work on Nb separate sub-blocks, Nb being the total number of blocks.
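
For the naive one-block-per-processor distribution described above, the interface exchange could look like this minimal mpi4py sketch (hypothetical names; only the i-direction neighbours are shown and the neighbour-rank bookkeeping is assumed to exist):

Code:
from mpi4py import MPI
import numpy as np

def exchange_ghosts_mpi(phi, ng, left_rank, right_rank):
    # phi: this rank's block, shape (ni + 2*ng, nj + 2*ng), ghosts included.
    # left_rank / right_rank: ranks owning the neighbour blocks across the
    # i-min / i-max interfaces, or None where the face is a real boundary.
    comm = MPI.COMM_WORLD
    if right_rank is not None:
        send = np.ascontiguousarray(phi[-2*ng:-ng, :])   # my last interior layers
        recv = np.empty_like(send)
        comm.Sendrecv(send, dest=right_rank, recvbuf=recv, source=right_rank)
        phi[-ng:, :] = recv                              # fill my i-max ghosts
    if left_rank is not None:
        send = np.ascontiguousarray(phi[ng:2*ng, :])     # my first interior layers
        recv = np.empty_like(send)
        comm.Sendrecv(send, dest=left_rank, recvbuf=recv, source=left_rank)
        phi[:ng, :] = recv                               # fill my i-min ghosts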

This is, more or less, the textbook part of the job. I'm pretty sure other people can give more accurate and useful information on the matter.

December 13, 2013, 06:07   #7
hilllike (Ren/Xingyue, Member, Join Date: Jan 2010, Location: Nagoya, Japan, Posts: 44)
I did that in my code.

The only difference is how to store the mesh.

Store the entire domain as one grid system, so that you do not need to treat the interfaces (no ghost cells needed), and discretize all the governing equations over the entire computational domain.
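
In case it helps, one way such a single-grid-system bookkeeping could be set up (a hypothetical Python sketch, not the actual code referred to in this post) is to give every cell of every block a unique global number and assemble the discretized equations over that numbering:

Code:
def build_global_numbering(block_shapes):
    # block_shapes: list of (ni, nj) cell counts per block.
    # Returns per-block offsets so that global_id = offsets[ib] + i*nj + j.
    offsets, total = [], 0
    for ni, nj in block_shapes:
        offsets.append(total)
        total += ni * nj
    return offsets, total

# usage: cell (i=2, j=1) of the second of two blocks
shapes = [(4, 3), (5, 3)]
offsets, ncells = build_global_numbering(shapes)
ib, i, j = 1, 2, 1
gid = offsets[ib] + i * shapes[ib][1] + j   # -> 19, out of ncells = 27

The coupling across block interfaces then enters only through the face connectivity built on top of this numbering when the equations are discretized over the entire domain.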

December 13, 2013, 13:03   #8
antoine_b (New Member, Join Date: Dec 2013, Posts: 4)
OK, thanks to both of you. I can see more clearly what I need to do now.


Tags
finite-volume, single/multi blocks, structured

