|
June 30, 2021, 10:17 |
Implementing periodic boundary conditions
|
#1 |
Senior Member
Sayan Bhattacharjee
Join Date: Mar 2020
Posts: 495
Rep Power: 8 |
I have a scramjet body with 4 structured grid blocks, as shown in the figure.
While implementing the periodic boundary conditions shown between blocks 1-4 and 3-4, I'm not 100% sure my implementation is correct. Currently, I think that if we just match the fluxes on the faces of the periodic boundaries, and then calculate the residuals in each cell correctly, our solution will be correct. Is that so? Or am I missing something important?

PS: There's a one-to-one mapping between the faces of the different blocks at the boundaries, so I don't think we need any interpolation. |
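To make the question concrete, here is roughly what I have in mind at a matched interface face (simplified to a single scalar state per cell; the names -- Block, num_flux, the face maps -- are made up for illustration, not my actual code):

Code:
#include <stddef.h>

/* One structured block: only the pieces needed for this sketch. */
typedef struct {
    double *u;    /* cell-averaged state, one value per cell  */
    double *res;  /* residual accumulator, one value per cell */
} Block;

/* Placeholder numerical flux (central average); the real one would be
 * whatever Riemann solver / scheme the code already uses.             */
static double num_flux(double uL, double uR) {
    return 0.5 * (uL + uR);
}

/* Face k of the interface joins cell map_a[k] of block 'a' to cell
 * map_b[k] of block 'b' (one-to-one mapping, so no interpolation).
 * One flux per face, added with opposite signs, keeps it conservative. */
void interface_fluxes(Block *a, Block *b,
                      const size_t *map_a, const size_t *map_b,
                      size_t nface)
{
    for (size_t k = 0; k < nface; ++k) {
        double f = num_flux(a->u[map_a[k]], b->u[map_b[k]]);
        a->res[map_a[k]] -= f;   /* flux leaves block a through this face */
        b->res[map_b[k]] += f;   /* ...and enters block b                 */
    }
}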
|
June 30, 2021, 13:46 |
|
#2 |
Senior Member
|
If each block has a layer of ghost/halo/whatever cells, which are just copies of the original cells in the corresponding blocks, all you need to do is:
1) Treat ghost cells of a block as if they were its own cells (i.e., allocate space for them in your arrays), but you won't use them except for computing fluxes between such ghost cells and actual cells of the current block. Such fluxes will, of course, only update the regular cells of the given block; that is, you need only the variables from the ghost cells, but otherwise leave them untouched. 2) Fluxes on faces between two blocks can then be handled like fluxes on interior faces of a block 3) Before computing fluxes you need to copy variables in ghost cells from their original cells in the original block and, if needed, locally recompute (or copy as well) all the resulting properties (viscosity, etc.) This is how you do distributed parallel computations and, I think, is all you need here. Periodicity is basically handled in the same way, but as it may actually be rotational (and require rotation of vectors), I would not use "periodicity" to describe your situation, as it may be misleading. |
|
June 30, 2021, 14:29 |
|
#3 |
Senior Member
Sayan Bhattacharjee
Join Date: Mar 2020
Posts: 495
Rep Power: 8 |
Thanks, I got the general idea. I will refer to the literature, as I remember reading something like what you described in many papers.
This part will hurt a lil' bit. I don't have ghost cells on my n*m cell blocks, but I can use separate arrays to act as ghost cell layers. I'm definitely taking a bit of a performance hit there, but I can make it work. |
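Something like this is what I'm picturing (just for my own bookkeeping; the names and layout are illustrative, not my actual arrays): a plain 1D array per block face that gets refilled from the neighbouring block before each residual evaluation.

Code:
#include <stddef.h>

/* Blocks keep interior cells only; the "ghost layer" is a separate array. */
typedef struct {
    int     ni, nj;      /* interior cells only                 */
    double *u;           /* state, size ni * nj                 */
    double *ghost_east;  /* nj values copied from the neighbour */
} Block;

static size_t id(const Block *b, int i, int j) {
    return (size_t)j * (size_t)b->ni + (size_t)i;
}

/* Refill the east ghost array of 'b' from the first interior column of its
 * neighbour 'nbr' (one-to-one face mapping, so a straight copy is enough). */
void refresh_ghost_east(Block *b, const Block *nbr)
{
    for (int j = 0; j < b->nj; ++j)
        b->ghost_east[j] = nbr->u[id(nbr, 0, j)];
}

/* In the flux loop, the east face of cell (ni-1, j) then uses
 * u[id(b, ni-1, j)] on the inside and b->ghost_east[j] on the outside. */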
|
July 1, 2021, 05:18 |
|
#4 |
Senior Member
|
As you might have noticed, my explanation is somewhat biased toward a distributed parallel implementation. But if you are in a shared memory environment, you don't really need such ghost cells; you can just have separate loops over each interblock face, as long as you can pick the correct variables from the two sides.
In distributed parallel you are obliged to copy the values coming from other processors somewhere, and ghost cells are the most obvious place. But if you already have those values somewhere, it might not make sense to copy them at all. |
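For the shared-memory case, the "separate loop per interblock face" is nothing more elaborate than this kind of thing (again just a sketch with made-up names, and no ghost copies at all):

Code:
#include <stddef.h>

typedef struct {
    int     ni, nj;
    double *u;     /* state,    size ni * nj */
    double *res;   /* residual, size ni * nj */
} Block;

static size_t id(const Block *b, int i, int j) {
    return (size_t)j * (size_t)b->ni + (size_t)i;
}

static double num_flux(double uL, double uR) {
    return 0.5 * (uL + uR);   /* placeholder; use the solver's own flux */
}

/* East face of 'left' glued to the west face of 'right', matching j rows.
 * Both blocks live in the same address space, so the two states can be
 * picked directly and both residuals updated in the same loop.           */
void interblock_face(Block *left, Block *right)
{
    for (int j = 0; j < left->nj; ++j) {
        double f = num_flux(left->u [id(left,  left->ni - 1, j)],
                            right->u[id(right, 0,            j)]);
        left->res [id(left,  left->ni - 1, j)] -= f;
        right->res[id(right, 0,            j)] += f;
    }
}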
|
|
|