January 16, 2017, 05:26 |
On the immersed boundary method
|
#1 |
Senior Member
|
Dear all,
I want to open a discussion on the immersed boundary method to collect some experiences from practitioners or anybody else willing to contribute.

First of all, some context for everybody. What do we mean by "Immersed Boundary Method" (IBM)? IBM roughly refers to a way of handling boundary conditions on geometrically non-trivial boundaries through geometrically trivial grids or, at least, relatively trivial ones (e.g., we can use the IBM on fully unstructured grids only for some geometric features not directly captured by the grid). I won't go into full detail here on all the possible methods, nor will I be precise in terminology, as I am a practitioner more than an academic expert.

Roughly speaking, the IBM can be implemented either through continuous forcing methods (an analytical source term is added to the equations before the discretization) or via discrete forcing methods (which, in practice, means interpolating/extrapolating to special ghost cells, but can actually be formalized as a discrete forcing, i.e., a force added after the discretization according to certain criteria). Some would also include cut-cell methods in the IBM family, but I won't here because, to me, they differ from the others: they work on the geometry of the grid instead of working on the field discretized on the grid (to the point of being, at least in certain cases, fully body-fitted discretizations).

Besides pure analytical forcing, additional methods can be added to the first class, such as Brinkman penalization, VOF-like treatment of solids and level-set-like treatment of solids. Besides being based on a continuous formulation, these methods have another common implementation trait: to the best of my knowledge, they require a full grid. That is, grid parts falling within any solid object are still retained in the computation because, by construction, the continuous fields underlying the methods are defined everywhere and require a full discretization also at the interface between the solid and fluid parts of the grid. Moreover, these methods make it difficult, or even impossible, to implement complex boundary conditions (e.g., wall functions).

Discrete forcing methods, on the other side, basically use an approximate grid (not fitted to the geometry) and then do one of two things: either use a special boundary condition on the new, non-body-fitted boundary, or interpolate/extrapolate to dummy (ghost) cells outside of the current computational domain and use the interior scheme also on the boundaries. More roughly speaking, you interpolate/extrapolate to wherever you need in order to finally have a suitable boundary condition. Obviously, the interpolations/extrapolations (according to the position, inside or outside the computational domain, of the point where variables are needed) have to take into account both the local boundary condition and the local flow field (where "local" is, somehow, left to the author to decide). The main advantage of the discrete forcing is that, besides ghost cells, you don't need the grid in the solid part and, as a consequence, your ghost cells can geometrically overlap with no consequences (they're just useful locations where variables can be stored). Also, complex boundary conditions are no more difficult to implement than in a body-fitted framework.
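To make the ghost-cell idea concrete, here is a minimal sketch in Python (with hypothetical names, not taken from any specific paper): the field is interpolated at an image point inside the fluid, and the ghost value is then set so that a linear profile along the wall normal satisfies the boundary condition.

```python
def ghost_cell_value(phi_image, phi_wall, bc_type, d_image, d_ghost):
    """Value to store in a ghost cell so that the interior scheme, applied
    unchanged across the immersed boundary, sees the desired wall condition.

    phi_image : field value interpolated at an image point inside the fluid
                (itself obtained from neighbouring fluid cells)
    phi_wall  : prescribed wall value (Dirichlet case)
    d_image   : distance of the image point from the wall (> 0)
    d_ghost   : distance of the ghost cell centre from the wall (> 0, solid side)
    """
    if bc_type == "dirichlet":
        # linear profile along the wall normal passing through the wall value;
        # for a mirrored image point (d_ghost == d_image) this reduces to
        # phi_ghost = 2*phi_wall - phi_image
        return phi_wall - (phi_image - phi_wall) * d_ghost / d_image
    elif bc_type == "neumann_zero":
        # zero normal gradient: simply reflect the image value
        return phi_image
    raise ValueError(f"unknown boundary condition type: {bc_type}")
```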
In this post, I want to focus on an industrial-level IBM, implemented in an otherwise unstructured code, with no cell waste (i.e., "solid" cells are not actually present in the grid), and required to deal with complex boundary conditions. As such, I will focus on the discrete forcing method. Also, as some might have noticed, I described a Finite Volume (FV) setting, which, again, I feel is the most suitable. Last edited by sbaffini; January 16, 2017 at 08:55. |
|
January 16, 2017, 08:55 |
|
#2 |
Senior Member
|
Continuing from the previous post...
I've been in the "immersed boundary business" for a while now, and I have to say that I'm quite skeptical about some of its aspects.

First of all, the boundary treatment itself. The whole IBM business, as we said, somehow relates to the fact that on boundary faces we keep using the same schemes as for interior faces. We might either interpolate/extrapolate to ghost cells (depending on whether they are inside or outside the domain) and apply the schemes on the shared face as usual, or directly interpolate to the face and somehow use that information directly or, again, through the scheme. For example, Kang et al. https://www.researchgate.net/publica...oundary_method interpolate to the face and use the interpolated variables directly to build the fluxes (at least according to their paper), which might be fine for the central schemes they use in LES/DNS with a pressure-based code. A similar approach is used by Mathur & Murthy (former Fluent developers): https://pdfs.semanticscholar.org/922...a4de103216.pdf http://docs.lib.purdue.edu/cgi/viewc...&context=prism and, from what I understood, also by some OpenFOAM developments.

However, I work on a density-based code heavily depending on characteristic directions and upwinding, so two problems typically arise. First, I cannot use the interpolated face variables directly to build the fluxes, as those fluxes might not respect the characteristic directions (by construction, such interpolations are centered, to account for both the local b.c. and the local flow field). So I am forced to use the convective scheme on all the boundary faces, assuming that the state based on the direct interpolation comes from a fictitious cell on the other side of the boundary (say, the right cell), while the left state is computed from the cell on the interior side. However, as soon as I start using the scheme on the boundaries, I might end up totally discarding part or all of the boundary information because of the upwinding. The consequences are not absolutely catastrophic per se but, coupled with the other problem discussed below, I typically have to invest more cells than usual to counteract such effects.

An alternative approach, to keep using the directly interpolated fluxes on the boundary faces, would be to switch (at least for boundary faces) from an upwind scheme (e.g., Roe flux-difference splitting) to a central scheme plus dissipation (e.g., Jameson), whose dissipation is strictly applied at the cell level and not on the faces. But I have no idea if that would actually help. Is this something you have experienced? Last edited by sbaffini; January 16, 2017 at 11:20. |
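To illustrate the point about passing the interpolated boundary state through the scheme, here is a minimal sketch where the IB interpolation provides the fictitious "right" state of an approximate Riemann flux. For brevity I use a Rusanov (local Lax-Friedrichs) flux instead of the Roe flux-difference splitting mentioned above; the names are hypothetical, and the point is only that the dissipation term, not the raw interpolated values, decides how much of the boundary information survives the upwinding.

```python
import numpy as np

def boundary_face_flux(u_interior, u_boundary, normal, gamma=1.4):
    """Rusanov (local Lax-Friedrichs) flux at an immersed-boundary face.

    u_interior : conservative state [rho, rho*u, rho*v, (rho*w,) rho*E] in the interior cell
    u_boundary : fictitious exterior state built from the IB interpolation
    normal     : unit face normal pointing out of the interior cell
    """
    u_interior = np.asarray(u_interior, dtype=float)
    u_boundary = np.asarray(u_boundary, dtype=float)
    normal = np.asarray(normal, dtype=float)

    def euler_flux(u):
        rho, mom, rhoE = u[0], u[1:-1], u[-1]
        vel = mom / rho
        p = (gamma - 1.0) * (rhoE - 0.5 * rho * np.dot(vel, vel))
        vn = np.dot(vel, normal)
        f = np.empty_like(u)
        f[0] = rho * vn
        f[1:-1] = mom * vn + p * normal
        f[-1] = (rhoE + p) * vn
        return f, vn, np.sqrt(gamma * p / rho)

    fL, vnL, aL = euler_flux(u_interior)
    fR, vnR, aR = euler_flux(u_boundary)
    smax = max(abs(vnL) + aL, abs(vnR) + aR)   # fastest acoustic wave speed
    return 0.5 * (fL + fR) - 0.5 * smax * (u_boundary - u_interior)
```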
|
January 16, 2017, 09:27 |
|
#3 |
Senior Member
Filippo Maria Denaro
Join Date: Jul 2010
Posts: 6,897
Rep Power: 73 |
I worked on the IBM several years ago, using 2D unstructured grids with elastic membranes moving over them. I followed the formulation of Peskin.
This formulation produces stiffness problems for rigid elements. My main concern is about the correct simulation of a closed, non-permeable body, where you must ensure mass conservation. Some papers tried to improve this aspect but, as I am no longer involved in this topic, I am aware only of old papers. I wonder if any of you have performed a quantitative analysis of that. For example, a simple channel flow with a rectangular bluff body simulated by the immersed boundary: did you check that the body is really non-permeable to machine accuracy, or do you accept that it is non-permeable only to the accuracy of the local truncation error? Is the inflow flow rate of the channel the same as that at the outlet? |
|
January 16, 2017, 10:25 |
|
#4 |
Senior Member
|
Another issue in the discrete forcing IBM is the variable arrangement.
As we said, for most aspects the grid used by the method is a regular grid, it just doesn't fit the true geometry. However, there is a lot of room here to devise different approaches.

Kang et al., from the paper above, discard all the cells cut by the true geometry as well as the remaining unreachable ones (e.g., inside the body); then they interpolate to the boundary face centers. I also use this arrangement because:

- no computational cell that is cut by the geometry is retained (face areas and cell volumes are computed straightforwardly)
- variables are always interpolated, meaning that the interpolation points (the centers of the boundary faces) are, by construction, in the fluid domain
- boundary fluxes are also always applied to faces entirely within the fluid
- among the methods with the features above, this is the one whose first computational cell next to the wall is the closest (with respect to the other methods), as the space between the boundary face and the geometry can be arbitrarily small

However, several alternatives exist. The most used approach retains all the cells whose center is inside the domain and keeps their adjacent ones as ghost cells (just like ghost cells in regular body-fitted grids). With this approach, while you have the advantage of having the first fluid cell as close as possible to the boundary, there are, in my opinion, two main disadvantages:

- values in ghost cells are, strictly, extrapolated, as ghost cell centers are outside the fluid domain by construction
- boundary fluxes could as well be applied to face centers which are not in the fluid domain, which, somehow, is another kind of extrapolation (however, this does not always happen)

Finally, one could also interpolate to purely "fluid" ghost cells (similarly to the injection used for the k-epsilon turbulence model), as done, for example, here: https://www.researchgate.net/publication/234877111_An_immersed_boundary_method_for_compressible_flows_using_local_grid_refinement Basically, one uses the ghost cell arrangement above but, instead of extrapolating to the ghost cell centers (which are outside the fluid domain), one interpolates to the fluid cell centers adjacent to them, which by construction are in the fluid. This approach avoids extrapolation, but some of the cell faces used in the computation might still be intersected by the actual geometry. In order to avoid this, one might use the Kang et al. setting with injection in the first fluid cell adjacent to the boundary. This is the only approach I still have to try and would like to investigate. The only concern here is that the first computational cell ends up relatively far from the boundary which, in the end, might be counterproductive.

Do you have any preferred arrangement? If so, why? Have you investigated some alternatives? |
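As a rough illustration of the first arrangement (not Kang et al.'s actual reconstruction, which is more careful), here is a sketch with hypothetical names: cells are flagged via a signed distance so that cut and solid cells are discarded, and boundary face-center values are obtained by a simple inverse-distance blend of nearby fluid-cell values and the wall value, so that both the local flow field and the boundary condition enter the stencil.

```python
import numpy as np

def classify_cells(centers, signed_distance, h):
    """Crude flag of retained vs discarded cells for this arrangement.
    A real implementation would test actual cell/geometry intersections;
    here a cell is kept only if its centre is safely inside the fluid.

    centers         : (n, dim) array of cell centres
    signed_distance : callable, > 0 in the fluid, < 0 in the solid
    h               : representative local cell size
    """
    d = np.array([signed_distance(c) for c in centers])
    fluid = d > 0.5 * h
    return fluid, ~fluid

def boundary_face_value(xf, fluid_centers, fluid_values, x_wall, phi_wall, n_pts=4):
    """Inverse-distance blend of nearby fluid-cell values and the wall value
    at a boundary face centre xf, so that both the local flow field and the
    boundary condition enter the interpolation."""
    dist = np.linalg.norm(fluid_centers - xf, axis=1)
    idx = np.argsort(dist)[:n_pts]                 # closest fluid cells
    pts = np.vstack([fluid_centers[idx], x_wall])  # add the wall point
    vals = np.append(fluid_values[idx], phi_wall)
    w = 1.0 / (np.linalg.norm(pts - xf, axis=1) + 1e-12)
    return np.sum(w * vals) / np.sum(w)
```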
|
January 16, 2017, 11:06 |
|
#5 |
Senior Member
|
Finally, I want to somehow connect to the issue raised by Filippo, even if the continuous forcing framework he used might have a very different interpretation of the problem.
What I found for the discrete forcing is that, even if you work out all the interpolations perfectly, avoid any extrapolation, etc., non-matching fluxes at the boundaries (e.g., mass leaks on solid boundaries) are still going to dominate the robustness of the code, not so much in terms of divergence of the code but in terms of accuracy/convergence toward the right solution.

For the IBM novices, the above problem in the discrete forcing method can be summarized as follows. Imagine the flow around a sphere. In a body-fitted world, your convective flux on the wall boundary will be exactly 0 for each boundary face on the sphere. As a consequence, what enters your domain will necessarily leave it through your outflow and, obviously, no mass flux will be present through your sphere wall boundary. In the IBM world this is no longer true, at least not always. Your IBM grid will be a stairstep approximation of the sphere, and boundary conditions need to be applied at face centers which typically are in the fluid and not exactly on the solid boundary (as they would be in the body-fitted world). So, in practice, instead of having all 0s as mass fluxes on the wall boundaries, you end up with a collection of interpolated values, probably close to 0 most of the time, but with no guarantee of that. Also, for closed boundaries, you have no guarantee that all such values will sum up to machine 0. In practice, this is most evident for the mass fluxes, as you end up having a mass leak through the walls (actually, the sign is arbitrary), but the effect is present for all the fluxes. Your only hope is to build everything correctly so as to drive such flux leaks to 0 with the interpolation error through grid refinement.

There are some methods to handle this problem which, as stated at the beginning, is common to most IBMs. The most prominent one I am aware of for the discrete forcing arrangement is this: https://www.researchgate.net/publication/220206770_Prediction_of_wall-pressure_fluctuation_in_turbulent_flows_with_an_immersed_boundary_method a method similar to the one used at outflows for incompressible flows: you know the flux imbalance on a given set of faces and redistribute an area-weighted correction to the boundary fluxes to achieve the balance at every iteration. In practice, the method above also includes a local correction factor proportional to the local interpolation error, so that the correction stays within the error of the overall method.

However, my question is: do you know if this really works in terms of local accuracy, for any practical resolution and for all the cases? For example, in the paper above they use a direct flux interpolation on the boundary faces, while I have to pass through the convective scheme. So, according to the local upwind direction, part of the correction might be totally useless if applied before the scheme. So, probably, this correction might need to be applied only after the boundary fluxes are computed. But then, the specific formulation can only be used as a generic guide, as it is no longer exact. Mathur & Murthy also cite a similar correction, but distributed among all the computational cells of the domain (thus, it is just volume-weighted). This might be too rough, yet the paper obviously claims that it works. Thoughts/ideas about this?

I hope I have also answered the questions raised by Filippo (yes, at least down to the smallest grids I have tested, I have a mass leak proportional to the 2nd power of the grid spacing and I have Mdot_wall + Mdot_in + Mdot_out = 0 to machine accuracy). |
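A minimal sketch of the kind of area-weighted redistribution described above (my own simplified reading of the idea, not the exact formulation of the cited paper): the net imbalance over the immersed-boundary wall faces is computed and removed by a weighted correction, optionally weighted also by a measure of the local interpolation error so the fix stays within the accuracy of the method.

```python
import numpy as np

def correct_wall_mass_fluxes(mdot_wall, area_wall, local_error=None):
    """Redistribute the net mass imbalance over the immersed-boundary wall
    faces so that the corrected fluxes sum exactly to zero.

    mdot_wall   : interpolated mass fluxes on the IB wall faces
    area_wall   : corresponding face areas
    local_error : optional weights ~ local interpolation error
    """
    mdot_wall = np.asarray(mdot_wall, dtype=float)
    area_wall = np.asarray(area_wall, dtype=float)
    imbalance = mdot_wall.sum()   # should vanish for a closed, non-permeable body
    w = area_wall if local_error is None else area_wall * np.asarray(local_error)
    return mdot_wall - imbalance * w / w.sum()
```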
|
January 16, 2017, 17:15 |
|
#6 |
Senior Member
Filippo Maria Denaro
Join Date: Jul 2010
Posts: 6,897
Rep Power: 73 |
Just to help the discussion, this is a review by Peskin about his method
http://www4.ncsu.edu/~zhilin/TEACHIN...8Z/Peskin1.pdf |
|
January 16, 2017, 22:22 |
|
#7 |
Senior Member
Arjun
Join Date: Mar 2009
Location: Nurenberg, Germany
Posts: 1,291
Rep Power: 35 |
Let me add a little bit about my experience to the discussion.
We used the immersed boundary for the simulation of golf balls. For non-rotating balls the results were good, at least at low Reynolds number. As the Reynolds number increased and as the rotation of the ball increased, what we found was that the lift coefficient the solver predicted was nearly 0, or a bit higher than 0, while the actual lift coefficient would be in the range of 0.15 to 0.2 or so. I invested a long time into the reasons for it. In the end, I came up with two fixes that finally predicted the correct lift coefficients.

As pointed out, the equations solved in the case of an immersed boundary are not exact. In terms of Navier-Stokes, one can say that there are errors introduced in the momentum equation and errors introduced in the continuity equation. I have adopted this equation-wise thinking based on control volumes.

Continuity: the error introduced here is that one can see some mass leaking into solid cells. The fluxes can be set to 0 to minimize this error, but that does not remove the error in the fluxes on the other faces of the control volume.

Momentum equation: the convection term sees that leaked flux, and shear stress is missing to properly close the equation in the control volume under consideration.

------

In our case there was one very subtle thing that mattered. Typically the solid region also sees the pressure solve and thus has a distribution of pressure. Since pressure is relative in an incompressible flow, the pressure in the solid affects how the flow behaves at the solid-fluid boundary and may not allow the flow to be calculated properly. In our case predicting separation was an absolute must. But since the pressure is wrong there, and at the top part of the separation the pressure is tied to the pressure at the bottom of the ball, the separation could not be predicted well. In the body-fitted scenario this error was absent. So, to remedy this, the control volumes that were marked solid were taken out of the computation, and the solid pressure was obtained by extrapolation from the interior, like in a body-fitted grid.

The second change was made to account for the shear stress. To remedy this problem the most attractive option was the ghost cell method, because it fixes the solid values such that, after discretization, the effect is almost equal to that of a shear (in 1D it would be exact). The problem with using ghost cells was that they modify the values in the solid region, and that kind of disturbs continuity: the velocities in the solid may no longer be smooth. So, to counter this, what I did was to remember how far from the cell center the solid surface cuts the cell. Based on it, a factor between 0 and 1 was calculated. The contribution from that face of the control volume to the diffusion term was then scaled based on this factor, correcting the diffusive flux. I called this method shear stress scaling; it was the major factor in fixing that lift coefficient problem.

to be continued. .... |
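Since the post does not give the exact formula, the following is only a guess at what the "shear stress scaling" could look like in code: the compact normal-gradient diffusive flux on a face is multiplied by a cut fraction between 0 and 1, so that a cell almost entirely cut by the wall contributes almost nothing. All names and the definition of the factor are assumptions for illustration.

```python
def scaled_diffusive_flux(mu, phi_cell, phi_ghost, dist, area, cut_fraction):
    """Sketch of a 'shear stress scaling' of the diffusive face flux.

    cut_fraction : assumed factor in [0, 1] built from how far from the cell
                   centre the solid surface cuts the cell (0: wall at the cell
                   centre, 1: cell untouched by the wall); the exact definition
                   is not given in the post, so this is a placeholder.
    """
    grad_n = (phi_ghost - phi_cell) / dist   # compact normal-gradient approximation
    return cut_fraction * mu * grad_n * area
```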
|
January 17, 2017, 17:29 |
|
#8 |
Senior Member
Filippo Maria Denaro
Join Date: Jul 2010
Posts: 6,897
Rep Power: 73 |
I would be glad to see some quantitative results about the error in volume conservation.
|
|
January 18, 2017, 06:37 |
|
#9 |
Senior Member
|
In the next few days I might be busier than usual, but I'll come back ASAP with some quantitative results. When doing this analysis, I typically focus on inviscid incompressible flows so that, at least in theory, my only independent flux is the mass one (in practice, however, I still solve the full system). Would that be OK?
|
|
January 18, 2017, 08:23 |
|
#10 |
Senior Member
Filippo Maria Denaro
Join Date: Jul 2010
Posts: 6,897
Rep Power: 73 |
What kind of test is that?
|
|
January 18, 2017, 09:48 |
|
#11 |
Senior Member
|
Well, I was thinking of something simple.
A square channel with a sphere: 1 body-fitted INLET, 1 body-fitted OUTLET, 4 body-fitted INVISCID lateral WALLS and 1 immersed-boundary INVISCID sphere WALL. I would then check the mass imbalance between INLET and OUTLET, at full machine-precision convergence, for different uniform grid spacings. |
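For the refinement study, a quick way to report the result is the observed order of the wall mass leak. A small sketch (the numbers in the usage comment are purely illustrative, not actual results):

```python
import numpy as np

def observed_leak_order(h, leak):
    """Observed order p of the wall mass leak from a refinement study,
    assuming leak ~ C * h**p and fitting p in log-log space.

    h    : uniform grid spacings used in the study
    leak : corresponding |Mdot_in + Mdot_out + Mdot_wall| at full convergence
    """
    p, _log_c = np.polyfit(np.log(h), np.log(leak), 1)
    return p

# Purely illustrative numbers:
# observed_leak_order([0.04, 0.02, 0.01], [3e-4, 8e-5, 2e-5]) -> p close to 2
```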
|
January 18, 2017, 11:50 |
|
#12 | |
Senior Member
Filippo Maria Denaro
Join Date: Jul 2010
Posts: 6,897
Rep Power: 73 |
Quote:
Paolo, may I suggest checking the simplest problem? A channel with one immersed square body and a uniform Cartesian grid. |
||
January 18, 2017, 11:59 |
|
#13 | |
Senior Member
|
Quote:
However, let's start from that and see what happens. P.S. I actually also worked on a simplified test case with only 4 cells, one of which is cut. This allowed me to put the equations in analytical form (which, of course, was only possible for a Laplace equation; that's also why the inviscid/incompressible assumption). We can get there, if needed. |
||
January 18, 2017, 12:03 |
|
#14 | |
Senior Member
Filippo Maria Denaro
Join Date: Jul 2010
Posts: 6,897
Rep Power: 73 |
Quote:
OK, I just want a case where the interpolation for the IB is as accurate as possible. The recirculation at the rear part of the bluff body is steady and symmetric at low Re, so that we can evaluate the presence of mass leaking in the most favorable setup. |
||
January 18, 2017, 12:52 |
|
#15 |
Senior Member
|
What is disappointing about this matter is that, for a billion papers on the IBM, there are a billion different IBM approaches. And as the method is strongly dependent on every single detail, the usefulness of such an amount of research is basically null.
Unfortunately, the IBM does not exist as a separate field; it's just a bunch of people modifying their Cartesian, staggered (!!!), explicit codes to handle complex geometries. I'll get back soon with some results. |
|
January 19, 2017, 02:27 |
|
#16 | |
Senior Member
Arjun
Join Date: Mar 2009
Location: Nurenberg, Germany
Posts: 1,291
Rep Power: 35 |
Quote:
Yes, this is a big problem. Everyone has his own ideas and there is nothing concrete. The way I see it, there are two types of errors: one is geometric, that is, the failure to represent the boundary well; the second is the error in the boundary conditions and the solid-fluid flow caused by the geometric error made in (1). With a benchmark solution, perhaps it would be possible to see and report how large the geometric errors are. I wanted to write more, but my brain is very busy right now with explicit overset, so I will come back with more thoughts later. |
||
January 19, 2017, 06:11 |
|
#17 |
Senior Member
Filippo Maria Denaro
Join Date: Jul 2010
Posts: 6,897
Rep Power: 73 |
To tell the truth, my opinion on the IBM is that it has new life but no new ideas... I remember that J.P. Boris introduced the cut-cell method back in the '70s for simulating explosions over ships using Cartesian grids.
|
|
January 19, 2017, 07:51 |
|
#18 | |
Senior Member
|
Quote:
Actually, I frequently find works whose title (and, in theory, the actual content too) intrigues me. But, when I start reading them, I always find so many specificities (a staggered, pressure-based approach with no concern for complex models is the most frequent case) that even the interesting stuff becomes useless to a practitioner just needing a robust, flexible implementation. I guess this is related to the very idea of the IBM, to turn a Cartesian solver into something that can manage complex geometries. Makes sense, still... it's just sad that, after LES, I picked up another field with similar problems (in LES the issue is related to the numerical implementation). |
||
January 19, 2017, 07:58 |
|
#19 |
Senior Member
Filippo Maria Denaro
Join Date: Jul 2010
Posts: 6,897
Rep Power: 73 |
Exactly! Just new life in old CFD codes... The key is in the fact that now we have very flexible codes to manage complex geometry, importing it into the Cartesian grid and constructing the intersections and topology. Nothing else...
|
|
January 19, 2017, 09:50 |
|
#20 | |
Senior Member
Arjun
Join Date: Mar 2009
Location: Nurenberg, Germany
Posts: 1,291
Rep Power: 35 |
Quote:
1. Getting it right in polyhedral format.
2. Doing it efficiently.
3. Efficient parallel implementation.
4. Using it for more than just velocity enforcement. So far, immersed boundary means forcing the flow.

Personally, I am attracted towards the cut-cell type, but without actually creating cut cells. |
||
|
|