June 9, 2022, 03:16
Need processor boundary condition guide

#1
Member
ff
Join Date: Feb 2010
Posts: 81
Rep Power: 16
Hello Foamers,
I need to understand better how OpenFOAM's parallel processor boundaries work: specifically, how does the processor boundary help to assemble the matrix, and how does it contribute the off-diagonal coefficients of the neighbour cells? Thanks very much.
June 13, 2022, 02:12

#2
Member
ff
Join Date: Feb 2010
Posts: 81
Rep Power: 16
Can anyone help? I would like to know how the zero-halo layer works. Thanks a lot.
June 13, 2022, 13:43

#3
Senior Member
Could you please elaborate on your query and explain in more detail what you are looking for?

Is your query related to the discretization process (the finite volume counterpart of the divergence or gradient), or to the linear system solve process?

Assume that processor i is connected to various processors j. Then the information on how processor i is connected to processor j is stored on processor i in the zero-halo layer for processor j. The exact nature of what information is stored and how this information is used requires further discussion. Does this help?

EDIT: a nice illustration of zero-halo layers is given in post number 7 of the thread lduMesh, lduAddressing and thisDb()

Last edited by dlahaye; June 13, 2022 at 13:49. Reason: information added
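To make the idea concrete, here is a rough sketch of the kind of per-face information such an interface holds for one neighbouring processor. The struct and all its names are hypothetical, invented purely for illustration; the actual OpenFOAM counterparts are processorFvPatch and processorFvPatchField. Code:
// Hypothetical sketch only; these names do not match the OpenFOAM source.
#include <vector>

struct ProcessorInterface
{
    int neighbProcNo;                 // rank of the neighbouring processor
    std::vector<int> faceCells;       // local cells adjacent to the shared faces
    std::vector<double> coupleCoeffs; // one matrix coefficient per shared face
    std::vector<double> receiveBuf;   // neighbour cell values, refreshed by an
                                      // MPI exchange at every matrix-vector product
};

int main()
{
    // One shared face: local cell 4 coupled to the neighbour with coefficient 10.
    ProcessorInterface iface{1, {4}, {10.0}, {0.0}};
    return static_cast<int>(iface.faceCells.size());
}
Note that only face-based data appears, not a full layer of ghost cells; hence the name zero-halo.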
June 16, 2022, 02:41

#4
Member
ff
Join Date: Feb 2010
Posts: 81
Rep Power: 16
Thanks very much for your reply. What I am looking for is a bit of both sides: the discretization process and the linear system solve process. We can use the following site as an example: https://openfoamwiki.net/index.php/O...es_in_OpenFOAM

Let's focus on this simple 3x3 grid with 9 cells and a simple Laplacian equation. In single-processor mode it is very straightforward: we build up the 9x9 matrix. Consider cell index 5, since it has no boundary patch attached; the neighbour cells of cell 5 are 2, 4, 6 and 8. Therefore the matrix has a coefficient on the diagonal at (4,4), and off-diagonal coefficients at (4,1), (4,3), (4,5) and (4,7) (using zero-based indices).

Now split the mesh at cells 1, 6, 7. The faces between cells (1,2), (6,5) and (7,8) are converted into processor-type boundary patches. Say cells 1, 6, 7 go to processor rank 0 and cells 2, 3, 4, 5, 8, 9 to processor rank 1. Then processor 0 builds a 3x3 matrix locally, and processor 1 builds a 6x6 matrix locally.

Traditionally, we would construct a dummy (halo) layer on processor 0 and on processor 1. In the traditional way, processor 0 would receive the information of cells 2, 5, 8 from processor 1 and build a 6x6 matrix locally, and processor 1 would receive the information of cells 1, 6, 7 and build a 9x9 matrix locally. But in OpenFOAM I don't see the processor boundary transfer the entire neighbour cell information; it is only implemented as a local patch boundary condition, with face information transferred. So I am a bit confused about how OpenFOAM builds up the parallel matrix.
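To make this concrete, here is a toy snippet of my own (not OpenFOAM code; it assumes unit face coefficients and the decomposition described above) that prints the serial matrix row of cell 5 and marks the one coupling that cannot live in rank 1's local 6x6 matrix: Code:
// Toy sketch, not OpenFOAM code: serial row of cell 5 (0-based row 4)
// for a unit-coefficient Laplacian on the 3x3 grid.
#include <cstdio>

int main()
{
    // Neighbours of cell 5 are cells 2, 4, 6, 8 -> columns 1, 3, 5, 7.
    const double row4[9] = {0, -1, 0, -1, 4, -1, 0, -1, 0};

    // Rank 0 owns cells 1, 6, 7; rank 1 owns cells 2, 3, 4, 5, 8, 9.
    // Cell 6 (column 5) is off-rank for rank 1, so its coefficient cannot
    // be stored in rank 1's local matrix.
    for (int j = 0; j < 9; ++j)
    {
        if (row4[j] != 0)
        {
            printf("A(4,%d) = %+g%s\n", j, row4[j],
                   j == 5 ? "  <- off-rank for rank 1" : "");
        }
    }
    return 0;
}
My question is exactly what happens to that marked coefficient in OpenFOAM.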
June 16, 2022, 03:12

#5
Member
ff
Join Date: Feb 2010
Posts: 81
Rep Power: 16
So let's trace the OpenFOAM code.

1. In gaussLaplacianScheme, we have Code:
if (pvf.coupled())
{
    fvm.internalCoeffs()[patchi] =
        pGamma*pvf.gradientInternalCoeffs(pDeltaCoeffs);
    fvm.boundaryCoeffs()[patchi] =
        -pGamma*pvf.gradientBoundaryCoeffs(pDeltaCoeffs);
}
where pDeltaCoeffs is the reciprocal of the distance from the owner cell centre to the processor patch face centre, and pGamma is the face area magnitude times gamma (like a thermal conductivity). (Correct me if I am wrong.)
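Before moving on, here is a quick numeric check of my own (made-up values, not from any case) of what these two coefficients evaluate to for a single processor face, using the formulas that step 2 below will show: Code:
// Quick numeric check with made-up values for one processor face,
// following the formulas in the snippets of steps 1 and 2.
#include <cstdio>

int main()
{
    const double gamma = 1.0;        // diffusivity (e.g. thermal conductivity)
    const double magSf = 0.01;       // face area magnitude
    const double deltaCoeff = 10.0;  // reciprocal owner-to-neighbour distance
    const double pGamma = gamma*magSf;

    // Step 2 below shows gradientInternalCoeffs returns -deltaCoeffs and
    // gradientBoundaryCoeffs returns +deltaCoeffs (its negation):
    const double internalCoeff = pGamma*(-deltaCoeff);  // goes to internalCoeffs
                                                        // (folded into the diagonal)
    const double boundaryCoeff = -pGamma*(deltaCoeff);  // kept on the interface

    printf("internalCoeff = %g, boundaryCoeff = %g\n",
           internalCoeff, boundaryCoeff);  // both -0.1 here
    return 0;
}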
2. Then, for a processor coupled patch, processorFvPatchField -> coupledFvPatchField -> gradientInternalCoeffs will be called. Code:
Foam::coupledFvPatchField<Type>::gradientInternalCoeffs
(
    const scalarField& deltaCoeffs
) const
{
    return -Type(pTraits<Type>::one)*deltaCoeffs;
}

Foam::coupledFvPatchField<Type>::gradientBoundaryCoeffs
(
    const scalarField& deltaCoeffs
) const
{
    return -this->gradientInternalCoeffs(deltaCoeffs);
}

3. Then, in fvMatrixSolve, Foam::SolverPerformance<Type> Foam::fvMatrix<Type>::solveSegregated runs addBoundarySource, which is Code:
void Foam::fvMatrix<Type>::addBoundarySource
(
    Field<Type>& source,
    const bool couples
) const
{
    forAll(psi_.boundaryField(), patchi)
    {
        const fvPatchField<Type>& ptf = psi_.boundaryField()[patchi];
        const Field<Type>& pbc = boundaryCoeffs_[patchi];

        if (!ptf.coupled())
        {
            addToInternalField(lduAddr().patchAddr(patchi), pbc, source);
        }
        else if (couples)
        {
            const tmp<Field<Type>> tpnf = ptf.patchNeighbourField();
            const Field<Type>& pnf = tpnf();

            const labelUList& addr = lduAddr().patchAddr(patchi);

            forAll(addr, facei)
            {
                source[addr[facei]] += cmptMultiply(pbc[facei], pnf[facei]);
            }
        }
    }
}

4. Then it runs Code:
updateMatrixInterfaces
(
    true,
    bouCoeffsCmpt,
    interfaces,
    psiCmpt_ss(),
    sourceCmpt_ss.ref(),
    cmpt,
    startRequest
);
which is in Code:
lduMatrixUpdateMatrixInterfaces.C:
void Foam::lduMatrix::updateMatrixInterfaces
.......
interfaces[interfacei].updateInterfaceMatrix
(
    result,
    add,
    psiif,
    coupleCoeffs[interfacei],
    cmpt,
    Pstream::defaultCommsType
);

5. For a processor patch, this dispatches to Code:
void Foam::processorFvPatchField<Type>::updateInterfaceMatrix
(
    Field<Type>& result,
    const bool add,
    const Field<Type>&,
    const scalarField& coeffs,
    const Pstream::commsTypes commsType
) const
{
    ...
    transformCoupleField(receiveBuf_);

    // Multiply the field by coefficients and add into the result
    this->addToInternalField(result, !add, coeffs, receiveBuf_);
}

6. This will call lduInterfaceField::addToInternalField. Code:
template<class Type>
void Foam::lduInterfaceField::addToInternalField
(
    Field<Type>& result,
    const bool add,
    const scalarField& coeffs,
    const Field<Type>& vals
) const
{
    const labelUList& faceCells = this->interface().faceCells();

    if (add)
    {
        forAll(faceCells, elemI)
        {
            result[faceCells[elemI]] += coeffs[elemI]*vals[elemI];
        }
    }
    else
    {
        forAll(faceCells, elemI)
        {
            result[faceCells[elemI]] -= coeffs[elemI]*vals[elemI];
        }
    }
}

Following the entire process: it first multiplies the boundary coefficients (which are -1*gradientInternalCoeffs) on the owner and neighbour sides of the processor boundary, and then subtracts, i.e. result[faceCells[elemI]] -= coeffs[elemI]*receiveBuf_[elemI]. So the processor boundary only contributes to the matrix system's diagonal and to the right-hand-side source term. This is what mainly confuses me: how does OpenFOAM build up the parallel matrix at the processor boundary by affecting only the diagonal and the source term, and still get the same solution as the single matrix?
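To test this numerically, here is a self-contained toy of my own (not OpenFOAM code; names and sign conventions are simplified) that mimics what the trace above does for a single processor face: the internal coefficient sits on the local diagonal, and the neighbour coupling is applied during the matrix-vector product from the received values: Code:
// Toy check, not OpenFOAM code: a 2-cell system split across two "ranks".
// Each rank stores only its diagonal; the off-diagonal -c is applied
// matrix-free during the matrix-vector product, exactly like
// updateInterfaceMatrix / addToInternalField above.
#include <cstdio>

int main()
{
    const double c = 10.0;              // stands in for pGamma*deltaCoeffs
    const double psi[2] = {1.0, 3.0};   // field values: cell 0 (rank 0), cell 1 (rank 1)

    // Serial 2x2 matrix [ c -c; -c c ] applied to psi:
    const double serial0 =  c*psi[0] - c*psi[1];
    const double serial1 = -c*psi[0] + c*psi[1];

    // Distributed: each rank holds only its diagonal entry c; during the
    // mat-vec each rank receives the neighbour value (receiveBuf_) and
    // subtracts coeff*value, like addToInternalField with add == false:
    double rank0 = c*psi[0];
    rank0 -= c*psi[1];                  // result -= coeffs[f]*receiveBuf_[f]
    double rank1 = c*psi[1];
    rank1 -= c*psi[0];

    printf("serial  : %g %g\n", serial0, serial1);
    printf("parallel: %g %g\n", rank0, rank1);  // identical to the serial result
    return 0;
}
The two products come out identical, which suggests the off-diagonal coupling is not lost but applied matrix-free, per face, at every matrix-vector product instead of being stored in the local matrix.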
June 16, 2022, 07:06

#6
Senior Member
Thank you for further elaborating. I much like your ideas.

I suggest separating the discretization from the linear system solve.

The discretization is well described in the book by Moukalled and Darwish (at least for the sequential case). I am not able to answer your queries on the discretization at short notice.

The linear solve is done by a preconditioned Krylov subspace method implemented in parallel. One can thus analyse how the required vector and matrix-vector operations are implemented in OpenFOAM. A start of this analysis is given in https://www.linkedin.com/pulse/openf...menico-lahaye/

I imagine that you are overlooking ingredients when stating that "only diagonal elements" are modified/used. I much like your suggestion of looking into a small example. One could draw a scheme of the matrix decomposition (including the interfaces), further examine the code, and use a parallel debugger to verify the perceived understanding. Does this make sense?
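To illustrate the solve side, here is a minimal conjugate gradient sketch of my own (plain unpreconditioned C++, not the OpenFOAM implementation). The point is that only two ingredients need communication in parallel: the matrix-vector product, where the interface update traced in post #5 enters, and the dot products, which become global reductions. Code:
// Minimal CG sketch, not OpenFOAM source; the 2x2 SPD matrix is made up.
#include <cstdio>
#include <cmath>

// In parallel, this product would be the split local-matrix plus
// interface update traced in post #5 (communication point 1).
static void Amul(double* y, const double* x)
{
    y[0] =  2*x[0] - x[1];
    y[1] = -x[0] + 2*x[1];
}

// In parallel, this becomes a global reduction, e.g. MPI_Allreduce
// (communication point 2).
static double dot(const double* a, const double* b)
{
    return a[0]*b[0] + a[1]*b[1];
}

int main()
{
    double b[2] = {1, 0}, x[2] = {0, 0};
    double r[2] = {b[0], b[1]}, p[2] = {r[0], r[1]}, Ap[2];
    double rr = dot(r, r);

    for (int it = 0; it < 10 && std::sqrt(rr) > 1e-12; ++it)
    {
        Amul(Ap, p);
        const double alpha = rr/dot(p, Ap);
        for (int i = 0; i < 2; ++i) { x[i] += alpha*p[i]; r[i] -= alpha*Ap[i]; }
        const double rrNew = dot(r, r);
        const double beta = rrNew/rr;
        rr = rrNew;
        for (int i = 0; i < 2; ++i) { p[i] = r[i] + beta*p[i]; }
    }
    printf("x = [%g, %g]\n", x[0], x[1]);  // expect [2/3, 1/3]
    return 0;
}
All the remaining vector updates are purely local, which is why the distributed matrix format only has to support the mat-vec.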