Interpolation of a volVectorField onto processor patches |
|
December 22, 2014, 12:43
#1
New Member
Adrien
Join Date: Feb 2013
Location: Switzerland
Posts: 2
Rep Power: 0
Dear all,

I need to compute fluxes over cell faces given a volVectorField. To fulfill the needs of my custom parallel code, the resulting surfaceScalarField requires consistent values across processor patches. I would like to use a piece of code like this one:

Code:
phi_ = fvc::interpolate(U_) & mesh_.Sf();

Hence my question: is there any direct way to interpolate a field from cell centers to face centers, with consistent values on processor patches?

To show the inconsistency that I mention, I attached a test case with a Cartesian mesh that has two grid cells and is decomposed onto two processors (i.e., one grid cell per processor). A velocity field U has the value (0, 0, 0) on processor 0 and (1, 0, 0) on processor 1. From fvc::interpolate(U), I would have expected a value of (0.5, 0, 0) on the processor patch on both processors. However, the (stripped) output is as follows:

Code:
[1] phi = dimensions [0 1 -2 0 0 0 0];
[1]
[1] internalField   nonuniform 0();
[1]
[1] boundaryField
[1] {
[1]     domainBoundary
[1]     {
[1]         type            calculated;
[1]         value           uniform (0 0 0);
[1]     }
[1]     procBoundary1to0
[1]     {
[1]         type            processor;
[1]         value           uniform (0.5 0 0);
[1]     }
[1] }
[0] phi = dimensions [0 1 -2 0 0 0 0];
[0]
[0] internalField   nonuniform 0();
[0]
[0] boundaryField
[0] {
[0]     domainBoundary
[0]     {
[0]         type            calculated;
[0]         value           uniform (0 0 0);
[0]     }
[0]     procBoundary0to1
[0]     {
[0]         type            processor;
[0]         value           uniform (0 0 0);
[0]     }
[0] }

The attached case can be compiled and run with:

Code:
wmake
./Allrun

As a workaround, I currently synchronize the values on the processor patches manually with the following function:

Code:
void Foam::synchronizeProcessorPatchField
(
    volVectorField& vf
)
{
    const polyBoundaryMesh& patches = vf.mesh().boundaryMesh();

    FieldField<fvPatchField, vector> oldBoundaryField = vf.boundaryField();

    forAll(patches, patchI)
    {
        vf.boundaryField()[patchI].initEvaluate(Pstream::blocking);
    }

    forAll(patches, patchI)
    {
        const polyPatch& pp = patches[patchI];
        fvPatchField<vector>& pf = vf.boundaryField()[patchI];

        // The following call puts the values of the patch neighbour field
        // into pf.
        pf.evaluate(Pstream::blocking);

        if (isA<processorPolyPatch>(pp))
        {
            pf = 0.5*(pf.patchNeighbourField() + oldBoundaryField[patchI]);
        }
    }
}

Any input is welcome! Thanks,
Adrien

Last edited by Mithrandirrr; December 22, 2014 at 12:56. Reason: Missing attached file
April 4, 2023, 07:29
#2
New Member
Ranjodh Rai
Join Date: Feb 2021
Posts: 3
Rep Power: 5
Hi, has anyone ever found a solution to this?
April 5, 2023, 18:36
#3
Senior Member
Mark Olesen
Join Date: Mar 2009
Location: https://olesenm.github.io/
Posts: 1,715
Rep Power: 40
You will need to check what is being done with the values later on. In general, the processor patches in OpenFOAM work a bit like a halo-value swap: procBoundary0to1 contains values from the faces of processor 1, and procBoundary1to0 contains values from the faces of processor 0. If, for your purposes, you need the averaged values for determining fluxes, you need to handle the coupled patches yourself. For example: https://develop.openfoam.com/Develop.../fluxSummary.C
Tags |
interpolation, parallel, patch, processor |