My *first* multiregion case

October 30, 2021, 18:40  #21
Problem running snappyHexMesh in parallel
Alan w (boffin5)
My radiator case is now running in serial, but setting it up for parallel is giving me trouble. I was trying to figure out how to write a run script, but found a more fundamental problem.


To create the two regions, I run blockMesh twice. The command sequence for the serial case looks like this:

Code:
blockMesh
surfaceFeatures
# next step creates the radiator mesh and puts it in the solid folder
snappyHexMesh -dict system/snappyHexMeshDict.1 -overwrite
cd constant
cp -r polyMesh ./solid
rm -r polyMesh
cd ..
blockMesh
# next step carves the fairing out of the domain and puts it in the fluid folder
snappyHexMesh -dict system/snappyHexMeshDict.2 -overwrite
cd constant
cp -r polyMesh ./fluid
rm -r polyMesh
cd ..
chtMultiRegionFoam


It works, and I'm a happy camper.

Now, when I adapt it for the parallel case, the sequence goes:

Code:
blockMesh
surfaceFeatures
decomposePar -copyZero -force
mpirun -np 2 snappyHexMesh -dict system/snappyHexMeshDict.1 -parallel -overwrite
# stopping here, because something goes wrong



This is where the showstopper occurs. When I run snappyHexMesh through mpirun, it doesn't create the radiator mesh; in fact, nothing happens to the blockMesh domain, even though I am using the same snappyHexMeshDict.1 file. In the subsequent steps, when I run snappyHexMesh under mpirun with snappyHexMeshDict.2, it fails again. Something about the parallel mode is making snappyHexMesh behave differently, or more likely, my mpirun command is somehow incorrect.


Help! Where am I going wrong? Hoping for advice and info.

November 1, 2021, 12:38  #22
Yann
Hi Alan,

Quote:
Originally Posted by boffin5
Code:
blockMesh
surfaceFeatures

decomposePar -copyZero -force

mpirun -np 2 snappyHexMesh -overwrite -parallel     # creates mesh for radiator

cd constant
cp -r polyMesh ./solid

cd ..
blockMesh

mpirun -np 2 snappyHexMesh -dict system/snappyHexMeshDict.2 -overwrite -parallel    #  carves out bod from domain

cd constant
cp -r polyMesh ./fluid

rm -r polyMesh
cd ..

mpirun -np 2 potentialFoam -noFunctionObjects -parallel

checkMesh -region solid | tee checkMesh-solid.log
checkMesh -region fluid | tee checkMesh-fluid.log

mpirun -np 2 chtMultiRegionFoam -parallel | tee run.log

reconstructParMesh -constant
reconstructPar -latestTime
In this script, your problem is related to the 2nd region: when you run blockMesh, the new mesh is written in constant/polyMesh. If you then run snappy in parallel, it looks for the initial mesh in the processor directories, but since you did not decompose the mesh you have just created with blockMesh, it fails because there is no mesh to work with in the processor* directories. When you run something in OpenFOAM, always try to understand which files you are creating and where they are written. Generally speaking, if you run a command in serial, it writes data at the root of your case, either in constant or in the time directories depending on the command. If you run a command in parallel, it reads from and writes to the processor* directories (whether it is the mesh or the fields).
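As a quick sanity check, you can list where the meshes actually are after each step, for example:

Code:
# mesh written by a serial blockMesh run
ls -d constant/polyMesh

# decomposed meshes used by the -parallel commands
ls -d processor*/constant/polyMesh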

The beginning of your script should be fine, but there is an easier way to deal with this thanks to the -region option available in many utilities.

Try something like this:

Code:
surfaceFeatures

#create initial meshes for both regions
blockMesh -region solid
blockMesh -region fluid

#decompose both regions at once 
decomposePar -copyZero -allRegions

#run snappy in parallel
mpirun -np 2 snappyHexMesh -region solid -parallel -overwrite 2>&1 | tee snappyHexMesh-solid.log
mpirun -np 2 snappyHexMesh -region fluid -parallel -overwrite 2>&1 | tee snappyHexMesh-fluid.log

#check both meshes in parallel
mpirun -np 2 checkMesh -region solid -constant -parallel 2>&1 | tee checkMesh-solid.log
mpirun -np 2 checkMesh -region fluid -constant -parallel 2>&1 | tee checkMesh-fluid.log
This requires having a blockMeshDict, decomposeParDict and snappyHexMeshDict located in their respective region directories inside system (system/solid and system/fluid).
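To make the layout explicit, the case would then look roughly like this (a sketch only, not tested, using the standard dictionary names):

Code:
system/
    controlDict
    solid/
        blockMeshDict
        decomposeParDict
        snappyHexMeshDict
    fluid/
        blockMeshDict
        decomposeParDict
        snappyHexMeshDict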

I did not try to run this script so it might require some adjustments to make it work, but at least it gives you a global idea of the workflow.

There is no -region option for potentialFoam, so it will surely fail, because it will look for a mesh in processor*/constant/polyMesh while your meshes are actually in processor*/constant/solid/polyMesh and processor*/constant/fluid/polyMesh.

Another tip: write a log file of your whole script, or at least of each OpenFOAM command you run in it. If something goes wrong, you have to follow your workflow step by step until you find which operation failed, and without log files it is not easy to find the source of the problem. (For instance, if something fails at the beginning of your script, all the following steps will fail too. You will get a never-ending list of errors in your terminal, but the one which actually matters is the first one.)

You will notice I use this syntax "2>&1 | tee myLogFile":
  • File descriptor 1 is the standard output (stdout)
  • File descriptor 2 is the standard error (stderr)
2>&1 means redirecting the standard error into the standard output. If you only use "| tee myLogFile", just the standard output is written to the log file and the errors are lost, which is not very helpful when you want to see what went wrong by reading your log files.
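Once each command has its own log file, you can quickly find the first failure by searching for the OpenFOAM error banner, for instance:

Code:
# list the log files containing a fatal error
grep -l "FOAM FATAL" *.log

# show the first fatal error of a given log with some context
grep -m 1 -A 5 "FOAM FATAL" snappyHexMesh-solid.log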


Let us know if this is enough to solve your last issues!
Yann

November 1, 2021, 15:19  #23
Thanks again Yann!
Alan w (boffin5)
I started incorporating your suggestions, but I found that there is no '-region' option for snappyHexMesh either. It's unfortunate that there are so many differences between v2012 and OpenFOAM 8, but there you are.

So I'm trying to figure that one out. I'll let you know.

November 1, 2021, 15:36  #24
Yann
Hi Alan,

My bad, I thought I checked for the -region option for snappyHexMesh in OpenFOAM-8 but it turns out I just did it for blockMesh and decomposePar.

Since you cannot specify any region with snappy, you have to get back to your initial idea: moving polyMesh directories in order to get the job done.

Something like this:
Code:
surfaceFeatures

#create initial meshes for both regions
blockMesh -region solid
blockMesh -region fluid

#decompose both regions at once 
decomposePar -copyZero -allRegions

#mesh solid region
for i in $(seq 0 1); do mv processor$i/constant/solid/polyMesh processor$i/constant/polyMesh; done
mpirun -np 2 snappyHexMesh -dict system/snappyHexMeshDict.1 -parallel -overwrite 2>&1 | tee snappyHexMesh-solid.log
for i in $(seq 0 1); do mv processor$i/constant/polyMesh processor$i/constant/solid/polyMesh; done

#mesh fluid region
for i in $(seq 0 1); do mv processor$i/constant/fluid/polyMesh processor$i/constant/polyMesh; done
mpirun -np 2 snappyHexMesh -dict system/snappyHexMeshDict.2 -parallel -overwrite 2>&1 | tee snappyHexMesh-fluid.log
for i in $(seq 0 1); do mv processor$i/constant/polyMesh processor$i/constant/fluid/polyMesh; done

#check both meshes in parallel
mpirun -np 2 checkMesh -region solid -constant -parallel 2>&1 | tee checkMesh-solid.log
 mpirun -np 2 checkMesh -region fluid -constant -parallel 2>&1 | tee checkMesh-fluid.log
There might be more elegant ways to do it, but it should do the job. We could loop over the regions to avoid duplicating the meshing sequence (see the sketch below), but if you are not used to shell scripting, let's start with this.
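For the record, here is roughly what such a loop could look like (untested, and it assumes your two dictionaries are still named snappyHexMeshDict.1 for the solid and snappyHexMeshDict.2 for the fluid, as in your serial sequence):

Code:
n=1
for region in solid fluid
do
    # move the region mesh to where snappy expects it
    for proc in processor0 processor1
    do
        mv $proc/constant/$region/polyMesh $proc/constant/polyMesh
    done

    mpirun -np 2 snappyHexMesh -dict system/snappyHexMeshDict.$n -parallel -overwrite 2>&1 | tee snappyHexMesh-$region.log

    # move the snapped mesh back into its region directory
    for proc in processor0 processor1
    do
        mv $proc/constant/polyMesh $proc/constant/$region/polyMesh
    done

    n=$((n+1))
done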

Cheers,
Yann

November 3, 2021, 13:47  #25
multiRegion radiator case - 1 inch to the finish line (okay, 25.4 mm)
Alan w (boffin5)
My case runs in serial mode, and with Yann's help I have gotten the meshing worked out for the parallel mode. When I run it, though, it nearly finishes the preliminary steps and then halts with this message:


[0] --> FOAM FATAL IO ERROR:
[0] inconsistent patch and patchField types for
patch type processor and patchField type zeroGradient
[0]
[0] file: /home/boffin5/cfdaero/radiator-case5-parallel/processor0/0/solid/htcConst/boundaryField/.* from line 26 to line 26.


Why the failure just for the parallel case? This is confounding, since I copied the BCs for 0/solid/htcConst directly from the heatExchanger tutorial:


Code:
dimensions      [1 0 -3 -1 0 0 0];

internalField   uniform 10;

boundaryField
{
    ".*"
    {
        type            zeroGradient;
    }
}


But somehow it fails here. So I said, aha, I'll just use the type from the T boundary conditions; that seems to make sense for a heat transfer coefficient:


Code:
    type            compressible::turbulentTemperatureCoupledBaffleMixed;
    value           $internalField;
    kappa           kappa;
    Tnbr            T;


But now the run failed with this message:


[0] --> FOAM FATAL ERROR:
[0] ' not type 'mappedPatchBase'
for patch rad_radinlet of field htcConst in file "/home/boffin5/cfdaero/radiator-case5-parallel/processor0/0/solid/htcConst"
[0]
[0] From function Foam::compressible::turbulentTemperatureCoupledBaffleMixedFvPatchScalarField::turbulentTemperatureCoupledBaffleMixedFvPatchScalarField(const Foam::fvPatch&, const Foam::DimensionedField<double, Foam::volMesh>&, const Foam::dictionary&)
[0] in file derivedFvPatchFields/turbulentTemperatureCoupledBaffleMixed/turbulentTemperatureCoupledBaffleMixedFvPatchScalarField.C at line 96.
[0]
FOAM parallel run exiting


It seems to have a problem with patch rad_radinlet. Some background: my radiator.stl has patch names embedded in it, including radinlet. Accordingly, my snappyHexMeshDict file has the patches listed as such:


Code:
refinementSurfaces
{
    rad
    {
        // Surface-wise min and max refinement level
        level (4 4);

        radinlet      {level (4 4); patchInfo {type patch;}}
        radoutlet     {level (1 1); patchInfo {type patch;}}
        radwalltop    {level (1 1); patchInfo {type wall;}}
        radwallbottom {level (1 1); patchInfo {type wall;}}
        radwall-lh    {level (1 1); patchInfo {type wall;}}
        radwall-rh    {level (1 1); patchInfo {type wall;}}
    }
}


In the meshing process, snappyHexMesh hangs the 'rad_' prefix (the geometry name) onto all those patches, and they are listed that way in all my BCs; for example, rad_radinlet.



But I'm not even sure that the problem is related to patch names. Actually, I'm not sure about anything, except that if I can resolve the htcConst problem, this case should run!


As before, I'm hoping for help from the community, and I must say that it's a fantastic community, that has helped me so much.

November 4, 2021, 06:04  #26
Yann
Hi Alan,

Code:
boundaryField
{
    ".*"
    {
        type zeroGradient;
    }
}
This means you are applying a zeroGradient condition on all the patches in your domain.

When decomposing your case in order to run in parallel, new boundaries are created in each subdomain. Those boundaries are basically the interfaces between the processors, and they are named like this: procBoundary1to0, procBoundary1to2, etc. You can see this if you have a look at the polyMesh/boundary files in the processor directories after decomposing the case.
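For example (adjust the region and processor number to your case):

Code:
# list the patches of the decomposed solid mesh, including the procBoundary* patches
cat processor0/constant/solid/polyMesh/boundary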

Like any other boundary, a boundary condition must be defined for these new patches in each variable of your 0 directory. These patches use a specific boundary condition type named processor:

Code:
    procBoundary1to0
    {
        type processor;
    }
The error you get when trying to run your case in parallel is basically the solver complaining because you are applying a zeroGradient condition on a patch which is declared in the polyMesh/boundary files as type processor.

In order to solve your issue, you need to define a processor boundary condition for each variable in your 0 directory. Thanks to regular expressions this is easy to do, just add this:
Code:
    "proc.*"
    {
        type processor;
    }
Let us know if your case is running after this fix!
Yann

November 4, 2021, 16:41  #27
Gave me a bit of a shock, but it worked!
Alan w (boffin5)
Hi Yann,


As you advised, I incorporated the 'proc' change to all the BCs in the zero directory, and was totally disappointed when the run failed again with the same error message.


Then I tried a few changes, and based on the error outputs, ended up with this boundary condition version:


Code:
boundaryField
{
    "proc.*"
    {
        type processor;
    }

    /*
    ".*"
    {
        type zeroGradient;
    }
    */

    rad_radinlet
    {
        type zeroGradient;
    }

    rad_radoutlet
    {
        type zeroGradient;
    }

    "(rad_radwalltop|rad_radwallbottom|rad_radwall-lh|rad_radwall-rh)"
    {
        type zeroGradient;
    }
}


The ".*" entry is supposed to apply to all patches, but the only way it would work was by listing the patches individually. Go figure. This reminds me of an incident at my former company, where a big computer program kept failing, and only after relentless debugging was it found that, while 'square root of x' didn't work, 'x to the power of 1/2' did.


I can't thank you enough for all your help!

November 5, 2021, 04:19  #28
Yann
Hi Alan,

In which order did you write your BC?

This will work:
Code:
boundaryField
{
    ".*"
    {
        type zeroGradient;
    }
    
    "proc.*"
    {
        type processor;
    }
}
But this will NOT work:

Code:
boundaryField
{
    "proc.*"
    {
        type processor;
    }
    
    ".*"
    {
        type zeroGradient;
    }
}
In the last example, the processor BC is applied to all the patches starting with "proc", and then the zeroGradient BC is applied to all patches, including the ones starting with "proc". So basically the last BC overrides the previous declaration.

This is valid for everything in OpenFOAM: if a parameter is defined twice, this is always the last statement which is applied.

Cheers,
Yann

November 6, 2021, 05:42  #29
Derek Mitchell (derekm)
Here is a strategy for learning OpenFOAM:

1) Start with the tutorial closest to your problem, take a copy, and run it to make sure it works. This is now your latest working version.

Repeat the following until you get the desired result:

2) Take a copy of the latest working version.
3) Make one modification.
4) Run it.
5) If it works, proceed to the next step; if it fails, go back to step 2.
6) This version is now the latest working version; go to step 2.

Slow and tedious, but with something as complex as OpenFOAM, necessary.

November 6, 2021, 16:40  #30
BC question, and now a paraFoam issue
Alan w (boffin5)
Regarding the use of ".*" { type ...; } in boundary conditions, I would think it should be the first one in the list, since if it were last it would overwrite all the previous ones?

At any rate, in the multiRegion radiator case, I put "proc.*" { type processor; } at the end of all the BCs in the case, and now it runs. But when I look at it in ParaView, the solid region has color gradients, while the fluid region never changes from its initial color of blue; see attached. The streamline function works, so velocity data is being generated. I cannot fathom why ParaView is not addressing the fluid region, and yet again, I am hoping for help.
Attached Images
File Type: png color-problem.png (10.6 KB, 18 views)

November 7, 2021, 07:23  #31
Krapf
Quote:
Originally Posted by boffin5
Regarding the use of ".*" { type ...; } in boundary conditions, I would think it should be the first one in the list, since if it were last it would overwrite all the previous ones?
The rule is that an exact match has priority over a regular expression and multiple regular expressions are matched in reverse order. The latter means that the regular expressions in the file are processed from bottom to top and the value is assigned if a match occurs. This value would not be overwritten in case of another match with a regular expression. In case of an exact match, however, the value would be changed.
See: https://openfoam.org/release/1-6/, Library developments, Dictionary improvements/changes

November 8, 2021, 11:56  #32
Alan w (boffin5)
Thank you Krapf; I appreciate your help! What you are saying is subtly complex. In Yann's post three entries prior, he gives two examples. Taking each one from bottom to top:

First example - the procs match "proc.*" first and get type processor; then ".*" assigns zeroGradient to everything that has not already matched a regular expression, so the procs keep type processor. Desired outcome.

Second example - ".*" is matched first, so everything, including the procs, gets type zeroGradient; the "proc.*" entry above it cannot override that, so the procs stay at zeroGradient. Bad outcome.

The conclusion is the same as Yann's, even though the logic runs in the opposite direction.

November 8, 2021, 12:46  #33
Yann
Hi all,

Quote:
Originally Posted by Krapf
The rule is that an exact match has priority over a regular expression and multiple regular expressions are matched in reverse order. The latter means that the regular expressions in the file are processed from bottom to top and the value is assigned if a match occurs. This value would not be overwritten in case of another match with a regular expression. In case of an exact match, however, the value would be changed.
See: https://openfoam.org/release/1-6/, Library developments, Dictionary improvements/changes
Thanks Krapf for pointing this out, I did not know the keyword match was taking precedence over the regular expression match irrespective of the order of the entries. This is good to know.

Alan, can you give us a bit more context about how you load your case in ParaView? Have you tried loading only the fluid/internalMesh to see if you can visualize something?
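If it helps, you can also open a single region directly from the command line with the -region option of paraFoam:

Code:
# open only the fluid region in ParaView
paraFoam -region fluid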

Yann

November 9, 2021, 12:33  #34
ParaView problem solved - brute force method
Alan w (boffin5)
Hi Yann,


Paraview works correctly now; what I did was to just rebuild the case around a tutorial, cleaning things up as I went.
It amounts to overhauling an engine, rather than tracing an oil leak. A learning opportunity lost.

But it works, so I'm chuffed (English term).


I do have a question about the use of potentialFoam in a multiRegion case, but I think I will post it under a new thread.


At the risk of being repetitive, sincere thanks go to you for your indispensable help, and now I'm looking forward to applying OpenFOAM in a productive way!

November 12, 2021, 18:34  #35
Another problem has reared its head - chtMultiRegion case
Alan w (boffin5)
My initial happiness when my radiator case successfully ran has proved to be fleeting. I had both 0/fluid/U and 0/solid/U set at 0.01, and with these values the case ran. But obviously 1 centimeter per second is not a real-world airflow velocity, so I changed it to 30 meters per second in 0/fluid/U, leaving the solid folder alone. The velocity there is only an initial setting and is recalculated through the run; please correct me if this is not so. Running the case produced the attached run log, complete with errors.


Bottom line, it says:
energy -> temperature conversion failed to converge
maximum number of iterations exceeded: 100


I have searched on this message, but can't find any real leads on the problem.


My fluid temp was set at 300, and for the solid it was 400.
Attached is a text file showing the fvOptions for both fluid and solid.


As one of the many things I tried, I changed 'master' to 'true' in constant/fluid/fvOptions, with associated changes in the solid folder. I was thinking that the airflow should be the master. This required copying AoV and htcConst into the fluid folder. But although it ran, the results were the same with a fluid velocity of 30.


My case was based on the heatExchanger tutorial, and I can't find any significant differences. So at this point I am again perplexed, as I thought I had this case whipped! So again, I sadly and hopefully ask for advice and aid. The complete case zip file is also attached.
Attached Files
File Type: zip radiator5.zip (178.7 KB, 3 views)
File Type: txt run-log(1).txt (18.0 KB, 5 views)
File Type: txt fvOptions.txt (1.5 KB, 9 views)

November 13, 2021, 15:06  #36
Alan w (boffin5)
I'm still grappling with this case, and made 2 different debugging attempts that may yield information to someone far smarter than myself.


Because the failure message said "maximum number of iterations exceeded: 100", I searched for and found a setting in system/fluid/fvSolution: maxIter 100;. I increased this to 1000, but the case failed again with the same message.


Then, to simplify things, I ran the case with no fairing around the radiator, just the radiator itself, which is a porous zone with the properties of water, hanging in space. This time the case ran okay, with a fluid velocity of 35. But when I put the fairing back on, it failed. Obviously, the fairing is necessary to do vehicle studies.



I can't draw any conclusions from this debugging, but perhaps someone else can?

November 16, 2021, 17:46  #37
I can't believe it failed again!
Alan w (boffin5)
At the risk of validating the definition of insanity, I again rebuilt my case around the heatExchanger tutorial, this time religiously changing nothing except what was absolutely necessary for my radiator case.


And with an end time of just 50 time steps, it worked, and I was so relieved that I finally had it in working order. So I then set it to the standard 2000 time steps and launched it.


To my dismay, it failed at time step 143; this was a crushing development. The failure message was the same as before:
'energy -> temperature conversion failed to converge
maximum number of iterations exceeded: 100'


My next step was to learn how to look at residuals. I can do that now, but I'm not sure it will tell me what is causing these failures. I have attached my run log for this latest snafu; it has the middle time steps deleted to manage the file size. I really hope that someone can educate me as to why this is happening.
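(In case it helps anyone else: I pull the residuals out of the solver log with the foamLog script that comes with OpenFOAM and plot the extracted columns; the exact file names under logs/ depend on the solver and fields.)

Code:
# extract residuals and other per-iteration data from the run log into logs/
foamLog run-log2.txt

# plot one of the extracted residual histories (name under logs/ will vary)
gnuplot -e "set logscale y; plot 'logs/p_rgh_0' with lines; pause -1"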
Attached Files
File Type: txt run-log2.txt (158.3 KB, 2 views)

November 20, 2021, 07:39  #38
Yann
Hi Alan,

I think you figured it out yourself, but your error might be related to your mesh quality. I think your mesh is too coarse and leads to this error.

I tried refining your fluid region mesh in the radiator vicinity to level 4.

I also changed some BCs, especially the velocity definition in the fluid region, where I initialized a 0 m/s velocity field in the internalField and just set the 35 m/s velocity at the inlet:

Code:
internalField   uniform (0 0 0);

boundaryField
{
    [...]
    
    inlet
    {
        type            fixedValue;
        value           uniform (35 0 0);
    }
}
The case ran up to 1161 iterations before crashing with the same error. I also made other changes to the BCs along the way, but I'm not sure they are relevant to this issue. I would rather try to get a finer mesh.

I did not have time to try it but let me know how it goes.

Cheers,
Yann

November 22, 2021, 11:45  #39
multiRegion problem - try to solve by improving mesh
Alan w (boffin5)
Thank you Yann,

I looked at the residuals for my initial failure; an image of them is attached. This is the first time I have examined residuals, but my take is: Things were going okay, until something unhappy happened at around 40 seconds, and then it all went wrong starting at 100. So it seems that the math was running okay, but it ran into a mesh problem, so that will be my line of attack.


My plan is to reduce the blockMesh size for the solid region so as to just encompass the radiator body, and to put layers on the outer periphery of it. This is looking at it as a short pipe, with layers on the wall boundary.
Then for the fluid region, I will extract faces from the fairing, and use them to put layers on the interior walls of the duct, again treating it as a pipe with a rectangular cross section.


Okay, let's do this!
Attached Images
File Type: jpg residuals-rad-case.jpg (153.6 KB, 13 views)

November 23, 2021, 18:20  #40
Mesh should be okay, but now a new problem
Alan w (boffin5)
After concluding that my convergence failure problem was caused by a bad mesh, I went to work and now think that the mesh is okay (see image). It has sharp edges with 2 layers on both the radiator and the fairing.


But when I try to run it, a new problem comes up. For those of you familiar with the game 'Whac-A-Mole', where you bash a mole in one hole and another pops up in another hole, over and over again, you can understand my experience.



At least this one mercifully fails right away, giving the message: 'FOAM FATAL ERROR - plane normal defined with zero length.' Attached is a text file of the run error. A search brought up answers concerning mapFields; could it be that the overlapping solid and fluid fields have inconsistently overlapping meshes? I really have no idea, and once again (I feel like Odysseus), I am lost just before reaching my goal.


Is there anyone working with multiRegion cases who is familiar with this situation? If so, I would be grateful for helpful information.
Attached Images
File Type: png radiator-mesh (2).png (59.4 KB, 13 views)
Attached Files
File Type: txt plane-normal-error.txt (5.2 KB, 5 views)
