|
October 30, 2021, 18:40 |
Problem running snappyHexMesh in parallel
|
#21 |
Senior Member
Alan w
Join Date: Feb 2021
Posts: 288
Rep Power: 6 |
My radiator case is now running in serial, but setting it up for parallel is giving me trouble. I was trying to figure out how to write a run script, but found a more fundamental problem.
To create the two regions, I run blockMesh twice. The command sequence for the serial mode is like this:

Code:

blockMesh
surfaceFeatures

# next step creates the radiator mesh and puts it in the solid folder
snappyHexMesh -dict system/snappyHexMeshDict.1 -overwrite
cd constant
cp -r polyMesh ./solid
rm -r polyMesh
cd ..

blockMesh

# next step carves the fairing out of the domain and puts it in the fluid folder
snappyHexMesh -dict system/snappyHexMeshDict.2 -overwrite
cd constant
cp -r polyMesh ./fluid
rm -r polyMesh
cd ..

chtMultiRegionFoam

It works, and I'm a happy camper. Now, when I adapt it for the parallel case, the sequence goes:

Code:

blockMesh
surfaceFeatures
decomposePar -copyZero -force
mpirun -np 2 snappyHexMesh -dict system/snappyHexMeshDict.1 -parallel -overwrite
# stopping here, because something goes wrong

It's here that the showstopper occurs. When snappyHexMesh runs under mpirun, it doesn't create the radiator mesh; in fact, nothing happens to the blockMesh domain at all, even though it uses the same snappyHexMeshDict.1 file. In the subsequent steps, when I run snappyHexMesh under mpirun with snappyHexMeshDict.2, it fails again. Something about the parallel mode is making snappyHexMesh behave differently, or more likely, my mpirun command is somehow incorrect. Help! Where am I going wrong? Hoping for advice and info. |
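For reference, a minimal sketch of the serial sequence above wrapped into a run script (it only strings together the commands already listed; the log file names are illustrative):

Code:

#!/bin/sh
set -e   # stop at the first command that fails

blockMesh       > log.blockMesh.1 2>&1
surfaceFeatures > log.surfaceFeatures 2>&1

# create the radiator mesh and move it into the solid folder
snappyHexMesh -dict system/snappyHexMeshDict.1 -overwrite > log.snappyHexMesh.solid 2>&1
cd constant
cp -r polyMesh ./solid
rm -r polyMesh
cd ..

blockMesh       > log.blockMesh.2 2>&1

# carve the fairing out of the domain and move the mesh into the fluid folder
snappyHexMesh -dict system/snappyHexMeshDict.2 -overwrite > log.snappyHexMesh.fluid 2>&1
cd constant
cp -r polyMesh ./fluid
rm -r polyMesh
cd ..

chtMultiRegionFoam > log.chtMultiRegionFoam 2>&1

Having one log per step makes it much easier to see which command failed once the parallel variant is attempted.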
|
November 1, 2021, 12:38 |
|
#22 | |
Senior Member
Yann
Join Date: Apr 2012
Location: France
Posts: 1,238
Rep Power: 29 |
Hi Alan,
The beginning of your script should be fine, but there is an easier way to deal with this thanks to the -region option available in many utilities. Try something like this: Code:
surfaceFeatures

# create initial meshes for both regions
blockMesh -region solid
blockMesh -region fluid

# decompose both regions at once
decomposePar -copyZero -allRegions

# run snappy in parallel
mpirun -np 2 snappyHexMesh -region solid -parallel -overwrite 2>&1 | tee snappyHexMesh-solid.log
mpirun -np 2 snappyHexMesh -region fluid -parallel -overwrite 2>&1 | tee snappyHexMesh-fluid.log

# check both meshes in parallel
mpirun -np 2 checkMesh -region solid -constant -parallel 2>&1 | tee checkMesh-solid.log
mpirun -np 2 checkMesh -region fluid -constant -parallel 2>&1 | tee checkMesh-fluid.log

I did not try to run this script, so it might require some adjustments, but at least it gives you a global idea of the workflow.

There is no -region option for potentialFoam, so it will surely fail: it will look for a mesh in processor*/constant/polyMesh, while your meshes are actually in processor*/constant/solid/polyMesh and processor*/constant/fluid/polyMesh.

Another tip: write a log file of your script, or at least a log file of each OpenFOAM command you run in it. If something goes wrong, you have to follow your workflow step by step until you find which operation failed, and without log files that is not easy. (For instance, if something fails at the beginning of the script, all the following steps will fail too; you will get a never-ending list of errors in your terminal, but the one that actually matters is the first one.)

You will notice I use the syntax "2>&1 | tee myLogFile": it merges stderr into stdout and pipes everything through tee, which prints the output in the terminal and writes it to the log file at the same time.
Let us know if this is enough to solve your last issues! Yann |
||
November 1, 2021, 15:19 |
Thanks again Yann!
|
#23 |
Senior Member
Alan w
Join Date: Feb 2021
Posts: 288
Rep Power: 6 |
I started incorporating your suggestions, but I found that there is also no '-region' option for snappyHexMesh. It's unfortunate that there are so many differences between v2012 and OpenFOAM-8, but there you are.
So I'm trying to figure that one out. I'll let you know. |
|
November 1, 2021, 15:36 |
|
#24 |
Senior Member
Yann
Join Date: Apr 2012
Location: France
Posts: 1,238
Rep Power: 29 |
Hi Alan,
My bad, I thought I checked for the -region option for snappyHexMesh in OpenFOAM-8 but it turns out I just did it for blockMesh and decomposePar. Since you cannot specify any region with snappy, you have to get back to your initial idea: moving polyMesh directories in order to get the job done. Something like this: Code:
surfaceFeatures

# create initial meshes for both regions
blockMesh -region solid
blockMesh -region fluid

# decompose both regions at once
decomposePar -copyZero -allRegions

# mesh solid region
for i in $(seq 0 1); do mv processor$i/constant/solid/polyMesh processor$i/constant/polyMesh; done
mpirun -np 2 snappyHexMesh -parallel -overwrite 2>&1 | tee snappyHexMesh-solid.log
for i in $(seq 0 1); do mv processor$i/constant/polyMesh processor$i/constant/solid/polyMesh; done

# mesh fluid region
for i in $(seq 0 1); do mv processor$i/constant/fluid/polyMesh processor$i/constant/polyMesh; done
mpirun -np 2 snappyHexMesh -parallel -overwrite 2>&1 | tee snappyHexMesh-fluid.log
for i in $(seq 0 1); do mv processor$i/constant/polyMesh processor$i/constant/fluid/polyMesh; done

# check both meshes in parallel
mpirun -np 2 checkMesh -region solid -constant -parallel 2>&1 | tee checkMesh-solid.log
mpirun -np 2 checkMesh -region fluid -constant -parallel 2>&1 | tee checkMesh-fluid.log

Cheers,
Yann |
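If the number of subdomains ever changes, the per-processor loops above can be written so they do not hard-code the processor count. A small sketch, simply globbing the directories that decomposePar created (the same pattern applies to the move-back and fluid-region loops):

Code:

# move each solid-region mesh into place, whatever the number of processors
for proc in processor*; do
    mv "$proc/constant/solid/polyMesh" "$proc/constant/polyMesh"
done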
|
November 3, 2021, 13:47 |
multiRegion radiator case - 1 inch to the finish line (okay, 25.4 mm)
|
#25 |
Senior Member
Alan w
Join Date: Feb 2021
Posts: 288
Rep Power: 6 |
My case runs in serial mode, and with Yann's help I have gotten the meshing worked out for the parallel mode. But when I run it, it nearly finishes the preliminary steps and then halts with this message:
Code:

[0] --> FOAM FATAL IO ERROR:
[0] inconsistent patch and patchField types for patch type processor and patchField type zeroGradient
[0]
[0] file: /home/boffin5/cfdaero/radiator-case5-parallel/processor0/0/solid/htcConst/boundaryField/.* from line 26 to line 26.

Why the failure just for the parallel case? This is confounding, since I copied the BCs for 0/solid/htcConst directly from the heatExchanger tutorial:

Code:

dimensions      [1 0 -3 -1 0 0 0];

internalField   uniform 10;

boundaryField
{
    ".*"
    {
        type            zeroGradient;
    }
}

But somehow they fail here. So I said, aha, I'll just use the type from the T boundary conditions; that seems to make sense for a heat transfer coefficient:

Code:

    type            compressible::turbulentTemperatureCoupledBaffleMixed;
    value           $internalField;
    kappa           kappa;
    Tnbr            T;

But now the run failed with this message:

Code:

[0] --> FOAM FATAL ERROR:
[0] ' not type 'mappedPatchBase' for patch rad_radinlet of field htcConst in file "/home/boffin5/cfdaero/radiator-case5-parallel/processor0/0/solid/htcConst"
[0]
[0]     From function Foam::compressible::turbulentTemperatureCoupledBaffleMixedFvPatchScalarField::turbulentTemperatureCoupledBaffleMixedFvPatchScalarField(const Foam::fvPatch&, const Foam::DimensionedField<double, Foam::volMesh>&, const Foam::dictionary&)
[0]     in file derivedFvPatchFields/turbulentTemperatureCoupledBaffleMixed/turbulentTemperatureCoupledBaffleMixedFvPatchScalarField.C at line 96.
[0]
FOAM parallel run exiting

It seems to have a problem with patch rad_radinlet. Some background: my radiator.stl has patch names embedded in it, including radinlet. Accordingly, my snappyHexMeshDict file lists the patches like this:

Code:

refinementSurfaces
{
    rad
    {
        // Surface-wise min and max refinement level
        level (4 4);

        radinlet        {level (4 4); patchInfo {type patch;}}
        radoutlet       {level (1 1); patchInfo {type patch;}}
        radwalltop      {level (1 1); patchInfo {type wall;}}
        radwallbottom   {level (1 1); patchInfo {type wall;}}
        radwall-lh      {level (1 1); patchInfo {type wall;}}
        radwall-rh      {level (1 1); patchInfo {type wall;}}
    }
}

During meshing, OpenFOAM hangs the 'rad_' prefix onto all those patches, and they are listed that way in all my BCs, for example rad_radinlet. But I'm not even sure the problem is related to patch names. Actually, I'm not sure about anything, except that if I can resolve the htcConst problem, this case should run! As before, I'm hoping for help from the community, and I must say that it's a fantastic community that has helped me so much. |
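One quick way to see the exact patch names each region's mesh actually carries (and which every field file in 0/solid and 0/fluid therefore has to cover) is to look at the boundary files directly; for instance, with the region layout used above:

Code:

# patch names of the solid region, as written by the mesher
cat constant/solid/polyMesh/boundary

# after decomposePar, the processor copies also contain extra procBoundary* patches
cat processor0/constant/solid/polyMesh/boundary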
|
November 4, 2021, 06:04 |
|
#26 |
Senior Member
Yann
Join Date: Apr 2012
Location: France
Posts: 1,238
Rep Power: 29 |
Hi Alan,
Code:
boundaryField { ".*" { type zeroGradient; } } When decomposing your case in order to run in parallel, new boundaries are created in each subdomain. Those boundaries are basically the interfaces between each processor and they are names like this: procBoundary1to0, procBoundary1to2, etc... You can see this if you have a look at your polyMesh/boundary files in the processor directories after decomposing the case. Like any other boundary, a boundary condition must be defined for these new patches in each variable in your 0 directory. These patches uses a specific boundary condition named processor: Code:
procBoundary1to0 { type processor; } In order to solve your issue, you need to define a processor boundary condition for each variable in your 0 directory. Thanks to regular expressions this is easy to do, just add this: Code:
"proc.*" { type processor; } Yann |
|
November 4, 2021, 16:41 |
gave me a bit of a shock, but it worked!
|
#27 |
Senior Member
Alan w
Join Date: Feb 2021
Posts: 288
Rep Power: 6 |
Hi Yann,
As you advised, I incorporated the 'proc' change into all the BCs in the zero directory, and was totally disappointed when the run failed again with the same error message. Then I tried a few changes, and based on the error outputs, ended up with this boundary condition version:

Code:

boundaryField
{
    "proc.*"
    {
        type            processor;
    }

    /*
    ".*"
    {
        type            zeroGradient;
    }
    */

    rad_radinlet
    {
        type            zeroGradient;
    }

    rad_radoutlet
    {
        type            zeroGradient;
    }

    "(rad_radwalltop|rad_radwallbottom|rad_radwall-lh|rad_radwall-rh)"
    {
        type            zeroGradient;
    }
}

The commented-out ".*" entry is supposed to apply to all patches, but the only way I could make it work was by listing the patches individually. Go figure. This reminds me of an incident at my former company, where a big computer program kept failing, and only after relentless debugging was it found that, whereas 'square root of x' didn't work, 'x to the power of 1/2' did. I can't thank you enough for all your help! |
|
November 5, 2021, 04:19 |
|
#28 |
Senior Member
Yann
Join Date: Apr 2012
Location: France
Posts: 1,238
Rep Power: 29 |
Hi Alan,
In which order did you write your BCs? This will work:

Code:

boundaryField
{
    ".*"
    {
        type            zeroGradient;
    }

    "proc.*"
    {
        type            processor;
    }
}

But this will not work, because the ".*" entry overrides the processor definition:

Code:

boundaryField
{
    "proc.*"
    {
        type            processor;
    }

    ".*"
    {
        type            zeroGradient;
    }
}

This is valid for everything in OpenFOAM: if a parameter is defined twice, it is always the last statement that is applied.

Cheers,
Yann |
|
November 6, 2021, 05:42 |
|
#29 |
Senior Member
Derek Mitchell
Join Date: Mar 2014
Location: UK, Reading
Posts: 172
Rep Power: 13 |
Here is the strategy for learning OpenFOAM.
1) Start with the tutorial closest to your problem, take a copy, and run it to make sure it works. This is now your latest working version.

Repeat the following until you get the desired result:

2) Take a copy of the latest working version.
3) Make one modification.
4) Run it.
5) If it works, proceed to the next step; if it fails, go back to step 2.
6) This version is now the latest working version. Go to step 2.

Slow and tedious, but with something as complex as OpenFOAM, necessary (a minimal script version of this loop is sketched below).
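A small sketch of that loop as a shell session (the directory names are placeholders, and ./Allrun stands for whatever run script the case uses):

Code:

# keep the last known-good case untouched; always work in a fresh copy
cp -r radiatorCase-working radiatorCase-try-finerLayers
cd radiatorCase-try-finerLayers

# ...make ONE modification here (a mesh setting, a BC, a scheme)...

./Allrun 2>&1 | tee log.Allrun   # run it and keep a log

# if it works, this copy becomes the new radiatorCase-working;
# if it fails, read log.Allrun, discard the copy and go back to the working one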
__________________
A CHEERING BAND OF FRIENDLY ELVES CARRY THE CONQUERING ADVENTURER OFF INTO THE SUNSET |
|
November 6, 2021, 16:40 |
BC question, and now a paraFoam issue
|
#30 |
Senior Member
Alan w
Join Date: Feb 2021
Posts: 288
Rep Power: 6 |
Regarding the use of
".*" {type etc.} in boundary conditions, I would think it should be the first one in a list, since if it was the last, it would overwrite all the previous ones? At any rate, in the multiRegion radiator case, I put "proc.*" {type processor}, at the end of all the BCs in the case. And now it runs. But when I look at it in paraView, the solid region has color gradients, but the fluid region never changes from its initial color of blue; see attached. The streamline function works, so velocity data is being generated. I cannot fathom why paraView is not addressing the fluid region, and yet again, am hoping for help. |
|
November 7, 2021, 07:23 |
|
#31 | |
Senior Member
Join Date: Oct 2017
Posts: 133
Rep Power: 9 |
Regarding the order of the entries, see the OpenFOAM 1.6 release notes: https://openfoam.org/release/1-6/, under "Library developments", "Dictionary improvements/changes".
||
November 8, 2021, 11:56 |
|
#32 |
Senior Member
Alan w
Join Date: Feb 2021
Posts: 288
Rep Power: 6 |
Thank you Krapf; I appreciate your help! What you are saying is subtly complex. In Yann's post 3 entries prior, he has two examples. In each example, going from bottom to top:
First example - the procs get type processor; then everything, including the procs, gets type zeroGradient; those procs get over-ridden to type processor. Desired outcome.

Second example - everything, including the procs, gets type zeroGradient; then the procs with type processor get over-ridden to type zeroGradient. Bad outcome.

The conclusion is the same as Yann's, even though the logic is from the opposite direction. |
|
November 8, 2021, 12:46 |
|
#33 | |
Senior Member
Yann
Join Date: Apr 2012
Location: France
Posts: 1,238
Rep Power: 29 |
Hi all,
Alan, can you give us a bit more context about how you load your case in ParaView? Have you tried loading only the fluid/internalMesh to see if you can visualize something?

Yann |
||
November 9, 2021, 12:33 |
paraview problem solved - brute force method
|
#34 |
Senior Member
Alan w
Join Date: Feb 2021
Posts: 288
Rep Power: 6 |
Hi Yann,
ParaView works correctly now; what I did was to just rebuild the case around a tutorial, cleaning things up as I went. It amounts to overhauling an engine rather than tracing an oil leak; a learning opportunity lost. But it works, so I'm chuffed (English term). I do have a question about the use of potentialFoam in a multiRegion case, but I think I will post it under a new thread. At the risk of being repetitive, sincere thanks go to you for your indispensable help, and now I'm looking forward to applying OpenFOAM in a productive way! |
|
November 12, 2021, 18:34 |
Another problem has reared its head - chtMultiRegion case
|
#35 |
Senior Member
Alan w
Join Date: Feb 2021
Posts: 288
Rep Power: 6 |
My initial happiness when my radiator case successfully ran has proved to be fleeting. I had 0/fluid/U and 0/solid/U both set at 0.01, and with these values the case ran. But obviously 1 centimeter per second is not a real-world airflow velocity, so I changed it to 30 meters per second in 0/fluid/U, leaving it alone in the solid folder. The velocity there is only an initial setting and is recalculated through the run; please correct me if this is not so. Running the case produced the attached run log, complete with errors.
Bottom line, it says:

Code:

energy -> temperature conversion failed to converge
maximum number of iterations exceeded: 100

I have searched on this message but can't find any real leads on the problem. My fluid temperature was set at 300 and the solid at 400. Attached is a text file showing the fvOptions for both fluid and solid. As one of the many things I tried, I changed 'master' to 'true' in constant/fluid/fvOptions, with associated changes in the solid folder, thinking that the airflow should be the master. This required copying AoV and htcConst into the fluid folder. Although it ran, the results were the same with a fluid velocity of 30. My case was based on the heatExchanger tutorial, and I can't find any significant differences. So at this point I am again perplexed, as I thought I had this case whipped! Once again, I sadly and hopefully ask for advice and aid. The complete case zip file is also attached. |
|
November 13, 2021, 15:06 |
|
#36 |
Senior Member
Alan w
Join Date: Feb 2021
Posts: 288
Rep Power: 6 |
I'm still grappling with this case, and made 2 different debugging attempts that may yield information to someone far smarter than myself.
Because the failure message said, "maximum number of iterations exceeded: 100," I searched for and found in system/fluid/fvSolution, a setting: maxIter 100;. So I increased this to 1000, but the case failed again with the same message output. Then, to simplify things, I ran the case with no fairing around the radiator, just the radiator itself, which is a porous zone with the properties of water, hanging in space. This time the case ran okay, with a fluid velocity of 35. But when I put the fairing back on, it failed. Obviously, the fairing is necessary to do vehicle studies. I can't draw any conclusions from this debugging, but perhaps someone else can? |
|
November 16, 2021, 17:46 |
I can't believe it failed again!
|
#37 |
Senior Member
Alan w
Join Date: Feb 2021
Posts: 288
Rep Power: 6 |
At the risk of validating the definition of insanity, I again rebuilt my case around the heatExchanger tutorial, this time religiously changing nothing except what was absolutely necessary for my radiator case.
With an end time of 50 it worked, and I was so relieved that I finally had it in working order. So I then set it to the standard end time of 2000 and launched it. To my dismay, it failed at time step 143; this was a crushing development. The failure message was the same as before:

Code:

energy -> temperature conversion failed to converge
maximum number of iterations exceeded: 100

My next step was to learn how to look at residuals. I can do that now, but I'm not sure it will tell me what is causing these failures. I have attached my run log for this latest snafu; it has the middle time steps deleted to manage the file size. I really hope that someone can educate me as to why this is happening. |
|
November 20, 2021, 07:39 |
|
#38 |
Senior Member
Yann
Join Date: Apr 2012
Location: France
Posts: 1,238
Rep Power: 29 |
Hi Alan,
I think you figured it out yourself, but your error might be related to your mesh quality. I think your mesh is too coarse and leads to this error. I tried refining your fluid region mesh in the vicinity of the radiator to level 4. I also changed some BCs, especially the velocity definition in the fluid region, where I initialized a 0 m/s velocity field in the internalField and just set the 35 m/s velocity at the inlet:

Code:

internalField   uniform (0 0 0);

boundaryField
{
    [...]

    inlet
    {
        type            fixedValue;
        value           uniform (35 0 0);
    }
}

I did not have time to try it, but let me know how it goes.

Cheers,
Yann |
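For completeness, a minimal sketch of how a full 0/fluid/U file along those lines could be laid out (the outlet and wall entries are placeholders, not taken from the actual case; patch names must match the region's boundary file, and the "proc.*" entry is kept last as discussed earlier in the thread):

Code:

dimensions      [0 1 -1 0 0 0];

internalField   uniform (0 0 0);

boundaryField
{
    inlet
    {
        type            fixedValue;
        value           uniform (35 0 0);
    }

    outlet
    {
        type            inletOutlet;        // placeholder outlet condition
        inletValue      uniform (0 0 0);
        value           uniform (0 0 0);
    }

    ".*"
    {
        type            noSlip;             // catch-all for the wall patches
    }

    "proc.*"
    {
        type            processor;          // processor interfaces, listed last
    }
}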
|
November 22, 2021, 11:45 |
multiRegion problem - try to solve by improving mesh
|
#39 |
Senior Member
Alan w
Join Date: Feb 2021
Posts: 288
Rep Power: 6 |
Thank you Yann,
I looked at the residuals for my initial failure; an image of them is attached. This is the first time I have examined residuals, but my take is: things were going okay until something unhappy happened at around 40 seconds, and then it all went wrong starting at 100. So it seems that the math was running okay but ran into a mesh problem, so that will be my line of attack.

My plan is to reduce the blockMesh size for the solid region so that it just encompasses the radiator body, and to put layers on its outer periphery - looking at it as a short pipe, with layers on the wall boundary. Then for the fluid region, I will extract faces from the fairing and use them to put layers on the interior walls of the duct, again treating it as a pipe with a rectangular cross-section. Okay, let's do this! |
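For anyone setting up something similar, a minimal sketch of the snappyHexMeshDict entries involved in adding such layers (the patch names and numbers here are placeholders, not values from the actual case; all other addLayersControls entries are assumed to stay at the tutorial values):

Code:

castellatedMesh true;
snap            true;
addLayers       true;    // must be on for layer addition to run

addLayersControls
{
    relativeSizes       true;

    layers
    {
        // placeholder patch names; use the names found in constant/<region>/polyMesh/boundary
        "fairing_.*"
        {
            nSurfaceLayers  2;
        }
        "rad_radwall.*"
        {
            nSurfaceLayers  2;
        }
    }

    expansionRatio      1.2;
    finalLayerThickness 0.4;
    minThickness        0.1;
    nGrow               0;

    // remaining entries (featureAngle, smoothing and iteration controls, ...)
    // left as in the tutorial case
}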
|
November 23, 2021, 18:20 |
mesh should be okay, but now New Problem
|
#40 |
Senior Member
Alan w
Join Date: Feb 2021
Posts: 288
Rep Power: 6 |
After concluding that my convergence failure problem was caused by a bad mesh, I went to work and now think that the mesh is okay (see image). It has sharp edges with 2 layers on both the radiator and the fairing.
But when I try to run it, a new problem comes up. For those of you familiar with the game 'Whac-A-Mole', where you bash a mole in one hole and another pops up in a different hole, over and over again, you can understand my experience. At least this one mercifully fails right away, giving the message: 'FOAM FATAL ERROR - plane normal defined with zero length.' Attached is a text file of the run error. A search brought up answers concerning mapFields; could it be that the overlapping solid and fluid regions have inconsistently overlapping meshes? I really have no idea, and once again (I feel like Odysseus) I am lost just before reaching my goal. Is there anyone working with multiRegion cases who is familiar with this situation? If so, I would be grateful for helpful information. |
|