Decomposing meshes

September 8, 2014, 13:24 | Decomposing meshes | #1
Tobias Holzmann (Tobi), Super Moderator
Hi all,

I have a few questions about the decomposition process.

The first question is about the decomposition output and general matters:
  • First of all, I think a main goal is to reduce the number of faces shared between the processors as far as you can, but
  • on the other hand the shared faces should be balanced; you should not share 1000 faces with processor 1 and only 2 faces with processor 2 (if you are on processor 3).
Is that correct? If I am right, then the following decomposition should be very bad and reduce your speed considerably due to the poor face distribution (marked red):



Code:
Processor 0
    Number of cells = 79703
    Number of faces shared with processor 1 = 2347
    Number of faces shared with processor 3 = 1252
    Number of faces shared with processor 6 = 1983
    Number of faces shared with processor 7 = 4
    Number of faces shared with processor 9 = 6
    Number of processor patches = 5
    Number of processor faces = 5592
    Number of boundary faces = 13097

Processor 1
    Number of cells = 30927
    Number of faces shared with processor 0 = 2347
    Number of faces shared with processor 2 = 2691
    Number of faces shared with processor 3 = 5
    Number of faces shared with processor 4 = 605
    Number of faces shared with processor 5 = 1
    Number of faces shared with processor 6 = 6
    Number of faces shared with processor 7 = 1223
    Number of faces shared with processor 8 = 35
    Number of faces shared with processor 9 = 1
    Number of processor patches = 9
    Number of processor faces = 6914
    Number of boundary faces = 6206

Processor 2
    Number of cells = 98548
    Number of faces shared with processor 1 = 2691
    Number of faces shared with processor 4 = 6
    Number of faces shared with processor 5 = 1538
    Number of faces shared with processor 8 = 2181
    Number of faces shared with processor 11 = 1
    Number of processor patches = 5
    Number of processor faces = 6417
    Number of boundary faces = 17893

Processor 3
    Number of cells = 72934
    Number of faces shared with processor 0 = 1252
    Number of faces shared with processor 1 = 5
    Number of faces shared with processor 4 = 2949
    Number of faces shared with processor 6 = 1
    Number of faces shared with processor 9 = 1460
    Number of processor patches = 5
    Number of processor faces = 5667
    Number of boundary faces = 18218

Processor 4
    Number of cells = 45303
    Number of faces shared with processor 1 = 605
    Number of faces shared with processor 2 = 6
    Number of faces shared with processor 3 = 2949
    Number of faces shared with processor 5 = 3615
    Number of faces shared with processor 9 = 28
    Number of faces shared with processor 10 = 1574
    Number of faces shared with processor 11 = 24
    Number of processor patches = 7
    Number of processor faces = 8801
    Number of boundary faces = 8444

Processor 5
    Number of cells = 66574
    Number of faces shared with processor 1 = 1
    Number of faces shared with processor 2 = 1538
    Number of faces shared with processor 4 = 3615
    Number of faces shared with processor 10 = 5
    Number of faces shared with processor 11 = 1430
    Number of processor patches = 5
    Number of processor faces = 6589
    Number of boundary faces = 14245

Processor 6
    Number of cells = 38329
    Number of faces shared with processor 0 = 1983
    Number of faces shared with processor 1 = 6
    Number of faces shared with processor 3 = 1
    Number of faces shared with processor 7 = 1036
    Number of faces shared with processor 9 = 700
    Number of faces shared with processor 12 = 2172
    Number of faces shared with processor 13 = 2
    Number of faces shared with processor 15 = 8
    Number of processor patches = 8
    Number of processor faces = 5908
    Number of boundary faces = 6084

Processor 7
    Number of cells = 96970
    Number of faces shared with processor 0 = 4
    Number of faces shared with processor 1 = 1223
    Number of faces shared with processor 6 = 1036
    Number of faces shared with processor 8 = 1488
    Number of faces shared with processor 9 = 4
    Number of faces shared with processor 10 = 593
    Number of faces shared with processor 11 = 4
    Number of faces shared with processor 12 = 2
    Number of faces shared with processor 13 = 248
    Number of faces shared with processor 14 = 2
    Number of faces shared with processor 16 = 2
    Number of processor patches = 11
    Number of processor faces = 4606
    Number of boundary faces = 20630

Processor 8
    Number of cells = 45017
    Number of faces shared with processor 1 = 35
    Number of faces shared with processor 2 = 2181
    Number of faces shared with processor 7 = 1488
    Number of faces shared with processor 11 = 593
    Number of faces shared with processor 14 = 1197
    Number of faces shared with processor 17 = 2
    Number of processor patches = 6
    Number of processor faces = 5496
    Number of boundary faces = 8169

Processor 9
    Number of cells = 33585
    Number of faces shared with processor 0 = 6
    Number of faces shared with processor 1 = 1
    Number of faces shared with processor 3 = 1460
    Number of faces shared with processor 4 = 28
    Number of faces shared with processor 6 = 700
    Number of faces shared with processor 7 = 4
    Number of faces shared with processor 10 = 910
    Number of faces shared with processor 15 = 1436
    Number of faces shared with processor 16 = 5
    Number of processor patches = 9
    Number of processor faces = 4550
    Number of boundary faces = 7381

Processor 10
    Number of cells = 144720
    Number of faces shared with processor 4 = 1574
    Number of faces shared with processor 5 = 5
    Number of faces shared with processor 7 = 593
    Number of faces shared with processor 9 = 910
    Number of faces shared with processor 11 = 1497
    Number of faces shared with processor 15 = 2
    Number of faces shared with processor 16 = 575
    Number of faces shared with processor 17 = 4
    Number of processor patches = 8
    Number of processor faces = 5160
    Number of boundary faces = 29770

Processor 11
    Number of cells = 35368
    Number of faces shared with processor 2 = 1
    Number of faces shared with processor 4 = 24
    Number of faces shared with processor 5 = 1430
    Number of faces shared with processor 7 = 4
    Number of faces shared with processor 8 = 593
    Number of faces shared with processor 10 = 1497
    Number of faces shared with processor 16 = 6
    Number of faces shared with processor 17 = 1100
    Number of processor patches = 8
    Number of processor faces = 4655
    Number of boundary faces = 8624

Processor 12
    Number of cells = 97632
    Number of faces shared with processor 6 = 2172
    Number of faces shared with processor 7 = 2
    Number of faces shared with processor 13 = 3143
    Number of faces shared with processor 15 = 1410
    Number of processor patches = 4
    Number of processor faces = 6727
    Number of boundary faces = 20346

Processor 13
    Number of cells = 31151
    Number of faces shared with processor 6 = 2
    Number of faces shared with processor 7 = 248
    Number of faces shared with processor 12 = 3143
    Number of faces shared with processor 14 = 1952
    Number of faces shared with processor 15 = 7
    Number of faces shared with processor 16 = 682
    Number of faces shared with processor 17 = 3
    Number of processor patches = 7
    Number of processor faces = 6037
    Number of boundary faces = 6840

Processor 14
    Number of cells = 72706
    Number of faces shared with processor 7 = 2
    Number of faces shared with processor 8 = 1197
    Number of faces shared with processor 13 = 1952
    Number of faces shared with processor 17 = 1281
    Number of processor patches = 4
    Number of processor faces = 4432
    Number of boundary faces = 12889

Processor 15
    Number of cells = 71806
    Number of faces shared with processor 6 = 8
    Number of faces shared with processor 9 = 1436
    Number of faces shared with processor 10 = 2
    Number of faces shared with processor 12 = 1410
    Number of faces shared with processor 13 = 7
    Number of faces shared with processor 16 = 3801
    Number of processor patches = 6
    Number of processor faces = 6664
    Number of boundary faces = 17329

Processor 16
    Number of cells = 44918
    Number of faces shared with processor 7 = 2
    Number of faces shared with processor 9 = 5
    Number of faces shared with processor 10 = 575
    Number of faces shared with processor 11 = 6
    Number of faces shared with processor 13 = 682
    Number of faces shared with processor 15 = 3801
    Number of faces shared with processor 17 = 3951
    Number of processor patches = 7
    Number of processor faces = 9022
    Number of boundary faces = 8759

Processor 17
    Number of cells = 75775
    Number of faces shared with processor 8 = 2
    Number of faces shared with processor 10 = 4
    Number of faces shared with processor 11 = 1100
    Number of faces shared with processor 13 = 3
    Number of faces shared with processor 14 = 1281
    Number of faces shared with processor 16 = 3951
    Number of processor patches = 6
    Number of processor faces = 6341
    Number of boundary faces = 15767

Number of processor faces = 54789
Max number of cells = 144720 (120.392% above average 65664.8)
Max number of processor patches = 11 (65% above average 6.66667)
Max number of faces between processors = 9022 (48.2013% above average 6087.67)

Time = 0

The second question is about simple and hierarchical:

  • it is known that hierarchical is just an extended simple algorithm
  • you can specify which axis is decomposed first (e.g. zxy)
But what advantage does that give? If you decompose with n = (3 1 2) and order zxy, you split first in z (2), then in x (3) and then in y (1 = no split). In the end you have 6 domains.

If you split with n = (3 1 2) and order xyz, you split first in x (3), then in y (1) and then in z (2). In the end you also get 6 domains, which should (in my imagination) be the same as above, or not?
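For reference, a minimal decomposeParDict sketch of the two variants being compared here (a sketch only; the entry names follow the standard simpleCoeffs/hierarchicalCoeffs sub-dictionaries, and the order entry is what hierarchical adds on top of simple):

Code:
// system/decomposeParDict (sketch)
numberOfSubdomains 6;

method          hierarchical;   // or: simple

simpleCoeffs
{
    n           (3 1 2);        // number of splits in x, y, z
    delta       0.001;
}

hierarchicalCoeffs
{
    n           (3 1 2);        // same splits as above
    delta       0.001;
    order       zxy;            // split z first, then x, then y
}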



The third question is about scotch and metis:

  • Metis and scotch are decomposition algorithms that attempt to minimise the processor boundary faces between the cores by using special (graph partitioning) algorithms, aren't they?
  • Therefore you should be able to get a better (more balanced) decomposition by using these algorithms instead of simple or hierarchical.
Since the metis library is currently not available (unless you compile it yourself), you can use the scotch method instead. In the scotchDecomp.C file there are some hints about how the algorithm works, but unfortunately I do not understand them. Is there a paper or other literature about this method?
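And, for completeness, a minimal sketch of how scotch is selected in decomposeParDict (metis would be selected the same way if the library is available); the commented scotchCoeffs entries are optional and their names are taken from the decomposer sources, so please verify them against your OpenFOAM version:

Code:
// system/decomposeParDict (sketch)
numberOfSubdomains 18;

method          scotch;

//scotchCoeffs
//{
//    processorWeights (1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1); // relative weight per subdomain
//    writeGraph       true;                                   // write the decomposition graph
//}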


Finally, a few more questions:

  • The more cores you use, the more neighbour faces you get. So with a result like the one above (the decomposePar output), could it be better to reduce the number of cores to get a better-decomposed mesh, and thereby even speed up the simulation thanks to less inter-processor communication?
  • How can you check whether your mesh is decomposed well or not?
    • only with the decomposePar output?
    • What can I derive from the green-marked lines in the code block above?

Thanks in advance and for reading the topic.
Any hints and experiences are welcome.
__________________
Keep foaming,
Tobias Holzmann

September 9, 2014, 06:30 | #2
Tobias Holzmann (Tobi), Super Moderator
Hi all,

today I made some tests with my colleague using the decomposition method hierarchical. Question two no longer needs an answer; I had misunderstood something. For everyone who is interested, here is the explanation:

  • hierarchical and simple split your mesh into regions with the same number of cells
  • I thought that the splitting is done at the centre, i.e. according to the length of the geometry. For example, for a pipe with a length of 2 m I expected that a (2 1 1) (xyz) splitting would cut the mesh in the middle, at 1 m. That is correct if you have the same number of cells on each side. But if you have, for example, a refinement region on one side, the mesh will not be split in the middle, because the cell counts differ; it is split such that both processors get the same number of cells (see the small worked example below).
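A small worked example of that last point (the numbers are invented just for illustration): take a 2 m pipe with 1000 cells on the coarse half [0 m, 1 m] and 3000 cells on the refined half [1 m, 2 m], decomposed with n = (2 1 1):

Code:
total cells         = 1000 + 3000 = 4000
cells per processor = 4000 / 2    = 2000

-> processor 0 gets the whole coarse half (1000 cells) plus 1000 cells of
   the refined half, so the cut ends up inside the refined region and not
   at the geometric middle (1 m)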
__________________
Keep foaming,
Tobias Holzmann

July 13, 2017, 03:37 | #3
Sebastian Trunk (sisetrun), Member
Hey Tobi,

could you please answer your questions from above, now that you might know the answers?

Thanks and best wishes

Sebastian

July 13, 2017, 05:59 | #4
Tobias Holzmann (Tobi), Super Moderator
Hello Sebastian,

I don't have all the answers, but I can give you some more information.

Answer to question 1
  • The number of shared faces should be as low as possible. How many you can afford depends on the problem you solve. E.g., I know a few people who have 100,000 cells and simulate a single-phase flow on 20 cores. The result is that you have only 5,000 cells per processor. The calculation of the single-phase flow is very fast, so the limitation comes from sharing information between the cores.
  • That is why we should try to reduce the shared faces as much as possible and decompose in a way that makes sense. E.g., if one has written their own model which calculates a lot of additional quantities, it can be worth having only 5,000 cells per core. In general, the computational load should always be higher than the cost of sharing data.
  • In addition, decomposing will change your results. E.g., for free convection, where we do not have a preferred flow direction, the new processor boundaries can and will induce numerical errors. One interesting discussion is here (German OpenFOAM forum: http://ww3.cad.de/foren/ubb/Forum527/HTML/000753.shtml)

    There the dam break tutorial was analysed. Different decomposition levels were compared at one defined point (velocity). See also http://ww3.cad.de/foren/ubb/upl/F/Fr...eak_difcpu.pdf

    The explanation of why this behaviour occurs is given in the German forum (sorry for all English-speaking people).

Information about question 3
  • Right now I am always using scotch for decomposing complex geometries, simply because I get better results with it (face and cell balance).
  • However, I always check the different sub-meshes (the new ParaView version can show them together with the whole geometry at opacity < 1); see also the command sketch below.
  • There is some literature you can find, e.g. https://gforge.inria.fr/docman/view....ch_user5.1.pdf
  • However, I never checked metis and scotch in detail. In the sources you will also find more information and one additional optional entry (writeGraph).
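One convenient way to do the visual check mentioned above, assuming your OpenFOAM version provides the -cellDist option of decomposePar (please verify with decomposePar -help):

Code:
# write the processor assignment as a field called cellDist
decomposePar -cellDist

# then open the case in ParaView and colour the mesh by cellDist
# to see how the sub-meshes are distributed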
I hope there is some new information in here that helps you.
__________________
Keep foaming,
Tobias Holzmann

July 13, 2017, 06:03 | #5
Sebastian Trunk (sisetrun), Member
"Again what learned" as Lothar Matthäus would say !
Thank you very much for your quick answer...

July 30, 2018, 11:45 | #6
Gerhard Holzinger (GerhardHolzinger), Senior Member
Let me share a recent observation of mine.

I simulated axi-symmetric gas flow in a long pipe. As the domain is quite large, the simulation was run using several parallel processes. The scotch decomposition created a processor boundary which zig-zags through the pipe.

The part of the processor boundary that is parallel to the flow seems to create some disturbance in the pressure field. Luckily, the disturbance does not blow up the simulation, but it is quite interesting.

The attached images show the pressure field. The involved subdomains are shown as white wireframes.
Attached Images
File Type: jpg pressureField.jpg (26.1 KB, 189 views)
File Type: jpg pressureField_wSubdomain01.jpg (37.1 KB, 163 views)
File Type: jpg pressureField_wSubdomain02.jpg (40.9 KB, 159 views)

July 30, 2018, 16:56 | #7
Tobias Holzmann (Tobi), Super Moderator
There is also a thread about the dam break case in the German OpenFOAM forum. Someone decomposed the domain and got different results for different decomposition methods. That becomes clear if one thinks about the fluxes which have to be shared at the processor boundaries.


Nice to get this confirmed again. Thank you, Gerhard.
__________________
Keep foaming,
Tobias Holzmann

August 1, 2018, 05:46 | #8
Gerhard Holzinger (GerhardHolzinger), Senior Member
The images I posted are from a case which I am not able to share. Unfortunately, I have not been able to reproduce the issue using a simpler, distributable geometry.

A minimal working example (MWE) of the behaviour I observed in my case would be quite interesting, since it is quite odd behaviour.

August 1, 2018, 13:56 | #9
Tobias Holzmann (Tobi), Super Moderator
The influence of the decomposition should be stronger for free convection (where the flow can go anywhere). For forced convection, where the fluid has a fixed main direction, the decomposition should not influence the fields too much.
__________________
Keep foaming,
Tobias Holzmann

August 3, 2018, 16:18 | #10
Michael Alletto (mAlletto), Senior Member
Quote:
Originally Posted by GerhardHolzinger
Let me share a recent observation of mine.

I simulated axi-symmetric gas flow in a long pipe. As the domain is quite large, the simulation was run using several parallel processes. The scotch decomposition created a processor boundary which zig-zags through the pipe.

The part of the processor boundary that is parallel to the flow seems to create some disturbance in the pressure field. Luckily, the disturbance does not blow up the simulation, but it is quite interesting.

The attached images show the pressure field. The involved subdomains are shown as white wireframes.
It seems something like the checkerboard effect that appears when pressure and velocity are decoupled.

November 18, 2019, 00:20 | Decomposing mesh for a multi-region domain | #11
Muhammad Omer Mughal, New Member
Dear Tobi and all



I am performing a heat transfer simulation in which I have four regions. When I use the scotch method of decomposition for the regions, the mesh is decomposed correctly; however, the run does not move forward while performing the face agglomeration. When I use the simple method with the following coefficients for the two larger regions, while using the scotch method for the other two regions, I get a singular matrix error.


Code:
numberOfSubdomains 144;

method simple;

simpleCoeffs
{
    n       (16 9 1);   // 16*9*1 = 144, must match numberOfSubdomains
    delta   0.001;
}


When I try using the simple method for all regions, with the above coefficients for the larger regions and the following coefficients for the two smaller regions,




Code:
numberOfSubdomains 144;

method simple;

simpleCoeffs
{
    n       (12 12 1);  // 12*12*1 = 144, must match numberOfSubdomains
    delta   0.001;
}




I get the following warning, and I also find 0 cells in some of the processors during the decomposition:


Code:
FOAM Warning :
    From function Foam::polyMesh::polyMesh(const Foam::IOobject&)
    in file meshes/polyMesh/polyMesh.C at line 330
    no points in mesh

When I try running the solver, it terminates with the following message:


Code:
[57] --> FOAM FATAL ERROR:
[57] (5 20) not found in table. Valid entries:
847
(
(98 103)
(88 104)
.......................
...........................

[57] From function T& Foam::HashTable<T, Key, Hash>::operator[](const Key&) [with T = double; Key = Foam::edge; Hash = Foam::Hash<Foam::edge>]
[57] in file OpenFOAM-6/src/OpenFOAM/lnInclude/HashTableI.H at line 117.






Can someone kindly help me fix this issue?


January 23, 2020, 04:19 | #12
Alex (flotus1), Super Moderator
I am a bit confused about the results shared here.
In my opinion, a domain decomposition/parallelization strategy should be designed in a way that it has no influence on the results whatsoever. That would be my first and most important item on a list of prerequisites for any parallelization.
Is this a bug in OpenFOAM, or do other CFD packages just do a better job at hiding the influence of domain decomposition?

January 23, 2020, 12:27 | #13
Tobias Holzmann (Tobi), Super Moderator
Dear Flotus,


Well, as I have no idea how ANSYS and other programs solve this problem, I can only make a rough statement about how I think FOAM does it. It may well be wrong, as I do not know these things in detail and never investigated them.

Leaving the parallelization itself aside, what I think we do is divide the physical domain into closed single domains that, put together, form the whole geometry again. Now, the problem is as follows:


  • Using one core and the whole mesh, you have the boundary faces and the internal faces. For each cell you know the neighbour cell and thus you can calculate the fluxes, or any other quantity on a face, based on the values of the two cells that share that face.
  • Now, if we decompose the geometry into several pieces, each piece does not know anything about the neighbouring region. During the decomposition we split the mesh and introduce so-called 'processor' boundaries. And here we have the problem...


Let's imagine two cells that share a face. For the single-core case it is easy to calculate the face value based on the two cells. Now assume that the mesh is split between exactly these cells. As each processor domain is separated, the cells mentioned before no longer know anything about each other; they are only connected via the processor faces. Therefore, we calculate the value at the processor face using only one cell (as we do not know the neighbour cell; it is in another processor domain). This information is sent to the other mesh that is solved by the other processor, and this is repeated until we reach the convergence criterion.



So actually it is as follows:


  • for two cells that share a face while both cells are within one domain, we can use both cell-centre values to calculate the face value
  • for two cells that are separated by processor patches (introduced during the decomposition), the two cells are no longer located within one processor and thus no longer know each other. The only information both cells have is the shared processor face, and there the calculation is different (a tiny generic sketch follows below)
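As a tiny generic illustration of the difference (plain C++, not OpenFOAM source; phiP, phiN and w are just placeholders for the owner cell value, the neighbour cell value and the linear interpolation weight):

Code:
#include <cstdio>

int main()
{
    // generic linear face interpolation between the owner (P) and
    // neighbour (N) cell centres that share a face
    const double phiP = 300.0, phiN = 310.0;   // illustrative cell values
    const double w    = 0.5;                   // interpolation weight

    // single core: both cell values are local, so the face value is simply
    const double phiF = w*phiP + (1.0 - w)*phiN;

    // across a processor boundary, phiN lives on another processor and has
    // to be communicated (or approximated) before this formula can be used
    std::printf("face value = %g\n", phiF);
    return 0;
}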


Thus, for directed flows this is not a big deal, but for free convection it will influence your solution. Of course, if your decomposition strategy is not well chosen and you get processor boundary faces at, let's say, really bad locations, this will also influence forced-flow cases.

Nevertheless, I can say that I probably do see decomposition influences for free-convection cases using scotch, as it decomposes your mesh in an arbitrary way. The error one introduces here depends on the number of decomposed regions and on the way one decomposes (e.g., arbitrarily (scotch) or aligned to the axes (simple, hierarchical)).

I hope this made it a bit clearer. Whether other software uses other strategies, I have no idea.

PS: I might look into the processor boundary condition to verify how it actually works.
__________________
Keep foaming,
Tobias Holzmann


January 23, 2020, 13:41 | #14
Alex (flotus1), Super Moderator
What you describe sounds like ordinary domain decomposition.
If a cell on a domain boundary needs information from an adjacent cell that resides in a different domain, then the parallelization needs to provide this information, e.g. via MPI transfers. That is what is usually done when parallelizing a code using domain decomposition.
A very intuitive way to achieve this is to add the adjacent cells from the neighbouring domain to the original domain, sometimes referred to as "ghost cells". They do not update their own values; they just provide values for updating the regular cells of each domain.
I thought this was the standard way of handling domain decomposition, which avoids falling back to lower-order methods.
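To make the ghost-cell idea concrete, here is a minimal plain-MPI sketch (my own illustration, not OpenFOAM code; in OpenFOAM the exchange is hidden behind the processor patches and the Pstream layer): each rank keeps one ghost value per neighbouring subdomain and refreshes it before face values are computed.

Code:
#include <mpi.h>
#include <vector>

int main(int argc, char** argv)
{
    MPI_Init(&argc, &argv);
    int rank = 0, size = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // each rank owns nOwned cell values plus one ghost slot on each side
    const int nOwned = 100;
    std::vector<double> phi(nOwned + 2, static_cast<double>(rank));
    // phi[0] and phi[nOwned + 1] are the ghost cells

    const int left  = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
    const int right = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;

    // send my first owned value to the left neighbour and receive the
    // right neighbour's first value into my right ghost cell
    MPI_Sendrecv(&phi[1],          1, MPI_DOUBLE, left,  0,
                 &phi[nOwned + 1], 1, MPI_DOUBLE, right, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    // send my last owned value to the right neighbour and receive the
    // left neighbour's last value into my left ghost cell
    MPI_Sendrecv(&phi[nOwned],     1, MPI_DOUBLE, right, 1,
                 &phi[0],          1, MPI_DOUBLE, left,  1,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    // with the ghost values in place, face values on the processor boundary
    // can again be interpolated from both adjacent cell centres
    MPI_Finalize();
    return 0;
}

(Compile with mpic++ and run with mpirun; edge ranks use MPI_PROC_NULL, which turns the corresponding transfer into a no-op.)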

January 23, 2020, 13:48 | #15
Tobias Holzmann (Tobi), Super Moderator
Hi Alex,


Well, I have to say I don't know whether FOAM does it like that. One would have to look into the processor boundary condition in order to make a clear statement. I can't do that right now, so I added a hint to my previous post.
__________________
Keep foaming,
Tobias Holzmann

January 23, 2020, 16:07 | #16
Michael Alletto (mAlletto), Senior Member
These slides provide some explanation of how parallelization is done in OpenFOAM: https://www.google.com/url?sa=t&sour...a1AouhvqjcH3Ih

January 23, 2020, 16:25 | #17
Tobias Holzmann (Tobi), Super Moderator
Thanks for the link.


Summary: I was wrong. We do share the neighbour cell values, if I understood the presentation correctly.
__________________
Keep foaming,
Tobias Holzmann

January 24, 2020, 06:42 | #18
Michael Alletto (mAlletto), Senior Member
I understood the presentation such that the processor patch is treated as a boundary condition.

If we look at the source code (https://www.openfoam.com/documentati...8C_source.html) we find an evaluate() function. This function is called to set the boundary conditions for the fields which are solved by the fvMatrix class. It is actually called by the function correctBoundaryConditions(). For a deeper explanation see this thread: updateCoeffs() and evaluate().

The correctBoundaryConditions() function is called directly by the fvMatrix solver when the matrix is solved (see e.g. https://www.openfoam.com/documentati...ve_8C_source.c).

So I guess that, depending on the operator (div or laplacian), the processor patch is responsible for evaluating the fluxes on the patch.

January 19, 2021, 22:51 | #19
victor (turbu), New Member
Dear Foamers,

I am currently working on the wide-gap Taylor-Couette flow (eta = 0.5) at a Reynolds number of 475; the number of vortices varies with the number of processors and the time step.

In the work of Razzak, the number of vortices was found to be 6 at a Reynolds number of 475 (https://doi.org/10.1063/1.5125640).

However, in my study the number of vortices is 6 when using 280 processors, 8 when using 240 processors, and 10 when using 360 processors. The OpenFOAM version is OpenFOAM 5.

The decomposition method used here is scotch; similar results are observed when simple and hierarchical are used for the decomposition.

So my question is whether the decomposition methods in OpenFOAM are suitable for such a low Reynolds number. Have you ever encountered an issue where the flow structure varies with the number of processors and the time step?

Maybe in turbulent flow the numerical dissipation induced by the parallel decomposition is less significant.

Thanks in advance.

Attached: different flow structures obtained for different numbers of processors and time steps.

January 22, 2021, 06:13 | #20
Domenico Lahaye (dlahaye), Senior Member
It is well established that the accuracy of the domain decomposition preconditioner decreases as the number of subdomains increases (see e.g. [1], [2]).

I am unaware of how this affects the number of vortices. I can imagine, however, that there is some link.

Do you monitor the residuals in your simulations? Are you able to enforce the same accuracy at each step of the segregated solver, independent of the number of processors?

[1]: https://books.google.nl/books?id=dxw...sition&f=false
[2]: ddm.org
