October 26, 2020, 10:50
Error forAll cycle in parallel (and interFoam diffusion correction)
#1
New Member
Giovanni Farina
Join Date: Nov 2019
Location: Trieste, Italy
Posts: 11
Rep Power: 7
Summary of the thread:
Premise: I had an unwanted concentration diffusion in interFoam from one phase (water) to the other (air). There are different methods to solve this issue; the one that worked best for me is cycling over the cells at and above the interface and adding the concentration back to the water phase (in my case along the z-direction).

Main problem of the thread: the code does not work in parallel, so this thread is relevant in general for using forAll in parallel.

Solution:
Code:
if (Pstream::myProcNo() == 0 || Pstream::myProcNo() == 18)
{
    // blabla
}
Code:
forAll(conc, i)
{
    if (mesh.C()[i][0] > 1.805 && mesh.C()[i][2] > 0.16
     && alpha1[i] < 0.49 && mesh.C()[i][2] < 0.19)
    // the region in which I want to make the correction: air and interface
    {
        label neighbourCount = mesh.cellCells()[i].size(); // counting neighbouring cells
        labelList neighbours = mesh.cellCells()[i];        // list of neighbouring cells

        if ((conc[neighbours[0]]*mesh.V()[neighbours[0]] + conc[i]*mesh.V()[i])/mesh.V()[neighbours[0]] < 1)
        // can't get more than the initial concentration, which is one
        {
            conc[neighbours[0]] = (conc[neighbours[0]]*mesh.V()[neighbours[0]] + conc[i]*mesh.V()[i])/mesh.V()[neighbours[0]]; // add conc below
            conc[i] = 0; // remove conc above
        }
    }
}

--------------------------------------------------------------------------

Hi Foamers,

I have a diffusion problem in interFoam: concentration diffuses from water to air. I tried different solutions (I already have the diffusion coefficient dependent on alpha) and in the end I opted for a drastic one: reassigning concentration from air cells to the cells immediately below. I thought this would work in parallel (in serial it works perfectly fine), as subdomains are usually not divided vertically. This was true for some subdivisions but not for others, so I implemented a manual decomposition to avoid the issue. However, I still get an error when running in parallel: not when the correction is made, but at the subsequent time step, as if reassigning the volScalarField made the data unreadable. Or maybe it's something else. If I do the correction at each writeTime (an integer number of seconds) I get strange errors like "could not find nut" (turbulent viscosity) in the processor18/1 folder, which look unrelated but are not.
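The volume-weighted transfer above conserves the total mass conc*V: the receiving cell's new value satisfies c_n'*V_n = c_n*V_n + c_i*V_i. A minimal standalone C++ sketch of this conservative move, with plain vectors standing in for the OpenFOAM fields (moveConcentration is an illustrative name, not solver code):

```cpp
#include <cassert>
#include <vector>

// Move all concentration from donor cell i into receiving cell n,
// volume-weighted, unless the receiving cell would exceed the
// saturation value of 1 (the initial concentration in the thread).
void moveConcentration(std::vector<double>& conc,
                       const std::vector<double>& vol,
                       int i, int n)
{
    double merged = (conc[n]*vol[n] + conc[i]*vol[i]) / vol[n];
    if (merged < 1.0)
    {
        conc[n] = merged;  // receiving cell absorbs the donor's mass
        conc[i] = 0.0;     // donor cell is emptied
    }
}
```

For conc = {0.25, 0.5} and vol = {1.0, 2.0}, moving cell 0 into cell 1 gives conc = {0.0, 0.625}, and 0.625*2.0 equals the original total mass 1.25, so nothing is created or destroyed.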
If I make the correction on a portion of the domain that belongs to only one processor, I don't get any errors; the same holds if the portion spans two adjacent subdomains, even four. It looks like there is some specific processor/subdomain pairing in which the problem arises. Maybe there is some issue with the cells shared between subdomains and writing/reading fields there.

Edit: if I make the correction on either half of the desired region it works, but if I apply the correction to the whole region I get the error, even if I do one half first and then the other. If anyone wants to suggest some other kind of correction: I tried setting a fictional downward wind in the concentration equation, but it was just deleting concentration, while I want to conserve concentration mass.

The error is the usual printStack error:
Code:
[20] #0 Foam::error::printStack(Foam::Ostream&) at ??:?
[20] #1 Foam::sigFpe::sigHandler(int) at ??:?
[20] #2 ? in "/lib64/libc.so.6"
[20] #3 ? at ??:?
[20] #4 __libc_start_main in "/lib64/libc.so.6"
[20] #5 ? at ??:?
[28] #0 Foam::error::printStack(Foam::Ostream&) at ??:?
[28] #1 Foam::sigFpe::sigHandler(int) at ??:?
[28] #2 ? at ??:?
[28] #3 ? in "/lib64/libc.so.6"
[28] #4 __libc_start_main in "/lib64/libc.so.6"
[28] #5 ? at ??:?
Code:
forAll(conc, i)
{
    if (mesh.C()[i][0] > 1.80501 && mesh.C()[i][2] > 0.161
     && alpha1[i] < 0.49 && mesh.C()[i][2] < 0.185)
    {
        for (label j = 1; j <= 100000; j++)
        {
            if (mesh.C()[i-j][0] == mesh.C()[i][0]
             && mesh.C()[i-j][1] == mesh.C()[i][1]
             && mesh.C()[i-j][2] < mesh.C()[i][2])
            {
                index = j;
                break;
            }
        }

        if ((conc[i-index]*mesh.V()[i-index] + conc[i]*mesh.V()[i])/mesh.V()[i-index] < 1)
        {
            conc[i-index] = (conc[i-index]*mesh.V()[i-index] + conc[i]*mesh.V()[i])/mesh.V()[i-index];
            conc[i] = 0;
        }
    }
}

Info<< "Finished solving now for concentration\n" << endl;
}

runTime.write();

Info<< "ExecutionTime = " << runTime.elapsedCpuTime() << " s"
    << "  ClockTime = " << runTime.elapsedClockTime() << " s"
    << nl << endl;
Code:
#include "readDyMControls.H"

if (LTS)
{
    #include "setRDeltaT.H"
}
else
{
    #include "CourantNo.H"
    #include "alphaCourantNo.H"
    #include "setDeltaT.H"
}

runTime++;

Info<< "Time = " << runTime.timeName() << nl << endl;

The general issue is also the one addressed in:
InterFoam: Add an equation to only one phase
Adding diffusion term to interFoam transport equation
InterfaceCompression interFoam
High diffusion in interFoam

I really appreciate any kind of help and feedback.

Giovanni

Last edited by the_ichthyologist; November 4, 2020 at 07:03. Reason: Update with my solution to the proposed issue for general utility
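A note on the index-based column search in the second snippet above: it assumes the cell directly below shares the x and y centre coordinates and has a smaller cell index, which generally holds only for a structured, column-ordered mesh on a single processor. After decomposition the index i-j can leave the local cell range entirely, which is consistent with the sigFpe. A standalone sketch of the same search with explicit bounds checking (plain structs in place of mesh.C(); names are illustrative):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Cell { double x, y, z; };  // stand-in for a cell centre mesh.C()[i]

// Search backwards from cell i for the nearest cell in the same vertical
// column (same x and y within tol) with a lower z centre. Returns -1 when
// no such cell exists on this (local) mesh, instead of running past index
// 0 as an unbounded j-loop over i-j would.
int cellBelow(const std::vector<Cell>& C, int i, double tol = 1e-9)
{
    for (int k = i - 1; k >= 0; --k)  // bounded: never leaves the array
    {
        if (std::abs(C[k].x - C[i].x) < tol
         && std::abs(C[k].y - C[i].y) < tol
         && C[k].z < C[i].z)
        {
            return k;
        }
    }
    return -1;  // column continues on another processor, or i is the bottom
}
```

Returning -1 instead of reading out of bounds makes the "column continues on another processor" case detectable, which is exactly the case a serial run never hits.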
October 29, 2020, 06:53

#2
New Member
Giovanni Farina
Join Date: Nov 2019
Location: Trieste, Italy
Posts: 11
Rep Power: 7
Some updates... After closer analysis of the error, I found that the run exits at the beginning of the time loop, during the calculation of the Courant number, specifically at this point:
Code:
scalarField sumPhi
(
    fvc::surfaceSum(mag(phi))().primitiveField()
);

I also noticed that this is the most computationally costly operation in terms of execution time. This, together with the fact that the correction worked on either half but not on the whole region, suggests that my correction requires something more, which the solver struggles to do. Maybe running through cells in parallel deletes something that needs to be calculated again; I have no clue. It could also have something to do with the way Pstream handles code.

More updates:
- I tried correcting each half alternately, but now it does not run anymore even on one half (I tried removing the processor folders and reinitialising the case).
- I tried defining a subfield, scalarField& concCells = const_cast<scalarField&>(conc().field()), but nothing changes.
- I tried using the method mesh.cellCells() to find neighbouring cells and looping over them, but it looks even more problematic.
- I tried putting some tolerances on the coordinates, looping in the opposite direction, and renumbering the mesh: nothing happens.
- I tried running the correction without modifying conc (thus just accessing cells) and I get the same error: just accessing the cells triggers the error in reading phi.
- I tried getting some text output (I wanted to know the indices), but I found that the solver does not even enter the first if. And yet it gives an error that depends on the if condition (that is where I set my "correction region" boundaries).
- I printed indices and coordinates: it looks like the forAll works only on the processor0 domain, so it never satisfies the if condition. Or maybe this is just the output I can see.

I think I can solve the whole issue by proper use of Pstream functions; if anyone knows them well, please help me. I could also look into it myself if the other alternatives won't work.
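For context, CourantNo.H forms the cell Courant number from those summed face-flux magnitudes, roughly Co = 0.5 * max_i(sumPhi_i / V_i) * deltaT, so it is among the first places every rank reads phi after the correction; a corrupted phi on any rank would surface here. A standalone sketch of that per-cell maximum (plain vectors; this is an illustration, not the actual OpenFOAM source):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Cell Courant number in the shape CourantNo.H computes it:
// Co = 0.5 * max_i(sumPhi_i / V_i) * deltaT, where sumPhi_i is the
// sum of |phi| over the faces of cell i and V_i is the cell volume.
double courantNo(const std::vector<double>& sumPhi,
                 const std::vector<double>& V,
                 double deltaT)
{
    double maxRatio = 0.0;
    for (std::size_t i = 0; i < sumPhi.size(); ++i)
    {
        double r = sumPhi[i] / V[i];  // a NaN flux poisons this ratio
        if (r > maxRatio) maxRatio = r;
    }
    return 0.5 * maxRatio * deltaT;
}
```

A single NaN or garbage value in the flux field makes this division the first floating-point operation to trap, which would explain a sigFpe here rather than at the line that actually corrupted the field.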
How to do communication across processors

This all sounds very strange to me: it looks like the forAll cycle doesn't work, and yet it gives an error that depends on its content. If anybody has even the slightest idea why this happens, please tell me. Right now I'm getting back to the "wind" correction in the concentration equation.

Last edited by the_ichthyologist; November 1, 2020 at 05:11.
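On the Pstream point: the usual OpenFOAM pattern is to do only rank-local work inside forAll and then combine the results with a collective call such as reduce(localMass, sumOp<scalar>()), which every rank must reach. The sketch below mimics that gather-and-sum over per-rank arrays in plain C++ (no Pstream involved; names are illustrative):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Mimic a global reduction: each "rank" sums its own local mass
// conc*V, then the partial sums are combined. This is the analogue
// of reduce(localMass, sumOp<scalar>()) after a rank-local forAll.
double globalMass(const std::vector<std::vector<double>>& concPerRank,
                  const std::vector<std::vector<double>>& volPerRank)
{
    double total = 0.0;
    for (std::size_t r = 0; r < concPerRank.size(); ++r)
    {
        double local = 0.0;  // every rank computes its own partial sum
        for (std::size_t i = 0; i < concPerRank[r].size(); ++i)
        {
            local += concPerRank[r][i] * volPerRank[r][i];
        }
        total += local;      // the "reduce" step
    }
    return total;
}
```

The key property is that the combination step runs the same way on every rank; guarding it with a condition that only some ranks satisfy is the classic way to hang or crash a parallel OpenFOAM run.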
Tags |
error, forall loop, parallel, printstack error |
Similar Threads
Thread | Thread Starter | Forum | Replies | Last Post |
Error running openfoam in parallel | fede32 | OpenFOAM Programming & Development | 5 | October 4, 2018 17:38 |
Explicitly filtered LES | saeedi | Main CFD Forum | 16 | October 14, 2015 12:58 |
simpleFoam parallel | AndrewMortimer | OpenFOAM Running, Solving & CFD | 12 | August 7, 2015 19:45 |
simpleFoam in parallel issue | plucas | OpenFOAM Running, Solving & CFD | 3 | July 17, 2013 12:30 |
Correct way to program a nonlinear cycle to run it in parallel | davide_c | OpenFOAM Programming & Development | 4 | March 27, 2012 10:24 |