|
How to turn off parallel options for some operations?
|
December 18, 2013, 12:06 |
How to turn off parallel options for some operations?
|
#1 |
Member
|
Hello, dear FOAMers!
Please, could you advise me how to turn off the parallel options for some operations? In my case I solve a 3D problem, and in each cell an additional 1D problem is solved, so, generally speaking, it is a 4D problem. I have created an additional region (say, "meshW"), and during the cycle over all cells of the 3D mesh my program solves a 1D transport equation on meshW for each cell of the 3D mesh (a rough sketch of the structure is given at the end of this post). My program runs fine in serial but crashes in parallel runs, at the end of the first time step. I figured out that if I comment out the procedure that solves the 1D transport equations on meshW, there are no crashes. So I suppose that turning off the parallel options during the solution of the 1D transport equations could help. However, I can't understand how to do this. Please, help.
Best regards, Aleksey.
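Roughly, the structure looks like this (a simplified sketch only, not my actual code; cW, phiW and meshW stand for my 1D concentration field, the 1D flux and the 1D region): Code:
forAll(mesh.cells(), cellI)   // loop over the cells of the 3D mesh
{
    // 1D transport equation solved on the separate region meshW,
    // one instance per 3D cell
    fvScalarMatrix cWEqn
    (
        fvm::ddt(cW)
      + fvm::div(phiW, cW)
    );
    cWEqn.solve();   // this is the part that crashes in parallel
}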
|
December 19, 2013, 08:44 |
|
#2 |
Senior Member
Lieven
Join Date: Dec 2011
Location: Leuven, Belgium
Posts: 299
Rep Power: 22 |
Hi Aleksey,
The easiest is probably to simply run the 1D problem on the master CPU in the parallel run with: Code:
if (Pstream::master())
{
    // Do your 1D stuff here
}
Also, you need to make sure the master node knows the meshW region and that meshW is not decomposed over all nodes. If you need the result of the 1D calculations for the 3D problem, you will also need to include stuff like Pstream::scatter etc. (a rough sketch is at the end of this post). So not an easy task if you ask me, but I think it's manageable.
Good luck!
Lieven
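To illustrate the scatter part, an untested sketch (result1D is just a placeholder for whatever the 3D problem needs from the 1D calculation): Code:
// solve the 1D problem on the master only, then broadcast the result
scalarField result1D;

if (Pstream::master())
{
    // do your 1D stuff here on meshW and fill result1D
}

// make the master's values available on every processor
Pstream::scatter(result1D);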
|
December 19, 2013, 16:31 |
|
#3 |
Senior Member
Kyle Mooney
Join Date: Jul 2009
Location: San Francisco, CA USA
Posts: 323
Rep Power: 18 |
Would you be able to share the code for your 1D transport equation? We might be able to spot the part of the code that is not parallel-safe and is causing the crash.
Also, sharing details about the crash would help: segmentation fault? Error message? Floating point exception? Cheers!
|
December 24, 2013, 12:49 |
|
#4 |
Member
|
Hello, dear colleagues!
Thank you very much for your replies, and sorry for the delay. As far as I understand, solving a 1D transport equation for each cell is something like solving chemistry for each cell. If I perform this action only on the master, I get a significant performance loss. I've noticed that the explicit scheme works OK since it doesn't need any linear solvers to run. So now I use the explicit scheme, as can be seen from the following part of the code: Code:
while (runTime.loop())
{
    ...
    Info << "Solving hydrodynamics" << endl << endl;
    ... PISO stuff ...

    Info << "Solving transport\n";
    ...
    for (int nonOrth=0; nonOrth<=nNonOrthCorr; nonOrth++)
    {
        ...
        forAll(meshW.cells(), cellIW)
        {
            volScalarField& c_w_SpI = c_w_Spatial[cellIW]();

            fvScalarMatrix cwEqn
            (
                fvm::ddt(c_w_SpI)
              + fvm::div(phi, c_w_SpI)
              - fvm::laplacian(Diff, c_w_SpI)
            );
            cwEqn.solve();
        }
    }
    ...

    Info << "Solving evolution in w-space\n";

    label debLevel;
    debLevel = lduMatrix::debug;
    lduMatrix::debug = 0;

    forAll(mesh.cells(), cellISpatial)
    {
        surfaceScalarField phiW // "convective flux" in w-space
        (
            IOobject
            (
                "phiW",
                runTime.timeName(),
                meshW,
                IOobject::NO_READ,
                IOobject::NO_WRITE
            ),
            ...
        );

        // obtaining the c_w distribution in the specified cell
        forAll(meshW.cells(), cellIW)
        {
            c_w_w[cellIW] = c_w_Spatial[cellIW]()[cellISpatial];
        }

        // implicit scheme - fails to work in parallel
        /*
        fvScalarMatrix c_w_wEqn
        (
            fvm::ddt(c_w_w)
          + fvm::div(phiW, c_w_w)
        );
        c_w_wEqn.solve();
        */

        // explicit scheme - works fine both in serial and in parallel runs
        c_w_w += -runTime.deltaT()*fvc::div(phiW, c_w_w);

        // updating the c_w distribution (i.e. writing back)
        forAll(meshW.cells(), cellIW)
        {
            c_w_Spatial[cellIW]()[cellISpatial] = c_w_w[cellIW];
        }
    }
    lduMatrix::debug = debLevel;

    Info << "Solving chemistry\n";
    ...
    forAll(mesh.cells(), celli)
    {
        ...
    }
    ...
    runTime.write();
}
The crash comes from the line c_w_wEqn.solve(); i.e. commenting out this line eliminates the crashes. Terminal output of the crash: Code:
Solving evolution in w-space
[chaos2:7807] *** An error occurred in MPI_Recv
[chaos2:7807] *** on communicator MPI_COMM_WORLD
[chaos2:7807] *** MPI_ERR_TRUNCATE: message truncated
[chaos2:7807] *** MPI_ERRORS_ARE_FATAL (your MPI job will now abort)
--------------------------------------------------------------------------
mpirun has exited due to process rank 1 with PID 7807 on node chaos2 exiting without calling "finalize". This may have caused other processes in the application to be terminated by signals sent by mpirun (as reported here).
--------------------------------------------------------------------------
Best regards, Aleksey.
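P.S. Another possibility I am looking at for the implicit scheme is to switch the parallel communication off just around the per-cell 1D solve, so that the linear solver does not attempt any global reductions for meshW. This is only an untested sketch, and it assumes that Pstream::parRun() returns a writable flag (which I believe is the case in the 2.x versions): Code:
// untested sketch: pretend to run in serial around the 1D solve so that
// the linear solver performs no MPI communication for meshW
const bool wasParRun = Pstream::parRun();
Pstream::parRun() = false;

fvScalarMatrix c_w_wEqn
(
    fvm::ddt(c_w_w)
  + fvm::div(phiW, c_w_w)
);
c_w_wEqn.solve();

Pstream::parRun() = wasParRun;   // restore parallel mode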
|
Tags |
parallel, turn off |
|
|