
How to turn off parallel options for some operations?



 
December 18, 2013, 12:06
How to turn off parallel options for some operations?
  #1
Member
 
Aleksey Rukhlenko
Join Date: Nov 2009
Location: Moscow
Posts: 55
Hello, dear FOAMers!

Could you please advise me how to turn off parallel execution for certain operations?

In my case I solve a 3D problem, and in each cell an additional 1D problem is solved. Generally speaking, it's a 4D problem.

I've created an additional region (say, "meshW"), and while looping over all cells of the 3D mesh my program solves a 1D transport equation on meshW for each cell.

My program runs fine in serial but crashes in parallel runs. It crashes at the end of the first time step.

I figured out that if I comment out the procedure that solves the 1D transport equations on meshW, there are no crashes. So I suppose that switching off the parallel machinery during the solution of the 1D transport equations could help. However, I can't figure out how to do this.

Please, help.

Best regards,
Aleksey.

December 19, 2013, 08:44
  #2
Senior Member
 
Lieven
Join Date: Dec 2011
Location: Leuven, Belgium
Posts: 299
Hi Aleksey,

The easiest is probably to simply run the 1D problem on the master CPU in the parallel run with:
Code:
if(Pstream::master())
{
        // Do your 1D stuff here
}
But make no mistake, this won't be efficient, since all other CPUs will be waiting for the master to finish before continuing with the computation.

Also, you need to make sure the master node knows about the meshW region and that meshW is not decomposed over the nodes. If you need the result of the 1D calculations in the 3D problem, you will also need to include things like Pstream::scatter etc. to get the result back to the other processors, for example:
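Something along these lines (just a rough sketch; I'm assuming here that the 1D result can be packed into a plain scalarField, so adapt it to whatever you actually compute):
Code:
// rough sketch: solve the 1D problem on the master only, then broadcast
scalarField result1D;                       // result of the 1D calculation

if (Pstream::master())
{
    // ... solve the 1D transport problem on meshW and fill result1D ...
}

// send the master's values to all other processors so the 3D solve can use them
Pstream::scatter(result1D);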

So not an easy task if you ask me, but I think it's manageable.

Good luck!

Lieven

December 19, 2013, 16:31
  #3
Senior Member
 
Kyle Mooney
Join Date: Jul 2009
Location: San Francisco, CA USA
Posts: 323
Would you be able to share the code for your 1D transport equation? We might be able to spot the non-parallel-safe part of the code that is causing the crash.

Also, sharing details about the crash would be helpful: a segfault? An error message? A floating point exception?

Cheers!

December 24, 2013, 12:49
  #4
Member
 
Aleksey Rukhlenko
Join Date: Nov 2009
Location: Moscow
Posts: 55
Hello, dear colleagues!

Thank you very much for your replies, and sorry for the delay.

As far as I understand, solving a 1D transport equation for each cell is something like solving chemistry for each cell. If I perform this step only on the master, I get a significant performance loss.

I've noticed that an explicit scheme works OK, since it doesn't need any linear solvers. So for now I use the explicit scheme, as can be seen from the following part of the code:

Code:
while (runTime.loop())
{
    ...
    Info << "Solving hydrodynamics" << endl << endl;

    ... PISO stuff ...

    Info << "Solving transport\n";
    ...
    for (int nonOrth=0; nonOrth<=nNonOrthCorr; nonOrth++)
    {
        ...
        forAll(meshW.cells(), cellIW)
        {
            volScalarField& c_w_SpI = c_w_Spatial[cellIW]();
            fvScalarMatrix cwEqn
            (
                fvm::ddt(c_w_SpI)
              + fvm::div(phi, c_w_SpI)
              - fvm::laplacian(Diff, c_w_SpI)
            );

            cwEqn.solve();
        }
    }

    ...

    Info << "Solving evolution in w-space\n";
    label debLevel;
    debLevel = lduMatrix::debug;
    lduMatrix::debug = 0;
    forAll(mesh.cells(), cellISpatial)
    {
        surfaceScalarField phiW            // "convective flux" in w-space
        (
            IOobject
            (
                "phiW",
                runTime.timeName(),
                meshW,
                IOobject::NO_READ,
                IOobject::NO_WRITE
            ),
        ...
        );

        // obtaining c_w distribution in the specified cell
        forAll(meshW.cells(), cellIW)
        {
            c_w_w[cellIW] = c_w_Spatial[cellIW]()[cellISpatial];
        }

        // implicit scheme - fails to work in parallel
        /*
            fvScalarMatrix c_w_wEqn
            (
                fvm::ddt(c_w_w)
              + fvm::div(phiW, c_w_w)
            );

            c_w_wEqn.solve();
        */

        // explicit scheme - works fine both in serial and in parallel runs
        c_w_w += -runTime.deltaT()*fvc::div(phiW, c_w_w);


        // updating c_w distribution (i.e. writing back)
        forAll(meshW.cells(), cellIW)
        {
            c_w_Spatial[cellIW]()[cellISpatial] = c_w_w[cellIW];
        }
    }
    lduMatrix::debug = debLevel;

    Info << "Solving chemistry\n";
    ...
    forAll(mesh.cells(), celli)
    {
    ...
    }
    ...
    runTime.write();
}
I've figured out that if I use the implicit scheme (it's commented out above) in a parallel run, I get a crash on:
c_w_wEqn.solve();
I.e. commenting out this line eliminates the crashes.

Terminal output of the crash:

Code:
Solving evolution in w-space
[chaos2:7807] *** An error occurred in MPI_Recv
[chaos2:7807] *** on communicator MPI_COMM_WORLD
[chaos2:7807] *** MPI_ERR_TRUNCATE: message truncated
[chaos2:7807] *** MPI_ERRORS_ARE_FATAL (your MPI job will now abort)
--------------------------------------------------------------------------
mpirun has exited due to process rank 1 with PID 7807 on
node chaos2 exiting without calling "finalize". This may
have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
--------------------------------------------------------------------------
If I could somehow tell the linear solvers to solve the w-space (i.e. 1D) problem without any parallel communication, I could use the implicit scheme.
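What I have in mind is something like the following (just an untested sketch; I don't know whether Pstream::parRun() can really be switched off and back on like this in my OpenFOAM version, so please correct me if this is nonsense):
Code:
// untested idea: pretend this is not a parallel run while solving the
// purely local 1D problem, so the linear solver skips the global MPI
// reductions, then restore the flag afterwards
const bool wasParRun = Pstream::parRun();
Pstream::parRun() = false;

fvScalarMatrix c_w_wEqn
(
    fvm::ddt(c_w_w)
  + fvm::div(phiW, c_w_w)
);
c_w_wEqn.solve();

Pstream::parRun() = wasParRun;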

Best regards,
Aleksey.


Tags
parallel, turn off





