error while running in parallel using openmpi on local mc 6 processors

#1 | May 20, 2012, 06:02
Nitin Suryawanshi (suryawanshi_nitin), New Member, Pune, India
When I run my case in parallel with the following command:

mpirun -np 6 pisoFoam -parallel > log &

I get the following error:

neptune@ubuntu:~/tutorials/incompressible/icoFoam/cavity$ mpirun -np 6 pisoFoam -parallel > log &
[1] 12387
neptune@ubuntu:~/tutorials/incompressible/icoFoam/cavity$ --------------------------------------------------------------------------
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
with errorcode 1.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
[0]
[0]
[0] --> FOAM FATAL ERROR:
[0] number of processor directories = 2 is not equal to the number of processors = 6
[0]
FOAM parallel run exiting
[0]
--------------------------------------------------------------------------
mpirun has exited due to process rank 0 with PID 12388 on
node ubuntu exiting improperly. There are two reasons this could occur:

1. this process did not call "init" before exiting, but others in
the job did. This can cause a job to hang indefinitely while it waits
for all processes to call "init". By rule, if one process calls "init",
then ALL processes must call "init" prior to termination.

2. this process called "init", but exited without calling "finalize".
By rule, all processes that call "init" MUST call "finalize" prior to
exiting or it will be considered an "abnormal termination"

This may have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
--------------------------------------------------------------------------


After this I ran the parallel tests mentioned in one of Bruno's links:

  1. Run mpirun with a test for launching MPI-less applications. For example, run each of these one at a time: Code:
    mpirun -np 2 bash -c "ls -l"
    mpirun -np 2 bash -c "export"
    The first one shows the contents of the folder for each remotely launched bash shell. The second one shows the environment variables for each remote shell.
    If neither of these works, then your MPI installation isn't working.
  2. Build the test application designed for these kinds of tests: Code:
    cd $WM_PROJECT_DIR
    wmake applications/test/parallel
    Now go to your case that already has decomposePar done, then run the test scenarios.

Up to step 2 everything works, but when I run parallelTest I get the following error:

neptune@ubuntu:~/tutorials/incompressible/icoFoam/cavity$ parallelTest
parallelTest: command not found

Thanks in advance; please help me with this.

#2 | May 20, 2012, 09:28
Adhiraj (adhiraj), Senior Member, Karnataka, India
Why does it complain that you have 2 processor directories and are trying to run with 6 processors?
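A quick, hedged check of what the case actually contains (run from the case root):
Code:
ls -d processor*                                  # should list processor0 ... processor5 for -np 6
grep numberOfSubdomains system/decomposeParDict   # must match the -np value passed to mpirun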

#3 | May 20, 2012, 11:45
Bruno Santos (wyldckat), Retired Super Moderator, Lisbon, Portugal
Greetings to both!

@suryawanshi_nitin: Adhiraj is right, the decomposition apparently didn't go as you expected. Check your "system/decomposeParDict".
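As a hedged illustration, the numberOfSubdomains entry has to match the -np value passed to mpirun, and the case must be re-decomposed after changing it. A minimal sketch of the relevant part of "system/decomposeParDict" (the method and its coefficients below are only examples): Code:
numberOfSubdomains 6;            // must equal the -np value given to mpirun

method          simple;          // example method; scotch is another common choice

simpleCoeffs
{
    n               (3 2 1);     // 3 x 2 x 1 = 6 subdomains
    delta           0.001;
}
After editing it, re-run decomposePar so that the processor0..processor5 directories are regenerated.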

As for parallelTest, as of 2.0.0 it has been renamed to Test-parallel.
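For reference, a hedged sketch of building and running the renamed utility on a case that has already been decomposed (the case path is a placeholder, and the exact flags may differ between versions): Code:
cd $WM_PROJECT_DIR
wmake applications/test/parallel      # builds the Test-parallel utility
cd /path/to/your/case                 # placeholder path
mpirun -np 6 Test-parallel -parallel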

Best regards,
Bruno

#4 | May 20, 2012, 14:37
Nitin Suryawanshi (suryawanshi_nitin), New Member, Pune, India
Thanks for your valuable replies.
Yes, you are right: my actual case uses 6 processors, but it was giving trouble, so I first checked with a simple case. Below is the error message from the actual case. I had already solved this case up to 1.7 s in the OF 2.0.1 Debian pack, with the domain decomposed for 6 processors. Now I am using the same data to continue the run in an OF 2.1.0 source-pack installation, but when running it beyond 1.7 s I get the following error.
(Test-parallel is working now.)

neptune@ubuntu:~/nitin/s$ mpirun -np 6 pisoFoam -parallel > log &
[1] 2865
neptune@ubuntu:~/nitin/s$ [5]
[5]
[5] --> FOAM FATAL IO ERROR:
[5] essential value entry not provided
[5]
[5] file: /home/neptune/nitin/s/processor5/1.77/phi::boundaryField::symmetryBottom from line 59453 to line 59453.
[5]
[5] From function fvsPatchField<Type>::fvsPatchField
(
const fvPatch& p,
const DimensionedField<Type, surfaceMesh>& iF,
const dictionary& dict
)
[5]
[5] in file lnInclude/fvsPatchField.C at line 110.
[5]
FOAM parallel run exiting
[5]
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 5 in communicator MPI_COMM_WORLD
with errorcode 1.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun has exited due to process rank 5 with PID 2871 on
node ubuntu exiting improperly. There are two reasons this could occur:

1. this process did not call "init" before exiting, but others in
the job did. This can cause a job to hang indefinitely while it waits
for all processes to call "init". By rule, if one process calls "init",
then ALL processes must call "init" prior to termination.

2. this process called "init", but exited without calling "finalize".
By rule, all processes that call "init" MUST call "finalize" prior to
exiting or it will be considered an "abnormal termination"

This may have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
--------------------------------------------------------------------------

Kindly awaiting your replies; thanks in advance.

Nitin

#5 | May 21, 2012, 09:31
Jan (SirWombat), Member, Berlin
Quote:
Originally Posted by suryawanshi_nitin View Post
[5] --> FOAM FATAL IO ERROR:
[5] essential value entry not provided
[5]
[5] file: /home/neptune/nitin/s/processor5/1.77/phi::boundaryField::symmetryBottom from line 59453 to line 59453.
What OpenFOAM is trying to tell you: Your "symmetryBottom" has a missing value in your boundary setup. Be sure to provide all needed variables!
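As a hedged illustration (only the patch name comes from the error message; the type and value below are examples, not necessarily what this case needs), a boundaryField entry that supplies the missing value in the offending field file would look roughly like this: Code:
symmetryBottom
{
    type            calculated;
    value           uniform 0;   // the "essential value entry" the error refers to
}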

#6 | May 21, 2012, 15:13
Nitin Suryawanshi (suryawanshi_nitin), New Member, Pune, India
Sir, thanks for your reply.
It is working well now, but I restarted the case from the start time, i.e. 0.0 s in controlDict, as a completely new simulation. What I understand from this is that if we have solution data from the OF 2.0.1 Debian pack and want to continue that case in an OF 2.1.0 source-pack installation, then OF 2.1.0 is unable to read the old data from the old version, especially in a parallel case. That is my interpretation; thanks for your valuable time. If anyone has a clearer view of this, they are most welcome to share it.
This is the way to learn fast and with more clarity.
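For reference, a hedged sketch of the system/controlDict keywords involved in restarting from time zero (only the relevant entries shown): Code:
startFrom       startTime;   // alternatives: firstTime, latestTime
startTime       0;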


Regards,
Nitin Suryawanshi.

#7 | February 18, 2017, 23:11
sibo, Member, Chicago
Hi Nitin,

I read your post "error while running in parallel using openmpi on local mc 6 processors" and noticed you solved this problem.

I have exactly the same error. I was trying to run a case in parallel with 4 processors on a cluster; the job stops right after it starts, and I found this error message in the log file. However, when I run the same case in parallel on my own laptop, it works fine.

Can you please share in more detail how you solved this problem? Thanks a lot!

#8 | February 22, 2017, 00:21
Maria (marialhm), Member
Quote:
Originally Posted by sibo View Post
I have exactly the same error. I was trying to run a case in parallel with 4 processors on a cluster; the job stops right after it starts, and I found this error message in the log file. However, when I run the same case in parallel on my own laptop, it works fine.

Can you please share in more detail how you solved this problem? Thanks a lot!
Hi sibo,

Have you solved the problem? I have the same one!

Maria

#9 | February 22, 2017, 10:04
sibo, Member, Chicago
Hi Maria,

I am still working on it! So annoying!
What I did was reinstall OpenFOAM, paying special attention to the step that loads Open MPI into the environment.
Also make sure your OpenFOAM installations are the same version; a couple of quick checks are sketched below.
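A hedged sketch of quick environment checks, run on the machine or cluster node where the job actually executes: Code:
echo $WM_PROJECT_VERSION   # OpenFOAM version loaded in this shell
which pisoFoam             # confirms the solver comes from that installation
which mpirun               # confirms which Open MPI will be used
mpirun --version           # reports the Open MPI version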
Hope this works.

Thanks.

#10 | February 22, 2017, 10:34
sibo, Member, Chicago
Hi Maria,

It works now!!!

#11 | February 22, 2017, 22:33
Maria (marialhm), Member
Thanks, Sibo.

Mine is also working, and I didn't reinstall it; it turned out I had made some mistakes.

