mpirun openfoam output is buffered, only output at the end |
August 15, 2015, 12:28
mpirun openfoam output is buffered, only output at the end
#1
New Member
nija parkman
Join Date: Aug 2015
Posts: 10
Rep Power: 11
Hello,
The title says it all: when using mpirun to run any solver in parallel, the OpenFOAM output is buffered and only appears once the run has finished, which makes it impossible to follow the run as it progresses. I went through the mpirun manual and this forum but so far haven't found anything. Is this happening to anyone else? Any fix? Thanks, N.
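A minimal sketch to check whether the buffering comes from mpirun itself, independent of OpenFOAM (assumptions: bash is available, and the echo loop is just a stand-in for a solver; if the lines below appear all at once after about three seconds instead of one per second, the launcher or its configuration is buffering stdout):
Code:
# Stand-in for a solver: each rank prints one line per second for 3 seconds
mpirun -np 2 bash -c 'for i in 1 2 3; do echo "tick $i"; sleep 1; done'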
August 16, 2015, 14:08
#2
Senior Member
Troy Snyder
Join Date: Jul 2009
Location: Akron, OH
Posts: 220
Rep Power: 19
I use something like the following to run the job and dump the output to a log file:
Code:
mpirun -np (# of processors) (executable) -parallel > output.log &
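Assuming the log file is being flushed while the job runs, you can then follow it with:
Code:
tail -f output.log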
August 16, 2015, 14:24
#3
New Member
nija parkman
Join Date: Aug 2015
Posts: 10
Rep Power: 11
Hello,
Yes, exactly the same. The job runs fine and the output is as expected, but it is only available at the end. Thanks, N
August 16, 2015, 14:28
#4
Senior Member
Troy Snyder
Join Date: Jul 2009
Location: Akron, OH
Posts: 220
Rep Power: 19
I am not sure what you are asking. If you want output to both the file and stdout, you could do the following:
Code:
mpirun -np (# of processors) (executable) -parallel 2>&1 | tee output.log
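If the pipe itself is adding the buffering, one possible workaround, as a sketch (assumptions on my part: GNU coreutils' stdbuf is available, and the solver writes through stdio; this may not help if the buffering happens inside MPI itself):
Code:
# stdbuf forces line buffering on each rank's stdout and stderr
mpirun -np (# of processors) stdbuf -oL -eL (executable) -parallel 2>&1 | tee output.log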
August 16, 2015, 15:29
#5
New Member
nija parkman
Join Date: Aug 2015
Posts: 10
Rep Power: 11
I'll try to be a bit clearer:
When I run the mpirun command (with the required arguments), the processes start and run fine, but no standard output is generated until the process ends. Only once it ends does the output appear. That's why I used the term "buffered": it seems mpirun buffers all standard output and only spits it out at the end.
August 16, 2015, 16:18
#6
Retired Super Moderator
Bruno Santos
Join Date: Mar 2009
Location: Lisbon, Portugal
Posts: 10,981
Blog Entries: 45
Rep Power: 128
Greetings to all!
@newbie_cfd: You will have to provide more details about your working environment (for example, which MPI installation you're using and which Linux distribution it runs on), because this is not a common thing to occur with mpirun itself.
The problem here is that we cannot guess what you're actually using: mpirun usually gives us the output while it's running, unless a job scheduler on a cluster or supercomputer is involved, although in those cases the launcher usually isn't named "mpirun". Best regards, Bruno
August 16, 2015, 16:32
#7
New Member
nija parkman
Join Date: Aug 2015
Posts: 10
Rep Power: 11
Hello,
Thanks for helping me with this one. I am running on a CentOS workstation:
Code:
mpirun --version
mpirun (Open MPI) 1.8.2
Code:
which mpirun
/usr/mpi/gcc/openmpi-1.8.2/bin/mpirun
Code:
cat /etc/centos-release
CentOS release 6.5 (Final)
Hope this helps. Thanks, N
August 16, 2015, 16:44
#8
Retired Super Moderator
Bruno Santos
Join Date: Mar 2009
Location: Lisbon, Portugal
Posts: 10,981
Blog Entries: 45
Rep Power: 128
Try the following (change the number of cores and solver name to your own):
Code:
mpirun -np 4 -output-filename file.log interFoam -parallel &
Then follow the output with:
Code:
tail -f file.log.1.0
Another detail is that perhaps you shouldn't use all of the cores available in the workstation. For example, try a case with only 2 sub-domains, so that you're certain your machine still has enough cores free for other tasks. If this still has issues, then something else is getting in the way, possibly some configuration of the file system.
----------
Edit: Also, it seems that you're using a custom installation of Open-MPI, because the default on CentOS is 1.8.1. Check the contents of the file "openmpi-mca-params.conf", which you can find with this command:
Code:
find /usr/mpi/gcc/openmpi-1.8.2/ -name "*mca-params.conf"
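Once it's found, a quick way to check for event-related settings, as a sketch (the "etc" path here is my assumption based on your "which mpirun" output above; use whatever path the find command actually returns):
Code:
grep -n "opal_event_include" /usr/mpi/gcc/openmpi-1.8.2/etc/openmpi-mca-params.conf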
August 16, 2015, 17:08
#9
New Member
nija parkman
Join Date: Aug 2015
Posts: 10
Rep Power: 11
Ok, great. Solved!
Running with:
Code:
-output-filename file.log
works. But the "mca-params.conf" file you mentioned was the culprit: it's mentioned elsewhere that some configurations have "opal_event_include=poll" in their config files, which shouldn't be there. Mine had "opal_event_include=epoll", which I commented out. That fixed my issue and the output now updates as the simulation runs. Thanks Wyldckat!, N
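For reference, a minimal sketch of what the fixed file looks like (assuming the path returned by the find command above):
Code:
# /usr/mpi/gcc/openmpi-1.8.2/etc/openmpi-mca-params.conf
# This line was causing the buffered output described in this thread;
# commenting it out restored live output:
# opal_event_include=epoll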