|
April 11, 2006, 10:35 |
Hi everybody, I set up a clust
|
#1 |
Member
clo
Join Date: Mar 2009
Posts: 36
Rep Power: 17 |
Hi everybody, I set up a cluster (only a server and one node) and I wanted to see if it works correctly, so I tried to run an OpenFOAM case. I took the OpenFOAM User Guide (version 1.3) and found an example using LAM/MPI (page U-83).
I launched LAM and all was OK:

n-1<12588> ssi:boot:base:linear: booting n0 (clo)
n-1<12588> ssi:boot:base:linear: booting n1 (oscarnode1)
n-1<12588> ssi:boot:base:linear: finished

Then I tried to generate the mesh on the slave node (n1 in my case):

mpirun n1 -np 1 blockMesh $FOAM_RUN/tutorials/interFoam damBreak -parallel < /dev/null >& log &

The output was:

[1] 16717

and nothing more... OpenFOAM itself is working fine, because when I run the case on the server the calculation finishes without problems. It seems like the slave node isn't doing anything at all... What is the [1] 16717 number? Has anyone already run this kind of job? Thanks, ciao
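A quick sanity check at this point (a sketch, not from the post: it assumes passwordless ssh to the slave, whose hostname oscarnode1 comes from the LAM boot output above):

ssh oscarnode1 'ps -ef | grep blockMesh | grep -v grep'   # is anything actually running on the slave?
tail log                                                  # what the backgrounded run has written so far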
|
April 11, 2006, 10:42 |
The " 16717" number is : first
|
#2 |
Senior Member
Francesco Del Citto
Join Date: Mar 2009
Location: Zürich Area, Switzerland
Posts: 237
Rep Power: 18 |
The "[1] 16717" number is [1]: first process in background; 16717: PID (Process id) of the process.
It's normal, and it is a consequence of the ambersand at the end of the command line. If you want to see what really happens, try to run mpirun without redirecting output and without sending it in the background, ie: mpirun n1 -np 1 blockMesh $FOAM_RUN/tutorials/interFoam damBreak -parallel </dev/null I usually run openfoam in parellel on a cluster, and it works quite well. Francesco |
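To illustrate the point about the ampersand, a minimal sketch of ordinary shell job control (generic shell behaviour, nothing specific to OpenFOAM or LAM):

sleep 60 > log 2>&1 &    # any command sent to the background with its output redirected
                         # the shell immediately prints e.g. "[1] 16717":
                         # "[1]" = job number, "16717" = PID
jobs -l                  # list background jobs together with their PIDs
tail -f log              # follow the redirected output while the job runs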
|
April 11, 2006, 10:51 |
Thanks for your help! I tried it
|
#3 |
Member
clo
Join Date: Mar 2009
Posts: 36
Rep Power: 17 |
Thanks for your help! I tried it, but it seems like nothing happens; the output:

/*---------------------------------------------------------------------------*\
| =========                 |                                                 |
| \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox           |
|  \\    /   O peration     | Version:  1.3                                   |
|   \\  /    A nd           | Web:      http://www.openfoam.org               |
|    \\/     M anipulation  |                                                 |
\*---------------------------------------------------------------------------*/
Exec : blockMesh /home/ufftecn1/OpenFOAM/ufftecn1-1.3/run/tutorials/interFoam damBreak -parallel

It's probably something in my cluster... Maybe it's a silly question, but can you give me a hint on how to be sure that something is going on?
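A few LAM-level checks that should show whether the slave node is usable at all (a sketch based on standard LAM/MPI utilities, not commands taken from this thread):

lamnodes                    # list the nodes currently in the booted LAM universe
lamexec n1 hostname         # run a trivial non-MPI program on the slave; it should print "oscarnode1"
mpitask                     # show MPI processes currently running under LAM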
|
April 11, 2006, 11:06 |
I really don't know if you can
|
#4 |
Senior Member
Francesco Del Citto
Join Date: Mar 2009
Location: Zürich Area, Switzerland
Posts: 237
Rep Power: 18 |
I really don't know if you can run "blockMesh" in parallel...
If you look at the manual, you can find the standard procedure for running a parallel case. First you have to run decomposePar, in order to decompose the computational mesh. Then you can run the solver on the decomposed case, using the same number of processes. So you can try to run a tutorial in parallel (e.g. the damBreak case with interFoam), using more than one process. I hope this helps. Francesco
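A sketch of that procedure for the damBreak tutorial, written with the root/case arguments and LAM-style mpirun used elsewhere in this thread (the process count of 4 is an assumption and must match numberOfSubdomains in system/decomposeParDict):

blockMesh $FOAM_RUN/tutorials/interFoam damBreak         # generate the mesh (serial)
setFields $FOAM_RUN/tutorials/interFoam damBreak         # initialise the gamma field (serial)
decomposePar $FOAM_RUN/tutorials/interFoam damBreak      # split the case into processor0..processor3
mpirun -np 4 interFoam $FOAM_RUN/tutorials/interFoam damBreak -parallel < /dev/null >& log &
reconstructPar $FOAM_RUN/tutorials/interFoam damBreak    # merge the processor results afterwards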
|
April 11, 2006, 11:11 |
I will try to do as you said..
|
#5 |
Member
clo
Join Date: Mar 2009
Posts: 36
Rep Power: 17 |
I will try to do as you said... Grazie (thanks), ciao
|
April 12, 2006, 02:28 |
I have a problem with decompos
|
#6 |
New Member
auvi
Join Date: Mar 2009
Posts: 5
Rep Power: 17 |
I have a problem with decomposePar for the damBreakFine tutorial case.
When I run decomposePar on the damBreakFine case, it exits with a fatal error like this:

--> FOAM FATAL I/O ERROR
Cannot find 'value' entry which is required to set the values of the default patch field.
Please add the 'value' entry to the write function of the user defined boundary condition.

The file named in the error is damBreakFine/0/pd::atmosphere, lines 51 to 52. The "pd" file around lines 51 to 52 reads:

atmosphere
{
    type totalPressure;
    p0 uniform;
}

Before running decomposePar I edited the decomposeParDict file according to the tutorial and ran these:
1) blockMesh on damBreakFine
2) setFields on damBreakFine

Dear clo and Francesco Del Citto, or anyone: please help. Auvi
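For what it's worth, the error is asking for a 'value' keyword in that patch, and the p0 line is also missing a magnitude after uniform. A minimal sketch of the corrected entry, assuming the zero values used in the standard damBreak setup:

atmosphere
{
    type            totalPressure;
    p0              uniform 0;      // a magnitude was missing after "uniform"
    value           uniform 0;      // the entry the error message asks for
}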
|
April 12, 2006, 03:40 |
Hi auvi, for the moment I run
|
#7 |
Member
clo
Join Date: Mar 2009
Posts: 36
Rep Power: 17 |
Hi auvi, for the moment I am running the damBreak case (not the Fine one), but given that the boundary conditions are the same, in my 0/p file at line 50 I read:

atmosphere
{
    type totalPressure;
    p0 uniform 0;
    value uniform 0;
}

Maybe it can help you...
|
February 23, 2009, 00:44 |
I have a problem running OF in
|
#8 |
New Member
vijayakrishnan
Join Date: Mar 2009
Posts: 5
Rep Power: 17 |
I have a problem running OF in parallel over two Linux machines. The solver is sonicTurbFoam.
I am attaching the log for reference:

Exec   : sonicTurbFoam -parallel
Date   : Feb 19 2009
Time   : 12:31:39
Host   : soorya
PID    : 5104
Case   : /home/openfoam15/OpenFOAM/vijay-1.5/run/vayumach2clus
nProcs : 4
Slaves : 3 ( soorya.5105 kidambiHP219.4565 kidambiHP219.4566 )

Pstream initialized with:
    floatTransfer     : 1
    nProcsSimpleSum   : 0
    commsType         : nonBlocking

// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //

Create time

Create mesh for time = 0

Reading thermophysical properties

Selecting thermodynamics package hThermo<pureMixture<constTransport<specieThermo<hConstThermo<perfectGas>>>>>

1 additional process aborted (not shown)

These are the error messages I get:

Thu Feb 19 12:31:36 IST 2009
nohup: appending output to `nohup.out'
[soorya:05105] *** An error occurred in MPI_Waitall
[soorya:05105] *** on communicator MPI_COMM_WORLD
[soorya:05105] *** MPI_ERR_TRUNCATE: message truncated
[soorya:05105] *** MPI_ERRORS_ARE_FATAL (goodbye)
[soorya:05104] *** An error occurred in MPI_Waitall
[soorya:05104] *** on communicator MPI_COMM_WORLD
[soorya:05104] *** MPI_ERR_TRUNCATE: message truncated
[soorya:05104] *** MPI_ERRORS_ARE_FATAL (goodbye)
mpirun noticed that job rank 2 with PID 4565 on node kidambiHP219 exited on signal 15 (Terminated).
Command exited with non-zero status 1
0.02user 0.01system 0:06.14elapsed 0%CPU (0avgtext+0avgdata 0maxresident)k
4752inputs+16outputs (27major+2408minor)pagefaults 0swaps
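One detail in that log may be worth checking (an assumption, not something confirmed in this thread): Pstream is running with floatTransfer 1, i.e. field data is reduced to single precision for transfer, and MPI_ERR_TRUNCATE failures in MPI_Waitall have sometimes been traced to that switch. In OpenFOAM 1.5 it lives in the OptimisationSwitches section of the global controlDict ($WM_PROJECT_DIR/etc/controlDict), not the case controlDict. A sketch of the relevant entry with transfers kept at full precision:

OptimisationSwitches
{
    // ... other switches left as installed ...
    commsType       nonBlocking;
    floatTransfer   0;      // the log above shows 1; 0 sends fields at full precision
    nProcsSimpleSum 0;
}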
|
Similar Threads
Thread | Thread Starter | Forum | Replies | Last Post |
Large test case for running OpenFoam in parallel | fhy | OpenFOAM Running, Solving & CFD | 23 | April 6, 2019 10:55 |
OpenFOAM on cluster | markh83 | OpenFOAM Installation | 1 | October 17, 2008 20:09 |
How to run Openfoam in a cluster after I install it | xiuying | OpenFOAM Installation | 5 | May 5, 2008 13:54 |
OpenFOAM in a Linux Cluster | gedanken | OpenFOAM Installation | 1 | August 25, 2005 14:32 |
A conference in air load /flight simulation/flight test/wind tunnel test/aero modeling for high AOA | cimsi | Main CFD Forum | 0 | September 17, 1998 07:26 |