January 18, 2012, 00:33
Running in parallel on multiple nodes
#1
Member
Friends,
I was trying to run a case using the resources of 2 computers with the following command:

    mpirun --hostfile <machines> -np <nprocs> snappyHexMesh -parallel

When I run this without the hostfile on 1 node with 8 processors, I don't get any errors, but when I run the same command on 2 nodes, with the machine names listed in the machines hostfile, I get an error saying it cannot find points in directory polyMesh from 0 down to constant. I checked whether the constant directory had the polyMesh directory and the points file in it, and apparently it does. Can someone please help me? Where am I going wrong?
regards,
Kalyan Goparaju
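For reference, an Open MPI hostfile lists one machine per line with an optional slot count. The hostnames below are placeholders for your own nodes:

    # machines -- one host per line, slots = cores to use on that host
    node1 slots=8
    node2 slots=8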
January 18, 2012, 05:01
#2
Senior Member
BastiL
Join Date: Mar 2009
Posts: 530
Rep Power: 20
Hi,
you need to decompose your model in order to run in parallel. Run decomposePar to do this.
Regards Bastian
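A minimal system/decomposeParDict for an 8-way run could look like the sketch below; the simple method and its 2x2x2 split are just one common choice, not something mandated by this thread:

    // system/decomposeParDict -- minimal sketch for 8 subdomains
    numberOfSubdomains 8;

    method          simple;      // plain geometric decomposition

    simpleCoeffs
    {
        n           (2 2 2);     // 2 x 2 x 2 = 8 pieces
        delta       0.001;
    }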
January 18, 2012, 10:22
#3
Member
Bastil,
I did do that. These are the steps I followed:
1. blockMesh
2. decomposePar
3. mpirun --hostfile machines -np <nprocs> snappyHexMesh -parallel
The problem I mentioned occurs at the third step.
Kalyan
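For context, the complete parallel meshing sequence usually ends by reconstructing the decomposed mesh; this is a generic sketch assuming an 8-core decomposition, not the exact commands from this thread:

    blockMesh
    decomposePar
    mpirun --hostfile machines -np 8 snappyHexMesh -parallel -overwrite
    reconstructParMesh -constant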
January 18, 2012, 12:13
#5
Member
Elvis,
As I understand it, using MPI with machine files shouldn't require us to have the working folder on both systems. To answer your question: no, I don't have the folder on the slave node. But I will give it a shot now and see if it works.
Kalyan
Update - Elvis, I did put the folder on both nodes and tried running. I get the same error.
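If the case is copied by hand instead of shared over NFS, it has to land at the same path on every node. One way to do that, with placeholder path and hostname:

    # mirror the case directory to the same location on the second node
    rsync -a /home/user/myCase/ node2:/home/user/myCase/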
January 18, 2012, 12:36
#6
Senior Member
Olivier
Join Date: Jun 2009
Location: France, Grenoble
Posts: 272
Rep Power: 18
hello,
In fact you need to have the working folder on both systems (usually via an NFS shared file system). You also need snappyHexMesh to be runnable on both systems, so, same as with the working folder, you need OpenFOAM on NFS or OpenFOAM installed in the same directory on each machine. You also need to source your bashrc on each node; one way to do this is to use foamExec. And take a look at ssh access with a shared key, so that no password is needed for each node.
regards,
olivier
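A sketch of those last two points, assuming Open MPI and a stock OpenFOAM install (the user and node names are placeholders): passwordless ssh is set up once from the master node, and foamExec makes each node source the OpenFOAM environment before starting the executable:

    # one-time passwordless ssh setup, run on the master node
    ssh-keygen -t rsa            # accept defaults, empty passphrase
    ssh-copy-id user@node2       # repeat for every slave node

    # run snappyHexMesh through foamExec so the environment is
    # sourced on each node before the executable starts
    mpirun --hostfile machines -np 8 foamExec snappyHexMesh -parallel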