
Running Multiple OpenFOAM Cases at Once on HPC

August 1, 2019, 14:04
Running Multiple OpenFOAM Cases at Once on HPC
  #1
New Member
 
Milad Mozayyani
Join Date: Jun 2019
Posts: 4
Hello all,

I have successfully set up a 3-node, 120-core HPC cluster using an InfiniBand switch. It works great running one large case from the master node and throwing all 120 cores at it, but sometimes we have smaller cases we would like to run simultaneously on fewer cores. Rather than copying the cases onto each machine and running them from the nodes (or via SSH), is there an easy way to assign, say, 6 cases with 20 cores each and have them all run from the master? I briefly looked into some cluster management software but couldn't find what I was looking for. I am very new to Linux and CFD in general, so a nudge in the right direction would be much appreciated.

Thank you!

Edit: I should mention we are running OpenFOAM 5 on Ubuntu 18.04.

Last edited by papamilad; August 1, 2019 at 14:06. Reason: More info

August 1, 2019, 20:41
  #2
Senior Member
 
Join Date: Oct 2011
Posts: 242
Hello,

How did you launch your big simulation from the master using all the available nodes and cores?

Using MPI, you can create a machinefile containing the following info:

Node1:8
Node2:8

This assumes Node1 and Node2 are properly defined with their respective IP addresses in /etc/hosts.

Then launch your executable with mpiexec -np 16 -f machinefile exe. This will start a 16-core MPI job spread over Node1 and Node2.
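For an OpenFOAM case the whole thing would look something like this (just a sketch, untested on your setup: -f and the Node1:8 format are MPICH/Hydra syntax, while the Open MPI that OpenFOAM usually ships with wants -hostfile and lines like "Node1 slots=8" instead, and simpleFoam is only an example solver):

# split the case into 16 subdomains (numberOfSubdomains in system/decomposeParDict)
decomposePar
# run the solver in parallel on the hosts listed in machinefile
mpiexec -np 16 -f machinefile simpleFoam -parallel > log.simpleFoam 2>&1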

Otherwise, have a look at a job scheduler like Slurm. It gives you more control and lets you queue jobs until resources become available, which is quite handy before holidays.

August 2, 2019, 00:17
  #3
New Member
 
Milad Mozayyani
Join Date: Jun 2019
Posts: 4
That's exactly how our cluster is set up and run right now. The problem we're running into is that some cases don't scale well with more cores (i.e. using 120 cores takes more than half the time of 60 cores), so running two at once is quicker than running two back to back on all the cores. The trouble is that if you run two mpirun -np 60 ... commands at once, they both try to use the same 60 cores rather than spreading onto the idle ones. Can Slurm replace mpirun to do what we want, or does it work in conjunction with it?

August 2, 2019, 05:45
  #4
Senior Member
 
Join Date: Oct 2011
Posts: 242
I may not have understood exactly what you want to do. If you have two machinefiles:

File1:
Node1:40
Node2:20

File2:
Node2:20
Node3:40

Then you launch two distinct mpiexec instances:
mpiexec -np 60 -f File1 execname
mpiexec -np 60 -f File2 execname

Then I would not expect MPI to assign the same cores to both jobs, unless something is wrong in your configuration or the cores are already busy with some other job.
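To have both running from the master at the same time, something like this should do it (again just a sketch: I am assuming the case directories are called caseA and caseB, the machinefiles sit next to them, and simpleFoam is the solver; the & puts each job in the background so the second one starts right away):

# job 1 on Node1+Node2, job 2 on Node2+Node3, both launched from the master
(cd caseA && mpiexec -np 60 -f ../File1 simpleFoam -parallel > log.simpleFoam 2>&1) &
(cd caseB && mpiexec -np 60 -f ../File2 simpleFoam -parallel > log.simpleFoam 2>&1) &
wait    # optional: keep the shell busy until both jobs finish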

Slurm works on top of MPI; it does not replace it. I am not an expert in it, but I use it to put lots of runs in a queue and manage resource availability across different users.
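If you go that way, a batch script for one of your small cases might look roughly like this (only a sketch: the OpenFOAM source path, core count and solver name are placeholders to adapt, and depending on how Slurm and your MPI are built you may call mpirun instead of srun):

#!/bin/bash
#SBATCH --job-name=smallCase
#SBATCH --ntasks=20                 # 20 MPI ranks; Slurm finds free cores/nodes for them
#SBATCH --output=slurm-%j.log
source /opt/openfoam5/etc/bashrc    # adjust to wherever your OpenFOAM 5 lives
decomposePar                        # decompose into 20 subdomains per decomposeParDict
srun simpleFoam -parallel           # start the ranks on the cores Slurm allocated

You submit the script with sbatch from the case directory, and Slurm queues the jobs and places them on free cores for you, so no more hand-written machinefiles.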

August 2, 2019, 12:52
  #5
New Member
 
Milad Mozayyani
Join Date: Jun 2019
Posts: 4
That worked flawlessly, thank you! I'm going to give Slurm a shot; from reading the wiki it seems like it removes the need to manually create a machinefile for each case and keep track of which cores/nodes you have assigned.

August 2, 2019, 13:20
  #6
Senior Member
 
Join Date: Oct 2011
Posts: 242
I am glad you figured it out. As far as Slurm is concerned, it is a bit of a pain to configure compared to other tools, but it is definitely worth the effort if you have to schedule many simulations or have several users.






