Massive speed penalty when using HPC Pack 2012 Cluster Manager
January 20, 2016, 04:53
Massive speed penalty when using HPC Pack 2012 Cluster Manager
#1
New Member
Chris Pounds
Join Date: May 2015
Posts: 20
Rep Power: 11
Hi,
I've been using Converge at work for about six months now, starting runs manually from the command line after placing the relevant files on our cluster. I'm trying to do a factorial study of a device with different BCs and key geometry dimensions, running HPC Pack 2012 R2. I queued up about 27 sims to run over this last weekend and came in on Tuesday to a nasty shock: only two had run, and the remainder were split roughly 50/50 between not started and crashed. I spent yesterday cleaning things up to the point that I can queue sims that will run, but they do so at a small fraction of our cluster's capacity, around 5%. If I return to running manually, it's at around 95%. Does anyone have any idea how I can avoid this massive performance penalty so we can use the queue system in the future? Thanks.

January 20, 2016, 13:27
#2
Member
Yunliang Wang
Join Date: Dec 2015
Location: Convergent Science, Madison WI
Posts: 58
Rep Power: 11
Hi Chris,
Thank you for your question. Can you tell me which MPI you are using, and which version? When your simulation crashes, what is the error message? Are you trying to run multiple jobs on the same node? Your IT person may be able to help you out. Best, Yunliang

February 2, 2016, 05:56
#3
New Member
Chris Pounds
Join Date: May 2015
Posts: 20
Rep Power: 11
Hi, sorry for the delay, I've been fighting other fires at work.
We're using HP-MPI, but I don't know the version. I fixed the sims manually; it was just a garden-variety CFL problem, with the minimum time step set too high. We have one node dedicated for our use with 60+ cores that we use on a regular basis. As far as I can tell, if I run a job through the HPC Pack 2012 scheduler, it only runs on a single core despite the number of cores we specify in the HPC 2012 "Edit Task" command line dialog. We are trying to run at least two in parallel at any one time, but really what we need is the ability to simply queue up a list of simulations so that the hardware is actually kept busy.

February 3, 2016, 10:35
#4
Member
Yunliang Wang
Join Date: Dec 2015
Location: Convergent Science, Madison WI
Posts: 58
Rep Power: 11
Hi Chris,
Nice to hear that you figured out the issue with dt_min. Frankly speaking, we don't have many clients who run CONVERGE on Windows. You mentioned that you were using HP-MPI; have you ever tried MS-MPI instead? Thanks, Yunliang

February 5, 2016, 11:44
#5
Member
Yunliang Wang
Join Date: Dec 2015
Location: Convergent Science, Madison WI
Posts: 58
Rep Power: 11
Hi Chris,
I just talked to our GUI team, and we have helped a client with a similar HPC issue before. It turned out to be a setup issue. Please email me so that we can take a look at your settings remotely and fix the problem. Thanks, Yunliang ywang@convergecfd.com

Tags
batch runs, converge, hpc cluster, queuing
Similar Threads
Thread | Thread Starter | Forum | Replies | Last Post |
need MS HPC Pack to run in "parallel" mode | Chris Lee | SU2 | 0 | November 24, 2014 21:05 |
Can we merge HPC Pack licenses? | Phillamon | FLUENT | 0 | January 24, 2014 03:59 |
Fluent-HPC PACK 2012 | makaveli1903 | FLUENT | 0 | March 15, 2013 08:29 |
Microsoft HPC Pack 2008 Tool Pack (LINPACK) | jemyungcha | Hardware | 1 | October 22, 2011 19:21 |
Linux Cluster Manager from SGI out for X86 | Shelly Zavoral | Main CFD Forum | 0 | February 13, 2009 13:26 |