Fluent job with Slurm

May 18, 2018, 04:00   #1
mahmoodn
Member
Join Date: Mar 2016
Posts: 33
With the Slurm job manager, I use the following script to run a multi-node Fluent simulation.
Code:
#!/bin/bash

# The name of the script is myjob
#SBATCH -J fluent

#SBATCH --partition=NOLIMIT
#SBATCH --account=nl

# Number of nodes
#SBATCH -N 2

# Number of MPI processes per node
#SBATCH --ntasks-per-node=24

# The Journal file
JOURNALFILE=fluent.journal
FLUENT=/state/partition1/ansys_inc/v190/fluent/bin/fluent

# Total number of Processors
# NPROCS=24
NTASKS=`echo $SLURM_TASKS_PER_NODE | cut -c1-2`
NPROCS=`expr $SLURM_NNODES \* $NTASKS`

$FLUENT 2ddp -g -slurm -t$NPROCS -mpi=openmpi -i $JOURNALFILE > fluent3.log

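(A side note on the processor count: the cut -c1-2 trick only works while the per-node task count has exactly two digits, since SLURM_TASKS_PER_NODE takes forms like "24(x2)" and the first two characters just happen to be "24" here. A sketch of a more direct computation, assuming sbatch exports SLURM_NTASKS_PER_NODE when --ntasks-per-node is used:)

Code:
# Total MPI processes, computed without string slicing.
# Assumes SLURM_NTASKS_PER_NODE is exported by sbatch when --ntasks-per-node is given.
NPROCS=$(( SLURM_NNODES * SLURM_NTASKS_PER_NODE ))
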
After submitting the original script, I see:

Code:
/state/partition1/ansys_inc/v190/fluent/fluent19.0.0/bin/fluent -r19.0.0 2ddp -g -slurm -t48 -mpi=openmpi -i fluent.journal
So the command is correct. However, the run does not spawn across two nodes; only one node is used, with more processes than its logical core count:
Code:
Starting fixfiledes /state/partition1/ansys_inc/v190/fluent/fluent19.0.0/multiport/mpi/lnamd64/openmpi/bin/mpirun --mca btl self,sm,tcp --mca btl_sm_use_knem 0 --prefix /state/partition1/ansys_inc/v190/fluent/fluent19.0.0/multiport/mpi/lnamd64/openmpi --x LD_LIBRARY_PATH --np 48 --host compute-0-4.local /state/partition1/ansys_inc/v190/fluent/fluent19.0.0/lnamd64/2ddp_node/fluent_mpi.19.0.0 node -mpiw openmpi -pic shmem -mport 10.1.1.250:10.1.1.250:42514:0

----------------------------------------------------------------------------------
ID     Hostname           Core   O.S.      PID        Vendor
----------------------------------------------------------------------------------
n0-47  compute-0-4.local  48/32  Linux-64  4000-4047  Intel(R) Xeon(R) E5-2660 0
host   compute-0-4.local         Linux-64  3811       Intel(R) Xeon(R) E5-2660 0

MPI Option Selected: openmpi
Selected system interconnect: shared-memory
----------------------------------------------------------------------------------
Any idea?
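
For anyone debugging the same symptom, a quick first check is whether the allocation itself really spans two nodes, and what host list is available to hand to the solver. A small sketch, assuming the standard scontrol utility is available inside the job:

Code:
# List the nodes Slurm allocated to this job, one hostname per line.
scontrol show hostnames "$SLURM_JOB_NODELIST"

# Save the list to a hosts file that a solver can be pointed at later.
scontrol show hostnames "$SLURM_JOB_NODELIST" > hosts.$SLURM_JOB_ID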

August 23, 2020, 20:59   #2
Leibniz (Mike)
New Member
Join Date: Jul 2019
Posts: 5
Did you ever find a solution to this problem? I am encountering something similar.

June 15, 2021, 05:21   #3
aroma (Hamed)
New Member
Join Date: Jul 2012
Posts: 16
Try fluent instead of $FLUENT. It may work.

(I know the question is from a few years back, but I am posting this for anyone who may have a similar problem.)
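
To expand on that: in the log above, mpirun is only given --host compute-0-4.local, so all 48 processes land on that one machine. The usual way to spread a Fluent run over specific machines is to pass it a hosts file built from the Slurm allocation. A minimal sketch, assuming the fluent launcher is on the PATH (as suggested above), that this build accepts the documented -cnf= and -ssh options, and that password-less ssh between compute nodes is allowed:

Code:
#!/bin/bash
#SBATCH -J fluent
#SBATCH -N 2
#SBATCH --ntasks-per-node=24

JOURNALFILE=fluent.journal

# Build a hosts file from the Slurm allocation (one hostname per line).
HOSTFILE=hosts.$SLURM_JOB_ID
scontrol show hostnames "$SLURM_JOB_NODELIST" > "$HOSTFILE"

# Total number of processes; assumes SLURM_NTASKS is exported for this allocation.
NPROCS=$SLURM_NTASKS

# -cnf= hands Fluent the host list; -ssh makes it spawn remote processes over ssh.
fluent 2ddp -g -t$NPROCS -cnf="$HOSTFILE" -ssh -i "$JOURNALFILE" > fluent.log 2>&1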

