May 18, 2018, 04:00
Fluent job with Slurm
#1
Member
Join Date: Mar 2016
Posts: 33
Rep Power: 10
With the Slurm job manager, I use the following script to run a multi-node Fluent simulation.
Code:
#!/bin/bash
# The name of the script is myjob
#SBATCH -J fluent
#SBATCH --partition=NOLIMIT
#SBATCH --account=nl
# Number of nodes
#SBATCH -N 2
# Number of MPI processes per node
#SBATCH --ntasks-per-node=24

# The Journal file
JOURNALFILE=fluent.journal
FLUENT=/state/partition1/ansys_inc/v190/fluent/bin/fluent

# Total number of Processors
# NPROCS=24
NTASKS=`echo $SLURM_TASKS_PER_NODE | cut -c1-2`
NPROCS=`expr $SLURM_NNODES \* $NTASKS`

$FLUENT 2ddp -g -slurm -t$NPROCS -mpi=openmpi -i $JOURNALFILE > fluent3.log
By submitting that script, I see
Code:
/state/partition1/ansys_inc/v190/fluent/fluent19.0.0/bin/fluent -r19.0.0 2ddp -g -slurm -t48 -mpi=openmpi -i fluent.journal
Code:
Starting fixfiledes /state/partition1/ansys_inc/v190/fluent/fluent19.0.0/multiport/mpi/lnamd64/openmpi/bin/mpirun --mca btl self,sm,tcp --mca btl_sm_use_knem 0 --prefix /state/partition1/ansys_inc/v190/fluent/fluent19.0.0/multiport/mpi/lnamd64/openmpi --x LD_LIBRARY_PATH --np 48 --host compute-0-4.local /state/partition1/ansys_inc/v190/fluent/fluent19.0.0/lnamd64/2ddp_node/fluent_mpi.19.0.0 node -mpiw openmpi -pic shmem -mport 10.1.1.250:10.1.1.250:42514:0

----------------------------------------------------------------------------------
ID     Hostname            Core   O.S.      PID        Vendor
----------------------------------------------------------------------------------
n0-47  compute-0-4.local   48/32  Linux-64  4000-4047  Intel(R) Xeon(R) E5-2660 0
host   compute-0-4.local          Linux-64  3811       Intel(R) Xeon(R) E5-2660 0

MPI Option Selected: openmpi
Selected system interconnect: shared-memory
----------------------------------------------------------------------------------
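The banner above shows the trouble: all 48 ranks were started on compute-0-4.local (48 processes on a 32-core node, hence "48/32"), and Fluent selected the shared-memory interconnect, so the second allocated node is never used. A common workaround is to skip the -slurm integration and hand Fluent an explicit machine file built from the allocation. The following is only a sketch, assuming the cluster's Fluent launcher accepts the -cnf= option and scontrol is available on the first compute node; the hosts.$SLURM_JOB_ID filename is arbitrary:
Code:
#!/bin/bash
#SBATCH -J fluent
#SBATCH --partition=NOLIMIT
#SBATCH --account=nl
#SBATCH -N 2
#SBATCH --ntasks-per-node=24

JOURNALFILE=fluent.journal
FLUENT=/state/partition1/ansys_inc/v190/fluent/bin/fluent

# Expand the Slurm allocation into one hostname per line
HOSTFILE=hosts.$SLURM_JOB_ID
scontrol show hostnames "$SLURM_JOB_NODELIST" > "$HOSTFILE"

# SLURM_NTASKS already equals nodes * ntasks-per-node (2 * 24 = 48 here),
# so no string parsing of SLURM_TASKS_PER_NODE is needed
$FLUENT 2ddp -g -t$SLURM_NTASKS -mpi=openmpi -cnf="$HOSTFILE" -i $JOURNALFILE > fluent3.log
With both hostnames in the machine file, mpirun should receive a --host list that covers both nodes instead of only compute-0-4.local.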
August 23, 2020, 20:59
#2
New Member
Mike
Join Date: Jul 2019
Posts: 5
Rep Power: 7
Did you ever find a solution to this problem? I am encountering something similar.
June 15, 2021, 05:21
#3
New Member
Hamed
Join Date: Jul 2012
Posts: 16
Rep Power: 14
Try calling fluent directly instead of $FLUENT.
It may work. (I know the question is from a few years back, but I am posting this for anyone who may have a similar problem.)
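If the suggestion means relying on a fluent executable found on $PATH (for example, one provided by an environment module) rather than the hard-coded $FLUENT path, the launch lines of the original script would look roughly like this. The module name ansys is an assumption and differs between clusters:
Code:
# "ansys" is an assumed module name; check `module avail` on your cluster
module load ansys

# Call fluent from $PATH instead of the hard-coded $FLUENT variable
fluent 2ddp -g -slurm -t$NPROCS -mpi=openmpi -i $JOURNALFILE > fluent3.log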