ANSYS Fluent

Fluent software contains the broad physical modeling capabilities needed to model flow, turbulence, heat transfer, and reactions for industrial applications. These range from air flow over an aircraft wing to combustion in a furnace, from bubble columns to oil platforms, from blood flow to semiconductor manufacturing, and from clean-room design to wastewater treatment plants. Fluent also provides special-purpose models for in-cylinder combustion, aero-acoustics, turbomachinery, and multiphase systems.

Fluent also offers highly scalable, high-performance computing (HPC) to help solve complex, large-model computational fluid dynamics (CFD) simulations quickly and cost-effectively. Fluent set a world supercomputing record by scaling to 172,000 cores.

Examples of Running Fluent on Bebop

NOTE:
The user is expected to have a casename.cas, casename.dat, and casename.journal file, along with any other input files required for the case being run.

In the example problems below, these files are combustor_71m.cas, combustor_71m.dat, and combustor_71m.journal.
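
The journal file contains the text user interface (TUI) commands that Fluent executes in batch mode. Its exact contents depend on the case being run; a minimal sketch, assuming a steady-state run that reads the case and data files above, iterates the solver, and writes the results (the iteration count and the output file name combustor_71m_out.dat are placeholders), might look like this:

    ; read the case and data files
    /file/read-case-data combustor_71m.cas
    ; run 100 solver iterations
    /solve/iterate 100
    ; write the resulting data file under a new name
    /file/write-data combustor_71m_out.dat
    ; exit Fluent, confirming the prompt
    /exit yes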

Running on Broadwell Nodes

#!/bin/bash
#SBATCH -N 128
#SBATCH --ntasks-per-node=36
#SBATCH -p bdwall
#SBATCH -t 01:00:00
#SBATCH -A your_project
#SBATCH -o log_128N_intel_edr.out
#SBATCH -e error.out
#SBATCH -J fluent_combus

# Load the Fluent module if it is not already available in your environment
#module purge
#module load ansys-fluids

# Intel MPI settings for the Omni-Path interconnect and the GPFS file system
export I_MPI_FABRICS=shm:tmi
export I_MPI_OFI_PROVIDER=psm2
export I_MPI_EXTRA_FILESYSTEM=1
export I_MPI_EXTRA_FILESYSTEM_LIST=gpfs

# Place MPI temporary files in scratch and run from the submission directory
export MPI_TMPDIR=/scratch
cd $SLURM_SUBMIT_DIR

# Check the number of tasks and set the number of processes
if [ -z "$SLURM_NPROCS" ]; then
    N=$(( $(echo $SLURM_TASKS_PER_NODE | sed -r 's/([0-9]+)\(x([0-9]+)\)/\1 * \2/') ))
else
    N=$SLURM_NPROCS
fi

echo $SLURM_JOB_NODELIST    # print the allocated node list to the output file
echo $SLURM_NPROCS          # print the number of processes
echo -e "N: $N\n"

# Write the allocated host names (one line per task) to a machine file
srun hostname -s > hostfile
cp -p hostfile hostfile_N$SLURM_NNODES

# Run Fluent in batch mode: 3ddp = 3-D double-precision solver, -t = number of
# processes, -mpi=intel = Intel MPI, -cnf = machine file, -ssh = spawn via ssh,
# -g = no GUI, -i = journal file with the TUI commands to execute
fluent -t $SLURM_NTASKS -mpi=intel -cnf=hostfile -ssh 3ddp -g -i combustor_71m.journal

# Alternative interconnect option: -pib.infinipath
# Optionally stop the script here if Fluent exited with an error:
#    if [ $? -gt 0 ]; then
#        exit 13
#    fi

echo "The job ran for: `sacct -j $SLURM_JOB_ID -o Elapsed | sed -n '3p'`"
echo "Core-hours used : `sacct -j $SLURM_JOB_ID -o UserCPU | sed -n '3p'`"
echo "Job id, cores, nodes " $SLURM_JOB_ID $SLURM_NTASKS $SLURM_NNODES

Running on KNL Nodes

#!/bin/bash
#SBATCH -N 8
#SBATCH --ntasks-per-node=64
#SBATCH -C knl,cache,quad
#SBATCH -p knlall
#SBATCH -t 02:00:00
#SBATCH -A your_project
#SBATCH -o log_8N_intel_edr.out
#SBATCH -e error
#SBATCH -J fluent_comb

# Intel MPI settings for the Omni-Path interconnect and the GPFS file system
export I_MPI_FABRICS=shm:tmi
export I_MPI_OFI_PROVIDER=psm2
export I_MPI_EXTRA_FILESYSTEM=1
export I_MPI_EXTRA_FILESYSTEM_LIST=gpfs

# Run from the directory the job was submitted from
cd $SLURM_SUBMIT_DIR

# Check the number of tasks and set the number of processes
if [ -z "$SLURM_NPROCS" ]; then
    N=$(( $(echo $SLURM_TASKS_PER_NODE | sed -r 's/([0-9]+)\(x([0-9]+)\)/\1 * \2/') ))
else
    N=$SLURM_NPROCS
fi

echo $SLURM_JOB_NODELIST    # print the allocated node list to the output file
echo $SLURM_NPROCS          # print the number of processes
echo -e "N: $N\n"

# Write the allocated host names (one line per task) to a machine file
srun hostname -s > hostfile
cp -p hostfile hostfile_N$SLURM_NNODES
# Run Fluent in batch mode (launch options are described in the Broadwell example above)
fluent -t $SLURM_NTASKS -mpi=intel -cnf=hostfile -ssh 3ddp -g -i combustor_71m.journal
# Alternative interconnect option: -pib.infinipath

echo "The job ran for: `sacct -j $SLURM_JOB_ID -o Elapsed | sed -n '3p'`"
echo "Core-hours used : `sacct -j $SLURM_JOB_ID -o UserCPU | sed -n '3p'`"
echo "Job id, cores, nodes " $SLURM_JOB_ID $SLURM_NTASKS $SLURM_NNODES