Description
ANSYS simulation software enables organizations to confidently predict
how their products will operate in the real world. We believe that every product is
a promise of something greater.
Software Category: cae
Local support is minimal; users should create an account on the ANSYS student forum through the ANSYS
website for technical support and for obtaining detailed information.
Available Versions
The current installation of ANSYS
incorporates the most popular packages. To find the available versions and learn how to load them, run:
module spider ansys
The output of the command shows the available ANSYS
module versions.
For detailed information about a particular ANSYS
module, including how to load the module, run the module spider
command with the module’s full version label. For example:
module spider ansys/2021r1
Module | Version | Module Load Command
ansys | 2021r1 | module load ansys/2021r1
Licensing
The current general UVA license can be used for research but is limited in the size of the models it can handle, and it cannot run in multinode (MPI) mode. Users who have their own research licenses with greater capabilities must specify that license. To use such a research license on Rivanna, you must create a file called ansyslmd.ini
with the format
SERVER=1055@myhost.mydept.virginia.edu
ANSYSLI_SERVERS=2325@myhost.mydept.virginia.edu
You must obtain the full names of the hosts from your group's license administrator. The numbers in the lines above are the standard ANSYS ports, but they may differ for some license servers; consult your license administrator for the specific values.
Place this file in the /home/$USER/.ansys
folder. Note the leading period in .ansys
; it is required. This file overrides the general, more restricted license.
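The setup above can be done from a shell. This is a minimal sketch in which the server names are the placeholders from the example; replace them with the values supplied by your license administrator:

```shell
# Create the hidden .ansys folder in your home directory and write the
# license file. The hostnames below are placeholders; substitute the real
# server names from your license administrator.
mkdir -p ~/.ansys
cat > ~/.ansys/ansyslmd.ini <<'EOF'
SERVER=1055@myhost.mydept.virginia.edu
ANSYSLI_SERVERS=2325@myhost.mydept.virginia.edu
EOF
```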
Using ANSYS Workbench
If you wish to run jobs using the Workbench, you need to edit the ~/.kde/share/config/kwinrc
file and add the following line:
FocusStealingPreventionLevel=0
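If you prefer to make the edit from a terminal, the following sketch appends the line only when it is not already present. Note that kwinrc is an INI-style file and KDE normally reads this setting from its [Windows] section, so verify the placement after editing:

```shell
# Append the focus-stealing setting to kwinrc if it is not already there.
# kwinrc is an INI-style file; KDE usually expects this key under the
# [Windows] section, so check the file afterwards.
mkdir -p ~/.kde/share/config
grep -q 'FocusStealingPreventionLevel' ~/.kde/share/config/kwinrc 2>/dev/null || \
    echo 'FocusStealingPreventionLevel=0' >> ~/.kde/share/config/kwinrc
```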
The workbench application, runwb2
, should be executed in an interactive Open OnDemand Desktop session.
When you are assigned a node, launch the desktop, start a terminal, load the desired module and start the workbench with the runwb2
command.
module load ansys
unset SLURM_GTIDS
runwb2
Be sure to delete your Open OnDemand session if you finish before your requested time expires.
Multi-Core Jobs
You can write a batch script to run ANSYS jobs. Refer to the ANSYS documentation for instructions on running from the command line. These examples use threading to run on multiple cores of a single node.
ANSYS Slurm Script:
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --cpus-per-task=20
#SBATCH --time=12:00:00
#SBATCH --partition=standard
#SBATCH -J myCFXrun
#SBATCH -A mygroup
#SBATCH --mem=96000
#SBATCH --output=myCFXrun.txt
mkdir -p /scratch/$USER/myCFXrun
cd /scratch/$USER/myCFXrun
module load ansys/2021r1
ansys211 -np ${SLURM_CPUS_PER_TASK} -def /scratch/yourpath/yourdef.def -ini-file /scratch/yourpath/yourresfile.res
CFX Slurm Script:
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --cpus-per-task=20
#SBATCH --partition=standard
#SBATCH -J myCFXrun
#SBATCH -A mygroup
#SBATCH --mem=12000
#SBATCH --output=myCFXrun.txt
module load ansys/2021r1
cfx5solve -double -def /scratch/yourpath/mydef.def -par-local -partition "$SLURM_CPUS_PER_TASK"
Multi-Node MPI Jobs
You must use IntelMPI. IBM MPI (Platform) will not work on our system.
For Fluent specify -mpi=intel
along with the flag -srun
to dispatch the MPI tasks using Slurm’s task launcher. Also include the -slurm
option. It is generally better with ANSYS and related products to request a total memory over all processes rather than using memory per core, because a process can exceed the allowed memory per core. You must have access to a license that supports HPC usage. These examples also show the minimum number of command-line options; you may require more for large jobs.
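As an illustration of that memory advice, request a total with --mem rather than a per-core cap; the numbers here are examples only:

```shell
# Request 96 GB total per node for the whole job...
#SBATCH --mem=96000
# ...rather than a per-core cap that any single process might exceed:
# #SBATCH --mem-per-cpu=2400
```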
You must also set up passwordless ssh between nodes as described here.
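A typical one-time setup is sketched below; it assumes your home directory (and thus ~/.ssh) is shared across the compute nodes, as it is on Rivanna:

```shell
# Generate a passphrase-less key once, then authorize it for
# node-to-node logins; key generation is skipped if a key already exists.
mkdir -p ~/.ssh && chmod 700 ~/.ssh
if [ ! -f ~/.ssh/id_rsa ]; then
    ssh-keygen -q -t rsa -N "" -f ~/.ssh/id_rsa
fi
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
```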
Fluent Slurm Script:
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=40
#SBATCH --time=12:00:00
#SBATCH --partition=parallel
#SBATCH -J myFluentrun
#SBATCH -A mygroup
#SBATCH --mem=96000
#SBATCH --output=myFluentrun.txt
scontrol show hostname $SLURM_JOB_NODELIST > hosts
module load intel/18.0
module load ansys/2021r1
fluent 3ddp -g -t${SLURM_NTASKS} -cnf=hosts -srun -pinfiniband -mpi=intel -ssh -i myjournalfile.jou
CFX Slurm script:
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=40
#SBATCH --time=12:00:00
#SBATCH --partition=parallel
#SBATCH -J myCFXrun
#SBATCH -A mygroup
#SBATCH --mem=12000
#SBATCH --output=myCFXrun.txt
NPARTS=$(( SLURM_NTASKS_PER_NODE * SLURM_NNODES ))
hostlist=$(scontrol show hostname $SLURM_JOB_NODELIST | paste -s -d,)
module load intel/18.0
module load ansys/2021r1
cfx5solve -double -def /scratch/yourpath/mydef.def -par-dist "$hostlist" -partition "$NPARTS" -start-method "Intel MPI Distributed Parallel"
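The two helper lines in the script can be checked outside a job. In this sketch the Slurm variables are mocked with sample values, and plain hostnames stand in for scontrol's output:

```shell
# Mock the Slurm variables for a 2-node, 40-tasks-per-node job.
SLURM_NTASKS_PER_NODE=40
SLURM_NNODES=2
NPARTS=$(( SLURM_NTASKS_PER_NODE * SLURM_NNODES ))
echo $NPARTS        # prints 80

# paste -s -d, joins one hostname per line into the comma-separated
# list that cfx5solve expects for -par-dist.
hostlist=$(printf 'udc-node1\nudc-node2\n' | paste -s -d,)
echo "$hostlist"    # prints udc-node1,udc-node2
```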