Barnard Cluster
For the course we use the Barnard cluster at TU Dresden. Here we have collected a short summary of our working environment.
Modules
The programs are compiled with a GCC and an MPI module, which we always load to have the right environment available:
module load GCC/11.3.0
module load OpenMPI/4.1.4
In addition, we need Python for some scripts, which is also available as a module:
module load SciPy-bundle/2022.05
And for the visualization of results we use:
module load gnuplot/5.4.4
module load ParaView/5.10.1-mpi
Use:
module save
to store the set of modules for your environment so that they get loaded automatically when you log in. You can look at the list of loaded modules with:
module list
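For reference, the whole module setup can be done in one go as in the following sketch; it assumes the Lmod module system, where module save without a name stores the default collection and module restore brings it back by hand:

# load all modules needed for the course
module load GCC/11.3.0
module load OpenMPI/4.1.4
module load SciPy-bundle/2022.05
module load gnuplot/5.4.4
module load ParaView/5.10.1-mpi

# store them as the default collection so they are loaded again at login
module save

# restore the collection by hand if needed and check what is loaded
module restore
module list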
File system
Barnard uses a workspace mechanism to store computational data in temporary directories. Everyone needs to create such a workspace for the computations:
ws_allocate cfd 7
This command, for example, creates a personal workspace named cfd for 7 days. You can display the list of all your own workspaces with ws_list.
The files for the course can be found in /data/horse/ws/nhr420-hpcfd. In the slides we use the variable $KURS for this path, so it makes sense to define this variable in your own profile:
echo 'export KURS=/data/horse/ws/nhr420-hpcfd' >> $HOME/.bash_profile
For the workspace we created ourselves we use the variable $MYWS; it is helpful to also define this variable in the profile. Overall, the procedure for creating the workspace looks as follows:
MYWS=`ws_allocate cfd 7`
echo "export MYWS=$MYWS" >> $HOME/.bash_profile
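To check that the profile works, the following sketch sources it in the current shell and inspects both variables; a fresh login has the same effect:

source $HOME/.bash_profile

# both variables should now point to existing directories
echo $KURS
echo $MYWS
ls $KURS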
Queueing
Details on the queuing system on Barnard can be found on the ZIH websites.
On the cluster we have to share the nodes and therefore configure the layout accordingly in the job. This is set via #SBATCH lines in the job script:
#SBATCH --nodes=1
#SBATCH --ntasks=8
#SBATCH --mem=20000
A script (named job.sh, for example) is then submitted to the batch system with the following command:
sbatch job.sh
For computations that should run overnight, use the general queue without a reservation.
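Putting these pieces together, a minimal job script (job.sh) could look like the following sketch; the time limit and the executable name ./flow_solver are placeholders, not part of the course material:

#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks=8
#SBATCH --mem=20000
#SBATCH --time=01:00:00

# load the same environment as on the login node
module load GCC/11.3.0
module load OpenMPI/4.1.4

# run the MPI program inside the workspace (./flow_solver is a placeholder)
cd $MYWS
srun ./flow_solver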
Interactive jobs can be created using the salloc command. For example:
salloc --nodes=1 --ntasks=8 --mem=20000
You can list your jobs with:
squeue --me
To terminate jobs use scancel.
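A typical interactive session could then look like this sketch, where ./flow_solver is again a placeholder for your own program:

# request an interactive allocation and wait for the shell prompt
salloc --nodes=1 --ntasks=8 --mem=20000

# inside the allocation, start the MPI program
srun ./flow_solver

# leave the allocation when you are done
exit

If a job needs to be removed, pass its ID from the squeue --me output to scancel.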
Reservations
For the course we have daily reservations to allow for swift and interactive use of the cluster. These are:
- Mon, 05.02.: p_nhr_cfd_126
- Tue, 06.02.: p_nhr_cfd_127
- Wed, 07.02.: p_nhr_cfd_128
- Thu, 08.02.: p_nhr_cfd_129
- Fri, 09.02.: p_nhr_cfd_130
To use them, add the option --reservation=p_nhr_cfd_1?? (with the number for the respective day) to your sbatch or salloc commands, as in the example below.
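For example, on the Monday the options from above could be combined like this:

# interactive job inside the reservation of the day
salloc --nodes=1 --ntasks=8 --mem=20000 --reservation=p_nhr_cfd_126

# batch job inside the same reservation
sbatch --reservation=p_nhr_cfd_126 job.sh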