Visualization
For visualization we work with Paraview. You can install it locally to process the data on your own computer; however, this requires copying the resulting data from the cluster. On production systems, the visualization usually takes place directly on the system itself. Unfortunately, this is currently only possible to a very limited extent on the training cluster. To perform the visualization entirely on the HLRS cluster we need TurboVNC. Details on the production systems can also be found in the HLRS Wiki. See the section below on visualization with VNC.
An alternative approach that avoids copying data to the local system is to run a local Paraview installation and connect it to a Paraview server (pvserver) on the cluster without a VNC environment.
Thus there are three options for the visualization:
- Copy the data to your system and visualize completely locally.
- Run a local Paraview client and connect it to pvserver on the cluster.
- Use a VNC setup on your machine to make use of the visualization capabilities on the cluster.
These three options are described below.
Local visualization
For local visualization, the files must be copied to the local computer via an SSH connection. The visualization files can then be opened in Paraview.
Another option is sshfs, which mounts remote directories into the local file system via SSH.
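For example, the data could be fetched with scp, or the remote directory could be mounted with sshfs; the remote path below is only a placeholder, adjust it to wherever your results are stored:
$ scp -r <user>@training.hlrs.de:<path/to/results> ./results
$ mkdir -p ~/cluster
$ sshfs <user>@training.hlrs.de:<path/to/results> ~/cluster
The sshfs mount can later be released again with fusermount -u ~/cluster.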
Visualization with VNC: One-off preparation
To prepare for using VNC, we first need a password for the VNC sessions and a .vnc directory in your home directory:
$ mkdir $HOME/.vnc
$ vncpasswd
vncpasswd asks for the password you want to use for your VNC sessions and a confirmation of the same. You can also assign a password for read-only access, so that others could join your session as viewers. We don't need that here.
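The prompt sequence looks roughly like this (the exact wording depends on the VNC implementation; answer the view-only question with n):
$ vncpasswd
Password:
Verify:
Would you like to enter a view-only password (y/n)? n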
Note on the use of PuTTY for SSH tunnels
Below we make use of SSH tunnels via the ssh command. If you are using PuTTY, use plink.exe instead.
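For example, the tunnel command shown further below would then look like this; the arguments stay the same, only the executable changes:
plink.exe -N -L <port>:n002701:<port> <user>@training.hlrs.de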
Start a visualization session
To start a VNC session for visualization, run:
module load vis/VirtualGL
vis_via_vnc.sh -msmp 01:00:00
The script then tells you what you need to run on your local system to connect to the VNC session.
It should be something along these lines:
Connect to the VNC session via the vncviewer:
vncviewer -via username@training.hlrs.de n002701:1
If your vncviewer doesn't support -via, we need to create a separate SSH tunnel for the connection. The necessary command is provided by the vis_via_vnc.sh script; however, the node name gets lost, so you need to use the ports provided by the script and put the visualization node in between, like this:
ssh -N -L <port>:n002701:<port> <user>@training.hlrs.de
With this tunnel, it should then be possible to start vncviewer with:
vncviewer localhost:<port>
Then open a terminal in this session and do the following:
module load system/amdgpu
module load vis/VirtualGL
module load vis/paraview/5.11.0-MPI-Linux
vglrun paraview
Note: ignore warnings about zink printed by paraview.
Alternative with local Paraview and Paraview server
Instead of performing the visualization in a VNC session on the cluster, we can also use a local Paraview client to connect to a pvserver on the visualization node.
For this we need a Paraview installation in version 5.11 on the local computer. The version is important: if it doesn't match, the server will refuse the connection. This approach tends to be slower than using the VNC session, but takes fewer steps.
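If in doubt, the locally installed client version can be checked from the command line:
paraview --version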
First we start a job that provides a pvserver:
qsub -I -l select=1:node_type=skl:ncpus=1:mem=16gb,walltime=02:00:00 -q smp
In the interactive shell we then run the pvserver and tell it to listen on a dedicated port XXXXX; use the number from your username here so that your port is unique.
module load vis/paraview/5.11.0-MPI-Linux-osmesa
pvserver --server-port=XXXXX
The pvserver then prints the information needed to connect to it, like this:
Connection URL: cs://<nodename>:XXXXX
Accepting connection(s): <nodename>:XXXXX
In order to connect a local Paraview to the pvserver on that node, an SSH tunnel needs to be created on the local computer according to the following scheme:
ssh -N -L 11111:<nodename>:XXXXX <user>@training.hlrs.de
This shell must remain open in the background while Paraview is running.
Then we can start Paraview on our own computer and connect to the pvserver through the tunnel. On the command line this works with:
paraview --server-url=cs://localhost:11111
pvserver on the frontend
Instead of running the pvserver on a compute node, it is also possible to run it on the frontend itself. In this case, however, no computing capabilities of the cluster are used, and the rendering has to be performed by the client. We can then start the pvserver directly in the SSH tunneling command, though for that you need to pick a unique, not yet used port number; please use your username number for that. On your local system you run:
ssh -L 11111:localhost:XXXXX <user>@training.hlrs.de /opt/hlrs/non-spack/vis/paraview/ParaView-5.11.0-osmesa-MPI-Linux-Python3.9-x86_64/bin/pvserver --server-port=XXXXX
Then it is possible to connect from your local machine to the pvserver instance with:
paraview --server-url=cs://localhost:11111
(Or configure this in the server connection dialog of the Paraview interface.)
It is also possible to have Paraview itself start the pvserver by utilizing a reverse connection through a reverse tunnel. For this, configure a "Client / Server (reverse connection)" server, specify the port XXXXX as above, and change the Startup Type to Command with the following command:
ssh -R XXXXX:localhost:XXXXX <user>@training.hlrs.de /opt/hlrs/non-spack/vis/paraview/ParaView-5.11.0-osmesa-MPI-Linux-Python3.9-x86_64/bin/pvserver --reverse-connection --client-host=localhost --server-port=XXXXX
This only works properly if Paraview can execute the SSH command without needing to enter a passphrase.
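For reference, such a configuration ends up in Paraview's server configuration file (servers.pvsc) and should look roughly like the following sketch; the server name is arbitrary and XXXXX again stands for your port number:
<Servers>
  <Server name="hlrs-reverse" resource="csrev://localhost:XXXXX">
    <CommandStartup>
      <Command exec="ssh" timeout="0" delay="5">
        <Arguments>
          <Argument value="-R"/>
          <Argument value="XXXXX:localhost:XXXXX"/>
          <Argument value="<user>@training.hlrs.de"/>
          <Argument value="/opt/hlrs/non-spack/vis/paraview/ParaView-5.11.0-osmesa-MPI-Linux-Python3.9-x86_64/bin/pvserver"/>
          <Argument value="--reverse-connection"/>
          <Argument value="--client-host=localhost"/>
          <Argument value="--server-port=XXXXX"/>
        </Arguments>
      </Command>
    </CommandStartup>
  </Server>
</Servers>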
If you have trouble with OpenGL visualization, you may need to add the "--mesa" option when starting paraview and pvserver.