Singularity is software for creating and running containers (similar to, e.g., Docker) that is developed specifically with scientific computing and HPC in mind. Singularity version 3.7.1 is installed on the OMNI cluster.

To use Singularity, you need to load the corresponding module (more on modules here):

module load singularity

The documentation of Singularity can be found here. You can display an overview of important Singularity commands with singularity help.
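
For example, after loading the module you can check that Singularity is available and see the list of subcommands:

singularity --version    # should report version 3.7.1 on OMNI
singularity help         # lists the available subcommands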

On this page you will find a short explanation of the container concept and a brief overview of the tools for creating and running Singularity containers on OMNI (including running them in parallel with MPI).

The container concept

Containers are executable files that do not only contain a program but usually also the software and software libraries needed to run that program (the program’s so-called dependencies). Unlike virtual machines, the hardware is not virtualized for containers.

Singularity containers have multiple advantages:

  • Since the software and all its dependencies are present in known and fixed versions, the reproducibility of scientific results is increased.
  • Since a container is a single file, it is easier to share it with other people and launch it on other computers.
  • Singularity supports multiple container formats, including that of Docker. By default, Singularity’s own format .sif is used.
  • Unlike Docker, no root access is needed for running containers (for creating them on the cluster, see the note below).

Creating containers

You have multiple ways of obtaining or creating Singularity containers. All creation methods use the singularity build command, which is explained here in detail.

  • The first and simplest way is downloading a container from the container library, the Singularity Hub or another repository. Singularity also has an option to convert Docker containers to the Singularity format (example commands for all three ways are sketched after this list).
  • The second way is opening an interactive shell inside a writable (sandbox) container and installing the software in it manually.
  • The third way is writing a definition file for the container and creating the container using that file.
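
The following commands sketch these three ways; all container, image and file names are placeholders. Note that installing software inside a container and building from a definition file typically require root privileges (see the caution below).

# Way 1: download an existing container or convert a Docker image
singularity pull library://<user>/<collection>/<container>
singularity pull docker://ubuntu:20.04

# Way 2: create a writable sandbox directory and install software in it interactively
singularity build --sandbox mysandbox/ docker://ubuntu:20.04
singularity shell --writable mysandbox/

# Way 3: build a container from a definition file
singularity build mycontainer.sif mycontainer.def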

You can exchange files and directories between the container and the host system (i.e. the computer on which the container runs) in multiple ways. The most important ones are so-called Bind Paths and Mounts, which are both explained here.
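
For example, a directory on the host can be made available inside the container with the --bind (or -B) option; the paths and names below are placeholders:

# make /scratch/mydata on the host visible as /data inside the container
singularity exec --bind /scratch/mydata:/data mycontainer.sif ls /data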

Caution: User namespaces have been disabled on the cluster for security reasons. Some of the methods mentioned above therefore only work to a limited extent; this affects the --fakeroot option in particular. There are two possible workarounds:

  • You can build the container on a Linux PC (or Linux virtual machine) on which you have root privileges or the --fakeroot option is working.
  • You can use the Online Build Service of Sylabs, the company that develops Singularity.

In both cases, you need to create a definition file. After building the container, you can simply copy it to the cluster like any other file and use it there.
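
As a sketch, assuming a definition file mycontainer.def (all file names and the target address are placeholders):

# Workaround 1: build on a Linux machine where you have root privileges
sudo singularity build mycontainer.sif mycontainer.def

# Workaround 2: build remotely with the Sylabs build service
# (requires logging in once with an access token for the Sylabs Cloud)
singularity remote login
singularity build --remote mycontainer.sif mycontainer.def

# afterwards, copy the finished container to the cluster, e.g. with scp
scp mycontainer.sif <username>@<cluster-address>:~/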

Running containers

You can run a container like any other executable file:

./<containername>.sif

or with the following command:

singularity run <containername>

This will execute the so-called runscript of the container. The runscript determines what exactly happens when the container is executed. When you create a container yourself, you need to create the runscript yourself as well, typically as part of the definition file. You can find details about runscripts in definition files in the Singularity documentation here.
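
A minimal definition file with a runscript could look like the following sketch; the base image and the commands are only examples:

Bootstrap: docker
From: ubuntu:20.04

%post
    # executed once while the container is built, e.g. to install software
    apt-get update && apt-get install -y python3

%runscript
    # executed every time the container is run
    echo "Container started with arguments: $@"
    python3 --version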

Caution: Like any other program, this container will run on the login node. You should never run compute-intensive programs on the login nodes, since you are sharing them with all other users. Normally you should create a jobscript that includes the line singularity run ... (don’t forget to load the singularity module), as sketched below.
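
A minimal jobscript could look like the following sketch (the job name, resource requests and container name are placeholders that you need to adapt):

#!/bin/bash
#SBATCH --job-name=singularity-job
#SBATCH --ntasks=1
#SBATCH --time=00:10:00

module load singularity
singularity run <containername>.sif

You can then submit the jobscript with sbatch as usual.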

Parallel execution

You can launch MPI applications inside Singularity containers, and you can also set up containers such that multiple containers communicate via MPI. The Singularity documentation explains two different ways of doing this here; both methods are compatible with the OpenMPI installation on OMNI as well as with SLURM. The main difference between the two methods is that in one case only the host system’s MPI is used, while the other one (called the “hybrid model” in Singularity) needs an MPI installation both inside and outside the container.
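
For example, with the hybrid model an MPI program installed inside the container is typically started through the host's MPI launcher or through SLURM inside a jobscript; the container name, program path and number of ranks below are placeholders:

module load singularity
# start 4 ranks via the host's OpenMPI
mpirun -np 4 singularity exec mycontainer.sif /opt/mpi_program
# or, inside a jobscript, via SLURM
srun singularity exec mycontainer.sif /opt/mpi_program

Depending on the setup, you may also need to load the cluster's MPI module before launching.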

Last updated at 15:15 on 8 February 2021 by Gerd Pokorra