Dear cluster users,
In the next few months, the University of Siegen will replace the HoRUS cluster with a new and considerably more capable system. According to the current plans, the system will go live at the end of April (the initial plans aimed for the end of February, but this is no longer possible, mostly due to delays in the delivery of some components).
Overview of the new system
The system consists of the following compute partitions:
- 434 nodes in the regular HPC partition, each equipped with AMD EPYC 7452 CPUs and 256 GB of RAM.
- OpenStack and Kubernetes partitions (8 and 5 nodes, respectively).
- 2 SMP nodes, each with 4 Intel Xeon 5218 CPUs (Cascade Lake) and 1.5 TB of RAM.
- 10 GPU nodes with a total of 24 NVIDIA Tesla V100 GPUs (four nodes with 4 GPUs each, two with 2, and four with 1).
The storage system provides 1 PB of primary hard-drive capacity; in addition, there is an SSD burst buffer (32 TB) and 48 TB of object storage. This flexible concept allows us to adjust the relative sizes of these storage tiers in the future if necessary.
The high-speed interconnect used throughout the cluster is InfiniBand HDR100.
The following information reflects the current state of planning and may still change before the new cluster enters operation.
What will change about access and login?
Like the HoRUS cluster, the new cluster will be available to all members of the University of Siegen. The cluster address will stay the same and will point to the new system by the end of the transition phase at the latest.
What will be the name of the new cluster?
The cluster does not have a name yet. We will have a naming contest for it soon.
Will the other compute resources also be replaced by the new cluster?
No, the HPE Moonshot HTC system and the recently acquired NEC SX-Aurora Tsubasa vector system (more information on this will follow soon) will continue to be available.
How do I get my data from HoRUS to the new cluster?
During the transition phase, you will have enough time to copy whatever data you need.
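For example, once your account on the new system is active, a transfer from your HoRUS home directory could look like the following sketch. The hostname new-cluster.uni-siegen.de and the directory name are only placeholders; the actual address will be announced before the transition phase begins.

    # Copy a project directory from HoRUS to the new cluster.
    # rsync only retransmits changed files, so the command can be re-run safely.
    rsync -avz ~/my_project/ username@new-cluster.uni-siegen.de:~/my_project/

Compared to a plain scp, rsync has the advantage that an interrupted transfer can simply be restarted without copying everything again.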
Which operating system will the new cluster use?
The operating system will be CentOS 8 (HoRUS currently uses CentOS 7). The job scheduler will remain SLURM, just like on HoRUS.
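Your existing SLURM batch scripts should therefore carry over with little change. As an illustration only (the partition name below is a placeholder; the actual partition names on the new system will be announced), a minimal job script might look like this:

    #!/bin/bash
    #SBATCH --job-name=example
    #SBATCH --partition=hpc          # placeholder; actual partition names TBA
    #SBATCH --nodes=1
    #SBATCH --ntasks-per-node=64     # adjust to the actual core count per node
    #SBATCH --time=01:00:00

    srun ./my_program

As on HoRUS, such a script would be submitted with "sbatch jobscript.sh".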
What software will be available on the new cluster?
There will be a collection of commercial and non-commercial software for a variety of purposes, similar to the HoRUS cluster. You will also be able to request the installation of additional software; such requests will be decided on a case-by-case basis.
If you have any other questions, please email us at hpc-support@uni-siegen.de.
Best regards
Your HPC Team