HTC System HPE Moonshot

Access and Login

The Moonshot nodes are largely integrated into the HoRUS cluster. If you have HoRUS access (how to obtain access is described here), you can also log into the Moonshot nodes.

The two login nodes are named htc001 and htc002. Just like on HoRUS, there is an alias htc that will bring you onto one of the login nodes. You should use this alias whenever possible, since the load balancer will direct you to the less busy login node. Connections are made via ssh just like on HoRUS, by appending .zimt.uni-siegen.de to the alias or the node name.
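For example, a login via the alias might look like the following (username is a placeholder for your own ZIMT account name):

```shell
# Connect to one of the HTC login nodes via the load-balanced alias.
# "username" is a placeholder for your own ZIMT account name.
ssh username@htc.zimt.uni-siegen.de

# Alternatively, connect to a specific login node directly:
ssh username@htc001.zimt.uni-siegen.de
```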

Caution: unlike HoRUS, the HTC system can only be reached from within the Uni Siegen network (or via VPN).

The remaining nodes htc003 through htc007 are compute nodes and are not directly accessible from the outside.

Another difference is that the Moonshot nodes are not connected to the HoRUS cluster's InfiniBand interconnect. File I/O is therefore not as fast, even if you use workspaces.

Installed Software

In principle, all modules that are installed on HoRUS are also available on the HTC nodes. However, due to the different CPU architecture, a module is not guaranteed to work just because it is available.

Caution: ZIMT has not tested all HoRUS modules on the HTC nodes and you should always conduct your own tests with a given module before you use it productively.

Running computations

You can run compute jobs on the nodes htc003 through htc007 in the same way as on HoRUS: by submitting SLURM jobs to the htc queue. Job and node status in the htc queue can be monitored as usual with squeue and sinfo, both from HoRUS and from the HTC nodes. The individual SLURM commands are described here. The default and maximum walltime in the htc queue are both set to 24 hours.
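For example, both commands accept a partition filter, so you can restrict their output to the htc queue (username is a placeholder for your own account name):

```shell
# Show the state of all nodes in the htc partition
sinfo -p htc

# Show all jobs currently queued or running in the htc partition
squeue -p htc

# Show only your own jobs in the htc partition
# ("username" is a placeholder for your account name)
squeue -p htc -u username
```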

Caution: if you do not specify a queue (a queue is called a partition in SLURM terminology), the job will be put into the default queue (defq) and will therefore run on HoRUS rather than on the HTC nodes. You have to include the following line:

#SBATCH --partition=htc

in your job script (or specify the htc partition when calling sbatch) if you want your job to run on the HTC nodes.
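A minimal job script might look like the following sketch. The job name, walltime, and the program being run (./my_program) are placeholders; the walltime must not exceed the 24-hour maximum of the htc queue:

```shell
#!/bin/bash
#SBATCH --partition=htc        # run on the HTC nodes, not on HoRUS (defq)
#SBATCH --job-name=htc-test    # placeholder job name
#SBATCH --time=01:00:00        # requested walltime; at most 24:00:00 in the htc queue
#SBATCH --nodes=1              # number of nodes

# Placeholder for the actual computation
./my_program
```

Submit the script with `sbatch jobscript.sh`; alternatively, the partition can be set on the command line with `sbatch --partition=htc jobscript.sh`.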

Can HTC jobs be queued from HoRUS and vice versa?

Partially, yes. You can submit jobs in either direction as long as the difference in CPU architecture does not matter for your program. In particular, ZIMT does not currently support cross-compiling.

For example, it should be easily possible to queue a Matlab job from HoRUS into the HTC queue, because Matlab is installed on both. However, if you want to compile a C or Fortran program, this has to happen on the HTC front end (htc001 or htc002).
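As a sketch, compiling on the HTC front end could look like this. The module name gcc is an assumption (the modules mirror those on HoRUS, but check the exact name with module avail), and my_program.c is a placeholder source file:

```shell
# Run on htc001 or htc002, so that the binary matches the HTC CPU architecture.
module load gcc        # module name is an assumption; verify with "module avail"
gcc -O2 -o my_program my_program.c
```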

Updated at 10:14 on 12 November 2019 by Jan Steiner