Virtual cluster 
High-performance computing system specifications

The University cluster is a set of high-performance processors and software used to run complex scientific calculations for research projects that require significant computing power and data storage.

The virtual cluster has a total of 15 nodes, one of which is the front-end node, with the following characteristics:

Node 1 (front-end)
  • 8 AMD EPYC 7413 cores @ 2.65 GHz
  • 32 GB RAM
  • 300 GB local storage
  • 30 TB NAS storage for home directories
  • Ubuntu 22.04 LTS operating system
Nodes 2-3
  • 20 Intel Xeon Gold 6148 cores @ 2.40 GHz
  • 200 GB RAM
  • 300 GB local storage
  • 30 TB NAS storage for home directories
  • Ubuntu 22.04 LTS operating system
Nodes 4-15
  • 20 AMD EPYC 7413 cores @ 2.65 GHz
  • 200 GB RAM
  • 300 GB local storage
  • 30 TB NAS storage for home directories
  • Ubuntu 22.04 LTS operating system

In addition, a number of GPUs are connected to specific nodes (see the example after this list for checking which GPU a job has been assigned):

  • 2 NVIDIA Tesla GPUs connected to node 2
  • 1 NVIDIA A40 GPU connected to node 3
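GPU jobs must request a device explicitly from Slurm (for example with --gres=gpu:1 in the batch script; see the scheduling section below). Once a job is running, a quick way to confirm which device was granted is the minimal sketch below. It assumes that the NVIDIA driver's nvidia-smi utility is on the PATH of nodes 2 and 3, and that the cluster's gres configuration makes Slurm export CUDA_VISIBLE_DEVICES for GPU jobs; treat both as assumptions to verify.

    #!/usr/bin/env python3
    """Minimal sketch: report the GPU(s) granted to the current Slurm job."""
    import os
    import subprocess

    # Slurm usually restricts GPU jobs to their granted devices via this
    # variable (assumption: depends on the cluster's gres configuration).
    print("CUDA_VISIBLE_DEVICES:", os.environ.get("CUDA_VISIBLE_DEVICES", "<not set>"))

    # Ask the NVIDIA driver which devices are visible to the job.
    subprocess.run(
        ["nvidia-smi", "--query-gpu=name,memory.total", "--format=csv"],
        check=True,
    )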

The cluster includes, among others, the following software and tools:

  • Matlab
  • Stata
  • R
  • Gromacs 2024
  • Python
  • NVIDIA CUDA Toolkit
  • other standard tools already in the OS

To ensure that all active projects have access to resources, each job may run for a maximum of 24 hours. Jobs must be scheduled via Slurm; the exceptions are Matlab and Stata tasks, which run on software that is not handled by the job manager.
There is a dedicated Slurm queue for testing and debugging tasks, limited to 1 hour per job.
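For illustration, a batch job can be submitted with a script like the one below. This is a minimal sketch: the resource figures are arbitrary, and the name of the test/debug partition ("debug") is an assumption to be checked with sinfo on the cluster. Slurm reads #SBATCH directives from comment lines regardless of the interpreter, so the example uses Python directly; a plain bash script works in the same way.

    #!/usr/bin/env python3
    #SBATCH --job-name=example
    #SBATCH --time=24:00:00        # hard limit: no job may exceed 24 hours
    #SBATCH --nodes=1
    #SBATCH --ntasks=1
    #SBATCH --cpus-per-task=4      # illustrative resource request
    #SBATCH --mem=16G              # illustrative resource request
    # To use a GPU on node 2 or 3, request it explicitly, e.g.:
    ##SBATCH --gres=gpu:1
    # For short tests, use the dedicated test/debug queue (1-hour limit).
    # The partition name is an assumption; check the real one with sinfo:
    ##SBATCH --partition=debug
    ##SBATCH --time=01:00:00

    import platform

    print("Job running on", platform.node())

Save it, for example, as job.py and submit it with "sbatch job.py"; "squeue -u $USER" shows its state in the queue.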

How to access

To gain access, the Principal Investigator of a research project must send an online request (“ticket”) to ASIT, in accordance with the guidelines on ICT support for research.

Once the account has been enabled, the PI connects via the SSH protocol, using their account as the username and vhcp01.vhpc.unive.it (157.138.18.181) as the server, over the VPN (Research profile).
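For illustration only, the connection is an ordinary SSH login to the front-end node once the Research-profile VPN is up. The sketch below simply wraps the system ssh client from Python; the username pi_account is a placeholder for the account enabled by ASIT.

    #!/usr/bin/env python3
    """Minimal sketch: open an interactive SSH session to the cluster front-end."""
    import subprocess

    HOST = "vhcp01.vhpc.unive.it"   # front-end node (157.138.18.181)
    USER = "pi_account"             # placeholder: replace with your enabled account

    # Equivalent to typing: ssh pi_account@vhcp01.vhpc.unive.it
    # The Research-profile VPN must already be connected.
    subprocess.run(["ssh", f"{USER}@{HOST}"], check=True)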

Last update: 28/08/2024