DeiC National HPC Center, SDU

The SDU eScience Center houses ABACUS 2.0 (DeiC National HPC Center, SDU), a state-of-the-art supercomputer optimized to solve a wide range of computational problems across all sciences.

All Danish researchers can use the supercomputer, both within conventional HPC disciplines and in the many new and emerging HPC areas. ABACUS 2.0 delivers up to 580 trillion floating-point operations per second (580 teraflops), which opens up new and extraordinary research opportunities for all users.

Current users run everything from advanced chemical models and simulations to problems in materials science, biophysics, high-energy physics, engineering science, data medicine and scientific data visualization.

The supercomputer is based on an IBM/Lenovo NeXtScale solution comprising 392 calculation nodes (i.e. individual servers) linked by a high-speed network. In total, the system offers 9,408 Intel E5-2680v3 CPU cores, 52.5 TB of RAM, 140 TB of local, extremely fast SSD disk space, and a shared, high-speed disk system with a full 1,000 TB of storage capacity.

The ABACUS 2.0 supercomputer was inaugurated on March 24, 2015, and extended with a further 192 slim nodes in May 2016.

Calculation nodes

Each of the 392 calculation nodes features two Intel E5-2680v3 CPUs (each with 12 cores), 64 GB of RAM, 200 or 400 GB of local, extremely fast SSD disk space, and a high-speed InfiniBand FDR network adapter (56 Gbit/s).
In addition, some of the nodes are specialized for software that demands a great deal of memory or computing power (see the resource-request sketch after the list below):

  • 64 fat nodes each have 512 GB of RAM for extremely memory-intensive software.
  • 72 GPU nodes each feature two NVIDIA K40 accelerators, each with 2,880 CUDA cores, which together provide the node with an additional 2.86 trillion floating-point operations per second.
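
For illustration, the sketch below shows how a job might request one of these specialized node types through the Slurm queuing system described further down this page. The partition name and the GPU request syntax are assumptions made for the example; the actual partition and resource names on ABACUS 2.0 may differ.

    #!/bin/bash
    # Hypothetical batch script requesting a single GPU node.
    # The partition name and GPU request are assumptions, not the actual ABACUS 2.0 names.
    #SBATCH --job-name=gpu-example
    #SBATCH --nodes=1
    #SBATCH --ntasks-per-node=24
    #SBATCH --partition=gpu
    #SBATCH --gres=gpu:2
    #SBATCH --time=01:00:00

    # Placeholder for the user's own GPU-enabled program
    srun ./my_gpu_application

A fat node would be requested in much the same way, for example with a larger memory request (and, if the system defines one, a dedicated fat-node partition) instead of the GPU request.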

Network

All the calculation nodes in ABACUS 2.0 are linked by a high-speed 56 Gbit/s InfiniBand FDR network from Mellanox. The nodes are connected in a 3D torus topology.

Storage

Each calculation node features 400 GB of local, extremely fast SSD disk space, although the GPU nodes have only 200 GB.
In addition, there is a shared, high-speed disk system based on GPFS (a Lenovo GSS26 solution). The total capacity of the shared system amounts to 1,000 TB, with an aggregated I/O bandwidth of more than 10 GB/s.

Queuing system, software, etc.

Jobs on ABACUS 2.0 are managed by a queuing system based on the widely used Slurm Workload Manager. Slurm provides user-friendly access to the system while making efficient use of the available resources.
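
As a minimal sketch of how jobs are typically submitted to Slurm, the batch script below asks for two nodes and runs a program with one task per core. The job name, resource values and executable are illustrative only and are not taken from the ABACUS 2.0 documentation.

    #!/bin/bash
    # Illustrative Slurm batch script; all values are examples only.
    #SBATCH --job-name=example-job
    #SBATCH --nodes=2
    #SBATCH --ntasks-per-node=24
    #SBATCH --time=02:00:00
    #SBATCH --output=example-%j.out

    # Start the program on the allocated cores (placeholder executable)
    srun ./my_application

The script is submitted with sbatch, and the state of your own jobs in the queue can be checked with squeue:

    sbatch example-job.sh
    squeue -u $USER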

All software is made available to users through a module system that lets them load software packages as required and put together the software environment that best suits the tasks at hand.
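
As an illustration of how such a module system is typically used, the commands below list the available packages and load a compiler and an MPI library; the module names are assumptions and will differ from the packages actually installed on ABACUS 2.0.

    module avail            # list the software packages available on the system
    module load gcc         # load a compiler module (name is an example only)
    module load openmpi     # load an MPI module (name is an example only)
    module list             # show the modules currently loaded
    module purge            # unload all modules again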

How can I get access?


Become a pilot project if you are a new user!

The DeiC eScience Competence Center offers researchers help getting started with high performance computing (HPC)/supercomputing. Researchers are therefore invited to submit a statement of interest to become a national eScience pilot project. The scope of a national pilot project on the supercomputer ABACUS 2.0 (DeiC National HPC Center, SDU) is estimated from the calculation hours and technical support time needed before a potential cooperation is initiated between the parties involved.

As a national pilot project in the DeiC eScience Competence Center, you will automatically be designated the HPC frontrunner within your research field. This means that you commit to sharing your HPC experiences and tools in relevant forums. DeiC will also encourage active participation in the DeiC eScience Competence Center's knowledge portal.

The DeiC eScience Competence Center aims to bring new research fields into the continuously expanding HPC environment across all disciplines.

Statements of interest from relevant research projects can be submitted on an ongoing basis using this form, which can be completed in English or Danish. The completed form must be sent to escience@sdu.dk. The SDU eScience Center is available to answer questions.


If you have any questions, requests or comments, please contact: