All Danish researchers have the opportunity to use this supercomputer, both within conventional HPC disciplines and in the many new and emerging HPC areas. Abacus 2.0 can perform up to 580 trillion calculation operations per second, opening up extraordinary new research opportunities for all users.
Current users run everything from advanced chemical models and simulations to problems in materials science, biophysics, high-energy physics, engineering science, data medicine and scientific data visualization.
The supercomputer is based on an IBM/Lenovo NeXtScale solution comprising 392 compute nodes (i.e. individual computers) linked by a high-speed network. In total, there are 9,408 Intel E5-2680v3 CPU cores, 52.5 TB of RAM, 140 TB of fast local SSD storage and a shared, high-speed disk system with a full 1,000 TB of storage capacity.
There are plans to expand the Abacus 2.0 supercomputer in 2016.
Each of the 392 compute nodes features two Intel E5-2680v3 CPUs (each with 12 cores), 64 GB of RAM, 200 or 400 GB of fast local SSD storage and a high-speed InfiniBand FDR network card (56 Gbit/s).
In addition, some of the nodes are specialized for software that demands a great deal of memory or computing power:
- 64 fat nodes each have 512 GB of RAM for extremely memory-intensive software.
- 72 GPU nodes each feature two NVIDIA K40 accelerators, each with 2,880 CUDA cores, together providing the node with an additional 2.86 trillion calculation operations per second.
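The 580-trillion-operations figure can be roughly cross-checked from the node specifications above. The sketch below assumes that "calculation operations" means double-precision floating-point operations, a 2.5 GHz base clock for the E5-2680v3, and 16 such operations per core per cycle (AVX2 with FMA); these clock and per-cycle figures are assumptions not stated in the article.

```shell
# Rough cross-check of the ~580 teraflop peak figure.
# CPU part: 392 nodes x 24 cores x 2.5 GHz x 16 FLOPs/cycle (assumed)
cpu_tflops=$(awk 'BEGIN { printf "%.1f", 392 * 24 * 2.5e9 * 16 / 1e12 }')
# GPU part: 72 nodes x 2.86 teraflops per node (from the article)
gpu_tflops=$(awk 'BEGIN { printf "%.1f", 72 * 2.86 }')
total=$(awk -v c="$cpu_tflops" -v g="$gpu_tflops" 'BEGIN { printf "%.0f", c + g }')
echo "CPU peak: $cpu_tflops TF, GPU peak: $gpu_tflops TF, total: $total TF"
```

The sum comes out close to 582 teraflops, consistent with the quoted 580 trillion operations per second.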
All the compute nodes in Abacus 2.0 are linked by a high-speed 56 Gbit/s InfiniBand FDR network from Mellanox. The nodes are connected in a 3D torus topology.
Each compute node features 400 GB of fast local SSD storage, except the GPU nodes, which have 200 GB.
In addition, there is a shared, high-speed disk system based on GPFS (a Lenovo GSS26 solution). Total disk capacity amounts to 1,000 TB, with an aggregated I/O bandwidth of more than 10 GB/s.
Queuing system, software, etc.
Jobs on Abacus 2.0 are managed by a queuing system based on the widely used Slurm Workload Manager. Slurm provides user-friendly access to the system while making efficient use of the available resources.
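A typical Slurm workflow is to write a small batch script and submit it to the queue. The sketch below uses standard Slurm directives; the partition name and program name are hypothetical, and the real values for Abacus 2.0 should be taken from the site's own documentation.

```shell
#!/bin/bash
#SBATCH --job-name=demo          # job name shown in the queue
#SBATCH --nodes=2                # number of compute nodes
#SBATCH --ntasks-per-node=24     # one task per core (24 cores per node)
#SBATCH --time=01:00:00          # wall-clock time limit
#SBATCH --partition=compute      # hypothetical partition name

srun ./my_mpi_program            # launch the tasks under Slurm
```

The script is submitted with `sbatch job.sh`, and the queue can be inspected with `squeue -u $USER`.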
All software is made available through a module system that lets users load the software packages they need and compose the environment best suited to the task at hand.
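With a standard module system (Environment Modules or Lmod), composing such an environment looks like the following sketch; the package names and version numbers are illustrative, not a list of what Abacus 2.0 actually installs.

```shell
# List the software packages available on the system
module avail

# Load a compiler and an MPI library (illustrative names/versions)
module load gcc/4.9.2
module load openmpi/1.8.4

# Show which modules are currently loaded
module list
```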
How can I get access?
You can also contact:
- Wendy Engelberts, HPC Coordinator at SDU, on +45 6550 2678
If you would like to ‘pick the brains’ of an active superuser of the system, contact:
- Professor Claudio Pica, Professor with special assignments at SDU, on +45 6550 2519
BECOME A PILOT PROJECT IF YOU ARE A NEW USER!
The DeiC eScience Competence Center offers to help new HPC users get off to a good start. Researchers may nominate a relevant project for designation as a national eScience pilot project at the DeiC National HPC Center at SDU. As a member of a national pilot project, you receive eight hours of free technical support, and 1,000 hours of calculation time are made available for the project free of charge.
As a member of a national pilot project in the DeiC Competence Center, you will automatically be designated an HPC frontrunner within your field of research. This means that you undertake to share your HPC experience and tools in relevant fora. Moreover, DeiC encourages you to participate actively in the DeiC Competence Center’s knowledge portal.
The DeiC eScience Competence Center seeks to reach out to new disciplines in the continuously evolving HPC landscape, with particular emphasis on projects in the humanities and social sciences. However, other academic areas may also be considered.
All enquiries will be assessed for their eligibility for HPC access as a frontrunner eScience pilot project at the DeiC Competence Center.
Nominations can be made on an ongoing basis via the following website.
If you have any questions, requests or comments, please contact: