Hardware

(Updated Nov 2016)

The HPC Infrastructure 

Cineca is currently one of the Large Scale Facilities in Europe and a PRACE Tier-0 hosting site.

  • MARCONI: It is the new Tier-0 system that replaced FERMI in July 2016. It is based on the LENOVO NeXtScale platform and the next generation of the Intel Xeon Phi product family. It is being completed gradually over about 12 months: the first partition (Marconi-A1), with Intel Broadwell processors, has been in production since July 1, 2016; the second partition (Marconi-A2), with Intel Knights Landing processors, enters production in January 2017.
    Marconi appears in the Top500 list of the most powerful supercomputers: Marconi-A1 ranked 46th in June 2016 and Marconi-A2 ranked 12th in November 2016.
  • GALILEO: It is the second, Tier-1, system: an IBM NeXtScale cluster accelerated with Intel Xeon Phi coprocessors. In full production since February 2015, it replaces the old PLX system. Galileo is named after the Italian physicist and philosopher who played a major role in the scientific revolution during the Renaissance.
  • PICO: Last but not least, a Big Data infrastructure acquired in November 2014 and devoted to "Big Analytics". It is named after the Italian Renaissance philosopher famous for his prodigious memory.
MARCONI-A1
  • CPU: Intel Broadwell, 2 x Intel Xeon E5-2697 v4 @ 2.3 GHz (18 cores each)
  • Total cores / Total nodes: 54,432 / 1,512
  • Memory per node: 128 GB
  • Accelerator: -

MARCONI-A2
  • CPU: Intel Knights Landing, 1 x Intel Xeon Phi 7250 @ 1.4 GHz (68 cores)
  • Total cores / Total nodes: 244,800 / 3,600
  • Memory per node: 96 GB
  • Accelerator: -

GALILEO
  • CPU: Intel Haswell, 2 x Intel Xeon 2630 v3 @ 2.4 GHz (8 cores each)
  • Total cores / Total nodes: 8,384 / 524
  • Memory per node: 128 GB
  • Accelerator: 768 Intel Xeon Phi 7120P
  • Notes: 8 nodes devoted to visualization

PICO
  • CPU: 2 x Intel Xeon 2670 v2 @ 2.5 GHz (10 cores each)
  • Total cores / Total nodes: 1,480 / 74
  • Memory per node: 128 GB
  • Accelerator: -
  • Notes: 4 nodes devoted to visualization
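The core totals above are simply nodes times sockets per node times cores per socket. A minimal Python sketch of that arithmetic, restating only the figures from the list above (the dictionary layout is illustrative):

    # Total cores = nodes x sockets per node x cores per socket,
    # using the figures from the hardware list above.
    systems = {
        "MARCONI-A1": (1512, 2, 18),  # 2 x Xeon E5-2697 v4, 18 cores each
        "MARCONI-A2": (3600, 1, 68),  # 1 x Xeon Phi 7250, 68 cores
        "GALILEO":    (524,  2, 8),   # 2 x Xeon 2630 v3, 8 cores each
        "PICO":       (74,   2, 10),  # 2 x Xeon 2670 v2, 10 cores each
    }

    for name, (nodes, sockets, cores) in systems.items():
        total = nodes * sockets * cores
        print(f"{name}: {nodes} nodes x {sockets} x {cores} cores = {total:,} cores")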


The Data Storage Facility

  • Scratch: each system has its own local scratch area (pointed to by the $CINECA_SCRATCH environment variable)
  • Work: a working storage area is mounted on the three systems (pointed to by the $WORK environment variable)
  • DRes: a shared storage area is mounted on the login nodes of all machines and on all of Pico's compute nodes (pointed to by the $DRES environment variable; see the sketch after this list)
  • Tape: a tape library (12 PB, expandable to 16 PB) is connected to the DRES storage area as a multi-level archive (via LTFS)
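A minimal Python sketch, assuming the environment variables listed above are set by the system profiles, of how a user script might resolve these storage areas (the script is illustrative, not an official Cineca tool):

    import os
    from pathlib import Path

    # Resolve the Cineca storage areas from the environment variables listed above.
    # Which variables are defined depends on the system you are logged in to.
    for var in ("CINECA_SCRATCH", "WORK", "DRES"):
        path = os.environ.get(var)
        if path is None:
            print(f"${var} is not defined on this node")
        else:
            print(f"${var} -> {path} (exists: {Path(path).is_dir()})")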
 
            Scratch (local)         Work (local)            DRes (shared)   Tape (shared)
MARCONI     tbd (10 PB in total)    tbd (10 PB in total)    4 PB            12 PB
GALILEO     300 TB                  1500 TB                 4 PB            12 PB
PICO        100 TB                  500 TB                  4 PB            12 PB

Old HPC Infrastructure

FERMI
  • CPU: PowerA2 @ 1.6 GHz, 16 cores each
  • Total cores / Total nodes: 163,840 / 10,240
  • Memory per node: 16 GB
  • Accelerator: -