Project

Back to overview

FLARE - Computing Infrastructure for LHC Experiments

English title FLARE - Computing Infrastructure for LHC Experiments
Applicant Donegà Mauro
Number 186172
Funding scheme FLARE
Research institution Institut für Teilchen- und Astrophysik ETH Zürich
Institution of higher education ETH Zurich - ETHZ
Main discipline Particle Physics
Start/End 01.04.2019 - 31.03.2021
Approved amount 1'768'620.00

Keywords (5)

Particle Physics; GRID Computing; Large Data handling; Information and Communication Technology; CLOUD and HPC

Lay Summary (German)

Lead
During 2015-2018, the Large Hadron Collider (LHC) at CERN collided protons on protons at a centre-of-mass energy of 13 TeV. This project makes it possible to probe the structure of matter and to recreate conditions like those that prevailed in the early universe. The resulting huge volumes of data (several hundred petabytes) are analysed with the help of a worldwide networked computing system (GRID). Each experiment at the LHC (Switzerland is a member of ATLAS, CMS and LHCb) is a collaboration of institutes from different nations, and each nation contributes its share of the global computing resources. With these pooled resources, a globally and transparently operating computing centre is run. The present project serves to support the computing needs of the Swiss physicists working on these international projects.
Lay summary

The Worldwide LHC Computing Grid (WLCG) collaboration operates the entire LHC computing infrastructure. Switzerland, represented by the Swiss Institute of Particle Physics (CHIPP), is a member of this collaboration and has committed to contributing resources to WLCG commensurate with its size. CHIPP and CSCS (Swiss National Supercomputing Centre) have entered into a formal cooperation in which CSCS operates the Swiss "regional computing centre for the LHC", directly serving ATLAS, CMS and LHCb.

In December 2017, the CHIPP computing steering board decided to modernise the strategy of our computing operation. The entire system was subsequently migrated, step by step, from operation on dedicated hardware clusters to a shared-resource model, in which all of our LHC physics applications run on the large, shared resources of the High Performance Computing (HPC) systems at CSCS. This mode of operation is expected to be more cost-effective and, above all, much easier to scale, so that it can be adapted to the growing future requirements of the LHC. To make full use of past investments, the existing cluster will continue to run for its planned lifetime; afterwards, the components of the old Tier-2 cluster will be handed on to the Tier-3 clusters of our local Swiss institutes, where they will continue to provide service.

 

Last update: 05.04.2019

Lay Summary (English)

Lead
During the period 2015-2018, the Large Hadron Collider (LHC) at CERN collided protons at a centre-of-mass energy of 13 TeV. The vast data sets produced are analyzed using a worldwide computing system. Each experiment at the LHC (ATLAS, CMS and LHCb are the ones with Swiss involvement) is a collaboration of institutes from different nations, and each nation contributes a share of resources to the worldwide computing. This project aims at supporting the overall computing needs of the Swiss institutes working at the LHC.
Lay summary

The Worldwide LHC Computing Grid (WLCG) community maintains the overall LHC computing infrastructure. Switzerland, represented by CHIPP (Swiss Institute of Particle Physics), is a member of this community and is committed to contributing commensurate resources to WLCG for the benefit of the overall community operation. CHIPP and CSCS (Swiss National Supercomputing Centre) established a cooperation in which CSCS operates the major Swiss "regional centre", serving ATLAS, CMS and LHCb.

In December 2017 the CHIPP computing steering board (whose members are all FLARE PIs) decided to evolve our compute resource model. The system has transitioned from operating dedicated hardware (a cluster of standard computers) to a model that uses the shared resources of the large High Performance Computer (HPC) at CSCS. This mode of operation promises to be more cost effective and, in particular, much more scalable as performance requirements increase. At the same time, to safeguard all previous investments, the present hardware is operated to the end of its standard lifetime; after being phased out of operation at the Tier-2, it will be re-used in our own Swiss Tier-3 facilities.

Last update: 05.04.2019

Responsible applicant and co-applicants

Employees

Publications

Publication
ATLAS results
(2021), ATLAS results, in -, -.
CMS results
(2021), CMS results, in -, -.
LHCb results
(2021), LHCb results, in -, -.

Communication with the public

Communication Title Media Place Year
New media (web, blogs, podcasts, news feeds etc.) ATLAS updates International 2019
New media (web, blogs, podcasts, news feeds etc.) CMS updates International 2019
New media (web, blogs, podcasts, news feeds etc.) LHCb updates International 2019

Associated projects

Number Title Start Funding scheme
201466 FLARE - CSCS Tier 2 LHC Computing Infrastructure 01.04.2021 FLARE
204238 Understanding the Flavour Anomalies 01.10.2021 Project funding
178826 Measurement of Higgs Boson Properties and Upgrade of the CMS Pixel Detector for Phase-2 01.04.2018 Project funding
197084 Exploitation and Upgrades of the CMS experiment at the LHC: the next phase 01.11.2020 Project funding
201476 FLARE 2021-2025: Operation, Computing and Upgrades of the CMS Experiment 01.04.2021 FLARE
188442 Search for new physics with high precision tracking detectors 01.10.2019 Project funding
200642 Measurement of Higgs Boson Properties with CMS and Search for Lepton Flavor Violation with Mu3e 01.04.2021 Project funding
173600 FLARE - GRID Infrastructure for LHC Experiments 01.04.2017 FLARE

Abstract

By November 2018, Run-2 of the LHC, at a centre-of-mass energy of 13 TeV, was completed. Overall the machine delivered an outstanding luminosity at the level of ~50 1/fb in 2017 and ~60 1/fb in 2018. In the next two years, during the long shutdown LS2, the experiments plan - apart of course from the detector hardware upgrades - to re-process all previously accumulated data and to produce an adequate amount of Monte Carlo (MC) samples in order to provide a uniform and consistent data set for physics exploitation. The computing requirements are therefore not expected to decrease. WLCG, the worldwide LHC computing GRID community organization, continues to coordinate the main operational tools the LHC experiments use to reconstruct and analyze their data, as well as to perform simulation tasks. Overall, in 2018 a total of ~69 PB of data was acquired by the four experiments, on top of which the simulated events have to be added. The total resources provided by the WLCG community in 2018 reached about 9 MHS06, delivering up to ~250 MHS06-days/month, about 400 PB of disk storage and over 300 PB of tape storage. Data transfer rates were sustained globally at over 40 Gb/s.

The Swiss particle physics community continues its commitment to the LHC physics program. Switzerland, represented by CHIPP, has signed the computing MoU with WLCG. As members of WLCG we are committed to contributing commensurate resources for the benefit of the overall community operation. For this purpose, CHIPP and CSCS established a cooperation in which CSCS functions as the major Swiss "Tier-2 regional centre", serving ATLAS, CMS and LHCb. The resources granted upon our previous requests have been invested in this national "Swiss Tier-2". In December 2017, the CHIPP computing steering board (whose members are all FLARE grant PIs) decided to evolve our compute resource model.
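The throughput and capacity figures quoted above can be cross-checked with a little arithmetic. The sketch below is illustrative only: the 30-day month, and the reading of ~250 MHS06-days/month delivered against ~9 MHS06 of installed capacity as a utilisation figure, are assumptions for the sake of the example, not statements from the proposal.

```python
# Rough sanity check of the 2018 WLCG figures quoted above.
# Assumptions (not from the proposal): a 30-day month, and
# MHS06-days/month delivered vs installed MHS06 read as utilisation.

GB_PER_PB = 1e6

def monthly_volume_pb(rate_gbit_s, days=30):
    """Data volume (PB) moved per month at a sustained transfer rate in Gb/s."""
    rate_gbyte_s = rate_gbit_s / 8.0          # bits -> bytes
    seconds = days * 24 * 3600
    return rate_gbyte_s * seconds / GB_PER_PB

def utilisation(delivered_mhs06_days, capacity_mhs06, days=30):
    """Fraction of installed CPU capacity actually delivered in a month."""
    return delivered_mhs06_days / (capacity_mhs06 * days)

vol = monthly_volume_pb(40)    # 40 Gb/s sustained -> ~13 PB moved per month
use = utilisation(250, 9)      # ~250 MHS06-days/month from ~9 MHS06 -> ~93%
print(f"{vol:.1f} PB/month, {use:.0%} utilisation")
```

So the quoted sustained rate corresponds to roughly 13 PB of data moved per month, and the delivered CPU figure implies the installed capacity was used nearly to saturation.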
We will transition from operating our own dedicated hardware (the Linux cluster PHOENIX) to a model that uses the shared resources of the large HPC systems at CSCS (presently Piz Daint). This mode of operation on the shared HPC resources has been tested in production mode for over a year and has proven to work successfully. It promises to be more cost effective and, in particular, much more scalable as performance requirements increase. It should be noted that, in order to safeguard all previous investments, the present hardware is operated to the end of its standard lifetime; after being phased out of operation at the Tier-2 for reliability reasons, it will be re-used in our own Swiss Tier-3 facilities.

The grant requested in the present proposal covers a two-year period and will provide the resources for 1.4.2019-31.3.2021, as required of an officially acknowledged national Tier-2 centre within WLCG. The computing resource requirements of the LHC experiments are collected by the WLCG management; they are monitored and scrutinized by the CERN Computing Resources Scrutiny Group (CRSG) and presented to the CERN computing resource review board (C-RRB). The figures approved by the C-RRB serve as the basis for the funding requests in the various countries, including Switzerland. Following the C-RRB recommendation we assume a flat budget for planning. This allows us to project our expenditures over the next two years and to present a scheme of average resource growth based on such a constant flat budget. The work is supervised by the CHIPP Computing Board under the auspices of the Swiss Institute of Particle Physics (CHIPP). This board includes representatives from all Swiss particle physics institutes as well as CSCS. All institutes continue to rely on and strongly support the project.
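The flat-budget growth logic can be made concrete with a small sketch: under a constant annual budget, capacity still grows because each year's money buys more performance. The 20%/yr price/performance gain used below is a purely illustrative assumption, not a figure from the C-RRB planning or this proposal.

```python
# Sketch of capacity growth under a flat budget, assuming hardware
# price/performance improves by `gain` per year (20%/yr is illustrative).

def flat_budget_capacity(budget, price_per_unit, years, gain=0.20):
    """Units of compute purchasable each year with the same budget,
    as the price per unit of performance falls year over year."""
    caps = []
    price = price_per_unit
    for _ in range(years):
        caps.append(budget / price)
        price /= (1 + gain)    # the same money buys more next year
    return caps

caps = flat_budget_capacity(budget=1.0, price_per_unit=1.0, years=3)
# caps ≈ [1.0, 1.2, 1.44]: ~44% more capacity after two years, same spend
```

This is the sense in which a "constant flat budget" can still deliver the average resource growth the experiments request.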