Project


Machine Learning for Upgrades in Large Data Volumes and High-Event Rate for Handling Pile-up Noise and Data Fusion in Experimental High-Energy Physics

English title Machine Learning for Upgrades in Large Data Volumes and High-Event Rate for Handling Pile-up Noise and Data Fusion in Experimental High-Energy Physics
Applicant Beck Hans-Peter
Number 186265
Funding scheme
Research institution Laboratorium für Hochenergiephysik Albert Einstein Center Universität Bern
Institution of higher education University of Berne - BE
Main discipline Particle Physics
Start/End 01.12.2020 - 30.11.2023
Approved amount 210'243.00

All Disciplines (2)

Discipline
Particle Physics
Information Technology

Keywords (11)

Experimental High Energy Physics; Particle Physics; Standard Model; Beyond Standard Model; Data Acquisition Systems; Online Filtering; Big Data; Machine Learning; Deep Learning; Data Fusion; Pileup

Lay Summary (translated from German)

Lead
Machine learning has recently been applied successfully in a variety of fields. With continuous advances, in particular in deep learning techniques, the range of applications in which machine learning offers promising approaches to solving difficult problems keeps growing. High-energy physics experiments at particle accelerators such as the Large Hadron Collider at CERN collect a huge amount of raw data that must be processed, filtered and analysed in several steps. Pattern recognition and object identification in a harsh and noisy environment are areas of application in which machine learning appears ideally suited, and extremely attractive, for outperforming classical approaches. Machine learning thus promises to help find answers to questions about the fundamental building blocks of the Universe.
Lay summary

The ATLAS experiment at CERN's Large Hadron Collider (LHC) records proton-proton collision data under increasingly harsh conditions. The LHC is being upgraded in several stages to reach its final collision energy of 14 TeV and to increase its luminosity, i.e. the number of proton-proton collisions per second. The High-Luminosity LHC (HL-LHC) will typically produce 140 proton-proton collisions every 25 ns, at each crossing of proton bunches; so far this number has been about 40. When more than one collision takes place as two proton bunches pass through each other, the resulting signals pile up and overlap in the detector, which makes it much harder to identify the signatures that particles leave behind.

For ATLAS to maintain its efficient operation under these tightened conditions, the online filtering (trigger) of collision events must stay within the targeted signal efficiency, the processing latency and the available bandwidth. To this end, the online analysis of detector information must be made more efficient.

Machine learning is used for energy reconstruction and online filtering based on signals in the liquid-argon calorimeter, combined with information from the tracking system. This new approach is expected to lead to a significant improvement in trigger efficiency and thus to contribute to the search for extremely rare events, yielding new insights into the Standard Model and also enabling searches for new physics beyond the Standard Model.

Last update: 17.01.2020

Responsible applicant and co-applicants

Applicants abroad

Employees

Name Institute

Collaboration

Group / person: ATLAS Collaboration
Country: Switzerland (Europe)
Types of collaboration:
- in-depth/constructive exchanges on approaches, methods or results
- Publication
- Research Infrastructure

Scientific events

Active participation

Title: Seminar at the Albert Einstein Center for Fundamental Physics
Type of contribution: Individual talk
Title of contribution: Artificial Intelligence Strategies for Measuring Energy Deposits in Calorimeter Cells at Particle Colliders
Date: 15.11.2021
Place: Bern, Switzerland
Persons involved: Peralva Bernardo Sotto-Maior


Awards

Title Year
LINK Fellowship of the Faculty of Science of the University of Bern 2021

Associated projects

Number Title Start Funding scheme
169015 Exploring the high energy frontier and searches for new physics with the ATLAS detector and its upgrades 01.10.2016 Project funding
173598 FLARE: Maintenance & Operation for the LHC Experiments 2017-2020 01.04.2017 FLARE

Abstract

Machine learning, and in particular neural network modelling, has been applied successfully in different areas for some time. Links between computational intelligence models and statistics have been established over the years and have paved the way for the engineering, computing and mathematics communities to work together, which has favoured the development of ever more complex applications. Today, owing to advances in deep learning techniques, machine learning has also become very attractive for the very large experiments that address questions concerning the basic constituents of the Universe. CERN operates the most powerful particle collider in the world, the LHC, and this proposal refers to its largest experiment, ATLAS, which provides a complex environment for the development of cutting-edge scientific methods that have to deal with big data, rare-event detection, high event rates, high-dimensional data representations, and data fusion.

The ATLAS experiment is under continuous improvement to maintain its efficient operation under more stringent conditions, as the LHC is being upgraded to the HL-LHC, its high-luminosity version. The ATLAS upgrade programme is staged in two phases, to be operational in 2021-2023 (Phase-I) and 2026-2038 (Phase-II). Severe signal pile-up at the HL-LHC adds a further noise component and will require ATLAS readout devices to be either replaced or improved in order to cope with such a harsh environment. The upgrade programme is particularly intense for the ATLAS calorimeter and inner detector systems, which play important roles in event reconstruction through their measurements of the tracks, momenta and energies of the particles, or jets of particles, created in every collision. These detector systems furnish fast signals for online filtering (ATLAS produces a data volume of 70 TB/s, and events of interest are relatively rare). The calorimeter comprises two sections (electromagnetic and hadronic) and provides more than two hundred thousand readout channels split into seven layers of instrumentation.

Mitigating the pile-up noise component improves event reconstruction in the stages that process higher-level information; residuals, however, are carried over into these stages, for instance into online-filtering hypothesis testing based on calorimeter information. This is the case for electrons, key objects in ATLAS, for which online filtering relies solely on calorimeter information at the early stage. Because of pile-up deposits in the detector systems, electron signatures in ATLAS are hard to distinguish from other physics objects in the same event. To keep electron online filtering within the targeted signal efficiency, processing latency and readout output rate, the detector information must be handled more efficiently. Identifying electrons and applying an electron hypothesis to candidate objects can be complemented with discriminant information provided by the ATLAS Inner Detector (ID) system, which currently processes 100 million readout channels using computer-vision-like techniques. This multimodal information must be fused appropriately to draw the best benefit from the distinct discriminant representations. To achieve this goal (in both Phase-I and Phase-II), machine learning techniques will be exploited for energy reconstruction (compensating for pile-up) and for online triggering based on calorimeter information combined with the tracking system.
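As a purely illustrative aside, the NumPy sketch below shows the kind of per-cell energy regression under pile-up that such techniques target: a toy pulse shape, a single out-of-time pile-up contribution and a tiny neural network are assumed, and none of the shapes, constants or network choices reflect the actual ATLAS liquid-argon readout or the models to be developed in this project.

```python
# Minimal toy sketch (not the ATLAS algorithm): estimate the energy deposited in a
# single calorimeter cell from a short window of digitized pulse samples, where
# out-of-time pile-up from an earlier bunch crossing overlaps the signal pulse.
# Pulse shapes, noise levels and the tiny network are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

N_SAMPLES = 5                                  # samples per readout window (toy value)
PULSE = np.array([0.0, 1.0, 0.6, 0.3, 0.1])    # toy normalised signal pulse shape
PILEUP = np.array([0.6, 0.3, 0.1, 0.0, 0.0])   # toy shape of an earlier-crossing pulse

def simulate(n_events):
    """Toy events: signal pulse scaled by the true energy, plus pile-up and noise."""
    e_true = rng.uniform(0.0, 100.0, n_events)            # 'energy' in arbitrary units
    e_pu = rng.exponential(20.0, n_events)                # out-of-time pile-up energy
    noise = rng.normal(0.0, 1.0, (n_events, N_SAMPLES))   # electronic noise
    return e_true[:, None] * PULSE + e_pu[:, None] * PILEUP + noise, e_true

x_tr, y_tr = simulate(20_000)
x_te, y_te = simulate(5_000)

# Baseline: fixed linear weights on the samples, fitted by least squares.
w_lin, *_ = np.linalg.lstsq(np.c_[x_tr, np.ones(len(x_tr))], y_tr, rcond=None)
res_lin = np.c_[x_te, np.ones(len(x_te))] @ w_lin - y_te

# Tiny one-hidden-layer MLP on standardized data, trained by full-batch gradient
# descent on 0.5 * MSE; it stands in for the deeper models the project would study.
mu_x, sd_x = x_tr.mean(0), x_tr.std(0)
mu_y, sd_y = y_tr.mean(), y_tr.std()
xs, ys = (x_tr - mu_x) / sd_x, (y_tr - mu_y) / sd_y
H, lr = 16, 0.05
W1 = rng.normal(0.0, 0.3, (N_SAMPLES, H)); b1 = np.zeros(H)
W2 = rng.normal(0.0, 0.3, (H, 1));         b2 = np.zeros(1)
for _ in range(3000):
    h = np.maximum(xs @ W1 + b1, 0.0)                 # ReLU hidden layer
    err = (h @ W2 + b2).ravel() - ys                  # d(0.5*MSE)/d(prediction)
    gW2 = h.T @ err[:, None] / len(err); gb2 = np.array([err.mean()])
    dh = (err[:, None] @ W2.T) * (h > 0)              # backpropagate through the ReLU
    gW1 = xs.T @ dh / len(err);          gb1 = dh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

h_te = np.maximum(((x_te - mu_x) / sd_x) @ W1 + b1, 0.0)
res_mlp = ((h_te @ W2 + b2).ravel() * sd_y + mu_y) - y_te
print(f"residual RMS  linear: {res_lin.std():.2f}   MLP: {res_mlp.std():.2f}")
```

The closed-form linear weights echo the flavour of classical fixed-coefficient estimates, while the small network stands in for the deep models under study; in the real system the inputs would be the digitized liquid-argon samples and the training would rely on detailed detector simulation.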
Using expert information (the calorimeter signal description) and advanced computational intelligence models, this proposal aims to contribute to a significant improvement of the calorimeter response and of the online triggering efficiency in the search for those extremely rare high-energy physics processes that enable new insight into the Standard Model or into searches for new physics beyond the Standard Model.
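In the same illustrative spirit, the sketch below shows the simplest conceivable form of the multimodal fusion mentioned above: toy calorimeter-like and tracker-like features of an electron candidate are concatenated and a single logistic discriminant is trained on the fused vector. The feature names, distributions and working point are invented for the example and are not the ATLAS trigger quantities.

```python
# Toy sketch of multimodal fusion (not the ATLAS trigger): concatenate calorimeter-like
# and tracker-like features of an electron candidate and train one logistic
# discriminant on the fused vector. All features and distributions are invented.
import numpy as np

rng = np.random.default_rng(1)

def toy_candidates(n, is_electron):
    """Two toy calorimeter features (shower compactness, hadronic leakage)
    and two toy tracker features (matched-track quality, E/p-like ratio)."""
    if is_electron:
        calo = rng.normal([0.95, 0.02], [0.03, 0.02], (n, 2))
        trk  = rng.normal([0.90, 1.00], [0.05, 0.15], (n, 2))
    else:  # jets faking electrons
        calo = rng.normal([0.80, 0.10], [0.10, 0.08], (n, 2))
        trk  = rng.normal([0.60, 1.80], [0.20, 0.60], (n, 2))
    return np.hstack([calo, trk])                      # early fusion: concatenate modalities

x = np.vstack([toy_candidates(10_000, True), toy_candidates(10_000, False)])
y = np.r_[np.ones(10_000), np.zeros(10_000)]

# Standardize, then fit logistic regression by gradient descent on the cross-entropy.
x = (x - x.mean(0)) / x.std(0)
w, b, lr = np.zeros(x.shape[1]), 0.0, 0.1
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))             # sigmoid score in [0, 1]
    grad = p - y                                       # gradient of cross-entropy w.r.t. logit
    w -= lr * (x.T @ grad) / len(y)
    b -= lr * grad.mean()

# Signal efficiency at a 95% background-rejection working point (toy numbers).
scores = 1.0 / (1.0 + np.exp(-(x @ w + b)))
cut = np.quantile(scores[y == 0], 0.95)
print(f"signal efficiency at 95% background rejection: {(scores[y == 1] > cut).mean():.3f}")
```

Concatenation followed by a single discriminant is only the most basic fusion scheme; richer architectures, for instance separate calorimeter and track branches merged at a later stage, are the kind of options such a study would compare.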