Abstract
VERITAS (Very Energetic Radiation Imaging Telescope Array System) is the current-generation array comprising four 12-meter optical ground-based Imaging Atmospheric Cherenkov Telescopes (IACTs). Its primary goal is to indirectly observe gamma-ray emissions from the most violent astrophysical sources in the universe. Recent advancements in Machine Learning (ML) have sparked interest in utilizing neural networks (NNs) to directly infer properties from IACT images. However, the current training data for these NNs is generated through computationally expensive Monte Carlo (MC) simulation methods. This study presents a simulation method that employs conditional Generative Adversarial Networks (cGANs) to synthesize additional VERITAS data to facilitate training future NNs. In this proof-of-concept study, we condition the cGANs on five classes of simulated camera images consisting of circular muon showers and gamma-ray shower images in the first, second, third, and fourth quadrants of the camera. Our results demonstrate that by casting training data as time series, cGANs can 1) replicate shower morphologies based on the input class vectors and 2) generalize additional signals through interpolation in both the class and latent spaces. Leveraging the strength of GPUs, our method can synthesize novel signals at an impressive speed, generating over 10⁶ shower events in less than a minute.
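The class conditioning and interpolation described above can be illustrated with a minimal sketch. This is not the authors' implementation: the latent dimension, the class ordering, and the helper names below are assumptions for illustration only; it shows only how a cGAN generator input is formed from a latent vector plus a one-hot class vector, and how interpolating that input traces a path between two signal classes.

```python
import numpy as np

rng = np.random.default_rng(0)

N_CLASSES = 5      # muon ring + gamma showers in quadrants 1-4 (per the abstract)
LATENT_DIM = 100   # assumed latent size; the paper's value is not stated here

def one_hot(label: int, n_classes: int = N_CLASSES) -> np.ndarray:
    """One-hot class vector used to condition the generator."""
    v = np.zeros(n_classes)
    v[label] = 1.0
    return v

def generator_input(label: int) -> np.ndarray:
    """Concatenate a random latent vector with the class condition,
    the standard way a cGAN generator is conditioned."""
    z = rng.standard_normal(LATENT_DIM)
    return np.concatenate([z, one_hot(label)])

def interpolate(a: np.ndarray, b: np.ndarray, steps: int) -> np.ndarray:
    """Linear interpolation between two generator inputs; feeding each
    intermediate point to the generator yields intermediate shower
    morphologies, the generalization the abstract describes."""
    t = np.linspace(0.0, 1.0, steps)[:, None]
    return (1 - t) * a + t * b

# Hypothetical class ordering: 0 = muon ring, 1 = gamma shower, first quadrant.
muon = generator_input(0)
gamma_q1 = generator_input(1)
path = interpolate(muon, gamma_q1, steps=8)
print(path.shape)  # → (8, 105)
```

Each row of `path` would be passed through the trained generator to synthesize one camera image; only the input construction is sketched here.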
| Original language | English (US) |
|---|---|
| Article number | 806 |
| Journal | Proceedings of Science |
| Volume | 444 |
| State | Published - Sep 27 2024 |
| Event | 38th International Cosmic Ray Conference, ICRC 2023 - Nagoya, Japan; Jul 26 2023 → Aug 3 2023 |
Bibliographical note
Publisher Copyright: © Copyright owned by the author(s) under the terms of the Creative Commons.