Inference-optimized AI and high performance computing for gravitational wave detection at scale. (arXiv:2201.11133v1 [gr-qc])

We introduce an ensemble of artificial intelligence models for gravitational
wave detection that we trained on the Summit supercomputer using 32 nodes,
equivalent to 192 NVIDIA V100 GPUs, within 2 hours. Once fully trained, we
optimized these models for accelerated inference using NVIDIA TensorRT. We
deployed our inference-optimized AI ensemble on the ThetaGPU supercomputer at
the Argonne Leadership Computing Facility to conduct distributed inference.
Using the entire ThetaGPU supercomputer, consisting of 20 nodes, each equipped
with 8 NVIDIA A100 Tensor Core GPUs and 2 AMD Rome CPUs, our NVIDIA
TensorRT-optimized AI ensemble processed an entire month of advanced LIGO data
(including Hanford and Livingston data streams) within 50 seconds. Our
inference-optimized AI ensemble retains the same sensitivity as traditional AI
models: it identifies all known binary black hole mergers previously identified
in this advanced LIGO dataset and reports no misclassifications, while also
providing a 3X inference speedup compared to traditional artificial
intelligence models. We used time slides to quantify the performance of our AI
ensemble when processing up to 5 years' worth of advanced LIGO data. In this
synthetically enhanced dataset, our AI ensemble reports an average of one
misclassification for every month of searched advanced LIGO data. We also
present the receiver operating characteristic curve of our AI ensemble using
this 5-year-long advanced LIGO dataset. This approach provides the required
tools to conduct accelerated, AI-driven gravitational wave detection at scale.
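The time-slide technique mentioned in the abstract can be sketched in a few lines: one detector's data stream is cyclically shifted relative to the other by offsets far larger than the ~10 ms inter-detector light-travel time, so any coincident trigger found in a shifted pair must be noise, which lets many months of effective background be built from a shorter observation. This is a minimal, hypothetical illustration; the function name and parameters are not from the paper.

```python
def time_slides(hanford, livingston, shift_step, n_slides):
    """Yield (hanford, shifted_livingston) pairs for background estimation.

    Cyclically shifting one stream by more than the inter-detector
    light-travel time destroys real astrophysical coincidences, so any
    trigger in a shifted pair samples the noise background. Toy sketch;
    names and parameters are illustrative, not the paper's pipeline.
    """
    for k in range(1, n_slides + 1):
        s = (k * shift_step) % len(livingston)
        yield hanford, livingston[s:] + livingston[:s]

# Toy streams of 10 samples; 5 slides multiply the effective
# background observation time by a factor of 5.
h = list(range(10))
l = list(range(10, 20))
pairs = list(time_slides(h, l, shift_step=2, n_slides=5))
print(len(pairs))  # 5 background pairs from a single pair of streams
```

Each yielded pair is analyzed exactly like real coincident data; the rate of triggers across all slides gives the false-alarm rate quoted per month of searched data.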
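Distributed inference over a month of strain data implies sharding the stream across workers. One common pattern, sketched here under stated assumptions (the segment layout, overlap size, and GPU count are illustrative and not the paper's actual configuration), is to give each GPU a contiguous chunk that overlaps its neighbors so no candidate signal is lost at a shard boundary.

```python
def shard(n_samples, n_workers, overlap):
    """Split [0, n_samples) into n_workers contiguous sample ranges that
    overlap neighbors by `overlap` samples, so a signal straddling a
    boundary is fully visible to at least one worker. Hypothetical
    sketch, not the paper's deployment code.
    """
    base = n_samples // n_workers
    shards = []
    for w in range(n_workers):
        start = max(0, w * base - overlap)
        stop = n_samples if w == n_workers - 1 else (w + 1) * base + overlap
        shards.append((start, stop))
    return shards

# One month of strain data at a 4096 Hz sampling rate, spread over the
# 160 A100 GPUs of ThetaGPU (20 nodes x 8 GPUs), with 1 s of overlap.
month = 30 * 24 * 3600 * 4096
print(len(shard(month, 160, overlap=4096)))  # 160 shards, one per GPU
```

With the data partitioned this way, each GPU runs the TensorRT-optimized ensemble on its shard independently, which is what makes the 50-second end-to-end processing time possible.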
