An Exploratory Study of AI System Risk Assessment from the Lens of Data Distribution and Uncertainty. (arXiv:2212.06828v1 [cs.LG])
Deep learning (DL) has become a driving force and has been widely adopted across many domains and applications with competitive performance. In practice, to solve nontrivial and complicated real-world tasks, DL is often not used standalone but instead serves as one component of a larger, more complex AI system. Although there is a fast-growing trend of studying the quality issues of deep neural networks (DNNs) at the model level, few studies have investigated the quality of DNNs at the unit level or their potential impact at the system level. More importantly, there is still a lack of systematic investigation into how to perform risk assessment for AI systems from the unit level up to the system level. To bridge this gap, this paper initiates an early exploratory study of AI system risk assessment from both the data distribution and uncertainty angles. We propose a general framework for analyzing AI systems and conduct an exploratory study with it. Through large-scale experiments (700+ experimental configurations and 5000+ GPU hours) and in-depth investigation, we reach several key findings that highlight the practical need for, and the opportunities in, more in-depth investigation of AI systems.
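The abstract does not specify the concrete risk metrics behind the two angles it names, so the sketch below is only a rough illustration, not the authors' method: it computes two common proxies for a single DNN unit, the predictive entropy of its softmax outputs (uncertainty angle) and a per-feature z-score distance of test features from the training feature distribution (data-distribution angle). All function names and data here are hypothetical.

```python
# Minimal sketch of two common unit-level risk proxies (assumed, not from
# the paper): predictive entropy for uncertainty and a z-score feature
# distance for distribution shift.
import numpy as np

def predictive_entropy(probs: np.ndarray) -> np.ndarray:
    """Entropy of softmax outputs; higher means more uncertain.
    probs: array of shape (n_samples, n_classes) whose rows sum to 1."""
    eps = 1e-12  # avoid log(0)
    return -np.sum(probs * np.log(probs + eps), axis=1)

def distribution_shift_score(train_feats: np.ndarray,
                             test_feats: np.ndarray) -> np.ndarray:
    """Per-sample distance from the training feature distribution,
    normalized per feature; higher means further out of distribution."""
    mu = train_feats.mean(axis=0)
    sigma = train_feats.std(axis=0) + 1e-12
    return np.linalg.norm((test_feats - mu) / sigma, axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical softmax outputs of one DNN unit for two inputs.
    probs = np.array([[0.98, 0.01, 0.01],   # confident prediction
                      [0.40, 0.35, 0.25]])  # uncertain prediction
    print(predictive_entropy(probs))
    # Hypothetical penultimate-layer features; test set is shifted.
    train = rng.normal(0.0, 1.0, size=(1000, 16))
    test = rng.normal(2.0, 1.0, size=(5, 16))
    print(distribution_shift_score(train, test))
```

In a larger AI system, per-unit scores like these could be aggregated to assess risk at the system level, which is the gap the paper's framework targets.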
Source: https://arxiv.org/abs/2212.06828