Responsible AI Challenges in End-to-end Machine Learning. (arXiv:2101.05967v1 [cs.LG])

Responsible AI is becoming critical as AI is widely used in our everyday
lives. Many companies that deploy AI publicly state that when training a
model, one must not only improve its accuracy, but also guarantee that the
model does not discriminate against users (fairness), is resilient to noisy
or poisoned data (robustness), is explainable, and more. In addition, these
objectives are not only relevant to model training, but to all steps of
end-to-end machine learning, which include data collection, data cleaning
and validation, model training, model evaluation, and model management and
serving. Finally, responsible AI is conceptually challenging, and supporting
all the objectives must be as easy as possible.

We thus propose three key research directions towards this vision (depth,
breadth, and usability) to measure progress, and introduce our ongoing
research.

First, responsible AI must be deeply supported, where multiple objectives
like fairness and robustness must be handled together. To this end, we
propose FR-Train, a holistic framework for fair and robust model training
in the presence of data bias and poisoning.
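
As a rough illustration of this depth direction, the PyTorch sketch below
shows only the fairness half of the kind of adversarial setup FR-Train
builds on: a classifier is trained against a discriminator that tries to
recover the sensitive attribute from the classifier's predictions. The data
and all names (x, y, z, lam) are synthetic and illustrative; the full
framework additionally uses a second discriminator, anchored on a small
clean validation set, to cope with poisoned data.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic, biased data: x = features, y = binary label,
# z = binary sensitive attribute. All illustrative.
n = 1024
x = torch.randn(n, 16)
z = (torch.rand(n, 1) < 0.5).float()
y = ((x[:, :1] + 0.8 * z + 0.3 * torch.randn(n, 1)) > 0).float()

classifier = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
# Adversary that tries to predict z from the classifier's output score.
fair_disc = nn.Sequential(nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1))

opt_c = torch.optim.Adam(classifier.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(fair_disc.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
lam = 0.5  # fairness/accuracy trade-off (illustrative)

for epoch in range(200):
    # Step 1: the adversary learns to recover z from the predictions.
    y_hat = classifier(x)
    d_loss = bce(fair_disc(y_hat.detach()), z)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Step 2: the classifier fits the labels while making its
    # predictions uninformative about z (a minimax game).
    y_hat = classifier(x)
    c_loss = bce(y_hat, y) - lam * bce(fair_disc(y_hat), z)
    opt_c.zero_grad(); c_loss.backward(); opt_c.step()
```

The point of the minimax term is that accuracy is traded off against how
much the predictions reveal about the sensitive group; coupling it with a
robustness discriminator is what lets the full framework handle bias and
poisoning together rather than one at a time.
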
Second, responsible AI must be broadly supported, preferably in all steps
of machine learning. Currently, we focus on the data pre-processing steps
and propose Slice Tuner, a selective data acquisition framework for
training fair and accurate models, and MLClean, a data cleaning framework
that also improves fairness and robustness.
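
As one concrete instance of the breadth direction, the sketch below
captures the core intuition behind Slice Tuner under simplifying
assumptions: fit a power-law learning curve to each slice's observed
losses, then spend a labeling budget where it is predicted to help most.
The slice names and loss values are made up, and the greedy allocation is a
stand-in for the system's actual optimization over both accuracy and
fairness.

```python
import numpy as np

# Hypothetical learning-curve observations per slice:
# (num_examples, validation_loss) pairs from training on subsets.
curves = {
    "female": [(100, 0.62), (200, 0.50), (400, 0.41)],
    "male":   [(100, 0.45), (200, 0.39), (400, 0.34)],
}

def fit_power_law(points):
    """Fit loss ~ b * n**(-a) with a log-log least-squares line."""
    log_n = np.log([p[0] for p in points])
    log_l = np.log([p[1] for p in points])
    slope, intercept = np.polyfit(log_n, log_l, 1)
    return -slope, np.exp(intercept)  # (exponent a, scale b)

params = {s: fit_power_law(pts) for s, pts in curves.items()}
sizes = {s: pts[-1][0] for s, pts in curves.items()}  # current sizes

def predicted_loss(s, n):
    a, b = params[s]
    return b * n ** (-a)

# Greedy allocation: each chunk of budget buys examples for the slice
# with the highest predicted loss, pushing per-slice losses toward
# equality (illustrative; not the paper's exact solver).
budget, step = 1000, 100
alloc = {s: 0 for s in curves}
for _ in range(budget // step):
    worst = max(curves, key=lambda s: predicted_loss(s, sizes[s] + alloc[s]))
    alloc[worst] += step

print(alloc)  # most new labels go to the worse-off "female" slice
```
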
Finally, responsible AI must be usable, where the techniques must be easy
to deploy and actionable. We propose FairBatch, a batch selection approach
for fairness that is effective and simple to use, and Slice Finder, a model
evaluation tool that automatically finds problematic slices.
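
FairBatch is easy to adopt because it intervenes only in batch selection,
leaving the model and the loss untouched. The sketch below is a simplified
rendering of that idea rather than the paper's exact algorithm: per-group
sampling weights shift toward whichever group currently has the larger
loss. Here group_loss is a hypothetical placeholder for a real per-group
evaluation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: indices of the examples in two sensitive groups.
groups = {0: np.arange(0, 600), 1: np.arange(600, 1000)}
w = {0: 0.5, 1: 0.5}  # per-group sampling weights
alpha = 0.05          # adaptation step size (assumed)

def group_loss(g):
    """Placeholder for the model's current average loss on group g."""
    return 0.7 if g == 1 else 0.4  # pretend group 1 is underserved

for epoch in range(20):
    # Draw a batch with the current group proportions; FairBatch's lever
    # is *what the model sees*, not the model or the training objective.
    k = {g: int(128 * w[g]) for g in groups}
    batch = np.concatenate([rng.choice(groups[g], k[g]) for g in groups])
    # ... train one epoch on `batch` here ...
    # Shift sampling weight toward the group with the larger loss.
    gap = group_loss(1) - group_loss(0)
    w[1] = float(np.clip(w[1] + alpha * np.sign(gap), 0.1, 0.9))
    w[0] = 1.0 - w[1]
```
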
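In the same spirit, a bare-bones version of what Slice Finder automates can
be sketched for single-feature slices: scan each feature value and flag
slices whose loss is both statistically and practically worse than the
rest. The data is synthetic and the thresholds are illustrative; the actual
tool also explores combinations of features and guards against false
discoveries.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic per-example losses and two categorical feature columns;
# the column names and data are illustrative only.
n = 2000
data = {
    "country": rng.choice(["US", "DE", "IN"], n),
    "device":  rng.choice(["mobile", "desktop"], n),
}
losses = rng.normal(0.4, 0.1, n)
losses[data["country"] == "IN"] += 0.15  # plant a problematic slice

# Flag slices that are significantly (p-value) and substantially
# (effect size) worse than their complement.
for col, values in data.items():
    for v in np.unique(values):
        mask = values == v
        t, p = stats.ttest_ind(losses[mask], losses[~mask], equal_var=False)
        effect = losses[mask].mean() - losses[~mask].mean()
        if p < 0.01 and effect > 0.05:
            print(f"problematic slice: {col}={v} "
                  f"(loss +{effect:.2f}, p={p:.1e}, n={mask.sum()})")
```
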
We believe we have only scratched the surface of responsible AI for
end-to-end machine learning and suggest research challenges moving forward.

Source: https://arxiv.org/abs/2101.05967
