Human-in-the-loop review of model explanations with Amazon SageMaker Clarify and Amazon A2I

Domain experts are increasingly using machine learning (ML) to make faster decisions that lead to better customer outcomes across industries such as healthcare and financial services. ML can provide higher accuracy at lower cost, whereas expert oversight can ensure validation and continuous improvement of sensitive applications like disease diagnosis, credit risk management, and fraud detection. Organizations are looking to combine ML technology with human review to introduce higher efficiency and transparency into their processes.

Regulatory compliance may require companies to provide justifications for decisions made by ML. Similarly, internal compliance teams may want to interpret a model's behavior when validating decisions based on model predictions. For example, underwriters want to understand why a particular loan application was flagged as suspicious by the model. AWS customers want to scale such interpretable systems across a large number of models, supported by a workforce of human reviewers.

In this post, we use Amazon SageMaker Clarify to provide explanations of individual predictions and Amazon Augmented AI (Amazon A2I) to create a human-in-the-loop workflow and validate predictions whose confidence falls below a threshold in an income classification use case.

Explaining individual predictions for human review presents the following technical challenges:

  • Advanced ML algorithms learn non-linear relationships between the input features, and traditional feature attribution methods like partial dependence plots can’t explain the contribution of each feature for every individual prediction
  • Data science teams must seamlessly translate technical model explanations to business users for validation

SageMaker Clarify and Amazon A2I

Clarify provides ML developers with greater visibility into their data and models so they can identify potential bias and explain predictions. SHAP (SHapley Additive exPlanations), based on the concept of a Shapley value from the field of cooperative game theory, works well for both aggregate and individual model explanations. The Kernel SHAP algorithm is model agnostic, and Clarify uses a scalable and efficient implementation of Kernel SHAP.
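
To make this concrete, the following sketch shows how a Clarify SHAP explainability job can be configured with the SageMaker Python SDK. The role ARN, model name, S3 paths, and column headers are hypothetical placeholders for an income classification example like the one in this post; adapt them to your own account and dataset.

```python
from sagemaker import Session, clarify

session = Session()
# Hypothetical execution role; use your own SageMaker role ARN.
role = "arn:aws:iam::111122223333:role/SageMakerExecutionRole"

clarify_processor = clarify.SageMakerClarifyProcessor(
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    sagemaker_session=session,
)

# Baseline rows against which Kernel SHAP computes feature contributions;
# a common choice is the per-column mean or mode of the training data.
shap_config = clarify.SHAPConfig(
    baseline="s3://my-bucket/clarify/baseline.csv",  # hypothetical path
    num_samples=100,
    agg_method="mean_abs",  # aggregate absolute SHAP values for global importance
)

model_config = clarify.ModelConfig(
    model_name="income-classifier",  # hypothetical deployed SageMaker model
    instance_type="ml.m5.xlarge",
    instance_count=1,
    accept_type="text/csv",
)

data_config = clarify.DataConfig(
    s3_data_input_path="s3://my-bucket/clarify/test.csv",   # hypothetical path
    s3_output_path="s3://my-bucket/clarify/output",          # hypothetical path
    headers=["age", "education", "hours-per-week", "income"],  # example columns
    label="income",
    dataset_type="text/csv",
)

# Writes per-instance SHAP values (local explanations) and an
# aggregated explainability report to the S3 output path.
clarify_processor.run_explainability(
    data_config=data_config,
    model_config=model_config,
    explainability_config=shap_config,
)
```

The per-instance SHAP values that the job writes to the output path are the local explanations that reviewers examine later in the workflow.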

Amazon A2I makes it easy to build the workflows required for human review at your desired scale and removes the undifferentiated heavy lifting associated with building human review systems or managing large numbers of human reviewers. You can send model predictions and individual SHAP values from Clarify for review to internal compliance teams and customer-facing employees via Amazon A2I.
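
As a minimal sketch of that handoff, the snippet below starts an Amazon A2I human loop only when the model's confidence falls below a threshold, packaging the prediction and its SHAP values into the input content that reviewers see. The flow definition ARN, threshold value, and field names are assumptions for illustration.

```python
import json
import uuid

import boto3

# Hypothetical flow definition ARN and threshold; replace with your own.
FLOW_DEFINITION_ARN = (
    "arn:aws:sagemaker:us-east-1:111122223333:flow-definition/income-review"
)
CONFIDENCE_THRESHOLD = 0.7

a2i_runtime = boto3.client("sagemaker-a2i-runtime")


def review_if_low_confidence(prediction, confidence, features, shap_values):
    """Start a human loop only when model confidence is below the threshold."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return None  # confident prediction; no human review needed

    response = a2i_runtime.start_human_loop(
        HumanLoopName=f"income-review-{uuid.uuid4()}",
        FlowDefinitionArn=FLOW_DEFINITION_ARN,
        HumanLoopInput={
            # The worker task template renders this content for reviewers,
            # including the SHAP values that explain the prediction.
            "InputContent": json.dumps(
                {
                    "prediction": prediction,
                    "confidence": confidence,
                    "features": features,
                    "shap_values": shap_values,
                }
            )
        },
    )
    return response["HumanLoopArn"]
```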

Together, Clarify and Amazon A2I can complete the loop from producing individual explanations to validating outcomes via human review and generating feedback for further improvement.

Solution overview

[...]
