There is nothing more exhilarating than getting the keys to your first car or driving off the lot with the car of your dreams. Sadly, that exhilaration can quickly fade to frustration when your car is damaged. Working through the phone calls, emails, and damage reports with your insurance provider can be a painstaking process. But couldn’t it all be much easier?
That is the question that Torsten Ostervall, CEO of Autonet (a part of Progrits AB), set out to solve in 2019 when he began creating a new auto claims product focused on visual analysis of car damage. “Insurance companies have vast stores of images from claims, which we saw as a rich source of information to predict the severity of a claim,” says Ostervall.
With over 25 years in the auto insurance space, a database of vehicle and parts data (in cooperation with the Nordic vehicle authorities), and the power of computer vision with Amazon Rekognition, Autonet set out to automate the auto insurance claims process.
In the insurance world, auto damage and the action it requires are assessed on a spectrum. At one end, minor damage can be handled by Small to Medium Area Repair Technology (SMART) specialists, who perform repairs directly on the vehicle (such as minor dent removal) without a costly visit by an insurance adjuster. At the other end, if damage can be verified as a total loss, settling that claim efficiently helps the customer get paid quickly and, ideally, into a new car. In between are standard repair cases, usually handled by traditional full-service body shops.
Ostervall’s idea was to use machine learning (ML) to analyze images and decide which repair category is most likely. To Ostervall, the value was clear: “Sending cases to SMART repair vs. regular body shops saves at least $500 per claim, and often much more, and is faster and better for the environment.”
Initially, the team looked at thousands of images from past claims and tested submitting them to pre-trained image labeling APIs. They quickly found that the pre-trained models couldn’t classify the damage with the granularity needed. Blaine Bateman, the Chief Data Scientist at Autonet, says, “My team tried to build classification models using the tags, but the model performance just wasn’t there.”
The team used the company’s data on vehicle configuration to augment the tags with details about each particular vehicle. “Having one of the best databases on vehicle configuration as manufactured is an advantage for Autonet,” Ostervall says.
However, the results, although improved, were still unsatisfactory. After reviewing the images from so many cases, Ostervall realized that the specific kinds and locations of damage rarely appeared in the generic tags. That is where Amazon Rekognition Custom Labels came in.
Amazon Rekognition Custom Labels is a feature of Amazon Rekognition that lets you build object detection and image classification models with your own data, training on thousands of images instead of millions. It achieves high accuracy by automatically choosing the best algorithms and tuning hyperparameters for your specific data.
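To give a sense of how a trained Custom Labels model is invoked in practice, the sketch below calls the Rekognition `DetectCustomLabels` API through boto3 and picks the most confident detection. The model ARN and label names are hypothetical placeholders, not Autonet’s actual model:

```python
# Hypothetical ARN of a trained Custom Labels model version (not Autonet's).
MODEL_ARN = "arn:aws:rekognition:eu-north-1:123456789012:project/damage/version/1"

def detect_damage(image_bytes: bytes, min_confidence: float = 70.0):
    """Send one photo to the custom model and return its label detections."""
    import boto3  # imported here so the pure helpers below have no AWS dependency

    client = boto3.client("rekognition")
    response = client.detect_custom_labels(
        ProjectVersionArn=MODEL_ARN,
        Image={"Bytes": image_bytes},
        MinConfidence=min_confidence,
    )
    return response["CustomLabels"]

def top_label(custom_labels):
    """Pick the most confident detection, e.g. to suggest a repair category."""
    if not custom_labels:
        return None
    best = max(custom_labels, key=lambda lbl: lbl["Confidence"])
    return best["Name"]
```

Each entry in the returned `CustomLabels` list carries a `Name` and a `Confidence`, and, for object detection models, a `Geometry` with a bounding box.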
It also uses transfer learning. Transfer learning begins with a base model that has been trained on a large set of images across a wide variety of classes. Because the base model has already learned general visual concepts, it can learn the new classes and objects in your dataset more quickly and accurately.
In the case of Amazon Rekognition Custom Labels, the base model is trained on millions of images, which allows the final model to be much more accurate than models trained exclusively with open-source techniques. Autonet annotated around 10,000 images with labels corresponding to the level of damage to a vehicle. The team also built an application, integrated with Amazon Rekognition Custom Labels, to expedite updating tags and improving the model.
The service provided the flexibility to let Autonet fine-tune their model with limited data. “Amazon Rekognition Custom Labels allowed us to build a highly accurate model with 90% fewer annotated images than building custom models with other ML tools and frameworks, enabling us to get to market with our product much faster,” Bateman says. With this breakthrough, Autonet was able to create an application that allows end-users to take photos of the damage and quickly and easily get an assessment of the next steps.
The following diagram illustrates the solution workflow.
First, images captured on the device are sent to the Amazon Rekognition pre-trained label API for general tagging, which generates a range of tags depicting everything in the photos. The images are then sent to the damage detection and classification model built with Amazon Rekognition Custom Labels, which assesses where the damage is, what type it is, and how severe it is. The detected damages are shown to the customer, who can select the relevant bounding boxes to approve the image and confirm that it shows the damage they want to document.
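Rekognition reports each bounding box as ratios of the image’s width and height, so before a box can be drawn for the customer to approve, those ratios must be scaled to pixels. A minimal sketch, assuming the documented `BoundingBox` field names (`Left`, `Top`, `Width`, `Height`):

```python
def box_to_pixels(bounding_box: dict, image_width: int, image_height: int) -> dict:
    """Convert a Rekognition bounding box (ratios in 0..1) to pixel coordinates."""
    return {
        "left": round(bounding_box["Left"] * image_width),
        "top": round(bounding_box["Top"] * image_height),
        "width": round(bounding_box["Width"] * image_width),
        "height": round(bounding_box["Height"] * image_height),
    }
```

For example, a box covering the center-left quarter of a 1920×1080 photo (`Left=0.25, Top=0.1, Width=0.5, Height=0.2`) maps to a 960×216-pixel rectangle starting at (480, 108).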
Lastly, those results, plus vehicle information, are passed to a downstream data model developed by Autonet that combines the three sources of data, applies feature engineering, and predicts one of the three classes of damage. The claim is then routed based on that decision: in the case of SMART repair, the case is sent directly to a SMART repair partner, who contacts the customer. The partners are integrated into the Autonet workflow and can see the cases immediately.
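The routing step can be pictured as a simple dispatch on the predicted class. The class names and queue identifiers below are hypothetical stand-ins for Autonet’s actual partner integration:

```python
# Hypothetical class names matching the three repair categories described above.
ROUTES = {
    "smart_repair": "smart-partner-queue",    # partner contacts the customer directly
    "standard_repair": "body-shop-queue",     # traditional full-service repair
    "total_loss": "settlement-queue",         # settle the claim and pay the customer
}

def route_claim(predicted_class: str) -> str:
    """Route a claim to the right downstream queue based on the model's decision."""
    try:
        return ROUTES[predicted_class]
    except KeyError:
        # Unknown classes should be surfaced for manual review, not dropped silently.
        raise ValueError(f"Unknown damage class: {predicted_class}")
```

Keeping the mapping in data rather than branching logic makes it easy to add or rename categories as the downstream model evolves.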
The partner also has the final say if they don’t agree that the case is a SMART repair. “We’ve found that there will always be some cases with hidden or missed damage that may make a case a standard repair, and it’s important for the partner to be in the loop,” Ostervall says. “But in most cases, by using this system, the customer can get their car repaired in less than two days.”
According to Ostervall, this application workflow “has produced amazing results in record time.”
The integration of Amazon Rekognition Custom Labels into the solution enables every Autonet customer to use state-of-the-art computer vision technology and improve the claims adjustment process. “The partnership with AWS and the Amazon Rekognition tool is a win for everybody,” Bateman says. “Insurers get access to cutting-edge AI technology and their customers get better service, all managed by our integrated services.”
For Autonet, the result is a process that is better for the car owner, the insurance company, and the repair partners.
About the Author
Oliver Myers is the Principal WW Business Development Manager for Amazon Rekognition (an AI service that allows customers to extract visual metadata from images and videos) at AWS. In this role, he focuses on helping customers implement computer vision in their business workflows across industries.