Why Interpretability is Critical

Interpretable models have key advantages that streamline the data science process and make it easier to focus directly on deriving value.

Understand the strengths and weaknesses of a model

Black-box models often overcomplicate the reasoning behind a decision when a simpler explanation would suffice. Even worse, a black-box model can use data in unintuitive ways that lead to poor predictions on new data. Experiments have demonstrated that deep learning approaches can be fooled into giving vastly different predictions by changing just one pixel in the input.

Since an interpretable model is transparent and can be understood in its entirety by humans, we can easily identify whether the model makes sense, as well as identify parts of the model where the reasoning might be flawed.

Examples from One Pixel Attack for Fooling Deep Neural Networks. The black text indicates the true label for each image, while the blue text shows the predicted label and confidence in the prediction after altering a single pixel in the image.

Regulatory compliance

There is often explicit government regulation surrounding decision-making in industries such as health care, finance, and insurance. Interpretability gives companies the evidence needed to confidently stand behind their models and assert they are not discriminatory.

Our case study on predicting risk of loan default demonstrates the clarity that interpretability can bring to a complex decision process.


Faster iteration and better feedback

Good data science is an iterative process, where models are improved by understanding where a model is doing poorly, and why. With interpretability, data scientists and managers can see the model clearly and immediately identify where additional feature engineering may help, where the data ingestion process needs fixing, or where more data should be collected for certain cohorts.

In an application to cybersecurity, this rapid iteration enabled our client to capture more relevant data to quickly and directly close holes in the model's reasoning.

The data science process is a continuous iterative loop

Smooth integration of domain expertise

Black-box models rarely incorporate domain knowledge, as the algorithm simply learns from data and cannot easily explain how it is functioning. Interpretable methods facilitate conversations between domain experts and data scientists, where a model can be inspected together. Because they can understand the model, domain experts can share feedback on how to improve it further, and the data scientists can easily make use of this extensive experience in the next iteration.

Each of our applications in the medical field relies heavily upon expert doctors providing feedback and ensuring the end result is clinically relevant.

Increased chance of adoption and success

Key stakeholders and executives cannot be expected to take significant risks by deploying models they do not understand. Interpretability allows them to audit the model directly and have an honest conversation with the data and modeling experts.

When developing an algorithm for deciding property prices at auction, we encouraged as many of our clients' portfolio managers as possible to analyze and critique our approach. When it came time to deploy the model, the key stakeholders were already on our side and had complete trust in the model due to its interpretability.

If I am going to make an important decision around underwriting risk or food safety, I need much more explainability.

David Kenny
CEO, Nielsen Holdings Plc.

From Artificial intelligence has some explaining to do (Bloomberg)

Learn and discover new insights from data

Companies have a wealth of accumulated data, but often even domain experts do not have a perfect understanding of how this data is best used to assist decision making. Interpretable models are not just for making predictions: they can help us understand which data sources are actually relevant. The insights delivered by interpretable models can help augment the understanding of domain experts.

In a predictive maintenance application, our interpretable models suggested new ideas to the engineers, augmenting their understanding and helping them prioritize design improvements to decrease the machine failure rate.

Case studies with interpretable models

Our proprietary interpretable algorithms have been applied successfully in many real-world cases.

Want to try Interpretable AI software?
We provide free academic licenses and evaluation licenses for commercial use.
We also offer consulting services to develop interpretable solutions to your key problems.

© 2020 Interpretable AI, LLC. All rights reserved.