Causal Inference and Policy Learning

Learning interpretable rules while accounting for biases from observational data

From data to decisions

While most of machine learning focuses on prediction, learning to make decisions directly from data is one of the key problems in practical applications. Examples include pricing optimization to maximize revenue, personalized treatment recommendation, and credit risk management.

The key difficulty in prescription problems is the lack of counterfactuals: we only observe what happened under the treatment assigned in the data. To make informed decisions, we need to accurately estimate the outcomes under each alternative treatment option.

Our Reward Estimation and Optimal Policy Trees modules address this issue from a rigorous causal inference perspective, delivering unbiased and understandable decision rules directly from observational data.
    X          T     Y(A)   Y(B)   Y(C)
  x11  x12     A     y1A     ?      ?
  x21  x22     A     y2A     ?      ?
  x31  x32     B      ?     y3B     ?
  x41  x42     C      ?      ?     y4C

The problem of counterfactual estimation: given features X and treatment T, we only observe the outcome Y under the assigned treatment, and must estimate the counterfactual outcomes under the remaining treatments
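The missing-data structure of this problem can be made concrete with a small toy example (the feature names and values below are purely illustrative, not from any real dataset):

```python
import numpy as np
import pandas as pd

# Features, assigned treatment, and the observed outcome for each row.
data = pd.DataFrame({
    "x1": [0.2, 0.5, 0.1, 0.9],
    "x2": [1.3, 0.7, 2.1, 0.4],
    "treatment": ["A", "A", "B", "C"],
    "outcome": [3.1, 2.8, 4.0, 1.5],
})

# Build the outcome matrix Y with one column per treatment option.
# Only the cell matching the assigned treatment is observed; the
# counterfactual cells are missing and must be estimated.
treatments = ["A", "B", "C"]
Y = pd.DataFrame(np.nan, index=data.index, columns=treatments)
for i, row in data.iterrows():
    Y.loc[i, row["treatment"]] = row["outcome"]

print(Y)
```

Each row of `Y` has exactly one observed entry; filling in the remaining cells is the counterfactual estimation task.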

Estimating counterfactuals from observational data

Observational data is likely to contain treatment assignment biases. For example, sicker patients may tend to receive more intensive treatments, but it would be incorrect to conclude from this correlation that the intensive treatments cause worse outcomes.

A naive and commonly-used approach is to train machine learning models that predict the outcome directly from the features and treatment. However, depending on the choice of model class and parameters, the estimated treatment effects can vary drastically, and we have no way of telling which estimates are correct.
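As a minimal sketch of this naive direct approach (using scikit-learn and synthetic confounded data; this is an illustration, not the module's own method), one might fit a single model on features plus treatment and read off counterfactuals by toggling the treatment input:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 2))
# Confounded assignment: "sicker" units (larger x1) are treated more often.
T = (rng.random(n) < 1 / (1 + np.exp(-2 * X[:, 0]))).astype(int)
# The true treatment effect is +1.0, but the baseline worsens with x1.
Y = -2.0 * X[:, 0] + 1.0 * T + rng.normal(scale=0.5, size=n)

# Naive direct method: regress Y on (X, T), then toggle T to get
# counterfactual predictions for every unit.
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(np.column_stack([X, T]), Y)

y_under_0 = model.predict(np.column_stack([X, np.zeros(n)]))
y_under_1 = model.predict(np.column_stack([X, np.ones(n)]))
effect = (y_under_1 - y_under_0).mean()
print(f"estimated average effect: {effect:.2f} (truth is 1.00)")
```

Because the assignment is confounded, the estimate this approach produces depends heavily on how well the model happens to separate the treatment signal from the confounder, which is exactly the fragility described above.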

Causal inference is a rich field, with over 30 years of literature on capturing the true causal dependence between treatment application and outcome. We leverage the most recent developments to properly adjust for treatment assignment bias and model misspecification errors.
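One standard adjustment from this literature is the doubly-robust estimator, which combines an outcome model with a propensity model so that the estimate remains consistent if either model is correct. The sketch below, on synthetic data, is an illustrative implementation of that general technique, not the module's internal code:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(1)
n = 2000
X = rng.normal(size=(n, 2))
# Confounded assignment driven by x1; true treatment effect is +1.0.
propensity_true = 1 / (1 + np.exp(-1.5 * X[:, 0]))
T = (rng.random(n) < propensity_true).astype(int)
Y = -2.0 * X[:, 0] + 1.0 * T + rng.normal(scale=0.5, size=n)

# Outcome models: one regression per treatment arm.
mu = {t: LinearRegression().fit(X[T == t], Y[T == t]) for t in (0, 1)}

# Propensity model: probability of receiving treatment 1 given X.
e = LogisticRegression().fit(X, T).predict_proba(X)[:, 1]
e = np.clip(e, 0.01, 0.99)  # avoid extreme inverse-propensity weights

def dr_rewards(t):
    # Doubly-robust reward for treatment t: model prediction plus an
    # inverse-propensity-weighted residual correction on observed arms.
    pred = mu[t].predict(X)
    p = e if t == 1 else 1 - e
    return pred + (T == t) / p * (Y - pred)

effect = (dr_rewards(1) - dr_rewards(0)).mean()
print(f"doubly-robust average effect: {effect:.2f} (truth is 1.00)")
```

The per-observation, per-treatment rewards returned by `dr_rewards` are exactly the kind of estimates a downstream policy learner can consume.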

Based on data from a recent study, we show that models with similar predictive performance can still estimate treatment-response curves that are far from the truth, depending on the model class (random forest or boosting) and the parameter tuning procedure.

Optimal Policy Trees learn understandable treatment rules

Following proper estimation of the counterfactuals, we can learn a treatment assignment policy. Building on top of the Optimal Trees framework, we have developed Optimal Policy Trees to find a simple tree structure that assigns treatments to observations based on their features.

Compared to alternatives such as Regress-and-Compare or Causal Forests, Optimal Policy Trees deliver an interpretable decision rule that is less prone to overfitting and allows for the incorporation of more advanced causal inference techniques. In a recent paper, we show that these optimal trees outperform their greedy counterparts and are on par with black-box methods in terms of performance.
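To illustrate the final step with generic tools (this greedy, two-step surrogate is not the Optimal Policy Trees algorithm, which optimizes expected reward globally over the tree; the reward values here are synthetic), one can take an estimated reward matrix and fit a shallow tree mapping features to the reward-maximizing treatment:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(2)
n = 1000
X = rng.normal(size=(n, 2))

# Hypothetical estimated rewards for two treatments: treatment 1 is
# better only when x1 > 0, plus estimation noise.
rewards = np.column_stack([np.zeros(n), X[:, 0]])
rewards += rng.normal(scale=0.1, size=(n, 2))

# Greedy stand-in for a policy tree: label each point with its best
# treatment, then fit a depth-limited decision tree to those labels.
best = rewards.argmax(axis=1)
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, best)
print(export_text(tree, feature_names=["x1", "x2"]))

# Evaluate the average estimated reward achieved by the learned policy.
policy = tree.predict(X)
print("mean reward:", rewards[np.arange(n), policy].mean().round(2))
```

Note that this surrogate discards the reward magnitudes when it reduces the problem to classification; optimizing the rewards directly within the tree search is one of the advantages of the optimal formulation.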

Example Policy Tree for marketing in investment management, which assigns best interaction to maximize fund inflow

Flexible, practical and easy to use

This approach for solving the causal inference problem and learning an interpretable policy is highly flexible. It supports different types of outcomes (including numeric, binary, and survival) as well as different treatment types (categorical, numeric, or multiple continuous-dose treatments), using the best tailored methods in each case.

The software provides a simple API that automatically and carefully handles the nuances of conducting the reward estimation process, allowing practitioners to focus on the machine learning task at hand rather than dealing with the intricacies and complexities of implementation.

Optimal Policy Tree prescribing diabetes treatments, considering multiple continuous-dose treatments such as insulin, metformin, and oral drugs

Want to try Causal Inference and Policy Learning?
We provide free academic licenses and evaluation licenses for commercial use.
We also offer consulting services to develop interpretable solutions to your key problems.

© 2020 Interpretable AI, LLC. All rights reserved.