Press and Publications

A selection of coverage of our work on interpretability

Machine Learning under a Modern Optimization Lens

Our algorithms form the core of the recent graduate-level textbook Machine Learning under a Modern Optimization Lens by co-founders Bertsimas and Dunn. The book details the transformative effect modern optimization is having on the fields of machine learning and artificial intelligence, and guides teaching at leading universities such as MIT.

Selected methodological papers

A selection of the academic papers pioneering our algorithms

Optimal Classification Trees

Dimitris Bertsimas and Jack Dunn

Machine Learning, 2017

The original publication by the co-founders pioneering Optimal Trees. The paper develops the first scalable mixed-integer optimization formulation for training optimal decision trees, and presents empirical results showing that such trees outperform classical methods such as CART.
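
The global-optimality idea can be illustrated in miniature. The sketch below is not the paper's mixed-integer formulation: it simply trains a depth-1 tree (a stump) to exact global optimality by enumerating every feature/threshold split and label assignment, rather than using a greedy impurity criterion. The dataset is made up for illustration.

```python
def optimal_stump(X, y):
    """Return ((feature, threshold, left_label, right_label), error) that
    globally minimizes misclassification over all axis-aligned splits."""
    n, d = len(X), len(X[0])
    best, best_err = None, n + 1
    for j in range(d):                       # every feature
        for t in sorted({row[j] for row in X}):   # every candidate threshold
            left = [y[i] for i in range(n) if X[i][j] <= t]
            right = [y[i] for i in range(n) if X[i][j] > t]
            for ll in (0, 1):                # every leaf labeling
                for rl in (0, 1):
                    err = sum(1 for v in left if v != ll) + \
                          sum(1 for v in right if v != rl)
                    if err < best_err:
                        best_err, best = err, (j, t, ll, rl)
    return best, best_err

X = [[1.0, 5.0], [2.0, 4.0], [3.0, 1.0], [4.0, 2.0]]
y = [0, 0, 1, 1]
split, err = optimal_stump(X, y)
print(split, err)  # (0, 2.0, 0, 1) 0 — feature 0 at 2.0 separates the classes
```

The full method solves this search jointly over all splits of a deeper tree, which is what makes a tailored optimization formulation necessary.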

Optimal Prescriptive Trees

Dimitris Bertsimas, Jack Dunn, and Nishanth Mundru

INFORMS Journal on Optimization, 2019

This paper extends the optimal trees optimization framework to the field of prescriptive decision making. The resulting Optimal Prescriptive Trees learn how to prescribe directly from observational data, and perform competitively with the best black-box methods for the same task.

Optimal Survival Trees

Dimitris Bertsimas, Jack Dunn, Emma Gibson, and Agni Orfanoudaki

Machine Learning, under review

The optimal trees optimization framework is extended to the task of survival analysis. Optimal Survival Trees learn factors that affect survival over a continuous time period, with direct applications to healthcare and predictive maintenance.

Optimal Policy Trees

Maxime Amram, Jack Dunn, and Ying Daisy Zhuo

Machine Learning, under review

Optimal Policy Trees combine methods from the causal inference literature with the global optimization of the Optimal Trees framework. The resulting method yields interpretable prescription policies, is highly scalable, handles both discrete and continuous treatments, and has shown superior performance in multiple experiments.

Sparse high-dimensional regression: Exact scalable algorithms and phase transitions

Dimitris Bertsimas and Bart Van Parys

The Annals of Statistics, 2020

The original publication by our co-founder pioneering Optimal Feature Selection. This paper presents the first scalable approach to exact subset selection in the linear regression problem, and demonstrates superior empirical results compared to existing heuristic approaches.
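
The exact subset selection problem the paper solves can be stated in miniature. The sketch below is not the paper's algorithm (whose cutting-plane approach is precisely what makes the problem scalable to high dimensions): it brute-forces the same problem for a fixed sparsity k by enumerating all k-subsets of features and keeping the least-squares fit with the smallest residual. Data is synthetic.

```python
from itertools import combinations
import numpy as np

def best_subset(X, y, k):
    """Return (support, coefficients) of the exact k-sparse least-squares fit."""
    n, p = X.shape
    best_rss, best = np.inf, None
    for S in combinations(range(p), k):      # every k-subset of features
        Xs = X[:, S]
        beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
        r = y - Xs @ beta
        if r @ r < best_rss:
            best_rss, best = r @ r, (S, beta)
    return best

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 8))
y = 3 * X[:, 1] - 2 * X[:, 5]          # true support is {1, 5}
support, beta = best_subset(X, y, k=2)
print(support)  # (1, 5)
```

Enumeration is exponential in p; the paper's contribution is recovering the same exact solution at scales where enumeration is hopeless.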

Sparse Regression: Scalable algorithms and empirical performance

Dimitris Bertsimas, Jean Pauphilet, and Bart Van Parys

To appear in Statistical Science, 2020

This paper develops a highly scalable heuristic for the Optimal Feature Selection problem that is significantly faster than the original approach without sacrificing performance.

Sparse classification and phase transitions: A discrete optimization perspective

Dimitris Bertsimas, Jean Pauphilet, and Bart Van Parys

Preprint available on arXiv

The Optimal Feature Selection methodology is extended to classification problems, namely logistic regression and support vector machines. Experiments demonstrate superior performance compared to alternative methods.

From predictive methods to missing data imputation: An optimization approach

Dimitris Bertsimas, Colin Pawlowski, and Daisy Zhuo

The Journal of Machine Learning Research, 2017

The original publication by the co-founders pioneering Optimal Imputation. The paper formulates missing data imputation as a joint optimization problem and presents a scalable method for solving it to optimality, demonstrating superior performance relative to the state of the art.
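
A minimal sketch of the predictive-imputation family the title refers to (this is not the paper's Optimal Imputation algorithm): alternately refill each column's missing entries with predictions from a regression on the other columns, iterating until the imputed values stabilize. Missing entries are marked with NaN; the data is synthetic.

```python
import numpy as np

def iterative_impute(X, n_iter=20):
    X = X.copy()
    miss = np.isnan(X)
    col_means = np.nanmean(X, axis=0)
    X[miss] = np.take(col_means, np.nonzero(miss)[1])  # warm start: mean fill
    for _ in range(n_iter):
        for j in range(X.shape[1]):
            rows = miss[:, j]
            if not rows.any():
                continue
            others = np.delete(X, j, axis=1)
            A = np.column_stack([others, np.ones(len(X))])  # add intercept
            beta, *_ = np.linalg.lstsq(A[~rows], X[~rows, j], rcond=None)
            X[rows, j] = A[rows] @ beta  # re-impute column j from the fit
    return X

X = np.array([[1.0, 2.0], [2.0, 4.0], [3.0, np.nan], [4.0, 8.0]])
print(iterative_impute(X)[2, 1])  # recovers ~6.0, since col 1 = 2 * col 0
```

The paper's contribution is to replace this column-by-column heuristic with a single joint optimization over all missing entries at once.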

Interpretable Matrix Completion: A Discrete Optimization Approach

Dimitris Bertsimas and Michael Li

Preprint available on arXiv

The original publication by the co-founders pioneering Interpretable Matrix Completion. The paper uses mixed-integer optimization to formulate the problem of creating an interpretable factorization of a matrix with side information, leading to simple and intuitive recommendation systems.

Fast Exact Matrix Completion: A Unifying Optimization Framework

Dimitris Bertsimas and Michael Li

Preprint available on arXiv

An extension of Interpretable Matrix Completion that develops a fast, scalable stochastic algorithm for solving the matrix completion problem both with and without side information.
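
The underlying task can be sketched in a few lines. The code below is not the paper's exact stochastic algorithm: it fits a rank-r factorization U V^T to the observed entries by plain alternating least squares, a standard heuristic for this problem, then reads predictions for the hidden entries off the completed matrix. The data is a synthetic rank-1 matrix with two entries hidden.

```python
import numpy as np

def als_complete(M, mask, r=1, n_iter=50, reg=1e-6):
    """Complete M (observed where mask is True) with a rank-r factorization."""
    n, m = M.shape
    rng = np.random.default_rng(1)
    U = rng.standard_normal((n, r))
    V = rng.standard_normal((m, r))
    I = reg * np.eye(r)                 # small ridge term keeps solves stable
    for _ in range(n_iter):
        for i in range(n):              # refit each row factor on its observed entries
            c = mask[i]
            U[i] = np.linalg.solve(V[c].T @ V[c] + I, V[c].T @ M[i, c])
        for j in range(m):              # refit each column factor likewise
            rws = mask[:, j]
            V[j] = np.linalg.solve(U[rws].T @ U[rws] + I, U[rws].T @ M[rws, j])
    return U @ V.T

M = np.outer([1.0, 2.0, 3.0], [1.0, 2.0])   # rank-1 ground truth
mask = np.ones_like(M, dtype=bool)
mask[0, 1] = mask[2, 0] = False             # hide two entries
X = als_complete(M, mask, r=1)
print(round(X[0, 1], 2), round(X[2, 0], 2))  # close to the true 2.0 and 3.0
```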

Want to try Interpretable AI software?
We provide free academic licenses and evaluation licenses for commercial use.
We also offer consulting services to develop interpretable solutions to your key problems.

© 2020 Interpretable AI, LLC. All rights reserved.