
The Key to AI for Medicine? Letting Doctors See How It Works

  • Nicole Estvanik Taylor
  • Slice of MIT


Dimitris Bertsimas SM ’87, PhD ’88 will be a speaker at the Feb. 28, 2019, celebration of the new MIT Stephen A. Schwarzman College of Computing. Learn more and watch the live stream at helloworld.mit.edu.


Imagine you are a physician trying to decide whether to perform emergency surgery on a patient. Now, imagine that a computer could tell you with a high level of accuracy the percent likelihood of patient survival. Would you be comfortable basing your decision on that number?

That almost certainly depends—according to MIT Sloan School of Management faculty member Dimitris Bertsimas SM ’87, PhD ’88—on whether the number is emerging from a “black box” algorithm, or whether you, the physician, know what factors and rationale led to that output. Bertsimas, who is the Boeing Leaders for Global Operations Professor of Management and codirector of MIT’s Operations Research Center (ORC), has been working with a group of roughly 30 doctoral students, along with colleagues from local hospitals, to create AI tools that won’t leave medical professionals in the dark.

According to Bertsimas, while the form of AI known as “deep learning” has made significant progress in computer vision, automatic language translation, and voice recognition, such models’ workings are usually opaque to human users. That’s a major obstacle to widespread adoption for high-stakes applications such as autonomous vehicles or personalized medicine. “Interpretability matters,” he emphasizes. An interpretable model allows humans to understand what its algorithms are taking into consideration, even when the human brain is not capable of performing the same complex calculations.

Bertsimas explains that the heuristic approach to prediction known as classification and regression trees (CART), used widely in computing since the 1980s, is relatively easy for humans to understand but not especially high performing. More recent, higher-performing models, by contrast, tend not to be intuitive. “The purpose of my work is to develop algorithms that are both interpretable and state-of-the-art,” Bertsimas says. His group has made progress toward this goal with advances such as optimal classification trees (OCT), described in a paper he coauthored in Machine Learning in 2017 with then-student Jack Dunn PhD ’18. The paper demonstrated that OCT’s approach can outperform CART by 1.2–7 percent, depending on CART’s baseline accuracy and the sufficiency of the training data.
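
To make that distinction concrete, here is a minimal sketch, not drawn from the paper, that uses scikit-learn’s CART-style trees on synthetic data; the dataset and feature names are hypothetical stand-ins. It shows what “interpretable” means in practice: every prediction can be traced through explicit if/else splits that a person can read.

    # A rough, hypothetical illustration (not code from Bertsimas's group): fit a
    # shallow CART-style decision tree on synthetic data and print its rules.
    from sklearn.datasets import make_classification
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Synthetic stand-in data; the feature names below are invented for readability.
    X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
    feature_names = ["age", "creatinine", "heart_rate", "albumin", "wbc_count"]

    # CART grows the tree greedily, one split at a time; capping the depth keeps
    # every decision path short enough for a clinician to read.
    cart = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

    # Each prediction follows explicit, human-readable decision rules.
    print(export_text(cart, feature_names=feature_names))

An optimal classification tree yields the same kind of readable rule structure; the difference, in Bertsimas and Dunn’s formulation, is that all of the splits are chosen jointly as a single optimization problem rather than greedily one split at a time.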


Such advances have already leapt beyond the pages of academic journals into medicine. Bertsimas and Dunn have cofounded, along with ORC alumna Daisy Zhuo PhD ’18, a spinout called Interpretable AI. Its products so far include a clinical decision support app called POTTER, developed with two doctors from Massachusetts General Hospital (MGH). Publicly available and in use at MGH and elsewhere, the tool calculates the risks of emergency and elective surgeries based on an in-app questionnaire. POTTER’s first iteration drew on data from half a million patients nationally. Roughly twice a year, the team reevaluates and recalibrates the model based on new data and medical knowledge, including feedback from doctors who have used it and new drugs entering the market. A second app, called OncoMortality, was developed in collaboration with oncologists from Dana-Farber Cancer Institute and MGH and is being used in hospitals around the country to help medical providers predict risk of mortality among cancer patients.

For Bertsimas, this work has a personal dimension. Still fresh in his mind is the experience of accompanying his father through a stage IV cancer diagnosis, including the difficult decision to switch from chemotherapy to palliative care—a choice that might have been aided by the kinds of tools he is now helping to invent. In cases where treatment is possible, he envisions such technologies assisting in other ways, such as customizing drug selection to different types of cancer.

“The time has come to utilize data to inform medical decisions,” says Bertsimas. “A place like the MIT ORC is well positioned to do that. With our experience with data and algorithms, and working together with doctors, we have an opportunity to develop medicine in a way that is more effective, more data-driven, and far more accurate.”
