Interpretable Machine Learning
The “black box” metaphor commonly refers to our limited understanding of how modern Machine Learning (ML) systems reach their decisions. Researchers are actively working to remedy this situation, which is especially problematic in healthcare, where legal accountability and ethics must be taken into account in the decision-making process. Consequently, the industry cannot yet fully benefit from this new generation of predictive models, despite their proven predictive performance.
In this context, this research project aims to build an interpretability capability for supervised ML models, delivered as an add-on to the Clinical Risk Assessments and Calculators capability developed in an earlier PDH project.
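To make the goal concrete, one common model-agnostic interpretability technique such an add-on could expose is permutation feature importance: shuffle one feature at a time and measure how much a model's score drops. The sketch below is purely illustrative, assuming a hypothetical toy risk model and made-up patient features; it is not the project's actual method or data.

```python
import random

def permutation_importance(model, X, y, metric, n_repeats=10, seed=0):
    """Average score drop when each feature column is shuffled.

    A larger drop means the model relies more on that feature.
    """
    rng = random.Random(seed)
    baseline = metric(model, X, y)
    n_features = len(X[0])
    importances = []
    for j in range(n_features):
        drops = []
        for _ in range(n_repeats):
            # Shuffle column j while leaving the other columns intact.
            col = [row[j] for row in X]
            rng.shuffle(col)
            X_perm = [row[:j] + [col[i]] + row[j + 1:]
                      for i, row in enumerate(X)]
            drops.append(baseline - metric(model, X_perm, y))
        importances.append(sum(drops) / n_repeats)
    return importances

# Hypothetical toy "clinical risk" rule: high risk when age > 60
# or systolic blood pressure > 140. The third feature is pure noise.
def model(row):
    age, bp, _noise = row
    return 1 if age > 60 or bp > 140 else 0

def accuracy(model, X, y):
    return sum(model(row) == label for row, label in zip(X, y)) / len(y)

# Made-up feature rows: [age, systolic_bp, noise]
X = [[70, 120, 5], [50, 150, 2], [40, 110, 9], [65, 145, 1],
     [55, 130, 7], [30, 100, 3], [80, 160, 8], [45, 115, 4]]
y = [model(row) for row in X]  # labels generated by the same rule

print(permutation_importance(model, X, y, accuracy))
```

On this toy data the noise column gets an importance of exactly zero (the model never reads it), while the age and blood-pressure columns get positive scores, which is the kind of ranking a clinician-facing interpretability layer would surface.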