Interpretable Machine Learning
The “black box” metaphor is commonly used to refer to the lack of understanding of how modern Machine Learning (ML) systems make decisions.

Researchers are actively working to remedy this, and the problem is especially acute in healthcare, where legal accountability and ethics carry more weight in decision-making than in many other domains. Consequently, the lack of interpretability means the industry cannot yet fully benefit from this new generation of highly performant predictive models.
Data scientists spend considerable time developing innovative models, but unless clinicians understand how and why those models make their predictions, they will not use them. Providing explanations alongside a model makes it easier for clinicians to understand and trust our work, which in turn makes it easier to integrate machine learning models into healthcare settings and deliver real benefits to clinicians and patients.
This PDH project aims to deliver a framework for building interpretability into machine learning models. The team are currently exploring a variety of interpretability techniques that produce visual explanations of a model's behaviour, helping data scientists, clinicians and anyone else interested to understand how the models work and to make any necessary changes; a small illustrative sketch of one such technique follows.
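The project does not prescribe a single technique, so as a purely illustrative example, the sketch below shows permutation feature importance, one common way to produce a visual explanation of a tabular model. The dataset, model choice and feature names here are assumptions for demonstration only, not the project's own code or data.

```python
# Minimal sketch of a visual explanation via permutation feature importance.
# The synthetic dataset and model below are illustrative assumptions only.
import matplotlib.pyplot as plt
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a clinical tabular dataset.
X, y = make_classification(n_samples=500, n_features=6, n_informative=3, random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: how much does test accuracy drop when each feature is shuffled?
result = permutation_importance(model, X_test, y_test, n_repeats=30, random_state=0)
order = np.argsort(result.importances_mean)

# Horizontal bar chart: a simple visual explanation a clinician can inspect.
plt.barh([feature_names[i] for i in order], result.importances_mean[order],
         xerr=result.importances_std[order])
plt.xlabel("Mean decrease in accuracy when feature is permuted")
plt.title("Permutation feature importance (illustrative)")
plt.tight_layout()
plt.show()
```

A plot like this gives a model-agnostic, at-a-glance view of which inputs the model relies on most, which is the kind of visual explanation the project is exploring; other techniques (for example local explanation methods) could be swapped in within the same framework.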