By Anna Spyker, Software Engineer

Recently I attended the IEEE CBMS Conference in Cordoba, Spain to connect with other researchers, data scientists and clinicians, and to share the work we’ve been doing on interpretable machine learning.
I felt an incredible sense of community at the conference. As it was a smaller conference, there was a lot of opportunity to connect with others and understand their work on a deeper level, including the challenges they’d overcome.
I presented in the research track “Artificial Intelligence for Healthcare: from black box to explainable models”.
The “black box” metaphor is commonly used to describe our lack of understanding of how modern Machine Learning (ML) systems make decisions. Many researchers are actively trying to remedy this situation, which is especially problematic in healthcare, largely because legal accountability and ethics carry greater weight in healthcare decision-making than in many other domains. Consequently, the lack of interpretability means the health industry cannot yet fully benefit from this new generation of predictive models, which have proven to be highly performant.
Unless clinicians understand how and why a machine learning model makes its predictions, they won’t use it. Providing explanations alongside a model makes it easier for clinicians to understand and trust our work. That, in turn, makes it easier to integrate machine learning models into healthcare settings and deliver real benefits to clinicians and patients.
The research project I presented delivers a framework for machine learning interpretability. We explored a variety of interpretability techniques that produce visual explanations of a model’s behaviour. Using these findings, we added interpretability to a black-box clinical risk calculator that predicts the risk of hospital readmission within 30 days of discharge.
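The specific techniques are covered in the paper, but to illustrate the general idea, the sketch below applies permutation importance (one common model-agnostic attribution method, not necessarily the one we used) to a toy readmission-style model. The feature names and synthetic data are hypothetical, purely for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 500

# Hypothetical clinical features (synthetic, for illustration only)
age = rng.normal(65, 10, n)
prior_admissions = rng.poisson(1.5, n)
length_of_stay = rng.gamma(2.0, 2.0, n)
X = np.column_stack([age, prior_admissions, length_of_stay])

# Synthetic 30-day readmission label, driven mainly by prior admissions
logit = -3 + 0.9 * prior_admissions + 0.02 * (age - 65)
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

# Permutation importance: how much does shuffling each feature hurt accuracy?
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, imp in zip(["age", "prior_admissions", "length_of_stay"],
                     result.importances_mean):
    print(f"{name}: {imp:.3f}")
```

Plotting `result.importances_mean` as a bar chart gives the kind of visual explanation a clinician can inspect alongside a prediction, showing which inputs the model leans on most.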
This framework is currently being implemented at a large hospital in Auckland, where it will help clinicians understand how the models work and why they make certain predictions. Although the presentations in this stream were all on the same topic, it was clear that our research was one of the few that have been translated into clinical practice.
My presentation was very well received, and I was pleased to have one of the largest audiences. Speaking with many attendees afterwards, I found there was a lot of interest in what we plan to do next with our work.
I connected with others working on similar projects in their fields, who recommended which interpretation methods to apply and which to avoid. Connecting with people doing interpretable machine learning research at their respective universities and research centres validated our current work and helped us plan what future work could look like. It was also interesting that others shared their open source repositories, a good way for researchers to get feedback on their work.
There were many parallels between the work presented at the conference and our current PDH projects, and it was informative to hear how other people have solved problems we are currently facing. It was a great environment for sharing ideas with people all working towards a similar goal: improving computer-based healthcare.