Understanding machine learning classifier decisions in automated radiotherapy quality assurance

Abstract

The complexity of generating radiotherapy treatments demands a rigorous quality assurance (QA) process to ensure patient safety and to avoid clinically significant errors. Machine learning classifiers have been explored to augment the scope and efficiency of the traditional radiotherapy treatment planning QA process. However, one important gap in relying on classifiers for QA of radiotherapy treatment plans is the lack of understanding behind a specific classifier prediction. We develop explanation methods to understand the decisions of two automated QA classifiers: (1) a region of interest (ROI) segmentation/labeling classifier, and (2) a treatment plan acceptance classifier. For each classifier, a local interpretable model-agnostic explanation (LIME) framework and a novel adaptation of a team-based Shapley values framework are constructed. We test these methods on datasets from two radiotherapy treatment sites (prostate and breast), and demonstrate the importance of evaluating QA classifiers using interpretable machine learning approaches. We additionally develop a notion of explanation consistency to assess classifier performance. Our explanation method allows for easy visualization and human expert assessment of classifier decisions in radiotherapy QA. Notably, we find that our team-based Shapley approach is more consistent than LIME. The ability to explain and validate automated decision-making is critical in medical treatments. This analysis allows us to conclude that both QA classifiers are moderately trustworthy and can be used to confirm expert decisions, though the current QA classifiers should not be viewed as a replacement for the human QA process.
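To make the two explanation frameworks concrete, the sketch below illustrates LIME and exact Shapley-value feature attributions for a generic plan-acceptance classifier. It is a minimal illustration only, not the paper's implementation: the feature names, synthetic data, mean-imputation value function, and use of the `lime` package are assumptions, and the team-based adaptation of Shapley values described in the paper is not reproduced here.

```python
import itertools
from math import factorial

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)

# Hypothetical plan-level features (e.g. dose-volume metrics); names and data
# are illustrative stand-ins, not the features used in the paper.
feature_names = ["ptv_d95", "rectum_v70", "bladder_v65", "monitor_units"]
X = rng.normal(size=(500, len(feature_names)))
y = (X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
x = X[0]  # the plan whose "accept/reject" prediction we want to explain

# --- LIME: fit a local weighted linear surrogate around the instance x ---
explainer = LimeTabularExplainer(
    X, feature_names=feature_names, class_names=["reject", "accept"],
    mode="classification",
)
lime_exp = explainer.explain_instance(x, clf.predict_proba, num_features=4)
print("LIME feature weights:", lime_exp.as_list())

# --- Exact Shapley values with a mean-imputation value function ---
def shapley_values(predict, x, background):
    """Exact Shapley attribution for each feature of x.

    Features absent from a coalition are replaced by the background mean;
    enumerating all 2^n coalitions is feasible only for a few features.
    """
    n = len(x)
    base = background.mean(axis=0)

    def value(coalition):
        z = base.copy()
        z[list(coalition)] = x[list(coalition)]
        return predict(z.reshape(1, -1))[0, 1]  # P(accept)

    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in itertools.combinations(others, k):
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += w * (value(S + (i,)) - value(S))
    return phi

phi = shapley_values(clf.predict_proba, x, X)
for name, contrib in zip(feature_names, phi):
    print(f"{name}: {contrib:+.3f}")
```

In both cases the output is a per-feature contribution to the predicted acceptance probability, which a human expert can inspect against clinical intuition; the explanation-consistency notion in the paper compares such attributions across similar plans.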

Publication
Physics in Medicine and Biology
Dionne M. Aleman, PhD, PEng
Professor of Industrial Engineering
