Explaining Clinical Outcome Predictions with Compositional Rationale Extraction Model

Published in 2025 IEEE International Conference on Big Data (BigData), 2025

Clinical notes provide comprehensive patient information and serve as valuable resources for predicting clinical outcomes. Although advanced deep learning models demonstrate strong predictive performance on clinical notes, their adoption in practice is largely limited by their black-box nature. Explainability in AI models is critical for their adoption in high-risk domains such as healthcare. In this paper, we propose a COmpositional Rationale Extraction model (CORE), which provides faithful, sufficient, and plausible explanations for model predictions. CORE adopts a rationale-extraction framework, using only the extracted rationales for prediction, and is thus inherently faithful. CORE extracts rationales at the composition level, capturing domain knowledge and semantic units for sufficiency and plausibility. CORE also generates document-level rationales that capture information interactions in long documents across the token limits of pretrained language models. Empirical evaluation validates CORE's performance advantages over state-of-the-art rationale extraction methods.
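The faithfulness-by-construction idea behind the rationale-extraction framework can be illustrated with a toy sketch (this is not CORE's actual architecture; the scoring and prediction functions below are hypothetical stand-ins): an extractor selects a subset of the input as the rationale, and the predictor sees only that subset, so the rationale fully determines the prediction.

```python
# Toy select-then-predict pipeline (hypothetical illustration, not CORE itself):
# the predictor receives ONLY the extracted rationale, so the explanation
# is faithful by construction.

def extract_rationale(tokens, scores, k):
    """Select the k highest-scoring tokens, kept in original order, as the rationale."""
    top = sorted(range(len(tokens)), key=lambda i: scores[i], reverse=True)[:k]
    return [tokens[i] for i in sorted(top)]

def predict(rationale, risk_terms):
    """Toy predictor: flags the note if any risk term appears in the rationale."""
    return int(any(t in risk_terms for t in rationale))

tokens = ["patient", "denies", "chest", "pain", "but", "reports", "severe", "dyspnea"]
scores = [0.10, 0.20, 0.70, 0.80, 0.05, 0.30, 0.90, 0.95]  # made-up relevance scores

rationale = extract_rationale(tokens, scores, k=3)
label = predict(rationale, risk_terms={"dyspnea"})
print(rationale, label)  # → ['pain', 'severe', 'dyspnea'] 1
```

In this sketch the rationale is a flat top-k of tokens; CORE instead operates at the composition level and aggregates document-level rationales, but the select-then-predict contract that makes the explanation faithful is the same.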

Recommended citation: Gao, W., Grossman, R. & Chen, Y. (2025). Explaining Clinical Outcome Prediction with Compositional Rationale Extraction Model. In 2025 IEEE International Conference on Big Data (BigData).
Download Paper