## Highlight: Interpretability, transparency and safe AI
Predictive models are an integral part of medical image analysis, where they currently form the backbone not only of image-guided diagnosis and prognosis, but also of image-processing tasks such as denoising, segmentation and registration. Because the outputs of these models are used to improve diagnosis and treatment, as well as to enhance our anatomical knowledge, it is crucial that the models also come with information on their own limitations. We develop methods for quantifying uncertainty, for interpretability, and for detecting bias and enhancing fairness in predictive models for medical imaging.
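
One common way to let a predictive model report its own limitations is to compute per-pixel predictive entropy over several stochastic forward passes (e.g. Monte Carlo dropout or an ensemble). The sketch below is purely illustrative and not a description of any specific method from our group; the ensemble of softmax outputs is simulated with random numbers.

```python
import numpy as np

def predictive_entropy(prob_stack):
    """Per-pixel predictive entropy from an ensemble of class-probability maps.

    prob_stack: array of shape (n_samples, n_classes, H, W), where each sample
    is a softmax output from one stochastic forward pass (hypothetical model).
    Returns an (H, W) uncertainty map; higher entropy means lower confidence.
    """
    mean_p = prob_stack.mean(axis=0)   # average the ensemble: (n_classes, H, W)
    eps = 1e-12                        # guard against log(0)
    return -(mean_p * np.log(mean_p + eps)).sum(axis=0)

# Toy example: a 2-class "segmentation" of a 2x2 image, 3 stochastic passes.
rng = np.random.default_rng(0)
logits = rng.normal(size=(3, 2, 2, 2))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
uncertainty_map = predictive_entropy(probs)   # shape (2, 2)
```

High-entropy regions in such a map flag pixels where the model's segmentation should not be trusted without review, which is the kind of limitation-awareness described above.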