Predictive models are an integral part of medical image analysis, forming the backbone not only of image-guided diagnosis and prognosis, but also of image processing tasks such as segmentation (IPMI'21, ICLR'23) and registration (NeurIPS'21) that are widely adopted even in the clinic. The output of these models is used to improve diagnosis and treatment, as well as to enhance our anatomical knowledge. It is therefore crucial that these models also come with information on their own limitations. We develop methods for uncertainty quantification (IPMI'21, ICLR'23), interpretability (arXiv'22), and for detecting bias and enhancing fairness in predictive models for medical imaging (MICCAI'22); a small illustrative sketch of the uncertainty quantification idea follows the example references below.

For example see:
* [Zepf et al: That Label's Got Style, International Conference on Learning Representations (ICLR) 2023](https://openreview.net/forum?id=wZ2SVhOTzBX)
...
...
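
One common, generic way for a predictive model to report its own limitations is to attach a per-pixel uncertainty estimate to its output. The sketch below uses Monte Carlo dropout for this purpose; it is an illustrative assumption, not the specific method of the cited papers, and the toy model, input shape, and sample count are hypothetical.

```python
# Illustrative sketch only: Monte Carlo dropout as one generic way to attach
# per-pixel uncertainty estimates to a segmentation model. The toy model and
# settings are hypothetical, not taken from the cited papers.
import torch
import torch.nn as nn


def mc_dropout_uncertainty(model: nn.Module, image: torch.Tensor, n_samples: int = 20):
    """Return the mean prediction and per-pixel predictive std via MC dropout."""
    model.eval()
    # Keep dropout layers active at inference time so repeated forward passes differ.
    for module in model.modules():
        if isinstance(module, nn.Dropout):
            module.train()

    with torch.no_grad():
        samples = torch.stack(
            [torch.sigmoid(model(image)) for _ in range(n_samples)], dim=0
        )
    return samples.mean(dim=0), samples.std(dim=0)


# Hypothetical usage with a toy network and a single-channel 2D image.
toy_model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Dropout(p=0.5),
    nn.Conv2d(8, 1, kernel_size=3, padding=1),
)
mean_pred, uncertainty = mc_dropout_uncertainty(toy_model, torch.randn(1, 1, 64, 64))
```

Regions where `uncertainty` is large are places where the model's prediction should be treated with caution, which is exactly the kind of self-reported limitation discussed above.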
## Topology aware learning for medical imaging

While most modern medical image analysis takes a very local, pixel-focused approach to problems such as image segmentation or registration, such approaches often lead to suboptimal performance when viewed globally, in the sense that topological constraints naturally inherent in the data are violated. This can take the form of segmented structures with an incorrect topology, or of image registration algorithms misrepresenting the topology of the underlying anatomy. Our research includes topology-aware deep learning models for medical image processing.
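
To make the kind of violation concrete, the sketch below checks the simplest topological invariant, the number of connected components (the 0-th Betti number), of a predicted binary mask against a reference mask. It only illustrates the constraint that topology-aware models are meant to enforce; it is not the group's method, and the masks and helper name are made up for the example.

```python
# Illustrative sketch only: a connected-component check that flags the kind of
# topological error discussed above, e.g. a structure that should be a single
# connected region being split into fragments by a pixel-wise prediction.
import numpy as np
from scipy import ndimage


def component_count_mismatch(pred_mask: np.ndarray, ref_mask: np.ndarray) -> int:
    """Difference in the number of connected components (0-th Betti number)
    between a predicted binary mask and a reference mask."""
    _, n_pred = ndimage.label(pred_mask)
    _, n_ref = ndimage.label(ref_mask)
    return abs(n_pred - n_ref)


# Hypothetical example: the reference is one connected blob, but the prediction
# has split it into two pieces, so the mismatch is 1 even though most pixels agree.
ref = np.zeros((8, 8), dtype=int)
ref[2:6, 2:6] = 1
pred = ref.copy()
pred[:, 4] = 0  # cut the blob in two
print(component_count_mismatch(pred, ref))  # -> 1
```

A purely pixel-wise loss would barely notice the removed column of pixels, whereas a topology-aware objective would penalize the change in connectivity directly.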