Research

Our aim is to advance and translate methods and applications for the processing and analysis of medical imaging data using artificial intelligence (AI) and machine learning (ML).

We envision imaging applications and computing technology that provide reliable, explainable and human-interpretable solutions, enable the integration of AI into clinical practice, and support patient-centered workflows for the comprehensive diagnosis and management of neurological, cardiovascular and oncological patients.


Research focuses

1. Artificial intelligence-based data processing

We investigate the application of AI methods in acquisition, reconstruction, post-processing and analysis, and develop reliable, robust, specific, and sensitive methods for this purpose. Including AI in medical data processing can improve performance by, among other things, increasing precision, boosting quality of service, simplifying processing, reducing computation times, and reducing energy consumption.

As the demand for medical imaging grows, imaging-related energy consumption and sustainable operation are becoming pressing concerns for radiology departments and practices. The development of efficient acquisition strategies with multi-parametric and dynamic imaging is therefore essential. In this regard, one-stop-shop imaging solutions and AI-based processing (reconstruction, motion handling, image quality control) enable a more comprehensive non-invasive characterization of tissue and metabolism.

The aim is to provide an improved workflow with automatic data handling to derive clinical biomarkers that can be used in diagnosis.

2. Patient-centered and epidemiological cohort analysis

A patient-centered analysis combines information from imaging, genetics, laboratory data, metadata and expert knowledge to derive clinical biomarkers that can be used in diagnosis or therapy monitoring. In the context of large epidemiological studies, manual image analysis is often not feasible due to the overwhelming amount of data. The aim is to provide improved and automated workflows using AI-based processing while also investigating their role and impact. To this end, causal and explainable models are investigated for segmentation, volumetric measurements, textural composition, biological age estimation and treatment response prediction in various organs, tissues and pathologies of interest. To guarantee reliable image quality, data quality control checks and countermeasures are included. We investigate the causal relationships that lead to the AI-based findings and identify confounding factors by disentangling independent variables.

3. Translation of AI to clinical applications

We aim to implement state-of-the-art AI methods for clinical applications to support clinicians in their daily work. These projects enable automated processing, simplified workflows and patient-phenotypic processing and analysis. Imaging and non-imaging data are coherently processed to guide and monitor patients with neurological, cardiovascular and oncological diseases. One example use case is the automatic analysis of whole-body imaging data such as PET/CT. In this context, we launched an ML challenge: autoPET.

Research projects

A selection of our ongoing research projects:

Magnetic resonance imaging (MRI) is crucial in medical diagnostics, offering high soft-tissue contrast. However, extended examination times pose a challenge. Accelerating MRI data acquisition by reducing the number of phase-encoding steps is therefore a key focus, but this sub-Nyquist sampling introduces aliasing artifacts. Current research emphasizes deep learning-based reconstruction methods that aim to increase imaging speed while preserving image quality (a minimal illustration of the undersampling problem follows the list below). This intersection of MRI and deep learning holds promise for faster and more efficient diagnostic imaging, benefiting both patients and healthcare professionals.

  • Sampling trajectories
  • Compressed sensing
  • Deep learning-based image reconstruction
  • Motion-compensated reconstruction
  • Low-rank + sparse methods
  • Self-supervised learning
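
As a minimal illustration of the problem these methods address, the NumPy sketch below retrospectively undersamples the phase-encoding direction of a 2D image and performs a zero-filled reconstruction; the resulting aliasing is what learned reconstruction methods aim to remove. The phantom, acceleration factor and sampling mask are illustrative assumptions, not part of our pipeline.

    import numpy as np

    def undersample_kspace(image, acceleration=4, num_low_freq=16, seed=0):
        """Retrospectively undersample phase-encoding lines of a 2D image.

        Returns the zero-filled reconstruction, whose aliasing artifacts are
        what learned reconstruction methods aim to remove.
        """
        rng = np.random.default_rng(seed)
        kspace = np.fft.fftshift(np.fft.fft2(image))   # fully sampled k-space
        ny = kspace.shape[0]

        mask = rng.random(ny) < 1.0 / acceleration      # random phase-encoding lines
        center = ny // 2
        mask[center - num_low_freq // 2: center + num_low_freq // 2] = True  # keep k-space center

        undersampled = kspace * mask[:, None]           # discard unsampled lines
        zero_filled = np.fft.ifft2(np.fft.ifftshift(undersampled))
        return np.abs(zero_filled), mask

    # Example: a synthetic "phantom" shows the aliasing introduced by 4x undersampling.
    phantom = np.zeros((256, 256))
    phantom[96:160, 96:160] = 1.0
    aliased, mask = undersample_kspace(phantom, acceleration=4)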

Magnetic resonance angiography (MRA) is a non-invasive alternative for the assessment of vessels. It can be used to diagnose, for example, narrowing or blockage of blood vessels, aneurysms, or aortic dissection. The main disadvantage of MRA compared with, for example, CT angiography is the long acquisition time required to acquire high-resolution data over a large field of view. To shorten acquisition times, accelerated magnetic resonance imaging techniques are often used, including deep learning-based image reconstruction.

  • Cardiac magnetic resonance imaging 
  • Model-based image reconstruction
  • Physics-driven deep learning
  • Magnetic resonance angiography 


Patient, respiratory and cardiac motion produces artefacts in the final image that reduce image quality and impair reliable diagnosis. The task is to accurately detect, resolve and correct this motion during acquisition, reconstruction and post-processing. MR images with high spatial and temporal resolution need to be acquired over several respiratory and cardiac cycles under free-movement conditions. Motion-resolved images are then reconstructed taking into account a derived motion model and surrogate signals that guide it. Motion models are estimated by image registration (conventional and deep learning-based; the underlying warping step is sketched after the list below). Motion estimation from highly accelerated images can be paired with image reconstruction for faster acquisition without compromising image quality.

  • Motion correction
  • Optical-flow estimation
  • Deep-learning image registration
  • Deep-learning motion-compensated image reconstruction
  • Random walks
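
The PyTorch sketch below shows only the warping step that both conventional and deep-learning registration rely on: an image is resampled along a dense displacement field. The random field stands in for the output of a registration network; function names and sizes are illustrative.

    import torch
    import torch.nn.functional as F

    def warp(image, displacement):
        """Warp a batch of 2D images with a dense displacement field.

        image:        (N, C, H, W) tensor
        displacement: (N, 2, H, W) tensor, in pixels (dx, dy)
        """
        n, _, h, w = image.shape
        # Identity sampling grid in normalized [-1, 1] coordinates.
        theta = torch.eye(2, 3).unsqueeze(0).repeat(n, 1, 1)
        grid = F.affine_grid(theta, image.shape, align_corners=True)   # (N, H, W, 2)

        # Convert pixel displacements to normalized coordinates and add them to the grid.
        disp = displacement.permute(0, 2, 3, 1)                        # (N, H, W, 2)
        disp = disp / torch.tensor([(w - 1) / 2.0, (h - 1) / 2.0])
        return F.grid_sample(image, grid + disp, align_corners=True)

    # Example: apply a small random "motion field" to a dummy image.
    img = torch.rand(1, 1, 128, 128)
    field = 2.0 * torch.randn(1, 2, 128, 128)
    moved = warp(img, field)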

MRI acquisitions can be accelerated using physics-driven, deep learning-based reconstructions; for low-field MRI, these reconstructions can also improve image quality. Commonly performed acquisitions are water-fat separated to reduce artifacts originating from water-fat interference. They improve image contrast by suppressing the fat signal, which usually appears bright due to its short T1 relaxation time. Additionally, acquiring separate water and fat signals can be desirable, especially in the case of non-alcoholic fatty liver disease. Accurate fat quantification requires the acquisition of multiple shifted echoes, which results in long acquisition times and low SNR. Incorporating the physical signal model into the reconstruction helps to learn and map the correlations between the measured echoes (a simplified voxel-wise signal model and fit are sketched after the list below).

  • Liver imaging
  • Model-based image reconstruction
  • Physics-driven deep learning
  • Fat fraction quantification and R2* mapping
  • Quantitative MR
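
A minimal, voxel-wise sketch of the underlying signal model is given below, assuming a simplified single-peak fat spectrum at roughly -440 Hz (typical for 3 T); actual processing uses multi-peak spectra and more sophisticated fitting. The echo times and tissue parameters are simulated for illustration.

    import numpy as np
    from scipy.optimize import curve_fit

    # Simplified single-peak fat model (real pipelines typically use a multi-peak
    # fat spectrum); a water-fat shift of about -440 Hz is assumed for 3 T.
    DF_FAT_HZ = -440.0

    def signal_magnitude(te, water, fat, r2star):
        """Magnitude of a multi-echo gradient-echo signal for given W, F, R2*."""
        s = (water + fat * np.exp(2j * np.pi * DF_FAT_HZ * te)) * np.exp(-r2star * te)
        return np.abs(s)

    # Simulate six echoes (TE in seconds) for a voxel with 30% proton-density fat fraction.
    te = np.arange(1.2e-3, 8e-3, 1.2e-3)
    measured = signal_magnitude(te, water=0.7, fat=0.3, r2star=40.0)

    # Voxel-wise least-squares fit; W, F and R2* are the free parameters.
    (w_hat, f_hat, r2_hat), _ = curve_fit(
        signal_magnitude, te, measured, p0=(0.5, 0.5, 30.0),
        bounds=([0, 0, 0], [np.inf, np.inf, 500.0]),
    )
    pdff = f_hat / (w_hat + f_hat)   # proton-density fat fraction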

Magnetic resonance imaging (MRI) allows for the acquisition of both qualitative and quantitative images. T1 mapping is an example of such a quantitative imaging technique, allowing for non-invasive tissue characterization. T1 mapping is commonly used during cardiac MRI exams, where a specific acquisition scheme, the Modified Look-Locker inversion recovery (MOLLI) scheme, is utilized. This scheme is restrictive in terms of scan time and the spatial resolution of the resulting T1 maps. Deep learning-based image reconstruction can be used to overcome these restrictions, resulting in T1 maps with higher spatial resolution (the voxel-wise fitting step is sketched after the list below).

  • Cardiac magnetic resonance
  • Deep learning-based image reconstruction
  • Quantitative imaging
  • T1 mapping
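
The sketch below illustrates the standard voxel-wise MOLLI post-processing: a three-parameter fit S(TI) = A - B * exp(-TI / T1*), followed by the Look-Locker correction T1 = T1* * (B / A - 1). Inversion times and signal values are simulated for illustration.

    import numpy as np
    from scipy.optimize import curve_fit

    def molli_model(ti, a, b, t1_star):
        """Three-parameter inversion-recovery model fitted per voxel: S(TI) = A - B * exp(-TI / T1*)."""
        return a - b * np.exp(-ti / t1_star)

    # Simulated inversion times (s) of a MOLLI-like acquisition and a voxel with T1* = 0.9 s.
    ti = np.array([0.1, 0.18, 1.1, 1.18, 2.1, 2.18, 3.1, 4.1])
    signal = molli_model(ti, a=1.0, b=1.9, t1_star=0.9)

    (a_hat, b_hat, t1_star_hat), _ = curve_fit(molli_model, ti, signal, p0=(1.0, 2.0, 1.0))

    # Look-Locker correction converts the apparent T1* into T1.
    t1 = t1_star_hat * (b_hat / a_hat - 1.0)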

Epidemiological and large-scale cohort studies such as the UK Biobank and NAKO enable the investigation of patient-specific biomarkers, including individual aging. Age is a crucial factor for characterizing a person in a medical context, but aging rates vary among individuals. To better account for differences in aging, the concept of biological age is introduced. We apply this concept in an organ-specific manner to consider varying aging patterns across different organs. Deep learning-based methods are employed to explore imaging and non-imaging information (laboratory and genetic information, etc.) in these large databases to identify age-specific patterns in the data and estimate biological age (an uncertainty-aware regression sketch follows the list below).

  • Age prediction in MR images using uncertainty estimation
  • Correlation of imaging and non-imaging data for biological age estimation
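
A common way to obtain such an uncertainty estimate is a heteroscedastic regression head trained with a Gaussian negative log-likelihood; the PyTorch sketch below shows this idea with a placeholder encoder and random features standing in for a real imaging network.

    import torch
    import torch.nn as nn

    class AgeRegressor(nn.Module):
        """Predicts mean age and a per-subject variance from a feature vector.

        The encoder here is a placeholder; in practice it would be a CNN
        operating on the MR volume.
        """
        def __init__(self, feature_dim=128):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(feature_dim, 64), nn.ReLU())
            self.mean_head = nn.Linear(64, 1)
            self.logvar_head = nn.Linear(64, 1)   # log-variance for numerical stability

        def forward(self, x):
            h = self.encoder(x)
            return self.mean_head(h), self.logvar_head(h)

    model = AgeRegressor()
    criterion = nn.GaussianNLLLoss()

    features = torch.randn(8, 128)              # stand-in for image-derived features
    age = torch.rand(8, 1) * 60 + 20            # chronological ages in years

    mean, logvar = model(features)
    loss = criterion(mean, age, logvar.exp())   # heteroscedastic Gaussian likelihood
    loss.backward()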

MR imaging offers crucial information for guiding medical decisions. Nonetheless, patient motion, including breathing or rigid movements, remains a primary external factor contributing to the degradation of image quality. Evaluating image quality is essential to minimize the risk of artifacts propagating to downstream tasks. Manual inspection is time-consuming, costly, and impractical for large-scale cohort data such as the UK Biobank or NAKO. Hence, there is a need for an automated approach to assess image quality. The obtained quality scores can be used for retrospective data categorization and correction, or for online quality assessment directly at the scanner to enable immediate rescans of artifact-affected regions (a contrastive-learning sketch follows the list below).

  • Self-supervised contrastive learning
  • Local and global quality score
  • Online quality assessment
  • Retrospective artifact correction
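
The core of the self-supervised contrastive setup is a loss over two views of the same data (e.g. different crops, or artifact-augmented copies). The sketch below is a generic SimCLR-style NT-Xent implementation with random embeddings standing in for an encoder; it is not our specific quality-assessment model.

    import torch
    import torch.nn.functional as F

    def nt_xent(z1, z2, temperature=0.1):
        """NT-Xent contrastive loss for two batches of embeddings (SimCLR-style).

        z1, z2: (N, D) embeddings of two views of the same volumes,
        e.g. different crops or differently (artifact-)augmented versions.
        """
        z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)        # (2N, D)
        sim = z @ z.t() / temperature                              # cosine similarities
        n = z1.shape[0]

        # Mask out self-similarities so they cannot act as positives or negatives.
        mask = torch.eye(2 * n, dtype=torch.bool)
        sim = sim.masked_fill(mask, float('-inf'))

        # The positive for sample i is its other view at index (i + N) mod 2N.
        targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
        return F.cross_entropy(sim, targets)

    # Example with random embeddings standing in for an encoder's outputs.
    loss = nt_xent(torch.randn(16, 64), torch.randn(16, 64))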

Many factors, such as the scanner, the acquisition site, or conditions like artifacts and patient compliance, influence the resulting MR image. Consequently, DL models trained on such data tend to learn shortcuts and spurious correlations instead of task-specific features based on anatomical information. Since many medical databases contain confounding factors, it is important to improve model robustness to achieve fair and stable predictions in every environment. Understanding and identifying causal dependencies between variables (imaging and non-imaging data) can help to explain the AI-based reasoning (one possible adversarial training setup is sketched after the list below).

  • Causal analysis
  • Confounder-free predictions
  • Causal inference 
  • Feature disentanglement
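
One possible way to obtain confounder-free predictions is adversarial training with a gradient-reversal layer: an auxiliary head tries to predict the confounder (e.g. acquisition site), while reversed gradients push the encoder to discard that information. The PyTorch sketch below uses placeholder networks and random data and is only meant to illustrate the mechanism.

    import torch
    import torch.nn as nn

    class GradientReversal(torch.autograd.Function):
        """Identity in the forward pass; flips the gradient sign in the backward pass."""
        @staticmethod
        def forward(ctx, x, lam):
            ctx.lam = lam
            return x.view_as(x)
        @staticmethod
        def backward(ctx, grad_output):
            return -ctx.lam * grad_output, None

    encoder = nn.Sequential(nn.Linear(32, 16), nn.ReLU())     # placeholder feature extractor
    task_head = nn.Linear(16, 1)                               # e.g. a disease score
    confounder_head = nn.Linear(16, 3)                         # e.g. acquisition site (3 sites)

    x = torch.randn(8, 32)                                     # stand-in for image features
    y = torch.randn(8, 1)
    site = torch.randint(0, 3, (8,))

    features = encoder(x)
    task_loss = nn.functional.mse_loss(task_head(features), y)
    # The confounder branch sees gradient-reversed features, so minimizing its loss
    # pushes the encoder towards features that carry no site information.
    site_logits = confounder_head(GradientReversal.apply(features, 1.0))
    site_loss = nn.functional.cross_entropy(site_logits, site)
    (task_loss + site_loss).backward()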

Automatic segmentation of organs or tissue compartments in whole-body imaging is an important prerequisite for any further analysis. We develop deep learning-based algorithms for the automatic detection of landmarks and the segmentation of organs, tissue compartments or tumors. Our work focuses on the development and transfer of computer vision and machine learning techniques to achieve accurate results with a small number of labels and to enable generalization to new environments (a typical training criterion is sketched after the list below).

  • Self-supervision and contrastive learning
  • Weakly- and semi-supervised learning for computer vision tasks
  • Attention mechanisms and graph neural networks
  • Domain adaptation and domain generalization
  • Geometrical and statistical shape analysis
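
Segmentation networks in this setting are typically trained and evaluated with overlap measures such as the Dice coefficient; a minimal soft Dice loss for binary masks is sketched below (the random prediction and mask are placeholders).

    import torch

    def soft_dice_loss(logits, target, eps=1e-6):
        """Soft Dice loss for binary segmentation.

        logits: (N, 1, H, W) raw network outputs
        target: (N, 1, H, W) binary ground-truth masks
        """
        probs = torch.sigmoid(logits)
        dims = (1, 2, 3)
        intersection = (probs * target).sum(dims)
        union = probs.sum(dims) + target.sum(dims)
        dice = (2 * intersection + eps) / (union + eps)
        return 1.0 - dice.mean()

    # Example with a random prediction and mask.
    pred = torch.randn(2, 1, 64, 64, requires_grad=True)
    mask = (torch.rand(2, 1, 64, 64) > 0.5).float()
    loss = soft_dice_loss(pred, mask)
    loss.backward()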

The task of translating medical images between different domains has numerous useful applications. One example is the correction and restoration of artifact-corrupted images. Another potential application is the generation of novel image data between different modalities. The work focuses on the development and refinement of medical image translation frameworks and their applications - specifically, GAN-based MR motion correction, GAN-based PET attenuation correction and GAN-based image inpainting (a compact training-objective sketch follows the list below). Furthermore, the modelling and inclusion of uncertainties can steer the generative task and provide additional information.

  • Variational autoencoders
  • Generative adversarial networks
  • Diffusion models
  • Uncertainty estimation
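
For paired translation tasks such as artifact correction, a common objective combines an adversarial term with an L1 penalty towards the paired target (pix2pix-style). The sketch below uses tiny placeholder networks and random tensors, and the L1 weighting is an illustrative assumption; optimizer steps are omitted.

    import torch
    import torch.nn as nn

    # Placeholder networks; real models would be U-Net-like generators and patch discriminators.
    generator = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.Conv2d(16, 1, 3, padding=1))
    discriminator = nn.Sequential(nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(), nn.Conv2d(16, 1, 3, padding=1))

    adv = nn.BCEWithLogitsLoss()
    l1 = nn.L1Loss()

    corrupted = torch.rand(4, 1, 64, 64)     # e.g. motion-corrupted input
    clean = torch.rand(4, 1, 64, 64)         # paired artifact-free target

    # Generator step: fool the discriminator and stay close to the paired target image.
    fake = generator(corrupted)
    d_fake = discriminator(torch.cat([corrupted, fake], dim=1))
    g_loss = adv(d_fake, torch.ones_like(d_fake)) + 100.0 * l1(fake, clean)
    g_loss.backward()

    # Discriminator step: distinguish real pairs from generated ones.
    d_real = discriminator(torch.cat([corrupted, clean], dim=1))
    d_fake = discriminator(torch.cat([corrupted, fake.detach()], dim=1))
    d_loss = 0.5 * (adv(d_real, torch.ones_like(d_real)) + adv(d_fake, torch.zeros_like(d_fake)))
    d_loss.backward()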

Reproducible research

We believe in the concept of open and reproducible research.

Challenge

To promote research on machine learning-based automated tumor lesion segmentation on whole-body FDG-PET/CT data, we host the autoPET challenge and provide a large, publicly available training dataset:
autoPET
autoPET II

Documentation

A collection of code and documentation from previous projects can also be found here:
k-space astronauts
