Learners of Interpretable Latent Information (LILI)
Interpretable AI Research Lab at the University of Sussex

Research Themes

The LILI Lab develops interpretable machine learning and statistical methods for inverse problems, where key information cannot be measured directly. Our work focuses on spatiotemporal data in medical imaging, disease progression modelling, computer vision, and ecological monitoring.

Explore our five main research themes:

Medical Image Analysis

Work on MRI reconstruction, image registration, and related inverse problems.

Progression Modelling

Probabilistic latent variable modelling of neurodevelopment and degeneration.

Computer Vision

Projects spanning visual recognition, reconstruction, and interpretable models for vision tasks.

Ecological Monitoring

Interpretable learning for ecological data, from species monitoring to environmental sensing.

Machine Learning Methods

Core methodological work on probabilistic modelling, interpretability, and efficient training for inverse problems.