Learners of Interpretable Latent Information
Interpretable AI Research Lab at the University of Sussex

LILI Lab

We are the Learners of Interpretable Latent Information (LILI) Lab, based in the Department of Informatics, University of Sussex. We create interpretable ML models for "inverse problems", where the information of interest is not directly measurable.

We aim to build models that are interpretable.

What We Do

Opportunities

Ways to work with us or collaborate.

Continual Learning

Reading groups, book clubs, and teaching highlights.

Software

Research tooling and reproducible codebases.

Outreach

Engagement activities such as science fairs and international teaching projects.

Our Research Themes

Medical Image Analysis

Work on MRI reconstruction, image registration, and related inverse problems.

Progression Modelling

Probabilistic latent variable modelling of neurodevelopment and degeneration.

Computer Vision

Projects spanning visual recognition, reconstruction, and interpretable models for vision tasks.

Ecological Monitoring

Interpretable learning for ecological data, from species monitoring to environmental sensing.

Machine Learning Methods

Core methodological work on probabilistic modelling, interpretability, and efficient training for inverse problems.