Learners of Interpretable Latent Information (LILI)
Interpretable AI Research Lab at the University of Sussex

Working in Our Lab

The LILI lab develops machine learning and statistical methodology that enables interpretability of predictions and learning in inverse problems, settings where the information of interest cannot be measured directly and must be inferred from indirect observations. Our applications are mostly spatiotemporal data problems across medical imaging (MRI reconstruction, image registration), disease progression modelling, computer vision, and ecological monitoring.
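To make "inverse problem" concrete, here is a minimal, self-contained toy sketch (not any specific LILI project): a hidden signal x is observed only through a known linear forward operator A plus noise, and we recover an estimate of x via Tikhonov-regularized least squares. All names and numbers below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Forward operator A maps the hidden signal x (50 values) to
# only 20 noisy measurements y -- we never observe x directly.
A = rng.normal(size=(20, 50))
x_true = np.zeros(50)
x_true[[3, 17, 31]] = [1.0, -2.0, 0.5]  # a sparse hidden signal
y = A @ x_true + 0.01 * rng.normal(size=20)

# The problem is underdetermined (20 equations, 50 unknowns), so we
# regularize: x_hat = argmin ||A x - y||^2 + lam * ||x||^2,
# which has the closed form (A^T A + lam I)^{-1} A^T y.
lam = 0.1
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(50), A.T @ y)

# x_hat is our inferred estimate of the unmeasurable quantity x.
```

The regularizer encodes a prior assumption (small-norm solutions) that makes the ill-posed recovery well defined; richer priors and learned operators are where the research questions begin.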

Projects in our lab typically involve:

If you want to work with us, you should have an understanding of, and be open to further training in, probability, linear algebra, calculus, and Python programming. Recommended reading material includes: