Learners of Interpretable Latent Information (LiLi)
Interpretable AI Research Lab at the University of Sussex

Hi, I work on projects in the LiLi Lab as part of my MRes in Advanced AI. My current project uses implicit neural representations to learn continuous spatiotemporal structure from ecoacoustic data. During my undergraduate studies, I also worked on structured uncertainty in medical imaging and segmentation, and I am interested in uncertainty-aware modelling and probabilistic approaches to spatiotemporal inference.

Search for Isabel Lahmann's papers on the Publications page.