Learners of Interpretable Latent Information
Interpretable AI Research Lab, University of Sussex

DNNs are powerful but opaque, which limits trust in their predictions. I study learning as a self‑organising dynamical process, using tools from physics to probe training and the formation of representations. My work spans: (1) what gradient descent learns preferentially, and (2) new algorithms exploiting alternative hardware and symbolic/structured inference. Keywords: optimisation theory, AI safety, bio-inspired algorithms.
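As a minimal sketch of the "what gradient descent learns preferentially" theme (my own toy example, not the lab's code): for linear least squares, gradient descent converges independently along each eigen-direction of the data covariance, so directions with larger eigenvalues are fitted first while low-variance directions lag far behind.

```python
import numpy as np

# Toy illustration: gradient descent on linear least squares learns
# each eigen-direction of X^T X at its own rate, so the high-variance
# direction is fitted long before the low-variance one.
rng = np.random.default_rng(0)
n, d = 100, 2
# Design matrix with one high-variance and one low-variance column.
X = rng.normal(size=(n, d)) * np.array([10.0, 0.1])
w_true = np.array([1.0, 1.0])
y = X @ w_true

w = np.zeros(d)
eta = 1e-3  # small enough for stability on the largest eigenvalue
for t in range(200):
    grad = X.T @ (X @ w - y) / n
    w -= eta * grad

# After 200 steps the high-variance coordinate is essentially learned,
# while the low-variance one has barely moved from its initial value.
print(w)
```

Viewed as a dynamical system, each mode's error shrinks by a factor of roughly (1 - eta * lambda_i) per step, which is one concrete sense in which training dynamics preferentially select some structure over other structure.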

Search for Giuseppe Castiglione's papers on the Publications page