Tentative Program
Collectionless AI and Nature-Inspired Learning

Marco Gori
Marco Gori is a full professor of computer science at the University of Siena, where he leads the Siena Artificial Intelligence Lab. His work focuses on machine learning and neural computation, with a significant impact on the field of graph neural networks. Gori co-authored the pioneering paper "A New Model for Learning in Graph Domains" (IJCNN 2005), which introduced the term "Graph Neural Network" and laid the groundwork for this influential field. His later paper, "Graph Neural Networks" (IEEE-TNN 2009), expanded on these ideas and has garnered over 10,000 citations. He has also collaborated with the 3IA Côte d'Azur since 2019 and previously chaired the Italian Chapter of the IEEE Computational Intelligence Society. Gori is a Fellow of IEEE, EurAI, IAPR, and ELLIS.
Abstract: AI is revolutionizing not only the entire field of Computer Science, but nearly all fields of science. However, while application contexts explode and LLMs display remarkable cognitive qualities, the AI research field seems headed toward a saturation of the fundamental ideas that have enabled today's spectacular results from large companies. Is the infamous "AI winter" perhaps creeping into research? In this talk I argue that the time is ripe for a fundamental rethinking of AI methodologies, with the aim of migrating intelligence from the cloud to the growing global population of devices with on-board CPUs. To support learning schemes inspired by mechanisms found in nature, I propose developing intelligent systems within NARNIAN, a platform that enables social mechanisms and fosters learning processes over time, without the need for data storage.
The Foundation of Foundation Models for Adaptive Multimedia Agents

Amos Storkey
Amos Storkey is Professor of Machine Learning and Artificial Intelligence in the School of Informatics, University of Edinburgh. He completed his undergraduate studies in Mathematics at Trinity College, Cambridge, moving into neural networks for his PhD (1995-98). He has continued research in AI methodology and applications ever since, and is known for work in deep learning, continual learning, meta-learning, representation learning, domain shift, reinforcement learning, and efficient learning systems for edge devices. Storkey founded and leads the Bayesian and Neural Systems Group in Edinburgh. He is a fellow of the ELLIS Society and has served as director of the EPSRC Centre for Doctoral Training in Data Science. He is currently director of the EPSRC Centre for Doctoral Training in Machine Learning Systems.
Abstract: In this talk we will introduce four types of foundation generative model in the context of multimedia signal processing and control: autoregressive models; the class of diffusion, gradient-flow, and flow-matching models; state-space models; and basis-function approaches. We explain how and why each approach works, and what is needed to train them. We then look at foundational representation learning models, explain why they work, and discuss how they can be used for multi-modal fusion. These pieces will be assembled into a complete toolkit for foundational multimedia signal processing, along with demonstrations of multimedia foundation models in action. We will look at what is needed to combine signals, language, images, video, and audio. In the second part we will examine the fundamental issues in using foundation models: domain shift and fine-tuning, non-stationarity and adaptability, and finally computational cost and on-device implementation. We will close with a brief review of state-of-the-art developments and new understanding in the field.