IWAI 2020

1st International Workshop
on Active Inference

14 September 2020, held virtually (originally planned for Ghent, Belgium)

In Conjunction with ECML/PKDD 2020

The 1st International Workshop on Active Inference aims to bring together researchers working on active inference and related fields in order to discuss current trends, novel results, and (real-world) applications; to explore to what extent active inference can be used in modern machine learning settings such as deep learning; and to examine how it can be unified with the latest psychological and neurological insights.

Active inference is a theory of behaviour and learning that originated in neuroscience (Friston et al., 2006). The basic assumption is that intelligent agents entertain a generative model of their environment, and their main goal is to minimize surprise or, more formally, their free energy. The agents do so either by updating their generative model, so that it becomes better at explaining observations (i.e. learning), or by selecting policies that will resolve their surprise (i.e. acting), for example by moving towards prior, preferred states, or by moving towards less ambiguous states (Friston et al., 2017).

In the field of machine learning, the variational free energy is known as the negative evidence lower bound (ELBO) in variational Bayesian methods. In deep learning, maximizing the ELBO has become a popular method for building generative models of complex data via the variational autoencoder framework (Kingma and Welling, 2014; Rezende et al., 2014). Active inference also has connections with the currently popular domains of reinforcement learning and intrinsic motivation (Friston et al., 2009).
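In standard variational notation (a generic statement, not tied to any particular paper at the workshop), with observations $o$, latent variables $z$, generative model $p$ and approximate posterior $q$, the free energy is

\[
F \;=\; \mathbb{E}_{q(z)}\big[\ln q(z) - \ln p(o, z)\big]
\;=\; \underbrace{D_{\mathrm{KL}}\big[q(z)\,\|\,p(z \mid o)\big]}_{\ge\,0} \;-\; \ln p(o)
\;=\; -\mathrm{ELBO}.
\]

Since the KL term is non-negative, $F$ upper-bounds the surprise $-\ln p(o)$; minimising $F$ with respect to $q$ improves the agent's posterior beliefs, while minimising it with respect to the model $p$ amounts to learning.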


Invited talks

Active Learning and Active Inference in Exploration
Philipp Schwartenbeck

Successful behaviour depends on the right balance between maximising reward and soliciting information about the world. I will discuss how different types of information-gain emerge when casting behaviour as surprise minimisation and planning as an inferential process. This formulation provides two distinct mechanisms for goal-directed exploration that express separable profiles of active sampling to reduce uncertainty. 'Hidden state' exploration motivates agents to sample unambiguous observations to accurately infer the (hidden) state of the world. Conversely, 'model parameter' exploration compels agents to sample outcomes associated with high uncertainty when these are informative for their representation of the task structure. I will try to provide an introductory illustration of the emergence of these types of 'Bayes-optimal' exploratory behaviour, termed active inference and active learning, and discuss possible future developments and experimental investigations of such implementations in artificial and biological agents.
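Schematically, these two exploratory drives appear as separate epistemic terms in one common decomposition of the expected free energy $G$ of a policy $\pi$ (the notation below follows the general active-inference literature, e.g. Friston et al., 2017, rather than the talk itself; $s$ are hidden states, $\theta$ model parameters, $C$ prior preferences):

\[
G(\pi) \;=\;
\underbrace{-\,\mathbb{E}_{q}\big[\ln q(s \mid o, \pi) - \ln q(s \mid \pi)\big]}_{\text{hidden-state information gain}}
\;\underbrace{-\,\mathbb{E}_{q}\big[\ln q(\theta \mid s, o, \pi) - \ln q(\theta \mid \pi)\big]}_{\text{model-parameter information gain}}
\;\underbrace{-\,\mathbb{E}_{q}\big[\ln p(o \mid C)\big]}_{\text{expected preference}}.
\]

Minimising $G$ then favours policies that resolve uncertainty about states (active inference), resolve uncertainty about parameters (active learning), and realise preferred outcomes.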


Putting An End to End-to-End: Gradient-Isolated Learning of Representations
Sindy Löwe

We propose a novel deep learning method for local self-supervised representation learning that does not require labels nor end-to-end backpropagation but exploits the natural order in data instead. Inspired by the observation that biological neural networks appear to learn without backpropagating a global error signal, we split a deep neural network into a stack of gradient-isolated modules. Each module is trained to maximally preserve the information of its inputs using the InfoNCE bound from Oord et al. [2018]. Despite this greedy training, we demonstrate that each module improves upon the output of its predecessor, and that the representations created by the top module yield highly competitive results on downstream classification tasks in the audio and visual domain. The proposal enables optimizing modules asynchronously, allowing large-scale distributed training of very deep neural networks on unlabelled datasets.
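As a rough sketch of the InfoNCE objective referenced above (plain NumPy, illustrative only; the function name and array shapes are my own assumptions, not the authors' implementation): each context vector must identify its own paired encoding among all encodings in the batch, which lower-bounds the mutual information between the two.

```python
import numpy as np

def info_nce_loss(context, future):
    """InfoNCE loss (Oord et al., 2018), batch-wise.

    context: (N, D) array of context representations c_t
    future:  (N, D) array of encodings z_{t+k}; row i is the positive
             sample for context row i, all other rows act as negatives.
    """
    scores = context @ future.T                       # (N, N) similarity scores
    scores -= scores.max(axis=1, keepdims=True)       # numerical stability
    log_softmax = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_softmax))             # -log p(positive | batch)
```

In the talk's setting, `context` and `future` would be produced by a gradient-isolated module and its input, so each module can be trained greedily on this loss alone; here they are just arrays.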


The Free Energy Principle and Active Inference in silico and in vivo, visual sampling and 'world model' building
Rosalyn Moran

The theory of Active Inference proposes that all biological agents retain self-ness by minimizing their long-term average surprisal. In information theoretic terms, Free Energy provides a soluble approximation to this long-term surprise 'now' and necessitates the development of a generative model of the environment within the agent itself. The minimization of this quantity via a gradient flow is purported to be the purpose of neuronal activity in the brain and thus provides a mapping from brain activity to its first-principle computations. In this talk I will outline the theory of Active Inference and describe how discrete and continuous-time systems that perceive and act can be built in silico, while providing evidence for these implementations in neurobiological and behavioral recordings. Using two experiments in human participants, I aim to demonstrate that human visual search and classification of the MNIST dataset (experiment 1) and world model building and adjustment in a maze task (experiment 2) can be cast as Active Inference processes that utilize neurobiologically plausible architectures comprising prediction in visual hierarchies and alterations in precision via neuromodulation.

Accepted presentations

Confirmatory evidence that healthy individuals can adaptively adjust prior expectations and interoceptive precision estimates
Ryan Smith, Rayus Kuplicki, Adam Teed, Valerie Upshaw and Sahib S. Khalsa

On the relationship between active inference and control as inference
Beren Millidge, Alexander Tschantz, Anil Seth and Christopher L. Buckley

Visual search as active inference
Emmanuel Daucé and Laurent Perrinet

Integrated World Modeling Theory (IWMT) Implemented: Towards Reverse Engineering Consciousness with the Free Energy Principle and Active Inference
Adam Safron

A deep active inference model of the rubber-hand illusion
Thomas Rood, Marcel van Gerven and Pablo Lanillos

Sleep: Model Reduction in Deep Active Inference
Samuel Wauthier, Ozan Catal, Cedric De Boom, Tim Verbelen and Bart Dhoedt

Active Inference for Fault Tolerant Control of Robot Manipulators with Sensory Faults
Corrado Pezzato, Mohamed Baioumy, Carlos Hernandez Corbato, Nick Hawes, Martijn Wisse and Riccardo Ferrari

Modulation of viability signals for self-regulatory control
Alvaro Ovalle and Simon Lucas

Active Inference or Control as Inference? A Unifying View
Abraham Imohiosen, Joe Watson and Jan Peters

A Worked Example of Fokker-Planck based Active Inference
Magnus T. Koudahl and Bert de Vries

You Only Look... as much as you have to: Using the Free Energy Principle for Active Vision
Toon Van de Maele, Tim Verbelen, Ozan Catal, Cedric De Boom and Bart Dhoedt

Bayesian hyperparameter dynamics in a Markov chain
Martin Biehl and Ryota Kanai

Deep active inference for Partially Observable MDPs
Otto van der Himst and Pablo Lanillos

Accepted posters

Online system identification in a Duffing oscillator by free energy minimisation
Wouter Kouw

Causal blankets: Theory and algorithmic framework
Fernando E. Rosas, Pedro A.M. Mediano, Martin Biehl, Shamil Chandaria and Daniel Polani

Sophisticated Affective Inference: Simulating Affective Dynamics Induced by Imagined Future Events
Casper Hesp, Alexander Tschantz, Beren Millidge, Maxwell Ramstead, Karl Friston and Ryan Smith

Learning Where to Park
Burak Ergul, Thijs van de Laar, Magnus Koudahl, Martin Roa Villescas and Bert de Vries

End-Effect Exploration Drive for Effective Motor Learning
Emmanuel Daucé

Hierarchical Gaussian filtering of sufficient statistic time series for active inference
Christoph Mathys and Lilian A.E. Weber

Call for papers

Papers on all subjects and applications of active inference and related research areas are welcome. Topics of interest include (but are not limited to):

Important dates

Abstract Submission Deadline: June 9, 2020
Paper Submission Deadline: June 28, 2020 (extended from June 22, 2020)
Acceptance Notification: July 15, 2020 (extended from July 9, 2020)
Camera Ready Submission Deadline: September 1, 2020
Workshop Date: September 14, 2020

Paper submissions

We welcome submissions of papers with up to 6 printed pages (including references) in LNCS format. Submissions will be evaluated according to their originality and relevance to the workshop, and should include an abstract of 60-100 words. Contributions should be in PDF format and submitted via EasyChair.

In accordance with the main conference, we will apply a double-blind review process (see also the double-blind reviewing process section below for further details). All papers must be anonymized to the best of the authors' efforts. Having a (non-anonymous) online pre-print is allowed; reviewers will be asked not to search for it.


Workshop registration is handled by ECML/PKDD 2020. At least one author of each accepted paper should register for the conference.

Keep in mind: the early registration deadline for ECML/PKDD is July 20, 2020.


Organisers

Tim Verbelen, Ghent University - imec, Belgium
Cedric De Boom, Ghent University - imec, Belgium
Pablo Lanillos, Donders Institute for Brain, Cognition and Behaviour, Netherlands
Christopher Buckley, University of Sussex, United Kingdom

Programme committee

Karl Friston, University College London, United Kingdom
Philipp Schwartenbeck, University College London, United Kingdom
Noor Sajid, University College London, United Kingdom
Rosalyn Moran, King’s College London, United Kingdom
Ayca Ozcelikkale, Uppsala University, Sweden
Christoph Mathys, Aarhus University, Denmark
Glen Berseth, University of California Berkeley, USA
Casper Hesp, University of Amsterdam, Netherlands
Tim Verbelen, Ghent University - imec, Belgium
Cedric De Boom, Ghent University - imec, Belgium
Bart Dhoedt, Ghent University - imec, Belgium
Christopher Buckley, University of Sussex, United Kingdom
Alexander Tschantz, University of Sussex, United Kingdom
Maxwell Ramstead, McGill University, Canada
Pablo Lanillos, Donders Institute for Brain, Cognition and Behaviour, Netherlands
Kai Ueltzhöffer, Heidelberg University, Germany
Martijn Wisse, Delft University of Technology, Netherlands


References

Karl Friston, James Kilner, and Lee Harrison. A free energy principle for the brain. Journal of Physiology-Paris, 100(1-3), 2006.

Karl J. Friston, Jean Daunizeau, and Stefan J. Kiebel. Reinforcement Learning or Active Inference? PLoS ONE, 4(7), 2009.

Karl J. Friston, Marco Lin, Christopher D. Frith, Giovanni Pezzulo, J. Allan Hobson, and Sasha Ondobaka. Active Inference, Curiosity and Insight. Neural Computation, 29(10): 2633–2683, 2017.

Diederik P. Kingma and Max Welling. Auto-Encoding Variational Bayes. 2nd International Conference on Learning Representations, 2014.

Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. 31st International Conference on Machine Learning, 2014.