Show simple item record

dc.contributor.author: Friston, Karl
dc.contributor.author: Samothrakis, Spyridon
dc.contributor.author: Montague, P. Read
dc.date.accessioned: 2017-09-08T14:18:49Z
dc.date.available: 2017-09-08T14:18:49Z
dc.date.issued: 2012-08-03
dc.identifier.uri: http://hdl.handle.net/10919/78836
dc.description.abstract: This paper describes a variational free-energy formulation of (partially observable) Markov decision problems in decision making under uncertainty. We show that optimal control can be cast as active inference. In active inference, both action and posterior beliefs about hidden states minimise a free energy bound on the negative log-likelihood of observed states, under a generative model. In this setting, reward or cost functions are absorbed into prior beliefs about state transitions and terminal states. Effectively, this converts optimal control into a pure inference problem, enabling the application of standard Bayesian filtering techniques. We then consider optimal trajectories that rest on posterior beliefs about hidden states in the future. Crucially, this entails modelling control as a hidden state that endows the generative model with a representation of agency. This leads to a distinction between models with and without inference on hidden control states; namely, agency-free and agency-based models, respectively.
dc.language.iso: en_US
dc.publisher: Springer
dc.rights: In Copyright
dc.rights.uri: http://rightsstatements.org/vocab/InC/1.0/
dc.subject: partially observable Markov decision processes
dc.subject: optimal control
dc.subject: Bayesian
dc.subject: agency
dc.subject: inference
dc.subject: action
dc.subject: free energy
dc.title: Active inference and agency: optimal control without cost functions
dc.type: Article - Refereed
dc.title.serial: Biological Cybernetics
dc.identifier.doi: https://doi.org/10.1007/s00422-012-0512-8
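The abstract's central idea, that posterior beliefs about hidden states minimise a variational free energy bound on negative log-evidence, can be illustrated with a toy discrete model. The sketch below is not the paper's method; the two-state generative model and all numbers are hypothetical, chosen only to show that the exact Bayesian posterior attains the free-energy minimum, where the bound becomes tight (F = -ln p(o)).

```python
import numpy as np

# Hypothetical generative model: 2 hidden states, 2 observations.
prior = np.array([0.7, 0.3])      # p(s): prior beliefs over hidden states
lik = np.array([[0.9, 0.2],       # p(o|s): rows index observations,
                [0.1, 0.8]])      #         columns index hidden states
o = 0                             # the observed outcome

def free_energy(q):
    """Variational free energy F(q) = E_q[ln q(s) - ln p(o, s)]."""
    joint = lik[o] * prior        # p(o, s) for the observed o
    return float(np.sum(q * (np.log(q) - np.log(joint))))

# Exact posterior p(s|o) by Bayes' rule; evidence p(o) is the normaliser.
post = lik[o] * prior
evidence = post.sum()             # p(o) = 0.9*0.7 + 0.2*0.3 = 0.69
post /= evidence

F_exact = free_energy(post)                    # equals -ln p(o)
F_other = free_energy(np.array([0.5, 0.5]))    # any other q scores worse
```

Here `F_exact` recovers -ln(0.69) ≈ 0.371, while the uniform belief gives a strictly larger free energy, which is the bound property the abstract appeals to when recasting control as inference.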

