Publications


Scalable Bayesian Reinforcement Learning for Multiagent POMDPs

Christopher Amato, Frans A. Oliehoek, and Eric Shyu. Scalable Bayesian Reinforcement Learning for Multiagent POMDPs. In Proceedings of the First Multidisciplinary Conference on Reinforcement Learning and Decision Making (RLDM), 2013.

Download

pdf [506.8kB]  

Abstract

Bayesian methods for reinforcement learning (RL) allow model uncertainty to be considered explicitly and offer a principled way of dealing with the exploration/exploitation tradeoff. However, for multiagent systems there have been few such approaches, and none of them apply to problems with state uncertainty. In this paper, we fill this gap by proposing a Bayesian RL framework for multiagent partially observable Markov decision processes that is able to take advantage of structure present in many problems. In this framework, a team of agents operates in a centralized fashion, but has uncertainty about the model of the environment. Fitting many real-world situations, we consider the case where agents learn the appropriate models while acting in an online fashion. Because it can quickly become intractable to choose the optimal action in naïve versions of this online learning problem, we propose a more scalable approach based on sample-based search and factored value functions for the set of agents. Experimental results show that we are able to provide high quality solutions to large problems even with a large amount of initial model uncertainty.
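
The abstract describes the approach only at a high level. As a rough, hypothetical illustration of what sample-based search under Bayesian model uncertainty can look like, the toy single-agent sketch below samples transition models from a Dirichlet posterior and scores first actions with Monte Carlo rollouts. It is not the paper's algorithm (which addresses multiagent POMDPs with factored value functions); all names, sizes, and parameters are invented for illustration.

    # Toy sketch of posterior-sampled, rollout-based action selection
    # (illustrative only; not the algorithm from the paper).
    import numpy as np

    rng = np.random.default_rng(0)

    n_states, n_actions, horizon = 4, 2, 5
    n_model_samples, n_rollouts = 10, 20
    reward = rng.uniform(size=(n_states, n_actions))   # assumed known here
    counts = np.ones((n_states, n_actions, n_states))  # Dirichlet(1) prior

    def sample_model(counts):
        """Draw one transition model T[s, a, s'] from the Dirichlet posterior."""
        T = np.empty_like(counts)
        for s in range(n_states):
            for a in range(n_actions):
                T[s, a] = rng.dirichlet(counts[s, a])
        return T

    def rollout_return(T, s, a):
        """Return of one random rollout that starts with action a in state s."""
        total = reward[s, a]
        s = rng.choice(n_states, p=T[s, a])
        for _ in range(horizon - 1):
            a = rng.integers(n_actions)
            total += reward[s, a]
            s = rng.choice(n_states, p=T[s, a])
        return total

    def choose_action(s, counts):
        """Average rollout returns over sampled models; pick the best first action."""
        values = np.zeros(n_actions)
        for _ in range(n_model_samples):
            T = sample_model(counts)
            for a in range(n_actions):
                values[a] += np.mean([rollout_return(T, s, a)
                                      for _ in range(n_rollouts)])
        return int(np.argmax(values))

    print("chosen action in state 0:", choose_action(0, counts))

In an online setting, the Dirichlet counts would be updated after each observed transition, so the posterior (and hence the sampled models) gradually concentrates on the true dynamics; the paper's contribution is making this kind of planning scale to teams of agents with state uncertainty.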

BibTeX Entry

@inproceedings{Amato13RLDM,
    author =    {Christopher Amato and Frans A. Oliehoek and Eric Shyu},
    booktitle = {Proceedings of the First Multidisciplinary Conference on 
                Reinforcement Learning and Decision Making (RLDM)},
    title =     {Scalable {Bayesian} Reinforcement Learning for Multiagent {POMDPs}},
    year =      2013,
    note =      {},
    keywords =  {workshop},
    url =       {http://www.princeton.edu/~yael/RLDM2013ExtendedAbstracts.pdf},
    abstract = {
    Bayesian methods for reinforcement learning (RL) allow model
    uncertainty to be considered explicitly and offer a principled way
    of dealing with the exploration/exploitation tradeoff. However, for
    multiagent systems there have been few such approaches, and none of
    them apply to problems with state uncertainty. In this paper, we fill
    this gap by proposing a Bayesian RL framework for multiagent partially
    observable Markov decision processes that is able to take advantage of
    structure present in many problems. In this framework, a team of
    agents operates in a centralized fashion, but has uncertainty about
    the model of the environment. Fitting many real-world situations, we
    consider the case where agents learn the appropriate models while
    acting in an online fashion.  Because it can quickly become
    intractable to choose the optimal action in naïve versions of this
    online learning problem, we propose a more scalable approach based on
    sample-based search and factored value functions for the set of
    agents. Experimental results show that we are able to provide high
    quality solutions to large problems even with a large amount of
    initial model uncertainty.
    }
}
