Publications


Bayesian Reinforcement Learning for Multiagent Systems with State Uncertainty

Christopher Amato and Frans A. Oliehoek. Bayesian Reinforcement Learning for Multiagent Systems with State Uncertainty. In Proceedings of the Eighth AAMAS Workshop on Multi-Agent Sequential Decision Making in Uncertain Domains (MSDM), pp. 76–83, 2013.

Download

pdf [177.5kB]  

Abstract

Bayesian methods for reinforcement learning are promising because they allow model uncertainty to be considered explicitly and offer a principled way of dealing with the exploration/exploitation tradeoff. However, for multiagent systems there have been few such approaches, and none of them apply to problems with state uncertainty. In this paper we fill this gap by proposing two frameworks for Bayesian RL for multiagent systems with state uncertainty. This includes a multiagent POMDP model where a team of agents operates in a centralized fashion but has uncertainty about the model of the environment. We also consider a best-response model in which each agent also has uncertainty over the policies of the other agents. In each case, we seek to learn the appropriate models while acting in an online fashion. We transform the resulting problem into a planning problem and prove bounds on the solution quality in different situations. We demonstrate our methods using sample-based planning in several domains with varying levels of uncertainty about the model and the other agents' policies. Experimental results show that, overall, the approach significantly decreases uncertainty and increases value compared to the initial models and policies.
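To make the "transform learning into planning" recipe above concrete, below is a minimal, hypothetical Python sketch of the core idea: maintain a Dirichlet posterior over an unknown transition model, and at each step plan by sampling models from that posterior and estimating action values with sample-based rollouts. This is not the paper's algorithm; for brevity it drops partial observability and the multiagent aspects, and all names (sample_model, rollout_value, the toy environment and reward) are illustrative assumptions.

# Hypothetical sketch only -- not the authors' code. Assumes a tiny,
# fully observable single-agent environment so the Bayesian-RL-as-planning
# idea is visible without the POMDP and multiagent machinery.
import random
from collections import defaultdict

N_STATES, N_ACTIONS = 4, 2
GOAL, HORIZON = 3, 10
N_MODEL_SAMPLES, N_ROLLOUTS = 20, 30

# Dirichlet counts over next states for each (state, action); the
# all-ones initialization is a uniform prior.
counts = defaultdict(lambda: [1.0] * N_STATES)

def sample_model():
    """Draw one transition model T(s' | s, a) from the Dirichlet posterior."""
    model = {}
    for s in range(N_STATES):
        for a in range(N_ACTIONS):
            draws = [random.gammavariate(c, 1.0) for c in counts[(s, a)]]
            total = sum(draws)
            model[(s, a)] = [d / total for d in draws]
    return model

def rollout_value(model, s, a):
    """Estimate Q(s, a) in a sampled model with random rollouts."""
    value = 0.0
    for _ in range(N_ROLLOUTS):
        state, act, ret = s, a, 0.0
        for _ in range(HORIZON):
            state = random.choices(range(N_STATES),
                                   weights=model[(state, act)])[0]
            ret += 1.0 if state == GOAL else 0.0  # toy reward
            act = random.randrange(N_ACTIONS)
        value += ret
    return value / N_ROLLOUTS

def plan(s):
    """Average rollout Q-estimates over models sampled from the posterior."""
    q = [0.0] * N_ACTIONS
    for _ in range(N_MODEL_SAMPLES):
        model = sample_model()
        for a in range(N_ACTIONS):
            q[a] += rollout_value(model, s, a) / N_MODEL_SAMPLES
    return max(range(N_ACTIONS), key=lambda a: q[a])

# Acting online: plan, act in the (hidden) true environment, and update the
# posterior counts with the observed transition. The uniform true_T is a
# stand-in for whatever environment the agent actually faces.
true_T = {(s, a): [1.0 / N_STATES] * N_STATES
          for s in range(N_STATES) for a in range(N_ACTIONS)}
state = 0
for _ in range(5):
    action = plan(state)
    nxt = random.choices(range(N_STATES), weights=true_T[(state, action)])[0]
    counts[(state, action)][nxt] += 1.0  # Bayesian update of the Dirichlet
    state = nxt

Averaging value estimates over sampled models is what ties exploration to the posterior: actions whose outcomes are still uncertain get credit for the information they would reveal, which is the exploration/exploitation tradeoff the abstract refers to.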

BibTeX Entry

@inproceedings{Amato13MSDM,
    author =    {Christopher Amato and Frans A. Oliehoek},
    booktitle = {Proceedings of the Eighth AAMAS Workshop on Multi-Agent Sequential Decision Making in Uncertain Domains (MSDM)},
    title =     {Bayesian Reinforcement Learning for Multiagent Systems with State Uncertainty},
    year =      2013,
    pages =     {76--83},
    keywords =  {workshop},
    abstract = {
    Bayesian methods for reinforcement learning are promising because
    they allow model uncertainty to be considered explicitly and offer
    a principled way of dealing with the exploration/exploitation
    tradeoff. However, for multiagent systems there have been few such
    approaches, and none of them apply to problems with state
    uncertainty. In this paper we fill this gap by proposing two
    frameworks for Bayesian RL for multiagent systems with state
    uncertainty. This includes a multiagent POMDP model where a team
    of agents operates in a centralized fashion but has uncertainty
    about the model of the environment. We also consider a
    best-response model in which each agent also has uncertainty over the
    policies of the other agents. In each case, we seek to learn the
    appropriate models while acting in an online fashion. We transform
    the resulting problem into a planning problem and prove bounds on
    the solution quality in different situations.  We demonstrate our
    methods using sample-based planning in several domains with
    varying levels of uncertainty about the model and the other
    agents' policies. Experimental results show that, overall, the
    approach significantly decreases uncertainty and increases value
    compared to the initial models and policies.
    }
}
