Publications


Approximate Solutions for Factored Dec-POMDPs with Many Agents

Frans A. Oliehoek, Shimon Whiteson, and Matthijs T. J. Spaan. Approximate Solutions for Factored Dec-POMDPs with Many Agents. In Proceedings of the Twelfth International Conference on Autonomous Agents and Multiagent Systems (AAMAS), pp. 563–570, 2013.

Download

pdf [238.2kB]  

Abstract

A Decentralized Partially Observable Markov Decision Process (Dec-POMDP) is a powerful model for multiagent planning. However, it is also provably intractable and thus optimal methods do not scale well in the number of agents. Although there has been work on scaling such methods to more agents by exploiting weak coupling in factored models, scalability for unrestricted subclasses remains limited. This paper proposes an approximate approach that tackles the stages of the problem one by one, exploiting weakly coupled structure at each of these stages. To enable the method to scale to many agents, we propose a set of approximations. In particular, our method approximates the stages using a sparse interaction structure, bootstraps off smaller tasks to compute heuristic payoff functions, and employs approximate inference to estimate probabilities at each stage and compute the best decision rules. An empirical evaluation shows that the loss in solution quality due to these approximations is small and that the proposed method achieves unprecedented scalability, solving Dec-POMDPs with hundreds of agents. Furthermore, we show that our method outperforms a number of baselines and, in cases where the optimal policy is known, produces near-optimal solutions.
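Illustrative Sketch

To make the stage-by-stage scheme in the abstract concrete, below is a minimal, self-contained Python sketch, not the authors' implementation. It approximates the value at each stage as a sum of payoff functions over small agent groups (a sparse interaction structure) and picks actions by coordinate ascent on that factored objective. The neighbour-pair structure, the heuristic_payoff stand-in, and the best-response sweep are all illustrative assumptions; the paper instead bootstraps heuristic payoff functions from smaller source tasks and uses approximate inference to compute the decision rules.

import random

# Minimal sketch (not the authors' implementation) of the stage-by-stage
# idea: a factored stage value built from local payoffs over small groups,
# maximized approximately, one stage at a time. All names are placeholders.

N_AGENTS = 100        # the paper reports scaling to hundreds of agents
ACTIONS = (0, 1)      # toy binary action set per agent
HORIZON = 3

random.seed(0)

# Sparse interaction structure: each agent interacts only with its neighbour.
GROUPS = [(i, i + 1) for i in range(N_AGENTS - 1)]

def heuristic_payoff(group, joint_action, stage):
    """Stand-in for a payoff function bootstrapped from a smaller task."""
    a, b = (joint_action[i] for i in group)
    return 1.0 if a == b else 0.0   # reward neighbour coordination (toy)

def stage_value(joint_action, stage):
    """Factored stage value: a sum of local payoffs over the sparse groups."""
    return sum(heuristic_payoff(g, joint_action, stage) for g in GROUPS)

def best_response_sweep(joint_action, stage, sweeps=2):
    """Coordinate ascent as a crude stand-in for computing the best
    decision rules at one stage via approximate (max-plus-style) inference."""
    for _ in range(sweeps):
        for i in range(N_AGENTS):
            joint_action[i] = max(
                ACTIONS,
                key=lambda a: stage_value(
                    joint_action[:i] + [a] + joint_action[i + 1:], stage))
    return joint_action

# Tackle the stages of the problem one by one, as the abstract describes.
for t in range(HORIZON):
    joint_action = [random.choice(ACTIONS) for _ in range(N_AGENTS)]
    joint_action = best_response_sweep(joint_action, t)
    print(f"stage {t}: factored value = {stage_value(joint_action, t):.1f}")

Because each local payoff touches only a small group of agents, the per-stage maximization stays cheap even for hundreds of agents, which is the scalability argument the abstract makes.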

BibTeX Entry

@inproceedings{Oliehoek13AAMAS,
    author    = {Frans A. Oliehoek and
                 Shimon Whiteson and
                 Matthijs T. J. Spaan},
    title     = {Approximate Solutions for Factored
                 {Dec-POMDPs} with Many Agents},
    booktitle = {Proceedings of the Twelfth International Conference
                 on Autonomous Agents and Multiagent Systems (AAMAS)},
    year      = {2013},
    pages     = {563--570},
    url       = {http://www.ifaamas.org/Proceedings/aamas2013/docs/p563.pdf},
    abstract = {
    A Decentralized Partially Observable Markov Decision Process
    (Dec-POMDP) is a powerful model for multiagent planning. However, it
    is also provably intractable and thus optimal methods do not scale
    well in the number of agents.  Although there has been work on scaling
    such methods to more agents by exploiting weak coupling in factored
    models, scalability for unrestricted subclasses remains limited.  This
    paper proposes an approximate approach that tackles the stages of the
    problem one by one, exploiting weakly coupled structure at each of
    these stages.  To enable the method to scale to many agents, we
    propose a set of approximations.  In particular, our method
    approximates the stages using a sparse interaction structure,
    bootstraps off smaller tasks to compute heuristic payoff functions,
    and employs approximate inference to estimate probabilities at each
    stage and compute the best decision rules.  An empirical evaluation
    shows that the loss in solution quality due to these approximations is
    small and that the proposed method achieves unprecedented scalability,
    solving Dec-POMDPs with hundreds of agents.  Furthermore, we show that
    our method outperforms a number of baselines and, in cases where the
    optimal policy is known, produces near-optimal solutions.
    }
}
