Publications


Exploiting Anonymity in Approximate Linear Programming: Scaling to Large Multiagent MDPs

Philipp Robbel, Frans A. Oliehoek, and Mykel J. Kochenderfer. Exploiting Anonymity in Approximate Linear Programming: Scaling to Large Multiagent MDPs. In Proceedings of the AAAI Fall Symposium on Sequential Decision Making in Intelligent Agents, November 2015.

Download

pdf [336.3kB]  

Abstract

The Markov Decision Process (MDP) framework is a versatile method for addressing single and multiagent sequential decision making problems. Many exact and approximate solution methods attempt to exploit structure in the problem and are based on value factorization. Multiagent settings (MASs), in particular, are known to suffer from an exponential increase in value component sizes as interactions become denser, meaning that approximation architectures are overly restricted in the problem sizes and types they can handle. We present an approach to mitigate this limitation for certain types of MASs, exploiting a property that can be thought of as "anonymous influence" in the factored MDP. In particular, we show how anonymity can lead to representational and computational efficiencies, both for general variable elimination in a factor graph and for the approximate linear programming solution to factored MDPs. The latter allows linear programming to scale to factored MDPs that were previously unsolvable. Our results are shown for a disease control domain over a graph with 50 nodes, each connected to up to 15 neighbors.
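
A small sketch of the "anonymous influence" idea described above (an illustration, not the paper's implementation): when a factor over k binary neighbor variables depends only on how many neighbors are in a given state rather than on which ones, it can be stored with k+1 entries instead of 2^k. The infection_prob function below is a hypothetical placeholder for the disease-control dynamics; the representational saving it demonstrates is the kind of count-based compactness the paper exploits in variable elimination and the approximate linear program.

from itertools import product

# Hypothetical infection factor: the probability that a node becomes
# infected depends only on the COUNT of infected neighbors, not on
# which particular neighbors are infected (placeholder dynamics).
def infection_prob(num_infected_neighbors: int, beta: float = 0.2) -> float:
    # One independent transmission attempt per infected neighbor.
    return 1.0 - (1.0 - beta) ** num_infected_neighbors

k = 15  # neighbors per node, matching the densest graphs in the paper

# Naive tabular factor: one entry per joint neighbor configuration (2^k rows).
naive_table = {cfg: infection_prob(sum(cfg)) for cfg in product((0, 1), repeat=k)}

# Count-based ("anonymous") factor: one entry per possible count (k + 1 rows).
count_table = [infection_prob(c) for c in range(k + 1)]

print(len(naive_table))  # 32768 entries (2^15)
print(len(count_table))  # 16 entries (k + 1)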

BibTeX Entry

@inproceedings{Robbel15SDMIA,
    title =     {Exploiting Anonymity in Approximate Linear Programming:
                 Scaling to Large Multiagent {MDPs}},
    author =    {Philipp Robbel and
                 Frans A. Oliehoek and
                 Mykel J. Kochenderfer},
    booktitle = {Proceedings of the {AAAI} Fall Symposium on 
                 Sequential Decision Making in Intelligent Agents},
    month =     nov,
    year =      2015,
    keywords =  {workshop},
    url =       {https://www.aaai.org/ocs/index.php/FSS/FSS15/paper/view/11688/11511},
    abstract =  {
    The Markov Decision Process (MDP) framework is a
    versatile method for addressing single and multiagent
    sequential decision making problems. Many exact and
    approximate solution methods attempt to exploit
    structure in the problem and are based on value
    factorization. Especially multiagent settings (MAS),
    however, are known to suffer from an exponential
    increase in value component sizes as interactions
    become denser, meaning that approximation architectures
    are overly restricted in the problem sizes and types
    they can handle. We present an approach to mitigate
    this limitation for certain types of MASs, exploiting
    a property that can be thought of as `anonymous
    influence' in the factored MDP. In particular, we show
    how anonymity can lead to representational and
    computational efficiencies, both for general variable
    elimination in a factor graph but also for the
    approximate linear programming solution to factored
    MDPs. The latter allows to scale linear programming to
    factored MDPs that were previously unsolvable. Our
    results are shown for a disease control domain over a
    graph with 50 nodes that are each connected with up to
    15 neighbors.
    }
}

Generated by bib2html.pl (written by Patrick Riley) on Mon Apr 08, 2024 20:28:07 UTC