Publications

Solving Transition-Independent Multi-agent MDPs with Sparse Interactions (Extended version)

Joris Scharpff, Diederik M. Roijers, Frans A. Oliehoek, Matthijs T. J. Spaan, and Mathijs de Weerdt. Solving Transition-Independent Multi-agent MDPs with Sparse Interactions (Extended version). ArXiv e-prints, arXiv:1511.09047, February 2016.

Download

HTML (https://arxiv.org/abs/1511.09047)

Abstract

In cooperative multi-agent sequential decision making under uncertainty, agents must coordinate to find an optimal joint policy that maximises joint value. Typical algorithms exploit additive structure in the value function, but in the fully-observable multi-agent MDP (MMDP) setting such structure is not present. We propose a new optimal solver for transition-independent MMDPs, in which agents can only affect their own state but their reward depends on joint transitions. We represent these dependencies compactly in conditional return graphs (CRGs). Using CRGs, the value of a joint policy and the bounds on partially specified joint policies can be efficiently computed. We propose CoRe, a novel branch-and-bound policy search algorithm building on CRGs. CoRe typically requires less runtime than the available alternatives and finds solutions to previously unsolvable problems.
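
A minimal illustration of the branch-and-bound idea over joint policies is sketched below in Python. This is not the paper's CoRe algorithm and it does not use CRGs; the toy value model (joint_value) and the optimistic bound (upper_bound) are hypothetical stand-ins for the paper's CRG-based evaluation and bounds.

# Hypothetical branch-and-bound search over deterministic joint policies.
# A policy assigns one action to every (agent, state) pair; partial
# assignments are pruned when an optimistic upper bound cannot beat the
# best complete policy found so far.
from itertools import product

AGENTS = [0, 1]
STATES = [0, 1]
ACTIONS = ["a", "b"]

def joint_value(policy):
    # Toy stand-in for exact evaluation of a complete joint policy
    # (the paper computes this with conditional return graphs).
    return sum(1.0 if action == "a" else 0.4 for action in policy.values())

def upper_bound(partial):
    # Optimistic bound: every unassigned (agent, state) pair is assumed
    # to contribute the best possible per-entry value (1.0 in this toy).
    assigned = sum(1.0 if action == "a" else 0.4 for action in partial.values())
    unassigned = len(AGENTS) * len(STATES) - len(partial)
    return assigned + 1.0 * unassigned

def branch_and_bound():
    slots = list(product(AGENTS, STATES))   # decision points to assign
    best = {"value": float("-inf"), "policy": None}

    def search(index, partial):
        if upper_bound(partial) <= best["value"]:
            return  # prune: this branch cannot improve on the incumbent
        if index == len(slots):
            value = joint_value(partial)
            if value > best["value"]:
                best["value"], best["policy"] = value, dict(partial)
            return
        for action in ACTIONS:
            partial[slots[index]] = action
            search(index + 1, partial)
            del partial[slots[index]]

    search(0, {})
    return best

print(branch_and_bound())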

BibTeX Entry

@article{Scharpff16arxiv,
    author =    {Joris Scharpff and Diederik M. Roijers and Frans A. Oliehoek 
                 and Matthijs T. J. Spaan and Mathijs de Weerdt},
    title =     {Solving Transition-Independent Multi-agent {MDPs} with Sparse Interactions (Extended version)},
    journal =   {ArXiv e-prints},
    volume =    {arXiv:1511.09047},
    year =      2016,
    month =     feb,
    url =       {https://arxiv.org/abs/1511.09047},
    keywords =   {nonrefereed, arxiv},
    abstract = {
        In cooperative multi-agent sequential decision making under
        uncertainty, agents must coordinate to find an optimal joint policy
        that maximises joint value. Typical algorithms exploit additive
        structure in the value function, but in the fully-observable
        multi-agent MDP (MMDP) setting such structure is not present.  We
        propose a new optimal solver for transition-independent MMDPs, in
        which agents can only affect their own state but their reward
        depends on joint transitions. We represent these dependencies
        compactly in conditional return graphs (CRGs). Using CRGs, the
        value of a joint policy and the bounds on partially specified joint
        policies can be efficiently computed. We propose CoRe, a novel
        branch-and-bound policy search algorithm building on CRGs. CoRe
        typically requires less runtime than the available alternatives and
        finds solutions to previously unsolvable problems.
    }
}
