Publications


What model does MuZero learn?

Jinke He, Thomas M. Moerland, Joery A. De Vries, and Frans A. Oliehoek. What model does MuZero learn? In ECAI 2024 - 27th European Conference on Artificial Intelligence, pp. 1599–1606, October 2024.

Download

pdf [1.5MB]

Abstract

Model-based reinforcement learning (MBRL) has drawn considerable interest in recent years, given its promise to improve sample efficiency. Moreover, when using deep-learned models, it is possible to learn compact and generalizable models from data. In this work, we study MuZero, a state-of-the-art deep model-based reinforcement learning algorithm that distinguishes itself from existing algorithms by learning a value-equivalent model. Despite MuZero’s success and impact in the field of MBRL, existing literature has not thoroughly addressed why MuZero performs so well in practice. Specifically, there is a lack of in-depth investigation into the value-equivalent model learned by MuZero and its effectiveness in model-based credit assignment and policy improvement, which is vital for achieving sample efficiency in MBRL. To fill this gap, we explore two fundamental questions through our empirical analysis: 1) to what extent does MuZero achieve its learning objective of a value-equivalent model, and 2) how useful are these models for policy improvement? Among various other insights, we conclude that MuZero’s learned model cannot effectively generalize to evaluate unseen policies. This limitation constrains the extent to which we can additionally improve the current policy by planning with the model.
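Since value equivalence is the central concept of the paper, a compact statement may help. The following is a minimal sketch using the standard definition from Grimm et al. (2020); the notation ($\mathcal{T}$, $\Pi$, $\mathcal{V}$) is chosen here for illustration rather than taken from the paper above. A model $\tilde{m}$ is value-equivalent to the true environment $m$ with respect to a set of policies $\Pi$ and a set of value functions $\mathcal{V}$ if both induce the same Bellman updates:

    $\mathcal{T}^{\pi}_{\tilde{m}}\, v \;=\; \mathcal{T}^{\pi}_{m}\, v \qquad \text{for all } \pi \in \Pi,\; v \in \mathcal{V},$

where $(\mathcal{T}^{\pi}_{m}\, v)(s) = \mathbb{E}_{a \sim \pi(\cdot \mid s)}\!\left[\, r(s,a) + \gamma \sum_{s'} p(s' \mid s,a)\, v(s') \,\right]$ is the Bellman operator induced by model $m$. In other words, the learned model only needs to predict the quantities that matter for value computation, not the full environment dynamics.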

BibTeX Entry

@inproceedings{He24ECAI,
    author =    {{He}, Jinke and {Moerland}, Thomas M. and 
                 {De Vries}, Joery A. and  {Oliehoek}, Frans A.},
    title =     {What model does {MuZero} learn?},
    booktitle = {ECAI 2024 - 27th European Conference on Artificial Intelligence},
    pages =     {1599--1606},
    year =      2024,
    month =     oct,
    doi =       {10.3233/FAIA240666},
    keywords =  {refereed},
    abstract =  {
        Model-based reinforcement learning (MBRL) has drawn considerable
        interest in recent years, given its promise to improve sample
        efficiency. Moreover, when using deep-learned models, it is
        possible to learn compact and generalizable models from data. In
        this work, we study MuZero, a state-of-the-art deep model-based
        reinforcement learning algorithm that distinguishes itself from
        existing algorithms by learning a value-equivalent model. Despite
        MuZero’s success and impact in the field of MBRL, existing
        literature has not thoroughly addressed why MuZero performs so well
        in practice. Specifically, there is a lack of in-depth
        investigation into the value-equivalent model learned by MuZero and
        its effectiveness in model-based credit assignment and policy
        improvement, which is vital for achieving sample efficiency in
        MBRL. To fill this gap, we explore two fundamental questions
        through our empirical analysis: 1) to what extent does MuZero
        achieve its learning objective of a value-equivalent model, and 2)
        how useful are these models for policy improvement? Among various
        other insights, we conclude that MuZero’s learned model cannot
        effectively generalize to evaluate unseen policies. This limitation
        constrains the extent to which we can additionally improve the
        current policy by planning with the model.
    }
}