Publications

Bad Habits: Policy Confounding and Out-of-Trajectory Generalization in RL

Miguel Suau, Matthijs T. J. Spaan, and Frans A. Oliehoek. Bad Habits: Policy Confounding and Out-of-Trajectory Generalization in RL. In European Workshop on Reinforcement Learning (EWRL), 2023.

Abstract

Reinforcement learning agents may sometimes develop habits that are effective only when specific policies are followed. After an initial exploration phase in which agents try out different actions, they eventually converge toward a particular policy. When this occurs, the distribution of state-action trajectories becomes narrower, and agents start experiencing the same transitions again and again. At this point, spurious correlations may arise. Agents may then pick up on these correlations and learn state representations that do not generalize beyond the agent's trajectory distribution. In this paper, we provide a mathematical characterization of this phenomenon, which we refer to as policy confounding, and show, through a series of examples, when and how it occurs in practice.
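The following is a minimal, hypothetical sketch (not taken from the paper) of the effect the abstract describes: in a deterministic corridor task, an agent that always moves right from the same start state sees the timestep t coincide exactly with its position x, so a value table indexed by t alone fits the on-policy returns perfectly, yet fails as soon as the trajectory distribution shifts. All names and numbers below are illustrative assumptions, not the paper's construction.

    LENGTH = 5          # corridor cells 0..4; reward 1 only on reaching cell 4
    GAMMA = 0.9         # discount factor

    def rollout(start):
        """Trajectory of (t, x) pairs under the converged 'always move right' policy."""
        t, x, traj = 0, start, []
        while x < LENGTH - 1:
            traj.append((t, x))
            t, x = t + 1, x + 1
        traj.append((t, x))                      # terminal cell
        return traj

    def returns(traj):
        """Discounted return-to-go at each step (single reward of 1 at the end)."""
        T = len(traj) - 1
        return [GAMMA ** (T - t) for t, _ in traj]

    # On-policy data from the usual start state 0: t == x on every step, so a value
    # table indexed by the timestep alone reproduces the returns exactly. This is the
    # kind of spurious correlation a converged policy can induce.
    traj_on = rollout(start=0)
    V_t = {t: g for (t, _), g in zip(traj_on, returns(traj_on))}

    # Off-trajectory check: start from cell 2 instead. Now t no longer tracks x, and
    # the t-indexed values are wrong, while values indexed by the true position x
    # would still be correct.
    traj_off = rollout(start=2)
    for (t, x), g in zip(traj_off, returns(traj_off)):
        print(f"t={t} x={x}  true return={g:.3f}  V_t estimate={V_t.get(t, float('nan')):.3f}")

This toy is only meant to make the abstract's claim concrete; the paper itself gives the formal characterization of policy confounding (see the arXiv link in the BibTeX entry below).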

BibTeX Entry

@inproceedings{Suau23EWRL,
    title =     {Bad Habits: Policy Confounding and Out-of-Trajectory Generalization in RL},
    author =    {Suau, Miguel and Spaan, Matthijs T. J. and Oliehoek, Frans A.},
    booktitle = {European Workshop on Reinforcement Learning (EWRL)},
    year =      2023,
    url =       {https://arxiv.org/abs/2306.02419},
    keywords =   {refereed},
    abstract={
        Reinforcement learning agents may sometimes develop habits that are effective only when specific policies are followed. 
        After an initial exploration phase in which agents try out different actions, they eventually converge toward a particular policy. 
        When this occurs, the distribution of state-action trajectories becomes narrower, and agents start experiencing the same 
        transitions again and again. At this point, spurious correlations may arise. Agents may then pick up on these correlations and 
        learn state representations that do not generalize beyond the agent's trajectory distribution. In this paper, we provide a 
        mathematical characterization of this phenomenon, which we refer to as policy confounding, and show, through a series of examples, 
        when and how it occurs in practice.
    }
}
