Publications


Bad Habits: Policy Confounding and Out-of-Trajectory Generalization in RL

Miguel Suau, Matthijs T. J. Spaan, and Frans A. Oliehoek. Bad Habits: Policy Confounding and Out-of-Trajectory Generalization in RL. In Proceedings of the First International Conference on Reinforcement Learning (RLC), August 2024.

Download

pdf [993.6kB]  

Abstract

Reinforcement learning agents tend to develop habits that are effective only under specific policies. Following an initial exploration phase where agents try out different actions, they eventually converge onto a particular policy. As this occurs, the distribution over state-action trajectories becomes narrower, leading agents to repeatedly experience the same transitions. This repetitive exposure fosters spurious correlations between certain observations and rewards. Agents may then pick up on these correlations and develop simplistic habits tailored to the specific set of trajectories dictated by their policy. The problem is that these habits may yield incorrect outcomes when agents are forced to deviate from their typical trajectories, prompted by changes in the environment. This paper presents a mathematical characterization of this phenomenon, termed policy confounding, and illustrates, through a series of examples, the circumstances under which it occurs.

BibTeX Entry

@inproceedings{Suau24RLC,
    title =     {Bad Habits: Policy Confounding and Out-of-Trajectory Generalization in {RL}},
    author =    {Suau, Miguel and Spaan, Matthijs T. J. and Oliehoek, Frans A.},
    booktitle = {Proceedings of the First International Conference on Reinforcement Learning (RLC)},
    year =      2024,
    month =     aug,
    url =       {https://openreview.net/forum?id=CdJjvhaaMs},
    keywords =  {refereed},
    abstract =  {
    Reinforcement learning agents tend to develop habits that are effective only
    under specific policies. Following an initial exploration phase where
    agents try out different actions, they eventually converge onto a
    particular policy. As this occurs, the distribution over state-action
    trajectories becomes narrower, leading agents to repeatedly experience the
    same transitions. This repetitive exposure fosters spurious correlations
    between certain observations and rewards. Agents may then pick up on these
    correlations and develop simplistic habits tailored to the specific set of
    trajectories dictated by their policy. The problem is that these habits may
    yield incorrect outcomes when agents are forced to deviate from their
    typical trajectories, prompted by changes in the environment. This paper
    presents a mathematical characterization of this phenomenon, termed policy
    confounding, and illustrates, through a series of examples, the
    circumstances under which it occurs.
    }
}

Generated by bib2html.pl (written by Patrick Riley) on Tue Jun 25, 2024 12:39:45 UTC