Publications

Learning from Demonstration in the Wild

Feryal Behbahani, Kyriacos Shiarlis, Xi Chen, Vitaly Kurin, Sudhanshu Kasewa, Ciprian Stirbu, João Gomes, Supratik Paul, Frans A. Oliehoek, João Messias, and Shimon Whiteson. Learning from Demonstration in the Wild. arXiv e-prints, pp. arXiv:1811.03516, November 2018.
Accepted for publication at ICRA

Download

pdf [4.5MB]  

Abstract

Learning from demonstration (LfD) is useful in settings where hand-coding behaviour or a reward function is impractical. It has succeeded in a wide range of problems but typically relies on artificially generated demonstrations or specially deployed sensors and has not generally been able to leverage the copious demonstrations available in the wild: those that capture behaviour that was occurring anyway using sensors that were already deployed for another purpose, e.g., traffic camera footage capturing demonstrations of natural behaviour of vehicles, cyclists, and pedestrians. We propose video to behaviour (ViBe), a new approach to learning models of road user behaviour that requires as input only unlabelled raw video data of a traffic scene collected from a single, monocular, uncalibrated camera with ordinary resolution. Our approach calibrates the camera, detects relevant objects, tracks them through time, and uses the resulting trajectories to perform LfD, yielding models of naturalistic behaviour. We apply ViBe to raw videos of a traffic intersection and show that it can learn purely from videos, without additional expert knowledge.
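
As a rough illustration of the four-stage pipeline the abstract describes (calibrate, detect, track, learn from demonstration), here is a minimal Python sketch. Every function and type in it is a hypothetical placeholder invented for illustration; none correspond to the authors' actual code or API.

from dataclasses import dataclass
from typing import Any, List


@dataclass
class Trajectory:
    """One road user's path over time, in world coordinates (placeholder type)."""
    positions: List[tuple]  # (t, x, y) samples


def calibrate_camera(frames: List[Any]) -> Any:
    """Stage 1 (placeholder): estimate camera parameters from the raw footage alone."""
    return None


def detect_objects(frame: Any) -> List[Any]:
    """Stage 2 (placeholder): detect road users (vehicles, cyclists, pedestrians)."""
    return []


def track_objects(per_frame_detections: List[List[Any]], camera: Any) -> List[Trajectory]:
    """Stage 3 (placeholder): link detections across frames into world-space trajectories."""
    return []


def learn_from_demonstration(trajectories: List[Trajectory]) -> Any:
    """Stage 4 (placeholder): fit a behaviour model to the trajectories via LfD."""
    return None


def vibe_pipeline(frames: List[Any]) -> Any:
    """Run the full pipeline on unlabelled raw video frames, as in the abstract."""
    camera = calibrate_camera(frames)
    detections = [detect_objects(f) for f in frames]
    trajectories = track_objects(detections, camera)
    return learn_from_demonstration(trajectories)

The point of the sketch is only to make the data flow explicit: raw video is the sole input, and each stage's output feeds the next, ending in a learned behaviour model.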

BibTeX Entry

@ARTICLE{Behbahani18arxiv,
       author = {{Behbahani}, Feryal and {Shiarlis}, Kyriacos and {Chen}, Xi and
         {Kurin}, Vitaly and {Kasewa}, Sudhanshu and {Stirbu}, Ciprian and
         {Gomes}, Jo{\~a}o and {Paul}, Supratik and {Oliehoek}, Frans A. and
         {Messias}, Jo{\~a}o and {Whiteson}, Shimon},
        title = {Learning from Demonstration in the Wild},
      journal = {arXiv e-prints},
         year = 2018,
        month = nov,
          eid = {arXiv:1811.03516},
        pages = {arXiv:1811.03516},
archivePrefix = {arXiv},
       eprint = {1811.03516},
 primaryClass = {cs.LG},
     keywords = {nonrefereed, arxiv},
      wwwnote = {Accepted for publication at ICRA},
     abstract = {
    Learning from demonstration (LfD) is useful in settings where hand-coding
    behaviour or a reward function is impractical. It has succeeded in a wide
    range of problems but typically relies on artificially generated
    demonstrations or specially deployed sensors and has not generally been
    able to leverage the copious demonstrations available in the wild: those
    that capture behaviour that was occurring anyway using sensors that were
    already deployed for another purpose, e.g., traffic camera footage
    capturing demonstrations of natural behaviour of vehicles, cyclists, and
    pedestrians. We propose video to behaviour (ViBe), a new approach to
    learning models of road user behaviour that requires as input only
    unlabelled raw video data of a traffic scene collected from a single,
    monocular, uncalibrated camera with ordinary resolution. Our approach
    calibrates the camera, detects relevant objects, tracks them through time,
    and uses the resulting trajectories to perform LfD, yielding models of
    naturalistic behaviour. We apply ViBe to raw videos of a traffic
    intersection and show that it can learn purely from videos, without
    additional expert knowledge. 
    }
}
