Gaussian Processes for Data-Efficient Learning in Robotics and Control

Publication Date: 2013-11-04

Journal Title: IEEE Transactions on Pattern Analysis and Machine Intelligence

Publisher: IEEE

Volume: 37

Issue: 2

Pages: 408-423

Language: English

Type: Article

Metadata: Show full item record

Citation: Deisenroth, M. P., Fox, D., & Rasmussen, C. E. (2013). Gaussian Processes for Data-Efficient Learning in Robotics and Control. IEEE Transactions on Pattern Analysis and Machine Intelligence, 37 (2), 408-423. https://doi.org/10.1109/TPAMI.2013.218

Description: This is the author accepted manuscript. The final version is available from IEEE via http://dx.doi.org/10.1109/TPAMI.2013.218

Abstract: Autonomous learning has been a promising direction in control and robotics for more than a decade, since data-driven learning reduces the amount of engineering knowledge that is otherwise required. However, autonomous reinforcement learning (RL) approaches typically require many interactions with the system to learn controllers, which is a practical limitation in real systems such as robots, where many interactions can be impractical and time-consuming. To address this problem, current learning approaches typically require task-specific knowledge in the form of expert demonstrations, realistic simulators, pre-shaped policies, or specific knowledge about the underlying dynamics. In this paper, we follow a different approach and speed up learning by extracting more information from data. In particular, we learn a probabilistic, non-parametric Gaussian process transition model of the system. By explicitly incorporating model uncertainty into long-term planning and controller learning, our approach reduces the effects of model errors, a key problem in model-based learning. Compared to state-of-the-art RL, our model-based policy search method achieves an unprecedented speed of learning. We demonstrate its applicability to autonomous learning in real robot and control tasks.
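The central ingredient described in the abstract — a probabilistic, non-parametric Gaussian process model of the system dynamics, whose predictive variance quantifies model uncertainty — can be sketched with plain GP regression on (state, action) → next-state data. The code below is an illustrative sketch only, using NumPy, a squared-exponential kernel, and made-up 1-D dynamics; it is not the authors' PILCO implementation, which additionally propagates uncertainty through multi-step predictions and optimizes the controller.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, signal_var=1.0):
    """Squared-exponential (RBF) kernel between the rows of A and B."""
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return signal_var * np.exp(-0.5 * sq / lengthscale**2)

def gp_fit(X, y, noise_var=1e-2):
    """Precompute the Cholesky factor and weights for GP regression."""
    K = rbf_kernel(X, X) + noise_var * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return X, L, alpha

def gp_predict(model, Xs):
    """Predictive mean and variance at test inputs Xs (standard GP equations)."""
    X, L, alpha = model
    Ks = rbf_kernel(X, Xs)                     # cross-covariances, shape (n, m)
    mean = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = rbf_kernel(Xs, Xs).diagonal() - np.sum(v**2, axis=0)
    return mean, var

# Hypothetical 1-D dynamics for illustration: next state = sin(x) + 0.5*u + noise.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(30, 2))           # rows are (state, action) pairs
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + 0.05 * rng.standard_normal(30)

model = gp_fit(X, y)
mean, var = gp_predict(model, np.array([[0.5, 0.0]]))
```

The predictive variance `var` is what distinguishes this from a deterministic learned model: near the training data it is small, while far from the data it reverts to the prior signal variance, signaling that the model does not know the dynamics there — exactly the uncertainty that the paper's planning procedure takes into account.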

Keywords: policy search, robotics, control, Gaussian processes, Bayesian inference, reinforcement learning

Sponsorship: The research leading to these results has received funding from the EC’s Seventh Framework Programme (FP7/2007-2013) under grant agreement #270327, ONR MURI grant N00014-09-1-1052, Intel Labs, and the Department of Computing, Imperial College London.

Identifiers:

External DOI: https://doi.org/10.1109/TPAMI.2013.218

This record's URL: https://www.repository.cam.ac.uk/handle/1810/255116







Authors: Deisenroth, Marc Peter; Fox, Dieter; Rasmussen, Carl Edward

Source: https://www.repository.cam.ac.uk/handle/1810/255116


