Unsupervised Learning of Long-Term Motion Dynamics for Videos

Presented at: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Hawaii, USA, July 21-26, 2017
Publication date: 2017

We present an unsupervised representation learning approach that compactly encodes the motion dependencies in videos. Given a pair of images from a video clip, our framework learns to predict the long-term 3D motions. To reduce the complexity of the learning problem, we propose to describe the motion as a sequence of atomic 3D flows computed from the RGB-D modality. We use a Recurrent Neural Network based encoder-decoder framework to predict these sequences of flows. We argue that, for the decoder to reconstruct these sequences, the encoder must learn a robust video representation that captures long-term motion dependencies and spatio-temporal relations. We demonstrate the effectiveness of our learned temporal representations on activity classification across multiple modalities and datasets, such as NTU RGB+D and MSR Daily Activity 3D. Our framework is agnostic to the input modality and handles RGB, depth, and RGB-D videos.
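The record itself contains no code. As a rough illustration of the kind of encoder-decoder the abstract describes, the following PyTorch sketch wires a small frame-pair encoder to an LSTM decoder that emits a fixed-length sequence of 3D flow maps. The class name FlowEncoderDecoder, all layer sizes, the number of decoding steps, and the MSE objective are illustrative assumptions, not the authors' published architecture.

import torch
import torch.nn as nn

class FlowEncoderDecoder(nn.Module):
    # Illustrative sketch only: layer sizes, step count, and the loss
    # below are assumptions, not the architecture from the paper.
    def __init__(self, feat_dim=256, hidden_dim=256, flow_hw=56, num_steps=8):
        super().__init__()
        self.num_steps = num_steps
        self.flow_hw = flow_hw
        # Encode a channel-stacked frame pair (6 input channels) into one vector.
        self.encoder = nn.Sequential(
            nn.Conv2d(6, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(64, 128, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, feat_dim),
        )
        # Recurrent decoder unrolled for num_steps atomic-flow predictions.
        self.decoder = nn.LSTMCell(feat_dim, hidden_dim)
        # Each step regresses one 3-channel (x, y, z) flow map.
        self.flow_head = nn.Linear(hidden_dim, 3 * flow_hw * flow_hw)

    def forward(self, frame_a, frame_b):
        z = self.encoder(torch.cat([frame_a, frame_b], dim=1))
        h = z.new_zeros(z.size(0), self.decoder.hidden_size)
        c = torch.zeros_like(h)
        flows = []
        for _ in range(self.num_steps):
            h, c = self.decoder(z, (h, c))
            flows.append(self.flow_head(h).view(-1, 3, self.flow_hw, self.flow_hw))
        return torch.stack(flows, dim=1)  # (batch, num_steps, 3, H, W)

# Toy usage: predict a flow sequence for a batch of two frame pairs.
model = FlowEncoderDecoder()
pred = model(torch.randn(2, 3, 224, 224), torch.randn(2, 3, 224, 224))
loss = nn.functional.mse_loss(pred, torch.randn_like(pred))  # stand-in target

In a real pipeline, such a model would be trained on (frame pair, ground-truth flow sequence) tuples extracted from RGB-D video, after which the encoder would be reused as the learned representation for downstream activity classification, as the abstract describes.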

Reference: EPFL-CONF-230240

Authors: Luo, Zelun; Peng, Boya; Huang, De-An; Alahi, Alexandre; Fei-Fei, Li

Source: https://infoscience.epfl.ch/record/230240?ln=en






