A stereoscopic video conversion scheme based on spatio-temporal analysis of MPEG videos





EURASIP Journal on Advances in Signal Processing, 2012:237

Received: 09 March 2012 · Accepted: 18 October 2012 · First Online: 12 November 2012

Abstract

In this article, an automatic stereoscopic video conversion scheme that accepts MPEG-encoded videos as input is proposed. Our scheme is depth-based, relying on spatio-temporal analysis of the decoded video data to yield depth perception cues, such as temporal motion and spatial contrast, which reflect the relative depths between the foreground and the background areas. Our scheme is shot-adaptive: shot change detection and shot classification are performed to tune the algorithms or parameters used for depth cue combination. Depth estimation is initially block-based and is followed by a locally adaptive joint trilateral upsampling algorithm, which significantly reduces the computing load. A recursive temporal filter is used to reduce possible depth fluctuations, as well as artifacts in the synthesized images caused by erroneous depth estimates. The traditional depth-image-based rendering (DIBR) algorithm is used to synthesize the left- and right-view frames for 3D display. Subjective tests show that videos converted by our scheme provide perceived depth and visual quality comparable to those converted from depth data calculated by stereo vision techniques. Our scheme is also shown to outperform the well-known TriDef software in terms of perceived 3D depth. Implemented with the OpenMP parallel programming model, our scheme is capable of running in real time on a multi-core CPU platform.
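To make two of the stages mentioned above more concrete, the sketch below shows a first-order recursive temporal filter applied to a per-pixel depth map and a simple DIBR-style horizontal pixel shift, with the row loop parallelized via OpenMP. This is a minimal illustration under assumptions of our own (grayscale frames, a blending weight `alpha`, a linear depth-to-disparity mapping, no hole filling), not the authors' implementation.

```cpp
// Minimal sketch (not the paper's code): recursive temporal depth filtering
// followed by a DIBR-style horizontal shift for one synthesized view.
// Frame layout, `alpha`, and the disparity scaling are illustrative assumptions.
#include <vector>
#include <cstdint>
#include <algorithm>

struct Frame {
    int w, h;
    std::vector<uint8_t> pix;   // grayscale, row-major, for brevity
};

// Recursive temporal filter: D_t = alpha * D_est + (1 - alpha) * D_{t-1},
// which damps frame-to-frame depth fluctuations.
void temporal_filter(std::vector<float>& depth_prev,
                     const std::vector<float>& depth_est,
                     float alpha)
{
    for (size_t i = 0; i < depth_prev.size(); ++i)
        depth_prev[i] = alpha * depth_est[i] + (1.0f - alpha) * depth_prev[i];
}

// DIBR-style view synthesis: shift each pixel horizontally by a disparity
// proportional to its (normalized) depth; hole filling is left to a later step.
// Rows are independent, so the outer loop parallelizes with OpenMP.
void render_view(const Frame& src, const std::vector<float>& depth,
                 float max_disp, int sign /* +1 right view, -1 left view */,
                 Frame& dst)
{
    dst.w = src.w;
    dst.h = src.h;
    dst.pix.assign(src.pix.size(), 0);
    #pragma omp parallel for
    for (int y = 0; y < src.h; ++y) {
        for (int x = 0; x < src.w; ++x) {
            int d  = static_cast<int>(depth[y * src.w + x] * max_disp);
            int xs = x + sign * d;              // shifted column
            if (xs >= 0 && xs < src.w)
                dst.pix[y * dst.w + xs] = src.pix[y * src.w + x];
        }
    }
}
```

In practice the left and right views would be rendered with opposite `sign` values from the same filtered depth map; the per-row independence is what makes this stage amenable to the multi-core, OpenMP-based execution reported in the paper.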

Keywords: Stereoscopic video conversion · Depth estimation · Depth cue · 3D perception · DIBR

Electronic supplementary material: The online version of this article (doi:10.1186/1687-6180-2012-237) contains supplementary material, which is available to authorized users.




Authors: Guo-Shiang Lin, Hsiang-Yun Huang, Wei-Chih Chen, Cheng-Ying Yeh, Kai-Che Liu, Wen-Nung Lie

Source: https://link.springer.com/






