Fusing Depth and Silhouette for Scanning Transparent Object with RGB-D Sensor





International Journal of Optics, Volume 2017, Article ID 9796127, 11 pages. https://doi.org/10.1155/2017/9796127

Research Article

Key Laboratory of Specialty Fiber Optics and Optical Access Networks, Shanghai University, Shanghai, China

Correspondence should be addressed to Zhijiang Zhang

Received 17 February 2017; Accepted 24 April 2017; Published 28 May 2017

Academic Editor: Chenggen Quan

Copyright © 2017 Yijun Ji et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

3D reconstruction based on structured light or laser scanning has been widely used in industrial measurement, robot navigation, and virtual reality. However, most modern range sensors fail to scan transparent objects and certain other materials, whose surfaces do not return accurate depth because of light absorption and refraction. In this paper, we fuse the depth and silhouette information from an RGB-D sensor (Kinect v1) to recover the lost surfaces of transparent objects. Our system consists of two parts. First, we use the missing and erroneous depth values caused by transparent materials, observed from multiple views, to locate the 3D region that contains the transparent object. Then, based on shape-from-silhouette, we recover the 3D model as a visual hull within these noisy regions. Joint GrabCut segmentation is performed on multiple color images to extract the silhouettes, and the initial constraint for GrabCut is determined automatically. Experiments validate that our approach improves the 3D models of transparent objects in real-world scenes. The system is efficient, robust, and requires no interactive operation throughout the process.
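As a rough illustration of the two silhouette-related steps described above, the sketch below shows rectangle-initialized GrabCut silhouette extraction (using OpenCV) followed by voxel-based visual hull carving. This is a minimal sketch, not the authors' implementation: the initializing rectangle, the bounds of the carving region, and the camera projection matrices are assumed to be given (in the paper they are derived from the noisy depth regions and the sensor calibration).

```python
import numpy as np
import cv2


def grabcut_silhouette(image, rect, iterations=5):
    """Extract a binary silhouette with GrabCut, initialized from a bounding
    rectangle (standing in for the automatically determined constraint)."""
    mask = np.zeros(image.shape[:2], np.uint8)
    bgd_model = np.zeros((1, 65), np.float64)
    fgd_model = np.zeros((1, 65), np.float64)
    cv2.grabCut(image, mask, rect, bgd_model, fgd_model,
                iterations, cv2.GC_INIT_WITH_RECT)
    # Keep definite and probable foreground labels as the silhouette.
    fg = (mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD)
    return fg.astype(np.uint8)


def carve_visual_hull(silhouettes, projections, bounds, resolution=64):
    """Voxel-based visual hull: keep voxels whose projection falls inside
    the silhouette in every view.

    silhouettes : list of HxW binary masks
    projections : list of 3x4 camera projection matrices (K [R|t])
    bounds      : ((xmin, ymin, zmin), (xmax, ymax, zmax)) of the noisy region
    """
    (xmin, ymin, zmin), (xmax, ymax, zmax) = bounds
    xs = np.linspace(xmin, xmax, resolution)
    ys = np.linspace(ymin, ymax, resolution)
    zs = np.linspace(zmin, zmax, resolution)
    X, Y, Z = np.meshgrid(xs, ys, zs, indexing="ij")
    # Homogeneous voxel centers, shape 4 x N.
    voxels = np.stack([X.ravel(), Y.ravel(), Z.ravel(), np.ones(X.size)], axis=0)
    occupied = np.ones(X.size, dtype=bool)

    for sil, P in zip(silhouettes, projections):
        uvw = P @ voxels                                  # project into the view
        u = np.round(uvw[0] / uvw[2]).astype(int)
        v = np.round(uvw[1] / uvw[2]).astype(int)
        inside = (u >= 0) & (u < sil.shape[1]) & (v >= 0) & (v < sil.shape[0])
        hit = np.zeros(X.size, dtype=bool)
        hit[inside] = sil[v[inside], u[inside]] > 0
        occupied &= hit                                   # carve away voxels outside any silhouette

    return occupied.reshape(X.shape)
```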





Authors: Yijun Ji, Qing Xia, and Zhijiang Zhang

Source: https://www.hindawi.com/


