Real-time depth enhancement by fusion for RGB-D cameras

This study presents a real-time refinement procedure for depth data acquired by RGB-D cameras. Data from RGB-D cameras suffer from undesired artefacts such as edge inaccuracies or holes caused by occlusions or low object remission. In this work, the authors take recent depth enhancement filters intended for time-of-flight cameras and extend them to structured-light depth cameras such as the Kinect. Thus, given a depth map and its corresponding two-dimensional image, they correct the depth measurements by treating its undesired regions separately. To that end, the authors propose specific confidence maps to tackle areas in the scene that require special treatment. Furthermore, when filtering artefacts, the authors introduce the use of RGB images as guidance images, as an alternative to real-time state-of-the-art fusion filters that use greyscale guidance images. The experimental results show that the proposed fusion filter provides dense depth maps with corrected erroneous or invalid depth measurements and adjusted depth edges. In addition, the authors propose a mathematical formulation that enables the filter to be used in real-time applications.
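To illustrate the kind of fusion the abstract describes, the sketch below shows a generic cross/joint bilateral refinement of a depth map that uses an RGB image as the guidance signal and a per-pixel confidence map to down-weight invalid or unreliable measurements. This is only a minimal illustrative sketch, not the authors' exact filter or their real-time formulation; the function name, the parameters sigma_s and sigma_r, and the form of the confidence map are assumptions introduced here for clarity.

```python
import numpy as np

def joint_bilateral_depth_fusion(depth, rgb, confidence,
                                 radius=5, sigma_s=3.0, sigma_r=0.1):
    """Cross/joint bilateral refinement of a depth map guided by an RGB image.

    depth      : (H, W) float array; invalid pixels may be 0 or NaN
    rgb        : (H, W, 3) float array in [0, 1], the guidance image
    confidence : (H, W) float array in [0, 1]; 0 marks unreliable depth
    """
    H, W = depth.shape
    d = np.nan_to_num(depth, nan=0.0)
    out = np.zeros_like(d)

    # Precompute spatial Gaussian weights for the (2r+1) x (2r+1) window.
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    spatial = np.exp(-(xx**2 + yy**2) / (2.0 * sigma_s**2))

    for y in range(H):
        y0, y1 = max(0, y - radius), min(H, y + radius + 1)
        for x in range(W):
            x0, x1 = max(0, x - radius), min(W, x + radius + 1)

            patch_d = d[y0:y1, x0:x1]
            patch_c = confidence[y0:y1, x0:x1]
            patch_rgb = rgb[y0:y1, x0:x1]

            # Range kernel computed on the RGB guidance image (colour
            # distance), rather than on a greyscale intensity difference.
            diff = patch_rgb - rgb[y, x]
            rng = np.exp(-np.sum(diff**2, axis=2) / (2.0 * sigma_r**2))

            sp = spatial[(y0 - y + radius):(y1 - y + radius),
                         (x0 - x + radius):(x1 - x + radius)]

            # Confidence suppresses the contribution of invalid samples,
            # so holes are filled from reliable neighbours.
            w = sp * rng * patch_c
            wsum = w.sum()
            out[y, x] = (w * patch_d).sum() / wsum if wsum > 1e-8 else d[y, x]
    return out
```

The nested-loop form above is written for readability; a real-time implementation would rely on a constant-time or separable approximation of the bilateral kernel rather than an explicit per-pixel window.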
