
Precise depth map upsampling and enhancement based on edge-preserving fusion filters

IET Computer Vision

A texture image plus its associated depth map is the simplest representation of three-dimensional image and video signals and can be further encoded for efficient transmission. Since it contains fewer variations, a depth map can be coded at a much lower resolution than the texture image. Furthermore, the resolution of depth-capture devices is usually lower as well. Thus, a low-resolution, possibly noisy depth map requires appropriate interpolation to restore it to full resolution and to remove the noise. In this study, the authors propose potency-guided upsampling and adaptive gradient fusion filters to enhance erroneous depth maps. The proposed depth-map enhancement system can simultaneously suppress noise, fill missing values, sharpen foreground objects, and smooth background regions. Experimental results show that the proposed methods outperform classic methods in terms of both objective metrics and subjective visual quality, and achieve results visually comparable with those of some far more time-consuming methods.
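The guided-upsampling idea underlying such edge-preserving fusion filters can be illustrated with classic joint bilateral upsampling (Kopf et al., 2007): spatial weights come from pixel distance in the low-resolution depth grid, while range weights come from the high-resolution texture (guide) image, so restored depth edges snap to texture edges instead of being blurred. The sketch below is a minimal illustration of that principle only, not the authors' potency-guided or adaptive gradient filter; the function name, the Gaussian parameters, and the nearest-sample correspondence between grids are assumptions.

```python
import numpy as np

def joint_bilateral_upsample(depth_lo, guide, sigma_s=1.0, sigma_r=0.1):
    """Upsample a low-res depth map to the guide image's resolution with a
    joint (cross) bilateral filter. Spatial weights use distance in the
    low-res grid; range weights use high-res guide-intensity differences."""
    H, W = guide.shape
    h, w = depth_lo.shape
    sy, sx = H / h, W / w                 # upsampling factors per axis
    r = int(np.ceil(2 * sigma_s))         # filter radius in low-res pixels
    out = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            ly, lx = y / sy, x / sx       # (y, x) in low-res coordinates
            y0, y1 = max(0, int(ly) - r), min(h, int(ly) + r + 1)
            x0, x1 = max(0, int(lx) - r), min(w, int(lx) + r + 1)
            acc = norm = 0.0
            for py in range(y0, y1):
                for px in range(x0, x1):
                    # spatial weight: Gaussian on low-res grid distance
                    ws = np.exp(-((py - ly) ** 2 + (px - lx) ** 2)
                                / (2 * sigma_s ** 2))
                    # range weight: Gaussian on guide-intensity difference,
                    # comparing the output pixel with the nearest high-res
                    # sample of the low-res neighbour (an assumed mapping)
                    gy = min(H - 1, int(py * sy))
                    gx = min(W - 1, int(px * sx))
                    wr = np.exp(-(guide[y, x] - guide[gy, gx]) ** 2
                                / (2 * sigma_r ** 2))
                    acc += ws * wr * depth_lo[py, px]
                    norm += ws * wr
            out[y, x] = acc / norm
    return out
```

With a guide image containing a sharp vertical intensity edge, depth values on opposite sides of the edge receive near-zero range weight across it, so the upsampled depth keeps a crisp discontinuity where plain bicubic interpolation would smear it. A small `sigma_r` enforces stricter edge preservation; a larger one behaves more like a purely spatial Gaussian filter.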

http://iet.metastore.ingenta.com/content/journals/10.1049/iet-cvi.2017.0336