Automatic layered RGB-D scene flow estimation with optical flow field constraint

Scene flow estimation from RGB-D frames is receiving increasing attention in digital video processing and computer vision owing to the widespread use of depth sensors. Existing methods based on object segmentation have proven effective under object occlusion and large displacement. However, improper segmentation often produces incomplete regions or incorrect edges, which in turn leads to inaccurate occlusion inference and scene flow estimation. To this end, an automatic layered RGB-D scene flow estimation method is proposed, which achieves more accurate layering of objects in the depth image by exploiting motion information. The authors employ super-pixel segmentation for the initial layering, which helps preserve the edges and integrity of objects. Furthermore, an optical flow field, which is highly correlated with the scene motion, is used to drive the automatic layering. Together, the super-pixel segmentation and the motion information ensure the integrity of the object regions and improve the accuracy of the estimated scene flow. The authors validate their approach both qualitatively and quantitatively on several public datasets. Experimental results show that the proposed method preserves object integrity and achieves lower root mean square error and average angular error than current state-of-the-art algorithms.
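
To make the described pipeline concrete, the following is a minimal sketch of the layering stage, not the authors' implementation: it assumes scikit-image's SLIC for the super-pixel step, OpenCV's Farnebäck algorithm as a stand-in for the optical flow field constraint, and a simple k-means over per-super-pixel flow and depth statistics in place of the paper's layering model; `layer_rgbd_frame` and its parameters are hypothetical names.

```python
import cv2
import numpy as np
from skimage.segmentation import slic
from sklearn.cluster import KMeans


def layer_rgbd_frame(rgb1, rgb2, depth1, n_segments=400, n_layers=4):
    """Assign every pixel of the first frame to a motion layer (sketch)."""
    # 1) Initial layering: SLIC super-pixels on the colour image,
    #    which keeps object edges intact.
    labels = slic(rgb1, n_segments=n_segments, compactness=10, start_label=0)

    # 2) Optical flow field between the two RGB frames
    #    (Farneback here; the paper does not specify this choice).
    g1 = cv2.cvtColor(rgb1, cv2.COLOR_RGB2GRAY)
    g2 = cv2.cvtColor(rgb2, cv2.COLOR_RGB2GRAY)
    flow = cv2.calcOpticalFlowFarneback(
        g1, g2, None, pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)

    # 3) Per-super-pixel features: mean flow vector and mean depth.
    n_sp = labels.max() + 1
    feats = np.zeros((n_sp, 3))
    for s in range(n_sp):
        mask = labels == s
        feats[s, :2] = flow[mask].mean(axis=0)   # mean (u, v) motion
        feats[s, 2] = depth1[mask].mean()        # mean depth

    # Normalise so pixel motion and depth live on comparable scales.
    feats = (feats - feats.mean(axis=0)) / (feats.std(axis=0) + 1e-8)

    # 4) Group super-pixels whose motion and depth agree into layers
    #    (k-means as a simple stand-in for the paper's layering model).
    layer_of_sp = KMeans(n_clusters=n_layers, n_init=10).fit_predict(feats)

    # 5) Broadcast the super-pixel layer ids back to a dense layer map.
    return layer_of_sp[labels]
```

Working on super-pixels rather than raw pixels is what preserves object edges here: a layer boundary can only fall on a super-pixel boundary, which SLIC aligns to colour edges.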

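The two reported error measures are standard and straightforward to reproduce. A minimal NumPy sketch follows, assuming dense (H, W, 3) arrays for the estimated and ground-truth scene flow; the paper's exact masking and averaging conventions are not given in the abstract, and some works compute the angular error on homogeneous (u, v, 1) vectors instead.

```python
import numpy as np


def rmse(flow_est, flow_gt):
    """Root mean square error over all scene flow components."""
    return np.sqrt(np.mean((flow_est - flow_gt) ** 2))


def aae(flow_est, flow_gt, eps=1e-8):
    """Average angular error (radians) between per-pixel flow vectors."""
    dot = np.sum(flow_est * flow_gt, axis=-1)
    norms = np.linalg.norm(flow_est, axis=-1) * np.linalg.norm(flow_gt, axis=-1)
    cos = np.clip(dot / (norms + eps), -1.0, 1.0)
    return np.mean(np.arccos(cos))
```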