Optical flow refinement using iterative propagation under colour, proximity and flow reliability constraints
- Author(s): Tan Khoa Mai 1; Michèle Gouiffès 2; Samia Bouchafa 1
- Affiliations:
  1: IBISC, Univ Evry, Université Paris-Saclay, 91020 Evry, France
  2: LIMSI, CNRS, Univ. Paris-Sud, Université Paris-Saclay, bât. 507, Rue du Belvédère, 91405 Orsay, France
- Source: IET Image Processing, Volume 14, Issue 8, 19 June 2020, pp. 1509–1519
- DOI: 10.1049/iet-ipr.2019.0370; Print ISSN 1751-9659; Online ISSN 1751-9667
This study proposes a strategy for refining optical flow based on estimated reliability maps. These maps are first estimated a posteriori, after motion estimation by the well-known Kanade–Lucas–Tomasi (KLT) method. Using two newly defined criteria, based respectively on the local variance of the optical flow and on the temporal evolution of the KLT residuals, a global refinement of the motion map is then carried out in two stages, under the control of the reliability measures and of local colour homogeneity. According to experiments performed on the Middlebury dataset, the authors' reliability measures prove to be a good indicator of estimation quality. Indeed, the correction process increases the global reliability measures and significantly reduces the global errors. The experiments show that the quality is higher than that of classical estimation methods, ranking 88th out of 168 on the Middlebury website.
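The paper's exact formulas are not reproduced on this page, but the first criterion — treating low local variance of the flow as a sign of reliability — can be sketched as follows. This is a minimal illustration under my own assumptions: a dense flow field given as an H×W×2 array, a window size `k`, and an exponential mapping of variance to a [0, 1] score are illustrative choices, not the authors' method.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def flow_reliability(flow, k=3, sigma=0.5):
    """Map local flow variance to a [0, 1] reliability score.

    flow  : (H, W, 2) array of per-pixel (u, v) displacements
    k     : odd window size of the local neighbourhood
    sigma : scale of the variance-to-reliability mapping (assumed form)
    """
    pad = k // 2
    padded = np.pad(flow, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    # (H, W, 2, k, k) view: one k x k window per pixel and per flow channel
    windows = sliding_window_view(padded, (k, k), axis=(0, 1))
    # Sum the variances of u and v inside each window
    local_var = windows.var(axis=(-2, -1)).sum(axis=-1)
    # Locally homogeneous motion (low variance) -> reliability close to 1
    return np.exp(-local_var / sigma**2)

# Uniform flow is fully reliable; a motion discontinuity lowers the score
uniform = np.ones((8, 8, 2))
split = np.zeros((8, 8, 2))
split[:, 4:, 0] = 2.0  # right half moves, left half is static
print(flow_reliability(uniform).min())                                # 1.0
print(flow_reliability(split)[0, 4] < flow_reliability(split)[0, 0])  # True
```

A refinement pass in the spirit of the abstract would then propagate flow vectors from high-score pixels into low-score regions, weighting candidates by colour similarity and spatial proximity.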
Inspec keywords: motion estimation; image sequences; reliability; image motion analysis
Other keywords: optical flow local variance; colour constraint; optical flow refinement; flow reliability constraints; motion map; motion estimation; temporal evolution; proximity constraint; global reliability measures; KLT residuals; Kanade–Lucas–Tomasi method; iterative propagation; colour local homogeneousness
Subjects: Computer vision and image processing techniques; Optical, image and video signal processing
References
1. Horn, B.K.P., Schunck, B.G.: 'Determining optical flow', Artif. Intell., 1981, 17, (1–3), pp. 185–203
2. Black, M.J., Anandan, P.: 'Robust dynamic motion estimation over time'. Proc. IEEE Computer Society Conf. on Computer Vision and Pattern Recognition (CVPR '91), Maui, HI, USA, 1991, pp. 296–302
3. Brox, T., Malik, J.: 'Large displacement optical flow: descriptor matching in variational motion estimation', IEEE Trans. Pattern Anal. Mach. Intell., 2011, 33, (3), pp. 500–513
4. Kim, T.H., Lee, H.S., Lee, K.M.: 'Optical flow via locally adaptive fusion of complementary data costs', 2013, pp. 3344–3351
5. Sun, D., Roth, S., Black, M.J.: 'A quantitative analysis of current practices in optical flow estimation and the principles behind them', Int. J. Comput. Vis., 2014, 106, (2), pp. 115–137
6. Menze, M., Heipke, C., Geiger, A.: 'Discrete optimization for optical flow'. German Conf. on Pattern Recognition (GCPR), Aachen, Germany, 2015, vol. 9358, pp. 16–28
7. Wannenwetsch, A.S., Keuper, M., Roth, S.: 'ProbFlow: joint optical flow and uncertainty estimation'. CoRR, 2017
8. Lucas, B.D., Kanade, T.: 'An iterative image registration technique with an application to stereo vision', 1981, pp. 674–679
9. Black, M.J., Anandan, P.: 'The robust estimation of multiple motions: parametric and piecewise-smooth flow fields', Comput. Vis. Image Underst., 1996, 63, (1), pp. 75–104
10. Brox, T., Weickert, J.: 'Nonlinear matrix diffusion for optic flow estimation', in Van Gool, L. (Ed.): 'Pattern recognition', LNCS vol. 2449 (Springer, Berlin, Heidelberg, 2002), pp. 446–453, doi: 10.1007/3-540-45783-6_54
11. Liu, H., Chellappa, R., Rosenfeld, A.: 'Accurate dense optical flow estimation using adaptive structure tensors and a parametric model', IEEE Trans. Image Process., 2003, 12, (10), pp. 1170–1180
12. Valgaerts, L., Bruhn, A., Weickert, J.: 'A variational model for the joint recovery of the fundamental matrix and the optical flow'. Pattern Recognition, Munich, Germany, 2008, pp. 314–324
13. Bai, X., Dong, X., Su, Y.: 'Edge propagation KD-trees: computing approximate nearest neighbor fields', IEEE Signal Process. Lett., 2015, 22, (12), pp. 2209–2213
14. Bailer, C., Taetz, B., Stricker, D.: 'Optical flow fields: dense correspondence fields for highly accurate large displacement optical flow estimation', arXiv:1703.02563 [cs], 2017
15. Wulff, J., Sevilla-Lara, L., Black, M.J.: 'Optical flow in mostly rigid scenes'. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2017
16. Baker, S., Scharstein, D., Lewis, J.P., et al.: 'A database and evaluation methodology for optical flow', Int. J. Comput. Vis., 2010, 92, (1), pp. 1–31
17. Sun, D., Yang, X., Liu, M.Y., et al.: 'PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume'. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 2018
18. Liu, P., Lyu, M.R., King, I., et al.: 'SelFlow: self-supervised learning of optical flow'. CoRR, 2019
19. Yang, G., Ramanan, D.: 'Volumetric correspondence networks for optical flow', in Wallach, H., Larochelle, H., Beygelzimer, A., et al. (Eds.): 'Advances in Neural Information Processing Systems 32' (Curran Associates, Inc., New York, NY, USA, 2019), pp. 793–803
20. Barron, J.L., Fleet, D.J., Beauchemin, S.S.: 'Performance of optical flow techniques', Int. J. Comput. Vis., 1994, 12, (1), pp. 43–77
21. Shi, J., Tomasi, C.: 'Good features to track', 1994, pp. 593–600
22. Nagel, H.H., Gehrke, A.: 'Spatiotemporally adaptive estimation and segmentation of OF-fields', in Burkhardt, H., Neumann, B. (Eds.): 'Computer Vision – ECCV'98', LNCS vol. 1407 (Springer, Berlin, Heidelberg, 1998), pp. 86–102, doi: 10.1007/BFb0054735
23. Middendorf, M., Nagel, H.H.: 'Estimation and interpretation of discontinuities in optical flow fields'. Proc. Eighth IEEE Int. Conf. on Computer Vision (ICCV 2001), Vancouver, BC, Canada, 2001, vol. 1, pp. 178–183
24. Jähne, B., Haussecker, H., Geissler, P.: 'Handbook of computer vision and applications: signal processing and pattern recognition', vol. 2 (Academic Press, USA, 1999)
25. Kondermann, C., Mester, R., Garbe, C., et al.: 'A statistical confidence measure for optical flows', in Forsyth, D., Torr, P., Zisserman, A. (Eds.): 'Computer Vision – ECCV 2008' (Springer, Berlin, Heidelberg, 2008), pp. 290–301
26. Aodha, O.M., Humayun, A., Pollefeys, M., et al.: 'Learning a confidence measure for optical flow', IEEE Trans. Pattern Anal. Mach. Intell., 2013, 35, (5), pp. 1107–1120
27. Bouguet, J.Y.: 'Pyramidal implementation of the Lucas–Kanade feature tracker' (Intel Corporation, Microprocessor Research Labs, USA, 2000)
28. Mai, T.K., Gouiffès, M., Bouchafa, S.: 'Optical flow refinement using reliable flow propagation'. Proc. 12th Int. Joint Conf. on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2017), Volume 6: VISAPP, Porto, Portugal, 2017, pp. 451–458
29. Wedel, A., Pock, T., Zach, C., et al.: 'An improved algorithm for TV-L1 optical flow', in Hutchison, D., Kanade, T., et al. (Eds.): 'Statistical and geometrical approaches to visual motion analysis', LNCS vol. 5604 (Springer, Berlin, Heidelberg, 2009), pp. 23–45
30. Bruhn, A., Weickert, J.: 'A confidence measure for variational optic flow methods', in Klette, R., Kozera, R., Noakes, L., et al. (Eds.): 'Geometric properties for incomplete data' (Springer, Dordrecht, Netherlands, 2006), pp. 283–298
31. Sun, D., Sudderth, E., Black, M.J.: 'Layered image motion with explicit occlusions, temporal consistency, and depth ordering', in Lafferty, J.D., Williams, C.K.I., Shawe-Taylor, J., et al. (Eds.): 'Advances in Neural Information Processing Systems 23' (Curran Associates, Inc., 2010), pp. 2226–2234
32. Brox, T., Bruhn, A., Papenberg, N., et al.: 'High accuracy optical flow estimation based on a theory for warping'. Computer Vision (ECCV 2004), Prague, Czech Republic, 2004, pp. 25–36
33. Besnerais, G.L., Champagnat, F.: 'Dense optical flow by iterative local window registration'. IEEE Int. Conf. on Image Processing (ICIP 2005), Genoa, Italy, 2005, vol. 1, pp. I-137
34. Chen, Z., Jin, H., Lin, Z., et al.: 'Large displacement optical flow from nearest neighbor fields'. IEEE Conf. on Computer Vision and Pattern Recognition, Portland, OR, USA, 2013, pp. 2443–2450
35. Sun, D., Sudderth, E.B., Black, M.J.: 'Layered segmentation and optical flow estimation over time'. IEEE Conf. on Computer Vision and Pattern Recognition, Providence, RI, USA, 2012, pp. 1768–1775
36. Fortun, D., Bouthemy, P., Kervrann, C.: 'Aggregation of local parametric candidates with exemplar-based occlusion handling for optical flow', Comput. Vis. Image Underst., 2016, 145, pp. 81–94