Vehicles detection for illumination changes urban traffic scenes employing adaptive local texture feature background model

An Adaptive Local Texture Feature Background Model (ALTF-BM) is proposed to address a weakness of current background models, which are easily corrupted by sudden and gradual illumination changes in complex urban traffic scenes. Based on Weber's law, the authors first develop an Adaptive Local Texture Feature (ALTF), computed over a predefined local region around each pixel using an adaptive distance threshold, and then model the background with a sample-consensus scheme built on the computed features. To label foreground pixels, input video frames are compared directly against the background model through ALTF encoding. Finally, the model is updated with a random update policy to adapt to changing illumination and dynamic backgrounds. Experimental results on real-world urban traffic videos and the public Change Detection 2014 benchmark (CDnet2014) show that the proposed ALTF-BM outperforms other state-of-the-art texture-based methods; on the night traffic-light sequence, its average F-measure and similarity are 0.547 and 0.393 higher than the benchmarks, respectively. These results demonstrate the effectiveness of the proposed ALTF-BM in handling sudden and gradual illumination changes in urban traffic scenes.
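The abstract describes the pipeline only at a high level. As a rough illustration of the three stages it names (Weber-law texture encoding with an adaptive distance threshold, sample-consensus background modelling, and random model update), the following Python sketch shows one plausible realisation. It is not the authors' implementation: the neighbourhood layout, the parameter values (k, n_samples, min_matches, max_dist, subsample) and all function and class names (altf_code, hamming, SampleConsensusBackground) are assumptions made for illustration only.

```python
"""Minimal sketch of a Weber-law texture feature plus a sample-consensus
background model with random update, in the spirit of ALTF-BM.
All names and parameter values here are illustrative assumptions."""
import numpy as np

# 8-connected neighbour offsets around the centre pixel (assumed 3x3 region).
NEIGHBOURS = [(-1, -1), (-1, 0), (-1, 1),
              ( 0, -1),          ( 0, 1),
              ( 1, -1), ( 1, 0), ( 1, 1)]


def altf_code(gray, k=0.1, eps=1e-6):
    """Binary texture code per pixel: a neighbour contributes a '1' bit when
    its difference from the centre exceeds an adaptive threshold k * centre
    intensity (Weber's law: the noticeable difference scales with the stimulus)."""
    gray = gray.astype(np.float32)
    code = np.zeros(gray.shape, dtype=np.uint8)
    thresh = k * gray + eps                      # adaptive distance threshold
    for bit, (dy, dx) in enumerate(NEIGHBOURS):
        neigh = np.roll(np.roll(gray, dy, axis=0), dx, axis=1)
        code |= (np.abs(neigh - gray) > thresh).astype(np.uint8) << bit
    return code


def hamming(a, b):
    """Per-pixel Hamming distance between two 8-bit texture codes."""
    return np.unpackbits((a ^ b)[..., None], axis=-1).sum(axis=-1)


class SampleConsensusBackground:
    """ViBe-style sample-consensus model over ALTF codes with random update."""

    def __init__(self, first_code, n_samples=20, min_matches=2,
                 max_dist=2, subsample=16, rng=None):
        self.samples = np.repeat(first_code[None], n_samples, axis=0)
        self.min_matches, self.max_dist = min_matches, max_dist
        self.subsample = subsample
        self.rng = rng or np.random.default_rng(0)

    def apply(self, code):
        """Return a boolean foreground mask for one frame and update the model."""
        matches = (hamming(self.samples, code[None]) <= self.max_dist).sum(axis=0)
        foreground = matches < self.min_matches

        # Random update policy: each background pixel refreshes one randomly
        # chosen sample with probability 1/subsample, so the model tracks
        # gradual illumination changes without absorbing foreground objects.
        update = (~foreground) & (self.rng.integers(0, self.subsample,
                                                    code.shape) == 0)
        idx = self.rng.integers(0, len(self.samples))
        self.samples[idx][update] = code[update]
        return foreground
```

In use, one would seed the model with the ALTF code of the first frame and then call apply on the code of each subsequent frame; the Hamming-distance test stands in for the consensus comparison between the stored background samples and the current texture pattern.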
