Vehicles detection for illumination changes urban traffic scenes employing adaptive local texture feature background model


An Adaptive Local Texture Feature Background Model (ALTF-BM) is proposed to address a deficiency of current background models, which are easily contaminated by sudden and gradual illumination changes in complex urban traffic scenes. Based on Weber's law, the authors first develop an Adaptive Local Texture Feature (ALTF), computed over a predefined local region around each pixel using an adaptive distance threshold, and then model the background on the basis of a sample consensus scheme over the computed features. To label foreground pixels, the input video frames are compared directly against the background model via ALTF encoding. Finally, the model is updated with a random update policy to adapt to changing illumination and dynamic backgrounds. Experimental results on real-world urban traffic videos and the public Change Detection 2014 benchmark (CDnet2014) show that the proposed ALTF-BM outperforms other state-of-the-art texture-based methods; on the night traffic-light sequence, its average F-measure and similarity are 0.547 and 0.393 higher than the benchmarks, respectively. These encouraging results demonstrate the effectiveness of the proposed ALTF-BM in handling sudden and gradual illumination changes in urban traffic scenes.
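To make the two core ideas of the abstract concrete, the following is a minimal sketch, not the authors' implementation: a Weber's-law local texture descriptor with an intensity-scaled (adaptive) threshold, and a ViBe-style sample-consensus background test. All function names, the `alpha` scaling factor, and the match/Hamming parameters are illustrative assumptions.

```python
import numpy as np

def altf_descriptor(patch, alpha=0.05):
    """Hypothetical Weber's-law texture feature over a 3x3 patch.

    Weber's law says a just-noticeable intensity difference grows in
    proportion to the stimulus, so the comparison threshold here scales
    with the centre pixel's intensity rather than being fixed.
    Returns an 8-element binary pattern (one bit per neighbour).
    """
    center = float(patch[1, 1])
    threshold = alpha * max(center, 1.0)        # adaptive, intensity-scaled
    neighbours = np.delete(patch.astype(float).flatten(), 4)  # drop centre
    bits = (neighbours - center) > threshold
    return bits.astype(np.uint8)

def is_background(pattern, samples, min_matches=2, max_hamming=2):
    """Sample-consensus test in the spirit of ViBe-style models.

    The pixel is labelled background if its current pattern lies within
    `max_hamming` bits of at least `min_matches` stored samples.
    """
    distances = np.sum(samples != pattern, axis=1)
    return int(np.count_nonzero(distances <= max_hamming)) >= min_matches
```

Because the threshold scales with the centre intensity, the same absolute difference that flips a bit in a dark region is absorbed in a bright one, which is what makes a Weber-style feature tolerant of global illumination changes; the random update policy mentioned in the abstract would then replace a randomly chosen stored sample with the current pattern for pixels classified as background.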


1. Bouwmans, T.: ‘Traditional and recent approaches in background modeling for foreground detection: an overview’, Comput. Sci. Rev., 2014, 11, pp. 31–66.
2. Zhong, Z., Zhang, B., Lu, G., et al: ‘An adaptive background modeling method for foreground segmentation’, IEEE Trans. Intell. Transp. Syst., 2017, 99, pp. 1–13.
3. Kim, W., Jung, C.: ‘Illumination-invariant background subtraction: comparative review, models, and prospects’, IEEE Access, 2017, 5, (99), pp. 8369–8384.
4. Buch, N., Velastin, S.A., Orwell, J.: ‘A review of computer vision techniques for the analysis of urban traffic’, IEEE Trans. Intell. Transp. Syst., 2011, 12, (3), pp. 920–939.
5. Stauffer, C., Grimson, W.E.L.: ‘Adaptive background mixture models for real-time tracking’. IEEE Computer Society Conf. on Computer Vision and Pattern Recognition, 1999, vol. 2, pp. 246–252.
6. Li, D., Xu, L., Goodman, E.D.: ‘Illumination-robust foreground detection in a video surveillance system’, IEEE Trans. Circuits Syst. Video Technol., 2013, 23, (10), pp. 1637–1650.
7. Li, J., Miao, Z.: ‘Foreground segmentation for dynamic scenes with sudden illumination changes’, IET Image Process., 2012, 6, (5), pp. 606–615.
8. Cheng, F.C., Huang, S.C., Ruan, S.J.: ‘Illumination-sensitive background modeling approach for accurate moving object detection’, IEEE Trans. Broadcast., 2011, 57, (4), pp. 794–801.
9. Mahmoudpour, S., Kim, M.: ‘Robust foreground detection in sudden illumination change’, Electron. Lett., 2016, 52, (6), pp. 441–443.
10. Choi, J.M., Chang, H.J., Yoo, Y.J., et al: ‘Robust moving object detection against fast illumination change’, Comput. Vis. Image Underst., 2012, 116, (2), pp. 179–193.
11. Dong, Y., Desouza, G.N.: ‘Adaptive learning of multi-subspace for foreground detection under illumination changes’, Comput. Vis. Image Underst., 2011, 115, (1), pp. 31–49.
12. Candès, E.J., Li, X., Ma, Y., et al: ‘Robust principal component analysis?’, J. ACM, 2011, 58, (3), pp. 1–39.
13. Wen, J., Xu, Y., Tang, J., et al: ‘Joint video frame set division and low-rank decomposition for background subtraction’, IEEE Trans. Circuits Syst. Video Technol., 2014, 24, (12), pp. 2034–2048.
14. Xie, Y., Gu, S., Liu, Y., et al: ‘Weighted Schatten p-norm minimization for image denoising and background subtraction’, IEEE Trans. Image Process., 2016, 25, (10), pp. 4842–4857.
15. Kim, W., Kim, Y.: ‘Background subtraction using illumination-invariant structural complexity’, IEEE Signal Process. Lett., 2016, 23, (5), pp. 634–638.
16. Heikkila, M., Pietikainen, M.: ‘A texture-based method for modeling the background and detecting moving objects’, IEEE Trans. Pattern Anal. Mach. Intell., 2006, 28, (4), pp. 657–662.
17. Shimada, A., Taniguchi, R.: ‘Hybrid background model using spatial-temporal LBP’. Sixth IEEE Int. Conf. on Advanced Video and Signal Based Surveillance, 2009, pp. 19–24.
18. Yang, J., Wang, S., Lei, Z., et al: ‘Spatio-temporal LBP based moving object segmentation in compressed domain’. 2012 IEEE Ninth Int. Conf. on Advanced Video and Signal-Based Surveillance (AVSS), 2012, pp. 252–257.
19. Wang, L., Pan, C.: ‘Fast and effective background subtraction based on εLBP’. IEEE Int. Conf. on Acoustics, Speech, and Signal Processing, 2010.
20. Vishnyakov, B., Gorbatsevich, V., Sidyakin, S., et al: ‘Fast moving objects detection using ILBP background model’, Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci., 2014, 40, (3), pp. 347–350.
21. Tan, X., Triggs, B.: ‘Enhanced local texture feature sets for face recognition under difficult lighting conditions’, IEEE Trans. Image Process., 2010, 19, (6), pp. 1635–1650.
22. Liao, S., Zhao, G., Kellokumpu, V., et al: ‘Modeling pixel process with scale invariant local patterns for background subtraction in complex scenes’. 2010 IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2010, pp. 1301–1306.
23. Bilodeau, G.A., Jodoin, J.P., Saunier, N.: ‘Change detection in feature space using local binary similarity patterns’. 2013 Int. Conf. on Computer and Robot Vision (CRV), 2013, pp. 106–112.
24. St-Charles, P.L., Bilodeau, G.A., Bergevin, R.: ‘SuBSENSE: a universal change detection method with local adaptive sensitivity’, IEEE Trans. Image Process., 2015, 24, (1), pp. 359–373.
25. Silva, C., Bouwmans, T., Frélicot, C.: ‘An eXtended center-symmetric local binary pattern for background modeling and subtraction in videos’. Int. Joint Conf. on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISAPP 2015), Berlin, Germany, March 2015.
26. Haque, M., Murshed, M.: ‘Perception-inspired background subtraction’, IEEE Trans. Circuits Syst. Video Technol., 2013, 23, (12), pp. 2127–2140.
27. Wang, H., Suter, D.: ‘A consensus-based method for tracking: modelling background scenario and foreground appearance’, Pattern Recognit., 2007, 40, (3), pp. 1091–1105.
28. Barnich, O., Van Droogenbroeck, M.: ‘ViBe: a universal background subtraction algorithm for video sequences’, IEEE Trans. Image Process., 2011, 20, (6), pp. 1709–1724.
29. Lin, L., Xu, Y., Liang, X., et al: ‘Complex background subtraction by pursuing dynamic spatio-temporal models’, IEEE Trans. Image Process., 2014, 23, (7), pp. 3191–3202.
30. Shi, Y.Q., Sun, H.: ‘Image and video compression for multimedia engineering: fundamentals, algorithms, and standards’ (CRC Press, Boca Raton, FL, USA, 1999).
31. Han, G., Wang, J., Cai, X.: ‘Improved visual background extractor using an adaptive distance threshold’, J. Electron. Imaging, 2014, 23, (6), pp. 1–12.
32. Hofmann, M., Tiefenbacher, P., Rigoll, G.: ‘Background segmentation with feedback: the pixel-based adaptive segmenter’. 2012 IEEE Computer Society Conf. on Computer Vision and Pattern Recognition Workshops (CVPRW), 2012, pp. 38–43.
33. Yin, B., Zhang, J., Wang, Z.: ‘Background segmentation of dynamic scenes based on dual model’, IET Comput. Vis., 2014, 8, (6), pp. 545–555.
34. Wang, Y., Jodoin, P.M., Porikli, F., et al: ‘CDnet 2014: an expanded change detection benchmark dataset’. 2014 IEEE Conf. on Computer Vision and Pattern Recognition Workshops (CVPRW), 2014, pp. 393–400.
