On-line vehicle detection at nighttime-based tail-light pairing with saliency detection in the multi-lane intersection

This study proposes a nighttime vehicle detection method for multi-lane intersections, based on saliency detection, for traffic surveillance systems. First, a frame-difference method detects moving objects, and the rear lights of vehicles are extracted using a saliency map together with colour information. Second, vehicles are detected by pairing the lamps; this stage rechecks candidate tail-lamp pairs using prior knowledge, eliminates redundant pairs formed on the same vehicle, and removes pairs that span two lanes. Furthermore, to detect vehicles with only a single valid tail-lamp, a verification approach based on a virtual tail-lamp is investigated. Finally, comparison with other detection methods shows the proposed method to be more reliable and faster for nighttime vehicle detection, satisfying the real-time requirements of a vehicle detection system with good performance.
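The pipeline described above can be sketched roughly as follows. This is a minimal, hypothetical illustration using NumPy only: the thresholds, the simple red-dominance colour cue standing in for the saliency map, and the pairing tolerances (`y_tol`, `min_dx`, `max_dx`) are assumptions for demonstration, not the authors' actual parameters or implementation.

```python
import numpy as np

def moving_mask(prev_frame, curr_frame, thresh=25):
    """Frame-difference step: flag pixels whose intensity changed by more than thresh.

    Both frames are H x W x 3 uint8 arrays; cast to int16 to avoid uint8 wraparound.
    """
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff.max(axis=2) > thresh  # a change in any channel counts

def taillight_mask(frame, moving, red_min=180, dominance=60):
    """Crude colour cue (assumed, in place of the paper's saliency map):
    keep bright, red-dominant pixels that also lie inside the moving region."""
    r = frame[..., 0].astype(np.int16)
    g = frame[..., 1].astype(np.int16)
    b = frame[..., 2].astype(np.int16)
    red = (r > red_min) & (r - g > dominance) & (r - b > dominance)
    return red & moving

def pair_lamps(centroids, y_tol=10, min_dx=20, max_dx=120):
    """Greedy pairing of lamp-blob centroids (row, col): two lamps on roughly
    the same image row, separated by a plausible vehicle width, form a pair.
    The cross-lane and same-vehicle rechecks from the paper are omitted here."""
    pairs, used = [], set()
    for i in range(len(centroids)):
        for j in range(i + 1, len(centroids)):
            if i in used or j in used:
                continue
            (yi, xi), (yj, xj) = centroids[i], centroids[j]
            if abs(yi - yj) <= y_tol and min_dx <= abs(xi - xj) <= max_dx:
                pairs.append((i, j))
                used.update((i, j))
    return pairs
```

In a full system, connected components would be extracted from `taillight_mask` to obtain the centroids fed to `pair_lamps`, and a virtual tail-lamp would be hypothesised for any unpaired blob before a final verification step.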


    1. 1)
      • 1. Cortes, C., Vapnik, V.: ‘Support-vector networks’, Mach. Learn., 1995, 20, (3), pp. 273297.
    2. 2)
      • 2. Viola, P., Jones, M.: ‘Rapid object detection using a boosted cascade of simple features’. IEEE Conf. on Computer Vision and Pattern Recognition, Washington, DC, USA, December 2001, pp. 511518.
    3. 3)
      • 3. Dalal, N., Triggs, B.: ‘Histograms of oriented gradients for human detection’. 2005 IEEE Conf. on Computer Vision and Pattern Recognition, San Diego, CA, USA, June 2005, pp. 886893.
    4. 4)
      • 4. Ahonen, T., Hadid, A., Pietikainen, M.: ‘Face description with local binary patterns: application to face recognition’, IEEE Trans. Pattern Anal. Mach. Intell., 2006, 28, (12), pp. 20372041.
    5. 5)
      • 5. Papageorgiou, C.P., Oren, M., Poggio, T.: ‘A general framework for object detection’. 1998 Int. Conf. on Computer Vision, Bombay, India, July 1998, pp. 555562.
    6. 6)
      • 6. Felzenszwalb, P., Mcallester, D., Ramanan, D.: ‘A discriminatively trained, multiscale, deformable part model’. IEEE Conf. on Computer Vision and Pattern Recognition, Anchorage, AL, USA, 2008, pp. 18.
    7. 7)
      • 7. Dollár, P., Appel, R., Belongie, S., et al: ‘Fast feature pyramids for object detection’, IEEE Trans. Pattern Anal. Mach. Intell., 2014, 36, (8), pp. 15321545.
    8. 8)
      • 8. Krizhevsky, A., Sutskever, I., Hinton, G.E.: ‘Imagenet classification with deep convolutional neural networks’. 2012 Int. Conf. on Neural Information Processing Systems, Lake Tahoe, NV, USA, June 2012, pp. 10971105.
    9. 9)
      • 9. Ren, S., He, K., Girshic, R., et al: ‘Faster R-CNN: towards real-time object detection with region proposal networks’, IEEE Trans. Pattern Anal. Mach. Intell., 2017, 39, (6), pp. 11371149.
    10. 10)
      • 10. Liu, W., Anguelov, D., Erhan, D., et al: ‘SSD: single shot MultiBox detector’. 2016 European Conf. on Computer Vision, Amsterdam, The Netherlands, October 2016, pp. 2137.
    11. 11)
      • 11. Redmon, J., Divvala, S., Girshick, R., et al: ‘You only look once: unified, real-time object detection’. 2016 IEEE Conf. on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, June 2016, pp. 779788.
    12. 12)
      • 12. Sangnoree, A., Chamnongthai, K.: ‘Thermal-image processing and statistical analysis for vehicle category in nighttime traffic’, J. Vis. Commun. Image Represent., 2017, 48, pp. 88109.
    13. 13)
      • 13. Kim, S.G., Kim, J.E., Kang, Y., et al: ‘Detection and tracking of overtaking vehicle in blind spot area at night time’. IEEE Int. Conf. on Consumer Electronics. (ICCE), Las Vegas, NV, USA, 2017, pp. 4748.
    14. 14)
      • 14. Chen, Y.L., Chiang, C.Y.: ‘Embedded on-road nighttime vehicle detection and tracking system for driver assistance’. IEEE Int. Conf. on Systems Man and Cybernetics, Istanbul, Turkey, 2010, pp. 15551562.
    15. 15)
      • 15. Huang, D.Y., Chen, C.H., Chen, T.Y., et al: ‘Vehicle detection and inter-vehicle distance estimation using singlelens video camera on urban/suburb roads’, J. Vis. Commun. Image Represent., 2017, 46, pp. 250259.
    16. 16)
      • 16. Kuang, H., Zhang, X., Li, Y.J., et al: ‘Nighttime vehicle detection based on bioinspired image enhancement and weighted score-level feature fusion’, IEEE Trans. Intell. Transp. Syst., 2017, 18, pp. 927936.
    17. 17)
      • 17. Wang, X., Tang, J., Niu, J., et al: ‘Vision-based two-step brake detection method for vehicle collision avoidance’, Neurocomputing, 2016, 173, pp. 450461.
    18. 18)
      • 18. Satzoda, R.K., Trivedi, M.M.: ‘Looking at vehicles in the night: detection and dynamics of rear lights’, IEEE Trans. Intell. Transp. Syst., 2016, 99, pp. 111.
    19. 19)
      • 19. Chen, D.Y., Lin, Y.H., Peng, Y.J.: ‘Nighttime brake-light detection by Nakagami imaging’, IEEE Trans. Intell. Transp. Syst., 2012, 13, (4), pp. 16271637.
    20. 20)
      • 20. Chen, Y.L., Wu, B.F., Huang, H.Y., et al: ‘A real-time vision system for nighttime vehicle detection and traffic surveillance’, IEEE Trans. Ind. Electron., 2010, 58, (5), pp. 20302044.
    21. 21)
      • 21. Zhang, H., Zhao, Z., Wang, C.: ‘Traffic flow detection based on the rear-lamp and virtual coil for nighttime conditions’. IEEE Int. Conf. on Signal and Image Processing, Portland, OR, USA, 2017, pp. 524528.
    22. 22)
      • 22. Salvi, G.: ‘An automated nighttime vehicle counting and detection system for traffic surveillance’. Int. Conf. on Computational Science and Computational Intelligence, Singapore, Singapore, 2014, pp. 131136.
    23. 23)
      • 23. Deng, Z., Sun, H., Zhou, S., et al: ‘Toward fast and accurate vehicle detection in aerial images using coupled region-based convolutional neural networks’, IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens., 2017, 10, (8), pp. 36523664.
    24. 24)
      • 24. Wang, L., Lu, Y., Wang, H., et al: ‘Evolving boxes for fast vehicle detection’. IEEE Int. Conf. on Multimedia and Expo, Cairns, Australia, July 2017, pp. 11351140.
    25. 25)
      • 25. Rujikietgumjorn, S., Watcharapinchai, N.: ‘Vehicle detection with sub-class training using R-CNN for the UA-DETRAC benchmark’. IEEE Int. Conf. on Advanced Video and Signal Based Surveillance, Hong Kong, China, October 2017, pp. 15.
    26. 26)
      • 26. Liu, Y., Yao, H., Gao, W., et al: ‘Nonparametric background generation’, J. Vis. Commun. Image Represent., 2007, 18, (3), pp. 253263.
    27. 27)
      • 27. Barnich, O., Van Droogenbroeck, M.: ‘Vibe: a universal background subtraction algorithm for video sequences’, IEEE Trans. Image Process., 2011, 20, (6), pp. 17091724.
    28. 28)
      • 28. Zhan, C., Duan, X., Xu, S., et al: ‘An improved moving object detection algorithm based on frame difference and edge detection’. 2007 Int. Conf. on Image and Graphics, Lecce, Italy, August 2007, pp. 519523.
    29. 29)
      • 29. Zhai, Y., Shah, M.: ‘Visual attention detection in video sequences using spatiotemporal cues’. ACM Int. Conf. on Multimedia, Chengdu, China, 2006, vol. 177, pp. 815824.
    30. 30)
      • 30. Achanta, R., Hemami, S., Estrada, F., et al: ‘Frequency-tuned salient region detection’. Computer Vision and Pattern Recognition, Santa Barbara, CA, USA, 2009, pp. 15971604.
    31. 31)
      • 31. Cheng, M.M., Mitra, N.J., Huang, X., et al: ‘Global contrast based salient region detection’, IEEE Trans. Pattern Anal. Mach. Intell., 2015, 37, (3), pp. 569582.
    32. 32)
      • 32. Hou, X., Zhang, L.: ‘Saliency detection: a spectral residual approach’. IEEE Conf. on Computer Vision and Pattern Recognition, Miami, FL, USA, 2007, pp. 18.
    33. 33)
      • 33. Shi, K., Wang, K., Lu, J., et al: ‘PISA: pixelwise image saliency by aggregating complementary appearance contrast measures with spatial priors’. IEEE Conf. on Computer Vision and Pattern Recognition, Minneapolis, MN, USA, 2013, vol. 24, no. 10, pp. 21152122.
