Vehicle detection in intelligent transport system under a hazy environment: a survey

Developing intelligent transportation systems has attracted considerable attention in recent years. With the growing number of vehicles on the road, most nations are adopting an intelligent transport system (ITS) to handle issues such as traffic flow density, queue length, average traffic speed, and the total number of vehicles passing a point in a given time interval. By capturing traffic images and videos through cameras, an ITS helps traffic control centres monitor and manage traffic. Efficient and reliable vehicle detection is a crucial step in an ITS. This study reviews techniques and applications used around the world for vehicle detection under various environmental conditions based on video processing systems. It also discusses the types of cameras used for vehicle detection and the classification of vehicles for traffic monitoring and control. Finally, it highlights the problems encountered during surveillance under extreme weather conditions.
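
For orientation, below is a minimal, illustrative sketch of the kind of video-based detection pipeline this survey covers: Gaussian-mixture background subtraction followed by simple blob filtering, one of the classical approaches reviewed here. It is not taken from any of the surveyed papers; the input file name, the area threshold, and the other parameter values are assumptions chosen only for the example, and heavy haze or rain would normally require a visibility-restoration step before this stage.

    import cv2

    # Illustrative sketch only: MOG2 background subtraction with blob
    # filtering. The video file and the thresholds below are assumed
    # values for demonstration, not settings from the surveyed work.
    cap = cv2.VideoCapture('traffic.mp4')          # assumed sample clip
    subtractor = cv2.createBackgroundSubtractorMOG2(
        history=500, varThreshold=16, detectShadows=True)

    while True:
        ok, frame = cap.read()
        if not ok:
            break

        # Foreground mask: pixels that deviate from the learned background.
        mask = subtractor.apply(frame)
        # Drop shadow pixels (marked as 127 by MOG2) and speckle noise.
        _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
        mask = cv2.morphologyEx(
            mask, cv2.MORPH_OPEN,
            cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3)))

        # Treat sufficiently large connected blobs as candidate vehicles.
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            if cv2.contourArea(c) > 800:           # assumed minimum blob area
                x, y, w, h = cv2.boundingRect(c)
                cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

        cv2.imshow('vehicle candidates', frame)
        if cv2.waitKey(30) & 0xFF == 27:           # Esc quits
            break

    cap.release()
    cv2.destroyAllWindows()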
