Survey of connected automated vehicle perception mode: from autonomy to interaction

IET Intelligent Transport Systems

For enhancing the intelligence of urban traffic, the connected automated vehicle (CAV) is recognised as a leading technology for the near future. With on-board sensors and communication devices, the status of each vehicle can be obtained to better coordinate traffic. However, the limited perception range of an individual vehicle cannot achieve the best efficiency for global urban traffic. In this study, a three-step evolution strategy for the CAV perception mode is proposed: from autonomous perception, to interactive perception, to networked perception. Key technologies in these three steps are studied. For autonomous perception, vehicle positioning and dynamic target tracking approaches are proposed. For interactive perception, a reliable multi-mode information exchange mechanism is studied. Finally, a new traffic big-data storage and advanced analytics solution for networked perception is introduced. Experiments on the above key issues are designed, implemented, and verified on a hardware platform, an open-source dataset, and a cloud platform, respectively. The results show that the positioning distance root-mean-square error reaches 3.9 m, the object tracking speed reaches 30 fps, and the average communication packet loss rate is 2%. As the testing and simulation results show, the proposed approaches meet the technical requirements and support the evolution of the CAV environment perception mode.
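The 3.9 m positioning figure is a distance root-mean-square (DRMS) value, a standard horizontal-accuracy metric. As a minimal illustrative sketch (the function name and the error samples below are hypothetical, not from the paper), DRMS is the square root of the mean squared horizontal distance between each position fix and ground truth:

```python
import math

def distance_rms(errors):
    """Distance root mean square (DRMS) of horizontal positioning errors.

    `errors` is a sequence of (east_error_m, north_error_m) tuples: the
    horizontal offset of each position fix from ground truth, in metres.
    """
    if not errors:
        raise ValueError("need at least one error sample")
    squared = [e ** 2 + n ** 2 for e, n in errors]
    return math.sqrt(sum(squared) / len(squared))

# Hypothetical error samples in metres, for illustration only:
samples = [(1.2, -3.0), (2.5, 2.1), (-4.0, 1.0), (0.5, -2.2)]
print(f"DRMS: {distance_rms(samples):.2f} m")
```

Under a zero-mean Gaussian error model, roughly 63 to 68% of fixes fall within one DRMS of the true position, which is why it is commonly reported alongside CEP for GNSS-based vehicle positioning.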

http://iet.metastore.ingenta.com/content/journals/10.1049/iet-its.2018.5239