Advances in human action recognition: an updated survey

Research in human activity recognition (HAR) has seen tremendous growth and continues to receive attention from both the computer vision and the image processing communities. Given the large number of publications in this field, there have been, unsurprisingly, several review papers that categorise these techniques. Many recent works have begun to tackle more challenging problems, and the proposed techniques address more realistic, real-world scenarios. An updated survey that covers these methods is therefore timely. To simplify the categorisation, this study takes a two-layer hierarchical approach. At the top level, the categorisation follows the basic process flow of HAR, i.e. input data-type, features-type, descriptor-type, and classifier-type. At the second layer, each of these components is further subcategorised according to the diversity of the proposed methods. Finally, a remark on the growing popularity of deep learning approaches in this field is also given.
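The two-layer hierarchy described in the abstract can be pictured as a small tree: the first layer follows the HAR pipeline (input data → features → descriptor → classifier), and the second layer groups method families under each stage. Below is a minimal Python sketch of that structure; the subcategory names are illustrative assumptions drawn from the kinds of methods the survey covers (depth maps, silhouettes, depth motion maps, SVMs, and so on), not the survey's exact taxonomy, and classify_paper is a hypothetical helper for placing a paper into the hierarchy.

```python
from typing import Dict, List

# Layer 1: the basic HAR process flow. Layer 2: example method families
# per stage (illustrative only, not the survey's exact subcategories).
HAR_TAXONOMY: Dict[str, List[str]] = {
    "input data-type": ["RGB video", "depth maps", "3D skeleton", "inertial sensors"],
    "features-type": ["silhouettes", "space-time interest points", "dense trajectories"],
    "descriptor-type": ["histograms of 3D gradients", "depth motion maps", "bag-of-words"],
    "classifier-type": ["SVM", "hidden Markov models", "deep neural networks"],
}

def classify_paper(tags: List[str]) -> Dict[str, List[str]]:
    """Place a paper's method tags into the two-layer hierarchy."""
    placement: Dict[str, List[str]] = {}
    for stage, families in HAR_TAXONOMY.items():
        hits = [f for f in families if f in tags]
        if hits:
            placement[stage] = hits
    return placement

# Example: a depth-based method classified with an SVM lands under
# the input, descriptor, and classifier stages of the hierarchy.
print(classify_paper(["depth maps", "depth motion maps", "SVM"]))
```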

DOI: 10.1049/iet-ipr.2019.0350