Multi-feature consultation model for human action recognition in depth video sequence

Human action recognition in depth video sequences is an important research direction in computer vision. Herein, considering the temporal and spatial characteristics of depth video sequences, the authors propose a framework in which several action sequence features are combined in a consultation model to solve the classification problem in depth video sequences. Exploiting the characteristics of 3D human action space, several kinds of action sequence feature data are extracted and projected onto the three coordinate planes; the resulting fused features are used to train the consultation model, which is then validated on held-out data. The authors achieve good results on two publicly available datasets in comparison with other methods. Experimental results demonstrate that the model performs well against existing recognition methods.

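To make the pipeline described in the abstract concrete, the following is a minimal sketch of the projection-and-fusion step, under stated assumptions: each action sample is a 3D skeleton sequence of shape (frames, joints, 3), the per-plane feature is a simple accumulated inter-frame motion (a stand-in for the paper's feature extraction), and an SVM stands in for the consultation model. All function names here (plane_motion_feature, fused_features, train) are hypothetical and not from the paper.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical sketch: project a 3D action sequence onto the three
# coordinate planes (xy, yz, xz) and fuse per-plane motion features.
# `sequence` is assumed to be an array of shape (frames, joints, 3);
# the paper's actual features and consultation model differ.

PLANES = {"xy": (0, 1), "yz": (1, 2), "xz": (0, 2)}

def plane_motion_feature(sequence: np.ndarray, axes: tuple) -> np.ndarray:
    """Accumulate absolute inter-frame motion of one 2D projection."""
    proj = sequence[..., list(axes)]       # (frames, joints, 2)
    diffs = np.abs(np.diff(proj, axis=0))  # motion between consecutive frames
    return diffs.sum(axis=0).ravel()       # flattened to (joints * 2,)

def fused_features(sequence: np.ndarray) -> np.ndarray:
    """Concatenate the features from all three coordinate planes."""
    return np.concatenate(
        [plane_motion_feature(sequence, ax) for ax in PLANES.values()]
    )

def train(sequences: list, labels: list) -> SVC:
    """Train a simple classifier on the fused features (illustrative only)."""
    X = np.stack([fused_features(s) for s in sequences])
    clf = SVC(kernel="rbf")
    clf.fit(X, labels)
    return clf
```

The key design point the sketch illustrates is that each orthogonal projection yields a feature vector of the same layout, so the three planes can be fused by simple concatenation before any classifier is trained.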