Multi-feature consultation model for human action recognition in depth video sequence
