Human action recognition based on tensor shape descriptor

Human action recognition is an important task in computer vision. This study presents an efficient framework for recognising actions from a 3D skeleton kinematic joint model with low computational cost for practical use. First, a tensor shape descriptor (TSD) is proposed that exploits the spatial independence of body joints, avoids the difficult explicit motion estimation required in traditional methods, and preserves the spatial information of each frame. The resulting TSD is therefore a complete and view-invariant descriptor. Second, a novel tensor dynamic time warping (TDTW) method is proposed to measure the joint-to-joint similarity of 3D skeletal body joints locally in the temporal extent; it is implemented by extending DTW from one-dimensional sequences to two multiway data arrays (tensors). Then, a multi-linear projection maps the TSD to a low-dimensional tensor subspace, which is classified by a nearest-neighbour classifier. Experimental results on a public action data set (MSR-Action3D) and a motion capture data set (CMU_Mocap) show that the proposed method achieves comparable or better recognition accuracy than state-of-the-art approaches.
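As a rough illustration of the matching stage described above, the sketch below implements a DTW over sequences of per-frame joint matrices, using the Frobenius norm as the frame-to-frame cost, followed by nearest-neighbour labelling. This is a minimal sketch under stated assumptions, not the paper's TDTW: the cost function, array shapes, and all names (tensor_dtw, frame_cost, nearest_neighbour_label) are illustrative, and the multi-linear projection step that would reduce the descriptor beforehand is omitted.

```python
# Minimal sketch, assuming the per-frame cost is the Frobenius norm between
# two joint-coordinate matrices of shape (J joints, 3 coords). This is an
# illustrative reading of "extending DTW to two multiway data arrays", not
# the paper's exact TDTW formulation.
import numpy as np

def frame_cost(A, B):
    """Frobenius-norm distance between two (J, 3) joint matrices."""
    return np.linalg.norm(A - B)

def tensor_dtw(X, Y):
    """DTW alignment cost between two skeleton sequences.

    X: array of shape (Tx, J, 3); Y: array of shape (Ty, J, 3).
    Returns the accumulated cost along the optimal warping path.
    """
    Tx, Ty = len(X), len(Y)
    D = np.full((Tx + 1, Ty + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, Tx + 1):
        for j in range(1, Ty + 1):
            c = frame_cost(X[i - 1], Y[j - 1])
            D[i, j] = c + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[Tx, Ty]

def nearest_neighbour_label(query, gallery, labels):
    """Label a query sequence by its closest gallery sequence under tensor_dtw."""
    dists = [tensor_dtw(query, g) for g in gallery]
    return labels[int(np.argmin(dists))]

if __name__ == "__main__":
    # Toy usage: five random training sequences and a slightly perturbed copy
    # of the third one as the query; the expected output is "jump".
    rng = np.random.default_rng(0)
    gallery = [rng.normal(size=(40, 20, 3)) for _ in range(5)]
    labels = ["wave", "kick", "jump", "punch", "bend"]
    query = gallery[2] + 0.01 * rng.normal(size=(40, 20, 3))
    print(nearest_neighbour_label(query, gallery, labels))
```

In a fuller pipeline, an MPCA-style mode-wise projection would compress each (J, 3) slice before the distance computation; the nearest-neighbour search itself is unchanged by that step.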
