Enhanced hand part classification from a single depth image using random decision forests

Hand pose recognition has received increasing attention in the area of human–computer interaction. With the recent spread of low-cost three-dimensional (3D) cameras, research into understanding more natural gestures has grown. In this study, the authors present a method for hand part classification and joint estimation from a single depth image. They apply random decision forests (RDFs) for hand part classification: foreground pixels in the hand image are classified per pixel by the RDF, and hand joints are then estimated from the classified hand parts. They propose a robust feature-extraction method for per-pixel classification, which improves the accuracy of hand part classification. They also propose a tree-selection algorithm that uses a legacy trained RDF to classify unseen test data; selecting trees with the proposed method performs better than using all trees, as in the conventional approach. Depth images and label images synthesised from a 3D hand mesh model were used for training the forests and verifying the algorithm. The authors' experiments show that the enhanced algorithm outperforms the state-of-the-art method in accuracy.
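The per-pixel classification step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it uses scikit-learn's `RandomForestClassifier` in place of a custom RDF, a toy synthetic depth image (a near "finger" bar on a far background) in place of images rendered from a 3D hand mesh, and Shotton-style depth-difference features computed at random offsets scaled by the inverse depth. All function names, offsets, and parameters here are illustrative assumptions.

```python
# Sketch of per-pixel hand part classification with a random forest.
# Assumptions: depth-difference features d(x + u/d(x)) - d(x + v/d(x))
# at random offset pairs (u, v), a toy two-class depth image, and
# scikit-learn's forest standing in for the trained RDF.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def depth_difference_features(depth, pixels, offsets):
    """Per-pixel features: depth difference between two probe points,
    with probe offsets normalised by the depth at the centre pixel."""
    h, w = depth.shape
    feats = np.empty((len(pixels), len(offsets)), dtype=np.float32)
    for i, (y, x) in enumerate(pixels):
        d = depth[y, x]
        for j, (u, v) in enumerate(offsets):
            uy, ux = np.clip([y + int(u[0] / d), x + int(u[1] / d)], 0, [h - 1, w - 1])
            vy, vx = np.clip([y + int(v[0] / d), x + int(v[1] / d)], 0, [h - 1, w - 1])
            feats[i, j] = depth[uy, ux] - depth[vy, vx]
    return feats

# Toy synthetic depth image: a near vertical bar (depth 1.0, label 1)
# on a far background (depth 2.0, label 0). Real training data would
# be synthesised from a 3D hand mesh model, as in the paper.
depth = np.full((64, 64), 2.0)
depth[:, 28:36] = 1.0
labels = np.zeros((64, 64), dtype=int)
labels[:, 28:36] = 1

pixels = [(int(y), int(x)) for y, x in rng.integers(0, 64, size=(500, 2))]
offsets = [(rng.integers(-12, 13, 2), rng.integers(-12, 13, 2)) for _ in range(20)]

X = depth_difference_features(depth, pixels, offsets)
y = np.array([labels[p] for p in pixels])

forest = RandomForestClassifier(n_estimators=20, random_state=0).fit(X, y)
print(forest.score(X, y))
```

In a full pipeline, the per-pixel class probabilities from the forest would be aggregated over each predicted hand part to estimate joint positions, and the tree-selection step would evaluate individual trees on the input before voting.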
