Review of constraints on vision-based gesture recognition for human–computer interaction

The ability of computers to recognise hand gestures visually is essential for progress in human–computer interaction. Gesture recognition has applications ranging from sign language to medical assistance to virtual reality. However, gesture recognition is extremely challenging, not only because of its diverse contexts, multiple interpretations, and spatio-temporal variations, but also because of the complex non-rigid properties of the hand. This study surveys the major constraints on vision-based gesture recognition arising at each stage of a typical pipeline: detection and pre-processing, representation and feature extraction, and recognition. Current challenges are explored in detail.
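As a concrete illustration of the first pipeline stage (detection and pre-processing), a common baseline approach is skin-colour segmentation in the YCbCr colour space. The sketch below is illustrative only and is not taken from the surveyed work; the Cb/Cr threshold ranges are widely used rule-of-thumb values, not values the survey endorses:

```python
import numpy as np

def rgb_to_ycbcr(img):
    """Convert an RGB uint8 image (H, W, 3) to YCbCr (ITU-R BT.601 coefficients)."""
    r = img[..., 0].astype(float)
    g = img[..., 1].astype(float)
    b = img[..., 2].astype(float)
    y  =        0.299    * r + 0.587    * g + 0.114    * b
    cb = 128. - 0.168736 * r - 0.331264 * g + 0.5      * b
    cr = 128. + 0.5      * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def skin_mask(img, cb_range=(77, 127), cr_range=(133, 173)):
    """Binary mask of likely skin pixels; threshold ranges are illustrative."""
    ycbcr = rgb_to_ycbcr(img)
    cb, cr = ycbcr[..., 1], ycbcr[..., 2]
    return ((cb >= cb_range[0]) & (cb <= cb_range[1]) &
            (cr >= cr_range[0]) & (cr <= cr_range[1]))

# Toy example: one skin-toned pixel and one saturated blue pixel.
img = np.array([[[200, 150, 130], [0, 0, 255]]], dtype=np.uint8)
mask = skin_mask(img)
```

Fixed thresholds like these are exactly where the constraints the survey discusses bite: illumination changes, skin-tone diversity, and skin-coloured backgrounds all break such static rules, motivating the adaptive and fusion-based detection methods reviewed in the paper.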
