Optimisation of both classifier and fusion based feature set for static American sign language recognition

Sign language recognition has become a popular research field in human–computer interaction. Attention to hand signal analysis facilitates easy communication between humans and computers for information sharing. The major focus of a gesture recognition system is to identify and recognise various gestures by a computer. This study introduces optimisation of both the classifier and the feature set for static American sign language recognition. Initially, the hand is segmented from the rest of the image through effective edge and skin colour detection. Thereafter, robust features are obtained from the segmented hand image using the discrete cosine transform, Zernike moments, the scale-invariant feature transform, speeded-up robust features, the histogram of oriented gradients and binary object features. From these extracted features, an optimal feature set is selected by the social ski driver optimisation algorithm. A deep Elman recurrent neural network classifier is then introduced for recognition. Optimisation is performed on feature sets, derived by fusion of the features obtained from the above methods, based on precision, accuracy, F-measure and recall. Finally, the optimised feature set and the best classifier are used to recognise the hand gesture for classification. The performance of the proposed method is evaluated and compared with the existing literature.
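
To make the pipeline described above concrete, the following is a minimal illustrative sketch (not the authors' implementation), assuming OpenCV, NumPy and scikit-image are available. It segments the hand by skin colour and edges and builds a fused DCT + HOG feature vector; the Zernike, SIFT, SURF and binary object features, the social ski driver feature selection and the deep Elman recurrent neural network classifier are only indicated in comments. All thresholds, sizes and the input file name are assumed values.

```python
# Illustrative sketch only: hand segmentation by skin colour + edges,
# followed by a fused DCT + HOG feature vector. The remaining features,
# the feature-selection step and the classifier are indicated in comments.
import cv2
import numpy as np
from skimage.feature import hog


def segment_hand(bgr):
    """Rough hand mask from YCrCb skin colour, plus Canny edges to refine the contour."""
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    # Assumed skin-colour bounds in YCrCb (not taken from the paper).
    skin = cv2.inRange(ycrcb, np.array([0, 135, 85]), np.array([255, 180, 135]))
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)                      # assumed thresholds
    mask = cv2.morphologyEx(skin, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))
    return cv2.bitwise_and(gray, gray, mask=mask), edges


def fused_features(hand_gray, size=64, n_dct=16):
    """Concatenate low-frequency DCT coefficients with a HOG descriptor."""
    patch = cv2.resize(hand_gray, (size, size)).astype(np.float32)
    dct = cv2.dct(patch)[:n_dct, :n_dct].ravel()          # low-frequency block
    hog_vec = hog(patch, orientations=9, pixels_per_cell=(8, 8),
                  cells_per_block=(2, 2))
    # Zernike moments, pooled SIFT/SURF descriptors and binary object features
    # would be appended here before feature selection.
    return np.concatenate([dct, hog_vec])


img = cv2.imread("asl_sample.jpg")                        # hypothetical input image
hand, _ = segment_hand(img)
x = fused_features(hand)                                  # fused feature vector
# A binary selection mask chosen by the social ski driver optimiser would then
# pick a subset of x, which is fed to the deep Elman recurrent neural network.
```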

Inspec keywords: image segmentation; discrete cosine transforms; gesture recognition; Zernike polynomials; recurrent neural nets; feature extraction; image classification; image fusion; image colour analysis; optimisation

Other keywords: recognition purpose; human–computer interaction; hand gesture; histogram of oriented gradients; static American sign language recognition; Zernike moment; hand signal analysis; fusion based feature set; scale-invariant feature transform; optimal feature set; segmented hand image; gesture recognition system; optimised feature set; skin colour detection; social ski driver optimisation algorithm; binary object features; discrete cosine transform; Deep Elman recurrent neural network classifier

Subjects: Sensor fusion; Computer vision and image processing techniques; Neural computing techniques; Interpolation and function approximation (numerical analysis); Optimisation techniques; Integral transforms in numerical analysis; Image recognition

http://iet.metastore.ingenta.com/content/journals/10.1049/iet-ipr.2019.0195