Recognition of complex static hand gestures by using the wristband-based contour features

Recognition of complex static hand gestures is a challenging problem because hand gestures exhibit great diversity, owing to the many degrees of freedom of the human hand. The task becomes even harder when a gesture is performed with both hands. This study proposes a framework that recognises complex static hand gestures by using wristband-based contour features (WBCFs). The authors require the user to wear a pair of black wristbands on both wrists so that the hand region(s) can be segmented accurately. The topmost and sharpest corner point of the wristband on a gesturing hand is detected first and treated as a landmark for extracting the WBCF of the hand gesture. A simple feature matching method is then proposed to obtain a recognition result. For cases where the hand region(s) cannot be segmented correctly, watershed segmentation and region merging techniques are adopted to improve hand region segmentation. Experimental results show that the system can recognise 29 Turkish fingerspelling sign hand gestures with an accuracy of 99.31% using only six training images per gesture.
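The abstract describes a landmark-anchored contour descriptor matched against a small set of templates. The paper's exact WBCF definition is not given here, so the sketch below is only a plausible analogue: it encodes a contour as scale-normalised distances from a landmark point (standing in for the wristband corner) and classifies a query by nearest-neighbour search over per-gesture template signatures. The function names `contour_signature` and `match_gesture` are illustrative, not from the paper.

```python
import numpy as np

def contour_signature(contour, landmark, n_samples=64):
    """Hypothetical WBCF-style descriptor: distances from the wristband
    landmark to contour points, resampled to a fixed length and
    max-normalised for scale invariance."""
    contour = np.asarray(contour, dtype=float)
    d = np.linalg.norm(contour - np.asarray(landmark, dtype=float), axis=1)
    # Resample so contours of different lengths are comparable.
    idx = np.linspace(0, len(d) - 1, n_samples)
    sig = np.interp(idx, np.arange(len(d)), d)
    return sig / (sig.max() + 1e-9)

def match_gesture(query, templates):
    """Simple feature matching: return the template label with the
    smallest Euclidean distance to the query signature."""
    best_label, best_dist = None, np.inf
    for label, sigs in templates.items():
        for t in sigs:
            dist = float(np.linalg.norm(query - t))
            if dist < best_dist:
                best_label, best_dist = label, dist
    return best_label, best_dist
```

With a handful of template signatures per gesture class (the paper uses six training images per gesture), a query contour is classified by computing its signature and taking the nearest template's label.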

http://iet.metastore.ingenta.com/content/journals/10.1049/iet-ipr.2016.1139