Gesture synthesis from sign language notation using MPEG-4 humanoid animation parameters and inverse kinematics
- DOI: 10.1049/cp:20060637
- Conference: 2nd IET International Conference on Intelligent Environments (IE 06)
- ISBN: 0 86341 663 2
- Location: Athens, Greece
- Conference date: 5-6 July 2006
- Format: PDF
This paper presents a novel approach for generating VRML animation sequences from sign language notation, based on MPEG-4 face and body animation. Sign language notation in the well-known SignWriting system is provided as input and is initially converted to SWML (SignWriting Markup Language), an XML-based format recently developed for the storage, indexing and processing of SignWriting notation. Each basic sign, termed a signbox, is then converted to a sequence of MPEG-4 body animation parameters (BAPs) corresponding to the represented gesture. Inverse kinematics is also employed to synthesize complex animation sequences (e.g. contacts). In addition, if a sign contains facial expressions, these are converted to a sequence of MPEG-4 facial animation parameters (FAPs), with exact synchronization between facial and body movements guaranteed. These sequences, which can also be coded and/or reproduced by MPEG-4 BAP and FAP players, are then used to animate H-Anim-compliant VRML avatars, reproducing the exact gestures represented in the sign language notation. Envisaged applications include interactive information systems for persons with hearing disabilities (Web, e-mail, info-kiosks) and automatic translation of written texts into sign language (e.g. for TV newscasts). (10 pages)
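The paper itself contains no code, but the inverse-kinematics step it mentions (solving joint angles so that, e.g., a hand reaches a contact point on the body) can be illustrated with a minimal sketch. The two-link planar simplification and the function names below are illustrative assumptions, not the authors' implementation; the actual system would solve for the full MPEG-4 BAP joint chain of an H-Anim skeleton.

```python
import math

def two_link_ik(x, y, l1, l2):
    """Hypothetical helper: analytic IK for a planar two-link limb
    (e.g. upper arm of length l1 + forearm of length l2).

    Returns (shoulder, elbow) joint angles in radians placing the wrist
    at target (x, y), or None if the target is out of reach.
    """
    d2 = x * x + y * y
    d = math.sqrt(d2)
    if d > l1 + l2 or d < abs(l1 - l2):
        return None  # target outside the limb's reachable annulus
    # Elbow angle from the law of cosines (clamped against rounding error).
    cos_elbow = (d2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    elbow = math.acos(max(-1.0, min(1.0, cos_elbow)))
    # Shoulder angle: direction to the target, corrected for the elbow bend.
    shoulder = math.atan2(y, x) - math.atan2(
        l2 * math.sin(elbow), l1 + l2 * math.cos(elbow)
    )
    return shoulder, elbow

def forward(shoulder, elbow, l1, l2):
    """Forward kinematics, used to verify an IK solution."""
    wx = l1 * math.cos(shoulder) + l2 * math.cos(shoulder + elbow)
    wy = l1 * math.sin(shoulder) + l2 * math.sin(shoulder + elbow)
    return wx, wy
```

In the system described by the abstract, angles obtained this way would be quantized into BAP values for the corresponding arm joints of each animation frame; the closed-form solution above is only one of several standard IK techniques (iterative methods such as CCD are common for longer joint chains).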
Inspec keywords: human computer interaction; face recognition; image sequences; gesture recognition; video coding; XML; handicapped aids; computer animation; avatars
Subjects: Computer vision and image processing techniques; Image and video coding; User interfaces; Graphics techniques; Document processing and analysis techniques; Computer assistance for persons with handicaps; Video signal processing; Virtual reality