Distributed-computing-based multimodal fusion interface using VoiceXML and KSSL for wearable PC
A WPS (wearable personal station) and a VoiceXML-based robust multimodal fusion interface have been implemented; the interface fuses speech with gesture-based Korean sign language (KSSL) for improved multimodal language processing. In addition, improved fusion and fission rules are proposed that depend on the signal-plus-noise-to-noise ratio (SNNR) and a fuzzy value for handling simultaneous multimodality.