Generating 3D interactive behaviours
- Author(s): Y. Zheng; Y. Hicks; D. Marshall; J. Chambers
- DOI: 10.1049/cp:20061944
- Source: 3rd European Conference on Visual Media Production (CVMP 2006). Part of the 2nd Multimedia Conference 2006
- Conference: 3rd European Conference on Visual Media Production (CVMP 2006). Part of the 2nd Multimedia Conference 2006
- DOI: 10.1049/cp:20061944
- ISBN: 0 86341 729 9
- Location: London, UK
- Conference date: 29-30 Nov. 2006
- Format: PDF
In this paper, we present a novel model-based approach for generating a variety of complex interactive behaviours. We train a dual hidden Markov model (HMM) on 3D motion capture (MoCap) data representing a number of interactions between two people. We then track the 3D motion of a person in ordinary 2D video. Finally, using the dual HMM and the Viterbi algorithm, we generate a moving "virtual friend" that reacts to the tracked person's motion.
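The paper's dual-HMM formulation is not reproduced on this page, but the decoding step the abstract mentions can be sketched as a standard Viterbi decode: treat the tracked person's quantised motion as the observation sequence and recover the most likely hidden-state (pose) sequence for the virtual friend. The model parameters below are illustrative placeholders, not values from the paper.

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely hidden-state path (virtual friend's pose clusters)
    given an observed symbol sequence (tracked person's quantised motion).

    pi : (S,)  initial state probabilities
    A  : (S,S) transition probabilities, A[i, j] = P(state j | state i)
    B  : (S,O) emission probabilities,  B[s, o] = P(symbol o | state s)
    """
    eps = 1e-12                                      # avoid log(0)
    S, T = len(pi), len(obs)
    delta = np.log(pi + eps) + np.log(B[:, obs[0]] + eps)
    psi = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + np.log(A + eps)    # scores[prev, cur]
        psi[t] = np.argmax(scores, axis=0)           # best predecessor per state
        delta = scores[psi[t], np.arange(S)] + np.log(B[:, obs[t]] + eps)
    path = np.empty(T, dtype=int)
    path[-1] = np.argmax(delta)
    for t in range(T - 2, -1, -1):                   # backtrack
        path[t] = psi[t + 1, path[t + 1]]
    return path

# Toy model (made-up numbers, not from the paper): two friend poses and
# two observed motion symbols; the friend tends to alternate poses and
# to "mirror" the observed symbol.
pi = np.array([1.0, 0.0])
A = np.array([[0.1, 0.9],
              [0.9, 0.1]])
B = np.array([[0.9, 0.1],
              [0.1, 0.9]])
reaction = viterbi(np.array([0, 1, 0, 1]), pi, A, B)
print(reaction.tolist())  # [0, 1, 0, 1]
```

In the paper's setting, each decoded state would index a pose (or pose cluster) of the virtual friend, which is then rendered as its animation; here the states are just integers.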
Inspec keywords: image motion analysis; hidden Markov models; virtual reality
Subjects: Computer vision and image processing techniques; Markov processes; Virtual reality; Optical, image and video signal processing