Towards automatic performance-driven animation between multiple types of facial model


IET Computer Vision


The authors describe a method for re-mapping animation parameters between multiple types of facial model for performance-driven animation. A facial performance is analysed as a set of facial action parameter trajectories using a modified appearance model whose modes of variation encode pre-defined facial actions. These parameters can then drive other modified appearance models or 3D morph-target-based facial models, so the animation parameters extracted from a video performance can be re-used to animate multiple types of facial model. The authors demonstrate the effectiveness of the proposed approach by measuring how reliably it extracts action parameters from performances and by showing frames from example animations, and they also demonstrate its potential use in fully automatic performance-driven animation applications.
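The pipeline described above — analysing a performance into per-action parameter trajectories, then re-using those trajectories to drive a different model — can be sketched as follows. This is a minimal illustration only, assuming a linear appearance model and a morph-target model with one control per facial action; all function names and the simple linear projection are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def analyse_performance(frames, action_modes, mean_appearance):
    """Project each performance frame onto the appearance model's modes
    of variation, yielding one parameter trajectory per facial action.
    frames: (n_frames, n_features); action_modes: (n_features, n_actions).
    Returns (n_frames, n_actions) action parameter trajectories."""
    centred = np.asarray(frames, dtype=float) - mean_appearance
    return centred @ action_modes

def animate_morph_targets(params, neutral, morph_targets):
    """Re-use the extracted trajectories as morph-target (blend-shape)
    weights on a different facial model:
    vertices(t) = neutral + sum_i w_i(t) * delta_i.
    params: (n_frames, n_actions); morph_targets: (n_actions, n_coords)."""
    return neutral + np.asarray(params, dtype=float) @ morph_targets
```

Because the action parameters are defined by facial action rather than by any one model's geometry, the same trajectories can equally be applied to another modified appearance model or, as here, to a 3D morph-target rig.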


