Towards automatic performance-driven animation between multiple types of facial model


The authors describe a method for re-mapping animation parameters between multiple types of facial model for performance-driven animation. A facial performance is analysed into a set of facial action parameter trajectories using a modified appearance model whose modes of variation encode specific, pre-definable facial actions. These parameters can then drive other modified appearance models or 3D morph-target-based facial models, so the animation parameters extracted from a video performance may be re-used across multiple types of facial model. The authors demonstrate the effectiveness of the approach by measuring how reliably it extracts action parameters from performances and by showing frames from example animations, and they demonstrate its potential use in fully automatic performance-driven animation applications.
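The pipeline the abstract describes, projecting a tracked face frame onto appearance-model modes that each encode one facial action, then re-using the resulting parameter vector to blend morph targets on a different model, can be sketched as follows. This is a minimal illustration with randomly generated stand-in data: the mode matrix, morph targets, and function names are assumptions for demonstration, not the authors' actual trained models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Source "modified appearance model": each column of `modes` is a mode of
# variation encoding one pre-defined facial action (e.g. jaw open, brow raise).
n_pixels, n_actions = 100, 3
mean_face = rng.normal(size=n_pixels)
modes, _ = np.linalg.qr(rng.normal(size=(n_pixels, n_actions)))  # orthonormal columns

def extract_action_params(frame):
    """Project a tracked face frame onto the action modes."""
    return modes.T @ (frame - mean_face)

# Target 3D morph-target model: one morph target per facial action, so the
# extracted parameters can be re-mapped directly onto blend weights.
n_vertices = 30
neutral = rng.normal(size=(n_vertices, 3))
morph_targets = rng.normal(size=(n_actions, n_vertices, 3))

def animate_target(params):
    """Blend morph-target offsets using the re-mapped action parameters."""
    offsets = np.tensordot(params, morph_targets - neutral, axes=1)
    return neutral + offsets

# A synthetic "performance" frame driven by known action weights:
true_params = np.array([0.8, -0.2, 0.5])
frame = mean_face + modes @ true_params

p = extract_action_params(frame)   # recovers true_params (modes are orthonormal)
mesh = animate_target(p)           # (n_vertices, 3) animated mesh for this frame
```

In the paper's setting the modes would come from training the appearance model on performances of the chosen facial actions rather than from random data, but the two-step structure, analysis into action parameters followed by synthesis on a different model, is the same.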
