Mouse face tracking using convolutional neural networks

Facial expressions of laboratory mice provide important information for pain assessment when exploring the effects of drugs under development for medical purposes. Automatic pain assessment requires a mouse face tracker to extract face regions from videos recorded during pain experiments. However, because the body and face of a mouse are the same colour and mice move quickly, tracking the face is a challenging task. In recent years, deep learning, with its ability to learn from data, has provided effective solutions for a wide variety of problems. In particular, convolutional neural networks (CNNs) are very successful in computer vision tasks. In this study, a CNN-based tracker network called MFTN is proposed for mouse face tracking. CNNs are good at extracting hierarchical features from the training dataset: high-level features carry semantic information, while low-level features retain high spatial resolution. In the proposed MFTN architecture, target information is extracted from a combination of low- and high-level features by a sub-network, the Feature Adaptation Network (FAN), to achieve a robust and accurate tracker. Among the MFTN versions, the MFTN/c tracker achieved an accuracy of 0.8, a robustness of 0.67, and a throughput of 213 fps on a workstation with a GPU.
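The core idea described above, combining spatially precise low-level features with semantically rich high-level features before prediction, can be sketched as follows. This is a minimal illustrative sketch, not the authors' MFTN/FAN implementation: the shapes, the nearest-neighbour upsampling, the 1x1-convolution mixing, and all weights are assumptions chosen only to show the fusion pattern.

```python
import numpy as np

def upsample2x(x):
    """Nearest-neighbour 2x upsampling of a (C, H, W) feature map."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

def conv1x1(x, w):
    """1x1 convolution as a channel mix: (C_in, H, W) with weight
    (C_out, C_in) -> (C_out, H, W)."""
    c_in, h, wd = x.shape
    return (w @ x.reshape(c_in, -1)).reshape(w.shape[0], h, wd)

def fuse_features(low, high, w):
    """Illustrative feature fusion: bring the coarse high-level map up
    to the low-level map's resolution, concatenate along channels, and
    mix with a learned 1x1 convolution (weights here are random)."""
    high_up = upsample2x(high)                      # match spatial size
    stacked = np.concatenate([low, high_up], axis=0)
    return conv1x1(stacked, w)

rng = np.random.default_rng(0)
low = rng.standard_normal((64, 32, 32))    # low-level: fine spatial detail
high = rng.standard_normal((256, 16, 16))  # high-level: semantic but coarse
w = rng.standard_normal((128, 64 + 256)) * 0.01
fused = fuse_features(low, high, w)
print(fused.shape)  # (128, 32, 32)
```

The fused map keeps the low-level map's spatial resolution while carrying high-level semantics in its channels, which is what lets a tracker built on such features localise the target both robustly and precisely.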
