Enhancing person re-identification by late fusion of low-, mid- and high-level features

Person re-identification is the process of matching people across different cameras. Research in this area typically focuses on developing strong feature descriptors or robust metric learning algorithms. While these two aspects are the most important steps for securing high performance, a less explored aspect is the late fusion of complementary features. To this end, this study proposes a late fusion scheme that, based on an experimental analysis, combines three systems which extract features and apply supervised learning at different abstraction levels. To analyse the behaviour of the proposed system, both rank aggregation and score-level fusion are applied. The proposed fusion scheme improves results on both small and large datasets. Experimental results on VIPeR show accuracies 5.43% higher than related systems, while results on PRID450S and CUHK01 improve on state-of-the-art results by 10.94% and 14.84%, respectively. Furthermore, a cross-dataset test shows an increased rank-1 accuracy of 28.26% when training on CUHK02 and testing on VIPeR. Finally, an analysis of the late fusion shows that rank aggregation performs better when the individual results are unequally distributed within the top-10, whereas score-level fusion performs better when two of the individual results lie within the top-5 and the third lies outside the top-10.
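
To illustrate the two late-fusion strategies compared in the abstract, the sketch below fuses the outputs of three hypothetical ranking systems (standing in for the low-, mid- and high-level systems) for a single probe image, once at score level and once by rank aggregation. The function names, the min-max normalisation, the equal weights and the mean-rank (Borda-style) aggregation are illustrative assumptions for this sketch, not the authors' exact formulation.

import numpy as np

def score_level_fusion(score_lists, weights=None):
    # Fuse per-system similarity scores for one probe against the whole gallery.
    # Each entry of score_lists is a 1-D array of scores (higher = more similar).
    # Scores are min-max normalised per system before the weighted sum (assumption).
    weights = weights or [1.0] * len(score_lists)
    fused = np.zeros(len(score_lists[0]), dtype=float)
    for scores, w in zip(score_lists, weights):
        s = np.asarray(scores, dtype=float)
        rng = s.max() - s.min()
        fused += w * ((s - s.min()) / rng if rng > 0 else np.zeros_like(s))
    return fused

def mean_rank_aggregation(score_lists):
    # Borda-style aggregation (illustrative): convert each system's scores to
    # ranks (1 = best match) and average the ranks per gallery item.
    ranks = []
    for scores in score_lists:
        order = np.argsort(-np.asarray(scores, dtype=float))  # best match first
        r = np.empty(len(scores), dtype=float)
        r[order] = np.arange(1, len(scores) + 1)
        ranks.append(r)
    return np.mean(ranks, axis=0)

# Toy example: three systems scoring the same 5-item gallery for one probe.
low = [0.2, 0.9, 0.4, 0.1, 0.3]
mid = [0.5, 0.7, 0.6, 0.2, 0.1]
high = [0.3, 0.8, 0.9, 0.4, 0.2]
print(score_level_fusion([low, mid, high]))     # higher fused score = better match
print(mean_rank_aggregation([low, mid, high]))  # lower mean rank = better match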
