
QUIS-CAMPI: an annotated multi-biometrics data feed from surveillance scenarios

The accuracy of biometric recognition in unconstrained scenarios has been a major concern for many researchers. Despite such efforts, no system can recognise human beings in a fully automated manner under totally unconstrained conditions, such as surveillance environments. Several sets of degraded data have been made available to the research community, but the performance reported by state-of-the-art algorithms on them is already saturated, suggesting that these sets do not faithfully reflect the conditions of such hard settings. To this end, the authors introduce the QUIS-CAMPI data feed, comprising samples automatically acquired by an outdoor visual surveillance system, with subjects on-the-move and at-a-distance (up to 50 m). They also supply a high-quality set of enrolment data. When compared to similar data sources, the major novelties of QUIS-CAMPI are: (i) biometric samples are acquired in a fully automatic way; (ii) it is an open dataset, i.e. the number of probe images and enrolled subjects grows on a daily basis; and (iii) it contains multi-biometric traits. The ensemble properties of QUIS-CAMPI ensure that the data span a representative set of covariate factors of real-world scenarios, making it a valuable tool for developing and benchmarking biometric recognition algorithms capable of working in unconstrained scenarios.
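The abstract does not specify an evaluation protocol, but as an illustration of how a dataset with high-quality enrolment templates and surveillance-grade probes is typically benchmarked, the minimal sketch below scores a closed-set rank-1 identification experiment. It assumes that some descriptor (face or another trait) has already been extracted as a fixed-length vector for every enrolment image and every probe; the function names are hypothetical and are not part of any tooling distributed with QUIS-CAMPI.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two 1-D feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def closed_set_rank1(gallery, probes):
    """Rank-1 identification accuracy.

    gallery: dict mapping subject_id -> enrolment feature vector
             (e.g. extracted from the high-quality enrolment images).
    probes:  iterable of (true_subject_id, feature_vector) pairs
             (e.g. extracted from the automatically acquired surveillance samples).
    """
    ids = list(gallery)
    hits, total = 0, 0
    for true_id, feat in probes:
        # Compare the probe against every enrolled template and take the best match.
        scores = [cosine_similarity(feat, gallery[i]) for i in ids]
        hits += int(ids[int(np.argmax(scores))] == true_id)
        total += 1
    return hits / total
```

Because the dataset is open and grows daily, any figure produced this way should state the snapshot on which it was computed (number of probe images and enrolled subjects).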
