Biometric evidence evaluation: an empirical assessment of the effect of different training data

For an automatic comparison of a pair of biometric specimens, the biometric recognition system computes a similarity metric called a 'score'. In forensic evaluation, it is desirable to convert this score into a likelihood ratio; this process is referred to as calibration. The likelihood ratio is the probability of the score given that the prosecution hypothesis (which states that the pair of biometric specimens originate from the suspect) is true, divided by the probability of the score given that the defence hypothesis (which states that the pair of biometric specimens do not originate from the suspect) is true. In practice, a set of scores (called training scores) obtained from within-source and between-sources comparisons is needed to compute a likelihood ratio value for a score. In likelihood ratio computation, the within-source and between-sources conditions can either be anchored to the specific suspect in a forensic case or be generic within-source and between-sources comparisons independent of the suspect involved in the case. This results in two likelihood ratio values which differ in the nature of the training scores they use and therefore reflect slightly different interpretations of the two hypotheses. The goal of this study is to quantify the difference between these two likelihood ratio values in the context of evidence evaluation from a face, a fingerprint and a speaker recognition system. For each biometric modality, a simple forensic case is simulated by randomly selecting a small subset of biometric specimens from a large database. To enable a comparison across the three biometric modalities, the same protocol is followed for generating the training score sets. It is observed that there is significant variation between the two likelihood ratio values.
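As an illustration of this score-to-likelihood-ratio conversion, the minimal sketch below estimates the two score densities with kernel density estimation over training scores and evaluates their ratio at an evidence score. Everything in it is an assumption for the example (the synthetic score distributions, the KDE method, the evidence score), not the system or data used in the study; it shows the generic variant, in which training comparisons are pooled over many individuals, whereas the suspect-anchored variant would restrict the within-source training scores to comparisons involving the suspect.

```python
# Illustrative sketch (not the authors' implementation): a score-based
# likelihood ratio estimated via kernel density estimation over two sets
# of training scores. All scores below are synthetic.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(42)

# Synthetic training scores: within-source (same person) comparisons
# typically score higher than between-sources (different people) ones.
within_source_scores = rng.normal(loc=0.8, scale=0.1, size=500)
between_sources_scores = rng.normal(loc=0.3, scale=0.1, size=500)

# Fit one density per hypothesis.
f_prosecution = gaussian_kde(within_source_scores)    # p(score | H_p)
f_defence = gaussian_kde(between_sources_scores)      # p(score | H_d)

def likelihood_ratio(score: float) -> float:
    """LR(s) = p(s | H_p) / p(s | H_d), estimated from training scores."""
    return f_prosecution(score)[0] / f_defence(score)[0]

# Hypothetical evidence score from comparing trace and suspect specimens.
evidence_score = 0.7
print(f"LR({evidence_score}) = {likelihood_ratio(evidence_score):.2f}")
```

An LR above 1 supports the prosecution hypothesis and below 1 the defence hypothesis; the two LR variants discussed in the abstract differ only in which comparisons populate the two training score sets fed to the density estimators.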

Inspec keywords: digital forensics; face recognition; fingerprint identification

Other keywords: fingerprint recognition; biometric recognition system; forensic evaluation; similarity metric; biometric evidence evaluation; likelihood ratio; speaker recognition system; biometric modality

Subjects: Image recognition; Computer vision and image processing techniques; Data security
