
Biometric evidence evaluation: an empirical assessment of the effect of different training data


For the automatic comparison of a pair of biometric specimens, a biometric recognition system computes a similarity metric called a ‘score’. In forensic evaluation, it is desirable to convert this score into a likelihood ratio, a process referred to as calibration. A likelihood ratio is the probability of the score given that the prosecution hypothesis (which states that the pair of biometric specimens originate from the suspect) is true, divided by the probability of the score given that the defence hypothesis (which states that the pair of biometric specimens do not originate from the suspect) is true. In practice, a set of scores (called training scores) obtained from within-source and between-sources comparisons is needed to compute a likelihood ratio value for a score. In likelihood ratio computation, the within-source and between-sources conditions can either be anchored to the specific suspect in a forensic case, or be generic within-source and between-sources comparisons independent of the suspect involved in the case. This results in two likelihood ratio values that differ in the nature of the training scores they use and therefore reflect slightly different interpretations of the two hypotheses. The goal of this study is to quantify the difference between these two likelihood ratio values in the context of evidence evaluation with a face, a fingerprint and a speaker recognition system. For each biometric modality, a simple forensic case is simulated by randomly selecting a small subset of biometric specimens from a large database. To enable comparison across the three biometric modalities, the same protocol is followed to generate the training score sets. It is observed that there is significant variation between the two likelihood ratio values.
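In symbols, the likelihood ratio of an evidence score s is LR(s) = p(s | H_p) / p(s | H_d), where the numerator density is estimated from within-source training scores and the denominator from between-sources training scores. The sketch below illustrates this score-to-likelihood-ratio calibration, assuming NumPy and SciPy are available; the function name `likelihood_ratio` and all scores are hypothetical placeholders, and Gaussian kernel density estimation stands in for whichever density model a given system actually uses. The suspect-anchored and generic variants studied in the paper correspond simply to two different choices of the training score sets passed in.

```python
# A minimal sketch of score-to-likelihood-ratio calibration.
# All names, distributions and scores here are illustrative assumptions,
# not the paper's actual data or method.
import numpy as np
from scipy.stats import gaussian_kde

def likelihood_ratio(score, within_scores, between_scores):
    """Estimate LR(s) = p(s | H_p) / p(s | H_d) using Gaussian KDEs
    fitted to within-source (H_p) and between-sources (H_d) training scores."""
    p_num = gaussian_kde(within_scores)(score)[0]   # p(score | prosecution)
    p_den = gaussian_kde(between_scores)(score)[0]  # p(score | defence)
    return p_num / p_den

# Hypothetical example: generic training scores drawn from two overlapping
# synthetic score distributions, and one evidence score to be calibrated.
rng = np.random.default_rng(0)
within = rng.normal(2.0, 1.0, 500)     # same-source comparison scores
between = rng.normal(-1.0, 1.0, 5000)  # different-sources comparison scores
print(likelihood_ratio(1.5, within, between))  # LR > 1 supports H_p
```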
