IET Biometrics
Volume 3, Issue 4, December 2014
Periocular biometrics: constraining the elastic graph matching algorithm to biologically plausible distortions
- Author(s): Hugo Proença and Juan C. Briceño
- Source: IET Biometrics, Volume 3, Issue 4, p. 167 –175
- DOI: 10.1049/iet-bmt.2013.0039
- Type: Article
In biometrics research, the periocular region has been regarded as an interesting trade-off between the face and the iris, particularly in unconstrained data acquisition setups. As with other biometric traits, the current challenge is the development of more robust recognition algorithms. Having investigated the suitability of the ‘elastic graph matching’ (EGM) algorithm for handling non-linear distortions in the periocular region caused by facial expressions, the authors observed that vertex locations often do not correspond to displacements in the biological tissue. Hence, they propose a ‘globally coherent’ variant of EGM (GC-EGM) that avoids sudden local angular movements of vertices while maintaining the ability to faithfully model non-linear distortions. Two main adaptations were carried out: (i) a new term for measuring vertex similarity and (ii) a new term in the edge-cost function that penalises changes in orientation between the model and test graphs. Experiments were carried out on both synthetic and real data and point to the advantages of the proposed algorithm. Also, the recognition performance of EGM and GC-EGM was compared, and statistically significant improvements in the error rates were observed when using the GC-EGM variant.
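As an illustrative sketch only (the function name, terms and weighting below are assumptions, not the paper's exact cost), an edge-cost that penalises orientation change between the model and test graphs could look like:

```python
import math

def edge_cost(model_a, model_b, test_a, test_b, lam=1.0):
    """Toy GC-EGM-flavoured edge cost: a length-distortion term plus a
    penalty on the change in edge orientation between the model and
    test graphs. The paper's exact terms and weighting differ."""
    # Edge vectors in the model and test graphs
    mvx, mvy = model_b[0] - model_a[0], model_b[1] - model_a[1]
    tvx, tvy = test_b[0] - test_a[0], test_b[1] - test_a[1]
    # Squared difference of edge lengths (classic elastic term)
    d_len = (math.hypot(tvx, tvy) - math.hypot(mvx, mvy)) ** 2
    # Absolute change of edge orientation, wrapped to [0, pi]
    d_ang = abs(math.atan2(tvy, tvx) - math.atan2(mvy, mvx))
    d_ang = min(d_ang, 2 * math.pi - d_ang)
    return d_len + lam * d_ang
```

An undistorted edge costs zero, while a pure 90° rotation of the edge is charged only through the orientation term.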
On robust face recognition via sparse coding: the good, the bad and the ugly
- Author(s): Yongkang Wong ; Mehrtash T. Harandi ; Conrad Sanderson
- Source: IET Biometrics, Volume 3, Issue 4, p. 176 –189
- DOI: 10.1049/iet-bmt.2013.0033
- Type: Article
In the field of face recognition, sparse representation (SR) has received considerable attention during the past few years, with a focus on holistic descriptors in closed-set identification applications. The underlying assumption in such SR-based methods is that each class in the gallery has sufficient samples and the query lies on the subspace spanned by the gallery of the same class. Unfortunately, such an assumption is easily violated in the face verification scenario, where the task is to determine if two faces (where one or both have not been seen before) belong to the same person. In this study, the authors propose an alternative approach to SR-based face verification, where SR encoding is performed on local image patches rather than the entire face. The obtained sparse signals are pooled via averaging to form multiple region descriptors, which then form an overall face descriptor. Owing to the deliberate loss of spatial relations within each region (caused by averaging), the resulting descriptor is robust to misalignment and various image deformations. Within the proposed framework, they evaluate several SR encoding techniques: ℓ1-minimisation, Sparse Autoencoder Neural Network (SANN) and an implicit probabilistic technique based on Gaussian mixture models. Thorough experiments on AR, FERET, exYaleB, BANCA and ChokePoint datasets show that the local SR approach obtains considerably better and more robust performance than several previous state-of-the-art holistic SR methods, on both the traditional closed-set identification task and the more applicable face verification task. The experiments also show that ℓ1-minimisation-based encoding has a considerably higher computational cost when compared with SANN-based and probabilistic encoding, but leads to higher recognition rates.
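The pooling step described above can be sketched as follows; the sparse encoding of each patch is assumed to have already happened (any encoder would do), and the function names are illustrative:

```python
def pool_region(patch_codes):
    """Average-pool a list of per-patch sparse codes (equal-length vectors)
    into one region descriptor. Spatial order within the region is lost,
    which is what gives the descriptor its robustness to misalignment."""
    n = len(patch_codes)
    dim = len(patch_codes[0])
    return [sum(code[i] for code in patch_codes) / n for i in range(dim)]

def face_descriptor(regions):
    """Concatenate pooled region descriptors into the overall face descriptor."""
    desc = []
    for patch_codes in regions:
        desc.extend(pool_region(patch_codes))
    return desc
```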
Elastic strips normalisation model for higher iris recognition performance
- Author(s): Alaa Hilal ; Pierre Beauseroy ; Bassam Daya
- Source: IET Biometrics, Volume 3, Issue 4, p. 190 –197
- DOI: 10.1049/iet-bmt.2013.0026
- Type: Article
Iris recognition is among the best-performing biometric systems. Owing to the iris's uniqueness, universality, distinctiveness, permanence and collectability, iris recognition systems achieve high performance and real-time response. In this study, the authors propose an improved iris normalisation model applied after a precise iris segmentation process. The normalisation model defines a new reference space for iris features. It normalises the iris using radial strips whose shape changes between the pupil's boundary and the circular approximation of the iris's outer boundary. Moreover, the effect of the centres of the normalisation strips is evaluated by assessing the recognition performance for three different centre configurations. The approach is tested on 2491 images from the CASIA V3 database. The system's performance is measured at the matching stage. Higher decidability and recognition accuracy at the equal error rate are obtained. Detection error tradeoff curves are estimated using the proposed model and compared with Daugman's reference system to assess the performance improvement.
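For context, the Daugman reference normalisation the paper compares against maps the iris annulus to a rectangle by linear interpolation between the two boundaries. A minimal sketch (circular boundaries only, which is exactly the simplification the elastic-strips model improves on):

```python
import math

def rubber_sheet_point(pupil_c, pupil_r, iris_c, iris_r, r, theta):
    """Classic Daugman-style rubber-sheet sampling: linearly interpolate
    between the pupil-boundary point and the iris-boundary point at
    angle theta, with the normalised radius r in [0, 1]."""
    px = pupil_c[0] + pupil_r * math.cos(theta)
    py = pupil_c[1] + pupil_r * math.sin(theta)
    ix = iris_c[0] + iris_r * math.cos(theta)
    iy = iris_c[1] + iris_r * math.sin(theta)
    return ((1 - r) * px + r * ix, (1 - r) * py + r * iy)
```

Sampling this function over a grid of (r, theta) values yields the normalised iris strip.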
Palm vein recognition with local texture patterns
- Author(s): Leila Mirmohamadsadeghi and Andrzej Drygajlo
- Source: IET Biometrics, Volume 3, Issue 4, p. 198 –206
- DOI: 10.1049/iet-bmt.2013.0041
- Type: Article
Biometric recognition using the palm vein characteristics is emerging as a touchless and spoof-resistant hand-based means to identify individuals or to verify their identity. One of the open challenges in this field is the creation of fast and modality-dependent feature extractors for recognition. This article investigates features using local texture description methods. The local binary pattern (LBP) operator as well as the local derivative pattern (LDP) operator and the fusion of the two are studied in order to create efficient descriptors for palm vein recognition by systematically adapting their parameters to fit palm vein structures. Results of experiments are reported on the CASIA multi-spectral palm print image database V1.0 (CASIA database). It is found that the local texture patterns proposed in this study can be adapted to the vein description task for biometric recognition and that the LDP operator consistently outperforms the LBP operator in palm vein recognition.
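The basic 3x3 LBP operator studied in the paper thresholds each pixel's eight neighbours against the centre and packs the results into an 8-bit code; a minimal sketch (the paper additionally tunes radius and sampling parameters to vein structures):

```python
def lbp_code(img, y, x):
    """Basic 3x3 local binary pattern at pixel (y, x): threshold the 8
    neighbours against the centre value and pack the bits clockwise
    starting from the top-left neighbour."""
    c = img[y][x]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dy, dx) in enumerate(offsets):
        if img[y + dy][x + dx] >= c:
            code |= 1 << bit
    return code
```

A histogram of these codes over an image region forms the texture descriptor.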
On application of bloom filters to iris biometrics
- Author(s): Christian Rathgeb ; Frank Breitinger ; Christoph Busch ; Harald Baier
- Source: IET Biometrics, Volume 3, Issue 4, p. 207 –218
- DOI: 10.1049/iet-bmt.2013.0049
- Type: Article
In this study, the application of adaptive Bloom filters to binary iris biometric feature vectors, that is, iris-codes, is proposed. Bloom filters, which have been established as a powerful tool in various fields of computer science, are applied in order to transform iris-codes to a rotation-invariant feature representation. Properties of the proposed Bloom filter-based transform concurrently enable (i) biometric template protection, (ii) compression of biometric data and (iii) acceleration of biometric identification, while no significant degradation of biometric performance is observed. According to these fields of application, detailed investigations are presented. Experiments are conducted on the CASIA-v3 iris database for different feature extraction algorithms. Confirming the soundness of the proposed approach, the application of adaptive Bloom filters achieves rotation-invariant cancellable templates maintaining biometric performance, a compression of templates down to 20–40% of original size and a reduction of bit-comparisons to less than 5% leading to a substantial speed-up of the biometric system in identification mode.
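The core idea can be sketched in a few lines: each column of an iris-code block is read as a binary word and the corresponding bit of a Bloom filter is set. Because column order is discarded, circular shifts of the code (eye rotations) leave the filter essentially unchanged. This is a simplified single-block sketch, not the full adaptive scheme:

```python
def bloom_transform(iris_code_block):
    """Map each column of a binary iris-code block (list of bit rows) to
    an integer index and set that bit in a Bloom filter. Discarding the
    column order is what yields the (approximate) rotation invariance."""
    rows, cols = len(iris_code_block), len(iris_code_block[0])
    bloom = [0] * (1 << rows)       # one bit per possible column word
    for j in range(cols):
        idx = 0
        for i in range(rows):
            idx = (idx << 1) | iris_code_block[i][j]
        bloom[idx] = 1
    return bloom
```

The transform is also lossy and many-to-one, which is what provides template protection and compression.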
Presentation attack detection methods for fingerprint recognition systems: a survey
- Author(s): Ctirad Sousedik and Christoph Busch
- Source: IET Biometrics, Volume 3, Issue 4, p. 219 –233
- DOI: 10.1049/iet-bmt.2013.0020
- Type: Article
Nowadays, fingerprint biometrics is widely used in various applications, ranging from forensic investigations and migration control to access control in security-sensitive environments. Any biometric system is potentially vulnerable to a fake biometric characteristic, and spoofing of fingerprint systems is one of the most widely researched areas. State-of-the-art sensors can often be spoofed by an accurate imitation of the ridge/valley structure of a fingerprint. An individual may also try to avoid identification by altering his or her own fingerprint pattern. This study is a survey of presentation attack detection methods for fingerprints, covering both liveness detection and alteration detection.
Personal identification based on multiple keypoint sets of dorsal hand vein images
- Author(s): Yiding Wang ; Ke Zhang ; Lik-Kwan Shark
- Source: IET Biometrics, Volume 3, Issue 4, p. 234 –245
- DOI: 10.1049/iet-bmt.2013.0042
- Type: Article
This paper presents a biometric identification system based on near-infrared imaging of dorsal hand veins and matching of the keypoints that are extracted from the dorsal hand vein images by the scale-invariant feature transform. The whole system is covered in detail, which includes the imaging device used, image processing methods proposed for geometric correction, region-of-interest extraction, image enhancement and vein pattern segmentation, as well as image classification by extraction and matching of keypoints. In addition to several constraints introduced to minimise incorrectly matched keypoints, a particular focus is placed on the use of multiple training images of each hand class to improve the recognition performance for a large database with more than 200 hand classes. By organising multiple keypoint sets extracted from multiple training images of each hand class into three sets, namely, the union, the intersection and the exclusion, based on their inter-class and intra-class relationships, this study shows the contribution made by each set to the recognition performance and demonstrates the feasibility of achieving 100% correct recognition by combining the three sets, based on the experiments conducted using more than 2000 dorsal hand vein images.
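The organisation of multiple training keypoint sets into union, intersection and exclusion sets can be sketched as below. This simplification treats keypoints as hashable values compared by equality, whereas the paper matches SIFT descriptors by distance and also uses inter-class relationships:

```python
def organise_keypoints(training_sets):
    """Build the three keypoint sets for one hand class from several
    training images: the union of all keypoints, the intersection
    (keypoints found in every training image) and the exclusion
    (keypoints found in some but not all images)."""
    union = set().union(*training_sets)
    inter = set.intersection(*map(set, training_sets))
    exclusion = union - inter
    return union, inter, exclusion
```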
Score calibration in face recognition
- Author(s): Miranti Indar Mandasari ; Manuel Günther ; Roy Wallace ; Rahim Saeidi ; Sébastien Marcel ; David A. van Leeuwen
- Source: IET Biometrics, Volume 3, Issue 4, p. 246 –256
- DOI: 10.1049/iet-bmt.2013.0066
- Type: Article
An evaluation of the verification and calibration performance of a face recognition system based on inter-session variability modelling is presented. As an extension to calibration through linear transformation of scores, categorical calibration is introduced as a way to include additional information about images for calibration. The cost of likelihood ratio, which is a well-known measure in the speaker recognition field, is used as a calibration performance metric. The results obtained from the challenging mobile biometrics and surveillance camera face databases indicate that linearly calibrated face recognition scores are less misleading in their likelihood ratio interpretation than uncalibrated scores. In addition, the categorical calibration experiments show that calibration can be used not only to improve the likelihood ratio interpretation of scores, but also to improve the verification performance of a face recognition system.
Individual identification based on chaotic electrocardiogram signals during muscular exercise
- Author(s): Shyan-Lung Lin ; Ching-Kun Chen ; Chun-Liang Lin ; Wen-Chan Yang ; Cheng-Tang Chiang
- Source: IET Biometrics, Volume 3, Issue 4, p. 257 –266
- DOI: 10.1049/iet-bmt.2013.0014
- Type: Article
An electrocardiogram (ECG) records changes in the electric potential of cardiac cells using a non-invasive method. Previous studies have shown that each person's cardiac signal possesses unique characteristics, and researchers have therefore attempted to use ECG signals for personal identification. However, most studies verify results using ECG signals taken from databases that were recorded from subjects at rest, so the extraction and analysis of a subject's ECG typically occurs in the resting state. This study presents experiments that involve recording ECG information after the subjects' heart rate was increased through exercise. It adopts the root mean square value, the nonlinear Lyapunov exponent and the correlation dimension to analyse ECG data, and uses a support vector machine (SVM) for classification, identifying the best feature combination and the most appropriate kernel function for the SVM. Results show that the successful recognition rate exceeds 80% when using the nonlinear SVM with a polynomial kernel function. This study confirms the existence of unique ECG features in each person. Even under exercise conditions, chaos theory can be used to extract specific biological characteristics, confirming the feasibility of using ECG signals for biometric verification.
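Of the three features named above, the root mean square is the simplest to illustrate; the chaotic features (largest Lyapunov exponent, correlation dimension) require a phase-space embedding and are omitted from this sketch:

```python
import math

def ecg_rms(segment):
    """Root mean square of an ECG segment: one of the three features the
    study feeds into the SVM classifier."""
    return math.sqrt(sum(s * s for s in segment) / len(segment))
```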
Mobile signature verification: feature robustness and performance comparison
- Author(s): Marcos Martinez-Diaz ; Julian Fierrez ; Ram P. Krish ; Javier Galbally
- Source: IET Biometrics, Volume 3, Issue 4, p. 267 –277
- DOI: 10.1049/iet-bmt.2013.0081
- Type: Article
In this study, the effects of using handheld devices on the performance of automatic signature verification systems are studied. The authors compare the discriminative power of global and local signature features between mobile devices and pen tablets, which are the prevalent acquisition device in the research literature. Individual feature discriminant ratios and feature selection techniques are used for comparison. Experiments are conducted on standard signature benchmark databases (BioSecure database) and a state-of-the-art device (Samsung Galaxy Note). Results show a decrease in the feature discriminative power and a higher verification error rate on handheld devices. It is found that one of the main causes of performance degradation on handheld devices is the absence of pen-up trajectory information (i.e. data acquired when the pen tip is not in contact with the writing surface).
Fast neighbourhood component analysis with spatially smooth regulariser for robust noisy face recognition
- Author(s): Faqiang Wang ; Hongzhi Zhang ; Kuanquan Wang ; Wangmeng Zuo
- Source: IET Biometrics, Volume 3, Issue 4, p. 278 –290
- DOI: 10.1049/iet-bmt.2013.0087
- Type: Article
For the robust recognition of noisy face images, in this study, the authors improved the fast neighbourhood component analysis (FNCA) model by introducing a novel spatially smooth regulariser (SSR), resulting in the FNCA-SSR model. The SSR enforces local spatial smoothness by penalising large differences between adjacent pixels, and makes the FNCA-SSR model robust against noise in face images. Moreover, the gradient of the SSR can be efficiently computed in image space, and thus the optimisation problem of FNCA-SSR can be conveniently solved using the gradient descent algorithm. Experimental results on several face data sets show that, for the recognition of noisy face images, FNCA-SSR is robust against Gaussian noise and salt-and-pepper noise, and can achieve much higher recognition accuracy than FNCA and other competing methods.
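The "penalise large differences between adjacent pixels" idea can be sketched as a quadratic penalty over horizontal and vertical neighbours; the paper's exact regulariser and its weighting inside the FNCA objective may differ:

```python
def smoothness_penalty(img):
    """Spatial-smoothness penalty in the spirit of the SSR: the sum of
    squared differences between horizontally and vertically adjacent
    pixels of a 2D image (list of rows)."""
    h, w = len(img), len(img[0])
    total = 0.0
    for y in range(h):
        for x in range(w):
            if x + 1 < w:                     # horizontal neighbour
                total += (img[y][x + 1] - img[y][x]) ** 2
            if y + 1 < h:                     # vertical neighbour
                total += (img[y + 1][x] - img[y][x]) ** 2
    return total
```

Being a sum of squares, this term is differentiable, which is what allows the gradient-descent optimisation mentioned in the abstract.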
Separating the real from the synthetic: minutiae histograms as fingerprints of fingerprints
- Author(s): Carsten Gottschlich and Stephan Huckemann
- Source: IET Biometrics, Volume 3, Issue 4, p. 291 –301
- DOI: 10.1049/iet-bmt.2013.0065
- Type: Article
In this study, the authors show that fingerprints synthetically generated by the current state of the art can easily be discriminated from real fingerprints. They propose a non-parametric distribution-based method using second-order extended minutiae histograms (MHs) which can distinguish between real and synthetic prints with very high accuracy. MHs provide a fixed-length feature vector for a fingerprint which is invariant under rotation and translation. This ‘test of realness’ can be applied to synthetic fingerprints produced by any method. In this study, tests are conducted on the 12 publicly available databases of FVC2000, FVC2002 and FVC2004, which are well-established benchmarks for evaluating the performance of fingerprint recognition algorithms; 3 of these 12 databases consist of artificial fingerprints generated by the SFinGe software. In addition, they evaluate the discriminative performance on a database of synthetic fingerprints generated by the software of Bicz against real fingerprint images. They conclude with suggestions for the improvement of synthetic fingerprint generation.
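A minimal sketch of a second-order minutiae histogram follows; the bin layout and features here (inter-point distance and direction difference per ordered minutia pair) are illustrative choices, not the paper's exact definition, but both quantities share the key property of being invariant under rotation and translation of the whole print:

```python
import math

def minutiae_histogram(minutiae, d_bins=4, a_bins=4, d_max=100.0):
    """Fixed-length histogram over ordered minutia pairs, binned by
    inter-point distance and by difference of minutia directions.
    Each minutia is a tuple (x, y, direction_in_radians)."""
    hist = [0] * (d_bins * a_bins)
    for a in minutiae:
        for b in minutiae:
            if a == b:
                continue
            d = math.hypot(b[0] - a[0], b[1] - a[1])
            da = (b[2] - a[2]) % (2 * math.pi)
            di = min(int(d / d_max * d_bins), d_bins - 1)
            ai = min(int(da / (2 * math.pi) * a_bins), a_bins - 1)
            hist[di * a_bins + ai] += 1
    return hist
```

Comparing such histograms (e.g. by histogram distance) gives a fixed-length, alignment-free fingerprint-of-a-fingerprint.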
Hand vein authentication using biometric graph matching
- Author(s): Seyed Mehdi Lajevardi ; Arathi Arakala ; Stephen Davis ; Kathy J. Horadam
- Source: IET Biometrics, Volume 3, Issue 4, p. 302 –313
- DOI: 10.1049/iet-bmt.2013.0086
- Type: Article
This study proposes an automatic dorsal hand vein verification system using a novel algorithm called biometric graph matching (BGM). The dorsal hand vein image is segmented using the K-means technique, and the region of interest is extracted based on morphological analysis operators and normalised using adaptive histogram equalisation. Veins are extracted using a maximum curvature algorithm. The locations and vascular connections between crossovers, bifurcations and terminations in a hand vein pattern define a hand vein graph. The matching performance of BGM for hand vein graphs is tested with two cost functions and compared with the matching performance of two standard point pattern matching algorithms, iterative closest point (ICP) and modified Hausdorff distance. Experiments are conducted on two public databases captured using far infrared and near infrared (NIR) cameras. BGM's matching performance is competitive with state-of-the-art algorithms on the databases despite using small and concise templates. For both databases, BGM performed at least as well as ICP. For the small-sized graphs from the NIR database, BGM significantly outperformed point pattern matching. The size of the common subgraph of a pair of graphs is the most significant discriminating measure between genuine and imposter comparisons.
Design and evaluation of photometric image quality measures for effective face recognition
- Author(s): Ayman Abaza ; Mary Ann Harrison ; Thirimachos Bourlai ; Arun Ross
- Source: IET Biometrics, Volume 3, Issue 4, p. 314 –324
- DOI: 10.1049/iet-bmt.2014.0022
- Type: Article
The performance of an automated face recognition system can be significantly influenced by face image quality. Designing an effective image quality index is necessary in order to provide real-time feedback for reducing the number of poor-quality face images acquired during enrolment and authentication, thereby improving matching performance. In this study, the authors first evaluate techniques that can measure image quality factors such as contrast, brightness, sharpness, focus and illumination in the context of face recognition. Second, they determine whether using a combination of techniques for measuring each quality factor is more beneficial, in terms of face recognition performance, than using a single independent technique. Third, they propose a new face image quality index (FQI) that combines multiple quality measures, and classify a face image based on this index. In their studies, they evaluate the benefit of using the FQI as an alternative to independent measures. Finally, they conduct statistical significance Z-tests that demonstrate the advantages of the proposed FQI in face recognition applications.
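Quality factors of the kind evaluated above can be measured very simply; the definitions below (brightness as mean grey level, contrast as standard deviation, sharpness as mean horizontal gradient magnitude) are common textbook choices, not the paper's concrete measures or its FQI combination:

```python
import math

def quality_measures(img):
    """Simple photometric quality measures for a greyscale image given as
    a list of rows: (brightness, contrast, sharpness)."""
    pixels = [p for row in img for p in row]
    n = len(pixels)
    brightness = sum(pixels) / n
    contrast = math.sqrt(sum((p - brightness) ** 2 for p in pixels) / n)
    grads = [abs(row[x + 1] - row[x]) for row in img for x in range(len(row) - 1)]
    sharpness = sum(grads) / len(grads) if grads else 0.0
    return brightness, contrast, sharpness
```

A combined index would then be some learned or hand-tuned function of these per-factor scores.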
Shape primitive histogram: low-level face representation for face recognition
- Author(s): Sheng Huang ; Dan Yang ; Haopeng Zhang ; Luwen Huangfu ; Xiaohong Zhang
- Source: IET Biometrics, Volume 3, Issue 4, p. 325 –334
- DOI: 10.1049/iet-bmt.2013.0089
- Type: Article
The human face contains abundant shape features, a fact that has motivated many shape feature-based face detection and three-dimensional (3D) face recognition approaches. However, as far as the authors know, no prior low-level face representation based purely on shape features has been proposed for conventional 2D (image-based) face recognition. In this study, the authors present a novel low-level shape-based face representation named the ‘shape primitives histogram’ (SPH) for face recognition. In this approach, the face images are separated into a number of tiny shape fragments, which are reduced to several uniform atomic shape patterns called ‘shape primitives’. The face representation is then obtained by computing a histogram of shape primitives over a local image region. To take scale information into consideration, they also produce multi-scale SPHs (MSPHs) by concatenating the SPHs extracted at different scales. Moreover, they experimentally study the influence of each stage of SPH computation on performance, concluding that a small cell with 1/2 overlap and a fine-sized block with 1/2 overlap are important for good results. Four popular face databases, namely ORL, AR, YaleB and LFW-a, are employed to evaluate SPH and MSPH. Surprisingly, these seemingly naive shape-based face representations outperform state-of-the-art low-level face representations.
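To make the fragment-to-primitive-to-histogram pipeline concrete, here is a deliberately crude sketch in which every 2x2 binary fragment is treated as one of 16 hypothetical "primitives"; the paper derives its primitive set differently, so this only illustrates the histogram-of-atomic-patterns structure:

```python
def shape_primitive_histogram(img, thresh=0):
    """Slide a 2x2 window over a 2D shape/edge map, binarise each fragment
    against a threshold and count each of the 16 possible binary patterns
    as one atomic 'primitive'. Returns the 16-bin histogram."""
    hist = [0] * 16
    for y in range(len(img) - 1):
        for x in range(len(img[0]) - 1):
            bits = [img[y][x], img[y][x + 1], img[y + 1][x], img[y + 1][x + 1]]
            idx = 0
            for b in bits:
                idx = (idx << 1) | (1 if b > thresh else 0)
            hist[idx] += 1
    return hist
```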
Biometric evidence evaluation: an empirical assessment of the effect of different training data
- Author(s): Tauseef Ali ; Luuk Spreeuwers ; Raymond Veldhuis ; Didier Meuwly
- Source: IET Biometrics, Volume 3, Issue 4, p. 335 –346
- DOI: 10.1049/iet-bmt.2014.0009
- Type: Article
For an automatic comparison of a pair of biometric specimens, a similarity metric called a ‘score’ is computed by the biometric recognition system employed. In forensic evaluation, it is desirable to convert this score into a likelihood ratio; this process is referred to as calibration. A likelihood ratio is the probability of the score given that the prosecution hypothesis (which states that the pair of biometric specimens originated from the suspect) is true, divided by the probability of the score given that the defence hypothesis (which states that the pair of biometric specimens did not originate from the suspect) is true. In practice, a set of scores (called training scores) obtained from within-source and between-sources comparisons is needed to compute a likelihood ratio value for a score. In likelihood ratio computation, the within-source and between-sources conditions can be anchored to a specific suspect in a forensic case, or they can be generic within-source and between-sources comparisons independent of the suspect involved in the case. This results in two likelihood ratio values which differ in the nature of the training scores they use and therefore consider slightly different interpretations of the two hypotheses. The goal of this study is to quantify the differences in these two likelihood ratio values in the context of evidence evaluation from a face, a fingerprint and a speaker recognition system. For each biometric modality, a simple forensic case is simulated by randomly selecting a small subset of biometric specimens from a large database. In order to enable a comparison across the three biometric modalities, the same protocol is followed for training score set generation. It is observed that there is a significant variation between the two likelihood ratio values.
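The score-to-likelihood-ratio conversion can be sketched by fitting a density to each set of training scores and taking the ratio at the evidential score. The Gaussian fit below is a simplification; real forensic calibration typically uses more robust models (e.g. logistic regression or pool-adjacent-violators):

```python
from statistics import NormalDist

def likelihood_ratio(score, within_source_scores, between_sources_scores):
    """Fit a Gaussian to the within-source (prosecution) and
    between-sources (defence) training scores, then return the ratio of
    the two densities evaluated at the evidential score."""
    hp = NormalDist.from_samples(within_source_scores)
    hd = NormalDist.from_samples(between_sources_scores)
    return hp.pdf(score) / hd.pdf(score)
```

Anchoring the two training-score sets to a specific suspect versus using generic comparisons is exactly what produces the two differing likelihood ratio values studied in the paper.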
Off-line signature verification: upper and lower envelope shape analysis using chord moments
- Author(s): Medam Manoj Kumar and Niladri Bihari Puhan
- Source: IET Biometrics, Volume 3, Issue 4, p. 347 –354
- DOI: 10.1049/iet-bmt.2014.0024
- Type: Article
The signature is an important and useful behavioural biometric which exhibits a significant amount of non-linear variability. In this study, the authors concentrate on an envelope shape feature known as ‘chord moments’. Central moments such as the variance, skewness and kurtosis, along with the first moment (the mean), are computed from sets of chord lengths and angles for each envelope reference point. The proposed chord moments adequately quantify the spatial inter-relationship among upper and lower envelope points. The moment-based approach significantly reduces the dimension of highly detailed chord sets and is experimentally found to be robust in handling non-linear variability in signature images. The proposed chord moments, coupled with a support vector machine classifier, lead to a writer-dependent off-line signature verification system that achieves state-of-the-art performance on the noisy Center of Excellence for Document Analysis and Recognition database.
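The moment computation for the chord-length part can be sketched as follows (the paper computes the same four statistics for chord angles as well; the exact envelope extraction is assumed done):

```python
import math

def chord_moments(ref_point, envelope_points):
    """From one envelope reference point, compute the chord lengths to the
    other envelope points and summarise them by mean, variance, and the
    standardised skewness and kurtosis, as in the length-feature part of
    the chord-moments descriptor."""
    lengths = [math.hypot(x - ref_point[0], y - ref_point[1])
               for (x, y) in envelope_points]
    n = len(lengths)
    mean = sum(lengths) / n
    var = sum((l - mean) ** 2 for l in lengths) / n
    sd = math.sqrt(var) or 1.0          # guard against zero spread
    skew = sum(((l - mean) / sd) ** 3 for l in lengths) / n
    kurt = sum(((l - mean) / sd) ** 4 for l in lengths) / n
    return mean, var, skew, kurt
```

Concatenating these four numbers per reference point replaces the full chord set with a compact fixed-length feature, which is the dimensionality reduction the abstract describes.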