IET Biometrics
Volume 7, Issue 6, November 2018
- Source: IET Biometrics, Volume 7, Issue 6, p. 501
- DOI: 10.1049/iet-bmt.2018.5222
- Type: Article
- Author(s): Raul Sanchez-Reillo ; Pablo Heredia-da-Costa ; Kevin Mangold
- Source: IET Biometrics, Volume 7, Issue 6, pp. 502–509
- DOI: 10.1049/iet-bmt.2018.5069
- Type: Article
In current society, the deployment of distributed applications necessitates trustworthy human recognition mechanisms that can be executed over open networks. If the recognition mechanism is based on biometrics, then the biometric solution must be developed as a network-based service. Additionally, as the biometric service will be invoked by applications around the globe, the use of standardised technology is recommended. This work surveys the current standards in this area, as well as some of the gaps that still exist. A solution for one of these gaps has been developed by the authors and is explained in the last section of this study. With the current definitions and reference implementations, industry can implement network-based biometric services in an interoperable way.
- Author(s): Alper Kanak
- Source: IET Biometrics, Volume 7, Issue 6, pp. 510–518
- DOI: 10.1049/iet-bmt.2018.5067
- Type: Article
With the fast adoption of cloud computing, the use of biometric technologies has evolved towards a different way of providing security, preserving privacy, and analysing personal traits for various purposes. The main components of any biometric system, such as biometric sensing, data gathering, feature extraction, identification, verification, recognition, and analytics, are now handled over distributed networks. Many biometric system services are delivered over such networks, giving rise to the concept of 'biometrics-as-a-service' (BaaS). Recent BaaS approaches usually focus on identifying effective distributed architectures, policies, and use-case recommendations. However, there is a strong need to develop a semantic framework that relies on a biometric ontology. This study presents such an ontology, covering the uses of different biometric modalities, the evaluation and assessment of biometric systems, the modelling of biometric processes, and analyses through interlinked relations with biometric stakeholders. To shed light on how such an ontology is useful for BaaS solutions, a case study focusing on the various uses of biometric modalities is presented. The selected use case addresses asylum-seeker and immigrant identification problems in border security, where facial biometrics are employed.
- Author(s): Iyyakutti Iyappan Ganapathi ; Surya Prakash ; Ishan Rajendra Dave ; Piyush Joshi ; Syed Sadaf Ali ; Akhilesh Mohan Shrivastava
- Source: IET Biometrics, Volume 7, Issue 6, pp. 519–529
- DOI: 10.1049/iet-bmt.2018.5064
- Type: Article
This study presents a novel approach for human recognition using co-registered three-dimensional (3D) and 2D ear images. The proposed technique is based on local feature detection and description. The authors detect feature key-points in 2D ear images using curvilinear structures and map them to the 3D ear images. Considering a neighbourhood around each mapped key-point in 3D, a feature descriptor vector is computed. To match a probe 3D ear image with a gallery 3D ear image, highly similar feature key-points of the two images are first used as correspondence points for an initial alignment. Afterwards, a fine iterative closest point (ICP) matching is performed on the entire data of the two 3D ear images. An extensive experimental analysis demonstrates the recognition performance of the proposed approach in the presence of noise and occlusions and compares it with available state-of-the-art 3D ear recognition techniques. The recognition rate of the proposed technique is found to be 98.69% on the University of Notre Dame Collection J2 dataset, with an equal error rate of 1.53%.
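The two-stage alignment described in this abstract (coarse key-point correspondence, then fine iterative closest point refinement) can be sketched as follows. This is a minimal, translation-only ICP illustration on 2D point sets, not the authors' full 3D implementation; all function names are hypothetical.

```python
import math

def closest_pairs(probe, gallery):
    """For each probe point, find the nearest gallery point (brute force)."""
    pairs = []
    for p in probe:
        best = min(gallery, key=lambda g: (g[0] - p[0]) ** 2 + (g[1] - p[1]) ** 2)
        pairs.append((p, best))
    return pairs

def icp_translation(probe, gallery, iters=20):
    """Iteratively translate `probe` toward `gallery` (translation-only ICP sketch)."""
    probe = [list(p) for p in probe]
    for _ in range(iters):
        pairs = closest_pairs(probe, gallery)
        # The mean residual between matched pairs gives the update step.
        dx = sum(g[0] - p[0] for p, g in pairs) / len(pairs)
        dy = sum(g[1] - p[1] for p, g in pairs) / len(pairs)
        for p in probe:
            p[0] += dx
            p[1] += dy
    return probe

def mean_error(probe, gallery):
    """Mean distance from each probe point to its nearest gallery point."""
    pairs = closest_pairs(probe, gallery)
    return sum(math.dist(p, g) for p, g in pairs) / len(pairs)
```

A full implementation would also estimate rotation (e.g. via the Kabsch algorithm) and seed the loop with the key-point correspondences from the coarse stage.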
- Author(s): Sanaz Rasti ; Mehran Yazdi ; Mohammad Ali Masnadi-Shirazi
- Source: IET Biometrics, Volume 7, Issue 6, pp. 530–535
- DOI: 10.1049/iet-bmt.2018.5059
- Type: Article
The authors propose a novel makeup-detection approach that helps face recognition systems achieve a higher accuracy rate when dealing with makeup images. Makeup features are defined in this work using biologically inspired features (BIFs). To establish makeup-descriptive features more specifically, colour and texture features must be extracted from the images. Hence, the authors create makeup-descriptive features in the last complex layer of BIFs (C2) as average skin tone (AST) and a histogram of oriented gradients (HOG), where AST represents colour and HOG captures texture. The proposed makeup BIFs are extracted from grayscale images, and instead of breaking the face image into several patches, the whole face image is used. This results in faster performance as well as a higher accuracy rate compared with state-of-the-art makeup-detection schemes. Subsequently, a machine learning scheme is used to train the makeup-detection system by feeding it makeup and non-makeup labelled images. The authors use a correlation-based method for the face recognition system and compare the results with the direct two-dimensional principal component analysis face recognition scheme on makeup datasets. Experimental results show that the highest accuracy rate of 97.07% was achieved by the proposed algorithm for face recognition in the presence of makeup.
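The two makeup-descriptive features named in this abstract can be illustrated in simplified form: average skin tone as a mean grey level, and a HOG-like histogram of gradient orientations. This is a hedged sketch with images as nested lists, not the paper's C2-layer computation; both function names are hypothetical.

```python
import math

def average_skin_tone(gray):
    """Mean grey level over the face region: a crude colour descriptor."""
    pixels = [v for row in gray for v in row]
    return sum(pixels) / len(pixels)

def orientation_histogram(gray, bins=8):
    """Histogram of gradient orientations over the whole image (a HOG-like
    texture descriptor, unnormalised and without cells/blocks)."""
    h = [0.0] * bins
    rows, cols = len(gray), len(gray[0])
    for y in range(1, rows - 1):
        for x in range(1, cols - 1):
            gx = gray[y][x + 1] - gray[y][x - 1]   # central differences
            gy = gray[y + 1][x] - gray[y - 1][x]
            mag = math.hypot(gx, gy)
            ang = math.atan2(gy, gx) % math.pi     # unsigned orientation
            b = min(int(ang / math.pi * bins), bins - 1)
            h[b] += mag                            # magnitude-weighted vote
    return h
```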
- Author(s): Syed Sadaf Ali ; Iyyakutti Iyappan Ganapathi ; Surya Prakash
- Source: IET Biometrics, Volume 7, Issue 6, pp. 536–549
- DOI: 10.1049/iet-bmt.2018.5070
- Type: Article
Fingerprint authentication systems generally store the data extracted from a fingerprint as a minutiae template in a database. However, databases can be attacked and compromised by an adversary, and if the minutiae points of a user are leaked, the fingerprint can be regenerated from them. The fingerprint cannot be changed, as the finger is part of the human body; hence, the information extracted from the fingerprint must be secured. In this study, the authors propose a technique that uses the location information of the minutiae points to construct a highly secure template for a user. For every minutia point, a secured modified location is generated using information from its neighbouring minutiae and a key-set. The authors achieve equal error rates of 2, 1, and 3.1% for the FVC2002 DB1, DB2, and DB3 fingerprint databases, respectively, under the same-key scenario. Analysis of several attacks shows that the proposed technique is robust and secure, and the experimental results demonstrate its efficiency.
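The general idea of a keyed, revocable minutiae template, in which each minutia's location is replaced by a value derived from its neighbours and a user key, might be sketched like this; the paper's actual transform differs, and `transform_minutiae` is a hypothetical name.

```python
import hashlib

def transform_minutiae(minutiae, key):
    """Replace each minutia location with a keyed, non-invertible code derived
    from the point itself, its nearest neighbour, and a user key (a sketch of
    the general idea only; the paper's actual transform differs)."""
    secured = []
    for i, (x, y) in enumerate(minutiae):
        # The nearest neighbouring minutia contributes local context to the code.
        nx, ny = min((m for j, m in enumerate(minutiae) if j != i),
                     key=lambda m: (m[0] - x) ** 2 + (m[1] - y) ** 2)
        msg = f"{x},{y}|{nx},{ny}|{key}".encode()
        secured.append(hashlib.sha256(msg).hexdigest())
    return secured
```

Because the key participates in every code, a leaked template can be revoked by re-enrolling the same finger with a new key.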
- Author(s): Pavlo Tertychnyi ; Cagri Ozcinar ; Gholamreza Anbarjafari
- Source: IET Biometrics, Volume 7, Issue 6, pp. 550–556
- DOI: 10.1049/iet-bmt.2018.5074
- Type: Article
Fingerprint recognition systems mainly use minutiae-point information. As shown in many previous research works, fingerprint images are not always of sufficient quality to be used by automatic fingerprint recognition systems. To tackle this challenge, the authors focus on very low-quality fingerprint images containing several well-known distortions, such as dryness, wetness, physical damage, the presence of dots, and blurriness. They develop an efficient, highly accurate deep neural network algorithm that recognises such low-quality fingerprints. The experimental results, obtained on a real low-quality fingerprint database, show the high performance and robustness of the introduced deep network technique. The VGG16-based deep network achieves the highest performance of 93% for the dry class and the lowest performance of 84% for the blurred fingerprint class.
Guest Editorial: ‘Biometrics as-a-service’: the path ahead?
Developing standardised network-based biometric services
Biometric ontology for semantic biometric-as-a-service (BaaS) applications: a border security use case
Ear recognition in 3D using 2D curvilinear features
Biologically inspired makeup detection system with application in face recognition
Robust technique for fingerprint template protection
Low-quality fingerprint classification using deep neural network
- Author(s): Ibrahim Omara ; Xiaohe Wu ; Hongzhi Zhang ; Yong Du ; Wangmeng Zuo
- Source: IET Biometrics, Volume 7, Issue 6, pp. 557–566
- DOI: 10.1049/iet-bmt.2017.0087
- Type: Article
Convolutional neural network (CNN)-based deep features have demonstrated remarkable performance in various vision tasks, such as image classification and face verification. Compared with hand-crafted descriptors, deep features exhibit more powerful representation ability. Typically, higher-layer features contain more semantic information, while lower-layer features provide more low-level description; moreover, fusing features from different layers can lead to superior performance. Here, the authors propose a novel approach for human ear identification that combines hierarchical deep features. First, hierarchical deep features are extracted from ear images using a CNN pre-trained on a large-scale dataset. To enhance the feature representation and reduce the high dimensionality of the deep features, discriminant correlation analysis (DCA) is adopted to fuse deep features from different layers. Owing to the small number of ear images per person, the authors transform the ear identification problem into a binary classification problem by composing pairwise samples and solve it with a pairwise support vector machine (SVM). Experiments are conducted on four public databases: USTB I, USTB II, IIT Delhi I, and IIT Delhi II. The proposed method achieves a promising recognition rate and performs well compared with state-of-the-art methods.
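The pairwise reformulation mentioned above, which turns identification with few samples per subject into binary classification, can be sketched as follows, assuming feature vectors are plain lists. The element-wise absolute-difference pairing is one common choice, not necessarily the paper's.

```python
from itertools import combinations

def compose_pairs(samples):
    """Turn labelled feature vectors into pairwise samples for a binary
    classifier: each pair becomes (element-wise |f_i - f_j|, same-subject label).
    `samples` is a list of (subject_label, feature_vector) tuples."""
    pairs = []
    for (label_i, f_i), (label_j, f_j) in combinations(samples, 2):
        diff = [abs(a - b) for a, b in zip(f_i, f_j)]
        pairs.append((diff, 1 if label_i == label_j else 0))
    return pairs
```

The resulting (difference vector, label) pairs can be fed directly to any binary classifier such as an SVM.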
- Author(s): Johannes Kotzerke ; Stephen A. Davis ; Jodie McVernon ; Kathy J. Horadam
- Source: IET Biometrics, Volume 7, Issue 6, pp. 567–572
- DOI: 10.1049/iet-bmt.2017.0282
- Type: Article
The pressing infant biometric problem is to find a biometric means to identify infants cheaply, reliably, and automatically. Physical traits of infants are tiny and delicate and grow rapidly. The authors focus on a novel area of friction-ridge skin as a potential answer: the ball under the big toe. The ballprint is readily accessible, with more features and larger ridges than a fingerprint. The authors followed 54 newborns for two years, capturing their ballprints with an adult fingerprint scanner within 3 days of birth, at 2 months, at 6 months, and at 2 years. They show that the growth of the ballprint is isotropic rather than affine during infancy. The isotropic growth rate from birth can be measured by the change in inter-ridge spacing, which precisely mirrors the change in physical length from birth recorded by the World Health Organisation for large, diverse infant populations. From 2 months of age, by using isotropic scaling to compensate for growth, the authors successfully matched good-quality images with 0% equal error rate using existing adult fingerprint technology, even for captures 22 months apart. These findings flag the value of ballprints as a practical means of infant identification, by themselves or together or sequentially with other biometrics.
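Compensating for isotropic growth before matching amounts to rescaling ridge-feature coordinates by the ratio of inter-ridge spacings at enrolment and at capture time; a minimal sketch (the function name is hypothetical, not the authors' pipeline):

```python
def rescale_print(points, spacing_enrol, spacing_now):
    """Compensate isotropic growth: scale feature coordinates by the ratio of
    inter-ridge spacing at enrolment to inter-ridge spacing at capture time.
    If ridges have spread apart since enrolment, the scale factor shrinks the
    newly captured print back toward enrolment size."""
    s = spacing_enrol / spacing_now
    return [(x * s, y * s) for x, y in points]
```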
- Author(s): Hossein Soleimani and Mohsen Ahmadi
- Source: IET Biometrics, Volume 7, Issue 6, pp. 573–580
- DOI: 10.1049/iet-bmt.2017.0128
- Type: Article
Use of the palmprint in recognition systems has received much interest during the last two decades. Some of these systems are based on first-level features, such as the lines and creases in palmprint images, while others use second-level features, such as minutiae, which are more reliable than the first group. Owing to the large number of minutiae in a palmprint (∼1000), the matching process is time-consuming. In this study, a new minutia-based matching strategy is proposed to make the matching process faster and more efficient. First, an orientation-field estimation algorithm based on region growing is proposed, which emphasises selecting seed points of higher quality. Second, the estimated orientation field is used to align palmprint images to the same coordinate system, resulting in fewer computations during minutia matching. Finally, a new minutia descriptor based on the orientation field is designed to distinguish minutiae with different local orientation structures. This descriptor helps to find two mated minutiae much faster, speeding up the matching process. The proposed palmprint matching algorithm has been evaluated on the THUPALMLAB database, and the results show its superiority over most state-of-the-art algorithms.
- Author(s): Yiding Wang ; Chenyan Yang ; Lik-Kwan Shark
- Source: IET Biometrics, Volume 7, Issue 6, pp. 581–588
- DOI: 10.1049/iet-bmt.2017.0052
- Type: Article
When adopting an image-based biometric system, an important factor for consideration is its potential recognition capacity, since this not only defines the number of individuals likely to be identifiable but also serves as a useful figure of merit for performance. Based on the block transform coding commonly used for image compression, this study presents a method for coarse estimation of the potential recognition capacity of texture-based biometrics. Essentially, each image block is treated as a constituent biometric component, and the image texture contained in each block is binary coded to represent the corresponding texture class. The statistical variability among the binary values assigned to corresponding blocks is then exploited to estimate the potential recognition capacity. In particular, methodologies are proposed to determine an appropriate image partition based on the separation between texture classes, and the informativeness of an image block based on statistical randomness. By applying the proposed method to a commercial fingerprint system and a bespoke hand vein system, the potential recognition capacity is estimated to be around 10^36 for a fingerprint area of 25 mm², in good agreement with previously reported estimates, and around 10^15 for a hand vein area of 2268 mm², which has not been reported before.
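A toy version of the capacity estimate sketched above: if each block's binary texture code can take some number of distinct classes across a population, and blocks are assumed statistically independent, the per-block class counts multiply. This is an assumption-laden simplification of the paper's statistical analysis, with a hypothetical function name.

```python
from math import log10

def capacity_estimate(block_codes):
    """`block_codes[i]` holds the binary texture codes observed for block i
    across a population. A coarse capacity estimate multiplies the number of
    distinct classes per block, assuming independent blocks; the result is
    returned as log10(capacity) to avoid huge integers."""
    log_cap = 0.0
    for codes in block_codes:
        classes = len(set(codes))
        if classes > 1:          # uninformative blocks contribute nothing
            log_cap += log10(classes)
    return log_cap
```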
- Author(s): Emad Taha Khalaf ; Muamer N. Mohammad ; Kohbalan Moorthy
- Source: IET Biometrics, Volume 7, Issue 6, pp. 589–597
- DOI: 10.1049/iet-bmt.2017.0130
- Type: Article
Explosive growth in the volume of stored biometric data has made classification and indexing important operations in image database systems. Consequently, researchers have focused on finding suitable image features that can be used as indexes. Stored templates have to be classified and indexed based on these extracted features in a manner that enables access to and retrieval of the data by efficient search processes. This paper proposes a method that extracts the most relevant features of iris images in order to minimise the indexing time and the search area of the biometric database. The proposed method combines three transformation methods, DCT, DWT, and SVD, to analyse iris images and extract their local features. Further, the scalable K-means++ algorithm is used for partitioning and classification, and an efficient parallel technique that divides the feature groups to form two B-trees based on index keys is applied for search and retrieval. Moreover, search within a group is achieved using a proposed half-search algorithm. Experimental results on three different publicly available iris databases indicate that the proposed method yields a significant performance improvement in terms of bin miss rate and penetration rate compared with conventional methods.
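The paper's half-search algorithm is not specified in this abstract; as a stand-in, retrieval by index key inside a sorted group can be sketched with standard bisection, which likewise halves the search interval at each step. The function name is hypothetical.

```python
from bisect import bisect_left

def search_group(index_keys, templates, query_key):
    """Retrieve the template whose index key equals `query_key` from a group
    whose keys are kept sorted (standard bisection; a stand-in for the
    paper's half-search algorithm). Returns None on a bin miss."""
    i = bisect_left(index_keys, query_key)
    if i < len(index_keys) and index_keys[i] == query_key:
        return templates[i]
    return None
```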
- Author(s): Samik Banerjee and Sukhendu Das
- Source: IET Biometrics, Volume 7, Issue 6, pp. 598–605
- DOI: 10.1049/iet-bmt.2017.0265
- Type: Article
Facial make-up changes the appearance of a person and significantly degrades the performance of automated face verification (FV) systems. Here, the authors propose an end-to-end siamese convolutional neural network (SCNN) that simultaneously replicates the facial make-up of a subject from a target image (under facial make-up) onto a query face image and verifies the identity of the query face sample either with or without make-up. The SCNN model is designed using loss functions that deal with the variations due to make-up, and the proposed architecture can transfer the make-up to appropriate locations of the face without any human intervention. Rigorous experiments on four benchmark facial make-up datasets reveal the efficiency of the proposed model. Ablation studies show an improvement of 4% in genuine acceptance rate at 0.1% false acceptance rate and a 42% reduction in equal error rate for FV on the YouTube Make-up dataset, and a 10% reduction on the Virtual Make-up dataset, compared to the nearest state-of-the-art method. For the transfer of make-up, the similarity measures also show the effectiveness of the method, with the peak signal-to-noise ratio and structural similarity values showing an improvement of ∼20–24 and ∼29–32%, respectively, compared to a recent state-of-the-art technique.
- Author(s): Pinar Santemiz ; Luuk J. Spreeuwers ; Raymond N.J. Veldhuis
- Source: IET Biometrics, Volume 7, Issue 6, pp. 606–614
- DOI: 10.1049/iet-bmt.2017.0203
- Type: Article
Face recognition from side-view positions is an essential task for recognition systems in real-world scenarios. Most existing face recognition methods rely on alignment of face images into some canonical form. However, alignment of side-view faces can be challenging due to the lack of symmetry and the small number of reliable reference points. To the best of the authors' knowledge, only a few existing methods deal with video-based face recognition from side-view images, and not many databases include sufficient video footage to study this task. Here, the authors propose an automatic side-view face recognition system designed for home safety applications. They first contribute a newly collected video face database, named UT-DOOR, in which 98 subjects were recorded with four cameras attached to doorposts as they passed through doors. Secondly, they propose a face recognition system that automatically detects and recognises faces using side-view images in videos. One attractive property of this system is the use of cameras with a limited view angle to preserve the privacy of the people recorded. They review several databases and test their system on both the CMU Multi-PIE database and the UT-DOOR database for comparison. Experimental results show that the system can successfully recognise side-view faces from videos.
- Author(s): Abhijit Das ; Hemmaphan Suwanwiwat ; Miguel A. Ferrer ; Umapada Pal ; Michael Blumenstein
- Source: IET Biometrics, Volume 7, Issue 6, pp. 615–627
- DOI: 10.1049/iet-bmt.2017.0218
- Type: Article
This study presents a comprehensive investigation of automatic signature verification (ASV) for off-line Thai signatures, carried out to characterise the challenges in Thai ASV and to baseline its performance using standard features, namely local binary patterns (LBP), local directional patterns (LDP), local binary and directional patterns combined (LBDP), and a baseline shape/feature-based hidden Markov model. As no publicly available Thai signature database was found in the literature, the authors have developed a database of real-world signatures from Thailand, consisting of 5,400 signatures from 100 signers, and have identified and characterised the latent challenges of Thai signature-based ASV. Thai signatures can be bi-script in nature: a single signature can contain only Thai characters, only Roman characters, or both, which poses an interesting challenge for script-independent signature verification. Therefore, along with the baseline experiments, the influence and nature of bi-script ASV were also investigated. The equal error rates and Bhattacharyya distance scores achieved in the experiments indicate that Thai signature verification is a script-independent problem. Open research directions on this subject are also addressed.
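The local binary pattern baseline named in this abstract can be illustrated with the basic 3×3 operator; this is the textbook LBP, not necessarily the exact variant or parameters used in the paper.

```python
def lbp_code(gray, y, x):
    """Basic 3x3 local binary pattern: threshold the 8 neighbours at the
    centre pixel's value, clockwise from the top-left, to get one byte."""
    c = gray[y][x]
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dy, dx) in enumerate(offs):
        if gray[y + dy][x + dx] >= c:
            code |= 1 << bit
    return code

def lbp_histogram(gray):
    """256-bin histogram of LBP codes over the interior pixels; this
    histogram is the texture feature fed to a classifier."""
    hist = [0] * 256
    for y in range(1, len(gray) - 1):
        for x in range(1, len(gray[0]) - 1):
            hist[lbp_code(gray, y, x)] += 1
    return hist
```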
Learning pairwise SVM on hierarchical deep features for ear recognition
Steps to solving the infant biometric problem with ridge-based biometrics
Fast and efficient minutia-based palmprint matching
Method for estimating potential recognition capacity of texture-based biometrics
Robust partitioning and indexing for iris biometric database based on local features
MakeUpMirror: mirroring make-ups and verifying faces post make-up
Automatic face recognition for home safety using video-based side-view face images
Thai automatic signature verification system employing textural features
- Author(s): Guangyi Chen ; Tien D. Bui ; Adam Krzyżak
- Source: IET Biometrics, Volume 7, Issue 6, pp. 628–635
- DOI: 10.1049/iet-bmt.2016.0195
- Type: Article
In this study, the authors develop a new algorithm for face recognition under varying lighting conditions. Their method first applies low-pass and high-pass filtering to the face image and then takes the ratio of the two filtered images. The authors take the arctangent of this ratio and use the resulting features to classify an unknown face image. In addition, their method works with any combination of low-pass and high-pass filters. The authors studied two sets of low-pass and high-pass filters in their experiments, and on the CMU-PIE and Extended Yale-B databases their results in noisy environments are better than gradient faces, Weber faces, and self-quotient images (SQIs), whether or not denoising is applied to the noisy face images. Nevertheless, the SQI is best for the CAS-PEAL face database in the authors' experiments. The SQI uses a convolution with a low-pass filter, and its implementation may have chosen a low-pass filter better suited to the CAS-PEAL face database than to the Yale and CMU-PIE face databases; this may be the main reason why the SQI outperformed the proposed method on CAS-PEAL. Nevertheless, the SQI is many times slower than the proposed method.
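The arctangent-of-ratio feature described above can be sketched in 1D: low-pass by a moving average, high-pass as the residual, and the feature as arctan(high/low). A uniform illumination gain then cancels in the ratio. The box filter here is an illustrative choice, not the filters studied in the paper.

```python
import math

def box_lowpass(signal, radius=1):
    """Moving-average low-pass filter with edge clamping."""
    n = len(signal)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def arctan_ratio_features(signal, eps=1e-6):
    """Illumination-insensitive features: arctangent of the high-pass to
    low-pass ratio at each sample (a 1D sketch of the paper's idea).
    `eps` guards against division by zero in dark regions."""
    low = box_lowpass(signal)
    return [math.atan((s - l) / (l + eps)) for s, l in zip(signal, low)]
```

Scaling the whole signal by a constant gain leaves the features essentially unchanged, which is what makes the representation robust to global illumination change.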
Filter-based face recognition under varying illumination