IET Biometrics
Volume 7, Issue 3, May 2018
Guest Editorial: Unconstrained Ear Recognition
- Source: IET Biometrics, Volume 7, Issue 3, p. 173–174
- DOI: 10.1049/iet-bmt.2018.0011
- Type: Article
Convolutional encoder–decoder networks for pixel-wise ear detection and segmentation
- Author(s): Žiga Emeršič ; Luka L. Gabriel ; Vitomir Štruc ; Peter Peer
- Source: IET Biometrics, Volume 7, Issue 3, p. 175–184
- DOI: 10.1049/iet-bmt.2017.0240
- Type: Article
Object detection and segmentation represent the basis for many tasks in computer and machine vision. In biometric recognition systems, detection of the region of interest (ROI) is one of the most crucial steps in the processing pipeline, significantly impacting the performance of the entire recognition system. Existing approaches to ear detection are commonly susceptible to severe occlusions, ear accessories or variable illumination conditions, and their performance often deteriorates when applied to ear images captured in unconstrained settings. To address these shortcomings, we present a novel ear detection technique based on convolutional encoder–decoder networks (CEDs). We formulate ear detection as a two-class segmentation problem and design and train a CED network architecture to distinguish between image pixels belonging to the ear and non-ear classes. Unlike competing techniques, our approach does not simply return a bounding box around the detected ear, but provides detailed, pixel-wise information about the location of the ears in the image. Experiments on a dataset gathered from the web (a.k.a. "in the wild") show that the proposed technique ensures good detection results in the presence of various covariate factors and significantly outperforms competing methods from the literature.
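A pixel-wise mask is strictly richer than a bounding box: a box can always be recovered from the mask, but not the other way around. A minimal numpy sketch of that post-processing step, assuming a hypothetical per-pixel ear-probability map `prob_map` as the network output (this is not the authors' code, just an illustration of the mask-versus-box distinction):

```python
import numpy as np

def mask_to_bbox(prob_map, threshold=0.5):
    """Binarise a per-pixel ear-probability map and derive a bounding box.

    Returns (mask, bbox), where bbox is (row_min, col_min, row_max, col_max),
    or (mask, None) when no pixel exceeds the threshold.
    """
    mask = prob_map >= threshold
    rows, cols = np.nonzero(mask)
    if rows.size == 0:
        return mask, None
    return mask, (rows.min(), cols.min(), rows.max(), cols.max())

# Toy example: a 6x6 "probability map" with a confident 2x3 ear region.
prob = np.zeros((6, 6))
prob[2:4, 1:4] = 0.9
mask, bbox = mask_to_bbox(prob)
print(bbox)  # (2, 1, 3, 3)
```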
Ear verification under uncontrolled conditions with convolutional neural networks
- Author(s): Yi Zhang ; Zhichun Mu ; Li Yuan ; Chen Yu
- Source: IET Biometrics, Volume 7, Issue 3, p. 185–198
- DOI: 10.1049/iet-bmt.2017.0176
- Type: Article
The capabilities of biometric systems have recently made extraordinary leaps with the emergence of deep learning. However, due to a lack of sufficient training data, applications of deep neural networks in the ear recognition field have run into a bottleneck. Moreover, the effect of fine-tuning from pre-trained models is far less than expected due to the diversity among different tasks. Therefore, the authors propose a large-scale ear database and explore a robust convolutional neural network (CNN) architecture for ear feature representation. The images in this USTB-Helloear database were taken under uncontrolled conditions with illumination and pose variation and different levels of ear occlusion. They then fine-tuned and modified several deep models on the proposed database through ear verification experiments. First, they replaced the last pooling layers with spatial pyramid pooling layers to fit arbitrary data sizes and obtain multi-level features. In the training phase, the CNNs were trained under the supervision of both the softmax loss and the centre loss to obtain more compact and discriminative features for identifying unseen ears. Finally, three CNNs operating on different scales of ear images were assembled into a multi-scale ear representation for ear verification. The experimental results demonstrate the effectiveness of the proposed modified CNN deep model.
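Spatial pyramid pooling is the mechanism that lets a network accept arbitrary input sizes: each pyramid level splits the feature map into a fixed grid and pools every cell, so the output length depends only on the grid sizes and channel count. A minimal numpy sketch of the pooling itself (a generic SPP layer, not the authors' network):

```python
import numpy as np

def spatial_pyramid_pool(feat, levels=(1, 2, 4)):
    """Max-pool a C x H x W feature map over a pyramid of grids.

    Level l splits the map into an l x l grid and max-pools every cell, so
    the output length C * sum(l*l) is fixed regardless of H and W.
    """
    c, h, w = feat.shape
    pooled = []
    for l in levels:
        # Integer cell boundaries; handles H, W not divisible by l.
        hs = np.linspace(0, h, l + 1).astype(int)
        ws = np.linspace(0, w, l + 1).astype(int)
        for i in range(l):
            for j in range(l):
                cell = feat[:, hs[i]:hs[i + 1], ws[j]:ws[j + 1]]
                pooled.append(cell.max(axis=(1, 2)))
    return np.concatenate(pooled)

# Two inputs of different spatial size yield equal-length descriptors:
# 8 channels * (1 + 4 + 16) cells = 168 values each.
a = spatial_pyramid_pool(np.random.rand(8, 13, 17))
b = spatial_pyramid_pool(np.random.rand(8, 32, 32))
print(a.shape, b.shape)  # (168,) (168,)
```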
Domain adaptation for ear recognition using deep convolutional neural networks
- Author(s): Fevziye Irem Eyiokur ; Dogucan Yaman ; Hazım Kemal Ekenel
- Source: IET Biometrics, Volume 7, Issue 3, p. 199–206
- DOI: 10.1049/iet-bmt.2017.0209
- Type: Article
Here, the authors have extensively investigated the unconstrained ear recognition problem. They first show the importance of domain adaptation when deep convolutional neural network (CNN) models are used for ear recognition. To enable domain adaptation, they collected a new ear data set using the Multi-PIE face data set, which they named the Multi-PIE ear data set. They analysed in depth the effect of ear image quality, for example illumination and aspect ratio, on classification performance. Finally, they addressed the problem of data set bias in the ear recognition field. Experiments on the UERC data set show that domain adaptation leads to a significant performance improvement: for example, when the VGG-16 model is used with domain adaptation, an absolute increase of around 10% is achieved. Combining different deep CNN models further improved the accuracy by 4%. In the experiments conducted to examine data set bias, given an ear image, the authors were able to classify the data set it came from with 99.71% accuracy, which indicates a strong bias among ear recognition data sets.
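The 99.71% figure comes from a "name that dataset" experiment: if a classifier can tell which dataset an image belongs to, the datasets differ systematically in something other than identity (capture conditions, cropping, compression). A toy numpy illustration of the idea, using a nearest-centroid classifier on synthetic global image statistics (the authors' actual bias experiment used a CNN; everything below is an illustrative stand-in):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "datasets": each has a distinct global statistic (e.g. mean
# brightness), mimicking capture-condition bias, not identity information.
def sample(mean, n=50):
    return rng.normal(mean, 1.0, size=(n, 4))  # 4 toy image statistics

train = {"A": sample(0.0), "B": sample(3.0), "C": sample(6.0)}
centroids = {name: x.mean(axis=0) for name, x in train.items()}

def which_dataset(feat):
    """Nearest-centroid guess of the dataset an image came from."""
    return min(centroids, key=lambda n: np.linalg.norm(feat - centroids[n]))

test_feats = sample(3.0, n=20)  # new images drawn from dataset "B"
acc = np.mean([which_dataset(f) == "B" for f in test_feats])
print(acc)
```

High accuracy here signals bias: the classifier never saw identity labels, only dataset-level statistics.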
Unconstrained ear recognition using deep neural networks
- Author(s): Samuel Dodge ; Jinane Mounsef ; Lina Karam
- Source: IET Biometrics, Volume 7, Issue 3, p. 207–214
- DOI: 10.1049/iet-bmt.2017.0208
- Type: Article
The authors perform unconstrained ear recognition using transfer learning with deep neural networks (DNNs). First, they show how existing DNNs can be used as feature extractors, with the extracted features fed to a shallow classifier to perform ear recognition. Performance can be improved by augmenting the training dataset with small image transformations. Next, they compare the performance of the feature-extraction models with fine-tuned networks; however, because the datasets are limited in size, a fine-tuned network tends to over-fit. They propose a deep learning-based averaging ensemble to reduce the effect of over-fitting. Performance results are provided on the unconstrained ear recognition datasets AWE and CVLE, as well as a combined AWE + CVLE dataset. They show that their ensemble achieves the best recognition performance on these datasets compared with DNN feature-extraction-based models and single fine-tuned models.
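An averaging ensemble combats over-fitting because the individual fine-tuned models over-fit in different ways, and averaging their class-probability outputs smooths out the idiosyncratic errors. A minimal numpy sketch of the combination rule (the models themselves are stand-ins; this is the generic technique, not the authors' exact architecture):

```python
import numpy as np

def ensemble_average(prob_list):
    """Average per-model class-probability matrices (n_samples x n_classes)."""
    return np.mean(prob_list, axis=0)

# Three hypothetical fine-tuned models; they disagree on sample 0, and
# averaging lets the majority tendency decide.
p1 = np.array([[0.6, 0.4], [0.2, 0.8]])
p2 = np.array([[0.3, 0.7], [0.1, 0.9]])
p3 = np.array([[0.7, 0.3], [0.3, 0.7]])
avg = ensemble_average([p1, p2, p3])
pred = avg.argmax(axis=1)
print(pred)  # [0 1]
```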
Employing fusion of learned and handcrafted features for unconstrained ear recognition
- Author(s): Earnest E. Hansley ; Maurício Pamplona Segundo ; Sudeep Sarkar
- Source: IET Biometrics, Volume 7, Issue 3, p. 215–223
- DOI: 10.1049/iet-bmt.2017.0210
- Type: Article
The authors present an unconstrained ear recognition framework that outperforms state-of-the-art systems on different publicly available image databases. To this end, they developed convolutional neural network (CNN)-based solutions for ear normalisation and description, used well-known handcrafted descriptors, and fused learned and handcrafted features to improve recognition. They designed a two-stage landmark detector that works successfully under untrained scenarios and used its output to perform a geometric image normalisation that boosted the performance of all evaluated descriptors. The proposed CNN descriptor outperformed other CNN-based works in the literature, especially in more challenging scenarios. The fusion of learned and handcrafted matchers proved complementary and achieved the best performance in all experiments. The obtained results surpassed all previously reported results for the Unconstrained Ear Recognition Challenge, which currently comprises the most difficult database available.
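Score-level fusion of a learned and a handcrafted matcher is commonly done by normalising each matcher's scores to a common range and taking a weighted sum. A minimal numpy sketch of that generic recipe with min-max normalisation (the paper's exact fusion rule and weights are not specified here; this illustrates the principle only):

```python
import numpy as np

def minmax(scores):
    """Min-max normalise similarity scores to [0, 1]."""
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min())

def fuse(cnn_scores, hand_scores, w=0.5):
    """Weighted sum of normalised matcher scores; w is the CNN weight."""
    return w * minmax(cnn_scores) + (1 - w) * minmax(hand_scores)

# Similarities of one probe against a gallery of four candidates.
cnn  = [0.9, 0.2, 0.6, 0.1]   # learned (CNN) matcher
hand = [0.7, 0.3, 0.9, 0.2]   # handcrafted matcher
fused = fuse(cnn, hand)
print(fused.argmax())  # 0 -- the candidate both matchers rank highly
```

Complementarity shows up exactly here: when the two matchers make different mistakes, the fused score favours candidates that both rank well.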
Ear recognition in a light field imaging framework: a new perspective
- Author(s): Alireza Sepas-Moghaddam ; Fernando Pereira ; Paulo Lobato Correia
- Source: IET Biometrics, Volume 7, Issue 3, p. 224–231
- DOI: 10.1049/iet-bmt.2017.0204
- Type: Article
Ear recognition is an emerging research area in image-based biometrics. The commercial availability of lenslet light field cameras able to capture full spatio-angular information has brought momentum to biometric and forensic research exploiting this new type of imaging sensor. This study is the first to consider the use of light field cameras for ear recognition, proposing both an appropriate content database and an ear recognition solution. The novel Lenslet Light Field Ear DataBase (LLFEDB) includes 536 light field images of four different poses from 67 subjects, captured with a Lytro ILLUM lenslet light field camera. The LLFEDB includes critical cases such as ear images partly occluded by piercings, earrings, hair, and combinations of occlusions. The proposed light field-based ear recognition solution exploits the richer spatio-angular information available in light field images. A comparative performance evaluation on the LLFEDB, focusing on the most recent light field and non-light field based descriptors for ear recognition, shows very promising performance of the proposed descriptor, which outperforms all the assessed descriptors.
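A lenslet camera's "spatio-angular" information can be rendered as a grid of sub-aperture views of the same scene from slightly shifted viewpoints. A common baseline for exploiting it (not the paper's descriptor, which is its own contribution) is to compute a descriptor per angular view and fuse across views. A toy numpy sketch with an intensity histogram standing in for the per-view descriptor:

```python
import numpy as np

rng = np.random.default_rng(1)

def view_descriptor(view, bins=8):
    """Toy per-view descriptor: a normalised intensity histogram
    (stand-in for the texture descriptors used in real systems)."""
    h, _ = np.histogram(view, bins=bins, range=(0.0, 1.0))
    return h / h.sum()

def light_field_descriptor(views):
    """Fuse per-view descriptors across the angular dimension by averaging."""
    return np.mean([view_descriptor(v) for v in views], axis=0)

# A 3x3 grid of sub-aperture views of the same (random) scene.
views = [rng.random((16, 16)) for _ in range(9)]
d = light_field_descriptor(views)
print(d.shape)  # (8,)
```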
3D ear recognition using global and local features
- Author(s): Iyyakutti Iyappan Ganapathi and Surya Prakash
- Source: IET Biometrics, Volume 7, Issue 3, p. 232–241
- DOI: 10.1049/iet-bmt.2017.0212
- Type: Article
This paper proposes a method for human recognition based on 3D ears and makes two important contributions: first, a global 3D descriptor, and second, a strategy to combine local and global descriptors for superior recognition performance. The proposed global descriptor is computed as follows. Spheres of different radii are centred at each point p in the point cloud, and a histogram is computed from the number of points falling in the annular regions between the concentric spheres. Using the histograms of point p and its neighbours in a ring of fixed radius, an encoded value is obtained and used to construct a coded image. This image is divided into blocks, which are subsequently binned into histograms, and the global descriptor is computed by concatenating all the histograms. The combined representation of four popular local 3D descriptors and the proposed global descriptor yields superior results compared with using local or global descriptors alone. The proposed technique has been validated on the University of Notre Dame (UND) collection-J2 database and on an in-house database, achieving rank-1 recognition rates of 98.69% and 98.90%, respectively.
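The core of the global descriptor is the shell-counting step: for each point, count how many neighbours fall between consecutive concentric spheres. A minimal numpy sketch of just that step, assuming hypothetical radii; the published method goes further (ring-neighbour encoding, coded-image blocks), which this sketch omits:

```python
import numpy as np

def shell_histogram(cloud, p, radii):
    """Count the points of `cloud` (including p itself) falling in the
    annular shells between consecutive spheres of the given ascending radii,
    all centred at point p."""
    d = np.linalg.norm(cloud - p, axis=1)
    hist, _ = np.histogram(d, bins=[0.0] + list(radii))
    return hist

def global_descriptor(cloud, radii=(0.5, 1.0, 1.5)):
    """Concatenate the per-point shell histograms into one global vector."""
    return np.concatenate([shell_histogram(cloud, p, radii) for p in cloud])

# Four collinear points; each point sees all four within the largest radius.
cloud = np.array([[0.0, 0, 0], [0.4, 0, 0], [0.9, 0, 0], [1.4, 0, 0]])
d = global_descriptor(cloud)
print(d.shape)  # (12,) -- 4 points x 3 shells
```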
Human recognition using transient auditory evoked potentials: a preliminary study
- Author(s): Sherif Nagib Abbas Seha and Dimitrios Hatzinakos
- Source: IET Biometrics, Volume 7, Issue 3, p. 242–250
- DOI: 10.1049/iet-bmt.2017.0185
- Type: Article
This study presents a new technique for human recognition using transient auditory evoked potentials (AEPs). AEPs are electrical potentials triggered by stimulating the ears with an auditory stimulus, reflecting the neural response from the cochlea to the auditory cortex. These signals offer some advantages over conventional biometric traits, as they cannot be easily forged or stolen like fingerprints or faces. Moreover, they are cancellable and can be changed by modifying the auditory stimulus, which allows system reuse even if the registered signal is breached. To investigate the biometric potential of this signal, a database of ten subjects was collected in which transient AEP signals were recorded by stimulating the left and right ears separately. Machine learning techniques were employed to extract unique features for each subject using a 1D convolutional neural network. The proposed system was evaluated on single-session and two-session recordings, and a fusion of left- and right-ear stimulated AEP signals was adopted for performance improvement. Using single-session and two-session recordings, the proposed system achieved a correct recognition rate of over 95% and an equal error rate below 7%. The achieved results show that AEPs carry subject-discriminative features, supporting the possibility of employing the AEP signal as a biometric trait.
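The equal error rate (EER) quoted above is the standard verification metric: the operating point where the false rejection rate equals the false acceptance rate. A minimal numpy sketch of how it is computed from genuine and impostor score distributions (toy scores, not the paper's data):

```python
import numpy as np

def eer(genuine, impostor):
    """Equal error rate: the threshold where false rejection rate (FRR)
    and false acceptance rate (FAR) are closest to equal.
    Scores are similarities, so genuine pairs should score high."""
    best_frr, best_far = 1.0, 0.0
    for t in np.sort(np.concatenate([genuine, impostor])):
        frr = np.mean(genuine < t)    # genuine comparisons wrongly rejected
        far = np.mean(impostor >= t)  # impostor comparisons wrongly accepted
        if abs(frr - far) < abs(best_frr - best_far):
            best_frr, best_far = frr, far
    return (best_frr + best_far) / 2

rng = np.random.default_rng(2)
gen = rng.normal(0.8, 0.1, 200)  # toy genuine similarity scores
imp = rng.normal(0.4, 0.1, 200)  # toy impostor similarity scores
print(round(eer(gen, imp), 3))
```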
Construction of a Bayesian decision theory-based secure multimodal fusion framework for soft biometric traits
- Author(s): Debanjan Sadhya and Sanjay Kumar Singh
- Source: IET Biometrics, Volume 7, Issue 3, p. 251–259
- DOI: 10.1049/iet-bmt.2017.0049
- Type: Article
In a soft biometric-based model, multiple soft biometric characteristics are fused with one or more primary biometric traits in a multimodal environment. In this study, the authors review a Bayesian decision theory-based fusion technique and considerably improve its performance by first identifying some of its limitations and subsequently modifying it accordingly. Specifically, they utilise Gaussian functions and a novel dynamic soft biometric weight assignment (DSWA) scheme to achieve these objectives. They tested the modified framework on real-life data, obtaining improved performance over the basic fusion model. They also address some security and privacy concerns associated with such frameworks: although soft biometric characteristics possess much lower uniqueness than a primary trait, they can be exploited by an active adversary to mine sensitive information about an individual. The authors therefore propose a secure fusion technique that performs one-way transformations of the soft biometric characteristics. This secure design was also tested on real-life data, with satisfactory results.
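The general shape of such a fusion is to add weighted soft-biometric log-likelihoods (modelled with Gaussians) to the primary trait's score. A toy numpy sketch of that principle; the trait models, observations, and fixed weights below are all hypothetical illustrations, and the authors' DSWA scheme assigns its weights dynamically rather than as constants:

```python
import numpy as np

def gaussian_pdf(x, mu, sigma):
    """Density of a univariate Gaussian soft-trait model."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def fuse_log_posterior(primary_ll, soft_obs, soft_models, weights):
    """Add weighted soft-biometric log-likelihoods to the primary trait's
    log-likelihood; the weights stand in for a per-trait reliability
    assignment."""
    total = primary_ll
    for x, (mu, sigma), w in zip(soft_obs, soft_models, weights):
        total += w * np.log(gaussian_pdf(x, mu, sigma))
    return total

# Two enrolled subjects modelled by soft traits (height cm, weight kg).
models = {
    "alice": [(165.0, 5.0), (60.0, 6.0)],
    "bob":   [(182.0, 5.0), (85.0, 6.0)],
}
obs = [180.0, 83.0]    # probe's measured soft traits
weights = [0.5, 0.5]   # down-weight the weaker soft evidence
scores = {s: fuse_log_posterior(0.0, obs, m, weights) for s, m in models.items()}
print(max(scores, key=scores.get))  # bob
```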
Bloom filter-based search structures for indexing and retrieving iris-codes
- Author(s): Pawel Drozdowski ; Christian Rathgeb ; Christoph Busch
- Source: IET Biometrics, Volume 7, Issue 3, p. 260–268
- DOI: 10.1049/iet-bmt.2017.0007
- Type: Article
Large-scale biometric deployments are becoming ubiquitous. The computational workload of the conventional retrieval method, requiring 1:N comparisons in identification mode, is impractical for such systems. In recent years, many approaches for efficient biometric identification have been proposed, but their scalability is often questionable. Furthermore, the lack of a unified methodology for reporting biometric workload reduction often makes a direct benchmark or a thorough evaluation of the proposed schemes cumbersome. We present an iris indexing scheme based on Bloom filters and binary search trees. Using a statistical model, the system is shown to be theoretically scalable to arbitrarily many enrollees. We evaluate the system on a database combined from several publicly available datasets, containing a total of 11,936 iris images from 1477 instances. In an open-set identification scenario, the system maintains the biometric performance of an iris-code 1:N baseline – a true positive identification rate of approximately 98% measured at a 0.1% false positive identification rate – at only 10% of the baseline workload. In a proof-of-concept multi-iris indexing experiment, the true positive identification rate is further increased to over 99% without additional workload costs. Lastly, we define several prerequisites for a transparent and comprehensive methodology of disseminating biometric workload reduction results.
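A Bloom filter answers set-membership queries in constant time with no false negatives and a tunable false-positive rate, which is what makes it attractive for pruning 1:N search. A minimal sketch of the data structure itself, with short bit strings standing in for iris-code column blocks (the paper's tree organisation on top of the filters is not shown):

```python
import numpy as np

class BloomFilter:
    """Fixed-size Bloom filter over binary iris-code column blocks.

    Each block (a short bit string) is hashed to `hashes` positions in a
    bit array; membership tests never miss an inserted block."""
    def __init__(self, size=1024, hashes=3):
        self.bits = np.zeros(size, dtype=bool)
        self.size, self.hashes = size, hashes

    def _positions(self, block):
        # k independent positions derived from one hash by salting with i.
        return [hash((i, block)) % self.size for i in range(self.hashes)]

    def add(self, block):
        self.bits[self._positions(block)] = True

    def __contains__(self, block):
        return all(self.bits[p] for p in self._positions(block))

bf = BloomFilter()
enrolled_blocks = ["0110100101", "1110001010", "0001110110"]
for b in enrolled_blocks:
    bf.add(b)
print("0110100101" in bf)  # True -- no false negatives by construction
```

A lookup touches only `hashes` bit positions, independent of how many blocks are enrolled; false positives occur with a probability governed by the fill ratio of the bit array.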
Information set based features for the speed invariant gait recognition
- Author(s): Jeevan Medikonda ; Hanmandlu Madasu ; Panigrahi Bijaya Ketan
- Source: IET Biometrics, Volume 7, Issue 3, p. 269–277
- DOI: 10.1049/iet-bmt.2016.0136
- Type: Article
A novel speed-invariant gait feature set, called two-fold information set (2FInS) features, that captures both spatial and temporal variations in a gait cycle is proposed in this study. These features are obtained by first applying histogram of oriented gradients (HOG) descriptors to the gait images and then representing the underlying possibilistic uncertainty using the Hanman–Jeevan entropy function. The 2FInS features are validated on three databases (CASIA-C, OU-ISIR Treadmill-A and OU-ISIR Treadmill-D) using a Procrustes distance-based classifier. Because they account for both spatial and temporal information distributed throughout a gait cycle, the results obtained are superior to those of existing methods.
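The Procrustes distance used by the classifier measures shape difference after removing translation, scale, and rotation, which is what makes it a natural fit for configurations that vary with walking speed. A minimal numpy sketch of a nearest-neighbour classifier built on it (generic full Procrustes distance on 2D landmark sets; the 2FInS feature extraction itself is not reproduced here):

```python
import numpy as np

def procrustes_distance(a, b):
    """Full Procrustes distance between two landmark sets (n x 2):
    residual after centring, unit-norm scaling, and optimal rotation."""
    a = a - a.mean(axis=0)
    b = b - b.mean(axis=0)
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    # Optimal alignment via SVD of the cross-covariance matrix.
    _, s, _ = np.linalg.svd(a.T @ b)
    return np.sqrt(max(0.0, 1.0 - s.sum() ** 2))

def classify(probe, gallery):
    """Nearest-neighbour classification by Procrustes distance."""
    return min(gallery, key=lambda label: procrustes_distance(probe, gallery[label]))

square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
line = np.array([[0, 0], [1, 0], [2, 0], [3, 0]], float)
# Probe: the square rotated 45 degrees and doubled in size -- shape-identical.
theta = np.pi / 4
rot = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
probe = 2.0 * square @ rot
print(classify(probe, {"square": square, "line": line}))  # square
```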
Key binding biometrics-based remote user authentication scheme using smart cards
- Author(s): Alawi A Al-Saggaf
- Source: IET Biometrics, Volume 7, Issue 3, p. 278–284
- DOI: 10.1049/iet-bmt.2016.0146
- Type: Article
Remote user authentication schemes using smart cards provide solutions for securing user authentication. Such schemes rely on passwords, biometrics, or both, and fall apart if the password is not kept secret. Additionally, passwords can be stolen, lost, or forgotten, and a stored biometric template is highly susceptible to a number of security and privacy attacks. This study presents a new biometric-based remote user authentication scheme using smart cards. The cryptographic key is concealed with a biometric template and discarded in the registration phase. During the login phase, the key is regenerated using a query biometric template in such a way that it cannot be retrieved without a successful biometric verification. The strength of the proposed scheme is that it resists stolen biometric template, privileged insider, and smart card attacks, and it achieves desirable properties: it works without a human-memorised password and provides renewability for the protected biometric template. Additionally, the scheme combines the good features of previous works to capitalise on their sound security and design features. Security and performance analysis of the proposed scheme shows its advantage over existing schemes.
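The general key-binding idea can be illustrated with a fuzzy-commitment-style construction: encode the key with an error-correcting code, XOR it with the enrolment template, store only the XOR "helper data", and recover the key at login from a query template that differs in a few bits. A toy numpy sketch with a simple repetition code; this is the generic Juels–Wattenberg pattern, not the author's exact scheme:

```python
import numpy as np

def release(helper, query, repeat=5):
    """Login phase: XOR the stored helper data with the query template,
    then correct residual bit errors by majority vote over the
    repetition-coded groups."""
    noisy = helper ^ query               # = encoded key + query's bit errors
    votes = noisy.reshape(-1, repeat).sum(axis=1)
    return (votes > repeat // 2).astype(np.uint8)

rng = np.random.default_rng(3)
key = rng.integers(0, 2, 8, dtype=np.uint8)        # secret key bits
encoded = np.repeat(key, 5)                        # repetition-code ECC
template = rng.integers(0, 2, 40, dtype=np.uint8)  # enrolment template
helper = encoded ^ template   # stored on the card; key and template discarded

query = template.copy()       # genuine login: template with a few bit errors
query[np.array([0, 7, 13, 22])] ^= 1  # at most one error per 5-bit group
recovered = release(helper, query)
print(np.array_equal(recovered, key))  # True
```

An impostor template, differing in many bits, leaves too many errors for the code to correct, so the key is released only on a successful biometric match.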