IET Biometrics
Volume 7, Issue 5, September 2018
- Author(s): Mohamed Cheniti ; Naceur-Eddine Boukezzoula ; Zahid Akhtar
- Source: IET Biometrics, Volume 7, Issue 5, p. 391 –395
- DOI: 10.1049/iet-bmt.2017.0015
- Type: Article
Multimodal biometric systems, which combine information from multiple biometric sources, have been shown to improve identity recognition performance by overcoming the weaknesses and inherent limitations of unimodal systems. This study presents a new framework for score-level fusion based on symmetric sums (S-sums), which are generated via triangular norms. The proposed framework has been tested on two publicly available benchmark databases, namely the two partitions of NIST-BSSR1: the NIST-multimodal database and the NIST-fingerprint database. The experimental results show that the proposed method outperforms existing approaches on both databases.
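Although the abstract does not specify which triangular norms the paper uses, the symmetric-sum construction it names is commonly defined from a t-norm g as σ(x, y) = g(x, y) / (g(x, y) + g(1 − x, 1 − y)). A minimal sketch with the product t-norm (an illustrative choice, not necessarily the paper's):

```python
def product_tnorm(x, y):
    """Product t-norm: T(x, y) = x * y."""
    return x * y

def symmetric_sum(x, y, tnorm=product_tnorm):
    """Symmetric sum built from a t-norm g:
    sigma(x, y) = g(x, y) / (g(x, y) + g(1 - x, 1 - y)).
    Matcher scores x, y are assumed normalised to [0, 1]."""
    num = tnorm(x, y)
    den = num + tnorm(1.0 - x, 1.0 - y)
    return 0.5 if den == 0 else num / den

# Fusing two matcher scores: two agreeing high scores reinforce each other,
# so the fused score exceeds either input.
fused = symmetric_sum(0.8, 0.7)
```

With the product t-norm this fusion is self-dual (σ(0.5, 0.5) = 0.5), which is the property that makes symmetric sums attractive for combining match scores.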
- Author(s): Mohammad Mogharen Askarin ; KokSheik Wong ; Raphael C.-W. Phan
- Source: IET Biometrics, Volume 7, Issue 5, p. 396 –404
- DOI: 10.1049/iet-bmt.2016.0113
- Type: Article
The fingerprint is arguably the most successfully deployed biometric in a broad spectrum of identification and verification applications. While fingerprint matching algorithms are fairly mature, most studies have so far focused on improving matching precision, and some effort has been channelled into combating spoofing attacks on biometric readers through liveness detection. To the best of the authors' knowledge, the feasibility of latent fingerprint planting attacks has not been reported in the literature. In this study, the authors present a low-cost latent fingerprint planting attack whose steps can be performed by an untrained person with no prior knowledge of forensics. Experimental results based on a publicly available database suggest that this approach produces planted latent fingerprints that are indistinguishable from real ones. It is also verified that the planted latent fingerprints can be used to identify their corresponding rolled fingerprints, demonstrating the viability of the proposed attack.
- Author(s): Hossein Zeinali ; Bagher BabaAli ; Hossein Hadian
- Source: IET Biometrics, Volume 7, Issue 5, p. 405 –414
- DOI: 10.1049/iet-bmt.2017.0059
- Type: Article
Signature verification (SV) is a common method for identity verification in banking, where, for security reasons, an accurate method for automatic SV (ASV) is essential. ASV is usually addressed by comparing the test signature with the enrolment signature(s) of the claimed identity, in one of two modes: online or offline. In this study, a new method based on the i-vector is proposed for online SV. In the proposed method, a fixed-length vector, called an i-vector, is extracted from each signature and then used for template making. Several techniques, such as nuisance attribute projection (NAP) and within-class covariance normalisation (WCCN), are also investigated to reduce the intra-class variation in the i-vector space. In the scoring and decision-making stage, the authors also propose applying a two-class support vector machine. Experimental results show the proposed method achieves an 8.75% equal error rate (EER) on the SigWiComp2013 database in the best case. On the SVC2004 database, it achieves a 5% EER, an 11% relative improvement over the best reported result. In addition to its considerable accuracy gain, the method shows a significant improvement in computational cost over the conventional dynamic time warping method.
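WCCN, one of the compensation techniques the abstract names, is a standard trick: learn a linear map from the Cholesky factor of the inverse within-class covariance so that intra-class variation becomes (approximately) white. A generic NumPy sketch of that construction, not the paper's implementation (the toy data and regulariser are illustrative assumptions):

```python
import numpy as np

def wccn_projection(vectors, labels):
    """Within-class covariance normalisation (WCCN): return a matrix B
    (Cholesky factor of the inverse within-class covariance) so that
    projected vectors x' = B.T @ x have near-identity within-class covariance."""
    X = np.asarray(vectors, dtype=float)
    y = np.asarray(labels)
    dim = X.shape[1]
    W = np.zeros((dim, dim))
    classes = np.unique(y)
    for c in classes:
        Xc = X[y == c]
        mu = Xc.mean(axis=0)
        W += (Xc - mu).T @ (Xc - mu) / len(Xc)   # per-class covariance
    W /= len(classes)
    W += 1e-6 * np.eye(dim)                      # regularise for invertibility
    return np.linalg.cholesky(np.linalg.inv(W))  # B with B @ B.T = inv(W)

# Toy i-vector-like data: 4 writers, 10 vectors each, 3 dimensions.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))
y = np.repeat([0, 1, 2, 3], 10)
B = wccn_projection(X, y)
Z = X @ B   # row-wise application of x' = B.T @ x
```

After projection, the average within-class covariance of `Z` is close to the identity, which is exactly what makes cosine or SVM scoring on compensated i-vectors behave well.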
- Author(s): Adam Switonski ; Tomasz Krzeszowski ; Henryk Josinski ; Bogdan Kwolek ; Konrad Wojciechowski
- Source: IET Biometrics, Volume 7, Issue 5, p. 415 –422
- DOI: 10.1049/iet-bmt.2017.0134
- Type: Article
In this study, a framework for view-invariant gait recognition based on markerless motion tracking and the dynamic time warping (DTW) transform is presented. The framework consists of a proposed markerless motion-capture system and a classification method for the resulting mocap data. The markerless system estimates the three-dimensional locations of skeleton-driven joints; these skeleton-driven point clouds represent poses over time. The authors align the point clouds in every pair of frames by minimising the sum of squared distances between corresponding joints. A point-cloud distance measure with temporal context is used in the k-nearest-neighbours algorithm to compare time instants of motion sequences. To improve the generalisation of the recognition and to shorten processing time, a single multidimensional time series is selected for each individual from the several series describing that individual's gait. The correct classification rate has been determined on a real human gait dataset containing 230 gait cycles of 22 subjects. The markerless tracking results are validated against a Vicon reference system, while the achieved recognition accuracies are compared with those obtained by DTW based on rotational data.
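The DTW transform at the core of this framework is the classic dynamic-programming alignment. A minimal sketch on scalar sequences (in the paper the per-frame distance would compare whole joint point clouds, which is abstracted here into the `dist` callable):

```python
import math

def dtw_distance(seq_a, seq_b, dist=lambda a, b: abs(a - b)):
    """Dynamic time warping: cost of the cheapest monotonic alignment
    between two sequences, computed by dynamic programming."""
    n, m = len(seq_a), len(seq_b)
    D = [[math.inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = dist(seq_a[i - 1], seq_b[j - 1])
            # Extend the best of: insertion, deletion, or match.
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

# A time-shifted copy of a gait-like signal aligns perfectly under DTW.
a = [0, 1, 2, 3, 2, 1, 0]
b = [0, 0, 1, 2, 3, 2, 1, 0]
d = dtw_distance(a, b)
```

Because DTW tolerates local tempo differences, two gait cycles walked at slightly different speeds can still score a near-zero distance, which a rigid frame-by-frame comparison would not.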
- Author(s): Ajita Rattani ; Narsi Reddy ; Reza Derakhshani
- Source: IET Biometrics, Volume 7, Issue 5, p. 423 –430
- DOI: 10.1049/iet-bmt.2017.0171
- Type: Article
Automated gender prediction has drawn significant interest in numerous applications such as surveillance, human–computer interaction, anonymous customised advertising, image retrieval, and biometrics. In the context of smartphone devices, gender information has been used to enhance the accuracy of integrated biometric authentication and mobile healthcare systems. Here, the authors thoroughly investigate gender prediction from ocular images acquired using the front-facing cameras of smartphones. This is a new problem, as previous research in this area has not explored RGB ocular images captured by smartphones. The authors used deep learning for the task: specifically, pre-trained and custom convolutional neural network architectures have been implemented for gender prediction, and multi-classifier fusion has been used to improve prediction accuracy. Further, an evaluation of off-the-shelf texture descriptors and a study of human ability at gender prediction have been conducted for comparative analysis.
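The abstract does not say which fusion rule is used; a common baseline for combining several classifiers' outputs is soft voting, sketched here purely for illustration (the three probability vectors stand in for hypothetical CNN and texture-descriptor classifiers):

```python
def fuse_predictions(prob_lists):
    """Soft-voting fusion: average the class-probability vectors of several
    classifiers, then take the arg-max class of the averaged distribution."""
    n_classes = len(prob_lists[0])
    avg = [sum(p[k] for p in prob_lists) / len(prob_lists)
           for k in range(n_classes)]
    best = max(range(n_classes), key=lambda k: avg[k])
    return best, avg

# Three hypothetical two-class (gender) classifiers; two of the three
# favour class 1, so the fused decision is class 1.
label, avg = fuse_predictions([[0.6, 0.4], [0.3, 0.7], [0.2, 0.8]])
```

Soft voting lets a confident minority classifier outweigh a hesitant majority, which plain majority voting cannot do.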
- Author(s): Gonzalo Bailador ; Belén Ríos-Sánchez ; Raúl Sánchez-Reillo ; Hiroshi Ishikawa ; Carmen Sánchez-Ávila
- Source: IET Biometrics, Volume 7, Issue 5, p. 431 –438
- DOI: 10.1049/iet-bmt.2017.0166
- Type: Article
Segmentation is a crucial stage in hand biometric recognition due to its direct influence on the feature extraction process. The current trend toward contactless biometrics adds new challenges to the traditional ones, mainly related to capturing conditions and limited computational resources. Traditional methods fail when variable capturing conditions are imposed, and methods that can deal with daily-life situations are, in general, computationally expensive. In this study, a competitive flooding-based segmentation method oriented to mobile devices is proposed to achieve a compromise between accuracy and computational resource consumption. The method has been evaluated using images from five different databases that cover a wide spectrum of capturing conditions, one of them recorded as part of this study. The results have been compared with two other well-known segmentation techniques in terms of both accuracy and computation time.
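At its core, flooding-based segmentation grows a region outward from a seed while some inclusion criterion holds. A generic breadth-first sketch over a binary foreground mask (the paper's actual flooding criterion and competitive scheme are not given in the abstract and are not reproduced here):

```python
from collections import deque

def flood_segment(mask, seed):
    """Minimal flooding from a seed pixel: grow a 4-connected region over
    foreground pixels (truthy values) and return the visited coordinates."""
    rows, cols = len(mask), len(mask[0])
    region, queue = set(), deque([seed])
    while queue:
        r, c = queue.popleft()
        if (r, c) in region or not (0 <= r < rows and 0 <= c < cols):
            continue
        if not mask[r][c]:           # background pixel: flooding stops here
            continue
        region.add((r, c))
        queue.extend([(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)])
    return region

# A 3x4 mask with one 4-pixel blob and one disconnected foreground pixel:
mask = [[1, 1, 0, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 1]]
region = flood_segment(mask, (0, 0))
```

The disconnected pixel at (2, 3) is never reached: flooding only returns the component containing the seed, which is what makes it cheap enough for mobile devices.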
- Author(s): Emanuela Piciucco ; Emanuele Maiorana ; Patrizio Campisi
- Source: IET Biometrics, Volume 7, Issue 5, p. 439 –446
- DOI: 10.1049/iet-bmt.2017.0192
- Type: Article
In this study, the authors propose a novel approach to palm vein recognition relying on high dynamic range (HDR) imaging. Specifically, the authors hypothesise that exploiting multiple-exposure vein images guarantees better recognition performance than a baseline system relying on single-exposure acquisitions. To verify this assumption, a multiple-exposure dataset is collected from 86 subjects, with 12 sets of palm vein images captured for each user. Each set comprises five images acquired at different exposures, which can be fused to generate an HDR representation of the actual vein pattern. Local binary patterns and local derivative patterns are employed to extract features from single-exposure images, raw HDR images, and tone-mapped HDR images. The experimental results show that a significant performance improvement can be achieved when discriminative features are extracted from HDR content, with respect to the use of single-exposure images.
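The local binary pattern (LBP) operator used here for feature extraction is the standard 8-neighbour code: each neighbour at least as bright as the centre contributes one bit. A minimal sketch on a plain list-of-lists image (the neighbour ordering is one common convention, not necessarily the paper's):

```python
def lbp_code(img, r, c):
    """8-neighbour local binary pattern code of pixel (r, c): each
    neighbour >= centre sets one bit, clockwise from the top-left."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    centre = img[r][c]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if img[r + dr][c + dc] >= centre:
            code |= 1 << bit
    return code

# One interior pixel of a tiny 3x3 intensity patch:
img = [[5, 4, 3],
       [6, 4, 2],
       [7, 8, 1]]
code = lbp_code(img, 1, 1)
```

A histogram of these codes over image blocks is the usual LBP descriptor; because the code depends only on sign comparisons, it is robust to the monotonic illumination changes that differ between exposures.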
- Author(s): Weihong Deng and Hongjun Wang
- Source: IET Biometrics, Volume 7, Issue 5, p. 447 –453
- DOI: 10.1049/iet-bmt.2017.0194
- Type: Article
Representations generated by the Fisher vector (FV) have shown strong performance on many facial image datasets. However, discriminative information can be masked by noise if all local responses with respect to the learned dictionary are summed directly. Further, the high dimension of the FV prohibits its practical use. To mitigate these problems, the authors propose a new framework called the joint compressed Fisher vector (JCFV), which generates task-specific data representations by jointly encoding multiscale deep convolutional activations. First, facial images cropped with cascaded sub-windows and resized to various scales are fed into the deep network. Next, discriminative convolutional features are selected to form a dictionary. Then, multiscale features are aggregated with respect to the dictionary by calculating re-weighted first-order statistics. The JCFV halves the dimension of the FV, and the dimension can be compressed further with several combinations of subspace methods. The effectiveness of the JCFV descriptor is demonstrated with comprehensive experiments on FERET, AR, LFW and FRGC 2.0 Experiment 4.
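For context, the first-order statistics that a standard Fisher vector aggregates are, per GMM component k, (1 / (N√w_k)) Σ_t γ_k(x_t)(x_t − μ_k)/σ_k. A generic NumPy sketch of that baseline (the paper's re-weighting and joint encoding are its contribution and are not reproduced here; the toy GMM parameters are assumptions):

```python
import numpy as np

def fv_first_order(X, weights, means, sigmas):
    """First-order Fisher vector statistics for a diagonal-covariance GMM.
    X: (N, D) local descriptors; weights: (K,); means, sigmas: (K, D)."""
    N = len(X)
    # Log-density of each descriptor under each component (up to a constant).
    log_p = (-0.5 * (((X[:, None, :] - means[None]) / sigmas[None]) ** 2).sum(-1)
             - np.log(sigmas).sum(-1)[None] + np.log(weights)[None])
    # Soft-assignment responsibilities gamma_k(x_t), normalised per descriptor.
    gamma = np.exp(log_p - log_p.max(axis=1, keepdims=True))
    gamma /= gamma.sum(axis=1, keepdims=True)
    # Responsibility-weighted, variance-normalised residuals, summed over t.
    fv = (gamma[:, :, None] * (X[:, None, :] - means[None]) / sigmas[None]).sum(0)
    return (fv / (N * np.sqrt(weights)[:, None])).ravel()   # length K * D

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 4))                      # 50 toy local descriptors
fv = fv_first_order(X, np.array([0.5, 0.5]),
                    rng.normal(size=(2, 4)), np.ones((2, 4)))
```

The resulting vector has length K·D, which is why dimension grows quickly with the dictionary size and motivates the compression the paper proposes.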
- Author(s): Rıdvan Salih Kuzu and Albert Ali Salah
- Source: IET Biometrics, Volume 7, Issue 5, p. 454 –466
- DOI: 10.1049/iet-bmt.2017.0121
- Type: Article
Online social platforms implement moderation mechanisms to filter out unwanted content and to act against possible cases of verbal aggression and abuse, sexual harassment, and the like. In this study, the authors investigate chat biometrics: the identification of users from their verbal behaviour on a social platform. Typical application scenarios are the re-identification of banned users returning under different identities, and aggressors operating through multiple fake accounts. The authors propose a novel processing pipeline and contrast the problem with the authorship recognition problem, which is relatively well studied in the literature. They evaluate the proposed approach on a large corpus of multiparty chat records in Turkish, previously collected from a multiplayer game environment. They also introduce a new corpus, collected from the well-known Turkish social platform Ekşisözlük, to test the robustness of the system across domain changes, as well as Portuguese and English news datasets to test it on different languages. Both instance-based and profile-based approaches are evaluated, with detailed analyses of the amount of text required to identify a person reliably.
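The profile-based approach the abstract mentions is, in classic authorship work, often a character n-gram frequency profile compared by overlap. A generic sketch of that idea (not the paper's pipeline; the similarity measure and profile size are illustrative choices):

```python
from collections import Counter

def char_ngram_profile(text, n=3, top=100):
    """Author profile: relative frequencies of the most common
    character n-grams in the author's accumulated text."""
    grams = Counter(text[i:i + n] for i in range(len(text) - n + 1))
    total = sum(grams.values())
    return {g: c / total for g, c in grams.most_common(top)}

def profile_similarity(p, q):
    """Overlap similarity: total shared minimum frequency mass, in [0, 1]."""
    return sum(min(p[g], q.get(g, 0.0)) for g in p)

# Two samples by the "same author" share style; the third is unrelated text.
a1 = char_ngram_profile("the quick brown fox jumps over the lazy dog " * 5)
a2 = char_ngram_profile("the quick brown fox naps beside the lazy dog " * 5)
b = char_ngram_profile("lorem ipsum dolor sit amet consectetur adipiscing " * 5)
```

Character n-grams capture spelling habits, function words and punctuation at once, which is why they transfer across topics better than word-level features — relevant for the cross-domain tests the paper runs.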
- Author(s): Sarasi Munasinghe ; Clinton Fookes ; Sridha Sridharan
- Source: IET Biometrics, Volume 7, Issue 5, p. 467 –473
- DOI: 10.1049/iet-bmt.2017.0050
- Type: Article
The last two decades have seen escalating interest in methods for large-scale unconstrained face recognition. While the promise of computer vision systems that efficiently and accurately verify and identify faces in naturally occurring circumstances remains elusive, recent advances in deep learning are taking us closer to human-level recognition. In this study, the authors propose a new paradigm that employs deep features as the feature extractor and intra-personal factor analysis as the recogniser. The proposed strategy represents a person's facial changes using identity-specific components and intra-personal variation through a reinterpretation of a Bayesian generative factor analysis model. The authors employ the expectation-maximisation algorithm to estimate model parameters that cannot be observed directly. Recognition results obtained by benchmarking on the large-scale in-the-wild databases Labeled Faces in the Wild (LFW) and YouTube Faces (YTF) show that the proposed approach provides a remarkable face verification performance improvement over state-of-the-art approaches.
- Author(s): Renu Sharma ; Sukhendu Das ; Padmaja Joshi
- Source: IET Biometrics, Volume 7, Issue 5, p. 474 –481
- DOI: 10.1049/iet-bmt.2017.0076
- Type: Article
Human recognition in a multi-biometric system is performed by combining biometric cues from different sources (multiple sensors, units, algorithms, samples and modalities) at different levels (sensor, feature, score, rank and decision level). Low computational complexity and adequate data for fusion make score-level fusion preferable over other levels of fusion. However, an incompatibility issue arises at this level, as scores obtained from different uni-biometric systems are disparate in nature. This disparity can be resolved by applying score normalisation before fusion. This study first analyses the effect of a generalised extreme value distribution-based score normalisation technique on different fusion techniques, and then proposes an efficient score fusion technique based on Dezert–Smarandache theory (DSmT). A unique blend of belief assignment and decision-making methods in the DSmT framework is proposed for score-level fusion. For evaluation, experiments are performed on multi-algorithm, multi-unit and multi-modal biometric systems created from three publicly available datasets: (i) the NIST BSSR1 multi-biometric score database, (ii) the face recognition grand challenge (FRGC) V2.0 dataset and (iii) the LG4000 iris dataset. Comparative performance analyses show the efficiency of the proposed method over other recently published state-of-the-art fusion methods.
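Normalising a raw matcher score through the CDF of a fitted generalised extreme value (GEV) distribution maps it into [0, 1] while respecting the heavy tail of genuine/impostor score distributions. A minimal sketch of the GEV CDF itself (the parameters below are assumed, not fitted; the paper's fitting procedure and DSmT fusion are not reproduced):

```python
import math

def gev_cdf(s, mu, sigma, xi):
    """CDF of the generalised extreme value distribution with location mu,
    scale sigma and shape xi; xi == 0 is the Gumbel limit."""
    if xi == 0:
        return math.exp(-math.exp(-(s - mu) / sigma))
    t = 1.0 + xi * (s - mu) / sigma
    if t <= 0:                       # outside the distribution's support
        return 0.0 if xi > 0 else 1.0
    return math.exp(-t ** (-1.0 / xi))

# Normalising a batch of raw matcher scores with assumed GEV parameters:
raw = [12.0, 20.0, 35.0]
normed = [gev_cdf(s, mu=18.0, sigma=5.0, xi=0.1) for s in raw]
```

Because the CDF is monotonic, the normalisation preserves score ordering while making scores from different matchers directly comparable before fusion.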
- Author(s): Basma Ammour ; Toufik Bouden ; Larbi Boubchir
- Source: IET Biometrics, Volume 7, Issue 5, p. 482 –489
- DOI: 10.1049/iet-bmt.2017.0251
- Type: Article
A multi-modal biometric system verifies or identifies a person by exploiting information from more than one biometric modality, combining the strengths of unimodal systems to overcome their individual limitations. This study proposes multi-modal biometric schemes based on texture information extracted from the face and the two irises (left and right), using a hybrid level of fusion. Feature extraction is the key step in building a robust recognition system: a multi-resolution two-dimensional Log-Gabor filter combined with spectral regression kernel discriminant analysis is used to extract features from both the face and iris modalities, and these features are used in the fusion and classification process. The proposed schemes were tested in verification mode using the CASIA Iris Distance database. The experimental results show that the proposed multi-modal biometric system achieves an equal error rate as low as 0.24% and outperforms similar recent state-of-the-art methods.
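The Log-Gabor filter used here for texture extraction has the standard radial transfer function G(f) = exp(−(ln(f/f₀))² / (2 ln(σ/f₀)²)), which is zero at DC by construction. A minimal sketch of that radial response (the centre frequency and bandwidth ratio below are common illustrative values, not the paper's):

```python
import math

def log_gabor_radial(f, f0, sigma_ratio=0.55):
    """Radial transfer function of a log-Gabor filter:
    G(f) = exp(-(ln(f / f0))^2 / (2 * ln(sigma_ratio)^2)).
    Defined as 0 at f = 0, so the filter has no DC component."""
    if f <= 0:
        return 0.0
    return math.exp(-(math.log(f / f0) ** 2)
                    / (2 * math.log(sigma_ratio) ** 2))

# Sampled response of a filter centred on f0 = 0.25 cycles/pixel:
response = [log_gabor_radial(f / 100, 0.25) for f in range(1, 51)]
```

A multi-resolution bank is obtained by repeating this with several centre frequencies f₀; the zero-DC property is what makes log-Gabor features robust to illumination offsets in face and iris textures.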
- Author(s): Ninu Preetha Nirmala Sreedharan ; Brammya Ganesan ; Ramya Raveendran ; Praveena Sarala ; Binu Dennis ; Rajakumar Boothalingam R.
- Source: IET Biometrics, Volume 7, Issue 5, p. 490 –499
- DOI: 10.1049/iet-bmt.2017.0160
- Type: Article
Human emotions are conveyed through several channels, including actions, behaviours, poses, facial expressions, and speech, and extensive research has been carried out to analyse the relationship between facial emotions and these channels. The goal of this study is to develop a facial emotion recognition (FER) system that can analyse the elemental human facial expressions: normal, smile, sad, surprise, anger, fear, and disgust. The recognition process of the proposed FER system comprises four stages: pre-processing, feature extraction, feature selection, and classification. After pre-processing, scale-invariant feature transform-based feature extraction is used to extract features from the facial points. A meta-heuristic algorithm called grey wolf optimisation (GWO) is then used to select the optimal features, and a GWO-based neural network (NN) classifies the emotions from the selected features. Finally, a performance analysis of the proposed method against conventional methods, such as the convolutional neural network, NN-Levenberg–Marquardt, NN-gradient descent, NN-evolutionary algorithm, NN-firefly, and NN-particle swarm optimisation, is provided by evaluating several performance measures, thereby validating the effectiveness of the proposed strategy.
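GWO is a well-documented meta-heuristic: the pack moves toward the average of positions dictated by the three best wolves (alpha, beta, delta), with an exploration coefficient that decays linearly. A generic sketch of the canonical update equations, minimising a toy sphere function (population size, iteration count and bounds are illustrative assumptions, not the paper's configuration for feature selection):

```python
import numpy as np

def gwo_minimise(f, dim, n_wolves=20, iters=200, lo=-5.0, hi=5.0, seed=0):
    """Canonical grey wolf optimiser over a continuous box [lo, hi]^dim."""
    rng = np.random.default_rng(seed)
    wolves = rng.uniform(lo, hi, size=(n_wolves, dim))
    best_x, best_val = None, float("inf")
    for t in range(iters):
        scores = np.array([f(w) for w in wolves])
        order = np.argsort(scores)
        if scores[order[0]] < best_val:
            best_val, best_x = float(scores[order[0]]), wolves[order[0]].copy()
        alpha, beta, delta = wolves[order[:3]]       # fancy indexing: copies
        a = 2.0 * (1 - t / iters)                    # decays from 2 to 0
        for i in range(n_wolves):
            new = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2.0 * r2
                # Move toward each leader's suggested position.
                new += leader - A * np.abs(C * leader - wolves[i])
            wolves[i] = np.clip(new / 3.0, lo, hi)
    return best_x, best_val

best_x, best_val = gwo_minimise(lambda w: float((w ** 2).sum()), dim=3)
```

For feature selection, each dimension would be thresholded into a keep/discard bit and `f` replaced by a classifier-accuracy objective; the continuous update above stays the same.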
Symmetric sum-based biometric score fusion
Planting attack on latent fingerprints
Online signature verification using i-vector representation
Gait recognition on the basis of markerless motion tracking and DTW transform
Convolutional neural networks for gender prediction from smartphone-based ocular images
Flooding-based segmentation for contactless hand biometrics oriented to mobile devices
Palm vein recognition using a high dynamic range approach
Face recognition with compressed Fisher vector on multiscale convolutional features
Chat biometrics
Human-level face verification with intra-personal factor analysis and deep face representation
Score-level fusion using generalised extreme value distribution and DSmT for multi-biometric systems
Face–iris multi-modal biometric system using multi-resolution Log-Gabor filter with spectral regression kernel discriminant analysis
Grey Wolf optimisation-based feature selection and classification for facial emotion recognition
Most cited content for this Journal
- Overview of research on facial ageing using the FG-NET ageing database
- Author(s): Gabriel Panis ; Andreas Lanitis ; Nicholas Tsapatsoulis ; Timothy F. Cootes
- Type: Article
- Strengths and weaknesses of deep learning models for face recognition against image degradations
- Author(s): Klemen Grm ; Vitomir Štruc ; Anais Artiges ; Matthieu Caron ; Hazım K. Ekenel
- Type: Article
- Multimodal biometric recognition using human ear and palmprint
- Author(s): Nabil Hezil and Abdelhani Boukrouche
- Type: Article
- Extended evaluation of the effect of real and simulated masks on face recognition performance
- Author(s): Naser Damer ; Fadi Boutros ; Marius Süßmilch ; Florian Kirchbuchner ; Arjan Kuijper
- Type: Article
- Survey on real-time facial expression recognition techniques
- Author(s): Shubhada Deshmukh ; Manasi Patwardhan ; Anjali Mahajan
- Type: Article