Employing fusion of learned and handcrafted features for unconstrained ear recognition

Published in: IET Biometrics

The authors present an unconstrained ear recognition framework that outperforms state-of-the-art systems on several publicly available image databases. To this end, they developed convolutional neural network (CNN)-based solutions for ear normalisation and description, employed well-known handcrafted descriptors, and fused learned and handcrafted features to improve recognition. They designed a two-stage landmark detector that worked reliably in scenarios it was not trained on, and used the detected landmarks to perform a geometric image normalisation that boosted the performance of all evaluated descriptors. The proposed CNN descriptor outperformed other CNN-based approaches in the literature, especially in the more challenging scenarios. The learned and handcrafted matchers proved complementary, and their fusion achieved the best performance in all experiments. The results surpass all other reported results for the Unconstrained Ear Recognition Challenge, which features the most difficult publicly available database to date.
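The abstract does not spell out how landmarks drive the geometric normalisation; a common approach, sketched below under that assumption, is to fit a least-squares similarity transform (scale, rotation, translation) mapping the detected landmarks onto canonical landmark positions, then warp the image with it. The function name and two-point interface here are illustrative, not the authors' API.

```python
import numpy as np

def similarity_align(image_points, canonical_points):
    """Least-squares similarity transform (Umeyama-style) mapping
    image_points onto canonical_points. Returns (M, t) such that a
    landmark p in the image maps to approximately M @ p + t.
    Hypothetical helper -- not the paper's actual implementation."""
    src = np.asarray(image_points, dtype=float)
    dst = np.asarray(canonical_points, dtype=float)
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - src_mean, dst - dst_mean
    # Cross-covariance between centred point sets
    cov = dst_c.T @ src_c / len(src)
    U, S, Vt = np.linalg.svd(cov)
    # Reflection guard: force a proper rotation (det = +1)
    d = np.sign(np.linalg.det(U @ Vt))
    D = np.diag([1.0, d])
    R = U @ D @ Vt
    scale = np.trace(np.diag(S) @ D) / src_c.var(axis=0).sum()
    t = dst_mean - scale * (R @ src_mean)
    return scale * R, t
```

The resulting 2x2 matrix and translation can be handed to any image-warping routine to produce rotation- and scale-normalised ear crops before feature extraction.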
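The exact fusion rule is not given on this page; a standard score-level scheme consistent with the description, shown here as a minimal sketch, is to min-max normalise each matcher's comparison scores and take a (weighted) mean. The function name and uniform default weights are assumptions for illustration.

```python
import numpy as np

def fuse_scores(score_lists, weights=None):
    """Score-level fusion of several matchers (assumed scheme, not the
    paper's confirmed rule): min-max normalise each matcher's scores
    to [0, 1], then combine them with a weighted mean."""
    normed = []
    for scores in score_lists:
        s = np.asarray(scores, dtype=float)
        lo, hi = s.min(), s.max()
        # Degenerate matcher (constant scores) contributes zeros
        normed.append((s - lo) / (hi - lo) if hi > lo else np.zeros_like(s))
    normed = np.stack(normed)            # (n_matchers, n_comparisons)
    if weights is None:
        weights = np.full(len(normed), 1.0 / len(normed))
    return weights @ normed
```

For example, fusing a learned (CNN) matcher with a handcrafted one amounts to `fuse_scores([cnn_scores, handcrafted_scores])`; per-matcher weights can be tuned on a validation set.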
