MakeUpMirror: mirroring make-ups and verifying faces post make-up

Published in IET Biometrics.

Facial make-up changes the appearance of a person and significantly degrades the performance of automated face verification (FV) systems. Here, the authors propose an end-to-end siamese convolutional neural network (SCNN) that simultaneously replicates the facial make-up of a subject, transferring it from a target image (with make-up) onto a query face image, and verifies the identity of the query face sample either with or without make-up. The SCNN model is trained with loss functions designed to handle the variations introduced by make-up. The proposed architecture can replicate the make-up at appropriate locations on the face without any human intervention. Rigorous experiments on four benchmark facial make-up datasets demonstrate the efficacy of the proposed model. Ablation studies show an improvement of 4% in genuine acceptance rate at 0.1% false acceptance rate and a 42% reduction in equal error rate for FV on the YouTube Make-up dataset, and a 10% reduction on the Virtual Make-up dataset, compared to the nearest state-of-the-art method. For make-up transfer, similarity measures also confirm the effectiveness of the method: peak signal-to-noise ratio and structural similarity values improve by ~20–24% and ~29–32%, respectively, compared to a recent state-of-the-art technique.
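The paper does not publish its exact architecture or loss formulation in this abstract, but the verification branch of a siamese network of this kind is commonly trained with a contrastive loss over shared-weight embeddings. The sketch below is purely illustrative (NumPy, a single linear layer standing in for the deep CNN branch, and a standard contrastive loss); none of the names or hyperparameters come from the paper itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shared weights: one linear layer + ReLU.
# In the actual SCNN this would be a deep CNN branch; a single
# layer is enough to illustrate siamese weight sharing.
W = rng.standard_normal((16, 8)) * 0.1

def embed(x):
    """Map a flattened face vector to an 8-D embedding.
    Both branches of the siamese network call this same function,
    so the weights W are shared by construction."""
    return np.maximum(x @ W, 0.0)

def contrastive_loss(e1, e2, same_identity, margin=1.0):
    """Standard contrastive loss: pull genuine pairs together,
    push impostor pairs apart until they exceed the margin."""
    d = np.linalg.norm(e1 - e2)
    if same_identity:
        return 0.5 * d ** 2
    return 0.5 * max(margin - d, 0.0) ** 2

# Toy query/target pair: a "no make-up" face and the same face with
# a small perturbation standing in for the appearance shift of make-up.
query = rng.standard_normal(16)
target = query + 0.05 * rng.standard_normal(16)
impostor = rng.standard_normal(16)

print("genuine pair loss :", contrastive_loss(embed(query), embed(target), True))
print("impostor pair loss:", contrastive_loss(embed(query), embed(impostor), False))
```

At verification time the same embedding distance is thresholded to accept or reject a pair, which is where the reported genuine acceptance rate and equal error rate are measured.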
