Title
A study of Face Verification based on Deep Learning Models /
Author
Kilany, Sohair Ali.
Supervision Committee
Researcher / Sohair Ali Kilany Abdelbaset
Supervisor / Awny Abdel-Hady Sayed
Supervisor / Ahmed Mohamed Mahfouz
Supervisor / Alaa Mohamed Zaki Ahmed
Subject
Application software. Artificial intelligence. Optical data processing.
Publication Date
2022.
Number of Pages
99 p. :
Language
English
Degree
Master's
Specialization
Computer Science
Date of Award
1/1/2022
Awarding Institution
Minia University - Faculty of Science - Computer Science
Table of Contents
Only 14 pages (out of 103) are available for public view.

Abstract

Nowadays, the accuracy, usability, and touchless acquisition of state-of-the-art face verification (FV) systems have led to their ubiquitous adoption in a plethora of domains, including mobile phone unlocking, access control systems, and payment services.
Most face verification methods are built upon Deep Convolutional Neural Networks (DCNNs) and achieve impressive performance and high accuracy. However, DCNNs can be easily attacked with adversarial examples created by adding small perturbations to face images. Recent studies of GAN-based attacks on FV systems form an active research area: attack models can deceive FV systems by adding a small amount of perturbation without affecting the structural similarity. This contrasts with defense methods, which seek to protect FV systems from attacks without paying any attention to the amount of perturbation. In this thesis, we present an analysis of the most recent adversarial attacks on FV systems to find the best amount of perturbation to add to the probed face images while maintaining high structural similarity. We conducted an experiment and present results that validate the effect of the perturbation amount on structural similarity on the LFW dataset. We find that increasing the amount of perturbation has no impact on the attack success rate, but it does negatively affect the structural similarity, which decreased by 28 percentage points: from 54% to 26% for FGSM and from 98% to 70% for PGD.
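
To make the attack setting concrete, the following is a minimal PyTorch sketch of a one-step FGSM perturbation together with the SSIM measurement used to quantify structural similarity. The embedding network handle net, the cosine-similarity matching score, and the epsilon value are illustrative assumptions, not the thesis implementation; PGD iterates the same step.

import torch
from skimage.metrics import structural_similarity as ssim

def fgsm_attack(net, image, enrolled_emb, epsilon):
    """One-step FGSM: nudge `image` so that its embedding moves away
    from the enrolled face's embedding by `epsilon` per pixel."""
    image = image.clone().detach().requires_grad_(True)
    # Matching score between the probe embedding and the enrolled embedding.
    score = torch.nn.functional.cosine_similarity(net(image), enrolled_emb).mean()
    score.backward()
    # Step against the matching score; clamp back to the valid pixel range.
    # PGD repeats this step, re-projecting into an epsilon-ball each time.
    adv = (image - epsilon * image.grad.sign()).clamp(0.0, 1.0)
    return adv.detach()

# Illustrative usage with x: a (1, 3, H, W) probe in [0, 1]:
#   adv = fgsm_attack(net, x, enrolled_emb, epsilon=0.03)
#   s = ssim(x[0].permute(1, 2, 0).numpy(),
#            adv[0].permute(1, 2, 0).numpy(),
#            channel_axis=2, data_range=1.0)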
Next, we address the problem of defending against adversarial attacks. To protect FV systems against these attacks, numerous Attack Defense (AD) methods have been proposed. We addressed two major issues with these AD approaches, namely face AD generalization and interpretability. Our main focus is to improve the performance of the binary classification model across a wide variety of common attacks such as FGSM, PGD, and AdvFaces. We analyzed two major defense methods commonly used in the literature. First, we applied perturbation-removal approaches to the input image before feeding it to the FV system; for instance, we used total variance minimization, bit-depth reduction, and PCA. These techniques learn a representation of the image space, retain most of the information contained in the images, and remove the noise.
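
As a rough sketch of two of these preprocessors, assuming probes arrive as float arrays in [0, 1] and using scikit-image's total-variation denoiser as a stand-in for total variance minimization:

import numpy as np
from skimage.restoration import denoise_tv_chambolle

def reduce_bit_depth(image, bits=3):
    """Quantize a [0, 1] float image to 2**bits gray levels, discarding
    the low-order pixel variation where small perturbations live."""
    levels = 2 ** bits - 1
    return np.round(image * levels) / levels

def tv_minimize(image, weight=0.1):
    """Total-variation denoising: smooth high-frequency noise while
    preserving edges, so most of the face structure is retained."""
    return denoise_tv_chambolle(image, weight=weight, channel_axis=-1)

# Illustrative usage with x: an (H, W, 3) probe in [0, 1]:
#   x_clean = tv_minimize(reduce_bit_depth(x))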
Second, we applied detection strategies as a preprocessing step in the FV system, showing that a binary classifier can identify adversarial images. Experimental results on the LFW dataset show that our binary classifier detector achieves detection accuracies of 99.05%, 99.05%, and 99.07% on three types of unseen adversarial attacks (FGSM, PGD, and AdvFaces, respectively) with different amounts of perturbation.
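
The detection idea can be sketched as a small binary CNN that gates the verifier; the architecture and layer sizes below are illustrative placeholders, not the detector trained in the thesis.

import torch
import torch.nn as nn

class AdvDetector(nn.Module):
    """Toy binary detector: outputs a logit for 'adversarial vs. clean'."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# Gatekeeping: only probes the detector judges clean reach the verifier.
#   detector = AdvDetector()
#   is_adv = torch.sigmoid(detector(batch)) > 0.5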