Title
User authentication based on multimodal biometrics /
Author
Mohsen, Sally Mohamed Sameh Ahmed.
Preparation Committee
Researcher / Sally Mohamed Sameh Ahmed Mohsen
Supervisor / Fayez Wanis Zaki
Supervisor / Hossam El-Din Hossam Moustafa
Examiner / Fayez Wanis Zaki
Subject
Biometric identification.
Publication Date
2018.
Number of Pages
93 p. :
Language
English
Degree
Master's
Specialization
Computational Mechanics
Publisher
Date of Approval
01/01/2018
Place of Approval
Mansoura University - Faculty of Engineering - Department of Electronics and Communication
Index
Only 14 pages are available for public view

from 128


Abstract

The need for reliable user authentication techniques has increased in the wake of heightened concerns about security and rapid advancements in networking, communication, and mobility. A wide variety of systems require reliable personal recognition schemes either to confirm or to determine the identity of an individual requesting their facilities. The aim of such systems is to guarantee that the rendered services are accessed only by a genuine user. Traditional authentication techniques based on passwords and tokens are limited in their ability to address many security issues. The advent of biometrics has addressed some of the shortcomings of these traditional authentication techniques. Biometric recognition is an automatic pattern recognition approach based on the physiological or behavioral characteristics of an individual. A system that performs person recognition based on a single source of biometric information is called a unimodal biometric system. Such systems have to contend with a variety of problems, such as noise in the sensed data, intra-class variations, and spoof attacks. Multibiometric systems strive to overcome some of these drawbacks by consolidating the evidence presented by multiple biometric sources of information. They can integrate information from multiple sensors, multiple snapshots, multiple representations and matching algorithms of the same biometric, and multiple biometric traits.

In this thesis, a study of multimodal fusion of voice and iris at the feature level is presented. Features are extracted from the voice signals using Power-Normalized Cepstral Coefficients (PNCC), and features of the preprocessed iris images are extracted using Singular Value Decomposition (SVD). The voice and iris feature vectors are then concatenated. The experiments were conducted using voice samples collected from the Faculty of Engineering, Mansoura University, and iris images from the CASIA iris database, which gave an accuracy of 98.4% after fusion.
This result is acceptable and gives higher accuracy than using voice or iris individually.
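The feature-level fusion described above can be sketched in a few lines of Python with NumPy. This is a minimal illustration, not the thesis's implementation: the PNCC pipeline is not reproduced (a random vector stands in for real cepstral coefficients), the image size and the number of retained singular values are assumed for the example, and the helper names are hypothetical. It shows only the SVD feature extraction and the concatenation step.

```python
import numpy as np

def iris_svd_features(iris_img, k=20):
    """Summarize a preprocessed (normalized) iris image with the k largest
    singular values from its Singular Value Decomposition."""
    # np.linalg.svd returns singular values sorted in descending order
    s = np.linalg.svd(iris_img, compute_uv=False)
    return s[:k]

def fuse_features(voice_feats, iris_feats):
    """Feature-level fusion by concatenation; each modality's vector is
    z-score normalized first so neither dominates the fused vector."""
    def znorm(v):
        std = v.std()
        return (v - v.mean()) / std if std > 0 else v - v.mean()
    return np.concatenate([znorm(voice_feats), znorm(iris_feats)])

# Toy example: stand-ins for real PNCC coefficients and an iris image
rng = np.random.default_rng(0)
voice_feats = rng.normal(size=13)     # hypothetical 13 cepstral coefficients
iris_img = rng.random((64, 512))      # hypothetical 64x512 normalized iris strip
fused = fuse_features(voice_feats, iris_svd_features(iris_img, k=20))
print(fused.shape)                    # (33,) -> 13 voice + 20 iris features
```

The fused vector would then be fed to a classifier or matcher; the normalization step is one common way to handle the different dynamic ranges of the two modalities before concatenation.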