Title
An interactive system to improve the deaf persons communication
Author
Shohieb, Samaa Mohammed Sabry Mohammed.
Preparation committee
Researcher / Samaa Mohammed Sabry Mohammed Shohieb
Supervisor / Alaa El-Din Mohamed Riad
Supervisor / Hamdy Kamal El-Minir
Subject
Facial expression. Sign language.
Publication date
2014.
Number of pages
150 p.
Language
English
Degree
Doctorate (PhD)
Specialization
Information Systems
Approval date
1/1/2014
Place of approval
Mansoura University - Faculty of Computers and Information - Department of Information Systems
Abstract

Communication among the deaf is performed through sign language (SL), which consists of both manual signs and non-manual signs. Manual signs (MSs) are performed using the hands, while non-manual signs (NMSs) mainly include facial expressions, lip movements, gaze direction, and body movements. Although manual signs constitute a large proportion of any SL vocabulary, NMSs also carry a significant share of the overall meaning. SL therefore differs from spoken language: a spoken language arranges words sequentially, whereas SL allows manual and non-manual components to be performed in parallel. Another unique feature of SL over any spoken language is its capability to convey multiple ideas at a single instant of time. The proposed framework can recognize both MSs and NMSs.
In this thesis we present a complete Arabic Sign Language database (ArSL DB), containing both MSs and NMSs, which we call the SignsWorld Atlas. The postures, gestures, and motions included in this DB were collected under controlled laboratory lighting and background conditions.
We also present a powerful multi-detector technique that localizes the key facial feature points so that the contours of the facial components, such as the eyes, nostrils, chin, and mouth, are sampled. Based on the 66 extracted facial feature points, 20 geometric formulas (GFs) and 15 ratios (Rs) are calculated, and a rule-based classifier is then formed for both gaze direction and several facial expressions (normal, sadness, smiling, or surprise). We call our facial expression recognition system SignsWorld FERS; it is person-independent and achieved a recognition rate of 97%.
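To make the rule-based idea concrete, the following minimal Python sketch computes two illustrative geometric ratios from named landmark points and classifies an expression with hand-written rules. The landmark names, thresholds, and rules are assumptions for illustration only, not the thesis's actual 20 formulas and 15 ratios.

    import math

    def dist(p, q):
        """Euclidean distance between two (x, y) landmark points."""
        return math.hypot(p[0] - q[0], p[1] - q[1])

    def classify_expression(lm):
        """lm: dict of named (x, y) landmarks from a facial feature detector."""
        # Hypothetical ratio: mouth opening relative to mouth width.
        mouth_open = (dist(lm["upper_lip"], lm["lower_lip"])
                      / dist(lm["mouth_left"], lm["mouth_right"]))
        # Hypothetical measure: mouth-corner height relative to the upper lip
        # (image y grows downward, so negative means corners are raised).
        corner_lift = (lm["mouth_left"][1] + lm["mouth_right"][1]) / 2 - lm["upper_lip"][1]

        if mouth_open > 0.6:
            return "surprise"   # wide-open mouth
        if corner_lift < 0:
            return "smiling"    # corners higher than the upper lip
        if corner_lift > 8:
            return "sadness"    # corners pulled well below the upper lip
        return "normal"

    # Example with synthetic landmark coordinates (pixel units).
    face = {
        "upper_lip": (100, 150), "lower_lip": (100, 160),
        "mouth_left": (80, 148), "mouth_right": (120, 148),
    }
    print(classify_expression(face))  # -> "smiling"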
A fully automatic, multi-modal, oculography-based mouse-control system is also presented. It integrates the recognized facial expressions with the tracked eye movements and uses them together to control the mouse.
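A hedged sketch of how such an integration might drive the cursor: a recognized gaze direction is mapped to a relative cursor move and a recognized expression to a click. The gaze-to-motion mapping, the step size, and the use of the pyautogui library are illustrative assumptions, not the thesis's implementation.

    import pyautogui  # cross-platform mouse-control library (assumed here)

    STEP = 20  # cursor displacement in pixels per recognized gaze event

    def apply_command(gaze, expression):
        """Translate one (gaze, expression) observation into a mouse action."""
        dx, dy = {
            "left": (-STEP, 0), "right": (STEP, 0),
            "up": (0, -STEP), "down": (0, STEP),
        }.get(gaze, (0, 0))
        pyautogui.moveRel(dx, dy)    # move cursor relative to its position
        if expression == "smiling":  # e.g. a smile acts as a left click
            pyautogui.click()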
In addition, we present a multi-modal visual speech recognition system that reads the visemes of Arabic words, which we call the SignsWorld Arabic Lip Reading System (ALRS). Our idea aims at visually recognizing Arabic words with convergent phonetics, i.e. words that share the same main vowel but are accompanied by different facial expressions. For example, ”ضار” and ”سار” share the same vowel ”ا”, and ”ض” and ”س” have convergent articulation points and lip shapes in Arabic, yet the two words are accompanied by different facial expressions, sadness and happiness respectively. When the vowel viseme recognition is combined with the accompanying facial expression, the word can be detected much more easily.
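The disambiguation step can be sketched as a lookup keyed on the recognized vowel viseme and the recognized expression. The lexicon below mirrors the example above; the viseme label "alef" and the upstream classifier outputs are assumed placeholders, not the thesis's actual ALRS interface.

    LEXICON = {
        # (vowel viseme, facial expression) -> Arabic word
        ("alef", "sadness"): "ضار",
        ("alef", "smiling"): "سار",
    }

    def recognize_word(viseme, expression):
        """Resolve an ambiguous vowel viseme using the facial expression."""
        return LEXICON.get((viseme, expression), "<unknown>")

    print(recognize_word("alef", "smiling"))  # -> "سار"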
We also present an isolated SLR system that extracts geometric features of the hand gesture from a camera and builds a geometric model of the gesture. A rule-based classifier is then used for recognition, based on the geometric features determined for a specific gesture.
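As an illustration only, the sketch below classifies a static hand posture from two simple geometric measurements; the features (extended-finger count, bounding-box aspect ratio) and the thresholds are assumptions standing in for the thesis's actual geometric model.

    def classify_hand_gesture(extended_fingers, width, height):
        """Classify a static hand posture from simple geometric features."""
        aspect = width / height  # bounding-box aspect ratio of the hand region
        if extended_fingers == 0:
            return "fist"
        if extended_fingers == 5 and aspect > 0.8:
            return "open palm"
        if extended_fingers == 1:
            return "pointing"
        return "unknown"

    print(classify_hand_gesture(extended_fingers=5, width=90, height=100))  # -> "open palm"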
Finally, we propose two dynamic SLR systems based on two different methods, a real-time (online) one and an offline one, using the Microsoft Kinect camera. We developed both methods for three different gestures, waving, pushing, and circular, and performed a comparison between the two methods.
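One way such a dynamic-gesture rule could look, as a sketch under assumed thresholds rather than the thesis's method: a waving gesture is flagged when the tracked hand's horizontal position (e.g. from a Kinect skeleton joint) reverses direction often enough within a short window.

    def is_waving(xs, min_reversals=3, min_swing=0.05):
        """xs: hand x-coordinates (metres) sampled over a short time window."""
        reversals, direction = 0, 0
        for prev, cur in zip(xs, xs[1:]):
            delta = cur - prev
            if abs(delta) < min_swing:  # ignore jitter below the swing threshold
                continue
            new_dir = 1 if delta > 0 else -1
            if direction and new_dir != direction:
                reversals += 1          # hand changed horizontal direction
            direction = new_dir
        return reversals >= min_reversals

    track = [0.0, 0.1, 0.2, 0.1, 0.0, 0.1, 0.2, 0.1]
    print(is_waving(track))  # -> True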