Title
Deep Learning Approaches for Motor Imagery Brain-Computer Interfaces
Author
Mohamed, Raghda Hassan
Preparation Committee
Researcher / Raghda Hassan Mohamed Mohamed
Supervisor / Hazem Mahmoud Abbas
Supervisor / Seif Eldin Mohamed Eldawlatly
Examiner / Omar Hassan Karam
Publication Date
2021
Number of Pages
86 p.
Language
English
Degree
Master's
Specialization
Electrical and Electronic Engineering
Approval Date
1/1/2021
Approval Location
Ain Shams University - Faculty of Engineering - Computer Engineering
Contents
Only 14 of 107 pages are available for public view.

Abstract

Motor imagery is a Brain-Computer Interface (BCI) paradigm that has been used to develop applications assisting subjects with motor disabilities. This paradigm relies on analyzing brain electroencephalography (EEG) activity to identify the intended movement direction. Existing motor imagery feature extraction techniques focus on traditional signal processing and machine learning methods. Recent advances in deep learning have inspired the development of a few methods for motor imagery classification that achieve further performance improvements. This thesis introduces a deep neural network approach for motor imagery classification using Long Short-Term Memory (LSTM) networks combined with autoencoders in a sequence-to-sequence architecture. The proposed network extracts features from the frequency-domain representation of EEG signals. It is trained to obtain low-dimensional representations of EEG features, which are then fed into a three-layer multi-layer perceptron for classification. Systematic and extensive experiments were carried out by applying the approach to public benchmark EEG datasets. The obtained results outperform classical state-of-the-art methods employing standard frequency-domain features and common spatial patterns, and are comparable to methods such as the filter bank common spatial pattern and its variants. Our results indicate the efficacy of the proposed LSTM autoencoder approach for EEG motor imagery classification.
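The architecture outlined above — an LSTM encoder compressing frequency-domain EEG features into a low-dimensional code, a decoder reconstructing the sequence, and a three-layer MLP classifying the code — can be sketched as follows. This is a minimal illustration in PyTorch, not the thesis implementation: all layer sizes, the latent dimension, and the random input shapes are assumptions chosen for demonstration.

```python
import torch
import torch.nn as nn

class LSTMAutoencoder(nn.Module):
    """Sequence-to-sequence LSTM autoencoder: the encoder compresses a
    frequency-domain EEG feature sequence into a low-dimensional code,
    and the decoder reconstructs the input sequence from that code."""
    def __init__(self, n_features, latent_dim):
        super().__init__()
        self.encoder = nn.LSTM(n_features, latent_dim, batch_first=True)
        self.decoder = nn.LSTM(latent_dim, n_features, batch_first=True)

    def forward(self, x):
        # x: (batch, time, n_features)
        _, (h, _) = self.encoder(x)      # h: (num_layers, batch, latent_dim)
        code = h[-1]                     # low-dimensional representation
        # repeat the code across time steps and decode back to feature space
        repeated = code.unsqueeze(1).expand(-1, x.size(1), -1)
        recon, _ = self.decoder(repeated)
        return recon, code

class MIClassifier(nn.Module):
    """Three-layer MLP over the autoencoder code (hidden width is assumed)."""
    def __init__(self, latent_dim, n_classes, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, code):
        return self.net(code)

# Smoke test with random stand-in "EEG spectral features":
# 8 trials, 20 time steps, 32 frequency bins per step.
x = torch.randn(8, 20, 32)
ae = LSTMAutoencoder(n_features=32, latent_dim=16)
clf = MIClassifier(latent_dim=16, n_classes=4)
recon, code = ae(x)      # reconstruction matches the input shape
logits = clf(code)       # one score per motor imagery class
```

In practice the autoencoder would first be trained with a reconstruction loss (e.g. MSE between `recon` and `x`), after which the classifier is trained on the learned codes.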