Title
Target recognition using neural networks
Author
Abo Mandour, Yasser Kamal Ali.
Preparation Committee
Researcher / Yasser Kamal Ali Abo Mandour
Supervisor / Yehia Mohamed Enab
Supervisor / Mohamed Mahmoud Kouta
Examiner / Yehia Mohamed Enab
Subject
Neural networks (Computer science).
Publication Date
2001.
Number of Pages
180 p.
Language
English
Degree
Master's
Specialization
Systems and Control Engineering
Award Date
1/1/2001
Place of Award
Mansoura University - Faculty of Engineering - Computers and Systems Department
Table of Contents
Only 14 pages (of 184) are available for public view.

Abstract

This thesis discusses pattern recognition techniques using neural network models. Pattern recognition applications come in many forms. In some instances there is an underlying, quantifiable statistical basis for the generation of patterns; in other instances the structure of the pattern provides the information fundamental to recognition; in still others neither case holds, but an architecture can be developed and trained to associate input patterns correctly with desired responses. A given problem may therefore admit one or more of these solution approaches.

When image processing algorithms are used to extract the main features of an image, some pre-processing techniques are applied first. Pre-processing is the name given to a family of procedures, including smoothing, enhancing, filtering and clearing-up, that manage a digital image so that the subsequent algorithms on the road to final classification can be made simpler and more accurate. Features may be symbolic, numerical or both: color is an example of a symbolic feature, while weight (measured in pounds) is a numerical one. A feature may also result from applying a feature-extraction algorithm or operator to the input data, and it may be a higher-level entity, for example a geometric descriptor of an image region or of a 3-D object appearing in the image.

The target to be recognized is an aeroplane image. Two main feature groups were determined: (1) curvatures, measured by boundary-pixel algorithms such as Fourier descriptors, moment invariants, the Hough transform or a line-straightness algorithm, and (2) region points, extracted by object analysis of the image. To analyse an object, the objects in the image must first be identified. Object analysis describes image attributes such as area, perimeter, roundness, elongation, Feret diameter, compactness, major axis length and angle, minor axis length and angle, centroid, grey centroid and integrated density.

The thesis covers recognition using Multilayer Perceptrons (MLPs), feedforward neural networks trained with the standard backpropagation algorithm. They are supervised networks, so they require a desired response during training. Because they learn how to transform input data into a desired response, they are widely used for pattern classification. With one or two hidden layers they can approximate virtually any input-output map, and they have been shown to approach the performance of optimal statistical classifiers in difficult problems.

Images representing both feature groups are presented to the system as training samples. The input patterns are ASCII files containing a matrix of the common object-analysis attributes and the Fourier descriptors of the important boundary points; each column of that matrix represents one channel (processing element, PE) of input data, except when a column contains symbolic rather than numeric data. The neural network model automatically normalizes the input data, either to the range 0 to 1 or to -1 to 1, depending on the type of non-linearity selected in the layer panels; this normalization can be overridden after the network is constructed. The output patterns are the desired data for the two feature groups, curvature and regions. The supervised learning scheme specifies when the weights are updated.
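As a rough illustration of the two feature groups described above, the sketch below computes a few of the listed region attributes and a set of boundary Fourier descriptors from a binary object mask. The abstract does not name an implementation, so the use of NumPy and scikit-image (measure.regionprops, measure.find_contours) and the particular descriptor normalisation are assumptions made for illustration only.

# Illustrative sketch only: library choices (NumPy, scikit-image) and the
# descriptor normalisation are assumptions, not taken from the thesis.
import numpy as np
from skimage import measure

def region_features(binary_image):
    """Object-analysis attributes of the largest object in a binary mask."""
    labels = measure.label(binary_image)
    props = max(measure.regionprops(labels), key=lambda p: p.area)
    compactness = (props.perimeter ** 2) / (4.0 * np.pi * props.area)
    return {
        "area": props.area,
        "perimeter": props.perimeter,
        "compactness": compactness,
        "major_axis_length": props.major_axis_length,
        "minor_axis_length": props.minor_axis_length,
        "major_axis_angle": props.orientation,
        "centroid": props.centroid,
    }

def fourier_descriptors(binary_image, n_coeffs=16):
    """Fourier descriptors of the longest object boundary, normalised so
    that they are insensitive to translation, rotation and scale (one
    common formulation; the thesis may use a different one)."""
    contour = max(measure.find_contours(binary_image, 0.5), key=len)
    boundary = contour[:, 1] + 1j * contour[:, 0]   # boundary as x + iy
    coeffs = np.fft.fft(boundary)
    coeffs[0] = 0.0                      # drop DC term: translation invariance
    mags = np.abs(coeffs)                # magnitudes: rotation/start-point invariance
    mags = mags / (mags[1] + 1e-12)      # divide by first harmonic: scale invariance
    return mags[1:n_coeffs + 1]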
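The MLP and its training scheme can likewise be sketched in a few lines of NumPy. The single hidden layer, sigmoid non-linearity, learning rate and squared-error cost below are assumptions; the abstract only states that inputs are normalised to 0-1 or -1-1 and that, under batch learning, the weights are updated once per pass over the training set.

# Minimal sketch of a one-hidden-layer perceptron trained with batch
# backpropagation; layer sizes and learning rate are illustrative guesses.
import numpy as np

def normalize01(X):
    """Min-max normalisation of each input column (channel/PE) to [0, 1],
    mirroring the automatic normalisation described above."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    return (X - lo) / np.where(hi > lo, hi - lo, 1.0)

class MLP:
    def __init__(self, n_in, n_hidden, n_out, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.5, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.5, (n_hidden, n_out))
        self.b2 = np.zeros(n_out)
        self.lr = lr

    @staticmethod
    def _sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def forward(self, X):
        self.h = self._sigmoid(X @ self.W1 + self.b1)   # hidden activations
        return self._sigmoid(self.h @ self.W2 + self.b2)

    def train_batch(self, X, D, epochs=1000):
        """Batch learning: one weight update per presentation of the
        entire training set (X = inputs, D = desired responses)."""
        for _ in range(epochs):
            Y = self.forward(X)
            err = Y - D
            # Backpropagate the squared-error gradient through both layers.
            d_out = err * Y * (1.0 - Y)
            d_hid = (d_out @ self.W2.T) * self.h * (1.0 - self.h)
            self.W2 -= self.lr * self.h.T @ d_out / len(X)
            self.b2 -= self.lr * d_out.mean(axis=0)
            self.W1 -= self.lr * X.T @ d_hid / len(X)
            self.b1 -= self.lr * d_hid.mean(axis=0)
        return float(((self.forward(X) - D) ** 2).mean())   # final MSE

A feature matrix loaded from the ASCII input files would be passed through normalize01 before being given to train_batch; the falling return value of train_batch corresponds to the error dropping towards zero during training, with the caveat about over-training noted below.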
Batch learning (training by samples) updates the weights after the presentation of the entire training set. During training, the input and desired data are repeatedly presented to the network; as the network learns, the error drops towards zero. A lower error, however, does not always mean a better network, since it is possible to over-train it. Finally, to test the system another set of bitmaps was prepared: twenty images taken from images unused in the training phase, this time subjected to invariance transformations (rotation, translation, scaling and noising). The recognition rate was 90% for the original images and about 83% for the transformed images.
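The evaluation step can be pictured with a sketch like the one below, which measures the recognition rate on the original held-out images and on versions perturbed by the four transformations mentioned above (rotation, translation, scaling, additive noise). The scipy.ndimage calls, the perturbation amounts and the classify callback are illustrative assumptions, not values taken from the thesis.

# Illustrative evaluation sketch; perturbation amounts and the classify()
# callback are hypothetical, not taken from the thesis.
import numpy as np
from scipy import ndimage

def perturbations(image, rng):
    """The four invariance tests mentioned in the abstract."""
    yield "rotation", ndimage.rotate(image, angle=15, reshape=False)
    yield "translation", ndimage.shift(image, shift=(5, -3))
    yield "scaling", ndimage.zoom(image, zoom=1.2)
    yield "noise", image + rng.normal(0.0, 0.05, image.shape)

def recognition_rates(classify, images, labels, rng=None):
    """Recognition rate for the original images and for each perturbation,
    mirroring the 'original vs. transformed' results quoted above."""
    if rng is None:
        rng = np.random.default_rng(0)
    hits, counts = {}, {}
    for img, lab in zip(images, labels):
        cases = [("original", img)] + list(perturbations(img, rng))
        for name, variant in cases:
            hits[name] = hits.get(name, 0) + int(classify(variant) == lab)
            counts[name] = counts.get(name, 0) + 1
    return {name: hits[name] / counts[name] for name in hits}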