Title
An Adaptive Engine for RTS Games Using Opponent Modelling
Author
ElNahas, Ghada Mohamed Farouk Fouad.
Preparation Committee
Researcher / Ghada Mohamed Farouk Fouad ElNahas
Supervisor / Mustafa Mahmoud Aref
Supervisor / Ibrahim Fathy Moawad
Publication Date
2014.
Number of Pages
80 p.
Language
English
Degree
Master's
Specialization
Computer Science
Approval Date
1/1/2014
Place of Approval
Egyptian Universities Libraries Consortium - Computer Science
Table of Contents
Only 14 pages are available for public view

Abstract

Real-Time Strategy (RTS) games are war simulation games in which players control a number of units in real time to achieve a certain goal, such as destroying the enemy's units or controlling its territory. RTS games are a genre that is becoming increasingly popular in computer science, as they offer challenging opportunities for research in planning, learning, opponent modelling, and reasoning. The importance of RTS research is not restricted to the game industry; it is also of interest to other fields, such as robotics and the military industry.
Opponent modelling, a method of representing information about the enemy, is one of the challenging research areas in RTS games. Most current approaches to opponent modelling in RTS games have proved inefficient: they are either computationally expensive or require a large amount of online gameplay before a successful model can be learned. Unfortunately, most successful approaches are also game-specific, as they depend mainly on expert knowledge of the game.
This thesis presents a new approach that automatically generates significant opponent models for RTS games. The approach is also designed to deal with the performance degradation caused by changes in the player's behaviour during online gameplay. The proposed approach has three main objectives:
• Generalization: Unlike most existing opponent modelling approaches, the proposed approach can be applied generically to any RTS game; no expert knowledge of the game's domain features is needed.
• Robust adaptability: To better cope with opponents that switch strategies and to increase the robustness of the approach, performance is tracked after classification to decide whether the opponent has been classified correctly or needs reclassification during online play.
• Efficiency: In complex domains such as RTS games, approaches that rely on online learning are not efficient. For many of these approaches, a game commonly ends before any effective behaviour is established, so it is difficult for the players of a video game to detect and understand that the game AI is learning at all. In the proposed approach, all learning is performed first in an offline phase, using a case base of previously stored gameplay observations. This phase constructs the learned models, which are then exploited during an online phase of gameplay (a sketch of such an offline phase is given after this list).
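The abstract does not reproduce the thesis's learning algorithms, but the following minimal sketch illustrates how an offline phase of this kind could group a case base of stored gameplay observations into a base of opponent models. The feature layout, the use of k-means clustering, and the OpponentModel structure with its counter_strategy field are illustrative assumptions, not the actual design described in the thesis.

# Minimal sketch (not the thesis's actual algorithm) of an offline phase
# that groups stored gameplay observations into a base of opponent models.
from dataclasses import dataclass
import numpy as np
from sklearn.cluster import KMeans

@dataclass
class OpponentModel:
    model_id: int
    centroid: np.ndarray      # average observation features for this cluster
    counter_strategy: str     # behaviour the game AI should adopt against it

def build_model_base(observations: np.ndarray, n_models: int = 4) -> list:
    """Cluster stored observation feature vectors into opponent models."""
    kmeans = KMeans(n_clusters=n_models, n_init=10, random_state=0)
    kmeans.fit(observations)
    return [
        OpponentModel(model_id=i,
                      centroid=centre,
                      counter_strategy=f"counter_strategy_{i}")  # placeholder mapping
        for i, centre in enumerate(kmeans.cluster_centers_)
    ]

Any clustering or classification scheme could stand in here; the point is only that model construction happens entirely offline, so no learning cost is paid during gameplay.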
The proposed approach consists of two phases: an offline phase and an online phase. The offline phase is responsible for designing the player models and tackles the crucial problem of what information should be included in a model. The online phase, on the other hand, is responsible for classifying the opponent's behaviour into one of the models in the model base; this classification is the basis on which the game AI decides which behaviour to adopt. An evaluation module is also incorporated for continuous performance monitoring after classification, which increases the robustness of the approach, as sketched below.
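As a companion to the offline sketch above, the following hedged sketch shows how an online phase might classify the opponent against the model base and trigger reclassification when monitored performance degrades. The game interface (observe_opponent, adopt, performance_score, step, finished), the nearest-centroid classifier, and the window and threshold values are hypothetical stand-ins for the thesis's actual online phase and evaluation module.

# Minimal sketch (not the thesis's actual evaluation module) of an online
# phase: classify the opponent to the nearest stored model, adopt the
# corresponding counter-strategy, and reclassify whenever monitored
# performance drops below a threshold.
import numpy as np

def classify(observation: np.ndarray, model_base) -> "OpponentModel":
    """Return the stored model whose centroid is closest to the observation."""
    return min(model_base,
               key=lambda model: np.linalg.norm(observation - model.centroid))

def online_phase(game, model_base, window: int = 50, threshold: float = 0.4):
    current = classify(game.observe_opponent(), model_base)
    game.adopt(current.counter_strategy)
    scores = []
    while not game.finished():
        scores.append(game.performance_score())  # e.g. relative material balance
        if len(scores) >= window and np.mean(scores[-window:]) < threshold:
            # Performance degraded: the opponent may have switched strategy,
            # so reclassify against the latest observation.
            current = classify(game.observe_opponent(), model_base)
            game.adopt(current.counter_strategy)
            scores.clear()
        game.step()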
Experiments showed that the approach can be applied generically to any RTS game, with no prior knowledge of the game's domain features needed. The approach was able to construct significant opponent models automatically and to exploit them immediately (i.e., without trials and without resource-intensive learning) in online play. Moreover, monitoring performance after opponent model classification and deciding whether reclassification is needed fulfils an important requirement for the AI adaptability that the approach achieves. The approach was tested in the 3D RTS game GLest on two of its maps, namely ‘Angry Forest’ and ‘Dark Waters’. Analysis of the results revealed that the approach improved the performance of the AI player significantly: 64% compared to 28% for the original AI without opponent modelling on the ‘Angry Forest’ map, and 66% compared to 20% on the ‘Dark Waters’ map.