Title
Self-learning algorithm for demand response in smart grid /
Author
Morsi, Sally Aladdin Ali.
Preparation Committee
Researcher / Sally Aladdin Ali Morsi
Supervisor / Adly Shehata Tag Eldien
Examiner / El-Sayed Mahmoud Abdel-Hamid
Examiner / Bassem Mamdouh ElHalawany
Subject
Self-learning algorithm for demand response.
Publication Date
2021.
Number of Pages
71 p.
Language
English
Degree
Master's
Specialization
Electrical and Electronic Engineering
Approval Date
18/4/2021
Place of Approval
Benha University - Faculty of Engineering at Shoubra - Electrical Engineering
Contents
Only 14 pages (out of 89) are available for public view.
Abstract

The population has grown sharply over the last decade, resulting in power demand in dense urban areas that is difficult to meet, especially with the traditional power grid, which is not designed to cope with frequent changes in its elements. Traditional power grids have long suffered from overloads, which manifest themselves as sudden outages. Increasing generation capacity, and the cost that comes with it, was not the best solution to this problem; instead, demand should be reduced to a level the grid can handle. Traditional power grids are essentially blind grids used mainly to deliver electricity from power plants to end users. In contrast, smart grids were introduced as an intelligent version of the traditional power grid: they can not only monitor demand curves, but also make predictions and decisions that enhance the performance of the grid without concern for massive population growth.
Smart grids have received a significant amount of attention and research, and have shown strong potential to mitigate and smooth power-consumption curves by adjusting and forecasting the cost function in real time in response to consumption fluctuations, in order to achieve the desired objectives. Power utilities introduce different demand response programs to encourage users to reschedule their consumption throughout the hours of the day. The main challenge for smart grid designers is to reduce both the cost and the Peak-to-Average Ratio (PAR) while maintaining a desired satisfaction level.
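For reference, the Peak-to-Average Ratio is the peak hourly demand divided by the mean hourly demand over the scheduling horizon. The short sketch below computes it for a hypothetical 24-hour household load profile; the load values are illustrative assumptions and do not come from the thesis.

    import numpy as np

    def peak_to_average_ratio(hourly_load):
        """Peak-to-Average Ratio (PAR): peak hourly demand divided by mean hourly demand."""
        hourly_load = np.asarray(hourly_load, dtype=float)
        return hourly_load.max() / hourly_load.mean()

    # Hypothetical 24-hour household load profile in kW (illustrative values only).
    load = [0.6, 0.5, 0.5, 0.5, 0.6, 0.8, 1.2, 1.8, 1.5, 1.2, 1.0, 1.1,
            1.3, 1.2, 1.1, 1.2, 1.6, 2.4, 3.0, 2.8, 2.2, 1.6, 1.0, 0.7]
    print(f"PAR = {peak_to_average_ratio(load):.2f}")

Flattening the evening peak of such a profile lowers the numerator while leaving the average roughly unchanged, which is how demand response reduces PAR.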
Reinforcement Learning (RL) has shown promising potential for generating optimized solutions to a variety of challenges. A reinforcement learning agent learns the optimal action by interacting with the environment, without the need for a mathematical model. The environment's parameters (e.g., the demand and price at the current time) are modeled as a number of possible states. In each state, the reinforcement learning agent is responsible for taking actions (e.g., choosing the current price or switching devices on or off). Each state-action pair mainly affects an immediate reward (e.g., increasing user satisfaction or reducing the cost). The agent therefore chooses the state-action pairs that maximize the long-term reward.
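To make the state/action/reward loop concrete, the following is a minimal tabular Q-learning sketch of the kind of update such an agent performs. The state encoding (hour, price level), the action set, the reward value, and the parameters alpha, gamma, and epsilon are illustrative assumptions, not the exact formulation used in the thesis.

    import random
    from collections import defaultdict

    alpha, gamma, epsilon = 0.1, 0.95, 0.1   # learning rate, discount factor, exploration rate (assumed values)
    Q = defaultdict(float)                   # Q[(state, action)] -> estimated long-term reward

    def choose_action(state, actions):
        """Epsilon-greedy policy: usually exploit the best known action, occasionally explore."""
        if random.random() < epsilon:
            return random.choice(actions)
        return max(actions, key=lambda a: Q[(state, a)])

    def q_update(state, action, reward, next_state, actions):
        """One tabular Q-learning step: move Q(s, a) toward r + gamma * max_a' Q(s', a')."""
        best_next = max(Q[(next_state, a)] for a in actions)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

    # Illustrative interaction step: state = (hour, price level), action = run or defer a device.
    actions = ["run", "defer"]
    state = (18, "high")
    action = choose_action(state, actions)
    reward = 1.0                             # e.g., cost saved minus a user-dissatisfaction penalty
    next_state = (19, "medium")
    q_update(state, action, reward, next_state, actions)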
This thesis presents the development and evaluation of a Multiple Agent Reinforcement Learning Algorithm for Efficient Demand Response in Smart Grid (MARLA-SG). It shows a simple and flexible way of choosing state elements that reduces the number of possible states, regardless of the device parameters (e.g., device type, range of operation, and maximum allowable delay). It also provides a simple way to represent the reward function regardless of the cost function used. Two learning methods, Q-learning and SARSA, were used, attaining cost reductions of 7.8% and 10.2% and PAR reductions of 12.16% and 9.6%, respectively.
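For context on the two methods compared, tabular Q-learning and SARSA differ only in the bootstrap target of the update: Q-learning uses the value of the greedy action in the next state (off-policy), whereas SARSA uses the value of the action the agent actually takes next (on-policy). The sketch below shows the two targets as an illustration, not the thesis implementation.

    def q_learning_target(Q, reward, next_state, actions, gamma=0.95):
        # Off-policy target: bootstrap from the best action available in the next state.
        return reward + gamma * max(Q[(next_state, a)] for a in actions)

    def sarsa_target(Q, reward, next_state, next_action, gamma=0.95):
        # On-policy target: bootstrap from the action the agent actually takes next.
        return reward + gamma * Q[(next_state, next_action)]

    # Either target is then applied in the same way:
    #   Q[(state, action)] += alpha * (target - Q[(state, action)])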