Title
Bayesian Estimations for the Inverted Kumaraswamy Distribution Parameters
Author
Osman, Sara Moheb Abd El-Hamid
Preparation Committee
Researcher / Sara Moheb Abd El-Hamid Osman
Supervisor / Ahmed Mohamed Rashad Mousa
Examiner / Mohammed Yusuf Abdelaziz Ali
Subject
Mathematics - Combinatorial Analysis.
Publication Date
2020.
Number of Pages
133 p.
Language
English
Degree
Master's
Specialization
Numerical Analysis
Date of Approval
1/1/2020
Place of Approval
Helwan University - Faculty of Applied Arts - Mathematics
Abstract
Bayesian Estimations for the Inverted Kumaraswamy Distribution Parameters
Sara Moheb, MSc.
Faculty of Science, Helwan University, 2020
Various real-life data sets, such as hydrological random variables and growth data, are not well fitted by ordinary probability distributions such as the normal, log-normal, beta, and empirical distributions. Therefore, Al-Fattah et al. (2017) introduced an alternative distribution to fit such data, called the inverted Kumaraswamy distribution.
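For reference, a sketch of the inverted Kumaraswamy distribution in its commonly cited parameterization, with shape parameters \alpha and \beta (this notation is assumed here rather than quoted from the thesis):

F(t; \alpha, \beta) = \left(1 - (1 + t)^{-\alpha}\right)^{\beta}, \qquad t > 0,\ \alpha, \beta > 0,
f(t; \alpha, \beta) = \alpha \beta \, (1 + t)^{-(\alpha + 1)} \left(1 - (1 + t)^{-\alpha}\right)^{\beta - 1}.

It can be obtained by applying the transformation T = 1/X - 1 to a Kumaraswamy-distributed X on (0, 1).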
Maximum likelihood and Bayesian estimation for the shape parameters of the inverted Kumaraswamy distribution, as well as some lifetime parameters, namely the reliability, hazard rate, and reversed hazard rate functions, were studied under a progressive Type-II censoring scheme. Different cases of the prior distribution (non-informative and informative gamma priors) were considered. The Bayes estimators were obtained relative to both a symmetric loss function (squared error, SELF) and asymmetric loss functions (LINEX and general entropy, GELF) by using Lindley's approximation. A Monte Carlo simulation study was conducted with a Mathematica program, and the analytical results were applied to three real data sets. It was concluded that the Bayes estimators performed better than the MLEs; in addition, the Bayes estimators under the general entropy loss function had the smallest MSEs compared with the other loss functions. Moreover, the accuracy of the estimators increased and the mean square error decreased as the sample size increased.
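As an illustration of the simulation idea only (not the thesis' Mathematica program), the following is a minimal Python sketch. It assumes complete samples rather than progressive Type-II censoring, and the parameter values and sample settings are illustrative assumptions, not values taken from the thesis. It draws inverted Kumaraswamy samples by inverse transform and evaluates the maximum likelihood estimates and their mean square errors over repeated samples.

# Minimal Monte Carlo sketch: MLE of the inverted Kumaraswamy shape parameters
# from complete samples (illustrative settings, not the thesis' study design).
import numpy as np
from scipy.optimize import minimize

def rikum(n, alpha, beta, rng):
    """Inverse-transform sampling from F(t) = (1 - (1 + t)**(-alpha))**beta, t > 0."""
    u = rng.uniform(size=n)
    return (1.0 - u ** (1.0 / beta)) ** (-1.0 / alpha) - 1.0

def neg_log_lik(params, t):
    """Negative log-likelihood of the inverted Kumaraswamy distribution."""
    alpha, beta = params
    if alpha <= 0 or beta <= 0:
        return np.inf
    g = 1.0 - (1.0 + t) ** (-alpha)  # inner CDF term
    return -np.sum(np.log(alpha) + np.log(beta)
                   - (alpha + 1.0) * np.log1p(t)
                   + (beta - 1.0) * np.log(g))

rng = np.random.default_rng(2020)
alpha_true, beta_true, n, reps = 2.0, 1.5, 100, 1000  # assumed values
estimates = []
for _ in range(reps):
    t = rikum(n, alpha_true, beta_true, rng)
    fit = minimize(neg_log_lik, x0=[1.0, 1.0], args=(t,), method="Nelder-Mead")
    estimates.append(fit.x)
estimates = np.array(estimates)
mse = np.mean((estimates - [alpha_true, beta_true]) ** 2, axis=0)
print("mean MLE:", estimates.mean(axis=0), "MSE:", mse)

Reproducing the Bayesian results summarized above would additionally require the progressive Type-II censoring scheme, the prior specifications, and Lindley's approximation for the posterior expectations under the SELF, LINEX, and GELF loss functions.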