Abstract

A major source of problems in computer networks is the packet queues that build up at intermediate nodes between the sources and destinations of communication sessions. In the Internet, the largest computer network in existence today, the number of such nodes is immense, so these problems can be damaging. The problems caused by swollen queues include excessive response times, packet loss, congestion, and wasted resources, all of which degrade the network's Quality of Service (QoS). Thus, to improve network QoS, queueing behavior ought to be studied, predicted, and analyzed, and that is the aim of this thesis.

We focus on the queueing phenomenon at two levels: the node level and the aggregate level. At the node level, we consider a bridge linking two networks. Each network generates traffic that is either internal, destined for nodes on the same network, or external, destined for nodes on the other network. We model the bridge as a system of two back-to-back queues that interfere with one another, and we analyze the model in two modes: discrete and continuous. The discrete model is the natural choice, since the bridge, like all computer equipment, is digital by nature. We solve this model by simulation, as the analytical solution involves cumbersome mathematics that precludes obtaining useful results easily. The continuous model is an approximation, but has the advantage of easier mathematical tractability.

At the aggregate level, we consider cloud computing, where a packet carries a request for service. At the cloud's main data center, thousands of requests converge, asking for different services. A resident scheduler assigns resources to these requests in order to provide the services needed. But since there are typically fewer resources than requests, the requests must wait in a queue until picked by the scheduler.
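The abstract does not specify the details of the two-queue bridge model or its interference mechanism, so the following is only an illustrative sketch: a discrete-time simulation in which each queue receives Bernoulli arrivals and the bridge serves one packet per slot, alternating between the two queues when both are backlogged (this alternating discipline, and all parameter names, are our assumptions, not the thesis's actual model).

```python
import random

def simulate_bridge(p1, p2, slots, seed=0):
    """Illustrative discrete-time sketch of two interfering queues.

    Each slot, queue i receives a packet with probability p_i; the bridge
    serves at most one packet per slot, alternating between queues when
    both are backlogged (a simple stand-in for the interference).
    Returns the time-average length of each queue.
    """
    rng = random.Random(seed)
    q = [0, 0]          # current backlog of each queue
    turn = 0            # whose turn it is to be served
    totals = [0, 0]     # running sums for time averages
    for _ in range(slots):
        if rng.random() < p1:
            q[0] += 1
        if rng.random() < p2:
            q[1] += 1
        # serve one packet, preferring the queue whose turn it is
        if q[turn] > 0:
            q[turn] -= 1
        elif q[1 - turn] > 0:
            q[1 - turn] -= 1
        turn = 1 - turn
        totals[0] += q[0]
        totals[1] += q[1]
    return totals[0] / slots, totals[1] / slots
```

Since the combined arrival rate must stay below the single packet-per-slot service capacity for stability, a run such as `simulate_bridge(0.3, 0.3, 100_000)` should yield small, finite average backlogs, whereas `p1 + p2` near 1 will show the queues swelling.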
The difficulty is that the completion time of each request can differ. Hence, the natural model for this cloud queueing phenomenon is a multi-server queueing system. A ready-made mathematical solution exists for this model, but it is excessively time- and space-consuming. We overcome this challenge by deriving a novel formula that plays the same role as that solution over a shorter operational range, namely, when the request arrival rate is less than the cloud service rate. By facilitating cloud queueing analysis, this formula can help improve cloud QoS, even if only partially.
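The standard "ready-made" solution for a multi-server (M/M/c) queue is the classical Erlang C analysis, which is valid exactly in the operational range the abstract names, i.e. when the arrival rate is below the total cloud service rate. As a hedged illustration only (the function and parameter names are ours; the thesis's novel formula is not reproduced here), the mean queueing delay can be computed as:

```python
from math import factorial

def mmc_wait(lam, mu, c):
    """Mean waiting time in queue, Wq, for an M/M/c queue (Erlang C).

    lam: request arrival rate; mu: per-server service rate; c: servers.
    Valid only in the stable regime lam < c*mu, matching the abstract's
    operational range (arrival rate below the cloud service rate).
    """
    a = lam / mu          # offered load in Erlangs
    rho = a / c           # per-server utilization
    if rho >= 1:
        raise ValueError("unstable: require lam < c*mu")
    # Erlang C: probability that an arriving request must wait
    tail = a**c / (factorial(c) * (1 - rho))
    p_wait = tail / (sum(a**k / factorial(k) for k in range(c)) + tail)
    # Mean waiting time in queue
    return p_wait / (c * mu - lam)

# Example: 3 servers, lam = 2, mu = 1 gives Wq = 4/9 time units.
```

The factorials and the summation over all `c` server states are what make this exact solution expensive for a data center with thousands of servers, which motivates a cheaper formula restricted to the stable regime.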