Autonomous vehicles, also known as self-driving cars, are vehicles that can operate with little or no human intervention. They collect information about the surrounding environment to emulate safe human driving behavior. These highly automated vehicles promise many potential benefits: increased road safety, greater independence, cost savings, higher productivity, reduced congestion, and environmental gains. Autonomous vehicles rely on sensors, actuators, driving algorithms, machine learning technologies, and powerful microcontrollers with GPUs to execute the self-driving software. Self-driving software uses semantic segmentation algorithms that take road-scene images as input and assign a label to each pixel. These labels describe the object class that the pixels represent (road, traffic light, vehicle, human, etc.).
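The per-pixel labeling described above can be sketched as follows. This is a minimal illustration, not any particular network's output: the class IDs and the tiny 4x4 label map are invented for the example (real Cityscapes label IDs differ).

```python
import numpy as np

# Hypothetical class IDs for illustration only; real datasets such as
# Cityscapes define their own, larger label set.
CLASSES = {0: "road", 1: "traffic light", 2: "vehicle", 3: "human"}

# A toy 4x4 "segmentation map": one class ID per pixel, the kind of
# output a semantic segmentation network produces for a road-scene image.
label_map = np.array([
    [0, 0, 0, 0],
    [0, 2, 2, 0],
    [0, 2, 2, 3],
    [1, 0, 0, 3],
])

# Count how many pixels were assigned to each class.
ids, counts = np.unique(label_map, return_counts=True)
for class_id, count in zip(ids, counts):
    print(CLASSES[class_id], count)
```

Each pixel carries exactly one class ID, so the label map has the same spatial shape as the input image.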
Semantic segmentation is very powerful, as it helps self-driving software understand scene images at the pixel level. In recent years, after the emergence of convolutional neural networks (CNNs), segmentation has made huge progress. Many CNN-based semantic segmentation methods have been developed, trained, and evaluated on large-scale datasets. These networks are designed and tested to work efficiently with clear images, and all the images in the large-scale datasets are clear images. Yet semantic segmentation methods do not take into account the different types of defects in images coming from video cameras. Such defects can result from bad weather or electronic noise, and they decrease the performance and accuracy of semantic segmentation methods, which can in turn lead to wrong driving decisions by the vehicle's self-driving system. Overall, the state-of-the-art methods consider only performance on clear images, as they are limited by the existing datasets, and ignore performance on unclear images. Semantic segmentation methods should take these challenges into account and handle severe imaging conditions.
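Two of the defect types just mentioned, electronic noise and blur, can be simulated with simple image operations. The following is a minimal sketch under stated assumptions (a toy grayscale image with values in [0, 1]; additive Gaussian noise for sensor noise and a box filter for blur), not the corruption pipeline used in this work:

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy grayscale "road scene": pixel intensities in [0, 1].
clean = rng.random((8, 8))

def add_gaussian_noise(img, sigma=0.1):
    """Simulate electronic sensor noise with additive Gaussian noise."""
    noisy = img + rng.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0.0, 1.0)

def box_blur(img, k=3):
    """Simulate defocus/motion blur with a simple k x k box filter."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

noisy = add_gaussian_noise(clean)
blurred = box_blur(clean)
```

Applying such corruptions to a clean evaluation set yields degraded copies of the same scenes, so segmentation accuracy on clean and corrupted inputs can be compared directly.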
Here, we study road semantic segmentation methods under different challenges. In this work, we address four kinds of severe imaging conditions: fog, rain, blurring, and noise, and we study the performance of semantic segmentation under each of these imaging defects. Because collecting real datasets with these conditions is very hard, we use the Cityscapes dataset and introduce fog, rain, blurring, and noise into its clear images. In this work, we not only generate new evaluation datasets but also study the performance of two powerful semantic segmentation methods against imaging-defect challenges. We study DeepLabv3+ and PSPNet, which are rated as two of the top methods in semantic segmentation; they score mIoU of 82.1% and 81.2%, respectively, on the Cityscapes test set.
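The mIoU figures quoted above are mean intersection-over-union scores. A minimal sketch of how this metric is computed per class and averaged (the tiny prediction and ground-truth arrays are invented for illustration):

```python
import numpy as np

def mean_iou(pred, gt, num_classes):
    """Mean intersection-over-union over classes present in pred or gt."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:  # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))

# Toy ground-truth and predicted label maps (2 classes).
gt   = np.array([[0, 0, 1, 1],
                 [0, 0, 1, 1]])
pred = np.array([[0, 0, 1, 1],
                 [0, 1, 1, 1]])

score = mean_iou(pred, gt, num_classes=2)  # (3/4 + 4/5) / 2 = 0.775
```

Because IoU penalizes both missed pixels and false positives for each class before averaging, it is a stricter summary of segmentation quality than plain pixel accuracy.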