Semantic Labelling of Objects in Street Scenes
Thesis by Andreas Wittmann
Supervised by Margrit Gelautz and Florian Seitner
Abstract
An automatic and robust semantic interpretation of street scenes is required to improve driver assistance systems and to achieve fully autonomous driving. Recent publications have achieved remarkable prediction performance using deep learning. However, the evaluation of neural networks is computationally demanding. Classical machine learning approaches can reduce both algorithmic complexity and computational demand. In this diploma thesis, we first give a comprehensive literature review of classical machine learning approaches for semantic scene labelling, with a focus on street scenes. Furthermore, we compare pixel-wise annotated, freely available street-scene datasets for the training and evaluation of semantic scene labelling algorithms. The main part of this thesis documents the development and implementation of our semantic scene labelling system. We implement two texture- and context-based features and compute them on the fly within a random forest. We extensively evaluate the influence of the feature and random forest parameters on the prediction results and compare the performance of the two features. Our results show that textural features computed over semantically unconnected regions fail to robustly detect small objects in challenging street scenes. Providing additional information through a combination of multiple features and a pre-segmentation of the image into semantically connected regions could improve the prediction results.
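To illustrate the general idea of pixel-wise labelling with texture features and a random forest, the following minimal sketch may be helpful. It is not the thesis implementation (which computes its features on the fly inside the forest); the specific filter responses, the use of SciPy and scikit-learn, and all parameter values are assumptions chosen purely for illustration.

# Illustrative sketch only, not the thesis implementation: per-pixel texture
# responses are stacked into a feature vector per pixel and classified with a
# random forest. Assumes NumPy, SciPy and scikit-learn are available.
import numpy as np
from scipy import ndimage
from sklearn.ensemble import RandomForestClassifier

def texture_features(gray):
    """Stack a few per-pixel texture responses (hypothetical feature choice)."""
    feats = [
        gray,                                    # raw intensity
        ndimage.gaussian_filter(gray, sigma=2),  # smoothed local context
        ndimage.sobel(gray, axis=0),             # vertical gradient
        ndimage.sobel(gray, axis=1),             # horizontal gradient
        ndimage.gaussian_laplace(gray, sigma=2), # blob / edge response
    ]
    return np.stack(feats, axis=-1).reshape(-1, len(feats))

# Toy data standing in for a training image and its pixel-wise annotation.
rng = np.random.default_rng(0)
image = rng.random((64, 64))
labels = (image > 0.5).astype(int).ravel()       # placeholder ground truth

X = texture_features(image)
clf = RandomForestClassifier(n_estimators=50, max_depth=10, random_state=0)
clf.fit(X, labels)

# Predict a class label for every pixel and reshape back to image layout.
prediction = clf.predict(texture_features(image)).reshape(image.shape)
print(prediction.shape)

In this sketch the features are precomputed for the whole image; computing them on the fly inside the trees, as described in the abstract, avoids storing the full feature stack but requires a custom forest implementation.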