Combined 3D-Vision and Adaptive Front-Lighting System for Safe Autonomous Driving
Research project in the area of Image and Video Analysis & Synthesis.
About this Project
Driven by the trend towards highly automated or even autonomous driving, the automotive industry is currently undergoing dramatic change. The evolution of driver assistance systems, including adaptive front-lighting technologies, is making enormous progress and paving the way. A fundamental topic in this context is the further improvement of the sensor systems that supply the information used to regulate and control these assistance systems. In particular, this requires solving problems in perception of the environment.
Despite the current predominance of development efforts in sensor data fusion, which merge information from several different sensor types (cameras, long- and short-range radar, lidar) into one overall representation, optical image capture in the visible spectral range is still the dominant method for detecting objects and building a model of the driving environment.
At night, the conditions for this optical survey task are difficult and require measures at the overall system level to significantly improve the effective performance of driver assistance systems. Consider, for example, a collision prevention system that brakes momentarily because the front-facing camera did not deliver data of sufficient quality: since there is no real danger of collision and thus no need for braking, the driver would be severely irritated by this unmotivated behaviour. A second example, and the incentive for this project, is the improvement of the video-based control of a high beam assistant or a glare-free high beam system. Here, improper evaluation of geometric information (e.g. distance measurements) can lead to irritating use of the high beam, potentially distracting the driver or oncoming traffic or exposing them to discomfort glare.

To substantially improve this optical survey of the environment, the goal of the project is to enable stereoscopic, three-dimensional image capture using cameras installed directly in a pair of headlamps. In contrast to the state of the art, a single central camera mounted behind the windscreen, each camera in this approach is not only mounted in close vicinity to a headlamp but interacts directly with the headlamp control unit.
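The stereo geometry makes the headlamp placement attractive: for a rectified stereo pair, depth follows from disparity as Z = f·B/d, so the wide baseline B between the two headlamps yields larger disparities, and therefore finer depth resolution at long range, than a narrow camera pair behind the windscreen. The following minimal Python sketch illustrates this relationship; all numbers (focal length, baselines, distance) are illustrative assumptions, not project parameters.

    # Minimal sketch: depth from disparity for a rectified stereo pair.
    # Headlamp-mounted cameras give a baseline roughly equal to the
    # vehicle's track width; the values below are illustrative only.

    def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
        """Distance Z (metres) of a point seen with `disparity_px` pixels
        of horizontal offset between the two views: Z = f * B / d."""
        if disparity_px <= 0:
            raise ValueError("disparity must be positive for a visible point")
        return focal_px * baseline_m / disparity_px

    # A wider baseline produces a larger disparity for the same distance,
    # i.e. finer depth resolution at long range:
    f = 1200.0           # focal length in pixels (illustrative)
    z = 100.0            # true object distance in metres
    for b in (0.3, 1.5):     # windscreen-style vs. headlamp-to-headlamp baseline
        d = f * b / z        # disparity the object would produce
        print(f"baseline {b} m -> disparity {d:.1f} px "
              f"(depth check: {depth_from_disparity(f, b, d):.0f} m)")

With f = 1200 px and an object at 100 m, the 0.3 m baseline yields only 3.6 px of disparity while the 1.5 m baseline yields 18 px, which is why the same pixel-level matching error translates into a much smaller depth error for the headlamp configuration.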
The integrated, intelligent vision system to be developed is expected to deliver outcomes and findings for the following automation steps:
- To strongly improve image capture, and therefore environment perception, the headlamps provide tailored, adaptive illumination of the scene for their assigned lamp cameras.
- The stereoscopic, high-resolution camera system provides significantly improved data for glare-free high beam control, such as object classification, position (horizontal, vertical), distance, direction of movement, and lane (see the sketch after this list).
- The stereoscopic camera system enables substantially improved processing of data relevant for autonomous driving, such as detected and labelled lanes, road topology, free space, obstacles, traffic participants, and pedestrians.
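To illustrate how such object data could feed glare-free high beam control, the following is a minimal Python sketch assuming a hypothetical segmented matrix-LED headlamp: segments whose angular range covers a detected road user are dimmed, the rest stay at full high beam. The segment geometry, the DetectedObject fields, and all parameter values are invented for illustration and do not describe the project's implementation.

    # Hypothetical sketch of glare-free high beam control: switch off
    # exactly the matrix-LED segments that cover detected road users.
    from dataclasses import dataclass

    @dataclass
    class DetectedObject:
        azimuth_deg: float   # horizontal angle to the object, e.g. from stereo triangulation
        width_deg: float     # angular width of the object
        is_road_user: bool   # e.g. oncoming car, preceding car, pedestrian

    def segment_mask(objects: list[DetectedObject],
                     n_segments: int = 24,
                     fov_deg: float = 30.0,
                     margin_deg: float = 1.0) -> list[bool]:
        """One on/off flag per LED segment (True = full high beam).
        Segments overlapping a road user, plus a safety margin, are off."""
        seg_width = fov_deg / n_segments
        mask = [True] * n_segments
        for obj in objects:
            if not obj.is_road_user:
                continue
            lo = obj.azimuth_deg - obj.width_deg / 2 - margin_deg
            hi = obj.azimuth_deg + obj.width_deg / 2 + margin_deg
            for i in range(n_segments):
                seg_lo = -fov_deg / 2 + i * seg_width
                seg_hi = seg_lo + seg_width
                if seg_lo < hi and seg_hi > lo:   # segment overlaps the object
                    mask[i] = False
        return mask

    # Example: an oncoming car slightly left of centre dims three segments.
    print(segment_mask([DetectedObject(azimuth_deg=-4.0, width_deg=2.5, is_road_user=True)]))

The quality of such a controller depends directly on the accuracy of the angular position and distance estimates listed above, which is the motivation for the improved stereo measurements.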
Project Partners
Emotion3D GesmbH (Wien), ZKW Lichtsysteme GesmbH (Wieselburg)
Funding provided by
Austrian Research Promotion Agency (FFG)