Physics, Metrology & Control Engineering

Synthetic Image Data - How cars learn to recognize their surroundings correctly

Ref.-No. 5762

Keywords: simulation of camera systems, virtual images, synthetic images, autonomous driving, lens simulation, lens aberration, numerical simulation, combined convolution algorithm

Almost all car manufacturers are currently developing camera-based autonomous or semi-autonomous vehicles. These systems are based on artificial intelligence (AI): to ensure that they reliably detect objects and avoid misinterpretations, they are trained on a wide variety of driving scenes. If these driving scenes, i.e. the image data used to train the AI evaluation unit, are generated synthetically, the number of cost-intensive real-world training drives can be greatly reduced, which significantly lowers costs.

The new method can be used to simulate images from a digital camera. Real images taken with an optical system A serve as the basis and are used to generate synthetic images that exhibit the modified properties of an optical system B. In contrast to current methods, the new image is not generated by first deconvolving the properties of lens A and then applying the parameters of the modified lens B, but directly by a single convolution with the difference term Δ(B/A). To this end, the point spread functions of both lenses are first measured physically in the laboratory. From these, a delta transfer function is calculated that mathematically represents the optical difference between the two lenses. It can then be used to simulate virtual images of camera B faithfully from real images of camera A.
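The core step, forming a delta transfer function from the two measured point spread functions and applying it as a single convolution, can be illustrated with a short numerical sketch. The following Python/NumPy code is an assumption made for clarity, not the patented implementation: the function names, the Wiener-style regularisation term eps, and the single-channel grayscale input are illustrative choices that are not specified in the source.

```python
# Illustrative sketch (assumed implementation, not the patented method):
# simulate an image of camera B from a real image of camera A by applying
# a delta transfer function Delta(B/A) in the Fourier domain.
# psf_a and psf_b are assumed to be the laboratory-measured point spread
# functions of lens A and lens B; images are single-channel float arrays.
import numpy as np

def delta_transfer_function(psf_a, psf_b, shape, eps=1e-3):
    """Ratio of the optical transfer functions OTF_B / OTF_A (regularised)."""
    otf_a = np.fft.fft2(psf_a, s=shape)
    otf_b = np.fft.fft2(psf_b, s=shape)
    # Regularised division avoids amplifying noise where OTF_A is near zero;
    # the value of eps is an illustrative assumption.
    return otf_b * np.conj(otf_a) / (np.abs(otf_a) ** 2 + eps)

def simulate_camera_b(image_a, psf_a, psf_b, eps=1e-3):
    """Convolve a real image of camera A with Delta(B/A) to emulate camera B."""
    delta = delta_transfer_function(psf_a, psf_b, image_a.shape, eps)
    spectrum_b = np.fft.fft2(image_a) * delta
    return np.real(np.fft.ifft2(spectrum_b))
```

In such a sketch the division of the two optical transfer functions must be regularised, since the transfer function of lens A approaches zero at high spatial frequencies; how the patented method handles this is not described in the source.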

Competitive Advantages

  • Lens simulation possible
  • Reusability of existing driving scenes
  • Look & feel adaptation of existing film sequences
  • Inexpensive

Commercial Opportunities

The generation of real image data is complex and cost-intensive. For this reason, additional image data are produced synthetically, either fully artificially as computer graphics or by post-processing previously recorded driving sequences, for example by combined convolution. However, such virtual image data often contain simulation artifacts, which the AI unintentionally, and sometimes even undetectably, learns. Since this impairs the reliability of the autonomous system, simulating virtual training data as close to the original as possible has very high market potential for computer vision applications, because only a physically realistic virtual image can replace real driving recordings. A concrete field of application is the automatic traffic sign recognition of autonomous vehicles.

Current Status

Simulation results are available and can be demonstrated in the laboratory of Düsseldorf University of Applied Sciences. A patent application has been filed with the German Patent and Trade Mark Office. We offer interested companies the opportunity to license the technology and to develop it further together with the inventors at Düsseldorf University of Applied Sciences.

An invention of Düsseldorf University of Applied Sciences.


Dipl.-Ing. Martin van Ackeren

ma@provendis.info
+49 208 9410534