February 1, 2018
The number of cameras in cars is increasing, and the resulting flood of data is pushing the vehicles' internal networks to their limits. Special compression methods reduce the amount of video data, but they introduce a considerable coding latency. Fraunhofer researchers have adapted video compression in such a way that the latency is barely perceivable any more. The technology is therefore of interest for use in road traffic and for autonomous driving. It will be on display at Embedded World from 27 February until 1 March 2018 in Nuremberg in hall 4 (booth 4-470).
Up to 12 cameras are currently installed in new vehicle models, mostly in the headlights, taillights or side mirrors. An on-board computer built into the car uses their data for functions such as the lane assistant and the parking assist system, or to recognize other road users and possible obstacles. "If autonomous driving catches on as quickly as predicted, the number of cameras will increase further," forecasts Prof. Benno Stabernack of the Fraunhofer Institute for Telecommunications, Heinrich Hertz Institute, HHI, in Berlin.
Ten times more data
This means even more strain on the internal data networks of vehicles. Currently, these can handle a data volume of around one gigabit per second, a limit that a single camera in HD quality already reaches. "Compression methods help here," says Stabernack. The Fraunhofer HHI, for example, has made a decisive contribution to the development of the two video coding standards H.264/Advanced Video Coding (AVC) and H.265/MPEG High Efficiency Video Coding (HEVC). "With these methods, the data quantities can be sharply reduced. In this way, more than ten times the quantity of data can be transmitted," emphasizes the group leader in the "Video Coding and Machine Learning" department at the Fraunhofer HHI.
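How quickly a single camera fills such a network can be estimated with a short back-of-the-envelope calculation. The resolution, bit depth and frame rate below are illustrative assumptions for an HD camera, and the tenfold reduction is simply the compression ratio quoted above:

    # Rough estimate of uncompressed vs. compressed camera bandwidth.
    # All figures are illustrative assumptions, not measured values.
    width, height = 1920, 1080          # HD resolution
    bits_per_pixel = 12                 # assumed YUV 4:2:0 sampling, 8 bits per sample
    frames_per_second = 60

    raw_bps = width * height * bits_per_pixel * frames_per_second
    compression_ratio = 10              # "more than ten times" per the article

    print(f"uncompressed: {raw_bps / 1e9:.2f} Gbit/s")
    print(f"compressed (~{compression_ratio}x): {raw_bps / compression_ratio / 1e6:.0f} Mbit/s")

Under these assumptions the uncompressed stream comes to roughly 1.5 Gbit/s, which is why one HD camera alone can already saturate a one-gigabit in-vehicle network.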
Normally, 30 to 60 images per second are sent from a camera to the vehicle's central computer. Compressing the image data introduces a small delay in transmission, known as latency. "Usually, this is five to six images," explains Stabernack. The reason for this is that the methods compare an image with those that have already been transmitted in order to determine the difference between the current image and its predecessors; the network then only has to carry the changes from image to image. This comparison takes a certain amount of time.
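Expressed in time rather than frames, such a delay is substantial for a moving vehicle. The conversion below is plain arithmetic using the frame rates and frame counts quoted in the article:

    # Convert a coding latency given in frames into milliseconds.
    def latency_ms(frames_of_delay: float, frames_per_second: float) -> float:
        return frames_of_delay / frames_per_second * 1000.0

    for fps in (30, 60):
        print(f"{fps} fps: 5-6 frames = {latency_ms(5, fps):.0f}-{latency_ms(6, fps):.0f} ms, "
              f"1 frame = {latency_ms(1, fps):.0f} ms")

At 30 frames per second, a delay of five to six frames corresponds to roughly 170 to 200 milliseconds, during which a car at highway speed covers several metres.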
Latency of less than one image
"However, this loss of time can be of decisive importance in road traffic," says Stabernack. In order to avoid latency, the professor and his team only use special mechanisms of the H.264-coding method, whereby determining the differences in individual images no longer takes place between images, but within an image. This makes it a lowlatency method. "With our method the delay is now less than one image per second, almost real time. We can therefore now also use the H.264 method for cameras in vehicles," is how Stabernack describes the additional value. The technology was implemented in the form of a special chip. In the camera it compresses the image data, and in the on-board computer it decodes them again.
Higher frame rate and resolution
The researchers in Berlin have had their method patented and license the know-how to industry. Customers are automotive suppliers, and the first vehicle models with the Fraunhofer technology are already on the market. "During development we combined our know-how from work on the video compression standards with our hardware expertise. Transmitting image data in real time is a precondition for video compression becoming established for car cameras. It then makes it possible to use cameras with a higher frame rate and resolution, models which produce even more data and are therefore more precise and faster," is how Stabernack summarizes the significance of the technology.
In the next stage, the researchers want to transfer their method to the HEVC standard as well and put their experience to good use in upcoming standardization activities. They are exhibiting their technology at Embedded World from 27 February until 1 March 2018 in Nuremberg in hall 4 (booth 4-470).