New Integrated Radar Sensor Module Could Bring Improved Safety to Autonomous Driving

At the Fraunhofer Institute in Berlin, researchers are developing an integrated radar and camera module that can react up to 160 times faster than a human driver. Called KameRad, the project aims to bring improved safety to autonomous driving.

A shot of the camera/radar module with its housing. (Image credit: Fraunhofer IZM/Volker Mai)

The average human driver takes around 1.6 seconds to hit the brake pedal when a child suddenly runs onto the road. Automated vehicles equipped with a camera system and radar/LIDAR sensors cut that reaction time to about 0.5 seconds. Even so, at a speed of 50 km/h the vehicle still travels roughly 7 m before the brakes even begin to engage.

To overcome this problem, the Fraunhofer Institute for Reliability and Microintegration IZM has collaborated with a wide range of partners from both research institutes (DCAITI and the Fraunhofer Institute for Open Communication Systems FOKUS) and industry (John Deere, InnoSenT, AVL, Jabil Optics Germany, and Silicon Radar) to design a unique camera-radar module that responds considerably faster to changes in traffic conditions.

The new system is roughly the size of a smartphone and has a reaction time of under 10 ms which, according to research conducted at the University of Michigan, makes it 160 times faster than the average human driver and 50 times faster than current sensor systems.

With the new unit, the vehicle in the example above would travel only about 15 cm before the system intervenes and initiates the braking maneuver, potentially preventing many inner-city road accidents.
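These figures are easy to sanity-check. The short Python sketch below uses only the numbers quoted in the article (50 km/h, 1.6 s, 0.5 s, and 10 ms); the script itself is purely illustrative and not part of the KameRad project.

```python
# Distance travelled during the reaction time, before braking begins,
# at the 50 km/h inner-city speed used in the article's example.

SPEED_KMH = 50.0
SPEED_MS = SPEED_KMH / 3.6          # ~13.9 m/s

reaction_times_s = {
    "human driver": 1.6,
    "current camera/radar systems": 0.5,
    "KameRad module": 0.010,        # "below 10 ms"
}

for name, t in reaction_times_s.items():
    distance_m = SPEED_MS * t       # distance covered before the brakes engage
    print(f"{name}: {t * 1000:.0f} ms reaction -> {distance_m:.2f} m travelled")

# Speed-up factors of the KameRad module quoted in the article
print(f"vs human driver: {1.6 / 0.010:.0f}x faster")
print(f"vs current sensor systems: {0.5 / 0.010:.0f}x faster")
```

Running this reproduces the approximately 7 m travelled by today's automated systems, the roughly 15 cm for the new module, and the 160x and 50x speed-up factors cited above.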

Integrated signal processing reduces reaction time

The real innovation in the new system is its integrated signal processing capacity. All processing takes place directly inside the radar sensor module, and the system selectively filters the information from the stereo camera and the radar so that it is either processed immediately or deliberately deferred to the next processing stage.

Irrelevant data is detected and simply not forwarded. The data from the radar and the camera is combined using sensor fusion and then evaluated by neural networks, which apply machine-learning methods to determine the real-world traffic implications. As a result, the system no longer has to send status data to the vehicle but only needs to transmit reaction instructions. This frees up the vehicle's bus line for critical signals, such as the detection of a child suddenly running onto the road.
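Conceptually, the module acts as a small on-board decision pipeline rather than a raw data source. The following Python outline is a purely illustrative sketch of that idea; the class and function names (Detection, ReactionInstruction, filter_relevant, fuse, classify_hazard) are hypothetical and do not describe the actual KameRad software, which has not been published.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    distance_m: float          # range to the detected object
    closing_speed_ms: float    # how fast it is approaching the vehicle

@dataclass
class ReactionInstruction:
    """Compact message placed on the vehicle bus instead of raw sensor data."""
    action: str                # e.g. "emergency_brake"
    urgency: float             # 0.0 (informational) .. 1.0 (immediate)

def filter_relevant(detections: list[Detection]) -> list[Detection]:
    """Selective filtering: discard detections that cannot affect the vehicle,
    so they are never forwarded to later stages or onto the bus."""
    return [d for d in detections if d.distance_m < 50 and d.closing_speed_ms > 0]

def fuse(radar: list[Detection], camera: list[Detection]) -> list[Detection]:
    """Stand-in for sensor fusion of radar and stereo-camera detections.
    A real implementation would associate and merge tracks; here they are
    simply concatenated for illustration."""
    return radar + camera

def classify_hazard(objects: list[Detection]) -> float:
    """Stand-in for the neural-network stage that judges how critical the
    fused scene is (1.0 = object about to enter the vehicle's path)."""
    if not objects:
        return 0.0
    time_to_impact = min(d.distance_m / d.closing_speed_ms for d in objects)
    return 1.0 if time_to_impact < 1.0 else 0.2

def process(radar: list[Detection], camera: list[Detection]) -> ReactionInstruction | None:
    relevant_radar = filter_relevant(radar)
    relevant_camera = filter_relevant(camera)
    objects = fuse(relevant_radar, relevant_camera)
    urgency = classify_hazard(objects)
    if urgency < 0.5:
        return None            # nothing critical: the vehicle bus stays free
    # Only the compact reaction instruction is transmitted, not the sensor data.
    return ReactionInstruction(action="emergency_brake", urgency=urgency)

# Example: a pedestrian-like object about 8 m ahead, closing at 10 m/s
print(process([Detection(8.0, 10.0)], [Detection(8.2, 10.0)]))
```

The design point this sketch tries to capture is the one described above: filtering and fusion happen inside the module, so the only traffic on the vehicle bus is the final reaction instruction.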

Integrated signal processing drastically cuts down reaction times.

Christian Tschoban, Group Head, RF & Smart Sensor Systems Department, Fraunhofer Institute for Reliability and Microintegration IZM

Tschoban and his colleagues are now working on the KameRad project. He and his team have built a functioning demonstrator that resembles a grey box with "eyes" on the left and right: the stereo cameras.

The project will run until 2020, and in the meantime the project partners DCAITI and AVL List GmbH will test the initial model, including road tests in Berlin. Tschoban believes that within a few years every vehicle will be fitted with his "grey box" as standard, bringing improved safety to automated inner-city traffic.
