Read My Lips: How AI Enables Safe Driving

NVIDIA

Self-driving cars may not be in production yet, but driver-assistance technology in today's vehicles continues to grow more advanced, improving road safety. These systems can alert a driver when a car is in their blind spot, keep the vehicle centered in its lane or hit the brakes to avoid an imminent collision. Some can even take over most driving functions on pre-mapped highways, controlling both steering and braking. While these features have enhanced the driving experience for many car owners, new advances in driver assistance are using AI to focus on the driver themselves.

With a smartphone in hand or raucous kids in the back seat, human drivers are easily distracted, and those distractions can have dangerous consequences. A driver who takes their eyes off the road for just two seconds while traveling 65 miles per hour covers about 200 feet without seeing what lies ahead or around them. Such moments of distraction account for nearly 400,000 accidents in the U.S. each year, according to the National Highway Traffic Safety Administration.
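For the curious, the 200-foot figure checks out with simple arithmetic (a quick back-of-the-envelope calculation, not part of the original article):

```python
# Distance covered during a 2-second glance away at 65 mph.
speed_mph = 65
glance_seconds = 2

feet_per_mile = 5280
seconds_per_hour = 3600

speed_fps = speed_mph * feet_per_mile / seconds_per_hour  # ~95.3 ft/s
distance_ft = speed_fps * glance_seconds                  # ~190.7 ft

print(f"{distance_ft:.0f} feet traveled blind")  # ~191 feet, roughly 200
```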

AI can help minimize these distractions and step in if necessary. The NVIDIA DRIVE IX intelligent experience software stack allows car manufacturers to leverage AI as a new kind of copilot. Running on the NVIDIA DRIVE platform and using a driver-facing camera, the DRIVE IX deep learning algorithms can monitor drivers, detecting where their attention is focused and whether they are able to react to an oncoming situation. If the driver hasn’t noticed a critical obstacle, DRIVE IX can step in as a guardian angel, preventing them from taking an action that could be dangerous.
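NVIDIA hasn't published DRIVE IX's internal decision logic, but a guardian-style intervention plausibly combines two signals: how attentive the driver is and how urgent the hazard is. The sketch below is purely illustrative; every name in it (DriverState, Obstacle, should_intervene) and both thresholds are hypothetical stand-ins, not the DRIVE IX API.

```python
from dataclasses import dataclass

@dataclass
class DriverState:
    eyes_on_road: bool      # inferred from the driver-facing camera
    drowsiness: float       # 0.0 (alert) .. 1.0 (asleep)

@dataclass
class Obstacle:
    time_to_collision_s: float  # estimated by forward perception

# Hypothetical thresholds, for illustration only.
DROWSY_THRESHOLD = 0.7
REACTION_BUDGET_S = 1.5  # time an attentive human needs to react

def should_intervene(driver: DriverState, obstacle: Obstacle) -> bool:
    """Brake on the driver's behalf only when the hazard is critical
    AND the driver appears unable to respond in time."""
    critical = obstacle.time_to_collision_s < REACTION_BUDGET_S
    inattentive = (not driver.eyes_on_road) or driver.drowsiness > DROWSY_THRESHOLD
    return critical and inattentive

# Example: drowsy driver, obstacle one second away -> system brakes.
print(should_intervene(DriverState(eyes_on_road=True, drowsiness=0.9),
                       Obstacle(time_to_collision_s=1.0)))  # True
```

The key design point is the conjunction: the system stays out of the way while the driver is alert, intervening only when both conditions hold.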

However, gauging a driver’s attentiveness is no easy task. A driver’s head may be turned one way while their eyes look in another. Their eyes may be open, yet they may be exhausted and inattentive. That’s why NVIDIA’s deep learning algorithms use a method called landmark localization, identifying specific parts of an image to help infer the larger action that’s happening. For example, a driver may be facing forward with their eyes open. However, they’re blinking heavily while yawning and haven’t slowed for the traffic light that just turned red. By reading the individual indicators on the driver’s face, the algorithms can determine that the driver is drowsy and may not be paying attention, and step in to brake at the light.
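The article doesn’t disclose which facial indicators NVIDIA’s algorithms use, but a standard way to turn localized eye landmarks into a drowsiness cue is the eye aspect ratio (EAR): the eye’s height relative to its width, which collapses toward zero as the eyelids close. The sketch below assumes six 2-D landmarks per eye, a common convention in facial-landmark models; the threshold is illustrative.

```python
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: (6, 2) array of landmarks ordered around the eye:
    [outer corner, upper-left, upper-right, inner corner,
     lower-right, lower-left]. Returns height/width ratio."""
    vertical_1 = np.linalg.norm(eye[1] - eye[5])
    vertical_2 = np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return (vertical_1 + vertical_2) / (2.0 * horizontal)

EAR_THRESHOLD = 0.2  # illustrative; tuned per camera and face in practice

# Nearly closed eye: small vertical distances relative to width.
closed_eye = np.array([[0, 0], [1, 0.1], [2, 0.1],
                       [3, 0], [2, -0.1], [1, -0.1]], dtype=float)
print(eye_aspect_ratio(closed_eye) < EAR_THRESHOLD)  # True -> drowsy cue
```

In practice, a system would track EAR over time and flag sustained low values or an elevated blink rate rather than acting on a single frame.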

Landmark localization in driver monitoring adds a robust layer of safety to human driving, and it can enable convenience features in upcoming autonomous vehicles. It can help determine whether the cabin climate needs to be adjusted, or whether a passenger will need a cupholder in the next few seconds. NVIDIA researchers presented their work on landmark localization and its variety of uses at the Conference on Computer Vision and Pattern Recognition (CVPR) in June.
