Building infrastructure for autonomous vehicles can smooth out some of the rough edges of the technology, but the real work of navigation happens in the cars themselves. Sensor packages of LiDAR arrays, 360° cameras and GPS units gather real-time information about the vehicle's surroundings so an onboard robotics system can evaluate its options. Onboard computers must digest a constant stream of information: identifying obstacles, making sense of point cloud data and adapting to changing conditions.
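To make that pipeline concrete, here is a minimal sketch of one perception step, flagging obstacle cells in a LiDAR point cloud. Production AV stacks use far more sophisticated segmentation; the function name, grid size and height threshold below are illustrative assumptions, not values from any real system.

```python
# Illustrative sketch: bin LiDAR returns above a crude ground threshold
# into a 2D occupancy grid. All parameters are assumptions for the example.
import numpy as np

def obstacle_grid(points, cell_size=0.5, ground_height=0.3, grid_extent=50.0):
    """points: (N, 3) array of x, y, z in the vehicle frame (meters).
    Returns a boolean grid where True marks a cell containing obstacle points."""
    above_ground = points[points[:, 2] > ground_height]
    n_cells = int(2 * grid_extent / cell_size)
    grid = np.zeros((n_cells, n_cells), dtype=bool)
    # Shift coordinates so the vehicle sits at the grid center.
    idx = ((above_ground[:, :2] + grid_extent) / cell_size).astype(int)
    in_bounds = ((idx >= 0) & (idx < n_cells)).all(axis=1)
    grid[idx[in_bounds, 0], idx[in_bounds, 1]] = True
    return grid

# Synthetic scan: flat ground returns plus one object 10 m ahead.
scan = np.vstack([
    np.column_stack([np.random.uniform(-50, 50, 1000),
                     np.random.uniform(-50, 50, 1000),
                     np.random.uniform(-0.1, 0.1, 1000)]),  # ground returns
    np.array([[10.0, 0.0, 1.2]]),                           # obstacle return
])
print(f"{obstacle_grid(scan).sum()} occupied cell(s)")
```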

But machine-driven decision-making works best under controlled conditions, and deviations from what is expected can perplex even the most sophisticated self-driving systems. As a result, many AVs rely on so-called “deep mapping,” which loads a centimeter-accurate scan of roadways and the environment into the computer’s memory to help vehicles orient themselves.
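A toy example shows why a stored, centimeter-accurate map helps a vehicle orient itself: if the map records precise landmark positions, aligning the live scan against those landmarks recovers the vehicle's position error. Real localizers iterate variants of algorithms such as ICP over full point clouds; this one-step, translation-only version is a deliberate simplification.

```python
# Toy localization against a prior map: one correspondence-and-update
# step of ICP, restricted to translation. Landmark values are invented.
import numpy as np

def estimate_offset(map_landmarks, scan_landmarks):
    """Pair each scanned landmark with its nearest map landmark and
    average the displacement to estimate the vehicle's position error."""
    diffs = map_landmarks[None, :, :] - scan_landmarks[:, None, :]
    nearest = np.argmin((diffs ** 2).sum(axis=2), axis=1)
    return (map_landmarks[nearest] - scan_landmarks).mean(axis=0)

# Map landmarks (e.g., pole and curb positions), and the same landmarks
# as seen by a vehicle whose position estimate is off by (0.8 m, -0.3 m).
map_pts = np.array([[2.0, 5.0], [8.0, 1.0], [4.0, 9.0], [7.0, 6.0]])
scan_pts = map_pts - np.array([0.8, -0.3])

print(estimate_offset(map_pts, scan_pts))  # ~[0.8, -0.3]
```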

“The machine-vision system on a self-driving car is fairly simplistic,” says Wei Luo, COO and head of product for Deep Map, a software developer specializing in deep-mapping systems for AVs. “To compensate for the system’s computational limitations in processing new data in real time, it needs to already have documentary information on what the world really looks like.”

Rather than send out sensor-laden survey cars ahead of time to collect information on roadways, as in earlier digital-mapping efforts, Deep Map focuses on software that provides real-time answers to a robot driver’s queries about its environment. The information is based on data harvested from the sensors of other self-driving cars in the area, so the fidelity and freshness of the maps increase with the size of the self-driving fleet on the roads. Through both onboard software and a constantly updating, cloud-based service, Deep Map feeds self-driving vehicles the clarifications they require, rather than asking the vehicle’s system to sift through a massive database of maps while driving at speed. “What self-driving cars need is a system that can rapidly answer questions while going 80 mph on a highway,” says Luo. “They need a software service designed for a robotics system.”
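Deep Map has not published its interface, so the names below (MapQueryClient, lane_geometry_near and so on) are hypothetical. The sketch only illustrates the pattern Luo describes: the vehicle asks narrow, latency-critical questions of a local cache backed by a cloud service, instead of sifting a monolithic map database at highway speed.

```python
# Hypothetical onboard map-query client. The tile size, TTL and API
# shape are assumptions made for illustration.
import time

class MapQueryClient:
    def __init__(self, cloud_fetch, ttl_seconds=60.0):
        self._fetch = cloud_fetch      # callable: tile_id -> map tile data
        self._cache = {}               # tile_id -> (timestamp, tile)
        self._ttl = ttl_seconds        # stale tiles are re-fetched

    def _tile_id(self, lat, lon, tile_deg=0.01):
        # Quantize position to a tile key (~1 km tiles; size is an assumption).
        return (round(lat / tile_deg), round(lon / tile_deg))

    def lane_geometry_near(self, lat, lon):
        """Answer a single localized query, hitting the cloud only on a miss."""
        key = self._tile_id(lat, lon)
        cached = self._cache.get(key)
        if cached and time.monotonic() - cached[0] < self._ttl:
            return cached[1]
        tile = self._fetch(key)
        self._cache[key] = (time.monotonic(), tile)
        return tile

# Stand-in for the cloud service, which would serve tiles built from
# fleet sensor data.
client = MapQueryClient(cloud_fetch=lambda key: {"tile": key, "lanes": []})
print(client.lane_geometry_near(37.3861, -122.0839))  # Mountain View
```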

The issue with self-driving vehicles is that the first 90% of decision-making accuracy is relatively easy to achieve with current technology, but each percentage point after that is orders of magnitude harder. “The closer one is to that last 0.001%, the harder it gets to make it happen,” she explains.
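A back-of-the-envelope calculation shows why the tail is so punishing, under the simplifying (and hypothetical) assumption that failures must be observed on the road before they can be fixed: each additional "nine" of per-mile reliability cuts the failure rate tenfold, so the mileage needed just to see one failure grows tenfold as well.

```python
# Illustrative arithmetic only; the failure rates are assumptions.
for nines in range(1, 6):
    reliability = 1 - 10 ** -nines
    miles_per_failure = 10 ** nines
    print(f"{reliability:.5%} reliable -> ~{miles_per_failure:,} miles per failure")
```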

Founded two years ago by engineers who previously worked on Google Maps, Google Earth and other digital-mapping products, Mountain View, Calif.-based Deep Map has forged partnerships with major players in the self-driving market, such as Ford and Honda.

Luo says the key performance advance for driverless vehicles will come from promptly answering a vehicle’s questions when it gets confused, rather than from waiting for machine learning to catch up with human drivers.

But better data and sensor tech can go only so far, and chasing down those last edge cases could be made easier by smarter infrastructure, she says. “We are at a stage where we don’t require [smart infrastructure] to make it happen, but on the other hand, if there are smart sensors on roads and traffic lights and whatnot, it will make the deployment of self-driving vehicles safer.”