Building infrastructure for autonomous vehicles can smooth out some of the rough edges of the technology, but the real work of navigation happens in the cars. Sensor packages combining LiDAR arrays, 360° cameras and GPS units gather real-time information about the vehicle's surroundings so an onboard robotics system can evaluate its options. Onboard computers must digest a constant stream of information: identifying obstacles, making sense of point cloud data and adapting to changing conditions.
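As a minimal sketch of one such perception step, the snippet below separates obstacle points from the road surface in a LiDAR point cloud using a simple height threshold. The threshold value and the toy scan are illustrative assumptions, not any vendor's actual pipeline, which would involve ground-plane fitting, clustering and classification.

```python
GROUND_HEIGHT_M = 0.2  # assumption: returns below this height are treated as road surface

def find_obstacle_points(cloud):
    """Return the (x, y, z) points that rise above the assumed ground plane."""
    return [p for p in cloud if p[2] > GROUND_HEIGHT_M]

# Toy scan: mostly road-level returns, plus one raised object ahead of the car.
scan = [(5.0, 0.0, 0.05), (6.0, 0.1, 0.02), (10.0, -0.5, 1.4), (10.1, -0.5, 1.5)]
obstacles = find_obstacle_points(scan)
print(len(obstacles))  # the two raised points are flagged as a potential obstacle
```

Real systems repeat this kind of filtering many times per second over millions of points, which is why the onboard computers need so much processing power.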
But machine-driven decision-making works best under controlled conditions, and deviations from what is expected can perplex even the most sophisticated self-driving systems. As a result, many AVs rely on so-called "deep mapping," which loads a centimeter-accurate scan of roadways and the surrounding environment into the computer's memory; by comparing live sensor data against this stored map, vehicles can orient themselves far more precisely than GPS alone allows.
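A toy illustration of the idea, under simplifying assumptions: the car matches a few landmarks it currently sees (positions relative to itself) against their known positions in the prestored map, and averages the differences to estimate where it is. Matching landmarks by index is a stand-in for the dense 3D scan matching real systems perform.

```python
def localize(map_landmarks, observed_relative):
    """Estimate the vehicle's (x, y): average of (map position - relative observation)."""
    n = len(map_landmarks)
    xs = [m[0] - o[0] for m, o in zip(map_landmarks, observed_relative)]
    ys = [m[1] - o[1] for m, o in zip(map_landmarks, observed_relative)]
    return (sum(xs) / n, sum(ys) / n)

# Hypothetical map: two lampposts surveyed at (10, 5) and (12, -3).
# The LiDAR currently sees them at (7.1, 5.0) and (9.0, -3.1) relative to the car.
x, y = localize([(10.0, 5.0), (12.0, -3.0)], [(7.1, 5.0), (9.0, -3.1)])
print(round(x, 2), round(y, 2))
```

Averaging over many landmarks smooths out sensor noise, which is how a centimeter-accurate map can pin down a position more tightly than any single measurement.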