How to guarantee the safety of autonomous vehicles


The original version of this story appeared in Quanta Magazine.

Driverless cars and planes are no longer a thing of the future. In the city of San Francisco alone, two taxi companies had collectively logged 8 million miles of autonomous driving by August 2023. And more than 850,000 autonomous aerial vehicles, or drones, are registered in the United States, not counting those owned by the military.

But safety concerns are legitimate. Over a 10-month period ending in May 2022, for example, the National Highway Traffic Safety Administration reported nearly 400 crashes involving cars using some form of autonomous control. Six people died in those crashes and five were seriously injured.

The usual way to address this problem, sometimes called “testing by exhaustion,” involves testing these systems until you are satisfied that they are safe. But you can never be sure that this process will uncover every possible flaw. “People test until their resources and patience are exhausted,” said Sayan Mitra, a computer scientist at the University of Illinois, Urbana-Champaign. Testing alone, however, cannot provide guarantees.

Mitra and his colleagues can. His team has managed to prove the safety of lane-tracking capabilities for cars and of landing systems for autonomous aircraft. Their strategy is now being used to help land drones on aircraft carriers, and Boeing plans to test it on an experimental aircraft this year. “Their way of providing end-to-end safety guarantees is very important,” said Corina Pasareanu, a research scientist at Carnegie Mellon University and NASA's Ames Research Center.

Their work involves guaranteeing the results of the machine-learning algorithms that autonomous vehicles rely on. At a high level, many autonomous vehicles have two components: a perception system and a control system. The perception system tells you, for example, how far your car is from the center of the lane, or which direction a plane is heading and what its angle is relative to the horizon. The system operates by feeding raw data from cameras and other sensing tools into machine-learning algorithms based on neural networks, which reconstruct the environment outside the vehicle.

These estimates are then sent to a separate system, the control module, which decides what to do: whether to brake if there is an obstacle ahead, for example, or steer around it. According to Luca Carlone, an associate professor at the Massachusetts Institute of Technology, while the control module relies on well-established technology, “it is making decisions based on the perception results, and there is no guarantee that those results are correct.”
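As a rough illustration of this two-part design, the Python sketch below pairs a stubbed-out perception module with a simple steering controller. All of the names, gains, and numbers here are invented for illustration; they are not taken from Mitra's system or from any real vehicle, and the "neural network" is replaced by a placeholder that returns fixed values.

```python
from dataclasses import dataclass

@dataclass
class PerceptionEstimate:
    lane_offset_m: float      # estimated distance from the lane center (meters)
    heading_error_rad: float  # estimated angle between heading and lane direction

def perceive(camera_frame: list[float]) -> PerceptionEstimate:
    """Perception module: in a real vehicle, a neural network would turn raw
    sensor data into a state estimate. Stubbed here with fixed values."""
    return PerceptionEstimate(lane_offset_m=0.4, heading_error_rad=0.05)

def control(est: PerceptionEstimate) -> float:
    """Control module: a simple proportional steering law. It acts on the
    *perceived* state, so its decisions are only as good as the estimate
    it is given -- the gap Carlone describes."""
    K_OFFSET, K_HEADING = 0.5, 1.2  # illustrative gains
    return -(K_OFFSET * est.lane_offset_m + K_HEADING * est.heading_error_rad)

# One step of the loop: sense -> perceive -> decide.
steering_command = control(perceive(camera_frame=[]))
print(f"steering command: {steering_command:.3f} rad")
```

The point of the sketch is the division of labor: the controller can be simple and well understood, but everything it does hinges on the estimates handed to it by the perception module.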

To provide safety guarantees, Mitra's team focused on ensuring the reliability of the vehicle's perception system. They first assumed that it is possible to guarantee safety when a perfectly accurate rendering of the outside world is available. They then determined how much error the perception system introduces in reconstructing the vehicle's surroundings.

The key to this strategy is to quantify the uncertainties involved, known as the error band, or the “known unknowns,” as Mitra calls them. That calculation comes from what he and his team call a perception contract. In software engineering, a contract is a commitment that, for a given input to a computer program, the output will fall within a specified range. Figuring out this range is not easy. How accurate are the car's sensors? How much fog, rain, or glare can a drone tolerate? But if you can keep the vehicle within a specified range of uncertainty, and if the determination of that range is sufficiently accurate, Mitra's team proved that you can guarantee its safety.
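To make the idea concrete, here is a minimal sketch of a perception contract under one simplifying assumption: the contract bounds the error in a single perceived quantity, the car's distance from the lane center. The error bound, the lane width, and the worst-case check are illustrative assumptions, not the team's actual numbers or proofs.

```python
from dataclasses import dataclass

@dataclass
class PerceptionContract:
    max_error_m: float  # the "known unknown": worst-case perception error (meters)

    def holds(self, true_offset_m: float, perceived_offset_m: float) -> bool:
        """Check one observation against the contract: the perceived value
        must lie within the error band around the true value."""
        return abs(perceived_offset_m - true_offset_m) <= self.max_error_m

def safe_under_contract(perceived_offset_m: float,
                        contract: PerceptionContract,
                        half_lane_width_m: float = 1.8) -> bool:
    """Worst-case safety check: the car stays in its lane even if the true
    offset sits at the far edge of the error band."""
    worst_case_offset = abs(perceived_offset_m) + contract.max_error_m
    return worst_case_offset <= half_lane_width_m

contract = PerceptionContract(max_error_m=0.3)  # illustrative bound
print(contract.holds(true_offset_m=0.5, perceived_offset_m=0.65))   # True: error 0.15 m
print(safe_under_contract(perceived_offset_m=1.2, contract=contract))  # True: 1.5 <= 1.8
print(safe_under_contract(perceived_offset_m=1.6, contract=contract))  # False: 1.9 > 1.8
```

In this toy version, the hard work is hidden in the single number max_error_m: establishing that the real perception system actually honors such a bound, in fog, rain, and glare, is exactly what makes writing a perception contract difficult.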