The AI that drives autonomous cars can sometimes be deceived by minimal perturbations, such as graffiti on STOP signs or unusual objects coming into view. This is one of the problems facing the ongoing training of autonomous vehicles, which occurs largely via simulation.

In early August, a second driver’s family sued Tesla over a fatal crash that occurred on Autopilot. The previous lawsuit was filed in May, when the family of the driver killed in the March 23, 2018 crash of his Model X sued, claiming that Autopilot caused the accident.

The 2018 case, about which more facts are known at this time, is the more interesting one. The US National Transportation Safety Board has determined that the Model X accelerated (from 62 to 71 mph) in the 3 seconds before the crash. Tesla analyzed the logs and said the driver had about five seconds before the crash and took no action to avoid it.

Although it may have nothing to do with this particular case, it brings us back to the realization that AI is still unfit for unattended, mission-critical jobs.

I can’t help but think of the panda becoming a gibbon in a famous AI experiment. Or the stuffed baby elephant appearing in the room and suddenly disrupting the robot’s interpretation of the scene. Or the man mistaken for a starlet. The school bus becoming a snowplow. The STOP signs being interpreted as Speed Limit 45 after minimal graffiti perturbations…

These are problems observed in state-of-the-art AI systems. Some are artificially provoked by researchers; others occur spontaneously and unexpectedly. We think we [almost] know why they happen, although we don’t really know what we don’t know. And we still don’t know how to avoid them.
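
To make the “artificially provoked” kind concrete, here is a minimal sketch of the fast gradient sign method, the technique behind the panda-to-gibbon result. It is a toy, not the original experiment: the model is an untrained placeholder, and the image size and epsilon are illustrative.

```python
# Minimal FGSM (fast gradient sign method) sketch. The model here is a tiny
# untrained placeholder; the original experiment used a pretrained ImageNet classifier.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # placeholder classifier
loss_fn = nn.CrossEntropyLoss()

x = torch.rand(1, 3, 32, 32, requires_grad=True)   # "clean" input image
y = torch.tensor([0])                              # its correct label
epsilon = 0.01                                     # tiny perturbation budget

loss = loss_fn(model(x), y)
loss.backward()

# Each pixel is nudged by at most epsilon in the direction that increases the loss.
x_adv = (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()

print("prediction on clean image:     ", model(x).argmax().item())
print("prediction on perturbed image: ", model(x_adv).argmax().item())
```

The point is that the perturbation is bounded per pixel and invisible to a human, yet it is computed precisely to push the classifier across a decision boundary.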

When carried over into the domain of autonomous vehicles, these funny little problems can become killers. How frequent are they? We do not know, because case studies are still insufficient.

People trust brands when they tell them tales of magical, “smart” tech. So it’s plausible that in the crashes that have occurred so far, the drivers overrated the capabilities of an “Autopilot” and did not do their part as responsible chief pilots. However, that’s not my question today. My questions are:

Q1) Is Toyota correct in claiming that we need «trillion-mile reliability» of autonomous-car testing?

In order to assess the safety of self-driving cars, we should first have a statistical basis comparable to the one we have for regular cars. For example, since human drivers cause approximately one death per 100 million miles driven, it is intuitive to think that autonomous cars would have to be driven many hundreds of millions of miles before we accumulate comparable statistics.
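
A rough back-of-the-envelope calculation, assuming a simple Poisson model and zero observed fatalities, shows where “hundreds of millions” comes from; the figures are illustrative, not taken from any specific study.

```python
# How many fatality-free test miles are needed before we can claim, with 95%
# confidence, that an autonomous fleet is no worse than the human baseline of
# roughly 1 fatality per 100 million miles? Simple Poisson model, illustrative figures.
import math

human_rate = 1 / 100_000_000      # fatalities per mile (approximate human baseline)
confidence = 0.95

# With zero fatalities observed over m miles, P(0 events) = exp(-human_rate * m).
# That probability must drop below (1 - confidence) to reject "rate >= human_rate".
miles_needed = -math.log(1 - confidence) / human_rate
print(f"fatality-free miles needed: {miles_needed / 1e6:.0f} million")
# -> roughly 300 million miles, and far more if we want to show the AV is *better*,
#    or if we care about finer-grained outcomes such as injuries or crashes.
```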

According to the Toyota Research Institute, we need «trillion-mile reliability» in autonomous-car testing. For comparison, consider that Americans drive about 8 billion miles a day, while Tesla Autopilot passed the cumulative threshold of 1 billion miles only in October 2018, after years of use.
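
A couple of straight divisions on the figures quoted above put the target in perspective (fleet sizes and growth rates are ignored):

```python
# Putting Toyota's «trillion-mile» figure against the numbers quoted in the text.
target_miles = 1_000_000_000_000          # "trillion-mile reliability" target
us_daily_miles = 8_000_000_000            # miles Americans drive per day
tesla_autopilot_miles = 1_000_000_000     # cumulative Autopilot miles as of October 2018

print(f"days of ALL US driving to reach the target: {target_miles / us_daily_miles:,.0f}")
print(f"target vs. Autopilot's cumulative total:    {target_miles / tesla_autopilot_miles:,.0f}x")
```

In other words, the target is roughly four months of all US driving, or a thousand times what Autopilot had accumulated by late 2018.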

Q2) Is simulation going to be sufficient?

Since it would take decades to reach even the more conservative levels of road testing, the current approach is to run simulated road miles. Different approaches are being used.

One challenge is to properly take into account “edge case” scenarios that can occasionally occur on the road and fool the on-board AI. I am not aware of rigorous empirical studies in this field.
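
For what it’s worth, here is a toy sketch of what scenario-based simulation might look like; every parameter, probability, and the policy stub are invented for illustration.

```python
# Toy sketch of scenario-based simulation testing: sample many randomized driving
# scenarios, occasionally injecting a rare "edge case", and count how often the
# driving policy under test fails. Everything here is invented for illustration;
# real simulators model vehicle dynamics, sensors, and traffic in detail.
import random

def sample_scenario(rng):
    return {
        "weather": rng.choice(["clear", "rain", "fog", "snow"]),
        "obstacle": rng.choice(["none", "pedestrian", "cyclist", "debris"]),
        "sign_defaced": rng.random() < 0.001,  # rare, adversarial-style edge case
    }

def policy_handles(scenario, rng):
    # Stub for the autonomous driving stack under test: assume a defaced sign
    # is mishandled 10% of the time, and everything else is handled correctly.
    if scenario["sign_defaced"]:
        return rng.random() > 0.10
    return True

rng = random.Random(0)
runs = 100_000
failures = sum(not policy_handles(sample_scenario(rng), rng) for _ in range(runs))
print(f"failures in {runs:,} simulated scenarios: {failures}")
# The catch: the simulator only ever generates the edge cases someone thought
# to encode, which is exactly the gap described above.
```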

So what?

Before making decisions or business assumptions about autonomous vehicles, most notably cars on open roads, make sure you hear convincing answers to the two questions above.