For automakers such as Ford, predictive abilities are needed to ensure self-driving cars can function in complex urban environments.
LOS ANGELES — A pedestrian stands along a street with one foot on the curb and the other in the crosswalk. A bicyclist lifts a foot off the ground and places it on a pedal. A jogger maintains a steady pace while heading toward an intersection.
Determining whether those road users intend to wait for traffic to pass or start their trips across the street has become one of the most vexing challenges for developers of self-driving systems. They need the vehicular version of a highly accurate crystal ball.
“You can’t stop for every human being standing by the side of the road every time,” said Henrik Green, senior vice president for R&D with Volvo Car Group. “But you also need to stop at the right point when the pedestrian is about to step into the street.”
Last week at the Los Angeles Auto Show, Volvo said it had reached a milestone in honing that capability. While most companies attempt to decipher the intent of road users by using information from cameras, Volvo says it can now glean enough information from lidar sensors to both identify objects ahead of vehicles and predict their behavior.
Working with lidar supplier Luminar, Volvo says the companies can now track hand and leg movements of pedestrians at a distance of 250 meters. Driving at 75 mph, that would give autonomous systems as much as seven seconds to detect objects and predict their movements.
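The seven-second figure checks out as simple arithmetic. A minimal sketch of the calculation, using only the range and speed quoted above:

```python
# Back-of-the-envelope check of the lead-time figure in the article.
RANGE_M = 250.0        # claimed lidar tracking range, in meters
SPEED_MPH = 75.0       # highway speed cited in the article
MPS_PER_MPH = 0.44704  # exact miles-per-hour to meters-per-second factor

speed_mps = SPEED_MPH * MPS_PER_MPH  # about 33.5 m/s
lead_time_s = RANGE_M / speed_mps    # time until the vehicle covers 250 m

print(round(lead_time_s, 1))  # prints 7.5 -- roughly seven seconds, as stated
```

At 75 mph the vehicle covers about 33.5 meters per second, so a 250-meter detection range buys roughly 7.5 seconds of reaction time.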
That’s critical for handling situations such as determining whether a police officer is directing traffic to go ahead or stop, said Austin Russell, founder of Luminar.
“The reality is even the best-performing autonomous vehicle systems cannot reliably respond to situations like that today,” Russell said.
More than movement
Such shortcomings were underscored in March, when a self-driving Uber vehicle struck and killed a pedestrian in Tempe, Ariz. — the first known fatality involving a self-driving system.
A multitude of failures contributed to the crash, starting with a human safety driver who watched “The Voice” on her phone instead of the road ahead. But during the six seconds in which the self-driving system detected Elaine Herzberg, it failed to classify her as a pedestrian, identifying her first as an unknown object, then as a vehicle and finally as a bicycle. Nor did it predict that her path and the vehicle’s would intersect.
Sometimes detecting motion alone is not enough for an accurate prediction. If, for example, a jogger has been detected running steadily toward an intersection for three seconds, the best predictor of intent may not be that trajectory but whether the jogger has been seen looking at the approaching vehicle. A pedestrian detected looking at a smartphone, by contrast, signals a higher probability of risky behavior.
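The idea of weighing behavioral cues alongside trajectory can be sketched in a few lines. This is a toy illustration of the general approach, not any vendor's actual model; the cue names and weights are invented for the example:

```python
# Illustrative sketch: combine detected behavioral cues into a rough
# crossing-risk score, rather than relying on trajectory alone.
# The weights here are arbitrary, chosen only to show the mechanism.
from dataclasses import dataclass

@dataclass
class PedestrianCues:
    approach_seconds: float   # how long the person has been moving toward the road
    looking_at_vehicle: bool  # gaze detected on the approaching car
    looking_at_phone: bool    # attention on a smartphone instead of traffic

def crossing_risk(cues: PedestrianCues) -> float:
    """Toy heuristic: higher score means more likely to step into the street."""
    # A steady approach contributes up to half the score.
    score = min(cues.approach_seconds / 3.0, 1.0) * 0.5
    if cues.looking_at_phone:
        score += 0.4  # distraction raises risk, per the article's example
    if cues.looking_at_vehicle:
        score -= 0.3  # eye contact suggests awareness of the vehicle
    return max(0.0, min(1.0, score))

distracted = PedestrianCues(3.0, looking_at_vehicle=False, looking_at_phone=True)
aware = PedestrianCues(3.0, looking_at_vehicle=True, looking_at_phone=False)
print(crossing_risk(distracted) > crossing_risk(aware))  # prints True
```

With identical trajectories, the distracted pedestrian scores higher risk than the one who has made eye contact, which is the distinction the trajectory alone cannot capture. Production systems learn such weightings from labeled footage rather than hand-coding them.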
“The important thing is to understand these sorts of features rather than just looking at movement,” said Leslie Nooteboom, co-founder and chief design officer at Humanising Autonomy, a UK-based startup that has built a prediction platform for use in self-driving vehicles.
In addition to software engineers, Humanising Autonomy has employed a team of behavioral psychologists who sift through camera footage and help train deep-learning systems on how road users behave.
In the details
The scope of the challenge cannot be overstated: It can be as broad as training systems on how pedestrians interact with vehicles across regions and cultures, or as local as training them on how pedestrians typically behave at specific intersections in individual cities.
“You have to have general behavior models that are very detailed, and then the next step is to make them more localized,” Nooteboom said. “We have a foundation of general behaviors for a particular city, and then you can link to specific locations and identify how people will behave at an intersection with an obscured stoplight or a crossing that stops in the middle of the road.”
For automakers, there’s more at stake than merely avoiding the worst-case scenario — that’s a prerequisite for operations. For autonomous vehicles to be deployed in complex city environments, predictive abilities are needed to ensure they can function in a seamless way amid traffic and give riders the smooth rides they’re accustomed to with human drivers.
“If you don’t predict well, you have two options and neither of them are good enough,” said Pete Rander, president of Argo AI, the tech company developing Ford’s self-driving system. “You’re either left playing it safe and creating a much more cautious bubble around you. Or you’re slamming on the brakes.”