When humans and robots cross paths, the results aren't just frustrating (the autonomous car, say, that's too shy to turn left); they can also be fatal. Consider last year's Uber crash, in which the self-driving algorithms weren't coded to yield to an unexpected human jaywalker.
At the WIRED25 conference Friday, Anca Dragan, a professor who studies human-robot interaction at UC Berkeley, spoke about what it takes to avoid those kinds of problems. Her interest is in what happens when robots graduate beyond virtual worlds and wide-open test tracks, and start dealing with unpredictable humans.
"It turns out that really complicates matters," she says.
The issues go beyond simply teaching robots to treat humans as obstacles to be avoided. Instead, robots need to be given a predictive model of how humans behave. That isn't easy; even to each other, humans are basically black boxes. But the work done in Dragan's lab revolves around a fundamental insight: "Humans are not arbitrary because we're actually intentional beings," she says. Her group designs algorithms that help robots figure out our goals: that we're trying to reach that door or pass on the freeway or take that turn. From there, a robot can begin to infer what actions you'll take to get there, and how best to avoid cutting you off.
It's like that song, Dragan says: "Every step you take; every move you make," reveals your desires and intentions, and also the next moves you might take or make to get there.
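The idea that each observed step reveals a person's goal can be sketched as Bayesian inference over candidate goals. This is only an illustrative toy, not Dragan's lab's actual algorithms: the function, the goal list, the "noisily rational" likelihood model, and the `beta` parameter are all assumptions made for the example.

```python
import math

def update_goal_beliefs(prev_pos, new_pos, goals, beliefs, beta=2.0):
    """Toy Bayesian update: steps that bring a person closer to a goal
    raise that goal's probability. Assumes a 'noisily rational' actor
    whose likelihood of a step grows with progress toward the goal."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    posterior = []
    for goal, prior in zip(goals, beliefs):
        # How much closer did this step get to the goal?
        progress = dist(prev_pos, goal) - dist(new_pos, goal)
        likelihood = math.exp(beta * progress)
        posterior.append(prior * likelihood)

    total = sum(posterior)
    return [p / total for p in posterior]

# A pedestrian at (0, 0) steps to (1, 0). Candidate goals: a door at
# (5, 0) and a bench at (0, 5), initially equally likely.
beliefs = update_goal_beliefs((0, 0), (1, 0), [(5, 0), (0, 5)], [0.5, 0.5])
```

After one step toward the door, the belief in the door goal dominates; a robot watching a few such steps could commit to yielding or passing accordingly.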
Still, sometimes it's impossible for robots and humans to figure out what the other will do next. Dragan gives the example of a robot driver and a human one pulling up to an intersection at the same exact moment. How do you avoid a stalemate or crash? One potential fix is to teach robots social cues. Dragan might have the robo-car inch back a bit, a signal to the human driver that it's OK for them to go first. It's one step towards getting us all to play a bit nicer.