By Alex Davies
Our in-house Know-It-Alls answer questions about your interactions with technology.
Q: How Do Self-Driving Cars See?
A: It's a sunny day, and you're biking along one of Mountain View's tree-lined esplanades. You head into a left turn, and before you change lanes, you crane your head around for a quick look back. That's when you see it. The robot. Chugging along behind you, in that left lane you're aiming to call your own. Your pressing question—Does it see me?—is answered when the vehicle slows down, giving you plenty of space. And so now you wonder, how did it do that? How, exactly, do self-driving cars see?
Perhaps unwittingly, you've hit on a crackler of a question. Making a robot that perceives its surroundings—not just spotting that lumpy mass but understanding it's a child someone has put actual time and effort into—is the main challenge of this young industry. Get the thing to understand what's going on around it as well as humans do, and the process of deciding how to apply the throttle, brake, and steering becomes something like easy.
Dozens of companies are trying to build self-driving cars and self-driving car technology, and they all approach the engineering challenges differently. But just about everybody relies on three tools to mimic the human's ability to see. Take a look for yourself. (Be careful—you're on a bike, remember?)
We'll start with radar, which rides behind the car's sheet metal. It's a technology that has been going into production cars for 20 years now, and it underpins familiar tech like adaptive cruise control and automatic emergency braking. Reliable and impervious to foul weather, it can see hundreds of yards and can pick out the speed of all the objects it perceives. Too bad it would lose a sightseeing contest to Mr. Magoo. The data it returns, to quote one robotics expert, are "gobbledegook." It's nowhere near precise enough to tell the computer that you're a cyclist, but it should be able to detect the fact that you're moving, along with your speed and direction, which is helpful when trying to decide how to avoid slicing your bike into a unicycle.
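For the curious: the speed-reading trick comes from the Doppler effect. Here's a minimal sketch (not any carmaker's actual code, and the numbers are made up for illustration) of how a radar converts the frequency shift of an echo into a target's radial speed.

```python
# Illustrative sketch: Doppler arithmetic behind radar speed measurement.
# For a radar whose signal bounces off a moving target, the echo comes
# back shifted by 2 * v * f0 / c, so v = shift * c / (2 * f0).
C = 299_792_458.0  # speed of light, m/s

def radial_speed(carrier_hz: float, doppler_shift_hz: float) -> float:
    """Speed of a target along the radar's line of sight, in m/s.

    Positive means the target is closing the distance.
    """
    return doppler_shift_hz * C / (2.0 * carrier_hz)

# A 77 GHz automotive radar seeing a ~2.6 kHz shift:
print(round(radial_speed(77e9, 2566.0), 2))  # about 5 m/s, a brisk cyclist
```

Note the limitation this math implies: radar directly measures only the component of your speed toward or away from the car, which is part of why its picture of the world is so coarse.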
Now, gaze upon the roof. Up here, and maybe dotting the sides and bumpers of the car too, you'll find the second leg of this sense-ational trio.
The cameras—sometimes a dozen to a car and often used in stereo setups—are what let robocars see lane lines and road signs. They only see what the sun or your headlights illuminate, though, and they have the same trouble in bad weather that you do. But they've got terrific resolution, seeing in enough detail to recognize your arm sticking out to signal that left turn. That's so vital that Elon Musk thinks cameras alone can enable a full robot takeover. Most engineers don't want to depend on just cameras, but they're still working hard on the machine-learning techniques that will let a computer reliably parse a sea of pixels. Seeing your arm is one thing. Distinguishing it from everything else is the tricky bit.
If you spot something spinning, that'll be the lidar. This gal builds a map of the world around the car by shooting out millions of light pulses every second and measuring how long they take to come back. It doesn't match the resolution of a camera, but it should bounce enough of those infrared lasers off you to get a general sense of your shape. It works in just about every lighting condition and delivers data in the computer's native tongue: numbers. Some systems can even detect the velocity of the things they see, which makes deciding what matters far easier. The main problems with lidar are that it's expensive, its reliability is unproven, and it's unclear if anyone has found the right balance between range and resolution. The 50-plus companies developing lidar are working to solve all of these problems. (Oh, and they don't always spin.)
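The "measuring how long they take to come back" part is refreshingly simple math. Here's an illustrative sketch of the time-of-flight arithmetic (the 400-nanosecond echo is an invented example, not a spec from any real sensor).

```python
# Illustrative sketch: lidar time-of-flight ranging.
# A pulse travels out to the target and back, so the one-way
# distance is c * t / 2.
C = 299_792_458.0  # speed of light, m/s

def range_from_echo(round_trip_s: float) -> float:
    """Distance, in meters, to whatever bounced the pulse back."""
    return C * round_trip_s / 2.0

# An echo that returns after ~400 nanoseconds:
print(round(range_from_echo(400e-9), 1))  # roughly 60 m
```

Do that millions of times a second in millions of directions and you get the familiar 3D point cloud.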
Some outfits also use ultrasonic sensors for close-range work (those are what let your car beep you into madness when you're backing into a tight space) and microphones to listen for sirens, but that's just icing on the cake.
Once the sensors pull in their data, the car's computer puts it all together and starts the hard part: identifying what's what. Is that a toddler or a garbage can? A leaf or a pigeon? A teen riding a scooter or a Wacky Waving Inflatable Arm-Flailing Tubeman? Better hardware makes answering such questions easier, but the real work here relies on machine learning—the art of teaching a robot that this cluster of dots is an old man using a walker, and that swath of pixels is a three-legged dog. But once it knows how to see, the question of how to drive gets easy: Don't hit either one of them.
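The "puts it all together" step is its own discipline, sensor fusion. Here's a toy sketch of one fusion idea: match a radar return (which knows speed but not shape) to the nearest lidar cluster (which knows shape but maybe not speed), so a labeled object also gets a velocity. All the data and labels here are hypothetical, and real stacks use far more sophisticated association and tracking.

```python
# Toy sensor-fusion sketch: pair each radar return with the nearest
# lidar cluster so a recognized shape also carries a measured speed.
import math

lidar_clusters = [
    {"label": "cyclist", "x": 4.0, "y": 1.5},
    {"label": "garbage can", "x": 9.0, "y": -2.0},
]
radar_returns = [
    {"x": 4.2, "y": 1.4, "speed_mps": 5.0},
]

def fuse(clusters, returns):
    """Attach each radar speed to the closest lidar cluster's label."""
    fused = []
    for r in returns:
        nearest = min(
            clusters,
            key=lambda c: math.hypot(c["x"] - r["x"], c["y"] - r["y"]),
        )
        fused.append({"label": nearest["label"], "speed_mps": r["speed_mps"]})
    return fused

print(fuse(lidar_clusters, radar_returns))
# → [{'label': 'cyclist', 'speed_mps': 5.0}]
```

The punchline for our cyclist: the car doesn't just see a shape or just see motion; it stitches both into one object worth braking for.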
Alex Davies is the editor of WIRED's transportation section and routinely finds himself cycling on streets populated by robot cars, which he really, really hopes see as well as the techies promise.