Thrun now leads Google’s efforts, and is pushing the technology far beyond that seen in 2005. To get a sense of how far it has evolved, you only need to watch a video that was recently posted by the firm. It starts in a relatively innocuous way: a middle-aged man walks out of a house, and towards the driver’s side of a blue car. He gets in and then confidently pulls out into the street. And that is when things start to get unusual. The man is Steve Mahan, and he is blind. “Ninety-five percent of my vision is gone. I’m well past legally blind,” he explains.
During the course of the three-minute film the car drives through a residential area, obeying road signs. It pulls into a drive-through restaurant and up to a dry-cleaners, where Mr Mahan uses a white cane to walk into the shop. At no point in the video is he seen to touch the steering wheel.
The car works by combining inputs from a number of different sensors to build up a picture of its environment. At the heart of the system is a spinning laser range finder, known as a Lidar (Light Detection and Ranging). Mounted on the roof of the car, it fires a series of rapid laser pulses and measures the reflections to build up a detailed 360-degree picture of surrounding objects up to 60m (200ft) away.
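The principle behind Lidar ranging is simple time-of-flight arithmetic: light travels at a known speed, so timing a pulse's round trip gives the distance to whatever reflected it. A minimal sketch (illustrative only, not the sensor's actual firmware):

```python
# Time-of-flight ranging, the principle behind a Lidar unit.
# A pulse travels out to an object and back; since we time the
# round trip, the one-way distance is half the total path.

C = 299_792_458.0  # speed of light in metres per second

def range_from_pulse(round_trip_seconds: float) -> float:
    """Distance to the reflecting object, in metres."""
    return C * round_trip_seconds / 2.0

# A pulse returning after roughly 400 nanoseconds corresponds to
# about 60 m, the stated maximum range of the roof-mounted unit.
print(round(range_from_pulse(400e-9), 1))  # prints 60.0
```

A spinning unit repeats this measurement thousands of times per revolution at different angles, which is what yields the 360-degree picture described above.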
This is then combined with a series of other inputs. For example, a video camera inside the car, mounted at the top of the windscreen, helps to detect objects in more detail. It can spot traffic lights, as well as fast-moving objects such as pedestrians or cyclists. The car also has four radar sensors, mounted on the front and rear bumpers (three at the front, one at the back), which allow it to "see" other traffic.
A GPS receiver and an inertial measurement unit are also used to figure out the car's compass heading, whilst a sensor on the left rear wheel helps keep track of small movements. All of these inputs are then combined with a detailed map stored on an onboard computer to keep track of the car's exact position.
The map is crucial and is created by training the car. Google says that before the robot car tackles any road, the team first drive the route themselves using similar equipment to create a detailed, digital map of all of the features of the road. By mapping things like lane markers and traffic signs, the software in the car becomes familiar with the environment and its characteristics in advance. When they then want to drive that route autonomously, the car compares data coming in from its sensors with the previously recorded set. The sensors are used to help figure out where other cars are, how fast they are moving and the position of any unexpected objects – for instance, pedestrians stepping out into the road.
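The comparison between live sensor data and the recorded map can be illustrated with a toy example. In this sketch (an assumption about the general approach, not Google's actual software), the car matches observed landmark positions, such as lane markers or signs, against the positions the map predicts, and averages the discrepancies to estimate how far it has drifted:

```python
# Toy localisation sketch: compare live landmark detections with a
# pre-recorded map. Each landmark is an (x, y) position in metres.
# The two lists are index-aligned: mapped[i] and observed[i] refer
# to the same physical landmark.

def estimate_offset(mapped, observed):
    """Average displacement of observations relative to the map.

    If every landmark appears shifted by the same amount, the car
    itself is displaced by the opposite amount from where it thinks
    it is, so this average is the correction to apply.
    """
    n = len(mapped)
    dx = sum(o[0] - m[0] for m, o in zip(mapped, observed)) / n
    dy = sum(o[1] - m[1] for m, o in zip(mapped, observed)) / n
    return dx, dy

# Every landmark is seen 0.5 m to the left of its mapped position,
# so the estimated offset is (-0.5, 0.0).
mapped = [(0.0, 10.0), (5.0, 12.0), (9.0, 15.0)]
observed = [(-0.5, 10.0), (4.5, 12.0), (8.5, 15.0)]
print(estimate_offset(mapped, observed))  # prints (-0.5, 0.0)
```

Real systems use far more sophisticated probabilistic matching, but the core idea is the same: the recorded map supplies expectations, and deviations from those expectations either refine the car's position estimate or flag something unexpected, such as a pedestrian.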
This allows the car to navigate and drive in a tame environment. But the real world is messy and dominated by subtle human behaviours. As a result, its developers have programmed in additional behaviours to help it cope. For example, at a four-way crossing the car follows the rules of the road. However, if other drivers fail to do the same, the car may edge forward to signal its intention to them.
Master and slave
Impressive as the Google technology is, it is unlikely that it will be introduced overnight.