You may soon be sharing the road with intelligent, self-driving cars that promise to save time and fuel, cut traffic jams and prevent accidents.
When the first cars hit British roads in the late 19th Century, they had an unusual safety feature. Every “horseless carriage”, as they were known, was chaperoned by a man walking in front waving a red flag or carrying a lantern, to warn other road users of the vehicle's approach. There was a certain reassurance, it seems, in having a human present, even if it was done in such a preposterous way.
These early precautions – known as the “red flag laws” – seem laughable now. But future generations may look at the safety measures that are imposed on self-driving – or robotic – cars in much the same way.
On the rare occasions these autonomous vehicles are allowed out in public they are usually chaperoned by a human who sits in the “driver’s seat”, ready to take control if something goes wrong. But the nascent industry developing these cars believes this kind of insurance policy will soon go the same way as red flags.
In the US, laws allowing the vehicles to drive themselves on regular roads are already being debated and approved. In Nevada, for example, the state government has begun to draft a set of regulations that will allow these vehicles on its roads. One of the proposals is for robotic cars to be identified by red license plates.
Developments like this show that it is a question of when – not if – robotic vehicles hit our roads. And with good reason. Proponents say self-driving cars will save time and fuel, cut traffic jams and prevent some of the estimated 1.2m deaths that occur globally every year due to car accidents.
“Safety is definitely the number one benefit,” says Sven Beiker, the executive director of the Center for Automotive Research at Stanford University. “In 95% of accidents, human error is at least a contributing factor.” A self-driving car on the other hand cannot become distracted, take a phone call, fall asleep, or drive under the influence of alcohol.
As a result, manufacturers such as Ford have announced that autonomous vehicles are the future. Bill Ford, executive chairman of the Ford Motor Company, recently said that the company sees “the introduction of semi-autonomous driving technology, including driver-initiated ‘auto pilot’ capabilities, and vehicle platooning in limited situations” as early as 2017.
In the longer term, from 2025 onwards he believes we will see the “arrival of smart vehicles capable of fully autonomous navigation, with increased ‘auto pilot’ operating duration, plus the arrival of autonomous valet functions, delivering effortless vehicle parking and storage."
And it is not just Ford who believes in this future. Manufacturers including GM, BMW, Audi and Volvo are all working on systems that promise to allow drivers to take their hands off the wheel. But it is a project by the search giant Google that has captured people’s attention.
The firm – which revealed details of its driverless car project in 2010 – has clocked up hundreds of thousands of miles in a fleet of seven vehicles including a Toyota Prius and an Audi TT. It is the evolution of a technology that really came to public consciousness in the 2004 Darpa Grand Challenge, a US military competition that saw robotic cars compete along a desert course. In the first race, none of the cars were able to complete the course. But one year later a car called Stanley, developed by Sebastian Thrun from Stanford University, romped home to claim a $2m prize for completing a 130-mile (210km) course in less than seven hours.
Thrun now leads Google’s efforts, and is pushing the technology far beyond that seen in 2005. To get a sense of how far it has evolved, you only need to watch a video that was recently posted by the firm. It starts in a relatively innocuous way: a middle-aged man walks out of a house, and towards the driver’s side of a blue car. He gets in and then confidently pulls out into the street. And that is when things start to get unusual. The man is Steve Mahan, and he is blind. “Ninety-five percent of my vision is gone. I’m well past legally blind,” he explains.
During the course of the three-minute film the car drives through a residential area, obeying road signs. It pulls into a drive-through restaurant and then up to a dry-cleaner's, where Mr Mahan uses a white cane to walk into the shop. At no point in the video is he seen to touch the steering wheel.
The car works by combining inputs from a number of different sensors to build up a picture of its environment. At the heart of the system is a spinning laser range finder, known as a Lidar (Light Detection and Ranging). Mounted on the roof of the car, it fires a series of rapid pulses of laser light and measures the reflections to build up a detailed 360-degree picture of surrounding objects up to 60m (200ft) away.
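The basic range calculation behind such a sensor is simple: a pulse's round-trip travel time gives the distance, and the beam's direction places the reflection in space. This toy sketch (not Google's or any Lidar vendor's code; the function name and angles are illustrative) shows the idea:

```python
# Illustrative sketch: turning one Lidar pulse's time of flight and
# beam direction into a 3D point around the car.
import math

SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def pulse_to_point(time_of_flight_s, azimuth_deg, elevation_deg):
    """Return an (x, y, z) point in metres for one reflected pulse."""
    # The pulse travels out and back, so halve the round-trip distance.
    distance = SPEED_OF_LIGHT * time_of_flight_s / 2.0
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = distance * math.cos(el) * math.cos(az)
    y = distance * math.cos(el) * math.sin(az)
    z = distance * math.sin(el)
    return (x, y, z)

# A pulse returning after about 400 nanoseconds corresponds to an
# object roughly 60m away -- the range quoted for the roof-mounted unit.
x, y, z = pulse_to_point(400e-9, azimuth_deg=0.0, elevation_deg=0.0)
```

As the unit spins, millions of such points per second accumulate into the 360-degree picture described above.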
This is then combined with a series of other inputs. For example, a video camera mounted inside the car at the top of the windscreen helps to detect objects in more detail. It can spot traffic lights, as well as fast-moving things like pedestrians or cyclists.
A GPS receiver and an inertial measurement unit are also used to figure out the car's compass heading, whilst a sensor on the left rear wheel helps keep track of small movements of the car. Four radars, mounted on the front and rear bumpers, allow the car to "see" other traffic. All of these inputs are then combined with a detailed map stored on an onboard computer, to keep track of the car’s exact position.
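Why combine so many sources? Each one is flawed on its own: wheel-based dead reckoning drifts over distance, while GPS is noisy but does not drift. A toy one-dimensional blend (a simple complementary filter, used here purely for illustration; it is not the car's actual fusion algorithm, and the 2% odometry error is an invented figure) shows how the weaknesses cancel:

```python
# Toy sensor fusion: advance the estimate by wheel odometry, then nudge
# it a little towards the (noisier but drift-free) GPS reading.

def fuse(position_est, wheel_delta_m, gps_position_m, gps_weight=0.05):
    """One update step of a simple complementary filter (1-D)."""
    predicted = position_est + wheel_delta_m          # dead reckoning
    return (1 - gps_weight) * predicted + gps_weight * gps_position_m

position = 0.0
# The car really moves 1m per step; odometry over-reads by 2% (an
# assumed error), while GPS happens to read the true position here.
for step in range(1, 201):
    position = fuse(position, wheel_delta_m=1.02, gps_position_m=float(step))

# After 200 steps the fused estimate stays within about half a metre of
# the true 200m, whereas raw odometry alone would have drifted to 204m.
```

Real systems use far more sophisticated filters over many more dimensions, but the principle of weighting complementary sensors is the same.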
The map is crucial and is created by training the car. Google says that before the robot car tackles any road, the team first drive the route themselves using similar equipment to create a detailed, digital map of all of the features of the road. By mapping things like lane markers and traffic signs, the software in the car becomes familiar with the environment and its characteristics in advance. When they then want to drive that route autonomously, the car compares data coming in from its sensors with the previously recorded set. The sensors are used to help figure out where other cars are, how fast they are moving and the position of any unexpected objects – for instance, pedestrians stepping out into the road.
This allows the car to navigate and drive in a tame environment. But the real world is messy and dominated by subtle human behaviours. As a result, its developers have programmed it with other traits that allow it to cope with the real world. For example, at a four-way crossing, the car follows the rules of the road. However, if this does not work because other drivers do not follow the rules, the car may edge forward to signal its intention to other drivers.
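That behaviour can be thought of as a small decision rule layered on top of the rules of the road. The following is a toy model of the idea only – the real decision logic is far richer, and the three-second timeout is an invented threshold:

```python
# Toy model of four-way-stop behaviour: follow right-of-way, but if the
# other cars hesitate, creep forward to signal intent.

def intersection_action(arrived_first, cross_traffic_moving, seconds_waited,
                        creep_after_s=3.0):
    """Decide what to do at a four-way stop (illustrative only)."""
    if arrived_first and not cross_traffic_moving:
        return "proceed"          # normal right-of-way
    if seconds_waited > creep_after_s and not cross_traffic_moving:
        return "edge forward"     # signal intent when others hesitate
    return "wait"                 # yield while cross traffic moves
```

The point is that the strict rule ("whoever arrived first goes") is only the default; the fallback encodes a social signal that human drivers understand.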
Master and slave
Impressive as the Google technology is, it is unlikely that it will be introduced overnight.
“I think features will creep in incrementally, and then one day the vehicle will actually be driving itself. It sounds like a revolution now, but it will actually happen very organically and very naturally,” says Raj Rajkumar, professor of Electrical and Computer Engineering at Carnegie Mellon University – part of the team that designed the vehicle that won the Darpa Urban Challenge – a follow-up to the original Grand Challenge event.
Already, autonomous features – such as adaptive cruise control, self-parking and hazard-awareness cameras – are becoming increasingly common on cars. The next step may be semi-autonomous technology, such as that developed by a European project called Sartre (Safe Road Trains for the Environment).
This project aims to design and build technology that allows vehicles to fall into semi-autonomous “platoons”, when travelling on highways, for example. The idea of this kind of project is to pack more cars onto limited road space, reduce congestion and use less fuel.
“Our highways are not necessarily being used efficiently,” says Sven Beiker, who is not involved with the project. “Even if you look at a highway during rush hour, not more than 20% to 30% of the surface area is actually occupied by vehicles. There is a lot of space left and right, in front of and behind the vehicles.”
Intelligent cars should be able to drive much more closely together on fast roads, allowing them to slip-stream each other, reducing drag. As the computers are able to take a “big-picture” view of traffic on the road, they should also be able to reduce stop-start traffic.
In 2012, the Sartre consortium of manufacturers and researchers showed off a semi-autonomous road train of three cars following a truck at a Volvo test track in Sweden. Each car – initially driven by a human – slots in behind the truck, allowing a wireless system to take over the controls. Commands to steer, speed up and slow down all come from the driver of the lead vehicle. When a driver wants to leave the train, they simply take back control.
“As a driver in a road train, the idea is in fact to be able to both read the newspaper and eat breakfast whilst travelling at 90km/h,” says a plummy voiceover on a video that shows the trial. In the final system, the researchers envisage lots of cars “slaved” to a lead vehicle travelling at high speed along specific routes on motorways and highways.
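At its simplest, following in such a train means matching the leader's speed while holding a target gap. The sketch below is an invented proportional controller, not the Sartre project's actual control law, and the gap and gain values are illustrative only:

```python
# Toy platoon follower: match the lead vehicle's speed, adjusting in
# proportion to the gap error to close up or drop back.

def follower_speed(lead_speed_kmh, gap_m, target_gap_m=6.0, gain=2.0):
    """Speed command for a follower in the road train (illustrative)."""
    # Too wide a gap -> go faster than the leader; too narrow -> slower.
    return lead_speed_kmh + gain * (gap_m - target_gap_m)

# At the 90km/h quoted above, a follower that has fallen 2m behind its
# target gap briefly commands a higher speed to close up again.
cmd = follower_speed(lead_speed_kmh=90.0, gap_m=8.0)
```

A real system would also damp the response and enforce hard safety limits, but the wireless link carrying the leader's commands makes even this simple loop far tighter than human reaction times allow.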
If and when it becomes a reality, it may signal the beginning of the end for human drivers on most of our roads. Then, people who still want the pleasure of driving themselves may have to warn other road users that they are engaging in such a dangerous activity. Perhaps, then, we may have to consider reintroducing the red flag?