Google and the Self-Driving Car: A Nightmare for Regulators

February 3, 2016

Imagine never losing the ability to drive due to illness, age, or disability.

Think of a world where all cars drive the same, regardless of the risk-averse or risk-seeking personality of the individual in the car. Google has made this a reality with its self-driving car.

Google currently has self-driving Lexus SUVs and newer prototype vehicles roaming the streets of Mountain View, California and Austin, Texas. Google's cars are self-driving 30,000 to 40,000 miles per month, roughly the equivalent of two to four years of typical U.S. adult driving. These test drives allow engineers to expand and assess the software's capabilities in real-life conditions. Self-driving cars rely on sensors to detect objects in all directions, including pedestrians, birds, traffic barricades, and other vehicles. The software then processes that information to determine what action to take next in order to operate and navigate the vehicle safely.

In testing, human drivers have intervened ("disengagements," in industry terminology) 69 times. In thirteen of those interventions, Google's self-driving car would have hit an object had the human not taken control. In ten of the thirteen events, Google's technology was at fault. In the other three, one of which involved another driver going the wrong way down a street, other drivers were to blame, yet Google acknowledged that the self-driving car would not have taken the evasive action that the human driver did take. Following these incidents, Google implemented software fixes, which were tested and re-tested.
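The detect-then-decide pipeline described above can be illustrated with a deliberately simplified sketch. Everything here is hypothetical: the class, function names, and thresholds are invented for illustration, and Google's actual software stack is vastly more sophisticated and not public. The sketch only shows the general shape of the loop, with detections coming in and a driving action coming out.

```python
from dataclasses import dataclass

# Hypothetical sketch of a sense -> decide loop. All names and distance
# thresholds are invented for illustration; this is not Google's software.

@dataclass
class DetectedObject:
    kind: str         # e.g. "pedestrian", "vehicle", "barricade"
    distance_m: float # distance from the car, in meters
    closing: bool     # is the object moving toward the car's path?

def choose_action(objects: list[DetectedObject]) -> str:
    """Pick the most conservative action warranted by the detections."""
    for obj in objects:
        if obj.kind == "pedestrian" and obj.distance_m < 20:
            return "brake"  # always yield to a nearby pedestrian
        if obj.closing and obj.distance_m < 10:
            return "brake"  # anything closing at short range forces a stop
    if any(obj.distance_m < 40 for obj in objects):
        return "slow"       # something ahead, so ease off
    return "cruise"         # road is clear

# One cycle of the loop: sensor detections in, driving action out.
sensed = [
    DetectedObject("vehicle", 35.0, False),
    DetectedObject("pedestrian", 15.0, True),
]
print(choose_action(sensed))  # -> brake (the nearby pedestrian dominates)
```

In a real system this decision step runs continuously against a stream of sensor data, and the disengagement reports discussed above are precisely the cases where the human driver judged this kind of automated decision to be wrong or too slow.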
Some are skeptical of the idea of a computer-controlled car: some fear for safety, while others fear hacking. Cautious regulators, wary of computer failures and software bugs, currently require that a self-driving car receive independent certification that it is safe and its software secure before it ever hits a public road. This certification process differs from that for human-driven cars, where the manufacturer alone certifies safety. Regulations further require that a licensed driver be present in any self-driving car operated on public roads, and that the car be equipped with a steering wheel and gas and brake pedals so the human driver can intervene should the technology go awry. The regulations thus show a preference for human drivers over computer drivers, yet they also require manufacturers to protect humans against their own errors: a self-driving car must come to a safe stop if the supervising human intervenes in an inappropriate way.
In reaction to regulators fearful of computer failure, Google has concluded that human error is the biggest risk in driving, and it believes that its relentlessly tested software will reduce traffic accidents and deaths by eliminating sleepy and distracted drivers from the road. Google also points to the regulations themselves, which mandate that the software protect humans from their own inappropriate actions through a safe-stop mechanism. If regulators do not fully trust human judgment, Google asks, why require human operators at all?
Others fear that hackers will compromise the software in self-driving cars. Human-driven cars have had recent issues with hackers, and with the additional software, sensors, and internet connections in self-driving cars, it is even more likely that hackers will find vulnerabilities to exploit. Some technology experts have said that it is possible to take control of vehicle software by dialing into a car's built-in cellular connection or by giving a driver a CD that infects the car with a virus or connects it to the hacker's computer. A successful intrusion would allow a hacker to remotely control the brakes, engine, door locks, and other parts of the car. These potential security issues strike fear in many regulators, who must balance public safety against advancing technology. In response to these security fears, Google has said that its cars are still being improved and are not yet ready for public release.