How Easy Is It To Hack A Self-Driving Car?
The doomsday scenario is all too easy to imagine.
In 2015, Wired reporter Andy Greenberg introduced the world to a terrifying new kind of threat: that hackers could, given the right circumstances, remotely take control of a car. Specifically, his 2014 Jeep Cherokee—and possibly all kinds of newer vehicles from Chrysler, which makes the Jeep. So once the roads are dominated by fully automated self-driving cars, will we be hackers’ playthings, apt to crash into a wall or drive into a lake at their whim?
Not necessarily. Car hacking is a new phenomenon: auto manufacturers, many of which have been in the business for decades, have only just started experimenting with so-called connected cars, which offer features like letting owners unlock their doors with an app, run diagnostic tests from afar, or check the battery status of an electric model. As any hacker, whether a criminal or a security researcher looking for flaws, will tell you, new technology that has only recently been exposed to the internet at large is ripe to be exploited.
And so far, automakers haven’t been terribly receptive to the recommendations of independent security researchers. Chrysler quickly patched that Jeep vulnerability before Greenberg’s story went live, though its announcement that the problem had been fixed didn’t bother to credit the researchers who found it. When the British security company Pen Test Partners told Mitsubishi it had figured out how to hack its Outlander SUVs—and track them around the world, thanks to a built-in Wi-Fi system that didn’t randomize their unique IDs—Mitsubishi initially ignored it. Then, once the BBC ran a video segment on the subject, Mitsubishi fixed the flaws.
“Now is the transitional period, and it’s kind of ugly,” Craig Smith, founder of Open Garages, a collective of ethical hackers focused on personal vehicles, told Vocativ. “They’re old-school industries. They were mechanic or electronic kinds of systems, and now they’re software-based companies—and they haven’t realized they’re software-based companies, and that’s sort of the problem.”
It’s worth noting, however, that as dire as these hacking scenarios were, the flaws were discovered by security researchers, not malicious hackers. Per a December 2015 report from the U.S. Department of Transportation’s National Highway Traffic Safety Administration, no car is known to have been maliciously hacked in the wild—only by the good guys.
Greenberg’s Jeep stunt was perhaps the most extreme example to date of the almost comically common scenario of a hacker wreaking havoc in a world slowly testing out the Internet of Things—the everyday consumer items that manufacturers equip to go online. The problem with the IoT, of course, is that anything that connects to the internet has an access point, and it’s in the nature of any hacker—whether a security researcher or a criminal—to try to exploit it. Greenberg’s Jeep, for example, was hacked by two enterprising researchers who figured out that the model had vulnerabilities in its internet-connected dashboard computer, giving them the ability to control the air conditioning and radio, to kill the engine, and to control the steering when the car was in reverse.
But far more often, IoT hacking is much more mundane, like hacking a talking Barbie to see what its owner says to it, or hacking a smart refrigerator to send out spam. Despite the creepy and insecure implications of those kinds of hacks, they’re a far cry from a scenario that puts human lives in danger.
“In the past, a lot of our efforts in the cybersecurity community have been focused on desktop computers, corporate networks, stuff like that,” Will Glass, a researcher at cybersecurity firm FireEye’s Horizons division, which anticipates cybersecurity issues that can arise with technology of the future, told Vocativ. “And anytime anything bad happens—a hack, or ransomware, or something like that—the [physical] harm to actual humans is pretty limited.”
“But when we start allowing computers to take care of cars, the potential for malfunction that causes harm to humans is much higher,” Glass said.
In a sense, though, self-driving cars are less prone to devastating hacks than the standard internet-connected cars we’ve seen attacked in the past, Smith says. It’s one thing for an attacker to be able to turn up the radio, and quite another to gain fully authenticated control of the computer that runs the entire vehicle.
Driverless vehicles are equipped with a large number of sensors—they had better be, if a car is to correctly distinguish a pothole from a puddle at night, or a deer on the edge of the road from a tree. Those sensors are numerous and work in concert—”Similar to how humans have five different senses to understand the world around them,” Smith says. As such, the network that allows a driverless car to make decisions is more resilient to attack, because it doesn’t fully trust any one sensor that reports vastly different information from the others.
“An attacker has two options when on a network like this: fake all the sensors simultaneously in a way that can’t be detected, or take over the core decision making [central processing unit],” he says. And since the former is nearly impossible, a hacker would have to focus on the latter, which gives security engineers a clearer model of what they’d want to protect.
“In a self-driving vehicle you have more ways an attacker can get ‘in’ to a vehicle, but since the network isn’t trusted, being ‘in,’ doesn’t get you much,” Smith says.
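Smith’s point about distrusting any single sensor can be illustrated with a toy sketch. The function name, values, and median-based outlier rejection below are illustrative assumptions—real autonomous driving stacks use far more sophisticated fusion techniques, such as Kalman filters—but the principle is the same: a reading that disagrees sharply with the rest carries little weight.

```python
# Toy illustration of outlier-rejecting sensor fusion: the decision
# logic distrusts any single reading that disagrees sharply with the
# others, so spoofing one sensor has little effect on the outcome.
from statistics import median

def fuse_distance(readings, tolerance=2.0):
    """Fuse distance estimates (in meters) from several sensors.

    Readings that deviate from the median by more than `tolerance`
    meters are treated as faulty or spoofed and discarded.
    """
    m = median(readings)
    trusted = [r for r in readings if abs(r - m) <= tolerance]
    return sum(trusted) / len(trusted)

# Lidar, radar, and camera roughly agree on an obstacle ~10 m ahead;
# a spoofed fourth sensor claims it is 40 m away.
readings = [10.2, 9.8, 10.5, 40.0]
print(round(fuse_distance(readings), 2))  # 10.17 — the outlier is ignored
```

To fool this kind of logic, an attacker would have to spoof most of the sensors at once and keep them mutually consistent—which is why Smith argues the more practical target is the central decision-making unit itself.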
Google, similarly, doesn’t let its cars be driven remotely, making it far more difficult for a hacker to access the car’s driving mechanisms from their laptop. “We have a world-class team of Google engineers dedicated to making our technology secure,” a Google spokesperson told Vocativ, including “limiting what our cars can be told to do remotely (the car itself is responsible for making the driving decisions).”
It’s true that consumers have good reason not to trust that car manufacturers take their safety seriously. The industry has certainly made grim calculations about the value of human life—just look at the times it waited to recall a faulty part until enough people had died that a recall would be cheaper than paying wrongful-death lawsuits. GM, for instance, infamously stalled on recalling faulty ignition switches that kept airbags from inflating. At least 124 people died, and GM eventually had to recall nearly 30 million cars, costing the company billions.
As mass adoption of self-driving cars draws closer to reality, a combination of market demand and government legislation should lead to far more rigorous cybersecurity standards than are commonly applied to computers, Glass said.
Another reason manufacturers will likely invest heavily in cybersecurity is simple insurance economics. Insurance companies, for example, would likely hold manufacturers of self-driving cars far more liable, relative to drivers, than they currently do. With self-driving cars, Glass said, “The liability for manufacturers will go up.” With no one at the wheel, there simply wouldn’t be anyone else to blame.
Insurance companies know it’s in their best interest to seize on any possible recall. If one of their customers gets into an accident because a car malfunctioned, it saves the insurer money if the manufacturer is liable. Some insurance companies, like Liberty Mutual, actively inform customers if their vehicle is on a recall list.
And while most auto insurance companies haven’t yet created plans for self-driving cars, some that do, like England’s To The End, preemptively offer financial protection from hacking. “[I]f someone hacked into your car and somehow managed to control it remotely, resulting in an accident, you would be covered,” company spokesperson Matt Ware told Vocativ. “That would include damage to your own vehicle, and any third-party liability costs that arise.” Such companies would be strongly incentivized to blame manufacturers for hacking whenever possible.
There’s also an emerging call in government for basic cybersecurity standards. After seeing Greenberg’s Jeep story, Senator Ed Markey (D-Mass.) introduced a bill, the SPY Car Act, which would mandate that car manufacturers follow a set of cybersecurity standards created by the National Highway Traffic Safety Administration. Though the bill has languished in committee, the NHTSA has created such standards anyway, built on common principles like encrypting the data that cars collect, requiring strong authentication for tools that can remotely turn a car on or off, and establishing an information-sharing database so the industry can better compare attacks. And while there’s no information-sharing database to date, the Auto Alliance and the Association of Global Automakers, groups that together include every major car manufacturer in the U.S., have been committed to a basic framework of cybersecurity principles since 2014.
In April, four members of Congress formed the Smart Transportation Caucus, citing the need for Congress to consider cybersecurity in the auto industry. “Autonomous vehicle technology offers great promise both in reducing traffic fatalities and improving efficiency, but every computer system is only as strong as its weakest link,” Rep. Ted Lieu, one of the caucus’s founding members, said in a statement to Vocativ. “[B]ecause of the public safety and national security implications of a hack, automotive cybersecurity needs to be managed at the federal level and avoid a patchwork of inconsistent state laws.”
There’s no doubt that in the coming years far more IoT devices will be hacked, and some of them will be cars. It’s also entirely possible that a hacker will take control of a car, leading to a person’s injury or even death. But those who would try should start soon. On the whole, car manufacturers haven’t yet given the issue the attention it deserves—and that won’t be the case for long.
“If today you were to announce, ‘Hey we have this new mode of transportation, it’s going to revolutionize how people get around, and it’s gonna kill like 30,000 people a year,’ people would be like, ‘No, we’re not gonna do that,'” Smith said. “But that’s what cars do today! Not saying that needs to be our bar, but in general that’s kind of a low bar.”
Where we’re going, we’ll still need roads, but will we need drivers? This week, Vocativ explores the state of autonomous vehicles—their regulation, technology, and security—and how close we really are to a driverless future.