
In Self-Driving Car Road Test, We Are the Guinea Pigs

Most of the more than 200,000 Teslas on the road have Autopilot, including this Model S P100D, shown at the MobilityX self-driving conference May 8 in Dublin, Ireland—and all are part of a massive road test of AI and self-driving technology. Photo: Artur Widak/Zuma Press

Even if you don’t own a Tesla, you are, or might soon become, part of the company’s massive experiment in automotive safety.

There are already more than 200,000 Teslas on the road, and all of those built after early 2015 are capable of Autopilot, the company’s semiautonomous driving feature. This makes drivers, and anyone encountering these cars on the road, guinea pigs who are helping to train the artificial intelligence Tesla ultimately hopes to use for a fully autonomous driving system.

During this experiment, at least two people have died in the driver’s seats of Teslas that crashed while Autopilot was engaged, but Chief Executive Elon Musk argues the system continues to improve and that, overall, Teslas are safer with the technology than without it.

Alphabet Inc.’s Waymo and Uber Technologies Inc., among others, are also road testing on public streets. They’re experimenting at much smaller scales, though an Uber autonomous vehicle killed a pedestrian in March.

These experiments are based on a number of assumptions about the abilities of AI, and the compatibility of humans and partially autonomous driving systems. If automobile companies are wrong about any of them—and there’s reason to believe they are—we’ll almost certainly see more self-driving car accidents, as semiautonomous technology becomes commonplace.

Waymo is testing fully self-driving cars, such as this one seen at Google’s annual I/O developers conference in Mountain View, Calif., on May 8. Photo: Stephen Lam/Reuters

That isn’t to say we shouldn’t be on this path. Every year, some 40,000 people die in the U.S. in traffic-related accidents—a situation made worse by distracted driving.

We have established methods for responsibly rolling out life-saving new technologies before—think of clinical trials for new drugs, or seatbelts and airbags—and we can do it again. But it might mean pumping the brakes on the rollout of self-driving cars.

Tesla’s dangerous game

When engaged, Autopilot keeps the car within its lane, can automatically change lanes, and maintains a safe distance from cars ahead and behind. When it senses a dangerous situation, it alerts the driver, whether or not Autopilot is engaged. Sometimes, however, it’s up to the driver to realize the Autopilot system isn’t doing what it should.

Tesla says that its cars with autonomous driving technology are 3.7 times safer than the average American vehicle. It’s true that Teslas are among the safest cars on the road, but it isn’t clear how much of this safety is due to the driving habits of its enthusiast owners (for now, those who can afford Teslas) or other factors, such as build quality or the cars’ crash avoidance technology, rather than Autopilot.

In the wake of a fatal 2016 crash that occurred while Autopilot was engaged, Tesla cited a report by the National Highway Traffic Safety Administration as evidence that Autopilot mode makes Teslas 40% safer. NHTSA recently clarified that the report was based on Tesla’s own unaudited data and that its analysis didn’t take into account whether Autopilot was engaged. Complicating things further, Tesla rolled out an auto-braking safety feature, which almost certainly reduced crashes, shortly before it launched Autopilot.

The scene after the fatal March 23 crash of a Tesla SUV with the Autopilot driver-assist system engaged, on Highway 101 in Mountain View. Photo: KTVU/Associated Press

There isn’t enough data to independently verify that self-driving vehicles cause fewer accidents than human-driven ones. A Rand Corp. study concluded that traffic fatalities already occur at such relatively low rates—on the order of 1 per 100 million miles traveled—that determining whether self-driving cars are safer than humans could take decades.
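
The rough arithmetic behind that conclusion can be sketched in a few lines of Python. This is a simplified back-of-the-envelope model, not the Rand methodology, and the fleet size and annual mileage below are illustrative assumptions:

import math

# Human baseline: roughly 1.09 traffic deaths per 100 million vehicle miles.
HUMAN_RATE = 1.09e-8      # fatalities per mile
CONFIDENCE = 0.95

# If a self-driving fleet logs m fatality-free miles, the chance of observing
# zero fatalities at the human rate is exp(-HUMAN_RATE * m) (Poisson model).
# That probability must fall below 1 - CONFIDENCE before we can claim, with
# 95% confidence, that the fleet is at least as safe as human drivers.
miles_needed = -math.log(1 - CONFIDENCE) / HUMAN_RATE
print(f"Fatality-free miles needed: {miles_needed:,.0f}")   # roughly 275 million

# Illustrative assumption: 100 test vehicles averaging 25,000 miles a year each.
fleet_miles_per_year = 100 * 25_000
print(f"Years of test driving required: {miles_needed / fleet_miles_per_year:.0f}")   # roughly 110

Demonstrating that self-driving cars are meaningfully safer than human drivers, rather than merely no worse, pushes the required mileage into the billions, which is why verification on the road alone could take decades.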

What we do have is evidence—acknowledged in Tesla’s own user manuals—that Tesla’s semiautonomous driving system is easily fooled by bright sunlight, faded lane markings, seams in the road, etc. Researchers continue to document other ways to trick these systems, as well.

Tesla emphasizes its system is driver-assist technology, not full autonomy, and blamed the driver in the most recent crash that occurred when the system was engaged. Yet Tesla drivers and news reports suggest that in some cases, the only thing keeping drivers from getting into Autopilot-related accidents is their own reflexes.

The company promised a cross-country drive accomplished entirely by its self-driving tech sometime in 2017 but decided the system wasn’t yet ready.

AI’s limitations

None of this surprises experts who understand the AI at the heart of autonomous driving systems. Deep learning—the “intelligent” component of these systems—is “brittle, opaque and shallow,” says Gary Marcus, a professor of psychology and neural science at New York University and the former head of Uber’s AI lab.

A Tesla in a showroom at a Brooklyn Tesla dealership on April 4, 2017 in New York City. Photo: Spencer Platt/Getty Images

AI is brittle because it can’t carry over insights from one context to another, opaque because humans can’t evaluate its neuron-like tangle of connections, and shallow because it’s easy to fool. You can’t just throw more deep learning at a problem and expect it to be as good as a human, says Dr. Marcus.

Decades of research on autopilot systems—whether in airplanes or automobiles—have shown that the most dangerous kind is that which requires the driver to take action when it fails. Less sophisticated semiautonomous driving systems, like adaptive cruise control and enhanced warnings, have been shown to increase safety. Full automation, where ultimately there’s no steering wheel or gas pedal, has only begun to be road tested.

Alphabet’s Waymo decided it was too dangerous to let drivers take control when needed, and skipped right to a fully self-driving ride-share service, Waymo CEO John Krafcik has said. According to the company, and many who research self-driving technology, a system that never asks a driver to take over is safer than making potentially tricky machine-human handoffs.

Tesla promised to release safety data on its self-driving tech regularly starting next quarter. It isn’t clear what kind of data it will release, but experts say public sharing of data, from all makers of autonomous vehicles, is the only way to ensure proper evaluation of the safety of these new technologies. Given that we already evaluate the safety of every other part of a motor vehicle in this way, it just makes sense.

Write to Christopher Mims at christopher.mims@wsj.com
