[AutonomousVehicles] Uber's Self-Driving Car Didn't Malfunction, It Was Just Bad

Cheryl Orgas & William Meeker meekerorgas at ameritech.net
Tue Jun 5 23:05:29 UTC 2018


Fellow Committee Members,


FYI about a work in progress.  As a sighted "driver" was in the vehicle at
the time of the fatal crash, the consequences were less catastrophic than
they would have been with a blind "driver."


In my opinion, it is incumbent upon us blind pedestrians to listen
defensively.


William Meeker


Uber's Self-Driving Car Didn't Malfunction, It Was Just Bad

By Alexis Madrigal
<https://www.nextgov.com/voices/alexis-madrigal/6700/?oref=ng-post-author?oref=rf-post-author>,
The Atlantic <http://www.theatlantic.com/> | May 29, 2018 01:00 PM ET


No software glitch or sensor breakdown led to the fatal crash, merely poor
object recognition, emergency planning, system design, testing methodology,
and human operation.


On March 18, at 9:58 p.m., a self-driving Uber car killed Elaine Herzberg
<https://www.theatlantic.com/technology/archive/2018/03/can-you-sue-a-robocar/556007/>.
The vehicle was driving itself down an uncomplicated road in suburban
Tempe, Arizona, when it hit her. Herzberg, who was walking across the
mostly empty street, was the first pedestrian killed by an autonomous
vehicle.

The preliminary National Transportation Safety Board report on the incident
<https://www.ntsb.gov/investigations/AccidentReports/Reports/HWY18MH010-prelim.pdf>,
released on Thursday, shows that Herzberg died because of a cascading
series of errors, human and machine, which present a damning portrait of
Uber's self-driving testing practices at the time.

Perhaps the worst part of the report is that Uber's system functioned as
designed. There were no software glitches or sensor malfunctions. It just
didn't work very well.

According to the report, the object-detection system misclassified Herzberg
when its sensors first detected her "as an unknown object, as a vehicle,
and then as a bicycle with varying expectations of future travel path."
That led the planning software to make poor predictions of her speed and
direction, and poor decisions about the car's own.
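
To see why an unstable classification matters, consider a minimal sketch
of a planner that infers an object's future path from its current class
label alone. The three class labels come from the report; every name,
number, and simplification below is hypothetical, not Uber's software:

    # Hypothetical sketch: how a class-conditioned motion model behaves
    # when the classifier keeps changing its mind. Not Uber's code.

    # Assumed typical speeds (m/s) a naive planner might attach to each
    # class. There is no "pedestrian" entry: per the report, Herzberg
    # was never classified as one.
    TYPICAL_SPEED = {
        "unknown": 0.0,   # no motion model at all: assumed static
        "vehicle": 13.0,  # assumed to travel along the lane, not across
        "bicycle": 4.0,   # likewise assumed to move with traffic
    }

    def predict_position(obj_class, position, horizon_s):
        """Guess where the object will be horizon_s seconds from now,
        using only the current class label (no persistent history)."""
        speed = TYPICAL_SPEED[obj_class]
        # Direction is inferred from the class: along-road (x), never
        # across-road (y), so a crossing pedestrian is not anticipated.
        x, y = position
        return (x + speed * horizon_s, y)

    # The same object, reclassified on three successive sensor updates:
    for frame_class in ["unknown", "vehicle", "bicycle"]:
        print(frame_class, predict_position(frame_class, (0.0, 0.0), 1.3))

Because each relabeling implies a different motion model, a planner built
this way never settles on a prediction that the object will cross the
car's path.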

At 1.3 seconds before impact, the self-driving computer realized that it
needed to make an emergency-braking maneuver to avoid a collision. But it
did not. Why? Uber's software prevented its system from hitting the brakes
if that action was expected to cause a deceleration of more than 6.5
meters per second squared. That is to say, in an emergency, the computer
could not brake.

"According to Uber, emergency braking maneuvers are not enabled while the
vehicle is under computer control, to reduce the potential for erratic
vehicle behavior," the report says.

Instead, the system relied on the driver to take control in an emergency,
but "the system is not designed to alert the operator."

The driver, for her part, took control of the car less than 1 second before
the crash by grabbing the steering wheel. It wasn't until after impact that
she began braking.

In video footage of the interior of the car leading up to the crash, the
driver is repeatedly seen looking down toward the center console. Many
commentators assumed that she was looking at a phone, but she told NTSB
investigators that "she had been monitoring the self-driving system
interface." In fact, Uber's testing method has operators "monitoring
diagnostic messages that appear on an interface in the center stack of the
vehicle dash and tagging events of interest for subsequent review."

Other self-driving companies' testing protocols involve two people: one to
drive and the other to monitor the system's outputs and do the computer
work. Uber itself did this too until late 2017, when the company decided
that the second operator's job could be done by looking at logs back at
the office. "We decided to make this transition because after testing, we
felt we could accomplish the task of the second person, annotating each
intervention with information about what was happening around the car, by
looking at our logs after the vehicle had returned to base, rather than in
real time," an Uber spokeswoman told CityLab
<https://www.citylab.com/transportation/2018/03/former-uber-backup-driver-we-saw-this-coming/556427/>
earlier this year.

It's unclear what penalties Uber could face for this failure. The company
has already settled a court case with Herzberg's family
<https://www.ft.com/content/1d7f174a-3362-11e8-b5bf-23cb17fd1498>. It has
also scaled back its autonomous-testing efforts.

"Over the course of the last two months, we've worked closely with the NTSB.
As their investigation continues, we've initiated our own safety review of
our self-driving vehicles program," Uber said in an emailed statement.
"We've also brought on former NTSB Chair Christopher Hart to advise us on
our overall safety culture, and we look forward to sharing more on the
changes we'll make in the coming weeks." 
