Uber's self-driving technology detected a pedestrian before a deadly crash in Arizona earlier this year but didn't react quickly enough because it had been set to ignore "false positives," the Information reports, citing "two people briefed on the matter." The sources say the vehicle's perception system was operating normally, but to achieve a smoother ride without too many sudden stops, the software was tuned to ignore objects that posed no threat, such as plastic bags in the road. This meant it did not react in time to prevent the vehicle from fatally striking 49-year-old Elaine Herzberg, and the human operator who was supposed to prevent accidents was not paying attention at the time of the crash, the insiders say.
Uber, which has settled with Herzberg's family, says it is cooperating with the National Transportation Safety Board's investigation and has initiated a "top-to-bottom safety review" of the self-driving program, but it can't comment on the specifics of the case, Recode reports. The Information's sources say that before the Tempe crash, Uber had been "racing to meet an end-of-year internal goal" of allowing customers in the area to ride in its self-driving vehicles with no backup driver present. The company is now banned from testing the vehicles in Arizona. The "death provides a tragic reminder that companies shouldn't get too far ahead of themselves," writes Timothy Lee at Ars Technica. "Getting fully self-driving cars on the road is a worthwhile goal. But making sure that's done safely is more important."