A federal regulator has closed a six-month investigation into whether Tesla’s Autopilot played a role in recent crashes, finding that it could not “identify any defects in design or performance.”
The National Highway Traffic Safety Administration opened an investigation into Tesla’s Autopilot driver assistance system last June after a fatal crash in Florida. That investigation also considered a later crash in Pennsylvania. In a final report released Thursday, the agency said that it found no “defect” in Autopilot, “nor any incidents in which the systems did not perform as designed.”
In the fatal Florida accident that led NHTSA to open an investigation into the design and performance of Autopilot, “neither Autopilot nor the driver noticed the white side of the tractor trailer against a brightly lit sky, so the brake was not applied,” Tesla said in a blog post. The Model S, traveling at 74 mph in a 65 mph zone, struck the trailer and passed under it before veering off the road, killing driver Joshua Brown.
NHTSA, one of two federal agencies that opened investigations into Autopilot last year, requested information from Tesla about all crashes in which airbags were deployed and Autopilot had been in use at the time of the crash or within the 15 seconds before it. Only two of those incidents, the Florida and Pennsylvania crashes, involved fatal injuries.
The investigators determined that for at least 7 seconds prior to the Florida crash, the tractor trailer should have been visible to the driver, according to the report.
After reviewing mileage and airbag deployment data provided by the company, the investigators found that Tesla’s vehicle crash rate dropped by 40 percent when Autosteer, one function of Autopilot that helps drivers remain within lanes, was activated.
“A safety-related defect trend has not been identified at this time and further examination of this issue does not appear to be warranted,” NHTSA’s report concluded.
The agency noted that manufacturers must consider the potential for misuse of any system by drivers. In Tesla’s case, people had filmed themselves going hands-free after activating Autopilot. The name Autopilot, which by definition suggests a system that can drive itself in place of a person, also invited confusion. Drivers who activate Autopilot, however, see a warning to keep their hands on the wheel at all times.
“It appears that over the course of researching and developing Autopilot, Tesla considered the possibility that drivers could misuse the system in a variety of ways, including those identified above - i.e., through mode confusion, distracted driving, and use of the system outside preferred environments and conditions,” NHTSA wrote in its report. “The potential for driver misuse was evaluated as part of Tesla’s design process and solutions were tested, validated, and incorporated into the wide release of the product.”
And in September, Tesla announced it would send over-the-air updates to Autopilot that restrict the amount of time drivers can keep their hands off the wheel while Autopilot is activated. At the time, Tesla CEO Elon Musk said the updates would have prevented the fatal Florida crash.
NHTSA spokesman Bryan Thomas told reporters Thursday that the agency has concerns about the naming of driver assistance systems, and that “it’s important for manufacturers to design with the inattentive driver in mind.”
A footnote in the report acknowledges this issue: “NHTSA recognizes that other jurisdictions have raised concerns about Tesla’s use of the name ‘Autopilot.’ This issue is outside the scoop [sic] of this investigation.”
Priya Anand is a tech and transportation reporter for BuzzFeed News and is based in San Francisco.
Contact Priya Anand at firstname.lastname@example.org.