March 28, 2024

Every three months, Tesla releases a safety report that provides the number of miles driven between collisions when drivers are using Autopilot, the company’s driver-assistance system, and when they are not.

These numbers consistently show that Autopilot, a collection of technologies that can steer, brake and accelerate a Tesla on its own, has a low accident rate.

But these numbers are misleading. Autopilot is used mainly for highway driving, which is generally twice as safe as driving on city streets, according to the Department of Transportation. Autopilot may be involved in fewer collisions simply because it is typically used in safer situations.
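A small, hypothetical calculation makes that confound concrete. In the Python sketch below, the per-mile crash risk on any given road type is identical whether the assistance system is on or off; the aggregate rates differ only because the assisted miles skew toward highways. All figures, the road-type split and the function name are invented for illustration and are not drawn from Tesla’s reports or any other real data.

```python
# Illustrative only: every number below is invented to show how road mix
# alone can make an aggregate crash-rate comparison look favorable.

def crashes_per_million_miles(miles_by_road, rate_by_road):
    """Aggregate crash rate (per 1M miles) given miles driven and per-road crash rates."""
    total_miles = sum(miles_by_road.values())
    total_crashes = sum(miles_by_road[road] * rate_by_road[road] / 1e6
                        for road in miles_by_road)
    return total_crashes / total_miles * 1e6

# Hypothetical per-road crash rates (crashes per million miles),
# identical whether or not the assistance system is switched on.
rate = {"highway": 1.0, "city": 2.0}  # city driving assumed twice as risky

# Hypothetical exposure: assisted miles skew heavily toward highways.
miles_system_on = {"highway": 900_000, "city": 100_000}
miles_system_off = {"highway": 400_000, "city": 600_000}

print(crashes_per_million_miles(miles_system_on, rate))   # -> ~1.1
print(crashes_per_million_miles(miles_system_off, rate))  # -> ~1.6
# The assisted miles look roughly 30 percent safer in aggregate even though,
# road for road, the crash risk is exactly the same with the system on or off.
```

In other words, an aggregate miles-per-collision figure can flatter a system simply because of where it is used, which is why researchers ask whether the cars being compared are driven on the same roads, at the same times, by the same drivers.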

Tesla has yet to provide data that would allow a comparison of Autopilot’s safety on comparable roads. Neither have other automakers that offer similar systems.

Autopilot has been used on public roads since 2015. GM launched Super Cruise in 2017, and Ford launched BlueCruise last year. But there is little publicly available data that reliably measures the safety of these technologies. American drivers—whether using these systems or sharing the road with them—are effectively guinea pigs in an experiment whose results have not yet been published.

Automakers and tech companies keep adding vehicle features that they claim improve safety, but those claims are hard to verify. Meanwhile, the death toll on the country’s highways and streets has been climbing in recent years, reaching a 16-year high in 2021. It seems that any additional safety provided by technological advances is not offsetting drivers’ bad decisions.

“There is a lack of data to convince the public that these systems will deliver the intended safety benefits when deployed,” said J. Christian Gerdes, a professor of mechanical engineering and co-director of Stanford University’s Center for Automotive Research, who was the first chief innovation officer at the Department of Transportation.

GM conducted a study in partnership with the University of Michigan to explore the potential safety benefits of Super Cruise but concluded that it did not have enough data to determine whether the system reduced crashes.

A year ago, the government’s auto safety regulator, the National Highway Traffic Safety Administration, ordered companies to report potentially serious crashes involving advanced driver-assistance systems like Autopilot within a day of learning about them. The order said the agency would make the reports public, but it has yet to do so.

The safety agency declined to comment on the information it has collected so far but said in a statement that the data would be released “in the near future.”

Tesla and its CEO Elon Musk did not respond to requests for comment. GM said it has reported two incidents involving Super Cruise to NHTSA: one in 2018 and one in 2020. Ford declined to comment.

The agency’s data is unlikely to provide the full picture of the situation, but it could encourage lawmakers and drivers to take a closer look at the technologies and ultimately change how they are marketed and regulated.

“To solve a problem, you first have to understand it,” said Bryant Walker Smith, an associate professor in the University of South Carolina’s law and engineering schools who specializes in emerging transportation technologies. “It’s a way to get more ground truth on which to base investigations, regulations and other actions.”

As capable as Autopilot is, it does not relieve the driver of responsibility. Tesla tells drivers to stay alert and be ready to take control of the car at all times. The same goes for BlueCruise and Super Cruise.

But many experts worry that these systems, because they enable drivers to relinquish active control of the car, could fool them into thinking their car is driving itself. Then, when the technology fails or cannot handle the situation on its own, the driver may not be ready to take control as quickly as needed.

Older technologies such as automatic emergency braking and lane departure warning have long provided a safety net for drivers, slowing or stopping the car or warning drivers when they drift out of their lane. But newer driver-assistance systems flip that arrangement, making the driver the safety net for the technology.

Safety experts are particularly concerned about Autopilot because of the way it is marketed. Musk has said for years that the company’s cars are on the verge of true autonomy — able to drive themselves in almost any situation. The system’s name also implies a degree of automation that the technology has not yet achieved.

This can lead to driver complacency. Autopilot has played a role in many fatal crashes, in some cases because drivers were not ready to take control of the car.

Musk has long promoted Autopilot as a way to improve safety, and Tesla’s quarterly safety reports appear to back him up. But a recent study from the Virginia Transportation Research Council, an arm of the Virginia Department of Transportation, shows that these reports are not what they seem.

“We know that cars with Autopilot crash less frequently than cars without Autopilot,” said Noah Goodall, a researcher at the council who studies the safety and operational issues of self-driving cars. “But are they being driven on the same roads, at the same time of day, by the same drivers?”

The Insurance Institute for Highway Safety, a nonprofit research group funded by the insurance industry, analyzed police and insurance data and found that older technologies such as automatic emergency braking and lane departure warning improved safety. But the group said studies have not yet shown that driver assistance systems can provide similar benefits.

Part of the problem is that police and insurance data don’t always indicate whether these systems were in use at the time of the accident.

The federal safety agency’s order requires companies to provide data on crashes in which driver-assistance technology was in use within 30 seconds of impact. That data could give a broader picture of how these systems are performing.

But even with that data, safety experts say, it will be hard to determine whether using these systems is safer than turning them off in the same situations.

The Alliance for Automotive Innovation, a trade group for auto companies, has warned that data from federal safety agencies could be misinterpreted or distorted. Some independent experts have expressed similar concerns.

“My biggest concern is that we will have detailed data on accidents involving these technologies without comparable data on accidents involving conventional cars,” said Matthew Wansley, a professor at the Cardozo School of Law in New York who was formerly general counsel at a self-driving car start-up called nuTonomy. “It could make these systems look much less safe than they actually are.”

For this and other reasons, automakers may be reluctant to share some data with the agency. Under its order, companies can ask the agency to withhold certain data by claiming that it would reveal trade secrets.

The agency is also collecting crash data on automated driving systems — more advanced technology designed to remove the driver from the car entirely. These systems are often referred to as “self-driving cars.”

For the most part, this technology is still being tested in a relatively small number of cars, with a driver behind the wheel as a backup. Waymo, a company owned by Google’s parent Alphabet, operates a driverless service in suburban Phoenix, and similar services are planned for cities like San Francisco and Miami.

In some states, companies are already required to report accidents involving self-driving systems. Data from the federal safety agency will cover the whole country and should provide more insight in this area as well.

But the more pressing issue is the safety of Autopilot and other driver assistance systems installed in hundreds of thousands of cars.

“There’s an open question: Is Autopilot increasing or reducing the frequency of crashes?” Mr. Wansley said. “We may not get the full answer, but we will get some useful information.”


