Yet, as Robert Lowell wrote, “No rocket goes as far astray as man.” In recent months, as outrage on Twitter and elsewhere has multiplied, Musk seems determined to squander most of the goodwill he has built up over his career. I asked Slavik, the plaintiffs’ attorney, whether the recent shift in public sentiment toward Musk has made his job in court easier. “I think at least now there are more people who doubt his judgment than before,” he said. “If I was on the other side, I’d be worried.”
Still, some of Musk’s most questionable decisions start to make sense if they are viewed as the result of straightforward utilitarian calculations. Last month, Reuters reported that Musk’s medical device company, Neuralink, had run experiments so hastily that it caused the unnecessary deaths of dozens of lab animals. Internal messages from Musk make it clear that the sense of urgency came from the top. “We just can’t move fast enough,” he wrote. “It’s driving me nuts!” The cost-benefit analysis must have seemed clear to him: He believed Neuralink had the potential to cure paralysis, which would improve the lives of millions of future humans. The suffering of a few animals was worth it.
This crude form of long-termism, in which the sheer size of future generations increases their moral weight, is even present in Musk’s statements about the Twitter acquisition. He has called Twitter a “digital town square” whose job it is to prevent a new American civil war. “I’m not here to make more money,” he wrote. “I do this to help the human race I love.”
Autopilot and FSD represent the pinnacle of this approach. “The number one goal of Tesla’s engineering,” Musk wrote, “is to maximize the area under the user’s happiness curve.” Unlike with Twitter or even Neuralink, people have died as a result of his decisions here. It doesn’t seem to matter. In 2019, in an email exchange with activist investor and staunch Tesla critic Aaron Greenspan, Musk bristled at the claim that Autopilot was not a life-saving technology. “The data clearly show that Autopilot is significantly safer than human drivers,” he wrote. “It would be immoral and wrong for you to say otherwise. To do so is to endanger the public.”
I wanted to ask Musk to elaborate on his risk philosophy, but he did not respond to my interview request. So I turned instead to the famous utilitarian philosopher Peter Singer to address some of the ethical issues involved. Is Musk right when he claims that anything that delays the development and adoption of self-driving cars is inherently unethical?
“I think he has a point,” Singer said, “if he’s right about the facts.”
Musk rarely talks about Autopilot or FSD without mentioning how superior they are to human drivers. At the August shareholder meeting, he said that Tesla “is solving a very important part of artificial intelligence, one that can ultimately save millions of lives and prevent tens of millions of serious injuries by driving an order of magnitude safer than humans.” Musk does have data to back this up: Since 2018, Tesla has released quarterly safety reports to the public that show a consistent advantage to using Autopilot. The most recent, from late 2022, says a Tesla with Autopilot enabled is one-tenth as likely to be involved in a collision as an average car.
That’s the argument Tesla will have to make to the public and to juries this spring. In the words of the company’s safety report: “While no car can prevent all accidents, we work every day to make accidents significantly less likely.” Autopilot may cause some crashes, the argument goes, but without the technology, there would be far more.