
Last December, Elon Musk grew outraged over a development involving artificial intelligence and made up his mind to act.
He had learned about the relationship between OpenAI, the startup behind the popular chatbot ChatGPT, and Twitter, which he had acquired for $44 billion in October. OpenAI was licensing Twitter’s data — a feed of every tweet — for about $2 million a year to help build ChatGPT, two people familiar with the matter said. Mr. Musk believed the AI startup wasn’t paying Twitter enough, they said.
So Mr. Musk cut off OpenAI’s access to Twitter’s data, they said.
Since then, Mr. Musk has stepped up his own AI activities while publicly warning about the technology’s dangers. He is in talks with Jimmy Ba, a researcher and professor at the University of Toronto, about starting a new artificial intelligence company called X.AI, three people familiar with the matter said. He has hired top AI researchers from Google’s DeepMind to work at Twitter. And he has spoken repeatedly about creating a ChatGPT competitor that would generate politically charged material without restrictions.
These actions are part of Mr. Musk’s long and complicated history with AI, shaped by his conflicting views on whether the technology will ultimately benefit or destroy humanity. Even as he gets his own AI project off the ground, he signed an open letter last month calling for a six-month moratorium on the technology’s development because it “poses a profound risk to society.”
Despite his criticism of OpenAI and his plans to compete with it, Mr. Musk helped found the nonprofit AI lab in 2015. He has since said he is disillusioned with OpenAI because it no longer operates as a nonprofit and is building technology that, in his view, takes sides in political and social debates.
Mr. Musk’s approach to artificial intelligence comes down to doing it himself. The 51-year-old billionaire, who runs both the electric carmaker Tesla and the rocket company SpaceX, has long argued that his own AI efforts would offer a better, safer alternative to those of his rivals, according to people who have discussed the matter with him.
“He thinks AI is going to be a big tipping point, and if it’s not managed well, it’s going to be catastrophic,” said Anthony Aguirre, a theoretical cosmologist at the University of California, Santa Cruz, and a founder of the Future of Life Institute, the organization behind the open letter. “Like many others, he wondered: What are we going to do about this?”
Mr. Musk and Mr. Ba, who is known for creating a popular algorithm used to train artificial intelligence systems, did not respond to requests for comment. Their discussions are continuing, the three people familiar with the matter said.
Hannah Wong, a spokeswoman for OpenAI, said that while the company now generates profits for investors, it is still governed by a nonprofit and its profits are capped.
Mr. Musk’s roots in AI go back to 2011. At the time, he was an early investor in DeepMind, a London startup that set out in 2010 to build artificial general intelligence (AGI), a machine capable of doing anything the human brain can do. Less than four years later, Google acquired the 50-person company for $650 million.
At a 2014 aerospace event at MIT, Mr. Musk indicated that he was hesitant to build AI himself.
“I think we need to be very careful with AI,” he said in response to an audience question. “With artificial intelligence, we are summoning the demon.”
That winter, the Future of Life Institute, which explores existential risks to humanity, organized a private conference in Puerto Rico focused on the future of artificial intelligence. Mr. Musk gave a speech there arguing that artificial intelligence could cross into dangerous territory without anyone realizing it, and announced that he would help fund the institute. He donated $10 million.
In the summer of 2015, Mr. Musk met privately with several AI researchers and entrepreneurs over dinner at the Rosewood Hotel in Menlo Park, California, a spot known for Silicon Valley dealmaking. By the end of that year, he and several others who attended the dinner — including Sam Altman, then president of the startup incubator Y Combinator, and Ilya Sutskever, a top AI researcher — had co-founded OpenAI.
OpenAI was set up as a nonprofit, and Mr. Musk and the other founders pledged to donate $1 billion. The lab vowed to “open source” all of its research, meaning it would share its underlying software code with the world. Mr. Musk and Mr. Altman argued that the threat of harmful AI would be lessened if everyone, not just tech giants such as Google and Facebook, had access to the technology.
But as OpenAI began building the technology that would lead to ChatGPT, many in the lab realized that openly sharing its software could be dangerous. Using artificial intelligence, individuals and organizations could potentially generate and spread disinformation faster and more effectively than they otherwise could. Many OpenAI employees came to believe the lab should keep some of its ideas and code from the public.
In 2018, Mr. Musk resigned from OpenAI’s board, in part because of his growing conflict of interest with the lab, two people familiar with the matter said. At the time, he was building his own AI project at Tesla: Autopilot, the driver-assistance technology that automatically steers, accelerates and brakes cars on highways. To that end, he poached a key employee from OpenAI.
In a recent interview, Mr. Altman declined to discuss Mr. Musk specifically, but said Mr. Musk’s breakup with OpenAI was one of many splits at the company over the years.
“There’s disagreement, mistrust, egos,” Mr. Altman said. “The closer people are to being pointed in the same direction, the more contentious the disagreements are. You see this in sects and religious orders. There are bitter fights between the closest people.”
Since ChatGPT debuted in November, Mr. Musk has become increasingly critical of OpenAI. “We don’t want this to be some kind of profit-maximizing demon from hell, you know,” he said in an interview last week with Tucker Carlson, the former Fox News host.
Mr. Musk has repeatedly complained that AI is dangerous, even as he has hastened his own efforts to build it. At a Tesla investor event last month, he called on regulators to protect society from AI, even though his car company has used AI systems to push the boundaries of self-driving technologies that have been involved in fatal accidents.
That same day, Mr. Musk suggested in a tweet that Twitter would use its own data to train technology along the lines of ChatGPT. Twitter has hired two researchers from DeepMind, two people familiar with the matter said. The Information and Insider earlier reported the hires and details of Twitter’s AI efforts.
In the interview with Mr. Carlson last week, Mr. Musk said OpenAI was no longer serving as a check on the power of the tech giants. He said he wanted to build TruthGPT, which he described as “a maximally truth-seeking artificial intelligence that tries to understand the nature of the universe.”
Last month, Mr. Musk registered X.AI. The startup is incorporated in Nevada, according to a registration document that also lists Mr. Musk and his financial manager, Jared Birchall, as the company’s executives. The documents were reported earlier by The Wall Street Journal.
Experts who have discussed AI with Mr. Musk believe his concerns about the technology’s dangers are genuine, even as he builds the technology himself. Others said his stance was informed by other motives, most notably a desire to promote and profit from his companies.
“He says robots are going to kill us?” said Ryan Calo, a University of Washington law professor who has attended AI events alongside Mr. Musk. “A car made by his company has already killed somebody.”