Meta open-sources its AI technology. Rivals say it’s a risky decision.
In February, Meta made an unusual move in the rapidly evolving world of artificial intelligence: It decided to give away its AI crown jewels.
The Silicon Valley giant that owns Facebook, Instagram and WhatsApp has created an artificial intelligence technology, known as LLaMA, that can power online chatbots. But instead of keeping the technology secret, Meta has made the system’s underlying computer code publicly available. Academics, government researchers and others who gave their email addresses to Meta could download the code after the company vetted the individuals.
Essentially, Meta offers its AI technology as open source software — computer code that can be freely copied, modified, and reused — to give outsiders everything they need to quickly build their own chatbots.
“The winning platform will be the open platform,” Yann LeCun, Meta’s chief artificial intelligence scientist, said in an interview.
As the race to lead in artificial intelligence heats up across Silicon Valley, Meta is differentiating itself from competitors by taking a different approach to the technology. Pushed by its founder and CEO, Mark Zuckerberg, Meta decided it would be wiser to share its underlying AI engines as a way to spread its influence and, ultimately, move faster into the future.
Its actions stand in stark contrast to Google and OpenAI, which are leading a new AI arms race. The companies have become increasingly secretive about the methods and software underpinning their artificial intelligence products amid concerns that artificial intelligence tools such as chatbots could be used to spread disinformation, hate speech and other toxic content.
Google, OpenAI and others have been critical of Meta, saying an unfettered open-source approach is dangerous. The rapid rise of artificial intelligence in recent months has sounded alarm bells about the risks of the technology, including the potential to upend the job market if deployed incorrectly. Within days of LLaMA’s release, the system was leaked to 4chan, an online message board known for spreading false and misleading information.
“We want to think more carefully about giving up the details or open source code” of AI technology, said Zoubin Ghahramani, Google’s research vice president who helps oversee AI efforts. “Will this lead to abuse?”
Some inside Google also wondered whether open-sourcing AI technology would pose a competitive threat. In a memo this month that was leaked to the online publication SemiAnalysis, a Google engineer warned colleagues that the rise of open-source software like LLaMA could cause Google and OpenAI to lose their lead in AI.
But Meta said it sees no reason to keep its code private. The growing secrecy at Google and OpenAI is a “huge mistake,” Dr. LeCun said, and reflects “a very bad view of what’s going on.” He argues that consumers and governments will refuse to embrace AI unless it is outside the control of companies like Google and Meta.
“Do you want every AI system to be under the control of a few powerful American companies?” he asked.
OpenAI declined to comment.
Meta’s open-source approach to AI is not new. The history of technology is littered with battles between open-source and proprietary, or closed, systems. Some companies have hoarded the most important tools for building the computing platforms of the future, while others have given those tools away. Most recently, Google open-sourced its Android mobile operating system to challenge Apple’s dominance in smartphones.
Many companies have publicly shared their AI technologies in the past, at the insistence of researchers. But their tactics are changing because of the competition around AI. The shift began last year when OpenAI released ChatGPT. The chatbot’s massive success wowed consumers and sparked a race in artificial intelligence, with Google moving quickly to incorporate more AI into its products and Microsoft investing $13 billion in OpenAI.
While Google, Microsoft, and OpenAI have since gotten most of the attention in the AI space, Meta has also been investing in the technology for nearly a decade. The company has spent billions of dollars building the software and hardware needed to enable chatbots and other “generative artificial intelligence” that can generate text, images and other media on their own.
In recent months, Meta has been working furiously behind the scenes to weave its years of AI research and development into new products. Mr. Zuckerberg is focused on making the company an AI leader, holding weekly meetings with his executive team and product leaders on the topic.
On Thursday, as part of its commitment to artificial intelligence, Meta said it had designed a new computer chip and improved a new supercomputer specifically built for developing AI technology. It is also designing a new computer data center with an eye toward creating artificial intelligence.
“We’ve been building advanced infrastructure for AI for years, and this work reflects long-term efforts that will lead to greater advancements and better use of this technology in everything we do,” said Mr. Zuckerberg.
Meta’s biggest AI move in recent months has been the release of LLaMA, which stands for “Large Language Model Meta AI.” LLaMA is a large language model, or LLM: a system that learns skills by analyzing large amounts of text, including books, Wikipedia articles and chat logs. ChatGPT and Google’s Bard chatbot are also built on such systems.
LLMs identify patterns in the text they analyze and learn to generate text of their own, including term papers, blog posts, poetry and computer code. They can even hold complex conversations.
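The core idea — learning which words tend to follow which from example text, then generating new text from those learned patterns — can be illustrated with a deliberately tiny sketch. This is a toy bigram model, not Meta’s code; real LLMs like LLaMA use neural networks with billions of parameters, but the pattern-then-generate loop is conceptually similar:

```python
from collections import defaultdict

def train_bigram(text):
    """Count, for each word, which words tend to follow it."""
    counts = defaultdict(lambda: defaultdict(int))
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def generate(counts, start, length=5):
    """Generate text by repeatedly picking the most common next word."""
    out = [start]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break
        out.append(max(followers, key=followers.get))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran"
model = train_bigram(corpus)
print(generate(model, "the"))  # → "the cat sat on the cat"
```

The toy model has a handful of counts where LLaMA has billions of learned values, but both are, at bottom, statistics extracted from text and replayed to produce new text.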
In February, Meta released LLaMA publicly, allowing academics, government researchers, and others who provide their email addresses to download the code and use it to build their own chatbots.
But the company has gone further than many other open-source AI projects: it allows people to download a version of LLaMA after it has been trained on vast amounts of digital text pulled from the internet. Researchers call this “releasing the weights,” referring to the specific mathematical values the system learns as it analyzes data.
That’s important because analyzing all that data typically requires hundreds of specialized computer chips and tens of millions of dollars, resources most companies don’t have. Those with the weights can deploy the software quickly, easily and cheaply, spending a fraction of what it would otherwise cost to create such powerful software.
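Why releasing weights matters can be shown with a minimal sketch. In the toy below (an illustration, not Meta’s release format), the expensive step is training — iterating over data to learn a numeric weight — while anyone given the saved weight can skip that step and use the model immediately:

```python
import json

def train(data, lr=0.05, steps=500):
    """The expensive step: learn weight w for y ≈ w * x by gradient descent."""
    w = 0.0
    for _ in range(steps):
        for x, y in data:
            w -= lr * 2 * (w * x - y) * x  # gradient of squared error
    return w

data = [(1, 3), (2, 6), (3, 9)]  # examples of y = 3x
w = train(data)

# "Releasing the weights": publish the learned values...
with open("weights.json", "w") as f:
    json.dump({"w": w}, f)

# ...so others can load and use them without redoing the training.
with open("weights.json") as f:
    w_loaded = json.load(f)["w"]
print(round(w_loaded * 4))  # predict y for x = 4  → 12
```

For LLaMA the trained values number in the billions and training consumes thousands of chip-hours, but the asymmetry is the same: learning the weights is the costly part, and whoever holds them gets the finished capability for nearly nothing.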
As a result, many in the tech industry see Meta as setting a dangerous precedent. Within days, someone posted the LLaMA weights on 4chan.
At Stanford, researchers used Meta’s new technology to build their own AI system, which was published on the internet. According to screenshots seen by The New York Times, a Stanford researcher named Moussa Doumbouya quickly used it to generate questionable text. In one instance, the system provided instructions for disposing of a dead body without getting caught. It also produced racist material, including comments supporting the views of Adolf Hitler.
In private chats among the researchers, seen by The Times, Mr. Doumbouya said distributing the technology to the public was like “a hand grenade that everyone can buy in the grocery store.” He did not respond to a request for comment.
Stanford immediately removed the AI system from the internet. Tatsunori Hashimoto, the Stanford professor who led the project, said it was meant to give researchers technology that “captures the behavior of cutting-edge AI models.” “We canceled the demo as we became increasingly concerned about the potential for abuse outside of a research setting,” he said.
According to Dr. LeCun, the technology is not as dangerous as it might seem. A small number of people can already create and spread disinformation and hate speech, he said, and toxic material can be heavily restricted by social networks such as Facebook.
“You can’t stop people from creating nonsense or dangerous information or whatever,” he said. “But you can stop it from spreading.”
For Meta, more people using its open-source software could also level the playing field as it competes with OpenAI, Microsoft and Google. If every software developer in the world built programs using Meta’s tools, it could help entrench the company in the next wave of innovation and stave off potential irrelevance.
Dr. LeCun also points to recent history to explain why Meta is committed to open-source AI technology. The growth of the consumer internet, he said, was the result of open, common standards that helped create the world’s fastest and broadest knowledge-sharing network.
“When it’s open, progress is faster,” he said. “You have a more vibrant ecosystem where everyone can contribute.”