
The biggest companies in the tech industry have been warning for a year that artificial intelligence technology has grown beyond their wildest expectations and that they need to limit who can use it.
Mark Zuckerberg is doubling down on a different tack: He’s giving it away.
Meta’s chief executive, Mark Zuckerberg, said on Tuesday that he plans to make the code behind the company’s latest and greatest artificial intelligence technology freely available to developers and software enthusiasts around the world.
The decision is similar to one Meta made in February, and it could help the company catch up with rivals like Google and Microsoft, which have been quicker to incorporate generative artificial intelligence (the technology behind OpenAI’s popular ChatGPT chatbot) into their products.
“When software is open, more people can scrutinize it to identify and fix potential issues,” Zuckerberg said in a post on his personal Facebook page.
Meta’s latest version of the AI was trained on 40 percent more data than the version the company released a few months ago, and the company believes it to be far more powerful. Meta also provides a detailed roadmap of how developers are using the vast amounts of data it collects.
Researchers worry that generative artificial intelligence could increase the amount of disinformation and spam on the internet and pose dangers that even some of its creators don’t fully understand.
Meta is embracing the long-held belief that allowing a wide range of programmers to tinker with technology is the best way to improve it. Until recently, most AI researchers agreed on this point. But over the past year, companies like Google and OpenAI, a San Francisco startup that works closely with Microsoft, have placed restrictions on who can use their latest technology and controls on what can be done with it.
The companies say they restrict access out of safety concerns, but critics say they are also trying to stifle competition. Meta believes it is in everyone’s best interest to share what it is working on.
“Meta has always been a big proponent of open platforms, and it’s worked really well for our company,” Ahmad Al-Dahle, Meta’s vice president of generative AI, said in an interview.
The move makes the software “open source,” meaning computer code that can be freely copied, modified and reused. The technology, called LLaMA 2, provides everything anyone needs to build an online chatbot like ChatGPT. LLaMA 2 will be released under a commercial license, meaning developers can build their own businesses on top of Meta’s underlying artificial intelligence, all for free.
By open-sourcing LLaMA 2, Meta can take advantage of improvements made by programmers outside the company while also, Meta executives hope, spurring AI experimentation.
Meta’s open source approach is not new. Companies often open-source technology to catch up with competitors. Fifteen years ago, Google open-sourced its Android mobile operating system to better compete with Apple’s iPhone. While the iPhone had an early lead, Android eventually became the dominant software used in smartphones.
But researchers worry that someone could deploy Meta’s artificial intelligence without the safeguards that tech giants like Google and Microsoft often use to curb toxic content. Newly released open-source models could, for example, be used to flood the internet with more spam, financial scams and disinformation.
LLaMA 2, short for Large Language Model Meta AI, is what scientists call a large language model, or LLM. Chatbots like ChatGPT and Google Bard are built on large language models.
These models are systems that learn skills by analyzing large amounts of digital text, including Wikipedia articles, books, online forum conversations, and chat logs. By pinpointing patterns in text, these systems learn to generate their own text, including term papers, poems and computer code. They can even have a conversation.
Meta is working with Microsoft to open-source LLaMA 2, which will run on Microsoft’s Azure cloud service. LLaMA 2 will also be available through other providers, including Amazon Web Services and Hugging Face.
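For developers, access typically means pulling the released weights from one of those hosts and running them locally or in the cloud. A minimal sketch of that workflow, assuming the Hugging Face transformers library and a LLaMA 2 checkpoint whose license terms have already been accepted (the repository name below is illustrative):

    # Sketch: load an open LLaMA 2 checkpoint and generate text with it.
    # Assumes the `transformers` and `torch` packages are installed and that
    # access to the gated model repository has been granted under Meta's license.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "meta-llama/Llama-2-7b-chat-hf"  # illustrative checkpoint name
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)

    prompt = "Explain what a large language model is in one sentence."
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=60)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))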
Dozens of Silicon Valley technologists have signed a statement of support. Signatories include the venture capitalist Reid Hoffman and executives from Nvidia, Palo Alto Networks, Zoom and Dropbox.
Meta isn’t the only company pushing open-source AI projects. The Technology Innovation Institute produced the Falcon LLM this year and released its code for free. MosaicML also offers open-source software for training LLMs.
Meta executives believe their strategy is not as risky as many believe. They say people can already generate massive amounts of disinformation and hate speech without the use of artificial intelligence, and that such toxic material can be heavily curbed by social networks like Meta’s own Facebook. They believe releasing the technology could ultimately enhance the ability of Meta and other companies to fight back against abuse of the software.
Mr. Al-Dahle said Meta conducted additional “red team” testing of LLaMA 2 before releasing it, a term for probing software for potential abuse and finding ways to prevent it. The company will also publish a responsible-use guide with best practices and guidelines for developers looking to build programs with the code.
But those tests and guidelines apply to only one of the models Meta is releasing, which has been trained and fine-tuned to include guardrails that curb misuse. Developers can also use the code to create chatbots and programs without guardrails, a move skeptics say is risky.
In February, Meta released the first version of LLaMA to academics, government researchers and others. The company also allowed those researchers to download LLaMA models after they had been trained on vast amounts of digital text. Scientists call this process “releasing the weights.”
It’s a notable move because analyzing all that digital data requires massive computational and financial resources. With the weights in hand, anyone can build a chatbot far more cheaply and easily than by starting from scratch.
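That is the practical significance of releasing the weights: a working chatbot becomes little more than a prompt loop around a pre-trained model, rather than a training run costing millions of dollars in compute. A rough sketch, again assuming the Hugging Face transformers library and an illustrative checkpoint name:

    # Sketch: a bare-bones chatbot built on released weights instead of training
    # from scratch. The checkpoint name is illustrative; real use requires
    # accepting Meta's license and substantial GPU memory.
    from transformers import pipeline

    chat = pipeline("text-generation", model="meta-llama/Llama-2-7b-chat-hf")

    history = "You are a helpful assistant."
    while True:
        user = input("You: ")
        history += f"\nUser: {user}\nAssistant:"
        # The pipeline returns the prompt plus the newly generated continuation.
        full = chat(history, max_new_tokens=120)[0]["generated_text"]
        reply = full[len(history):].strip()
        print("Bot:", reply)
        history = full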
Many in the tech industry believe Meta set a dangerous precedent in February, when it shared its artificial intelligence technology with a small group of academics, one of whom leaked it onto the public internet.
In a recent opinion piece in the Financial Times, Nick Clegg, Meta’s president of global affairs, argued that “it’s not sustainable to keep foundational technology in the hands of only a few large companies,” and that companies that released open-source software in the past have also been served strategically by doing so.
“I’m looking forward to seeing what you build!” Mr. Zuckerberg said in his post.