June 6, 2023

In March, two Google employees whose jobs were to review the company’s artificial intelligence products tried to stop it from launching an AI chatbot, arguing that it produced inaccurate and dangerous statements.

Ten months earlier, Microsoft ethicists and other employees had raised similar concerns. The artificial intelligence technology behind the planned chatbot could flood Facebook groups with disinformation, cripple critical thinking and erode the factual foundation of modern society, they wrote in several documents.

The companies released their chatbots anyway. Microsoft went first, holding a high-profile event in February to show off an AI chatbot integrated into its Bing search engine. About six weeks later, Google released Bard, its own chatbot.

The typically risk-averse companies made these aggressive moves to win the race for what could be the tech industry’s next big thing: generative artificial intelligence, the powerful new technology that powers these chatbots.

The competition became fierce last November when OpenAI, a San Francisco startup in partnership with Microsoft, released ChatGPT, a chatbot that captured the public’s imagination and now has an estimated 100 million monthly users.

According to 15 current and former employees and internal company documents, ChatGPT’s stunning success led to a willingness at Microsoft and Google to take greater risks with the ethical guidelines they had developed over the years to ensure their technology does not cause social problems.

An internal email sent last month by Sam Schillace, a technology executive at Microsoft, made clear the urgency of building with the new artificial intelligence. It would be an “absolutely fatal error in this moment to worry about things that can be fixed later,” he wrote in the email, which was seen by The New York Times.

When the tech industry suddenly pivots to a new technology, the first company to launch “is the long-term winner because they started first,” he wrote. “Sometimes the difference is in weeks.”

Tensions between the industry’s worriers and risk-takers played out publicly last week when more than 1,000 researchers and industry leaders, including Elon Musk and Apple co-founder Steve Wozniak, called for a six-month pause on the development of powerful artificial intelligence technology. In an open letter, they said it posed “profound risks to society and humanity.”

Regulators have threatened to intervene. The European Union has proposed legislation to regulate artificial intelligence, and Italy temporarily banned ChatGPT last week. In the United States, President Biden on Tuesday became the latest official to question the safety of artificial intelligence.

“Tech companies have a responsibility to make sure their products are safe before they go public,” he said at the White House. Asked whether AI was dangerous, he said: “It remains to be seen. Possibly.”

The questions being raised now were once the kinds of concerns that prompted some companies to sit on new technology. They had learned that releasing AI too early could be embarrassing. Seven years ago, for example, Microsoft quickly pulled a chatbot called Tay after users prodded it into generating racist responses.

Microsoft and Google are taking risks by releasing technology that even their developers don’t entirely understand, researchers say. But the companies said they had limited the scope of the initial releases of their new chatbots, and they had built sophisticated filtering systems to weed out hate speech and content that could cause obvious harm.

Microsoft’s six years of work around AI and ethics has allowed the company to “act nimbly and thoughtfully,” Natasha Crampton, Microsoft’s chief responsible AI officer, said in an interview. She added, “Our commitment to responsible AI remains steadfast.”

Bard’s release came after years of internal dissent at Google over whether the benefits of generative artificial intelligence outweighed the risks. The company announced a similar chatbot, Meena, in 2020, but that system was deemed too risky to release, said three people with knowledge of the process. Those concerns were reported earlier by The Wall Street Journal.

Later in 2020, Google blocked its top ethical AI researchers, Timnit Gebru and Margaret Mitchell, from publishing a paper warning that the so-called large language models used in the new AI systems, which are trained to recognize patterns in vast amounts of data, could spit out abusive or discriminatory language. The researchers were pushed out: Dr. Gebru after she criticized the company’s diversity efforts, and Dr. Mitchell after she was accused of violating the company’s code of conduct by saving some work emails to a personal Google Drive account.

Dr. Mitchell said she had tried to help Google release products responsibly and avoid regulation, but “they really shot themselves in the foot”.

“We continue to make responsible AI a top priority, using our artificial intelligence principles and internal governance structures to responsibly share AI advances with our users,” Google said in a statement.

Concerns about larger models remain. In January 2022, Google refused to allow another researcher, El Mahdi El Mhamdi, to publish a critical paper.

Dr. El Mhamdi, a part-time employee and a university professor, used mathematical theorems to warn that the biggest AI models are more vulnerable to cybersecurity attacks and present unusual privacy risks because they have likely had access to private data stored in various locations around the internet.

While a later executive presentation warned of similar AI privacy violations, Google reviewers demanded that Dr. El Mhamdi make substantive changes. He declined and published the paper through École Polytechnique.

He resigned from Google this year, in part because of “research censorship.” The risks of modern AI “substantially outweigh” the benefits, he said. “It’s a premature deployment,” he added.

After ChatGPT was released, Kent Walker, Google’s top lawyer, met with research and safety executives on the company’s powerful Advanced Technology Review Council. He told them that Sundar Pichai, Google’s chief executive, was pushing hard to release Google’s AI.

Jen Gennai, the head of Google’s responsible innovation group, was in that meeting. She later recounted to her own staff what Mr. Walker had said.

The meeting was “Kent speaking in front of the ATRC executives, telling them, ‘This is the company’s top priority,’” Ms. Gennai said in a recording seen by The New York Times. “‘What concerns do you have? Let’s get in line.’”

Ms. Gennai said Mr. Walker told attendees to speed up AI projects, even as some executives said they would maintain safety standards.

Her team had already documented concerns about chatbots: they could produce disinformation, harm users who became emotionally attached to them, and enable “technology-facilitated violence” through mass harassment online.

In March, two reviewers from Ms. Gennai’s team submitted their risk assessment of Bard. Two people familiar with the process said the reviewers recommended blocking its imminent release. Despite the safeguards, they believed the chatbot was not ready.

Ms. Gennai changed that document. She took out the recommendation and played down the seriousness of Bard’s risks, the people said.

In an email to The Times, Ms. Gennai said that because Bard was an experiment, reviewers were not supposed to weigh in on whether it should proceed. She said she “corrected inaccurate assumptions and actually added more risks and harms to consider.”

Because of those debates, Google said, it released Bard as a limited experiment, and Ms. Gennai said continued training, guardrails and disclaimers made the chatbot safer.

Google released Bard to some users on March 21. The company says it will soon integrate generative AI into its search engine.

Microsoft’s chief executive, Satya Nadella, made a bet on generative artificial intelligence in 2019 when Microsoft invested $1 billion in OpenAI. After deciding the technology was ready over the summer, Mr. Nadella pushed every Microsoft product team to adopt AI.

Five current and former employees said Microsoft’s policies were set by its Office of Responsible AI, which Ms. Crampton heads, but the guidelines were not consistently enforced or followed.

Despite having a “transparency” principle, ethics experts who study chatbots said their questions about what data OpenAI used to develop its systems went unanswered, according to three people involved in the effort. Some believed that integrating a chatbot into a search engine was a particularly bad idea because it sometimes served up untrue details, a person with direct knowledge of the conversations said.

Ms. Crampton said Microsoft experts were working on Bing, and key individuals had access to the training data. She added that the company is working to make chatbots more accurate by linking them to Bing search results.

Last fall, Microsoft started to break up one of its largest technology ethics teams. The group, Ethics and Society, trained and consulted company product leaders to design and build responsibly. In October, most of its members were spun off to other teams, according to four people familiar with the group.

The few people who remained joined daily meetings with the Bing team, which was racing to launch the chatbot. John Montgomery, an AI executive, told them in a December email that their work remained critical and that more teams “need our help too.”

After the AI-powered Bing was introduced, the ethics team documented lingering concerns. Users might become overly dependent on the tool. Inaccurate answers could mislead users. People could believe the chatbot, which uses “I” and emojis, was human.

In mid-March, the team was laid off, an action that was first reported by Platformer, a tech newsletter. But Ms. Crampton said hundreds of employees were still working on ethics efforts.

Microsoft releases new products every week, a plan that Mr. Nadella kicked off when he previewed OpenAI’s latest models over the summer.

He asked the chatbot to translate the Persian poet Rumi into Urdu and write it in English characters. “It worked like a charm,” he said in an interview in February. “And then I said, ‘God, this thing.'”

Mike Isaac contributed reporting. Susan Beachy contributed research.


