April 18, 2024

When ChatGPT became popular as a tool for drafting complex texts using artificial intelligence, David Rozado decided to test its potential bias. A data scientist in New Zealand, he administered a series of quizzes to the chatbot, looking for signs of political leanings.

The results, published in a recent paper, were remarkably consistent across more than a dozen tests: “liberal,” “progressive,” “democratic.”

So he built his own modified version, training it to answer questions with a distinctly conservative lean. He called the experiment RightWingGPT.

As his experiment shows, AI has become another front in the political and cultural wars in the United States and elsewhere. Even as tech giants scramble to join the commercial boom sparked by the release of ChatGPT, they face fraught debates about the use, and potential abuse, of artificial intelligence.

The technology’s ability to create content that hews to predetermined ideological views, or to spread disinformation, underscores a danger that some tech executives have begun to acknowledge: that competing chatbots offering different versions of reality could create a cacophony of information, undermining AI as a tool in everyday life and further eroding trust in society.

“This is not a hypothetical threat,” said Oren Etzioni, an advisor and board member at the Allen Institute for Artificial Intelligence. “It’s an imminent threat.”

Conservatives have accused ChatGPT’s creator, the San Francisco company OpenAI, of designing a tool that, they say, reflects the liberal values of its programmers.

For example, the program wrote an anthem praising President Biden but declined to write a similar poem about former President Donald J. Trump, citing a desire to remain neutral. ChatGPT also told a user that using a racial slur would be “absolutely unacceptable morally,” even in a hypothetical scenario in which doing so could have stopped a devastating nuclear bomb.

In response, some critics of ChatGPT have called for the creation of their own chatbots or other tools that reflect their values.

Elon Musk, who helped found OpenAI in 2015 before departing three years later, has accused ChatGPT of being “woke” and pledged to build his own version.

Gab, an openly Christian nationalist social network that has become a hub for white supremacists and extremists, has promised to release AI tools with “the ability to generate content freely without the constraints of liberal propaganda wrapped tightly around its code.”

“Silicon Valley is investing billions of dollars to build these liberal guardrails to neuter the AI into forcing their worldview on users and present it as ‘reality’ or ‘fact,’” Andrew Torba, Gab’s founder, said in a written response to questions.

He equated AI with a new information arms race that, like the advent of social media, conservatives need to win. “This time we are not going to let our enemies get the keys to the kingdom,” he said.

The breadth of ChatGPT’s underlying data can give the false impression that it is an unbiased summation of the entire internet. The version released last year was trained on 496 billion “tokens” (essentially word fragments) drawn from websites, blog posts, books, Wikipedia articles and more.
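To make the notion of a “token” concrete, the short sketch below splits a sentence into word fragments. It is purely illustrative; the open-source tiktoken library and the encoding chosen here are assumptions for the demo, not a claim about the exact tokenizer behind the model described above.

```python
# Illustrative only: break a sentence into "tokens" (word fragments) using the
# open-source tiktoken library. The cl100k_base encoding is an assumed choice
# for demonstration, not necessarily the tokenizer used for the model above.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
ids = enc.encode("Chatbots learn language from word fragments.")

print(ids)                              # one integer id per token
print([enc.decode([i]) for i in ids])   # the word fragments those ids stand for
```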

However, bias can creep into large language models at any stage: humans select sources, develop the training process, and tune their responses. Each step pushes the model and its political orientation in a particular direction, consciously or not.

Research papers, investigations and lawsuits have suggested that AI-powered tools carry a gender bias that censors images of women’s bodies, create disparities in health care delivery, and discriminate against job applicants who are older, Black, disabled or even wear glasses.

“Bias is neither new nor unique to AI,” the National Institute of Standards and Technology, part of the Commerce Department, said in a report last year, concluding that it was “impossible to achieve zero risk of bias in an AI system.”

China has banned the use of tools resembling ChatGPT out of concern that they could expose citizens to facts or ideas contrary to the Communist Party’s line.

ChatYuan, one of the earliest ChatGPT-like apps in China, was suspended by authorities a few weeks after its release last month; it is now “under maintenance,” said Xu Liang, the tool’s creator. According to screenshots published by Hong Kong news outlets, the bot had referred to the war in Ukraine as a “war of aggression,” a departure from the Chinese Communist Party’s more sympathetic posture toward Russia.

Baidu, one of China’s tech giants, unveiled its answer to ChatGPT, called Ernie, on Thursday to mixed reviews. Like all media companies in China, Baidu regularly faces government scrutiny, and it remains to be seen how that will affect Ernie’s use.

In the United States, Brave, a browser company whose chief executive has cast doubt on the Covid-19 pandemic and donated money to oppose same-sex marriage, added an AI bot to its search engine this month that is capable of answering questions. At times, it sources content from fringe websites and shares misinformation.

For example, Brave’s tool writes that “it is widely believed that the 2020 presidential election was rigged,” despite all evidence to the contrary.

“We try to provide information that best matches our users’ queries,” Brave’s director of search, Josep M. Pujol, wrote in an email. “What users do with that information is their choice. We see search as a way to discover information, not as a provider of truth.”

In creating RightWingGPT, Mr. Rozado, an associate professor at Te Pūkenga-New Zealand Institute of Skills and Technology, made his own influence on the model more apparent.

He used a process called fine-tuning, in which programmers take an already trained model and tweak it to produce different outputs, almost like layering a personality on top of the language model. Mr. Rozado gathered a large collection of right-leaning responses to political questions and asked the model to adjust its answers to match.

Fine-tuning is typically used to modify a large model so it can handle more specialized tasks, such as training a general-purpose language model on complex legal jargon so it can draft court documents.

Because the process requires relatively little data (Mr. Rozado used only about 5,000 data points to turn an existing language model into RightWingGPT), independent programmers can use the technique as a fast track to creating chatbots aligned with their political goals.

It also allowed Mr. Rozado to bypass the huge investment of creating a chatbot from scratch. Instead, he only spent about $300.
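As a rough sketch of what such fine-tuning can look like in code, the hypothetical example below uses the Hugging Face transformers library and a small open GPT-2 model; it is not Mr. Rozado’s actual pipeline, whose tools and settings are not described here.

```python
# Hypothetical fine-tuning sketch: adapt a small pretrained model to a custom
# set of question/answer pairs. Library, model, dataset contents and
# hyperparameters are illustrative assumptions, not the actual setup.
import torch
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token          # GPT-2 has no pad token
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Each example pairs a question with the response the tuned model should give.
# In practice this would be a few thousand such pairs, not two.
examples = [
    "Q: What drives economic growth?\nA: Free markets and limited regulation.",
    "Q: How should energy policy be set?\nA: By prioritizing affordable domestic production.",
]

class PairDataset(torch.utils.data.Dataset):
    def __init__(self, texts):
        self.items = [tokenizer(t, truncation=True, max_length=128,
                                padding="max_length", return_tensors="pt")
                      for t in texts]

    def __len__(self):
        return len(self.items)

    def __getitem__(self, i):
        ids = self.items[i]["input_ids"].squeeze(0)
        mask = self.items[i]["attention_mask"].squeeze(0)
        labels = ids.clone()
        labels[mask == 0] = -100                   # ignore padding in the loss
        return {"input_ids": ids, "attention_mask": mask, "labels": labels}

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="tuned-model",
                           num_train_epochs=3,
                           per_device_train_batch_size=2),
    train_dataset=PairDataset(examples),
)
trainer.train()          # nudges the base model's answers toward the new data
trainer.save_model("tuned-model")
```

The base model keeps its general language ability; the small dataset simply nudges its answers toward the desired framing, which is why the approach costs so much less than training from scratch.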

Mr. Rozado warned that bespoke AI chatbots could create an “information bubble on steroids” because people might come to trust them as “the ultimate source of truth,” especially when they reinforce someone’s existing political views.

His model echoes the talking points of political and social conservatives with considerable candor. It will, for example, speak glowingly about free-market capitalism or downplay the consequences of climate change.

It also sometimes makes incorrect or misleading statements. When asked to comment on sensitive topics or right-wing conspiracy theories, it shares misinformation aligned with right-wing thinking.

ChatGPT tends to tread carefully when asked about race, gender or other sensitive topics, but it will acknowledge that systemic racism and bias are an intractable part of modern life. RightWingGPT appeared far less willing to do so.

Mr. Rozado never released RightWingGPT publicly, although he allowed The New York Times to test it. The point of the experiment, he said, was to sound the alarm about potential bias in AI systems and to show how easily political groups and companies could shape AI to benefit their own agendas.

Experts who work on artificial intelligence say Mr. Rozado’s experiment demonstrates how quickly politicized chatbots can emerge.

A spokesperson for OpenAI, the creator of ChatGPT, acknowledged that language models may inherit biases during training and refining — a technical process that still involves a lot of human intervention. The spokesperson added that OpenAI was not trying to influence the model in one political direction or the other.

Sam Altman, OpenAI’s chief executive, acknowledged last month that ChatGPT was “flawed with bias,” but said the company was working to improve its responses. He later wrote that ChatGPT was not meant to be “for or against any politics by default,” but that if users wanted partisan output, the option should be available.

In a blog post published in February, the company said it would work on features that allow users to “define your AI’s values,” which could include a toggle to adjust the model’s political orientation. The company also warned that such tools, if deployed haphazardly, could create “flattering AI that blindly amplifies people’s existing beliefs.”

OpenAI last week released GPT-4, an upgraded version of the model underlying ChatGPT. In a series of tests, the company found that GPT-4 scored better than previous versions at producing truthful content and declining “requests for disallowed content.”

In a paper released shortly after GPT-4’s debut, OpenAI warned that as AI chatbots become more widely adopted, they could “have greater potential to reinforce and solidify entire ideologies, worldviews, truths and lies.”

Chang Che contributed reporting.
