April 24, 2024

AI-generated text appears more human-like on social media than text written by real humans, a study has found.

Chatbots, such as OpenAI’s popular ChatGPT, can convincingly mimic human conversation based on prompts given by users. The chatbot’s surge in usage last year marked a watershed moment for artificial intelligence, allowing the public to easily converse with bots that can help with school or work tasks and even create dinner recipes.

In a study published in Science Advances, a journal supported by the American Association for the Advancement of Science, researchers examined OpenAI’s text generator GPT-3, available since 2020, and worked to reveal whether humans “can distinguish disinformation from accurate information, structured in the form of tweets,” and determine whether a tweet was written by a human or by an artificial intelligence.

One of the study’s authors, Federico Germani of the Institute for Biomedical Ethics and History of Medicine at the University of Zurich, said the “most surprising” finding was that humans were more likely to label AI-generated tweets as human-generated than tweets actually produced by humans, PsyPost reported.


In this illustration photo, artificial intelligence illustration is seen on a laptop with books in the background. (Getty Images)

“The most surprising finding was that participants often rated AI-generated information as more likely to be human-generated than information produced by real people. This suggests that AI can convince you it is a real person more than a real person can, which is a fascinating side finding of our study,” Germani said.

As the use of chatbots soars, technologists and Silicon Valley leaders are sounding the alarm about how out of control artificial intelligence could even lead to the end of civilization. One of the top concerns of experts is how artificial intelligence can lead to the spread of disinformation on the internet and make people believe things that are not true.


The study, titled “AI model GPT-3 (dis)informs us better than humans,” examined “how AI affects the information landscape and how people perceive and interact with information and misinformation,” Germani told PsyPost.

The researchers identified 11 topics they found were often prone to disinformation, such as 5G technology and the COVID-19 pandemic, and created both false and true tweets generated by GPT-3, as well as false and true tweets written by humans.



This illustration photo taken in Krakow, Poland on June 8, 2023 shows the OpenAI logo on the website and ChatGPT on the AppStore on a mobile phone screen. (Jakub Porzycki/NurPhoto via Getty Images)

They then recruited 697 participants from countries including the US, UK, Ireland and Canada to participate in a survey. Participants received the tweets and were asked to determine whether they contained accurate or inaccurate information, and whether they were generated by artificial intelligence or organically crafted by humans.

“Our study highlights the challenge of distinguishing information generated by AI from information created by humans. It underscores the importance of critically evaluating the information we receive and of trusting reliable sources. Furthermore, I encourage individuals to familiarize themselves with these emerging technologies to grasp their potential, both positive and negative,” Germani said of the study.


The researchers found that participants were better at identifying disinformation written by fellow humans than disinformation produced by GPT-3.

“A notable finding is that AI-generated disinformation is more convincing than human-generated disinformation,” Germani said.

Participants were also more likely to correctly recognize accurate information in tweets generated by AI than in accurate tweets written by humans.

In addition to the “most surprising” finding, the study noted that humans were often unable to distinguish AI-generated tweets from human-written ones, and that their confidence in making that determination declined over the course of the survey.


“Our results show that not only are humans unable to distinguish between synthetic and organic texts, but their confidence in their ability to distinguish between synthetic and organic texts drops significantly after trying to identify their different sources,” the study states.


This may be because GPT-3 imitated humans convincingly, or because respondents underestimated the AI system’s ability to imitate humans, the researchers said.


“We suggest that when individuals are confronted with a large amount of information, they may feel overwhelmed and give up trying to critically evaluate the information. As a result, they may be less likely to attempt to distinguish synthetic from organic tweets, leading to a decrease in their confidence in identifying synthetic tweets,” the researchers wrote in the study.

The researchers noted that GPT-3 sometimes refused to generate disinformation when instructed to, but also sometimes produced false information even when told to create a tweet containing accurate information.


“While this raises concerns about the effectiveness of AI in generating convincing disinformation, its real-world implications are not yet fully understood,” Germani told PsyPost. “Addressing this question will require larger studies on social media platforms to observe how people interact with AI-generated information and how these interactions affect behavior and adherence to personal and public health recommendations.”


