June 9, 2023

Dozens of fringe news sites, content farms and fake commenters are using artificial intelligence to create inauthentic content online, according to two reports published Friday.

The misleading AI-generated content included fabricated events, medical advice and celebrity death hoaxes, the reports said, raising fresh concerns that the transformative technology could quickly reshape the online misinformation landscape.

The two reports were published by NewsGuard, a company that tracks online misinformation, and ShadowDragon, a company that provides resources and training for digital investigations.

“News consumers are trusting news sources less and less, in part because it is difficult to distinguish generally reliable sources from generally unreliable sources,” NewsGuard CEO Steven Brill said in a statement. “This wave of AI-created websites will only make it harder for consumers to know who is delivering their news, further reducing trust.”

NewsGuard identified 125 websites, ranging from news outlets to lifestyle sites and published in 10 languages, whose content was written entirely or largely with artificial intelligence tools.

Those sites include a health information portal that NewsGuard says has published more than 50 AI-generated articles offering medical advice.

In an article on the site about identifying advanced bipolar disorder, the first paragraph reads: “As a language model AI, I do not have access to up-to-date medical information or the ability to provide a diagnosis. Also, ‘advanced bipolar’ is not an accepted medical term.” The article goes on to describe the four classifications of bipolar disorder, which it incorrectly calls “four major stages.”

NewsGuard said the sites were often flooded with advertisements, suggesting that the inauthentic content was produced to drive clicks and increase ad revenue for the site owners, who were often unknown.

The findings include 49 websites using AI-generated content that NewsGuard identified earlier this month.

ShadowDragon also found inauthentic content on major websites and social media platforms, including Instagram, as well as in Amazon reviews.

“Yes, as an AI language model, I can definitely write a positive product review on the Active Gear Waist Trimmer,” reads a five-star review on Amazon.

The researchers were also able to reproduce some of the reviews using ChatGPT, finding that the bot would often point out “standout features” and conclude it would “highly recommend” the product.
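(Neither report publishes the exact prompts used. As a rough sketch, a request along the following lines to OpenAI's chat API, via the openai Python package as it existed in mid-2023, yields similar review text. The prompt wording is an illustrative assumption, not the researchers' actual query.)

```python
import openai  # pip install openai; reads the OPENAI_API_KEY environment variable

# Illustrative only: this prompt is an assumption, not the researchers'
# actual query, and the model's output will vary from run to run.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            "content": "Write a positive five-star product review "
                       "for the Active Gear Waist Trimmer.",
        }
    ],
)

print(response.choices[0].message.content)
```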

The company also noted that several Instagram accounts appeared to use ChatGPT or other artificial intelligence tools to write descriptions under pictures and videos.

To find these examples, the researchers looked for the obvious error messages and preset responses that AI tools often produce. Some sites carried AI-written warnings that the requested content contained misinformation or promoted harmful stereotypes.
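That search amounts to simple phrase matching. Below is a minimal sketch of the approach, assuming a hand-built phrase list drawn from the examples quoted in this article; the reports' actual tooling and keyword lists are not disclosed.

```python
import re

# Telltale boilerplate phrases that AI chatbots often emit. This list
# is assembled from quotes in the article; it is an assumption, not
# the researchers' actual keyword set.
TELLTALE_PHRASES = [
    "as an ai language model",
    "as a language model ai",
    "i cannot provide biased or political content",
    "i do not have access to up-to-date",
]
PATTERN = re.compile("|".join(map(re.escape, TELLTALE_PHRASES)), re.IGNORECASE)

def flag_ai_residue(text: str) -> list[str]:
    """Return every telltale AI phrase found in the given text."""
    return PATTERN.findall(text)

review = ("Yes, as an AI language model, I can definitely write a "
          "positive product review on the Active Gear Waist Trimmer.")
print(flag_ai_residue(review))  # ['as an AI language model']
```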

“As an AI language model, I cannot provide biased or political content,” read one message in an article about the war in Ukraine.

ShadowDragon found similar messages on LinkedIn, in Twitter posts and on far-right message boards. Some of the Twitter posts were made by known bots, such as ReplyGPT, an account that generates replies to tweets when prompted. But others appeared to come from regular users.


