April 26, 2024

A new study by MIT researchers shows that artificial intelligence cannot match human judgment and is more likely to issue harsher judgments and penalties to people who break the rules.

The findings could have real-world implications if artificial intelligence systems are used to predict the likelihood that a criminal will reoffend, which could lead to longer sentences or higher bail amounts, the study said.

Researchers at MIT, along with colleagues at Canadian universities and nonprofits, studied machine learning models and found that when AI is not properly trained, it makes harsher judgments than humans.

The researchers created hypothetical codes of conduct for four settings in which people might break the rules, such as keeping an aggressive dog in an apartment complex that bans certain breeds, or using obscene language in an online comment section.

Human participants then tagged photos or text, and their responses were used to train the AI systems.

“I think most AI/ML researchers assume that humans are biased in their judgment of data and labels, but this result says something even worse,” said Marzyeh Ghassemi, an assistant professor who leads the Healthy ML group at MIT’s Computer Science and Artificial Intelligence Laboratory.

“These models can’t even reproduce already biased human judgment because the data they’re trained on is flawed,” Ghassemi continued. “People label images and text features differently if they know those features are going to be used for judgment.”


Artificial intelligence may make harsher decisions than humans when it comes to judgment, a new study suggests. (iStock)

Companies across the country and around the world have begun to implement artificial intelligence technology, or are considering using it, to assist with routine tasks normally handled by humans.

The new study, led by Ghassemi, examines the extent to which AI “can reproduce human judgment.” The researchers determined that when humans trained the system on “normative” data, in which humans explicitly flag potential violations, the AI system responded more like humans than when it was trained on “descriptive” data.


Descriptive data is defined as humans labeling photos or text in a factual way, such as describing the presence of fried food in a photo of a dinner plate. According to the study, AI systems trained on descriptive data tended to over-predict violations, for example flagging the mere presence of fried or high-sugar foods as breaking a hypothetical school meal rule.


The word artificial intelligence is seen in this illustration taken on March 31, 2023. (Reuters/Dado Ruvic/Illustration)

The researchers created hypothetical codes for four different settings, including: school meal restrictions, dress codes, apartment pet codes, and online comment section rules. They then asked humans to label factual features of the photo or text, such as whether a comment section contained obscene content, while another group was asked whether the photo or text violated a hypothetical rule.

For example, the study showed people pictures of dogs and asked whether the animals violated a hypothetical apartment complex’s policy forbidding aggressive dog breeds on the premises. The researchers then compared responses gathered under the normative and descriptive conditions and found that humans were 20 percent more likely to report a dog as violating the apartment complex rules when labeling descriptively.
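To make that gap concrete, here is a minimal sketch in Python, using invented label counts rather than the study’s data, of how the flag rates under the two labeling conditions could be compared:

# Rough illustration with made-up flags; 1 = flagged, 0 = not flagged.
# The descriptive question asks about a factual feature ("does this look
# like an aggressive breed?"); the normative question asks about the rule itself.

def violation_rate(flags):
    """Fraction of items the annotators flagged."""
    return sum(flags) / len(flags)

descriptive_flags = [1, 1, 0, 1, 1, 0, 1, 0, 1, 1]   # feature judgments
normative_flags   = [1, 0, 0, 1, 1, 0, 0, 0, 1, 1]   # rule-violation judgments

gap = violation_rate(descriptive_flags) - violation_rate(normative_flags)
print(f"Descriptive labeling flags {gap:.0%} more items than normative labeling")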


The researchers then trained one AI system using normative data and another using descriptive data for the four hypothetical settings. The study found that systems trained on descriptive data were more likely to incorrectly predict potential violations than the normatively trained models.
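A simplified sketch of that comparison, using synthetic features and scikit-learn rather than the study’s actual models and data, might look like this:

# Sketch only: synthetic features stand in for the study's images and text,
# and the labeling thresholds below are assumptions, not the researchers' setup.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))          # stand-in features for each item
score = X[:, 0] + 0.5 * X[:, 1]        # latent signal (e.g. "aggressiveness")

# Descriptive annotators flag the factual feature more readily than normative
# annotators flag an actual rule violation, so the descriptive threshold is lower.
y_descriptive = (score > 0.0).astype(int)
y_normative = (score > 0.8).astype(int)

model_desc = LogisticRegression().fit(X, y_descriptive)
model_norm = LogisticRegression().fit(X, y_normative)

X_new = rng.normal(size=(200, 8))      # unseen items at "deployment" time
print("Violations flagged by descriptive-trained model:", model_desc.predict(X_new).mean())
print("Violations flagged by normative-trained model:  ", model_norm.predict(X_new).mean())
# The descriptive-trained model flags far more items, mirroring the
# over-prediction the researchers describe.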


Inside the courtroom, a gavel can be seen. (iStock)

“This shows that data does matter,” Aparna Balagopalan, an MIT graduate student in electrical engineering and computer science who helped author the study, told MIT News. “If you’re training a model to detect if a rule is violated, it’s important to match the training context to the deployment context.”

The researchers argue that improving the transparency of training data could help address the problem of artificial intelligence over-predicting violations, as could training systems on descriptive data supplemented with a small amount of normative data.


“The solution to this problem is to explicitly acknowledge that if we want to reproduce human judgment, we must use only data collected in that setting,” Ghassemi told MIT News.

“Otherwise, we’d end up with systems with extremely harsh moderation, much harsher than what humans do. Humans would see nuance or make another kind of distinction, and these models wouldn’t.”


Illustration of ChatGPT and Google Bard logos (Jonathan Raa/NurPhoto via Getty Images)

The report comes amid concerns in some professional industries that artificial intelligence could wipe out millions of jobs. A Goldman Sachs report earlier this year found that generative AI could affect as many as 300 million jobs worldwide. Another study by placement and executive coaching firm Challenger, Gray & Christmas found that ChatGPT, an AI chatbot, could replace at least 4.8 million U.S. jobs.


AI systems such as ChatGPT are able to mimic human conversation based on prompts given by humans. According to a recent working paper from the National Bureau of Economic Research, such systems have already proven beneficial in some professional settings, such as customer service, where workers were able to improve their productivity with the help of OpenAI’s generative pretrained transformer (GPT) models.


