Google has warned that the ruling in an ongoing Supreme Court (SC) case could put the entire internet at risk by removing key liability protections for content moderation decisions involving artificial intelligence (AI).
Section 230 of the Communications Decency Act of 1996 currently provides comprehensive "liability protection" for how companies moderate content on their platforms.
However, as reported by CNN, Google wrote in a legal filing that if the SC rules in favor of the plaintiffs in Gonzalez v. Google, a case revolving around YouTube's algorithm recommending pro-ISIS content to users, the internet could be flooded with dangerous, offensive, and extremist content.
Automating moderation
As part of a nearly 27-year-old law that has become a target of US President Biden's reform efforts, Section 230 does not account for modern developments such as AI algorithms, and this is where the problems begin.
Crucial to Google's argument is that the internet has evolved so much since 1996 that incorporating artificial intelligence into content moderation has become necessary. "Almost any modern website would not work if users had to sort through content themselves," it said in the filing.
The sheer wealth of content means tech companies must use algorithms to present it to users in a manageable way, from search engine results to flight deals to job recommendations on employment sites.
Google also pointed out that under current law, tech companies declining to moderate their platforms at all is a perfectly legal way to avoid liability, but that this risks turning the internet into a "virtual cesspool."
The tech giant also noted that YouTube's community guidelines expressly prohibit terrorism, adult content, violence, and "other dangerous or offensive content," and that it constantly tweaks its algorithms to pre-emptively block banned material.
It also claimed that "approximately" 95% of videos violating YouTube's "violent extremism policy" were removed in the second quarter of 2022.
Still, the petitioners in the case maintain that YouTube failed to remove all ISIS-related content and, in doing so, contributed to the "rise of ISIS."
To further distance itself from liability on this point, Google responded that YouTube's algorithm recommends content to users based on the similarity between a piece of content and content they are already interested in.
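To give a general sense of what similarity-based recommendation looks like, here is a minimal sketch using cosine similarity over feature vectors. This is a generic illustration, not YouTube's actual system; the item names and feature weights are entirely hypothetical:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def recommend(watched, catalog, top_n=2):
    """Rank unwatched catalog items by their best similarity
    to anything the user has already watched."""
    scores = {}
    for name, features in catalog.items():
        if name in watched:
            continue
        scores[name] = max(
            cosine_similarity(features, seen)
            for seen in watched.values()
        )
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# Toy feature vectors (e.g. topic weights) -- entirely made up.
catalog = {
    "cooking_basics": [0.9, 0.1, 0.0],
    "knife_skills":   [0.8, 0.2, 0.0],
    "travel_vlog":    [0.1, 0.9, 0.1],
    "news_roundup":   [0.0, 0.2, 0.9],
}
watched = {"cooking_basics": catalog["cooking_basics"]}

print(recommend(watched, catalog))
# ['knife_skills', 'travel_vlog'] -- the closest items rank first
```

In a system like this, recommendations simply mirror a user's existing interests, which is the crux of Google's argument: the algorithm is content-neutral and surfaces whatever resembles what the user already watches.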
It’s a complicated case, and while it’s easy to agree that the internet has grown too large for manual review, it’s equally compelling that companies should be held accountable when their automated solutions go wrong.
After all, even the tech giants cannot fully vouch for the content on their sites, and users cannot tell whether filters and parental controls are taking effective steps to block offensive content.