
Nohemi Gonzalez, a 23-year-old California college student, was studying in Paris in November 2015 when she was one of 130 people killed in a series of coordinated terrorist attacks across the city.
The following year, her father sued Google and other tech companies. He accused the companies of spreading content that radicalized users into terrorists and said they were legally responsible for the harm they caused to Ms. Gonzalez’s family. Her mother, stepfather and brother eventually joined the lawsuit.
Their appeal will be heard by the U.S. Supreme Court on Tuesday. The case, in which Google is now the sole defendant, could have a seismic impact on the social media platforms that have become conduits for communication, commerce and culture for billions of people.
Their lawsuit targets Section 230 of the Communications Decency Act, a federal law that shields online platforms such as Facebook, Instagram and Google’s YouTube from lawsuits over content their users post or over decisions to remove it. The case gives the Supreme Court justices a chance to narrow the scope of those legal protections or strike them down entirely, potentially making companies liable for user-posted content and exposing them to lawsuits over defamation, discriminatory advertising and extremist propaganda.
The day after it hears Gonzalez v. Google, the court is scheduled to hear a second technology case, Twitter v. Taamneh, over whether Twitter facilitated terrorism.
The Supreme Court’s eventual ruling in these cases will intensify a fierce debate around the world about how to regulate online speech. Many governments say social networks have become fertile ground for hate speech and misinformation, and some have ordered the platforms to remove posts. In the United States, however, the First Amendment makes it very difficult for Congress to do the same.
Critics of Section 230 say it allows tech companies to avoid liability for harm caused on their watch. Proponents counter that without the legal protections, the companies would remove more content than ever to avoid lawsuits, stifling free speech.
Hany Farid, a professor at UC Berkeley’s School of Information, said the Supreme Court case “could have implications for how these companies operate and how we interact with the internet.” He filed a brief with the Supreme Court in support of members of the Gonzalez family suing Google.
Ms. Gonzalez, a first-generation college student who studied design at Cal State Long Beach, was killed while out with friends during the 2015 Paris attacks, for which the Islamic State later claimed responsibility. She was the only American killed.
Her father, Reynaldo Gonzalez, sued Google, Facebook and Twitter in 2016, saying the platforms had spread extremist content, including propaganda, messages from Islamic State leaders and videos of bloody violence. The suit cited media reports about specific videos showing footage of Islamic State fighters on the battlefield, as well as updates from outlets affiliated with the group. The lawsuit says the platforms did not do enough to keep terrorist groups off their sites.
YouTube and other platforms say they screen for such videos and remove many of them. But a 2018 study based on a tool developed by Mr. Farid found that some Islamic State videos stayed up for hours, including ones that encouraged violent attacks in Western countries.
Facebook and Twitter were dismissed as defendants in 2017, the same year Ms. Gonzalez’s mother, stepfather and brother joined the lawsuit as plaintiffs. Last year, a federal appeals court ruled that Google did not have to face the Gonzalez family’s claims because the company was protected by Section 230.
In May, attorneys for Ms. Gonzalez’s family asked the Supreme Court to intervene. The lawyers argue that by using algorithms to recommend content to users, YouTube is essentially engaging in its own form of speech, which is not protected by Section 230.
Ms. Gonzalez’s father and the plaintiffs in the Twitter case declined to comment through their attorney, Keith Altman. Mr. Altman said the courts had pushed the “limits” of Section 230’s legal protections to the point of being “unrecognizable.” Attorneys for Ms. Gonzalez’s other family members did not respond to requests for comment, and Eric Schnapper, the attorney who will argue both cases before the Supreme Court, also declined to comment.
Google has denied the Gonzalez family’s Section 230 arguments. It said the family’s claims that Google supported terrorism were based on “clichéd” and “speculative” arguments.
In Congress, efforts to reform Section 230 have stalled. Republicans, spurred by accusations that internet companies are more likely to remove conservative posts, have proposed changes to the law; Democrats say the platforms should remove more content when it spreads misinformation or hate speech.
Instead, it has fallen to the courts to explore the limits of how the law applies.
In a 2021 case, a federal appeals court in California ruled that Snap, the parent company of Snapchat, could not use Section 230 to avoid a lawsuit over the deaths of three people in a car crash that followed their use of a Snapchat filter showing how fast users were moving.
Last year, a federal judge in California said Apple, Google and Meta, the parent company of Facebook, could not use the legal protections against claims by consumers that they had been harmed by casino-style apps. A federal judge in Oregon also ruled that the statute did not shield Omegle, a chat site that connects users at random, from a lawsuit alleging that an 11-year-old girl encountered predators through its service.
Tech companies say it would be devastating if the Supreme Court weakened Section 230. Google has argued that the protections were “critical to allowing Google and the Internet to thrive” in their infancy and to grow into an important part of the U.S. economy.

“It’s very important to maintain the status quo,” the company has said.
A spokesperson for Meta pointed to a blog post in which the company’s top lawyer said the case “could make it harder for millions of online companies like Meta to provide the types of services that people love to use every day.”
Twitter did not respond to a request for comment.
Activists worry that changes to the law could lead platforms to crack down on content posted by vulnerable groups. In 2018, a new law ended Section 230 protections for platforms that knowingly facilitate sex trafficking. Activists say this led sites to remove posts by adult sex workers and posts about LGBTQ people.
The Gonzalez case has also drawn interest from the Justice Department. In a brief filed in December, the department told the Supreme Court that, in its view, Section 230 “does not bar claims based on YouTube’s alleged targeted promotion of ISIS content.” The White House has said the legal protections should be removed.
Mr. Farid acknowledged the risk that courts could undermine Section 230’s protections in ways that lead to unintended consequences. But he noted that social networks already comply with laws governing how they handle certain kinds of content, such as Germany’s restrictions on online hate speech, and said the companies could likewise adapt to narrow changes in the legal shield.
“These companies figured it out,” he said.