April 2, 2023

Google did not tell the woman that her account had been reinstated; she learned of the decision from a Times reporter ten days later.

When she logged in, everything was restored except the videos her son had made. A message popped up on YouTube, with an illustration of a referee blowing a whistle, saying that her content had violated community guidelines. “Because this is the first time, this is just a warning,” the message said.

“I wish they had started here in the first place,” she said. “It would have saved me months of tears.”

Jason Scott, a digital archivist who wrote an unforgettably profane 2009 blog post warning against trusting cloud computing, argued that companies should be legally obliged to give users their data, even when accounts are closed for rule violations.

“Data storage should act like tenant law,” Mr. Scott said. “You shouldn’t hold onto someone’s data and not return it.”

The mother also received an email from “the Google team,” sent on Dec. 9.

“We understand that you have attempted to appeal this on several occasions and apologize for the inconvenience caused,” it said. “We hope you understand that we have strict policies in place to prevent our services from being used to share harmful or illegal content, especially egregious content like child sexual abuse material.”

Many companies besides Google monitor their platforms in an attempt to stem the rampant sharing of child sexual abuse images. Last year, more than 100 companies sent 29 million reports of suspected child exploitation to the National Center for Missing and Exploited Children, a nonprofit that acts as a clearinghouse for such material and forwards reports to law enforcement for investigation. The nonprofit does not track how many of those reports reflect actual abuse.

Meta sends the highest number of reports to the national center, more than 25 million from Facebook and Instagram in 2021. Last year, the company’s data scientists analyzed some of the flagged material and found examples that were deemed illegal under federal law but not malicious. Of a sample of 150 flagged accounts, more than 75 percent “did not appear malicious,” the researchers said, citing “memes of children being bitten by animals on their genitals” that were shared humorously, and teenagers texting each other.


