Musk pledged to rid Twitter of child abuse content. Is it working?
A video of a boy being sexually assaulted has drawn more than 120,000 views. A recommendation engine suggests that users follow accounts tied to content involving exploited children. And Twitter has allowed users to keep posting abusive material, delayed removing it once detected, and created friction with the organizations that police it.
All of this has happened since late November, when Elon Musk declared in a tweet that the "elimination of child exploitation is the number one priority."
Under Mr. Musk's leadership, Twitter's head of safety, Ella Irwin, said the company had been moving quickly to crack down on child sexual abuse material, which was prevalent on the site under the previous owners, just as it is on most tech platforms. The company promises that "Twitter 2.0" will be different.
But a New York Times review found that the imagery, commonly referred to as child pornography, has persisted on the platform, including widely circulated material that the authorities consider the easiest to detect and remove.
Since Mr. Musk took over in late October, the review showed, Twitter has laid off or lost many of the employees who had worked on the problem and has failed to stop the spread of abusive images that the authorities had previously identified. Twitter also stopped paying for some detection software considered key to its efforts.
Offenders on dark-web forums discuss Twitter as a platform where they can easily find the material while avoiding detection, according to transcripts of those forums from an anti-abuse group that monitors them.
"If you let the sewer rats in," said Julie Inman Grant, Australia's e-safety commissioner, "you know the plague is coming."
In a Twitter audio chat with Ms. Irwin in early December, an independent researcher working with Twitter said illegal content had been publicly available on the platform for years, garnering millions of views. But Ms. Irwin and others at Twitter say their efforts under Mr. Musk's leadership are paying off. In the first full month under the new ownership, the company suspended nearly 300,000 accounts for violating its "child sexual exploitation" policies, 57 percent more than usual, the company said.
Twitter said the effort accelerated in January, when it suspended 404,000 accounts. "Our recent approach has been more aggressive," the company said in a series of tweets on Wednesday, adding that it was also cracking down on people who search for exploitative material, with successful searches down 99 percent since December.
Ms. Irwin said in an interview that most of the suspensions involved accounts that engaged with the material or claimed to sell or distribute it, rather than accounts that posted it. She did not dispute that child sexual abuse material remains openly available on the platform, saying "we absolutely know we're still missing something that we need to be able to detect better."
She added that Twitter is hiring and deploying “new mechanisms” to address the issue. “We’ve been working non-stop,” she said.
Wired, NBC and other outlets have detailed Twitter's ongoing struggles with child abuse imagery under Mr. Musk's leadership. On Tuesday, Senator Richard J. Durbin, Democrat of Illinois, asked the Justice Department to review Twitter's record on addressing the problem.
To assess the company's claims of progress, The Times created an individual Twitter account and wrote an automated computer program that could scour the platform for the content without displaying the actual images, which are illegal to view. The material was not difficult to find. In fact, Twitter helped promote it through its recommendation algorithm, a feature that suggests accounts to follow based on a user's activity.
Among the recommended accounts was one featuring a headshot of a shirtless boy. The child in the photo is a known victim of sexual abuse, according to the Canadian Center for Child Protection, which helped The Times identify exploitative material on the platform by matching it against a database of previously identified imagery.
The same user followed other suspicious accounts, including one that had "liked" a video of a boy sexually assaulting another boy. By Jan. 19, the video, which had been on Twitter for more than a month, had over 122,000 views, nearly 300 retweets and more than 2,600 likes. Twitter removed the video after the Canadian center flagged it to the company.
In its first few hours of searching, the computer program found images previously identified as abusive, as well as accounts offering to sell more. The Times flagged the posts without viewing any of the images, sending the URLs to services run by Microsoft and the Canadian center.
One account in late December offered a discounted "Christmas pack" of photos and videos. That user tweeted a partially obscured image of a child who had been abused from about age 8 through adolescence. Twitter took down the post five days later, but only after the Canadian center sent the company repeated notices.
All told, the computer program found imagery of 10 victims appearing more than 150 times across multiple accounts, most recently on Thursday. The accompanying tweets often advertised child rape videos and included links to encrypted platforms.
Alex Stamos, director of the Stanford Internet Observatory and former security chief at Facebook, found the results shocking. “Given Musk’s focus on child safety, it’s surprising they didn’t do the most basic things,” he said.
Separately, to confirm The Times's findings, the Canadian center ran a test to determine how often one video series involving known victims appeared on Twitter. Analysts found 31 different videos shared by more than 40 accounts, some of which were retweeted and liked thousands of times. The videos showed a young teenager who had been extorted online into engaging in sexual acts with a preteen over a period of months.
The center also conducted a broader scan against the most explicit videos in its database. That scan produced more than 260 hits, with more than 174,000 likes and 63,000 retweets.
“The amount we were able to find with minimal effort was remarkable,” said Lloyd Richardson, the Canadian center’s technical director. “It shouldn’t be the job of outsiders to find this type of content on their systems.”
In 2019, The Times reported that many tech companies had serious gaps in policing child exploitation on their platforms. This past December, an audit published by Ms. Inman Grant, the Australian regulator, found that many of the same problems persist at some of those companies.
The Australian audit did not include Twitter, but some of the platform's difficulties mirror those of other tech companies and predate Mr. Musk's arrival, according to multiple current and former employees.
Twitter, founded in 2006, began using more comprehensive tools to scan for child sexual abuse videos only last fall, and an engineering team dedicated to finding illegal photos and videos was formed just 10 months ago, they said. In addition, the company's trust and safety team was understaffed, even as the company kept expanding it during a broad hiring freeze that began last April, four former employees said.
The company did build internal tools over the years to find and remove some images, and the national center has often praised the quality of its reports.
There have also been problems in recent months with the platform's abuse reporting system, which lets users notify the company when they encounter child exploitation material. (Twitter offers a guide to reporting abusive content on its platform.)
The Times used its research account to report multiple accounts that claimed to sell or trade the content in December and January. Many of those accounts remained active, and some even surfaced as recommended follows on The Times's own account. The company said it needed more time to determine why such recommendations occurred.
To find the material, Twitter has relied on software built by Thorn, an anti-trafficking organization. Twitter has not paid the group since Mr. Musk took over, presumably as part of a broader effort to cut costs, according to people familiar with the matter. Twitter has also stopped working with Thorn to improve the technology, a collaboration that had benefited the wider industry because other companies use the software as well.
Ms. Irwin declined to comment on Twitter's dealings with specific vendors.
Twitter’s relationship with the National Center for Missing and Exploited Children has also suffered, according to people who work there.
John Shehan, an executive at the center, said he was concerned by Twitter's "high rate of turnover" and by questions about where the company stands on trust and safety and its commitment to identifying and removing child sexual abuse material from its platform.
According to the center, Twitter was initially slow to respond to its notifications of sexual abuse content after the transition to Mr. Musk's ownership, a delay that matters to survivors, who are victimized anew each time the material resurfaces. Like other social media sites, Twitter has a two-way relationship with the center: when the platform finds illegal content, it notifies the center (which can then alert law enforcement), and when the center learns of illegal content on Twitter, it alerts the site so the images and accounts can be removed.
Late last year, the company's response time was more than double what it had been during the same period under the previous ownership, even though the center was sending it fewer notifications. In December 2021, Twitter took an average of 1.6 days to respond to 98 notifications; last December, after Mr. Musk took over, it took 3.5 days to respond to 55. By January, the figures had improved markedly, with Twitter taking 1.3 days to respond to 82 notifications.
The Canadian center, which performs the same function in Canada, said it had seen delays of up to a week. In one instance, the center detected a video on Jan. 6 depicting the abuse of a naked girl between the ages of 8 and 10. The group said it sent notices daily for about a week before Twitter removed the video.
In addition, Twitter and the U.S. national center appear to disagree over whether Twitter is obligated to report accounts that claim to sell illegal material without directly posting it.
The company has not reported to the national center the hundreds of thousands of accounts it has suspended, Ms. Irwin said, because the rules require "a high degree of confidence that the person is knowingly transmitting" illegal imagery, and the accounts did not meet that threshold.
Mr. Shehan of the national center disputed that interpretation of the rules, noting that tech companies are also legally required to report users even if they only claim to sell or solicit the material. So far, the center's data shows, Twitter files about 8,000 reports a month, a small fraction of the accounts it has suspended.
Ms. Inman Grant, the Australian regulator, said her agency had been unable to reach the company's local representatives because its Twitter contacts in Australia had resigned or been fired since Mr. Musk took over. She fears the layoffs will lead to more trafficking in exploitative imagery.
"These local contacts play a critical role in resolving time-sensitive issues," said Ms. Inman Grant, a former safety executive at both Twitter and Microsoft.
Ms. Irwin said the company continued to engage with the Australian agency and, more broadly, expressed her belief that Twitter was "getting better," while acknowledging the challenges ahead.
"We're in no way patting ourselves on the back and saying, 'Man, we've got it covered,'" Ms. Irwin said.
Offenders have continued to trade tips on dark-web forums about how to find the material on Twitter, according to posts uncovered by the Canadian center.
On Jan. 12, a user described hundreds of “legitimate” Twitter accounts selling videos of young boys tricked into sending explicit recordings of themselves. Another user described Twitter as an easy place to watch all types of sexual abuse videos. “People are sharing so much,” the user wrote.
Ryan Mac and Chang Che contributed reporting.