Twitter Ads: Brands Slam Twitter For Ads Next To Child Pornography Accounts

Some major advertisers, including Dyson, Mazda, Forbes and PBS Kids, suspended their marketing campaigns or removed their ads from parts of Twitter because their promotions appeared alongside tweets soliciting child pornography, the companies told Reuters.

Brands ranging from Walt Disney Co, NBCUniversal and Coca-Cola Co to a children’s hospital were among more than 30 advertisers that appeared on the profile pages of Twitter accounts peddling links to exploitative material, according to a Reuters review of accounts identified in new research into online child sexual abuse by cybersecurity group Ghost Data.

Some of the tweets included keywords related to “rape” and “teenagers” and appeared alongside promoted tweets from corporate advertisers, according to the Reuters review. In one example, a tweet promoted for footwear and accessories brand Cole Haan appeared alongside a tweet in which a user said they were “swapping teen/kids content.”

“We are horrified,” David Maddocks, brand president at Cole Haan, told Reuters after being told the company’s ads were appearing alongside those tweets. “Twitter will either fix this or we will do it any way we can, including not buying Twitter ads.”

In another example, a user tweeted looking for the content “Yung girls ONLY, NO Boys”, which was immediately followed by a tweet promoted for the Texas-based Scottish Rite Children’s Hospital. Scottish Rite did not return multiple requests for comment.

In a statement, Twitter spokeswoman Celeste Carswell said the company “has zero tolerance for the sexual exploitation of children” and is investing more resources in child safety, including hiring for new positions to write policies and implement solutions.

She added that Twitter was working closely with its customers and advertising partners to investigate and take action to prevent the situation from happening again.

Twitter’s difficulties in identifying child pornography material were first detailed in an investigation by tech news site The Verge in late August. The emerging pushback from advertisers who are essential to Twitter’s revenue stream is reported here by Reuters for the first time.

Like all social media platforms, Twitter prohibits depictions of child sexual exploitation, which are illegal in most countries. But it allows adult content in general and is home to a thriving exchange of pornographic images, which accounts for about 13% of all content on Twitter, according to an internal company document seen by Reuters.

Twitter declined to comment on the volume of adult content on the platform.

Ghost Data identified more than 500 accounts that openly shared or requested child pornography material over a 20-day period this month. Twitter failed to remove more than 70% of the accounts during the study period, according to the group, which shared its findings exclusively with Reuters.

Reuters could not independently confirm the accuracy of Ghost Data’s findings in their entirety, but it reviewed dozens of accounts that remained online and solicited material for “13+” and “naked youth”.

After Reuters shared a sample of 20 accounts with Twitter last Thursday, the company removed about 300 more accounts from the network, but more than 100 others remained on the site the following day, according to Ghost Data and a Reuters review.

Reuters then shared the full list of more than 500 accounts provided by Ghost Data on Monday; Twitter reviewed the accounts and permanently suspended them for violating its rules, Twitter’s Carswell said on Tuesday.

In an email to advertisers on Wednesday morning, before this story was published, Twitter said it “discovered that ads were running in profiles involved in the public sale or solicitation of child sexual abuse material.”

Andrea Stroppa, the founder of Ghost Data, said the study was an attempt to gauge Twitter’s ability to remove the material. He said he personally funded the research after receiving a tip on the subject.

Twitter’s transparency reports on its website show it suspended more than a million accounts last year for child sexual exploitation.

It made about 87,000 reports to the National Center for Missing and Exploited Children, a government-funded nonprofit that facilitates information sharing with law enforcement, according to that organization’s annual report.

“Twitter needs to address this as soon as possible, and until they do, we will cease all paid activity on Twitter,” a Forbes spokesperson said.

“There is no place for this type of content online,” a spokesperson for automaker Mazda USA said in a statement to Reuters, adding that in response the company is now blocking its ads from appearing on Twitter profile pages.

A Disney spokesperson called the content “objectionable” and said they are “increasing efforts to ensure that the digital platforms we advertise on and the media buyers we use are stepping up their efforts to prevent such errors from happening again”.

A spokesperson for Coca-Cola, which had a promoted tweet appear on an account tracked by the researchers, said the company does not condone material being associated with its brand, adding that “any violation of these standards is unacceptable and taken very seriously.”

NBCUniversal said it asked Twitter to remove ads associated with inappropriate content.

CODE WORDS

Twitter isn’t alone in grappling with moderation failures related to child safety online. Child protection advocates say the number of known child sexual abuse images has risen from thousands to tens of millions in recent years as predators have taken to social media, including Meta’s Facebook and Instagram, to groom victims and exchange explicit images.

Among the accounts identified by Ghost Data, nearly all sellers of child sexual exploitation material marketed it on Twitter and then asked buyers to contact them on messaging services such as Discord and Telegram to complete payment and receive the files, which were stored on cloud storage services like New Zealand-based Mega and U.S.-based Dropbox, according to the group’s report.

A Discord spokesperson said the company banned a server and a user for breaking its rules against sharing links or content that sexualizes children.

Mega said a link referenced in the Ghost Data report was created in early August and shortly thereafter deleted by the user, whom it declined to identify. Mega said it permanently closed the user’s account two days later.

Dropbox and Telegram said they use various tools to moderate content, but did not provide additional details on how they would respond to the report.

Still, the advertisers’ backlash poses a risk to Twitter’s business, which derives more than 90% of its revenue from the sale of digital ad placements to brands seeking to market products to the service’s 237 million daily active users.

Twitter is also battling Tesla CEO and billionaire Elon Musk in court as he tries to walk away from a $44 billion deal to buy the social media company, citing complaints about the prevalence of spam accounts and their impact on the business.

A team of Twitter employees concluded in a report dated February 2021 that the company needed more investment to identify and remove child exploitation material at scale, noting that it had a backlog of cases to review for possible reporting to law enforcement.

“While the amount of (child sexual exploitation content) has grown exponentially, Twitter’s investment in technologies to detect and manage the growth has not increased,” according to the report, which was prepared by an internal team to provide insight into the status of child exploitation material on Twitter and receive legal advice on proposed strategies.

“Recent reports on Twitter provide an outdated, snapshot view of just one aspect of our work in this space, and don’t accurately reflect where we are today,” Carswell said.

Traffickers often use code words such as “cp” for child pornography and are “intentionally as vague as possible” to avoid detection, according to internal documents. The more Twitter cracks down on certain keywords, the more users are pressured into using obfuscated text, which “tends to be harder for (Twitter) to automate,” according to the documents.

Ghost Data’s Stroppa said such tricks would complicate efforts to track down the material, but noted that his small team of five researchers, with no access to Twitter’s internal resources, was able to find hundreds of accounts within 20 days.

Twitter did not respond to a request for additional comment.
