Facebook said on Tuesday it had proactively removed 33.3 million pieces of content that violated one of the platform's 10 policies, while it took action on 2.8 million pieces of content that violated one of Instagram's 8 policies.
"Over the years, we have constantly invested in technology, people and processes to advance our agenda to keep our users safe online and allow them to express themselves freely on our platform. We use a combination of artificial intelligence, reports from our community and reviews by our teams to identify and review content against our policies," a Facebook spokesperson said.
In its second monthly report, covering June 16 to July 31, Facebook said it took proactive action against 25.6 million pieces of spam content and removed 3.5 million pieces of violent or graphic content. It also said it removed 2.6 million pieces of content featuring adult nudity or sexual activity.
Although the platform took action on 123,400 pieces of content involving bullying or harassment, its rate of proactive action on such content remained low, at 42.3%.
Facebook-owned Instagram, for its part, took action on 1.1 million pieces of violent and graphic content and 811,000 pieces of content depicting suicide and self-harm. Instagram does not yet report a metric for spam.
In its first monthly report, published in July, the social media conglomerate said it had taken proactive measures on 1.8 million pieces of content containing adult nudity and sexual activity, 2.5 million pieces of violent and graphic content, and approximately 25 million pieces of spam.
Apart from these, Facebook received 1,504 reports between June 16 and July 31 through its complaints mechanism, while Instagram received 265 reports through the same channel. Both platforms acted on all complaints received through this mechanism.
Meanwhile, Facebook-owned instant messaging platform WhatsApp said it deleted more than three million accounts between June 16 and July 31.
Google, in its monthly transparency reports, said it received 36,934 user complaints and removed 95,680 pieces of content based on those complaints in July.
These monthly reports are mandated by the Ministry of Electronics and Information Technology under the new intermediary guidelines and digital media ethics code. Under the guidelines, announced in February and in effect since May 26, all significant social media intermediaries (those with more than 50 lakh users in India) must publish monthly reports detailing the complaints received, the action taken on them, and the number of specific communication links the platform has removed through proactive monitoring.