Social media giant Meta said more than 16.2 million pieces of content were proactively "actioned" on Facebook across 13 violation categories in India during the month of November.
Its photo-sharing platform, Instagram, took proactive action against more than 3.2 million pieces in 12 categories during the same period, according to data shared in a compliance report.
Under IT rules that came into effect earlier this year, large digital platforms (those with more than 5 million users) are required to publish monthly compliance reports listing the complaints received and the actions taken on them.
The reports also include details of content removed or disabled through proactive monitoring using automated tools. Facebook had proactively "actioned" more than 18.8 million pieces of content across 13 categories in October, while Instagram proactively took action against more than 3 million pieces across 12 categories during the same period.
In its latest report, Meta said that 519 user reports were received by Facebook through its Indian complaints mechanism between November 1 and November 30.
"Of these incoming reports, we provided tools for users to resolve their issues in 461 cases," the report says.
These include pre-established channels to report content for specific violations, self-service flows where users can manage their data, ways to troubleshoot account-hacking issues, and more, it added.
Between November 1 and November 30, Instagram received 424 reports through the Indian complaints mechanism. Facebook’s parent company recently changed its name to Meta.
Meta's applications include Facebook, WhatsApp, Instagram, Messenger, and Oculus. According to the latest report, the more than 16.2 million pieces of content actioned by Facebook in November included content related to spam (11 million), violent and graphic content (2 million), adult nudity and sexual activity (1.5 million), and hate speech (100,100).
Other categories in which content was actioned include bullying and harassment (102,700), suicide and self-harm (370,500), dangerous organizations and individuals: terrorist propaganda (71,700), and dangerous organizations and individuals: organized hate (12,400).
The Endangering Children – Nudity and Physical Abuse category saw 163,200 pieces of content actioned, the Endangering Children – Sexual Exploitation category saw 700,300, and the Violence and Incitement category saw 190,500.
"Actioned" content refers to the number of pieces of content (such as posts, photos, videos, or comments) on which action was taken for violating the standards.
Taking action could include removing a piece of content from Facebook or Instagram, or covering photos or videos that may be disturbing to some audiences with a warning.
The proactive rate, which shows the percentage of all content or accounts that Facebook found and flagged using technology before users reported it, ranged between 60.5% and 99.9% in most of these categories.
The proactive rate for removing bullying and harassment content was lower, at 40.7%, because such content is contextual and highly personal in nature. In many cases, people need to report this behavior to Facebook before it can identify or remove the content.
For Instagram, over 3.2 million pieces of content were actioned across 12 categories in November 2021. This includes content related to suicide and self-harm (815,800), violent and graphic content (333,400), adult nudity and sexual activity (466,200), and bullying and harassment (285,900).
Other categories in which content was actioned include hate speech (24,900), dangerous organizations and individuals: terrorist propaganda (8,400), dangerous organizations and individuals: organized hate (1,400), Endangering Children – Nudity and Physical Abuse (41,100), and Violence and Incitement (27,500).
The “Endangering Children – Sexual Exploitation” category saw 1.2 million pieces of content proactively processed in November.
(Only the title and image of this report may have been reworked by Business Standard staff; the rest of the content is automatically generated from a syndicated feed.)