On February 4, 2019, a Facebook researcher created a new user account to see what it was like to discover the social media site as a person living in Kerala, India.
For the next three weeks, the account operated under one simple rule: follow all recommendations generated by Facebook’s algorithms to join groups, watch videos, and explore new pages on the site.
The result was a flood of hate speech, disinformation and celebrations of violence, documented in an internal Facebook report released later that month.
"As a result of this test user's news feed, I have seen more images of deceased people in the past three weeks than I have seen in my entire life," the Facebook researcher wrote.
The report was among dozens of studies and notes written by Facebook employees grappling with the platform’s effects on India. They provide glaring evidence of one of the most serious criticisms leveled by human rights activists and politicians against the global enterprise: it settles in a country without fully understanding its potential impact on local culture and politics, and does not deploy the necessary resources to act on problems once they arise.
With 340 million people using Facebook’s various social media platforms, India is the company’s largest market. And Facebook’s problems on the subcontinent present an amplified version of the problems it has faced around the world, compounded by a lack of resources and a lack of expertise in India’s 22 officially recognized languages.
The internal documents, obtained by a consortium of news agencies that included the New York Times, are part of a larger cache of material called The Facebook Papers. They were collected by Frances Haugen, a former Facebook product manager who became a whistleblower and recently testified before a Senate subcommittee on the company and its social media platforms. References to India were scattered among documents Ms Haugen filed with the Securities and Exchange Commission in a complaint earlier this month.
The documents include reports of how bots and fake accounts linked to ruling party and opposition figures wreaked havoc in national elections. They also detail how a plan championed by Mark Zuckerberg, chief executive of Facebook, to focus on “meaningful social interactions”, or exchanges between friends and family, led to more misinformation in India, especially during the pandemic.
Facebook did not have enough resources in India and was unable to grapple with the problems it had introduced there, including anti-Muslim posts, according to its documents. Eighty-seven percent of the company's global budget for time spent classifying misinformation is earmarked for the United States, while only 13% goes to the rest of the world, even though North American users make up only 10% of the social network's daily active users, according to a document outlining Facebook's allocation of resources.
Andy Stone, a Facebook spokesperson, said the numbers were incomplete and did not include the company’s third-party fact-checking partners, most of whom are located outside of the United States.
This unbalanced focus on the United States has had consequences in a number of countries besides India. Company documents showed that Facebook had installed measures to demote misinformation during Myanmar's November election, including disinformation shared by the Myanmar military junta.
The company rolled back the measures after the election, despite research showing they reduced views of inflammatory posts by 25.1% and photos with misinformation by 48.5%. Three months later, the military carried out a violent coup in the country. Facebook said that after the coup, it implemented a special policy to suppress praise and support for violence in the country, and then banned the Myanmar military from Facebook and Instagram.
In Sri Lanka, people were able to automatically add hundreds of thousands of users to Facebook groups, exposing them to hateful content and inciting violence. In Ethiopia, a nationalist youth militia successfully coordinated calls for violence on Facebook and posted other inflammatory material.
Facebook has invested heavily in technology to detect hate speech in various languages, including Hindi and Bengali, two of the most widely used languages, Stone said. He added that Facebook has halved the number of hate speech people see around the world this year.
"Hate speech against marginalized groups, including Muslims, is on the rise in India and around the world," Mr. Stone said. "So we are improving enforcement and are committed to updating our policies as hate speech evolves online."
In India, "there is certainly a question about resources" for Facebook, but the answer is not "just to spend more money on the problem," said Katie Harbath, who spent 10 years at Facebook as a director of public policy and worked directly on securing India's national elections. Facebook, she said, needs to find a solution that can be applied to countries around the world.
Facebook employees have run various tests and conducted field studies in India for several years. That work intensified ahead of India's 2019 national elections; at the end of January that year, a handful of Facebook employees traveled to the country to meet with colleagues and speak with dozens of local Facebook users.
According to a note made after the trip, one of the main demands from users in India was that Facebook “take action against the kinds of misinformation linked to real harm, especially politics and tensions between religious groups.”
Ten days after the researcher opened the fake account to study disinformation, a suicide bombing in the disputed Kashmir border region sparked a wave of violence and a surge of accusations, disinformation and conspiracy theories between Indian and Pakistani nationals.
After the attack, anti-Pakistan content began to circulate in the Facebook-recommended groups the researcher had joined. Many of the groups, she noted, had tens of thousands of users. Another Facebook report, released in December 2019, found that Indian Facebook users tended to join large groups, with the country's median group size at 140,000 members.
Graphic messages, including a meme showing the beheading of a Pakistani national and corpses wrapped in white sheets on the ground, circulated among the groups she joined.
When the researcher shared her case study with colleagues, they commented on the posted report that they were concerned about misinformation ahead of India's upcoming elections.
Two months later, after the national elections began in India, Facebook implemented a series of measures to stem the flow of disinformation and hate speech in the country, according to an internal document titled Indian Election Case Study.
The case study painted an optimistic picture of Facebook's efforts, including adding more fact-checking partners – the third-party network of outlets that Facebook works with to outsource fact-checking – and increasing the amount of misinformation it removed. It also noted how Facebook had created a "political whitelist to limit public relations risk," essentially a list of politicians who were given a special exemption from fact-checking.
The study did not note the huge problem the company faced with bots in India, nor issues such as voter suppression. During the election, Facebook saw a spike in bots – or fake accounts – linked to various political groups, as well as efforts to spread misinformation that could have affected people's understanding of the voting process.
In a separate report produced after the election, Facebook found that more than 40% of the top views, or impressions, in the Indian state of West Bengal were "fake/inauthentic." One inauthentic account had racked up more than 30 million impressions.
A report released in March 2021 showed that many of the issues raised in the 2019 election persisted.
In the internal document, titled Adversarial Harmful Networks: India Case Study, Facebook researchers wrote that there were groups and pages “filled with inflammatory and deceptive anti-Muslim content” on Facebook.
The report said there had been a number of dehumanizing posts comparing Muslims to "pigs" and "dogs," and misinformation claiming that the Quran, Islam's holy book, calls on men to rape female members of their families.
Much of the material circulated in Facebook groups promoting Rashtriya Swayamsevak Sangh, a right-wing nationalist Indian group closely linked to India's ruling Bharatiya Janata Party, or BJP. The groups published Facebook posts calling for the expulsion of Muslim populations from India and promoting a Muslim population control law.
Facebook knew that such harmful posts were proliferating on its platform, the report said, and it needed to improve its "classifiers" – automated systems that detect and remove posts containing violent and inciting language. Facebook was also reluctant to designate the RSS as a dangerous organization because of "political sensitivities" that could affect the social network's operations in the country.
Of India's 22 officially recognized languages, Facebook said it had trained its AI systems in five. (It said it had human reviewers for some others.) But in Hindi and Bengali, it still did not have enough data to adequately police content, and much of the content targeting Muslims "is never flagged or actioned," according to the Facebook report.
Five months ago, Facebook was still struggling to effectively suppress hate speech against Muslims. Another company report detailed efforts by Bajrang Dal, an extremist group linked to the BJP, to post articles containing anti-Muslim narratives on the platform.
Facebook is considering designating the group as a dangerous organization because it "incites religious violence" on the platform, according to the document. But it has not yet done so.
"Join the group and help lead the group; increase the number of group members, friends," said a post seeking recruits on Facebook to spread Bajrang Dal's messages. "Fight for truth and justice until the unjust are destroyed."
Ryan Mac, Cecilia Kang and Mike Isaac contributed reporting.