Facebook in India has been selective in curbing hate speech, misinformation and inflammatory posts – particularly anti-Muslim content – according to leaked documents obtained by The Associated Press, even as its own employees cast doubt over the company’s motivations and interests.
From research as recent as March of this year to company memos that date back to 2019, the internal company documents on India highlight Facebook’s constant struggles in quashing abusive content on its platforms in the world’s biggest democracy and the company’s largest growth market.
Communal and religious tensions in India have a history of boiling over on social media and stoking violence.
The so-called Facebook Papers, leaked by whistleblower Frances Haugen, show that the company has been aware of the problems for years, raising questions over whether it has done enough to address these issues.
Many critics and digital experts say it has failed to do so, especially in cases where members of Prime Minister Narendra Modi’s ruling Bharatiya Janata Party (BJP) are involved.
Across the world, Facebook has become increasingly important in politics, and India is no different.
Modi has been credited with leveraging the platform to his party’s advantage during elections, and reporting from The Wall Street Journal last year cast doubt over whether Facebook was selectively enforcing its policies on hate speech to avoid blowback from the BJP.
Modi and Facebook chairman and CEO Mark Zuckerberg have exuded bonhomie, memorialised by a 2015 image of the two hugging at Facebook headquarters.
The leaked documents include a trove of internal company reports on hate speech and misinformation in India. In some cases, much of it was intensified by the platform’s own “recommended” feature and algorithms.
But they also include company staffers’ concerns over the mishandling of these issues and their discontent over the viral “malcontent” on the platform.
According to the documents, Facebook saw India as one of the most “at risk countries” in the world and identified both Hindi and Bengali languages as priorities for “automation on violating hostile speech”. Yet, Facebook did not have enough local language moderators or content-flagging in place to stop misinformation that at times led to real-world violence.
In a statement to the AP, Facebook said it has “invested significantly in technology to find hate speech in various languages, including Hindi and Bengali” which has resulted in a “reduced amount of hate speech that people see by half” in 2021.
“Hate speech against marginalised groups, including Muslims, is on the rise globally. So we are improving enforcement and are committed to updating our policies as hate speech evolves online,” a company spokesperson said.
ANTI-MUSLIM PROPAGANDA
Other research files on misinformation in India highlight just how massive a problem it is for the platform.
In January 2019, a month before the test user experiment, another assessment raised similar alarms about misleading content. In a presentation circulated to employees, the findings concluded that Facebook’s misinformation tags weren’t clear enough for users, underscoring that it needed to do more to stem hate speech and fake news.
Users told the researchers that “clearly labelling information would make their lives easier”.
Again, it was noted that the platform did not have enough local language fact-checkers, which meant a lot of content went unverified.
Alongside misinformation, the leaked documents reveal another problem plaguing Facebook in India: anti-Muslim propaganda, especially by hardline Hindu supremacist groups.