3 memos flagged 'polarizing' content and hate speech in India, but Facebook said there was no problem


From a “constant barrage of polarizing nationalist content” to “false or inauthentic” messages, from “disinformation” to content “denigrating” minority communities, several red flags about Facebook’s operations in India were raised internally between 2018 and 2020.

However, despite these explicit alerts from staff mandated to perform oversight duties, a 2019 internal review meeting with Chris Cox, then vice president of Facebook, found “a relatively low prevalence of problematic content (hate speech, etc.)” on the platform.

Two reports of hate speech and “problematic content” were presented in January-February 2019, months before the Lok Sabha elections.

A third report, as late as August 2020, admitted that the platform’s AI (artificial intelligence) tools were unable to “identify vernacular languages” and had therefore failed to flag hate speech or problematic content.

Yet the minutes of the meeting with Cox concluded: “A survey tells us that people generally feel safe. Experts tell us that the country is relatively stable.”

These glaring gaps in the response are revealed in documents that are part of the disclosures made to the United States Securities and Exchange Commission (SEC) and provided to the United States Congress in redacted form by the legal counsel of former Facebook employee and whistleblower Frances Haugen.

Frances Haugen, a former Facebook data scientist turned whistleblower, released a series of documents revealing that the social media giant’s products were harming the mental health of teenage girls. (PA)

The redacted versions received by the US Congress have been reviewed by a consortium of global news organizations, including The Indian Express.

Facebook did not respond to questions from The Indian Express about Cox’s meeting and those internal notes.

The review meetings with Cox took place a month before the Election Commission of India announced the seven-phase schedule for the Lok Sabha elections on April 11, 2019.

The meetings with Cox, who left the company in March of that year only to return in June 2020 as Chief Product Officer, nevertheless highlighted that “big problems in sub-regions can be lost at country level”.

The first report, “Adversarial Harmful Networks: India Case Study”, noted that as much as 40% of the top VPV (viewport views) posts sampled in West Bengal were fake or inauthentic.

VPV, or viewport views, is a Facebook metric that measures how often content is actually viewed by users.

The second – an internal report written by an employee in February 2019 – is based on the findings of a test account. A test account is a dummy user with no friends, created by a Facebook employee to better understand the impact of the platform’s various features.

This report notes that in just three weeks, the test user’s news feed had “become an almost constant barrage of polarizing nationalist content, misinformation, violence and gore.”

The test user followed only the content recommended by the platform’s algorithm. The account was created on February 4; it did not “add” any friends, and its news feed was “quite empty”.

According to the report, the “Watch” and “Live” tabs are about the only surfaces that have content when a user is not connected with friends.

“The quality of this content is… not ideal,” the employee’s report said, adding that the algorithm often suggested “a bunch of softcore porn” to the user.

Over the next two weeks, and particularly after the February 14 terror attack in Pulwama, the algorithm began suggesting groups and pages focused primarily on politics and military content. The test user said he/she had “seen more images of deceased people in the past 3 weeks than I have seen in my entire life.”

Facebook told The Indian Express in October that it had invested heavily in technology to find hate speech in various languages, including Hindi and Bengali.

“As a result, we have reduced the amount of hate speech people see by half this year. Today it is down to 0.05%. Hate speech against marginalized groups, including Muslims, is on the rise around the world. So we are improving enforcement and are committed to updating our policies as hate speech evolves online,” a Facebook spokesperson said.

However, the inability of Facebook’s proprietary algorithm and AI tools to flag hate speech and problematic content resurfaced in August 2020, when employees questioned the company’s “investments and plans for India” to prevent hate speech content.

“From the call earlier today, it appears that the AI (artificial intelligence) is not able to identify vernacular languages, so I am wondering how and when we plan to do the same in our country? It is amply clear that what we currently have is not enough,” said another internal memo.

The memos are part of a discussion between Facebook employees and senior executives. Employees asked how Facebook did not have “even basic key word detection set up to catch” potential hate speech.

“I find it inconceivable that we do not have even basic key word detection set up to catch this sort of thing. After all, we cannot be proud as a company if we continue to let such barbarism flourish on our network,” an employee said during the discussion.
The memos reveal that employees also asked how the platform planned to “regain” the trust of colleagues from minority communities, particularly after a senior Facebook executive in India shared, on his personal Facebook profile, a post that many felt “denigrated” Muslims.

