Hate Speech Drops Almost 50%, According to Facebook


Stung by media reports about its inability to tackle hate speech, Facebook has now claimed that the prevalence of hate speech on its platform has fallen by nearly 50% over the past three quarters.

The claim came in response to an article published in the Wall Street Journal (WSJ) on Sunday, which stated that Facebook's content moderation efforts were not always successful in removing objectionable content using artificial intelligence (AI).

In response, Guy Rosen, vice president of integrity at Facebook, said the company's technology has had a big impact on reducing the amount of hate speech people see on Facebook.

“According to our latest Community Standards Enforcement report, its prevalence is around 0.05% of content viewed, or roughly five views per 10,000, down almost 50% in the past three quarters,” he added.

“Data pulled from leaked documents is being used to create a narrative that the technology we use to tackle hate speech is inadequate and that we are deliberately distorting our progress. This is not true,” Rosen said.

The WSJ report claimed that internal documents showed that, two years ago, Facebook reduced the time its human reviewers spent on hate speech complaints and made other adjustments that reduced the number of complaints.

“This in turn helped create the impression that Facebook’s AI was more successful in enforcing corporate rules than it actually was,” the report said.

Rosen said in a blog post that focusing only on content removal is not the right way to look at how the company is combating hate speech.

“We have to be sure something is hate speech before we delete it. If something might be hate speech but we are not confident enough that it meets the bar for removal, our technology may reduce the content's distribution or stop recommending groups, pages or people that regularly post content that may violate our policies,” he noted.

Facebook said that when it first started reporting its hate speech metrics, only 23.6% of the content it removed was detected proactively by its systems; most of what it removed was found by users.

“Now that number is over 97 percent. But our proactive rate doesn’t tell us what we’re missing and doesn’t account for the sum of our efforts, including what we’re doing to reduce the distribution of problematic content,” the Facebook executive said.
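For readers comparing the two figures Rosen cites, the short sketch below illustrates, in simplified form, how a prevalence figure (violating views per 10,000 content views) differs from a proactive rate (the share of removed content flagged by automated systems rather than by user reports). The function names and sample numbers are illustrative assumptions for this sketch, not Facebook's internal methodology or data.

```python
# Illustrative sketch only: simplified versions of the two metrics cited in the
# article. The sample numbers are assumptions, not Facebook data.

def prevalence_per_10k(violating_views: int, total_views: int) -> float:
    """Views of violating content per 10,000 content views."""
    return 10_000 * violating_views / total_views

def proactive_rate(proactive_removals: int, total_removals: int) -> float:
    """Share of removed content flagged by automated systems before user reports."""
    return proactive_removals / total_removals

# A prevalence of about 0.05% corresponds to roughly 5 violating views per 10,000.
print(prevalence_per_10k(violating_views=5, total_views=10_000))   # 5.0
# 97 proactive removals out of every 100 removals gives a proactive rate of 0.97.
print(proactive_rate(proactive_removals=97, total_removals=100))   # 0.97
```

As Rosen's point suggests, the two numbers answer different questions: prevalence measures what users actually see, while the proactive rate only describes the share of removals the systems caught first.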


