Facebook “acted” on around 31.7 million pieces of content in August 2021, compliance report says


Facebook proactively “acted” on about 31.7 million pieces of content in 10 categories of violations during the month of August in the country, the social media giant said in its compliance report on Friday.

Facebook’s photo-sharing platform Instagram took proactive action against around 2.2 million pieces of content across nine categories during the same period.

Facebook had “acted” proactively on more than 33.3 million pieces of content in 10 categories of violations from June 16 to July 31 in the country. Instagram took proactive action against around 2.8 million posts across nine categories during the same period.

Facebook said on Friday that it received 904 user reports for Facebook through its Indian grievance mechanism between August 1 and August 31.

“On these inbound reports, Facebook provided tools for users to resolve their issues in 754 cases. These include pre-established channels for reporting content for specific violations, self-remediation flows where they can download their data, ways to troubleshoot hacked account issues, and more,” it added.

Between August 1 and August 31, Instagram received 106 reports through the Indian complaints mechanism.

“Over the years, we have constantly invested in technology, people and processes to keep our users safe online and allow them to express themselves freely on our platform.

“We use a combination of artificial intelligence, reports from our community and reviews by our teams to identify and review content against our policies,” a Facebook spokesperson said.

In accordance with IT rules, the company released its third monthly compliance report for the 31-day period (August 1 – August 31), the spokesperson added.

“This report will contain details of the content that we have proactively removed using our automated tools and details of user complaints received and actions taken,” the spokesperson said.

In its report, Facebook said it actioned around 31.7 million pieces of content across 10 categories in August 2021.

This includes content related to spam (25.9 million), violent and graphic content (2.6 million), adult nudity and sexual activity (2 million), and hate speech (242,000).

Other categories in which content was actioned include bullying and harassment (90,400), suicide and self-harm (677,300), dangerous organisations and individuals: terrorist propaganda (274,200) and dangerous organisations and individuals: organised hate (31,600).

“Actioned” content refers to the number of pieces of content (such as posts, photos, videos or comments) on which action was taken for violating the platform’s standards. Taking action may include removing a piece of content from Facebook or Instagram, or covering photos or videos that may disturb some audiences with a warning.

The proactive rate, which shows the percentage of all content or accounts that Facebook found and reported on using technology before users reported it, ranged in most cases between 80.6% and 100%.

The proactive rate for removal of bullying and harassment related content was 50.9%, as this content is contextual and highly personal in nature. In many cases, people need to report this behaviour to Facebook before it can identify or remove such content.

Under the new IT rules, large digital platforms (with more than 5 million users) are required to publish compliance reports every month, listing details of complaints received and action taken on them. The report must also include the number of specific communication links or pieces of information to which the intermediary has removed or disabled access through proactive monitoring carried out using automated tools.

For Instagram, approximately 2.2 million pieces of content were actioned across nine categories in August 2021. This includes content related to suicide and self-harm (577,000), violent and graphic content (885,700), adult nudity and sexual activity (462,400) and bullying and harassment (270,300).

Other categories in which content was actioned include hate speech (37,200), dangerous organisations and individuals: terrorist propaganda (6,300) and dangerous organisations and individuals: organised hate (2,300).

(Only the title and image of this report may have been reworked by Business Standard staff; the rest of the content is automatically generated from a syndicated feed.)

