TikTok to automatically remove content that violates its policies

July 9 (Reuters) – Short-video sharing app TikTok said on Friday it would use more automation to remove videos that violate its community guidelines from its platform.

Currently, videos uploaded to the platform pass through automated tools that identify and flag potential guideline breaches, which are then reviewed by a member of the safety team. If a violation is identified, the video is removed and the user is notified, TikTok said.

The ByteDance-owned company added that over the next few weeks it will begin automatically removing certain types of content that violate its policies on child safety, adult nudity and sexual activity, violent and graphic content, and illegal activities and regulated goods.

This will be in addition to removals confirmed by the safety team.

The company said the change would help its safety team focus more on highly contextual and nuanced areas, such as bullying and harassment, misinformation and hateful behavior.

TikTok added that it will send an in-app warning upon a first violation. For repeated violations, the user will be notified and the account may be permanently removed.

The changes come as social media networks, including Facebook (FB.O) and TikTok, have come under fire for amplifying hate speech and disinformation on their platforms globally.

Reporting by Tiyashi Datta in Bangalore; Editing by Krishna Chandra Eluri
