Facebook removes millions of posts as part of first compliance report under IT Rules, 2021

Posted By: Rina Latuperissa

Facebook Inc. on Friday published the first edition of its compliance report in accordance with Rule 4(1)(d) of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.

The report details actions taken against Facebook and Instagram content created by users in India that violates the platforms' norms. Facebook reports two metrics. The first, ‘Content Actioned’, measures the number of pieces of content (such as posts, photos, videos or comments) the platform took action on for going against its community standards; it indicates the scale of enforcement activity.

The second metric, ‘Proactive Rate’, shows the percentage of all content or accounts acted on that the company found and flagged before users reported them. This is an indicator of how effectively Facebook detects violations.

In the first report, covering the period from 15 May to 15 June 2021, Facebook said it took action against content that violated its community standards across 10 categories, which the company calls policy areas.

It took action against 25 million pieces of spam content with a 99.9% proactive rate, and against 2.5 million pieces of violent and graphic content, also with a 99.9% proactive rate.
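For context, the two metrics fit together simply: the proactive rate is the share of actioned content that the platform flagged before any user report. The short sketch below is illustrative only, not Facebook's code; the function name is invented, and the 24,975,000 figure is merely what the spam numbers above imply.

# Illustrative sketch only, not Facebook's code: how a 'Proactive Rate'
# could be computed from the figures the report describes.

def proactive_rate(proactively_flagged: int, total_actioned: int) -> float:
    """Percentage of actioned content the platform found before users reported it."""
    if total_actioned == 0:
        return 0.0
    return 100.0 * proactively_flagged / total_actioned

# Using the spam figures cited above: a 99.9% proactive rate on roughly 25 million
# actioned pieces implies about 24,975,000 were flagged before any user report.
print(f"{proactive_rate(24_975_000, 25_000_000):.1f}%")  # prints 99.9%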

Facebook said it clamped down on 1.8 million pieces of content related to adult nudity and sexual activity, with a 99.6% proactive rate.

Other policy areas in which content was actioned on Facebook include hate speech, dangerous organizations and individuals, organized hate, terrorist propaganda, bullying and harassment, regulated goods such as drugs and firearms, and suicide and self-injury.

On Facebook’s photo-sharing platform Instagram, the largest number of violations was in the suicide and self-injury category, with the platform taking action against more than 699,000 posts at a 99.8% proactive rate. Violent and graphic content was the second most-actioned category, with more than 668,000 posts actioned at a 99.7% proactive rate. Adult nudity and sexual activity was third, with action taken against 490,000 posts, which were either removed or had the violating photos or videos blurred.

Bullying and harassment was another high-violation category, with action taken against 108,000 posts, while 53,000 pieces of content were actioned in the hate speech category.

Facebook clarified that the ‘spam’ metric for Instagram is not yet available and that it is working on it.

“Over the years, we have consistently invested in technology, people, and processes to further our agenda of keeping our users safe and secure online and enabling them to express themselves freely on our platform. We use a combination of artificial intelligence, reports from our community, and review by our teams to identify and review content against our policies. We will continue to add more information and build on these efforts towards transparency as we evolve this report,” said a Facebook spokesperson.

The social media company said the next report will be published on 15 July, containing details of user complaints received and action taken.

Earlier, home-grown social media platform Koo released its compliance report for June 2021, which showed that 5,502 Koos were reported by the community, of which 22.7% (1,253) were removed, while other action was taken against the remaining 4,249.

Google also published its report, stating that it received 27,762 complaints from 1 April to 30 April, of which 26,707, or 96%, were related to copyright.
