Published: Sun, November 22, 2020

Facebook Details Amount Of Hate Speech On Its Platform

Facebook has revealed its first-ever report on the prevalence of hate speech on its platform.

As part of its latest community standards enforcement report, Facebook added a new metric charting the number of hate speech posts on the platform.

Facebook's VP of Integrity, Guy Rosen, shared the figures during a call with reporters on Thursday. "Prevalence is like an air quality test to measure pollution", he said. The company reported that hate speech accounted for 0.10% to 0.11% of content views during the quarter, or roughly 10 to 11 views of hate speech for every 10,000 views of content. Though that does not sound like a lot, at Facebook's scale it is. Between its AI systems and its human content moderators, Facebook says it detects and removes 95% of hate content before anyone sees it; AI scans both new posts and reported posts, and removes or flags content automatically.
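For illustration, a prevalence metric of this kind is simply the share of sampled content views that landed on violating content. The sketch below is a hypothetical reading of that definition in Python; the function name and figures are assumptions for illustration, not Facebook's actual methodology.

```python
# Illustrative sketch of a "prevalence" metric, in the spirit of Rosen's
# air-quality analogy. The function and numbers here are hypothetical;
# Facebook has not published its internal measurement code.

def prevalence(violating_views: int, total_views: int) -> float:
    """Fraction of sampled content views that landed on violating content."""
    if total_views == 0:
        return 0.0
    return violating_views / total_views

# Hypothetical sample: 10,500 hate-speech views out of 10 million content
# views gives about 0.105%, i.e. roughly 10-11 violating views per 10,000.
rate = prevalence(violating_views=10_500, total_views=10_000_000)
print(f"Prevalence: {rate:.4%}")  # Prevalence: 0.1050%
```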

Facebook said it took action on 22.1 million pieces of hate speech content during the third quarter of 2020. Alongside rolling out warnings, the company also removed nearly 265,000 pieces of content to enforce its rules.

Facebook has announced that it labelled 180 million pieces of misinformation related to the U.S. election on its platform.

The report comes as Facebook's content moderators protest being required to return to its offices during the pandemic. In an open letter, the workers said only those with doctors' notes are excused from coming into the office, and called on Facebook to offer hazard pay and make its contractors full-time staff. The moderators, based in the U.S. and Europe, said the company was risking their lives by forcing them to work from its offices despite the lockdowns imposed in their countries.

"Facebook needs us", the workers wrote in Wednesday's letter. "Moderators who secure a doctors' note about a personal COVID risk have been excused from attending in person.[1] Moderators with vulnerable relatives, who might die were they to contract COVID from us, have not".

The moderators were called back, the letter suggested, because the AI wasn't up to the job. The letter also stated that Facebook does not provide mental health services to its moderators, who have to view countless pieces of harmful content, from child abuse to uncensored violence, every day.

The moderators also said that Facebook's AI is unable to detect all content in breach of its policies, and warned, "They may never get there". "The lesson is clear", they wrote.

"Workers have asked Facebook leadership, and the leadership of your outsourcing firms like Accenture and CPL, to take urgent steps to protect us and value our work".

Facebook, which has 1.82 billion daily users globally, has drawn flak in the past for its handling of hate speech on the platform in India, which is among its biggest markets.

"Our proactive detection rates for violating content are up from quarter two across most policies, due to improvements in AI and expanding our detection technologies to more languages". But Facebook offers users the ability to choose from over a 100 languages, and has more than 70 percent of its users based in Asia Pacific and what it calls the "rest of world".

In October, Facebook said it was updating its hate speech policy to ban content that denies or distorts the Holocaust, a turnaround from public comments Facebook's Chief Executive Mark Zuckerberg had made about what should be allowed.

Facebook reported taking action on 12.4 million pieces of child nudity and sexual exploitation content, up from 9.5 million in the previous quarter.

It also emerged that an ex-Facebook employee believed the company was unable to tackle misinformation.
