Published: Wed, May 16, 2018

Facebook disabled 1.3B fake accounts in 6 months

Facebook released its Community Standards Enforcement Preliminary Report on Tuesday, providing a look at the social network's methods for tracking content that violates its standards, how it responds to those violations, and how much content the company has recently removed.

The numbers are striking given that Facebook claims 2.2 billion monthly active users in total.

Facebook said it removed 2.5 million pieces of content deemed unacceptable hate speech during the first three months of this year, up from 1.6 million during the previous quarter.

In the majority of cases, Facebook's automated systems detected and flagged violating content before users even had a chance to report it.

The response to extreme content on Facebook is particularly important given that the platform has come under intense scrutiny amid reports of governments and private organizations using it for disinformation campaigns and propaganda. Hate speech was the only category in which Facebook's AI flagged content first less than 86 percent of the time; there, it flagged posts before users did just 38 percent of the time.

Guy Rosen, VP of Product Management, said most of the action Facebook takes against bad content involves fake accounts and spam. He said he hoped the public would read the report to see how the company is working to curb a wide range of harmful activity.

Facebook also released statistics that quantified how pervasive fake accounts have become on its influential service, despite a long-standing policy requiring people to set up accounts under their real-life identities.

Facebook's detection technology "still doesn't work that well" for hate speech and needs to be backed up by the company's human reviewers, Mr Rosen said.

It also took action on 837 million pieces of content for spam, 21 million for adult nudity or sexual activity and 1.9 million for promoting terrorism. "While not always ideal, this combination helps us find and flag potentially violating content at scale before many people see or report it," the company said.

"If Mark Zuckerberg truly recognises the "seriousness" of these issues as they say they do, we would expect that he would want to appear", he said.

Facebook uses computer algorithms and content moderators to catch problematic posts before they can attract views.

Over the six months from October through March, Facebook disabled almost 1.3 billion fake accounts - and that doesn't even count all the times the company blocked bogus profiles before they could be set up.

In the United Kingdom, Facebook this week again resisted a request from British lawmakers to testify as part of their investigation into Cambridge Analytica, a political consultancy that improperly accessed personal information about 87 million of the social site's users.
