Facebook (META 3.51%) has a pretty standard answer to many of the scandals and controversies it has grappled with over the past two to three years: artificial intelligence (AI) and machine learning can help it address problems on its platform at scale. After all, it's not easy to enforce standards and policies when 2.1 billion people log in to one of Facebook's core services every single day. In many ways, scale is Facebook's greatest enemy.

But the bad guys are using robots, too.

They're programmed to destroy us

Facebook published its third Community Standards Enforcement Report this week, covering a wide range of topics from adult content to hate speech to spam to fake accounts. There's a stunning figure in the section on fake accounts: Facebook disabled a whopping 2.2 billion fake accounts in the first quarter. That's not a typo.

To put that in perspective, the number of fake accounts Facebook took down exceeded its familywide daily active users (DAUs) of 2.1 billion and wasn't far from its familywide monthly active users (MAUs) of 2.7 billion. It's also nearly twice the 1.2 billion fake accounts that the social networking giant disabled in the fourth quarter, a significant sequential spike in malicious activity that Facebook attributes to automation.

"For fake accounts, the amount of accounts we took action on increased due to automated attacks by bad actors who attempt to create large volumes of accounts at one time," VP of Integrity Guy Rosen wrote.

It's hard to defeat those evil machines

Of course, Facebook itself relies heavily on automation to detect and disable questionable accounts. "Our detection technology helps us block millions of attempts to create fake accounts every day and detect millions more often within minutes after creation," Facebook writes.

Since these fake accounts are removed so swiftly, they don't have the opportunity to interact on the platform and as such are not included in Facebook's MAU metrics. Some fake accounts are blocked during the registration process, while others are removed after completing registration. Facebook said that 99.7% of fake accounts that made it through registration were detected before being reported by other users. More fake accounts are making it past "initial detection" during the registration process due to the rise of automated attacks, according to Facebook.

In other words, the good robots are beating the bad robots -- for now.

It'd be tragic if those evil robots win

To be clear, there are still quite a few fake accounts out there. It's not realistic to expect Facebook to detect and disable every fake account. The company estimates that roughly 5% of the core Facebook platform's 2.38 billion MAUs are fake, which works out to about 119 million accounts.

User metrics are among the most important numbers investors look at, so Facebook needs to keep combating fraudulent activity on its platform. More importantly, automated accounts on social media also play a large role in election interference, and Facebook is keenly aware of the potential damage after having been made an inadvertent accomplice last time around.