Facebook's (META) old motto was to "move fast and break things," which meant prioritizing the evolution of its products and worrying about the consequences later. Unfortunately, Facebook ultimately went too fast and broke lots of things.
Those missteps include the Cambridge Analytica scandal, data breaches regarding linked third-party apps, the exposure of that data on unsecured cloud servers, and severe security flaws in WhatsApp. Fake news campaigns on Facebook also exacerbated conflicts in countries like Myanmar, Thailand, and Cambodia.
CEO Mark Zuckerberg repeatedly apologized for those incidents, but tone-deaf moves like the launch of its Portal smart screens, a plan to unify messaging across Messenger, Instagram, and WhatsApp, and the development of a new cryptocurrency all indicated that Facebook was still "moving fast and breaking things."
That's why it was surprising when Facebook recently made three decisions that could improve the overall health of its platform but throttle its near-term growth.
1. Shielding its platform from political influence
Facebook was widely criticized for failing to stop divisive misinformation campaigns -- many of which were funded by foreign governments -- during the U.S. presidential election in 2016. That failure and the Cambridge Analytica scandal indicated that Facebook was a dangerously potent tool for influencing elections.
Facebook generates nearly all of its revenue from ads, and those ads are sold via an automated platform with little human oversight. As a result, anyone with enough money can flood the platform with ads -- and problematic ads can circulate for a long time before being spotted by human moderators.
That's why Facebook recently announced that it was tightening its authorization process for all ads related to social issues, elections, or politics in the U.S. ahead of the 2020 election. In 2018, Facebook already began requiring those advertisers to verify their accounts with identification and locations and to include clear "paid for by" disclaimers in their ads.
Starting in mid-September, those advertisers must provide "more information about their organization" before the ads are approved -- which could block deceptively named agencies from launching misinformation campaigns.
2. Adding a new setting for facial recognition features
Facial recognition technology makes it easier for Facebook users to tag each other in photos, but it also raises serious privacy concerns. For example, China uses a facial recognition platform that binds users' faces to their "social credit" score -- and users with low ratings already find it tougher to travel, take out loans, or even rent a bike.
Most countries hopefully won't follow China's dystopian lead, but Facebook's massive audience of 2.41 billion monthly active users (MAUs) comprises the biggest database of digital faces in the world. That's a tasty target for hackers, companies, and government agencies, even though Facebook states that it won't sell that data to third parties.
Facebook isn't suspending the feature, but it recently introduced a new setting that will let users turn off its facial recognition features for automated tags. That move could reduce engagement rates between users, but it could also allay concerns about the platform's biometric data-mining practices.
3. Eliminating "like" counts
Two years ago, Facebook's founding president Sean Parker argued that the social network was designed to exploit "a vulnerability in human psychology" with "a social-validation feedback loop." In other words, Facebook was created to be addictive, with "likes" pushing users to post more content.
Facebook's own researchers then admitted that "passively consuming" information on social media can leave people "feeling worse." They cited a study from UC San Diego and Yale University that found that people "who clicked on about four times as many links as the average person, or who liked twice as many posts" reported worse mental health.
Facebook is now addressing those concerns by hiding "like" counts in several markets on Instagram, and it could reportedly extend that strategy to Facebook in the near future. That move could reduce the average time spent on its apps -- but it could make users happier by breaking its social-validation feedback loop.
What does this mean for investors?
Facebook's greatest challenge in recent years has been its slowing revenue growth. Its revenue rose 47% in 2017 and 37% in 2018, and analysts expect just 26% growth this year. That deceleration is mainly attributable to peaking user growth in the U.S. and Canada -- a region that accounted for only 10% of its MAUs last quarter but generated nearly half of its total revenue.
Therefore, throttling political ads in the U.S. -- a massive source of revenue growth leading up to the 2020 election -- is a bold move that sacrifices short-term gains to avoid repeating the mistakes of 2016. Toning down facial recognition features and hiding like counts could reduce engagement between users, but might also make Facebook a less toxic platform.
Facebook's recent fines and legal fees are expected to reduce its earnings by 17% this year. Analysts anticipate a rebound next year, and proactively tightening up its platform could help it avoid future headaches. In other words, Facebook is growing up, and these moves could make it a better tech stock to own over the long term.