Facebook (NASDAQ:FB) knew its algorithms were serving up content that divided users, but did nothing to stop it, The Wall Street Journal reports.

Citing internal documents and people familiar with the effort, the WSJ reports that in 2018 the social media giant began looking into whether its algorithms were promoting divisive content. The internal inquiry was prompted in part by the Cambridge Analytica scandal, in which the now-defunct political consulting firm accessed information on as many as 87 million Facebook users without their consent.

[Image: Facebook thumbs-up and thumbs-down icons in red and blue. Source: Getty Images.]

When presented with the findings, however, Facebook didn't act. Instead, Mark Zuckerberg and other executives at the company ignored the research and prevented employees from applying any of the findings to products, the WSJ reports. Joel Kaplan, Facebook's policy chief, called it "paternalistic" to reduce the discord on the platform. Another reason not to act on the findings: fear of backlash from conservatives who had accused Facebook of political bias.

The staff did come up with ways to make conversations on Facebook more civil, including tweaking algorithms to suggest a broader range of groups for users to engage with and limiting postings on hot-button topics. However, those suggestions would have hurt Facebook's ability to grow and required the tech stock to take a "moral stance" on issues, something it chose not to do. Senior executives also killed efforts to build a system for classifying polarizing content, along with ideas to reduce political clickbait.

"We've learned a lot since 2016 and are not the same company today," a Facebook spokeswoman told the WSJ in response to its report. "We've built a robust integrity team, strengthened our policies and practices to limit harmful content, and used research to understand our platform's impact on society so we continue to improve."