There are two certainties in life: death and Twitter (TWTR) trolls. The microblogging service has struggled with abuse and harassment since the dawn of time. "We suck at dealing with abuse and trolls on the platform and we've sucked at it for years," former CEO Dick Costolo once famously said in an internal memo. "It's no secret and the rest of the world talks about it every day. We lose core user after core user by not addressing simple trolling issues that they face every day."
The company has made some undeniable progress in recent years, including an expanded set of harassment policies that went into effect late last year. Despite that headway, anonymous trolls and Nazis still thrive on the platform, so Twitter is taking a big new step in an effort to smite them.
Being more proactive instead of reactive
Twitter announced today that it has acquired Smyte, a three-year-old start-up founded by a trio of former Facebook, Instagram, and Google engineers: Pete Hunt, Julian Tempelsman, and Josh Yudaken. The co-founders had all worked on spam, anti-abuse, and anti-fraud teams at their respective former companies.
Smyte's technology monitors data feeds and leverages machine learning and artificial intelligence (AI) to flag and label potentially inappropriate content, drawing on a range of data points, behavioral patterns, and other signals. More recently, Smyte had focused mostly on anti-fraud applications, helping e-commerce sites combat fraudsters.
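For readers curious about the mechanics, here is a minimal, purely hypothetical sketch of how a signal-based flagging pass might combine clues like account age and user reports into a risk score. Every signal name, weight, and threshold below is invented for illustration; Smyte's actual system relies on trained machine-learning models over far richer data.

```python
# Toy content-flagging pass: combine a few behavioral signals into a risk score.
# All signals, weights, and thresholds are made-up placeholders, not Smyte's.

from dataclasses import dataclass


@dataclass
class PostSignals:
    account_age_days: int          # newer accounts tend to be riskier
    reports_received: int          # how many users reported this post
    links_per_post: float          # spam often carries many links
    mass_mentions: int             # mass-mentioning strangers is a harassment pattern


def risk_score(s: PostSignals) -> float:
    """Combine the signals into a 0-1 risk score with hand-picked weights."""
    score = 0.0
    if s.account_age_days < 7:
        score += 0.3
    score += min(s.reports_received * 0.15, 0.4)
    score += min(s.links_per_post * 0.1, 0.2)
    score += min(s.mass_mentions * 0.05, 0.1)
    return min(score, 1.0)


def flag_for_review(s: PostSignals, threshold: float = 0.5) -> bool:
    """Flag the post for review when the combined score crosses the threshold."""
    return risk_score(s) >= threshold


if __name__ == "__main__":
    suspicious = PostSignals(account_age_days=2, reports_received=3,
                             links_per_post=2.0, mass_mentions=5)
    print(flag_for_review(suspicious))  # True: several risk signals stack up
```

The point of a system like this is that no single signal condemns a post; it's the combination of weak clues that pushes something over the line into review.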
We're very excited to announce that Smyte is joining forces with @Twitter! https://t.co/nlV0BR49Fj
— Smyte (@HelloSmyte) June 21, 2018
Twitter says the Smyte team will help bolster the company's "proactive approach," in which it hopes to curb harassment "before it impacts anyone's experience." No financial terms were disclosed, but it's been about 18 months since Twitter's last acquisition (Yes, Inc. in December 2016).
This all comes as Twitter has started to right-size its cost structure, delivering GAAP profits for the last two consecutive quarters. That may explain why the company is willing to start opening its wallet again for targeted acquisitions following that year-and-a-half hiatus. Twitter used to acquire start-ups every few months.
Scaling safety
The challenge that many social media platforms have long struggled with is scaling their safety efforts across hundreds of millions -- or in some cases, billions -- of users. The answer, as it often is in Silicon Valley, is AI and algorithms that can automate many core functions.
For example, YouTube said in April that it had removed 8 million inappropriate videos between October and December of last year, 84% of which were initially flagged by AI. Of those flagged by machines, 76% were taken down before receiving a single view. The popular video-sharing site added AI-based flagging in June 2017. YouTube still uses human moderators and reviewers in conjunction with machines, but the AI empowers the humans.
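Here is a rough, hypothetical sketch of that machines-flag-humans-decide pattern. The confidence thresholds and function names are invented for illustration; this is not YouTube's actual pipeline, just the general shape of a triage step that auto-removes the most obvious cases and queues borderline ones for moderators.

```python
# Illustrative triage for machine-flagged content. Thresholds are assumptions.

from collections import deque

AUTO_REMOVE_CONFIDENCE = 0.95   # assumed cutoff for removal before anyone views it
HUMAN_REVIEW_CONFIDENCE = 0.60  # assumed cutoff for routing to human moderators

review_queue: deque = deque()


def triage(video_id: str, model_confidence: float) -> str:
    """Route a machine-flagged video: auto-remove, queue for review, or leave up."""
    if model_confidence >= AUTO_REMOVE_CONFIDENCE:
        return "removed"               # taken down without a human in the loop
    if model_confidence >= HUMAN_REVIEW_CONFIDENCE:
        review_queue.append(video_id)  # a human moderator makes the final call
        return "queued"
    return "no_action"


if __name__ == "__main__":
    print(triage("vid-001", 0.98))  # removed
    print(triage("vid-002", 0.72))  # queued
    print(triage("vid-003", 0.30))  # no_action
    print(list(review_queue))       # ['vid-002']
```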
Facebook started using a similar approach with automated flagging and reporting in late 2016, and now it's Twitter's turn. The task may sound easier for Twitter, since scanning 280 characters shouldn't be too tough, but keep in mind that Twitter has spent years expanding the media formats it supports, which makes flagging offensive content trickier. Still, fighting Nazis and trolls is a worthy cause, and Twitter needs all the help it can acquire.