It's no secret that Twitter (NYSE:TWTR) has a troll problem.
For the most part, the problem historically manifested itself in nonexistent user growth or the occasional storyline about a celebrity getting bullied off the service. Those should have been enough for Twitter to actually do something about it, but sadly its efforts to date have been half-hearted and ineffective. Sure, Twitter would attempt to reassure users or suspend a handful of accounts, but it has never really done much to address the systemic weaknesses that allow trolls to thrive on its site. The company's reactive approach to hate speech is precisely the problem.
In fact, Twitter's reputation as a haven for hate speech led several potential suitors to drop out of its highly publicized auction last month, for which it found no buyers. I stand by my unscientific assessment that Twitter is 70% depraved YouTube comments. Twitter has now announced a handful of new features that are intended to combat online abuse and harassment. Will it be more of the same, or does Twitter mean business this time?
Probably more of the same
The company acknowledges a "growing trend of people taking advantage of that openness and using Twitter to be abusive to others." It also notes that it faces a unique challenge since tweets are public and occur in real time.
The first part of the initiative is a more granular way to mute notifications, filtering them based on keywords or phrases. Historically, muting has applied only to accounts, but now users have greater control over which types of notifications they will or won't see. In August, this exact feature was rumored to be in the pipeline. Note that this does not actually prevent the abuse from occurring; you just won't be notified about it. Ironically, muting these types of notifications runs counter to the next element of Twitter's strategy: reporting.
Twitter is also trying to improve its reporting mechanisms to flag abusive content, giving users "a more direct way" to report behavior that violates Twitter's hateful conduct policy. The last aspect of the plan is to improve training of Twitter's support teams so that they can better enforce the policies.
At face value, these changes are good in that Twitter is trying to address a major problem with its platform. The question is whether Twitter will put serious effort behind these moves, and whether they will translate into actual improvements.
In fairness, there are considerable technical challenges with managing a platform of this size in a scalable way, which is partially why it's taken Twitter so long to propose solutions. It also doesn't want to go overboard and enact anything that could be perceived as censoring its users.
For much of its history, Twitter has defined itself by what it is as much as what it's not -- namely, Facebook (NASDAQ:FB). The two competing social networks made diverging choices on a number of strategic questions, and in doing so each established an identity as what the other was not.
Facebook is walled off and only visible to people you have voluntarily befriended on the platform; Twitter is public for everyone to see. Facebook posts are algorithmically curated; Twitter is mostly unfiltered. Facebook enforces a real-name policy that creates accountability; Twitter doesn't care what you call yourself. Facebook blocks pornographic material; Twitter allows it.
Now that this dynamic has been established, it's very hard to go back. Twitter users completely freaked out when Twitter started exploring the idea of algorithmic curation. Twitter enthusiasts see the service as a sort of anti-Facebook. Any move that threatens to change the experience earns swift backlash from a very vocal minority, even if that change has the potential to improve the service or attract new users.
The move Twitter should make but can't
Not only has Twitter earned a reputation for looking the other way while trolls savage other human beings, but it also cannot realistically implement the single most effective method that Facebook has used to combat harassment: real names. That level of accountability is hard to replicate, and its efficacy is hard to overstate. Anecdotally, The Motley Fool switched to using the Facebook Comments plugin as part of our redesigned site, and an unintended effect was a dramatic reduction in the number of trolling comments from anonymous netizens.
There's absolutely no way that Twitter could move toward real names, as such a change would inevitably translate into a mass exodus from the platform. Even Alphabet's (NASDAQ:GOOG) (NASDAQ:GOOGL) attempt to improve YouTube comments by shifting toward real names failed spectacularly, although that might have been because the search giant tied it to the sinking Google+ platform.
Twitter didn't realize it at the time, but it created a platform that is structurally conducive to trolling. Trolls value anonymity (no real names). Hateful messages are typically short, and arguably more effective that way; there is no such thing as a well-argued, well-researched abusive message (140-character limit). It's also extremely easy to target people who do use their real names (asymmetric following and the public nature of tweets).
Unless Twitter addresses the underlying structures that empower its trolls, which would risk alienating regular users in the process, trolls will continue to thrive.