Once upon a time, terrorists used bombs, machetes, and bullets to get their message across. While that's still the case, modern-day terror has a new tool at its disposal, one it has become particularly adept at deploying: social media. This stark reality has come to light in the wake of terror campaigns whose perpetrators pledged allegiance to their chosen causes in posts on social media platforms.
Other insidious forms of communication and objectionable material have flourished in the internet era as well. Hate speech and violent threats have found homes there. Governments and advertisers worldwide are aware of the phenomenon and are increasingly pressuring social-media companies like Facebook, Inc. (META), Alphabet Inc. (GOOGL) (GOOG), Twitter, Inc. (TWTR), and Microsoft Corporation (MSFT) to police undesirable content on their sites.
The sheer volume of content, combined with the complexity of differing local laws and regulations, has created a near-insurmountable task for these sites. However, recent advances in artificial intelligence (AI) are being brought to bear, and they are producing surprisingly effective results.
A monumental task
Facebook revealed that new AI algorithms based on image recognition have been deployed to assist with the Herculean chore. One tool has been developed to scan the site for images and live videos containing terrorist propaganda, including beheadings, and to remove them without the intervention of a human moderator.
A second system has been trained to identify accounts set up by terrorists and to prevent those users from creating additional accounts. A third algorithm is being trained in the language of propaganda to help identify posts related to terror. Once offending content has been identified and removed, the system catalogs it, then continuously scans the site to flag attempts to repost it.
Twitter has been deploying similar tools based on AI for rooting out terrorist content. The company says that these methods flagged 74% of the nearly 377,000 accounts it removed between July and December of 2016.
Tech develops a terror database
This follows an alliance formed late last year by some of the biggest names in tech to create a database of the worst content, to prevent it from being reposted on any of the sites. YouTube, Twitter, and Facebook joined Microsoft in the venture to create unique digital identifiers, or "fingerprints," to use for automatically detecting and removing content that had previously been tagged as terrorist propaganda.
Microsoft developed and deployed similar technology to battle child pornography on the internet. The system was used to detect, report, and remove the images contained in a database.
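The shared-database idea can be sketched in a few lines. This is a minimal illustration, not any company's actual system: real deployments use perceptual hashes (such as Microsoft's PhotoDNA) that survive re-encoding and cropping, whereas the cryptographic hash below only matches byte-identical uploads, and all names and values are hypothetical.

```python
import hashlib

# Hypothetical shared database of fingerprints for content that
# has already been flagged and removed.
flagged_fingerprints = set()

def fingerprint(content: bytes) -> str:
    # Stand-in for a perceptual hash; SHA-256 keeps the sketch
    # self-contained but only catches exact duplicates.
    return hashlib.sha256(content).hexdigest()

def flag(content: bytes) -> None:
    """Record a removed item's fingerprint in the shared database."""
    flagged_fingerprints.add(fingerprint(content))

def should_block(content: bytes) -> bool:
    """Check a new upload against the shared database."""
    return fingerprint(content) in flagged_fingerprints

flag(b"previously removed propaganda image bytes")
print(should_block(b"previously removed propaganda image bytes"))  # True
print(should_block(b"some unrelated upload"))                      # False
```

The design choice that makes this cross-platform is that participants share only the fingerprints, not the underlying images, so each site can block known material without redistributing it.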
Inaction can be costly
Google, the Alphabet subsidiary and owner of YouTube, is a pioneer in AI and recently found another way to put the technology to use. YouTube faced a massive boycott from some of its biggest advertisers after it was revealed that brand advertising had appeared alongside YouTube videos containing racist, homophobic, anti-Semitic, and terrorist content. The company applied new AI techniques to the task, and within weeks achieved a 500% improvement in identifying objectionable content. YouTube revealed that more than half the content it removed over the previous six months for containing terrorist-related material had been identified using AI.
The world is a complicated place, and new technology brings new challenges. The advent of social media brought the world closer together, for better or for worse. Artificial intelligence is still a nascent technology, and while it isn't a panacea, it is being used in a variety of ways that make the world a better place.