Social Media

Study: New AI Can Seek Out Violent Groups On Twitter

Law enforcement could use this artificial intelligence program to hunt cyberbullies and terrorists

Illustration: Diana Quach
Jul 28, 2016 at 3:50 PM ET

A group of researchers in Spain has developed an algorithm that could identify and help stop violent groups that recruit and attack people on social media.

Since its debut ten years ago, Twitter has become a preferred medium for both terrorism recruitment and mass cyberbullying. Last week, after a barrage of racist comments was tweeted at “Ghostbusters” actor Leslie Jones, Twitter banned the account of one of the main instigators—Milo Yiannopoulos, Breitbart Tech writer and alt-right hero.

Twitter took action after Jones called out the company for allowing harassment and after Twitter CEO and co-founder Jack Dorsey told Jones to direct message him.

The decision to ban Yiannopoulos incited a public debate over the line between censoring hate speech and blocking free speech. But what if platforms and law enforcement agencies didn’t wait for a conversation between a celebrity and a tech tycoon to take action?

That’s what an artificial intelligence (AI) team at the University of Salamanca (USAL) is trying to answer with their latest creation—a program that performs sentiment analysis and monitors relationships between Twitter users. The goal of the algorithm is to identify violent groups. “This system could have been very useful—for example—as a support system to control the violent football fans that caused serious incidents during Euro 2016 in France,” Juan Manuel Corchado, the professor who leads the USAL AI department, told Spanish science publication Plataforma SINC.


Many AI technologies use sentiment analysis, but Corchado says USAL’s program is different because it analyzes “historical data and their evolution.” The algorithm can detect changes in sentiment and physical location, thereby monitoring how group interrelationships progress. “It can establish where a dangerous user is located with reasonable precision, based on what they share on Twitter and how and with whom they are connecting at any time, without the need of geolocating tweets,” Corchado told SINC.
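The paper behind the program is not public, so the details of USAL's approach are unknown, but the core idea of tracking how a user's sentiment evolves over time can be sketched in a few lines. The toy lexicon and example timeline below are invented for illustration; a real system would use far richer sentiment models and multilingual lexicons.

```python
from datetime import date

# Toy sentiment lexicon -- a stand-in for whatever lexicons or models
# the USAL system actually uses (those are not public).
LEXICON = {"attack": -1, "hate": -1, "destroy": -1,
           "love": 1, "peace": 1, "great": 1}

def sentiment(text):
    """Score a tweet as the sum of the lexicon weights of its words."""
    return sum(LEXICON.get(w, 0) for w in text.lower().split())

def sentiment_trend(timeline):
    """Given [(date, tweet), ...] pairs, return chronologically ordered
    per-tweet scores, making a drift toward negative sentiment visible."""
    return [(d, sentiment(t)) for d, t in sorted(timeline)]

# Hypothetical single-user timeline
timeline = [
    (date(2016, 5, 1), "Love the great match today"),
    (date(2016, 6, 1), "They destroy everything, hate them"),
    (date(2016, 6, 10), "Time to attack"),
]
scores = sentiment_trend(timeline)
```

A monitoring system could flag a user whose scores trend from positive to persistently negative, which is the kind of "historical data and their evolution" analysis Corchado describes.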

The USAL team built a hybrid form of machine learning that blends logic-based symbolic AI with neural networks, which learn to “think” from a database of millions of inputs or examples. In the case of social media, a neural network could be trained on, for example, violent tweets from ISIS supporters so that the AI learns to recognize this language.

Corchado even claims the application can distinguish between the leaders and followers of a group, so that law enforcement officers can use it as a tool to influence the leaders. In fact, USAL is already working with Spanish national law enforcement authorities.
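Corchado does not say how the application separates leaders from followers, but a standard approach in social network analysis is centrality: users who are mentioned or retweeted by many others rank as likely leaders. The sketch below uses simple in-degree on an invented mention graph; real systems typically use richer measures such as PageRank.

```python
from collections import defaultdict

def in_degree(mentions):
    """mentions: list of (source, target) edges, meaning 'source
    mentioned target'. Returns how often each user was mentioned."""
    deg = defaultdict(int)
    for src, dst in mentions:
        deg[dst] += 1
    return deg

def rank_leaders(mentions, top=1):
    """Rank users by in-degree; the most-mentioned users are the
    likeliest group leaders under this heuristic."""
    deg = in_degree(mentions)
    return sorted(deg, key=deg.get, reverse=True)[:top]

# Hypothetical mention graph: everyone talks about user "a"
edges = [("b", "a"), ("c", "a"), ("d", "a"), ("a", "b"), ("c", "d")]
leaders = rank_leaders(edges, top=1)
```

Under this heuristic, user "a" surfaces as the group's leader, which is the kind of signal investigators could use to prioritize whom to monitor or influence.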

The application currently works with Arabic, English, French, German, Russian, and Spanish, so it might only be a matter of time before American agencies start using it to hunt down the next martyr of “free speech.”