GUNS

We Can’t Keep ISIS And Mass Shootings Off Facebook Live

It took Facebook nearly half a day to remove a live terrorist video

Jun 17, 2016 at 3:38 PM ET

At 8:52 pm on Monday, Larossi Abballa, a French jihadist, began broadcasting a live video on Facebook shortly after murdering a couple, both of whom worked for the police, in a suburb of Paris.

Since Facebook began allowing all users to stream live video last December, the platform has provided a new channel for asinine science experiments and mask-wearing moms to reach millions of viewers. But under the current model, it’s almost as easy for a live video of a murder to go viral as it is for a video promoting gun control. That’s especially dangerous since Facebook Live is the perfect medium for a narcissistic, rage-fueled killer who wants to share their message with a wide audience.

Abballa’s video depicted the frightening aftermath of the double murder. “I don’t know what I’m going to do with the boy,” he said in French, referring to the couple’s 3-year-old child, who sat behind him, visibly terrified. In the video, Abballa proclaimed his dedication to ISIS, listed several journalists, police officers, and public figures whom he wanted killed, and said the ongoing UEFA Euro 2016 soccer championship would be “a graveyard.” The 13-minute video, along with several photos of the two victims, remained public on Abballa’s Facebook account (under the name Mohamed Ali) for about 11 hours, until Facebook suspended the account, according to David Thomson, a French journalist for Radio France Internationale who live-tweeted about Abballa’s posts for several hours.

“Streaming murders on Facebook Live is an evolution of snuff films into the world of terrorism,” said Reid Meloy, a forensic psychologist and consultant for the FBI. “It is absolutely important that sites like Facebook and Periscope figure out a way that individuals cannot livestream these acts of violence. Those of us involved in threat assessment have been very concerned that the next step after the Vester Flanagan case in Virginia [when a former television reporter killed two of his former coworkers during a live interview] is that we see more use of livestreaming during domestic acts of violence and that seems to have just about arrived.”

Meloy believes that Facebook Live is “particularly appealing to lone actors” seeking notoriety. In the past, shooters had less certain means of making their manifestos public. Now, they can broadcast their final testaments live, unfiltered, to a large audience. “We know that narcissism pervades these cases and oftentimes individuals who are engaging in criminal behavior will, in a sense, create a sacred or positive veneer to what they’re doing. For example, pledging to ISIS as a way to elevate their criminal violence to a more noble cause in their eyes. That also tends to seed their sense of grandiosity, their sense of importance, that they’re engaging in something that’s much larger than just criminal violence.”

Studies show that media coverage of mass shootings likely causes copycat attacks. Last July, researchers at Arizona State University and Northeastern Illinois University published a study in PLOS ONE that applied a mathematical contagion model—usually used to track the spread of disease—to shootings and killings, and determined that one mass killing increased the chance of another happening within 13 days, with about 20 to 30 percent of shootings possibly being incited by previous attacks. Shootings that did not receive national news coverage did not show any effect. So what would happen if shooters themselves could reach a national or worldwide audience?
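The mechanism the researchers describe can be sketched as a self-exciting process, in which each event temporarily raises the probability of another before the effect decays. The toy simulation below is purely illustrative (the baseline rate and excitation values are invented, and this is not the study’s actual model or code); it only borrows the roughly 13-day contagion window the study reported.

```python
import numpy as np

# Toy sketch of a self-exciting (contagion) process: each event temporarily
# raises the daily event rate, and the boost decays over ~13 days.
# BASELINE_RATE and EXCITATION are hypothetical values, not from the study.

rng = np.random.default_rng(0)

BASELINE_RATE = 0.05   # hypothetical background events per day
EXCITATION = 0.25      # hypothetical rate boost contributed by each event
DECAY_DAYS = 13.0      # contagion window reported in the PLOS ONE study
DAYS = 365

events = []
for day in range(DAYS):
    # Current rate = baseline + exponentially decayed boosts from past events
    rate = BASELINE_RATE + sum(
        EXCITATION * np.exp(-(day - t) / DECAY_DAYS) for t in events
    )
    # Crude daily Bernoulli approximation of the point process
    if rng.random() < min(rate, 1.0):
        events.append(day)

print(f"{len(events)} simulated events; day gaps:",
      np.diff(events) if len(events) > 1 else "n/a")
```

Running the simulation tends to produce clusters of events separated by quiet stretches, which is the qualitative signature the study looked for in real data.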

“I really, really hope it doesn’t become en vogue to do that,” said Sherry Towers, the study’s lead researcher and a research professor at Arizona State University. “I think that just increases the public’s attention in it tenfold, if not more, and suddenly we’ll be wanting to watch the video and suddenly it’s not that you’re just hearing about these tragic events, you’re actually almost participating in it. Without any checks and balances from places like Facebook, without removing these types of videos quickly, I think that would be a really horrible development.”

Streaming violent acts live may not be ‘en vogue,’ but such broadcasts are happening more frequently. Last April, an Ohio teenager broadcast a live video of her friend’s rape on Periscope, and days later two teenagers in Bordeaux, France, filmed themselves assaulting a drunk man on the same Twitter-owned app. In May, a woman used Periscope to film herself as she jumped in front of a train to her death.

With its recent addition of suicide prevention tools, Facebook has made an effort to help people who share suicidal posts, but the company may need to upgrade its methods for responding to offensive content.


“We are working closely with the French authorities as they deal with this terrible crime,” Facebook said in a statement emailed to Vocativ. (According to Facebook’s guidelines, the company works with law enforcement agencies, disclosing account records once a subpoena is issued. Facebook has an expedited process for emergency requests.) “Terrorists and acts of terrorism have no place on Facebook. Whenever terrorist content is reported, we remove it as quickly as possible. We treat takedown requests by law enforcement with the highest urgency.”

A Facebook spokesperson told Vocativ that the company relies heavily on users to report content that violates its terms of service, and that flagged content is sent to a team for review and removed if it is found to violate those terms.

“We do understand and recognize that there are unique challenges when it comes to content and safety for Live videos,” Facebook said in an emailed statement, and a spokesperson said that Facebook Live has a dedicated team of moderators.

But even with the highest urgency, it took Facebook nearly half a day to remove Abballa’s account, along with the images of his victims and his video message, which left plenty of time for the Islamic State’s Amaq media agency to save the video and publish it on its own website.

There are existing algorithms that can mine videos for gunshots, or “things like beheadings or ISIS-type propaganda,” said Kenny Daniel, chief technology officer of Algorithmia, which builds algorithms that can detect explicit content, among many other things. These algorithms use artificial neural networks that try to mimic the way humans process images. By feeding the systems millions of images flagged as “gun” or “beheading,” these programs find ways to recognize those things on their own. It’s not perfect, Daniel admits. But it’s enough to make a “significant dent” in the kind of content companies want to suppress. He believes we could be a couple of years away from algorithms that accurately flag acts of violence or terrorism and alert the relevant authorities. For now, human discretion is still necessary. Otherwise, you’d risk censoring every video that includes a police officer’s holstered gun or a Halloween prop.
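As a rough illustration of the kind of pipeline Daniel describes, the sketch below samples frames from a stream, scores each one with a classifier, and escalates high-scoring frames to human review. The classifier here is a stub standing in for a trained neural network, and the names and threshold are hypothetical, not Algorithmia’s or Facebook’s actual system.

```python
from dataclasses import dataclass

# Minimal sketch of an automated-flagging pipeline: sample frames from a
# live stream, score each with an image classifier, and escalate anything
# above a confidence threshold to human reviewers. All names and values
# here are hypothetical.

FLAG_THRESHOLD = 0.8  # hypothetical confidence above which a human must look

@dataclass
class Frame:
    timestamp: float
    pixels: bytes  # raw image data in a real system

def violence_score(frame: Frame) -> float:
    """Stub standing in for a trained neural network's confidence that
    the frame contains violent content (a gun, a beheading, etc.)."""
    return 0.0  # a real model would return a learned probability

def moderate_stream(frames: list[Frame]) -> list[Frame]:
    """Return the frames that should be escalated to human review."""
    return [f for f in frames if violence_score(f) >= FLAG_THRESHOLD]

if __name__ == "__main__":
    # Sample one frame every 5 seconds from the first minute of a stream
    sampled = [Frame(timestamp=t, pixels=b"") for t in range(0, 60, 5)]
    flagged = moderate_stream(sampled)
    print(f"{len(flagged)} of {len(sampled)} frames escalated for review")
```

The threshold is the crux of the trade-off Daniel raises: set it too low and the holstered gun or Halloween prop gets censored; set it too high and a real attack slips through until a human catches it.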

“That’s where humans come in—to recognize the context and understand the overall image and calibrate the level of violence you want to detect,” said Joshua Buxbaum, co-founder of WebPurify, an online video moderation service that employs professional human scanners to surveil videos 24 hours a day for a range of clients, including children’s sites, gaming sites, and dating sites. “For example, for a majority of our clients, it’s OK if there’s a boxing match and someone is punching someone else, but fighting in a non-violent sport, like baseball, they might reject that.”
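Buxbaum’s point about context can be pictured as a per-client policy layer sitting on top of the automated detector. The sketch below is purely illustrative; the client names, labels, and rules are invented and do not reflect WebPurify’s actual system.

```python
# Hypothetical per-client moderation rules, illustrating that the same
# detected content can be acceptable for one client and not another.

CLIENT_POLICIES = {
    "boxing_fan_site": {"sport_fighting_allowed": True},
    "childrens_site":  {"sport_fighting_allowed": False},
}

def should_reject(client: str, detected_label: str) -> bool:
    """Apply a client-specific rule to a label produced by an automated
    detector; ambiguous cases would still go to a human reviewer."""
    policy = CLIENT_POLICIES.get(client, {})
    if detected_label == "sport_fighting":
        return not policy.get("sport_fighting_allowed", False)
    # Default: treat any other violence label as a violation.
    return True

print(should_reject("boxing_fan_site", "sport_fighting"))  # False: allowed
print(should_reject("childrens_site", "sport_fighting"))   # True: rejected
```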

Facebook, Twitter, and numerous other tech enterprises depend on companies like WebPurify to moderate content, but Buxbaum says that for many Silicon Valley businesses, moderation is an afterthought. “In general, clients build these platforms without considering moderation. There’s a whole idea: ‘Let’s build something big. Let’s go viral. Let’s get everybody to use it.’ But they don’t think about what happens when everybody uses it. How do we protect the users once there’s millions of videos coming in?”

Leading tech companies will eventually figure out the right balance of manpower and coding required to keep users safe. Maybe it will happen before livestreams from mass killers become mainstream.