Gaming

This Robot Prevents Gamers From Bullying Each Other

Gamers say it could help with a major problem, but worry: Will an A.I. know the difference between trash talking and harassment?

May 18, 2017 at 11:42 AM ET

Abusive gamers might soon face a new kind of in-game adversary that could add an extra challenge to gameplay — tone-policing bots.

The London-based startup Spirit AI has developed software that monitors games for harassment and intervenes when it detects bullying. The application, Ally, registers verbal abuse and other concerning behavior, like in-game stalking, or even “touching” in virtual reality games. When Ally detects harassment, the program compares this to past communication between players to see if this is just friendly trash talk.

If the interaction does seem abusive, an Ally-generated character can step in and ask if the player is OK. If the player confirms they’re being harassed, the bot will take action against the bully. The software currently works with all online games and should soon be compatible with virtual reality games.
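The flow the article describes — flag a suspicious message, check it against the players' past communication, check in with the target, then act — can be sketched roughly as follows. This is a hypothetical illustration only: the function and structure names are invented for this sketch, and the keyword check stands in for whatever language model Spirit AI actually uses.

```python
# Hypothetical sketch of an Ally-style moderation flow.
# All names here are illustrative assumptions, not Spirit AI's actual API.

ABUSIVE_TERMS = {"idiot", "loser", "trash"}  # stand-in for a real classifier


def looks_abusive(message: str) -> bool:
    """Crude keyword check standing in for a learned abuse detector."""
    return bool(set(message.lower().split()) & ABUSIVE_TERMS)


def is_friendly_banter(sender: str, recipient: str, history: dict) -> bool:
    """Compare against past communication: frequent mutual chat with no
    prior complaints is treated as friendly trash talk."""
    past = history.get((sender, recipient), {"messages": 0, "complaints": 0})
    return past["messages"] >= 20 and past["complaints"] == 0


def moderate(sender, recipient, message, history, confirm_harassed):
    """Return the bot's action: 'none', 'check_in', or 'sanction'."""
    if not looks_abusive(message):
        return "none"
    if is_friendly_banter(sender, recipient, history):
        return "none"  # likely trash talk between regulars
    # An Ally-style character steps in and asks the player if they are OK.
    if confirm_harassed(recipient):
        return "sanction"  # player confirms harassment; act against the bully
    return "check_in"  # player says they're fine; log the check-in only
```

For example, the same insult would be ignored between two players with a long, complaint-free chat history but would trigger a check-in (and, if confirmed, a sanction) between strangers.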

Spirit AI was founded by Steven Andre, a former IBM executive and gamer who wanted to fuse his love for gaming with the AI he watched his previous employer pioneer with its Watson computing system. His team of AI specialists, engineers, and game developers includes interactive fiction writer Emily Short and Mitu Khandaker-Kokoris, a professor at NYU’s Game Center. Mattie Brice, an NYU professor who advocates for using games for social impact, is a consultant for the Ally software.

“Ultimately, we’re interested in helping developers create safer and more inclusive communities, allowing more people to play their games, no matter their identity,” Khandaker-Kokoris told Vocativ. “We know that companies have a vested interest in making their spaces safer, but this currently either requires huge teams of moderators, or the problem just isn’t dealt with.”

Gaming harassment is a growing concern among anti-bullying advocates. For two decades, “griefers” have been targeting players in multiplayer games, getting kicks out of making others’ gaming lives hell, pranking avatars and vandalizing digital empires. A study published last year in Media & Society showed that online gaming presents toxic environments where sexual harassment leads many women to withdraw from gaming. In 2012, the New York Times reported on a female gamer who withdrew from a six-day $25,000-prize Cross Assault video game tournament after her coach repeatedly sexually harassed her on camera.

On her blog, Not in the Kitchen Anymore, Jenny Haniver posts recordings of some of the hundreds of verbal attacks and threats she has received while playing video games. “I think [Ally] could certainly be a beneficial tool. But it should be treated as just that — a tool, not a solution,” Haniver told Vocativ. “People are ultimately going to be where the true moderation and changes occur.”

She also acknowledged that verbal sparring can be an important element of gaming. “I like the idea of the AI being able to identify playful banter between two friends and distinguish it from actual harassment,” she said. “And since there are millions of people who game online, I think any tool that has the potential to more effectively filter out and identify potential issues to the community management should absolutely be explored.”

Dan Ackerman, an avid gamer, video game journalist, and author of The Tetris Effect, agrees that community management is a major issue for the industry. “Human monitors have fallen woefully behind,” Ackerman said. “We see this on Twitter and Facebook, and online gamers have been dealing with similar problems of griefing and harassment since long before social media was a thing.”

Ackerman suspects, though, that this technology could turn off some gamers who don’t want bots meddling with their experience. “Do players want an algorithm monitoring and policing their interactions?” he said. “It’s one of many, many areas in technology where we’re starting to think more about what level of authority to give AI, and it’s not a question I think we’re going to have a definitive answer for anytime soon.”