Meet Google’s New Robotic Safe Space Enforcers: Conversation AI

By Ian Miles Cheong | 4:49 pm, October 14, 2016

Google is taking on the task of dealing with online harassment, ostensibly to protect a few sensitive but very vocal souls from getting offended by things they read on the Internet. It’s doing so through the creation of a new artificial intelligence called Conversation AI.

Developed by its subsidiary Jigsaw, a creepy-sounding name if you’ve ever seen any of the Saw movies, the new technology allows for the automated detection of mean words and statements on social media, as well as in comments sections and on blogs.

If you’ve ever read a vitriolic post about yourself and wished there was a way to stop yourself from reading it because you’re incapable of walking away from the screen or closing your eyes, then Conversation AI will be a boon to your handicap.

According to a piece in Wired that revealed Conversation AI, the software will first be tested out in the comments sections of The New York Times and on Wikipedia. Conversation AI can perform a variety of actions, including simply flagging problematic comments for moderators to deal with, scolding users who misbehave, and even auto-deleting portions of a post.
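Wired’s description suggests a dispatch on a per-comment abuse score. A minimal sketch of what that might look like, with invented thresholds and action names (the piece only says the software can flag, scold, or auto-delete):

```python
def moderate(score: int) -> str:
    """Pick a moderation action from an abuse score (0-100).

    The thresholds here are hypothetical; Wired's piece only names the
    three possible actions, not how Conversation AI chooses among them.
    """
    if score >= 90:
        return "delete"   # auto-delete the worst offenders
    if score >= 70:
        return "scold"    # warn the user who posted it
    if score >= 40:
        return "flag"     # queue for a human moderator
    return "allow"        # publish untouched
```

A platform adopting the tool would presumably tune those cutoffs to its own tolerance for false positives.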

The software is presently trained on over 17 million comments to detect abusive language, giving every statement a score between 0 and 100, with the latter denoting extreme abuse. The size of its training corpus will only grow over time. For now, the software works with only 92% certainty due to its limited data.
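The interface is simple even if the model behind it isn’t: text in, a 0–100 score out. Here is a toy stand-in, nothing like Jigsaw’s actual model (which is trained on millions of labeled comments); the word list and scaling are invented purely to show the shape of the thing:

```python
# Hypothetical word list -- a real model learns from labeled data
# rather than matching a fixed vocabulary.
ABUSIVE_WORDS = {"idiot", "stupid", "trash", "loser"}

def abuse_score(comment: str) -> int:
    """Return a 0-100 abuse score, 100 denoting extreme abuse."""
    words = comment.lower().split()
    if not words:
        return 0
    hits = sum(1 for w in words if w.strip(".,!?") in ABUSIVE_WORDS)
    # Scale the hit ratio into the 0-100 range Wired describes.
    return min(100, round(100 * hits / len(words) * 4))
```

The 92% figure in Wired’s piece would correspond to how often a model like this agrees with human raters, not to anything in the scoring math itself.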

The launch of the new AI comes in the wake of YouTube’s new “YouTube Heroes” program, which encourages its users to snitch on channels with problematic videos in exchange for social capital.

Google plans to make the software available for free for any platform to use.

On the surface, flagging racist comments will be a blessing for underpaid moderators who spend hours sifting through ugly comments sections on political news websites, but Conversation AI’s potential deployment on social media could suppress free speech.

Given how sites like Reddit and Twitter exhibit an allergy to transparency, it’s scary to think of how the software might one day be used to censor “dangerous” or “problematic” opinions over topics like race, gender and religion — only allowing for politically correct statements to be published and seen.

If you’re deemed hostis publicus, there might not even be a way to inform anyone else of your suppression. Much like in the classic Harlan Ellison sci-fi story, you’ll have no mouth, and you’ll want to scream.

Ian Miles Cheong is a journalist and outspoken game critic. You can reach him through social media at @stillgray on Twitter and on Facebook.