Video games could soon feature invasive AI chatbots that drop into your conversations to check that everybody is playing nicely.
A tech company announced that it is building bots designed to police online communities by automatically detecting whether somebody is out of line.
The artificial intelligence – nicknamed “Ally” – could then report players to moderators if it thinks they are misbehaving.
Spirit AI, the company behind the bot, has recently been talking up the new invention, which was featured online by the MIT Technology Review.
Company officials say they’re working with undisclosed developers to roll out the tool in online environments soon.
Ally can apparently use “ambient eavesdropping” to monitor communication, then spring into action if something triggers its internal systems.
Ally is reportedly sophisticated enough to go beyond text, recognizing in-game behavior such as one player's avatar persistently following another's.
Describing a potential intervention, the article said:
Ally’s bot might say: “I noticed you and Player 42 seem to be strangers. They may have said something inappropriate to you, so I just wanted to check if you’re okay?”
If it misunderstood, Ally would move on and update its algorithm. Otherwise, it could forward the complaint to developers.
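Spirit AI has not published Ally's design, but the flow described above — flag a message, check in with the affected player, then either escalate or treat it as a false positive and learn from it — can be sketched in miniature. Everything here is hypothetical: the trigger list, the function names, and the confirmation step all stand in for undisclosed models.

```python
# Purely illustrative: a toy keyword check stands in for Ally's
# real (undisclosed) detection models.
TRIGGER_WORDS = {"idiot", "loser"}  # hypothetical trigger list


def screen_message(message: str) -> bool:
    """Return True if the message trips the toy trigger check."""
    return any(word in message.lower().split() for word in TRIGGER_WORDS)


def handle_incident(message: str, recipient_confirms: bool) -> str:
    """Mimic the described flow: flag, check in, then escalate or move on."""
    if not screen_message(message):
        return "ignore"
    if recipient_confirms:
        # The player confirms something was wrong: pass it up the chain.
        return "forward_to_moderators"
    # False positive: move on and feed the outcome back into the model.
    return "update_model"


print(handle_incident("you absolute idiot", recipient_confirms=True))
```

In a real system the keyword check would be replaced by trained classifiers, and the check-in would be the conversational prompt quoted above rather than a boolean flag.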
Imre Jele, a Spirit AI employee who formerly worked on RuneScape, said the AI could also play a direct role in punishing players without involving human moderators.
Spirit AI admits that providing proper support to online communities still ultimately comes down to human employees.
But the announcement raises the prospect of online games soon being policed by robot referees.