Minerva AI issued 90,000 warnings after monitoring in-game chat.

A new AI built to combat toxicity in online gaming has banned 20,000 Counter-Strike: Global Offensive players within its first six weeks, solely by analyzing messages in the game’s text chat.

The AI is called Minerva, and it’s built by a team at online gaming platform FACEIT—which organised 2018’s CS:GO London Major—in collaboration with Google Cloud and Jigsaw, a Google tech incubator. Minerva started examining CS:GO chat messages in late August, and in the first month and a half marked 7,000,000 messages as toxic, issued 90,000 warnings, and banned 20,000 players.

The AI, trained through machine learning, issued a warning for verbal abuse whenever it judged a message to be toxic, and also flagged spam messages. Within a few seconds of a match finishing, Minerva notified the offending player of either a warning or a ban, and punishments grew harsher for repeat offenders.
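FACEIT hasn’t published Minerva’s internals, but the workflow described—classify each chat message, notify offenders shortly after the match ends, and escalate penalties for repeat offenses—maps onto a simple moderation pipeline. The sketch below is purely illustrative: the keyword classifier stub, the penalty tiers, and all function and variable names are assumptions, not Minerva’s actual code.

```python
# Hypothetical sketch of an escalating chat-moderation flow like the one
# described above. The classifier, thresholds, and penalty ladder are
# all assumptions; a real system would use a trained ML model.
from dataclasses import dataclass

# Assumed penalty ladder: repeat offenders climb one tier per offense.
PENALTIES = ["warning", "24h chat ban", "7d chat ban", "permanent chat ban"]

@dataclass
class PlayerRecord:
    offenses: int = 0

def is_toxic(message: str) -> bool:
    """Stand-in for the real toxicity classifier (an ML model in Minerva)."""
    # A keyword check keeps this sketch self-contained and runnable.
    return any(w in message.lower() for w in ("idiot", "trash", "uninstall"))

def moderate_match(chat_log: list[tuple[str, str]],
                   records: dict[str, PlayerRecord]) -> list[str]:
    """Run after a match ends; returns notifications for offending players."""
    notices = []
    offenders = {player for player, msg in chat_log if is_toxic(msg)}
    for player in sorted(offenders):
        rec = records.setdefault(player, PlayerRecord())
        tier = min(rec.offenses, len(PENALTIES) - 1)
        rec.offenses += 1  # harsher punishment next time
        notices.append(f"{player}: {PENALTIES[tier]} for verbal abuse")
    return notices

# Example: a second offense by the same player escalates to a temporary ban.
records: dict[str, PlayerRecord] = {}
print(moderate_match([("alice", "you are trash"), ("bob", "gg wp")], records))
print(moderate_match([("alice", "uninstall the game")], records))
```

Keeping offense counts in a persistent record per player is what lets punishments escalate across matches, rather than resetting after every game.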

The number of toxic messages dropped by 20% between August and September while the AI was in use, and the number of unique players sending toxic messages dropped by 8%.

The trial started after “months” of eliminating false positives, and it’s only the first step in rolling out Minerva to online games. “In-game chat detection is only the first and most simplistic of the applications of Minerva and more of a case study that serves as a first step toward our vision for this AI,” FACEIT said in a blog post. “We’re really excited about this foundation as it represents a strong base that will allow us to improve Minerva until we finally detect and address all kinds of abusive behaviors in real-time.”

“In the coming weeks we will announce new systems that will support Minerva in her training.”