Call of Duty is joining the growing number of online games combating toxicity by listening to in-game voice chat, and it’s using AI to automate the process. Activision announced a partnership with AI outfit Modulate to integrate its proprietary voice moderation tool—ToxMod—into Modern Warfare 2, Warzone 2, and the upcoming Modern Warfare 3.
Activision says ToxMod, which begins beta testing in North American servers today, is able to “identify in real-time and enforce against toxic speech—including hate speech, discriminatory language, harassment and more.”
Modulate describes ToxMod as “the only proactive voice chat moderation solution purpose-built for games.” While the official website lists a few games ToxMod is already being used in (mostly small VR games like Rec Room), Call of Duty’s hundreds of thousands of daily players will likely represent the largest deployment of the tool to date.
Call of Duty’s ToxMod AI will not have free rein to issue player bans. A voice chat moderation Q&A published today specifies that the AI’s only job is to observe and report, not punish.
“Call of Duty’s Voice Chat Moderation system only submits reports about toxic behavior, categorized by its type of behavior and a rated level of severity based on an evolving model,” the answer reads. “Activision determines how it will enforce voice chat moderation violations.”
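Based on that description, each report presumably carries a behavior category and a severity rating, with the enforcement decision left to Activision's human team. A minimal sketch of what such a flag-and-review flow might look like (every name, category, and threshold here is hypothetical, not Modulate's actual API):

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical categories, loosely mirroring the Q&A's description of
# reports "categorized by its type of behavior and a rated level of
# severity." None of these names come from Modulate's real system.
class Category(Enum):
    HATE_SPEECH = "hate_speech"
    DISCRIMINATORY_LANGUAGE = "discriminatory_language"
    HARASSMENT = "harassment"

@dataclass
class VoiceChatReport:
    player_id: str
    category: Category
    severity: float  # 0.0 (mild) to 1.0 (severe), per an "evolving model"

def triage(report: VoiceChatReport, review_threshold: float = 0.5) -> str:
    """The AI only observes and reports; humans decide enforcement."""
    if report.severity >= review_threshold:
        return "queue_for_human_review"
    return "log_only"

report = VoiceChatReport("player_123", Category.HARASSMENT, severity=0.8)
print(triage(report))  # -> queue_for_human_review
```

The key design point, per Activision, is that the model never acts on its own: its output is a report, not a ban.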
So while voice chat complaints against you will, in theory, be judged by a human before any action is taken, ToxMod looks at more than just keywords when flagging potential offenses. Modulate says its tool is unique in its ability to analyze tone and intent in speech to determine what is and isn’t toxic. If you’re curious how that’s achieved, you won’t find a crystal-clear answer, but you will find plenty of impressive-sounding claims (as we’ve come to expect from AI companies).
(Image credit: Modulate)
The company says its language model has put in the hours listening to speech from people with a variety of backgrounds and can accurately distinguish between malice and friendly riffing. Interestingly, Modulate’s ethics policy states ToxMod “does not detect or identify the ethnicity of individual speakers,” but it does “listen to conversational cues to determine how others in the conversation are reacting to the use of [certain] terms.”
Terms like the n-word: “While the n-word is typically considered a vile slur, many players who identify as black or brown have reclaimed it and use it positively within their communities… If someone says the n-word and clearly offends others in the chat, that will be rated much more severely than what appears to be reclaimed usage that is incorporated naturally into a conversation.”
Modulate also offers the example of harmful speech toward kids. “For instance, if we detect a prepubescent speaker in a chat, we might rate certain kinds of offenses more severely due to the risk to the child,” the site reads.
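The contextual weighting Modulate describes could, in spirit, look something like this. A purely illustrative sketch: the company describes weighing listener reactions and the presence of a child, but its actual model is an opaque neural system, not a rule table, and all multipliers below are invented for illustration:

```python
def contextual_severity(base_severity: float,
                        listeners_offended: bool,
                        minor_present: bool) -> float:
    """Adjust a base severity score using conversational context.

    Illustrative only: the real ToxMod weighting is not public.
    """
    severity = base_severity
    if listeners_offended:
        # Speech that clearly offends others in chat rates more severely
        severity *= 1.5
    else:
        # Apparently reclaimed or friendly usage rates lower
        severity *= 0.5
    if minor_present:
        # Offenses near a prepubescent speaker rate higher due to risk
        severity *= 1.25
    return min(severity, 1.0)
```

Under this toy scheme, the same utterance scores differently depending on who hears it and how they react, which is the behavior Modulate claims for the real system.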
In recent months, ToxMod’s flagging categories have gotten even more granular. In June, Modulate introduced a “violent radicalization” category to its voice chat moderation that can flag “terms and phrases relating to white supremacist groups, radicalization, and extremism—in real-time.”
The list of what ToxMod claims to be detecting here includes:
- Promotion or sharing ideology
- Recruitment or convincing others to join a group or movement
- Targeted grooming or convincing vulnerable individuals (i.e., children and teens) to join a group or movement
- Planning violent actions or actively planning to commit physical violence
“Using research from groups like ADL, studies like the one conducted by NYU, current thought leadership, and conversations with folks in the gaming industry,” says the company, “we’ve developed the category to identify signals that have a high correlation with extremist movements, even if the language itself isn’t violent. (For example, ‘let’s take this to Discord’ could be innocent, or it could be a recruiting tactic.)”
Modulate is clearly setting its goals high, though for Call of Duty’s purposes, it sounds like ToxMod will simply be the middleman between potential offenders and a human moderation team. While the machinations of AI decision-making are inherently vague, Activision says its enforcement will ultimately abide by Call of Duty’s official Code of Conduct. That’s not dissimilar to how Riot and Blizzard have been handling voice chat moderation in Valorant and Overwatch 2, though Riot has also been gathering voice chat data for over a year to develop its own AI language model.
ToxMod will roll out worldwide in Call of Duty at the launch of Modern Warfare 3 on November 10, starting with English-only moderation and expanding to more languages at a later date.