Many games studios and developers are aware of the toxicity and harassment happening in their games and have put what’s known as “reactive moderation” measures in place as a response. Usually, it works like this: a player who experiences or witnesses abuse files an incident report, and a moderator reviews that report after the fact before deciding what action to take.
This process is slow, expensive, and inefficient.
Proactive moderation is all about looking for signs of toxicity as it happens.
Rather than relying on players to send incident reports, proactive voice moderation notices and flags bad behavior and automatically captures the key data to escalate to moderators, enabling them to respond faster and more comprehensively to any unfolding toxicity.
Proactive moderation takes the onus off the player, makes moderators more efficient, and is cost-effective.
Built on advanced machine learning technology and designed with player safety and privacy in mind, ToxMod triages voice chat to flag bad behavior, analyzes the nuances of each conversation to determine toxicity, and enables moderators to quickly respond to each incident by supplying relevant and accurate context.
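To make that triage → analysis → escalation flow concrete, here is a minimal sketch in Python of how such a pipeline could be structured. Everything in it, the VoiceSnippet and Escalation types, the keyword lists, and the severity labels, is hypothetical and purely illustrative; it does not reflect ToxMod's actual models or API, which rely on machine learning rather than word lists.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

# Hypothetical keyword lists standing in for trained models; a real system
# would use learned classifiers over audio and transcripts, not word lists.
TRIAGE_TERMS = {"idiot", "trash", "shut up"}
SEVERE_TERMS = {"kill yourself", "go die"}


@dataclass
class VoiceSnippet:
    """A short transcribed slice of a voice-chat session."""
    session_id: str
    speaker_id: str
    text: str
    timestamp: datetime


@dataclass
class Escalation:
    """What a moderator would receive: the flagged snippet plus context."""
    session_id: str
    speaker_id: str
    severity: str
    flagged_text: str
    context: List[str] = field(default_factory=list)


def triage(snippet: VoiceSnippet) -> bool:
    """Stage 1: cheap first pass that decides whether a snippet needs a closer look."""
    lowered = snippet.text.lower()
    return any(term in lowered for term in TRIAGE_TERMS | SEVERE_TERMS)


def analyze(snippet: VoiceSnippet, history: List[VoiceSnippet]) -> Optional[Escalation]:
    """Stage 2: weigh the flagged snippet against recent conversation, assign a
    severity, and package the surrounding context for a human moderator."""
    lowered = snippet.text.lower()
    severity = "high" if any(t in lowered for t in SEVERE_TERMS) else "low"
    context = [f"{s.speaker_id}: {s.text}" for s in history[-5:]]
    return Escalation(snippet.session_id, snippet.speaker_id, severity,
                      snippet.text, context)


def moderate(stream: List[VoiceSnippet]) -> List[Escalation]:
    """Stage 3: run the pipeline over a session and collect escalations for moderators."""
    escalations: List[Escalation] = []
    history: List[VoiceSnippet] = []
    for snippet in stream:
        if triage(snippet):
            result = analyze(snippet, history)
            if result:
                escalations.append(result)
        history.append(snippet)
    return escalations
```

The design point the sketch tries to capture is the layering: a lightweight first pass keeps the volume manageable, a deeper second pass adds conversational context, and moderators only ever see pre-packaged incidents rather than raw player reports.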