The need for safety in games has always existed, but as games and the communities that surround them have become more complex, that need has evolved. New safety risks have appeared, like avenues for online anonymity, changing surface areas of user-generated content (e.g., voice chat and creator tools), complex social features (e.g., parties and advanced app connectivity), and more. Malicious actors will continuously seek to circumvent protections and exploit the players who come to online spaces.
This means that safety features, like reporting, chat filtering, and warning systems, are essential for every game. These features go a long way toward building player trust and supporting a game's long-term success.
Safety by Design as a Gameplay Practice
Tips for adding safety features into a game during the design phase in order to promote a prosocial, welcoming environment for players.
Once you understand what safety risk looks like for your space (such as where user-generated content appears and which social features you offer), building solutions to address that risk is the next step. However, there are some considerations that any safety team or engineer ought to keep in mind when designing solutions to protect players and promote prosocial interactions. These considerations will influence how your designs take shape and how your players receive your features.
Here are the most important pillars to follow when designing safety features:
1. The number one success metric for any safety team is player trust
Safety features will fail if players do not use them. If players don’t trust your moderation policy, they aren’t going to use the reporting system. Likewise, if they don’t trust the evidence used for enforcement, they won’t give the system a chance. Building player trust in your features means building them so they work as intended and being transparent and communicative about how those features are used. Testing and quality assurance are key principles here.
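To make the testing point concrete, here is a minimal sketch of the kind of automated check that keeps a reporting flow trustworthy. The ReportService class and its fields are hypothetical stand-ins, not any real API; the point being verified is that every submitted report is durably recorded and visibly acknowledged.

```python
# Minimal pytest-style sketch of exercising a hypothetical reporting flow.
# ReportService and its field names are illustrative, not a real API.

import uuid
from dataclasses import dataclass, field


@dataclass
class ReportService:
    """Toy in-memory report store standing in for a real backend."""
    reports: dict = field(default_factory=dict)

    def submit(self, reporter_id: str, target_id: str, reason: str) -> str:
        report_id = str(uuid.uuid4())
        self.reports[report_id] = {
            "reporter": reporter_id,
            "target": target_id,
            "reason": reason,
            "status": "received",
        }
        return report_id


def test_report_is_recorded_and_acknowledged():
    service = ReportService()
    report_id = service.submit("player_123", "player_456", "harassment")

    # A report that silently disappears destroys trust, so assert it is stored...
    assert report_id in service.reports
    # ...and marked as received, the signal the player sees as acknowledgement.
    assert service.reports[report_id]["status"] == "received"
```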
2. It’s easy to say no, hard to say yes
Safety risk can be scary, and deciding how to mitigate, reduce, accept, or eliminate that risk can be daunting. While eliminating risk entirely by shooting down features is tempting, most of the time that isn’t the answer. The biggest challenge for a safety team isn’t finding out what can go wrong; it’s figuring out how to do it right. Safety by design doesn’t mean safety by elimination; it means working with teams to figure out how to mitigate as much risk as possible without compromising the intent of the feature. Avoiding the habit of saying no has the added benefit of making it easier for other teams to come to you.
3. Player reception isn’t always positive, and transparency goes a long way toward improving it
Players don’t necessarily have the same love for safety features, such as player reporting and content filtering, that they do for other gameplay features. This means that communication and transparency about policy are incredibly important for setting expectations with players about how and why safety mitigations are applied.
Safety protections can be viewed as taking something away from the player (e.g., the ability to use vulgar language), so be proactive before and during the release of these features with communication about why they are being implemented. This brings your players along on your safety journey, encouraging inclusive thinking and helping players feel like they are part of the process. As an added benefit, it gives them opportunities to vocalize what matters to them.
4. Metrics and KPIs are hard to define, but critical
One of the key lessons for any safety team is that this field is dynamic and ever-changing. Measuring success post-release is what allows for recalibration and an iterative approach. The quality of reporting and other features directly correlates with the efficiency of moderation practices, and measuring that quality is vital for scalability.
- Metrics, like the accuracy of moderation actions, the ratio of reported content versus escalated content, the accuracy of player reports, etc., make it possible to evaluate where features are doing well and where they are failing (see the sketch after this list).
- Safety essentials, such as reporting and content policies (especially automated ones), need to be constantly evaluated to ensure they adapt as the player base and the world change.
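As a minimal sketch of computing the metrics named above: the record fields ("escalated", "action_correct", "report_valid") are hypothetical labels you might derive from audits of moderator decisions, not any standard schema.

```python
# Sketch: computing escalation ratio, moderation-action accuracy, and
# player-report accuracy from audited report records (fields are assumed).

from typing import Iterable, Mapping


def moderation_metrics(reports: Iterable[Mapping]) -> dict:
    reports = list(reports)
    total = len(reports)
    if total == 0:
        return {"escalation_ratio": 0.0, "action_accuracy": 0.0, "report_accuracy": 0.0}

    escalated = [r for r in reports if r["escalated"]]
    return {
        # Share of reported content that moderators escalated for action.
        "escalation_ratio": len(escalated) / total,
        # Of escalated items, how often the action was judged correct on audit.
        "action_accuracy": (
            sum(r["action_correct"] for r in escalated) / len(escalated)
            if escalated else 0.0
        ),
        # How often player reports flagged genuinely violating content.
        "report_accuracy": sum(r["report_valid"] for r in reports) / total,
    }


# Example: two reports, one escalated and correctly actioned, one invalid.
print(moderation_metrics([
    {"escalated": True, "action_correct": True, "report_valid": True},
    {"escalated": False, "action_correct": False, "report_valid": False},
]))
```

Tracking these numbers over time is what tells you whether a change to reporting or automated filtering actually improved outcomes or just shifted the load onto moderators.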
5. Your content moderation policy will be different depending on your game
Safety teams should not assume that all players understand what is expected of them in the game. A good portion of players will not read and internalize community guidelines, so making sure that the policy is aligned to (and appropriate for) the community will avoid a disconnect between how you enforce policies and what players expect.
For example, the behavioral expectations of a creative building game are different from those of a combat-focused PvP game, which are different from those of a collaborative puzzle game.
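One way to express that difference is to drive the same enforcement code with per-game policy configuration. In this sketch the genres, policy keys, and thresholds are all made up to illustrate the idea, not drawn from any real ruleset.

```python
# Illustrative per-game content policies: same filter code, different
# expectations. All names and thresholds here are hypothetical.

POLICIES = {
    "creative_builder": {
        "allow_competitive_taunts": False,
        "profanity_threshold": 0.2,   # strict: younger, collaborative audience
    },
    "pvp_shooter": {
        "allow_competitive_taunts": True,
        "profanity_threshold": 0.6,   # more tolerance for heated banter
    },
    "co_op_puzzle": {
        "allow_competitive_taunts": False,
        "profanity_threshold": 0.4,
    },
}


def should_filter(profanity_score: float, is_taunt: bool, game: str) -> bool:
    """Apply the active game's policy to an already-scored chat message."""
    policy = POLICIES[game]
    if is_taunt and not policy["allow_competitive_taunts"]:
        return True
    return profanity_score > policy["profanity_threshold"]


# The same message can be fine in one community and filtered in another.
print(should_filter(0.5, is_taunt=False, game="pvp_shooter"))       # False
print(should_filter(0.5, is_taunt=False, game="creative_builder"))  # True
```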
6. Your players care about this too
Player feedback is essential. Give your players opportunities to weigh in on policies and strategies; most players recognize the need for safety features, and many will have opinions on how they should be implemented. The best way to design for your players is to bring them along on the journey.
Now what?
These are some of the core factors that go into making a safe and welcoming game for all. By framing your gameplay design through the lens of these considerations, you set yourself up for success before you even start building. Create communication plans and define shipping metrics at the earliest stages. Once you are empowered with an understanding of what safety by design looks like, the next step is knowing how to actually build a solution.