Announcing GameBench SDK for Unity and Game Performance Monitoring (GPM)
Clients of GameBench services, including both game makers and device makers, are already familiar with our performance rating system. They use our simple, color-coded badges on a regular basis, often monthly or quarterly, to help them visualize key data and make vital decisions across development, QA and marketing.
In fact, our badges have become so important to so many decisions, from development all the way through to marketing, that it’s about time we revisited what the badges mean and how they’re evolving.
GameBench badges are simple visualizations which reveal whether a gaming experience happened the way it was intended.
A great thing about badges is that they take a gamer’s perspective rather than an engineer’s. They help our clients to assess whether an experience met certain healthy thresholds without worrying prematurely about the causes of any problems and how readily fixable they are.
Every gaming experience is the combination of many different variables, including multiple products working together in harmony (or disharmony), so starting with questions of causality isn’t always helpful or conducive to good prioritization.
For example, cloud gaming can easily be affected by at least four different products: which device you use, which game you choose, which streaming service you subscribe to and which WiFi or cellular network you connect through. Different engineering teams in this chain might prioritize issues that they can directly fix, and deprioritize those perceived as being outside of their domain. They might also deprioritize issues where the range of uncontrolled variables means a cause can’t immediately be pinned down. Our badges deliberately push against this way of thinking.
Yes, we keep a log of all key variables and products involved in a particular test, because this context is vital and a different context could result in a different badge. However, the first step should be to look at whether the end result did or didn’t work well. This then highlights the issues most likely to cause a gamer to abandon an experience, even if these issues had complex causes or required multiple companies to cooperate towards a fix. This sort of common language and set of standards has been missing from the gaming industry.
Although GameBench badges summarise objective metrics, they also contain an element of subjectivity in the form of the thresholds they represent. In other words, how high or low does a particular metric need to be in order to be considered “good”?
Given this element of subjectivity, our approach is to always make our thresholds transparent and always be open to discussing them and refining them. What matters most is that they continue to serve gamers’ interests, even while mobile gaming continually changes.
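To make the idea of transparent thresholds concrete, here is a minimal sketch of how metrics might map to a color-coded badge. The metric names, threshold values and three-tier scheme below are purely illustrative assumptions for this example, not GameBench’s actual thresholds or rating logic:

```python
# Hypothetical thresholds for illustration only -- real badge thresholds
# are context-specific and are refined over time, as described above.
GREEN_FPS, GREEN_STABILITY = 55, 0.90    # near-target frame rate, few drops
ORANGE_FPS, ORANGE_STABILITY = 28, 0.75  # playable, but noticeably imperfect

def assign_badge(median_fps: float, fps_stability: float) -> str:
    """Map raw session metrics to a color-coded badge.

    fps_stability is assumed here to be the fraction of frames delivered
    within the target frame interval (0.0 to 1.0).
    """
    if median_fps >= GREEN_FPS and fps_stability >= GREEN_STABILITY:
        return "green"
    if median_fps >= ORANGE_FPS and fps_stability >= ORANGE_STABILITY:
        return "orange"
    return "red"

# e.g. a session with a median of 58 fps and 95% stable frames
# earns a "green" badge under these illustrative thresholds.
print(assign_badge(58, 0.95))
```

Because the thresholds are plain named constants rather than buried logic, they can be published, discussed and revised without changing the rating code itself, which is the spirit of the transparency described above.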
To determine whether a threshold makes sense, we draw on several sources of data, from aggregated session metrics to direct feedback from gamers and clients.
Our badges are based on thresholds which constantly need to evolve and adapt to gamers’ changing expectations. This is essential, because gaming is progressing rapidly and being delivered in new and exciting ways that are not always directly comparable to what came before.
For us, the goal is not to create and defend permanent thresholds, but to curate and evolve meaningful thresholds over time, while always being transparent about how and why we’re doing this, and continually discussing these topics with our clients.
No matter where your company or your product sits in the gaming ecosystem, GameBench badges can help you get a clear view of how you compare to competitors and where your strongest and weakest points lie.
Once this is done, our Labs service can go further to identify the specific causes of specific issues and help you prioritize and track efforts towards fixing them.