If you’ve been in software for a minute, you’ve seen teams ship bugs. Big, breaking, even painfully obvious bugs sometimes go out the door, risking your reputation and revenue. You’d be forgiven for being confused about how this could happen. I mean, look at all the folks who have touched the product in the last two weeks! And of course, all the testing. But maybe it wasn’t enough.
Now, here comes the big bad decision. More testing. And why not? Clearly there wasn’t enough testing. Maybe.
The trap is set, baited, and ready to grab us by the ankle.
Working in QA, it is often a struggle to communicate the value our activities generate. How do we attach a dollar figure to “finding all the bugs” that might exist? Well, you might say, “Look at all the bugs we find!” But how do you calculate the value of finding these bugs? And we all know that testing can’t give assurances. At least not with a reasonable testing budget.
The hard truth is that the processes and activities we call testing only produce value when we ship a fix. If we don’t understand this reality, we won’t focus on solving the right problems. We must reduce the time it takes to find, isolate, report, fix, verify, and deliver bug fixes.
Let’s be clear. Everything leading up to delivering the fix is plain waste. Defects, and the time spent on them, don’t create value. It is for this same reason that quality inspections in manufacturing are waste. The discovery of this kind of waste is what led to the Lean practice of building quality into the process of production.
Well, ok. But it isn’t like we can make a pattern or jig for software. Still, we can test more efficiently with automation. I mean, if I can write some UI scripts that do what a tester does, then I have 100% efficiency, right?
How much did it cost to create the automation suite? How often will we change that script? Can we keep up with the development team? How often are the results incorrect? How much has it reduced the mean time to defect detection? And, most importantly, how much does it reduce the time between commit and delivery or deployment (lead time)?
There will be some gains over manual testing, for sure. But soon a bug will ship and there will be a call for, “More! More! More!” Sadly, it won’t be a rebel yell. 🎶
Again, the trap is set but now it is even bigger and robed in cash.
The problem is this: No matter how we go about testing (inspecting) a thing, we can’t give an honest assurance of quality. We can manage and reduce risk, sure. But quality and cost control start at the source. If we want to get closer to making assurances by speaking more accurately about present risks, we need to take responsibility and influence the production process itself.
How do we start and what exactly should we try to influence?
There are three key areas we can influence: culture, project management practices, and technological practices. Together, these form our capabilities. Where to start depends on your context and competencies.
Culture
By far, culture is the most powerful element we can use to influence quality. Culture is people, shared values, conventions, and social practices. When we work in a culture that does not value learning, transparency, focus, generosity, and courage, the practices necessary for continual improvement won’t be effective – if they happen at all. To build quality into the process, we need to align our values and act together toward this common goal.
The tough part is that we can’t control culture. That’s a good thing, but it means we must work hard to influence it. It means we must take full responsibility for the things we can control. And it means we must model out the change we want to see.
Project Management Practices
Good management practices allow for rapid iteration, transparency, and technological practices that drive quality.
The processes, rituals, and artifacts we produce while creating our product matter. Whether we adopt Extreme Programming, Scrum, or Lean is less important than our ability to align these frameworks with our culture and our desired outcome. We can focus solely on activities that do the most work for us with minimal effort. We might start with only a regular retrospective or a sprint demo to get things started.
Here again, we can’t control the project management practices of whole teams or departments. So, we must learn to be effective trusted influencers and advisors. We must take full responsibility for what we can control. And we must model out the change we want to see.
Technological Practices
The way we approach writing software can make or break our profitability, the value of our QA activities, and the overall quality of our product. While this is true, the development process doesn’t live in a vacuum. When we don’t align our culture and project management practices with sound technical practices, we run the risk of throwing the baby out with the bathwater, blaming the wrong things for failure, and limiting our potential.
So, what practices predict success? And what is success?
What we want is software that is stable and flexible. We want to easily add features without changing existing code. We want to fix bugs and be confident we haven’t created new defects. And we want to have reasonable certainty that the code written does what we think it ought to do before it gets integrated into the mainline.
Sounds like a tall order. It certainly isn’t a cakewalk. But it is approachable. The great news is that this is a big industry with lots of smart people finding solutions to these problems.
1. High Unit Test Coverage: Make a Jig
By the time we get to end-to-end testing of a feature, we are often indirectly testing dozens or even hundreds of methods, any of which could be defective. To make things worse, our end-to-end tests can rarely isolate the issue to the problematic method(s).
Unit tests put each method under test with an extremely high degree of isolation. And if we really want to build quality into the process of producing code, we can adopt Test-Driven Development.
These tests tell us very quickly if our new code has created issues in old code. These tests are the contract that specifies the expected behavior of our methods. Wait, it looks like we can build jigs for software!
When we don’t write these tests, we are shifting responsibility down the production line. With each shift, the costs pile up and the testing we do becomes less valuable.
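As a sketch of the jig idea, consider how a few unit tests pin one method in place and specify its contract. The `parse_price` function and its tests below are hypothetical, written in a pytest-style layout where any `test_*` function is picked up by the runner:

```python
# A hypothetical method under test: parses a price string into cents.
def parse_price(text: str) -> int:
    """Convert a price like '$1.50' into an integer number of cents."""
    cleaned = text.strip().lstrip("$")
    dollars, _, cents = cleaned.partition(".")
    # Pad '5' to '50' so '$1.5' means one dollar fifty, not one dollar five.
    return int(dollars) * 100 + (int(cents.ljust(2, "0")) if cents else 0)

# The unit tests are the jig: they hold the method still and check its
# behavior in isolation. If a later change breaks the contract, they
# fail immediately and point at the exact method.
def test_whole_dollars():
    assert parse_price("$3") == 300

def test_dollars_and_cents():
    assert parse_price("$1.50") == 150

def test_single_digit_cents():
    assert parse_price("$1.5") == 150
```

Notice that a failing test here names the broken method directly, which is exactly what an end-to-end test struggles to do.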
Once we have solid unit-level coverage, it is safe to venture into higher-level automation. We know we have a solid foundation beneath our feet. We know we don’t have to cover every case and, in doing so, create a massive, fragile, and costly automation suite. We can target the Ideal Test Automation Pyramid and reap the benefits it brings. For QA Engineers, SDETs, and other QA professionals, this is our holy land, the site of the grail, and the fount from which all good things spring.
We can and should influence the quality of our products. Helping to place the foundation stones of automation is something we can and should do if those stones aren’t already there.
2. Continuous Integration and Delivery (CI/CD)
Code that sits around is waste in the same way the 20 lb. bag of mama’s oats on the bottom shelf that isn’t selling is waste. We must keep our inventory moving. If our code is valuable, it needs to go out the door. And before it can do that, we need to know that it doesn’t break existing code and that it functions well. It can’t do that if it is sitting on my machine or in a feature branch for 2 weeks collecting dust!
These practices allow us to automate conditional integration of our branches into the mainline. This lets us define what must happen before an integration and get out of the way when it is ready. As QA professionals, we should be encouraging our teams to deliver builds and source code as soon as a story is dev done. Or sooner. Or pair program! Helping to set up a robust CI/CD system would also go a long way.
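As an illustrative sketch (not a drop-in configuration), a minimal CI pipeline can express the "conditional integration" idea: the unit suite is the gate, and the build step is only reached when it passes. The job names and `make` targets below are hypothetical; the shape loosely follows a GitHub Actions workflow:

```yaml
# Hypothetical pipeline: runs on every push and pull request.
name: ci
on: [push, pull_request]
jobs:
  test-and-build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run unit tests   # the conditional gate: nothing integrates if this fails
        run: make test
      - name: Build artifact   # only reached when the tests pass
        run: make build
```

The point is less the specific tool and more the policy it encodes: define up front what must be true before integration, then let the machine enforce it.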
3. Loosely Coupled Software Architecture
If our software is rigid and fragile, no amount of testing will save the project. Testing can only ever describe the problems. We must build quality into the process! We can do this by checking our design against the SOLID Principles and The Clean Architecture during the cycles within test-driven development. We can even use static analysis tools to check for code smells, then integrate that analysis into our CI/CD pipeline.
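To make the coupling point concrete, here is a minimal sketch of one SOLID idea, the Dependency Inversion Principle: the business logic depends on an abstraction, so a unit test can exercise it without a real database. All class and method names here are illustrative, not taken from any particular codebase:

```python
from abc import ABC, abstractmethod

# Abstraction the business logic depends on (the "D" in SOLID).
class OrderStore(ABC):
    @abstractmethod
    def save(self, order_id: str, total: int) -> None: ...

# High-level policy: knows nothing about databases or frameworks.
class Checkout:
    def __init__(self, store: OrderStore) -> None:
        self.store = store

    def place_order(self, order_id: str, total: int) -> int:
        if total <= 0:
            raise ValueError("total must be positive")
        self.store.save(order_id, total)
        return total

# In a unit test we swap in a fake store -- no database required.
class FakeStore(OrderStore):
    def __init__(self) -> None:
        self.saved: dict[str, int] = {}

    def save(self, order_id: str, total: int) -> None:
        self.saved[order_id] = total

fake = FakeStore()
checkout = Checkout(fake)
checkout.place_order("A-1", 999)
```

Because `Checkout` never names a concrete store, it stays stable when persistence details change, which is precisely the flexibility the testing pyramid depends on.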
As QA professionals, we need to intimately understand how to engineer and architect software. This is the source! And it is here that we can partner with our teams to work together toward our common goal of stable and flexible software.
Finally, we can put software engineers and QA professionals side-by-side to help each other put all of this into practice as we produce the code. Momma always said two sets of eyes are better than one!
As a pleasant side-effect, we then know the code intimately. We then know where it is strong and where it is weak. We can help Black-Box Testers focus on high-risk areas and slim down time spent on regression testing.
The process of creating a complex software product is Software Engineering. The process of increasing the quality of that software involves highly calculated influence of technological capabilities, behaviors, attitudes, and values. This is Quality Assurance Engineering. When we don’t see QA as a comprehensive problem with highly intertwined dependencies, we limit our value and the potential of our products.
Learning how to influence culture, project management practices, and technical practices requires us to dive deep. We must gain new abilities, show success using them, and support all positive movement toward our goal.
We must take full responsibility for what we can control. And we must model out the change we want to see.
I hope to see you at GameBench Open where we’ll take this topic deeper still.