Software testing is important; you already know that. But how do you share its value with the rest of your company? In this blog, Joel Montvelisky, Chief Solution Architect at PractiTest, gives us some practical tips and approaches for communicating the value of testing work.
Is there value in software testing? Your answer may depend on the team you’re on. If you’re a project manager or a developer, you might think, “No, testing is actually really expensive and time-consuming. Can’t I just ship features already?” But if you’re a tester, you’re likely to say, “Of course there’s value in testing! But my team seems to be the only one who knows it.”
Not long ago, I asked a group of testers whether they felt their organizations value their work. Over 80% of them answered no. With such an overwhelming majority feeling this way, I wondered what led them to this conclusion. How did they perceive “being valued”? And how was this “value” being defined?
The problem is that value isn’t necessarily an objective measure.
Sure, there are things like ROI and attribution models that can determine the monetary value of a certain activity, but when it comes to bigger questions like, “Should we invest in QA?” value is often a matter of personal perception and interpretation.
It’s interesting to see the difference between how testers and non-testers value software testing work. Some non-testers see this work as hunting for worthless bugs that will never be fixed while wasting valuable time on useless tests. Testers, meanwhile, see the value in gathering and presenting information about the product to help stakeholders make good decisions. Bugs are a by-product and tests are a necessity, even when a specific test seems to have a low probability of unearthing a major catastrophe.
As you can see, there is often a vast value gap between teams, and in my experience this is a direct result of a lack of communication between them.
Interpretation is no less important than the data itself
In my experience, all good communication meets a few important criteria. When working with stakeholders, I like to follow the SMART principles:
Simple. Try to convey your messages using one-liners instead of paragraphs. Whenever possible, use graphs instead of words. Assume your reader has little time and a limited attention span.
Measurable. Numbers and statistics speak louder than opinions and descriptions. This doesn’t mean that your opinions are worthless; sometimes, with experience, you simply know the answer based on your gut feeling, but supporting this feeling with hard data and evidence will make it more credible.
Actionable. Data analysis is most effective when it suggests a solution instead of only pointing out flaws. In many cases, you, as a tester, are the best source of knowledge. Humility is always important, but if you have a solution, don’t be afraid to speak up.
Repeatable. When you provide information, you will almost certainly be asked what the previous status of the issue was. The most common question you will hear is, “Is this a regression?” This means that your checks and tests should be repeatable, and you should strive to keep historical information whenever possible (see the sketch after this list).
Timely. Provide information while it is still relevant, when your stakeholders can still make good decisions based on it. If you say the right thing at the wrong time, it will have little or no impact. For example, if you find an issue that will take weeks to solve when you are only days from the release date, your team will either have to ignore the issue or delay the release. Keep your release date in mind so that any issues can be addressed before it’s go time.
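To make the “Repeatable” point a bit more concrete, here is a minimal sketch of what keeping historical information for a check might look like. It is only an illustration, not a real framework: the check name, the history file, and the two-second threshold are all hypothetical, and in practice your test management tool or CI system would usually store this history for you.

```python
import json
import time
from pathlib import Path

HISTORY_FILE = Path("login_check_history.json")  # hypothetical history store


def check_login_under_two_seconds() -> dict:
    """Run the same check every time and return a comparable result record."""
    start = time.perf_counter()
    time.sleep(0.1)  # placeholder for the real action under test, e.g. an API login call
    elapsed = time.perf_counter() - start
    return {
        "check": "login_under_two_seconds",
        "passed": elapsed < 2.0,
        "elapsed_seconds": round(elapsed, 2),
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%S"),
    }


def record_and_compare(result: dict) -> None:
    """Append the result to the history file and flag a possible regression."""
    history = json.loads(HISTORY_FILE.read_text()) if HISTORY_FILE.exists() else []
    previous = [r for r in history if r["check"] == result["check"]]
    if previous and previous[-1]["passed"] and not result["passed"]:
        print("Possible regression: this check passed on its previous run.")
    history.append(result)
    HISTORY_FILE.write_text(json.dumps(history, indent=2))


if __name__ == "__main__":
    record_and_compare(check_login_under_two_seconds())
```

Because every run is recorded in the same format, answering “Is this a regression?” becomes a matter of looking at the previous entry rather than relying on memory.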