Product quality is the top priority for every software tester/QA/test ninja, etc.
How do you evaluate quality and make sure the whole team delivers the highest possible level on a project? That's a hard nut to crack, and it depends on various factors: the project itself, the test mission, the team's size and skill, the product's complexity, and many, many more. So, as you can see, context matters, and in many cases the understanding of quality can differ.
As a result, the quality evaluation process may bring only the illusion of "quality" to the project.
Here is the list of these misconceptions. Let's dig into each deadly quality misconception one by one.
Having additional testers on a project is always a good thing, no matter their experience, because you get another fresh pair of eyes on your product.
But are your testers:
When an intelligent person with the right skills for an investigative job does unfulfilling, unchallenging, and repetitive activities, you may lose a great tester, because he or she will be bored to death. Nobody wants to do monotonous tasks. That's one of the reasons we need automation.
The product is fully documented, from the code through its data to how it should be tested. There is nothing to be afraid of… right? But… is the documentation up to date?
Why brag about documentation if it does not serve your development team at all? Also, are you sure that every tiny bit of knowledge about the product ends up documented? How many test ideas were never written down, for one reason or another?
Test cases should be your sanity checklist: a way for you to check product functionality and look for regression issues. So, is a higher number of test cases a good thing? Test case quantity should not be your concern (it is just another meaningless metric); we should focus instead on answering some questions about those test cases.
The quantity of test cases on its own does not say much about overall product coverage; it only tells you that "we have a large number of somethings that could possibly verify whether our product is working." It does not give you the important information: whether the product has been covered properly with checks, or how long their execution takes. So try to rely not on their number, but on the value they deliver to the product.
Some companies do not have any testers exploring or investigating the product, because they have automated their test case checking. If your test cases are repetitive and could be replaced by an artificial agent, go with it and automate them all. You will replace manual checking with automation and save a lot of time. But what about testing your software for new bugs that are not covered by your automation? Or what if the logic behind your checks is flawed (the mental model of the system behind your test cases can be flawed)? Do they give you the value you are looking for?
Automation checks for regressions and nothing more than what it is coded to check. It won't find anything new (that's why you need exploration of the product). And why is that important? Because you cannot automate the user experience, and what's more, your end user will be another human being.
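To make that concrete, here is a minimal sketch (in Python with pytest; the `apply_discount` function and its bug are invented for illustration) of how an automated check stays green while only re-verifying the one path it was coded for:

```python
# Hypothetical function with a subtle bug: it happily
# accepts negative prices and returns negative totals.
def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    return price * (1 - percent / 100)

# This automated regression check passes forever, because it only
# re-verifies the single path it was coded for.
def test_apply_discount_happy_path():
    assert apply_discount(100.0, 50.0) == 50.0
```

Run it with `pytest` and it will stay green. Nothing in that suite will ever notice that `apply_discount(-50, 20)` returns `-40`; it takes a human asking "what if the price is negative?" to find that, and that question is exploration, not automation.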
Somehow a lot of development teams promise 100% test coverage to their clients as a way of assuring the good quality of their product. OK, but what about:
The most important questions are: are you sure your code coverage tests are written properly and really check what they are supposed to? Do you have some way to check the quality of the tests themselves? "Who watches the watchmen?"
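Here is a minimal sketch (in Python; `divide` is an invented example) of why the coverage number alone can be meaningless. A coverage tool such as coverage.py will report 100% for this pair, yet the test asserts nothing and can never fail:

```python
def divide(a: float, b: float) -> float:
    """Divide a by b; crashes when b == 0, and no test will notice."""
    return a / b

# Every line of divide() gets executed, so line coverage is 100%.
# But there is no assertion, so this test cannot fail:
# full coverage, zero verification.
def test_divide_runs_the_code():
    divide(10, 2)
```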
Do you, really?
A clean bug list does not mean the software has no issues or exploits.
In software development, a bug-free product state does not exist; there are just areas of your software that no one has looked into yet.
Your user stories, no matter how good they are, will never cover all product flows. Most of the time they cover only the happy/positive paths of the client's requirements and do not check the additional paths that new system requirements create.
Remember that besides the formal, written requirements, your software also has hidden ones. They come from different places: you can search for them in the design, the code implementation, conversations with the client or developers, integrations with other systems, etc.
So the next time someone assures you that there are no bugs or risks in your project, treat that claim with healthy suspicion.
Related to the previous point: the test pass rate also does not prove that the product is bug-free.
The green dot in your test suite run summary does not mean that "everything works". It means "we have no additional information that something is not working". And that's a crucial difference.
But let's presume that your test pass rate is 95%. So what now? Are we going to ship/deploy it?
Does this percentage give the people in charge (managers/stakeholders) enough information to make this important decision? What if the pass rate were 94%? Would that stop the release?
The better approach would be to check what has failed in the remaining 5% of the checks. What if there is a critical or blocking issue in an important product feature? We have 95%, but the main feature is not working. Does this information change your mind about your further actions?
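A minimal sketch (in Python, with invented result data) of why the percentage alone misleads and the failing 5% deserve a look:

```python
# Hypothetical results of a 100-check suite: 95 passed, 5 failed.
results = [
    {"name": f"check_{i}", "passed": True, "severity": "minor"}
    for i in range(95)
]
results += [
    {"name": "check_login",    "passed": False, "severity": "critical"},
    {"name": "check_tooltip",  "passed": False, "severity": "minor"},
    {"name": "check_footer",   "passed": False, "severity": "minor"},
    {"name": "check_sorting",  "passed": False, "severity": "minor"},
    {"name": "check_spelling", "passed": False, "severity": "minor"},
]

pass_rate = sum(r["passed"] for r in results) / len(results)
print(f"Pass rate: {pass_rate:.0%}")  # Pass rate: 95%

# The headline number looks great, but the failures tell another
# story: the login, i.e. the main feature, is broken.
blockers = [r["name"] for r in results
            if not r["passed"] and r["severity"] == "critical"]
print("Release blockers:", blockers)  # ['check_login']
```

The same 95% can mean "ship it" or "stop everything", depending entirely on what sits in the failing 5%.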
I think these 7 statements should be translated like this:

Belief: additional testers always mean better quality. Reality: bored, underused testers bring little value.
Belief: full documentation means safety. Reality: documentation helps only if it is up to date and actually serves the team.
Belief: more test cases mean better coverage. Reality: the count alone says nothing about what is actually checked.
Belief: automation replaces testing. Reality: automation only re-checks what it was coded for.
Belief: 100% test coverage means quality. Reality: coverage shows what was executed, not what was verified.
Belief: an empty bug list means a bug-free product. Reality: it means no one has looked into those areas yet.
Belief: a 95% pass rate means ship it. Reality: the failing 5% may hide the release blocker.
In the end, we shouldn't blindly follow these metrics or misconceptions of "proper" quality. A tester/QA should always dig deeper and never follow, without questioning, conceptions or metrics that could do more harm than good.
But I would love to hear your opinion: do you agree or disagree? And, as always…
Have fun.
Patryk Oleksyk, Test Lead and IT consultant. Patryk is the Company's first tester, an agile testing enthusiast who finds fun and passion in exploratory testing and checker automation. He tries to be an advocate for clients and for testing. In his free time he plays indie video games or can be found working out.