7 deadly quality misconceptions
Product quality is the most important thing for every software tester, QA, test ninja, and so on.
How do you evaluate quality and make sure the whole team delivers the highest possible quality in the project? That’s a hard nut to crack, and it depends on many factors: the project itself, the test mission, the team’s size and skill, the product’s complexity, and many, many more. So, as you see, context matters, and in many cases the understanding of quality can differ.
That’s why the quality evaluation process can end up delivering only the illusion of “quality” on the project.
Here’s a list of the misconceptions behind that illusion.
We have a good quality because:
- We have many testers
- We have documentation for everything
- We have 1000+ test cases that cover our product
- We have automated everything
- We have 100% code coverage
- We have a bug-free product
- We have a pass rate above 95%
So let’s dig into every deadly quality misconception.
We have many testers
Having additional testers in a project is always a good thing, no matter their experience, because you get another fresh pair of eyes on your product.
But ask yourself:
- Are your testers testing the product or just checking it?
- Do they follow a test plan or guidelines, or do they randomly maneuver around the product?
- Could your test team members easily be replaced by automation?
- Does your team know what the most important features of the project are, and what their user flows look like?
- Does your team have a test strategy that they follow?
- Are they testing or checking the same thing over and over again (is their work unfulfilling)?
- Is the tester role in the project just a temporary one (how many test experts do you have in the project, and how many testers have been transformed into developers)?
When an intelligent person with the right skills for an investigative job does unfulfilling, unchallenging, and repetitive activities, you may lose a great tester, because they will be bored to death. Nobody wants to do monotonous tasks. That’s one of the reasons we need automation.
We have documentation for everything
The product is fully documented, from the code to its data, all the way to how it should be tested. There is nothing to be afraid of… right? But:
- Is the documentation up to date?
- Is someone responsible for updating it?
- Is the documentation easy to find by new team members?
- Does it contain all the valuable information?
- Does the test documentation include only test cases, or also guidelines and ideas on how to test the product?
- Are the team members using the documentation, or do they gather information in other ways?
- Do we track changes to the documentation? Is there any version control?
Why brag about documentation if it does not serve your development team at all? Also, are you sure that every tiny bit of knowledge about the product ends up documented? How many test ideas were never written down, for one reason or another?
We have 1000+ test cases that cover our product
Test cases should be your sanity checklist: a way for you to check product functionality and look for regression issues. So a higher number of test cases is a good thing, right? Test case quantity should not be your concern (it is just another meaningless metric); we should focus instead on answering some questions about these test cases.
- Do these test cases cover your overall product?
- Why do we have so many of them? Are all of them needed?
- Does the output from these test cases give you any valuable information?
- How many man-days do you need to execute all of these test cases?
- How many of them could be automated?
- Are they up to date?
The quantity of test cases on its own does not say much about overall product coverage; it only tells you “we have a large number of something that could possibly verify whether our product is working”. It does not give you the important information: whether the product has been covered properly with checks, or how long their execution takes. So try not to rely on their number alone, but on the value they give to the product.
We have automated everything
Some companies do not have any testers exploring or investigating the product, because they have automated their test case checking. If your test cases are repetitive and could be handled by an artificial agent, go ahead and automate them all: you will replace manual checking with automation and save a lot of time. But what about new bugs that your automation does not cover? Or what if the logic behind your checks is flawed (the mental model of the system behind your test cases can be flawed)? Do the checks give you the value that you are looking for?
Automated checks verify regression and nothing more than what they were coded to verify. They won’t find anything new (that’s why you need to explore the product). And why is that important? Because you cannot automate the user experience, and what’s more, your end user will be another human being.
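To make the “flawed mental model” point concrete, here is a minimal Python sketch (the `apply_discount` function and its check are invented for illustration): the automated check passes on every run, yet it encodes the same wrong assumption as the code it verifies.

```python
# A hypothetical pricing function with a bug: the discount is
# subtracted as a flat amount instead of as a percentage.
def apply_discount(price: float, discount_percent: float) -> float:
    return price - discount_percent  # should be: price * (1 - discount_percent / 100)


# The automated check encodes the same flawed mental model,
# so it passes forever and keeps the suite green.
def test_apply_discount():
    # 100 - 10 == 90 happens to match the correct answer too,
    # so this single data point never exposes the bug.
    assert apply_discount(100.0, 10.0) == 90.0


if __name__ == "__main__":
    test_apply_discount()
    print("Suite is green - yet apply_discount(50.0, 10.0) returns",
          apply_discount(50.0, 10.0), "instead of 45.0")
```

The suite stays green because the check only samples the one input where the buggy and the correct formulas happen to agree; a tester exploring the product and asking “what about other prices?” would find the defect in minutes.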
We have 100% code coverage
Somehow, a lot of development teams promise their clients 100% code coverage as a way of assuring the good quality of their product. OK, but what about:
- Data coverage
- User experience
- Accessibility
- Security
- Privacy policy
- Testability
- Performance
But the most important questions are these: are you sure your code coverage tests are written properly and really check what they are supposed to? Do you have any way to check the quality of those tests? “Who watches the watchmen?”
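As a small, hypothetical illustration of why that question matters (the function and test below are invented), here is a test that earns 100% line coverage while verifying nothing:

```python
# A hypothetical helper whose every line is trivially executed.
def normalize_email(email: str) -> str:
    email = email.strip()
    return email.lower()


# Merely calling the function executes 100% of its lines, so a coverage
# tool (e.g. pytest --cov) reports full coverage - but there is no
# assert, so even a completely broken implementation would still pass.
def test_normalize_email():
    normalize_email("  John.Doe@Example.COM  ")
```

Coverage tells you which lines were executed, not whether their behavior was actually checked; techniques such as mutation testing exist precisely to audit the watchmen.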
We have the bug-free product
Do you, really?
An empty bug list does not mean the software has no issues or exploits.
In software development, a bug-free product state does not exist; there are just areas of your software that no one has looked into yet.
Your user stories, no matter how good they are, will never cover all product flows. Most of the time they cover only the happy/positive paths of the client’s requirements and do not check the additional paths created by new system requirements.
Remember that besides the formal written requirements, your software also has hidden ones. They come from different places: you can search for them in the design, the code implementation, conversations with the client or developers, integrations with other systems, and so on.
So the next time someone assures you there are no bugs or risks in your project, treat that claim with a healthy dose of skepticism.
We have a pass rate above 95%
As with the previous point, the test pass rate does not prove that the product is bug-free.
The green dot in your test suite run summary does not mean that “everything works”. It means “we don’t have additional information that something is not working”. And that’s a crucial difference.
But let’s presume that your test pass rate is 95%. So what now? Are we gonna ship/deploy it?
Does this percentage give the people in charge (managers/stakeholders) enough information to make this important decision? What if the pass rate were 94%? Would that stop the release?
A better approach is to check what has failed in the remaining 5% of the checks. What if there is a critical or blocker issue in an important product feature? We have 95%, but the main feature is not working. Does that information change your mind about your further actions?
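Here is a short sketch of that difference in Python (the check names, severities, and numbers are invented): the headline metric says “ship it”, while the actual failures say the opposite.

```python
# Hypothetical outcome of a 100-check suite: 95 passed, 5 failed.
failed_checks = [
    {"name": "checkout_submit_order",   "severity": "blocker"},
    {"name": "profile_avatar_upload",   "severity": "minor"},
    {"name": "newsletter_signup",       "severity": "minor"},
    {"name": "footer_link_styles",      "severity": "trivial"},
    {"name": "search_typo_suggestions", "severity": "minor"},
]

total = 100
pass_rate = (total - len(failed_checks)) / total  # the headline metric

# The question that actually informs the release decision is not
# "what is the rate?" but "what exactly failed, and how badly?"
release_blocked = any(
    check["severity"] in ("blocker", "critical") for check in failed_checks
)

print(f"pass rate: {pass_rate:.0%}")          # 95% - looks shippable
print(f"release blocked: {release_blocked}")  # True - checkout is broken
```

Both numbers come from the same run; only the second one tells the people in charge what they actually need to know.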
Recap
I think these 7 statements should be translated like this:

| Belief | Reality |
| --- | --- |
| We have many testers | We have people who are doing the same thing over and over again |
| We have documentation for everything | We have outdated documentation that no one uses |
| We have 1000+ test cases that cover our product | We think that test cases are testing, and that their high number means something |
| We’ve automated everything | We believe that check automation will decrease our headcount and increase the quality |
| We have 100% code coverage | We think code coverage is testing |
| We have the bug-free product and cover all user stories | We don’t know how many bugs and risks are in our product |
| We have a pass rate above 95% | We think more about KPIs than about true product quality |
In the end, we shouldn’t blindly follow these metrics or misconceptions of “proper” quality.
A tester/QA should always dig deeper, and never follow (without questioning) conceptions or metrics that could do more harm than good.
But I would love to hear your opinion. Do you agree or disagree? And, like always:
Have fun.
Patryk Oleksyk
Test Lead, IT consultant.
Patryk is the company’s first tester: an agile testing enthusiast who finds fun and passion in exploratory testing and check automation. He tries to be an advocate for clients and testing. In his free time he plays indie video games or can be found working out.