7 deadly quality misconceptions

Category: Testing


Product quality is the top priority for every software tester, QA engineer, test ninja, and so on.

But how do you evaluate quality and make sure the whole team delivers the highest level possible? That's a hard nut to crack, and it depends on many factors: the project itself, the test mission, team size and skill, product complexity, and many, many more. Context matters, and in many cases the understanding of quality can differ.

When that context is ignored, the quality evaluation process can create nothing more than an illusion of "quality" on the project.

Here is a list of the most common misconceptions.

We have a good quality because:

  1. We have many testers
  2. We have documentation for everything
  3. We have 1000+ test cases that cover our product
  4. We have automated everything
  5. We have 100% code coverage
  6. We have the bug-free product
  7. We have a pass rate above 95%

So let’s dig into every deadly quality misconception.


We have many testers:

Having additional testers on a project is always a good thing, no matter their experience, because you get another fresh pair of eyes on your product.

But ask yourself:

  • Are they testing your product or just checking it?
  • Do they follow a test plan or guidelines, or do they wander around the product at random?
  • Could your test team members easily be replaced by automation?
  • Does your team know which features of the product are the most important, and what their user flows are?
  • Does your team have a test strategy that they follow?
  • Are they testing or checking the same thing over and over again (is their work unfulfilling)?
  • Is the tester role in the project just a temporary one (how many test experts do you have, and how many testers have been moved into developer roles)?

When an intelligent person with the right skills for an investigative job does unfulfilling, unchallenging and repetitive activities, you may well lose a great tester, because he or she will be bored to death. Nobody wants monotonous tasks; that's one of the reasons we need automation.


We have documentation for everything:

The product is fully documented, from the code and its data all the way to how it should be tested. There is nothing to be afraid of… right? But is the documentation up to date?

  • Is someone responsible for updating it?
  • Is the documentation easy to find for new team members?
  • Does it contain all the valuable information?
  • Does the test documentation include only test cases, or also guidelines and ideas on how to test the product?
  • Are the team members actually using the documentation, or do they gather information in other ways?
  • Do we track changes to the documentation? Is there any version control?

Why brag about documentation if it does not serve your development team at all? Also, are you sure that every tiny bit of knowledge about the product ends up documented? How many test ideas were never written down, for one reason or another?


We have 1000+ test cases that cover our product:

Test cases should be your sanity checklist: a way to check product functionality and look for regression issues. So a higher number of test cases is a good thing, right? Not necessarily. Test case quantity should not be your concern (on its own it is just another meaningless metric); focus instead on answering some questions about those test cases:

  • Are these test cases covering your overall product?
  • Why do we have so many of them? Are all of them needed?
  • Does the output from these test cases give you any valuable information?
  • How many man-days do you need to execute all of these test cases?
  • How many of them could be automated?
  • Are they kept up to date?

The quantity of test cases on its own says little about overall product coverage; it only tells you "we have a large number of something that could possibly verify whether our product is working". It does not tell you whether the product has been covered properly with checks, or how long their execution takes. So try to rely not on their number, but on the value they bring to the product.
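The man-days question above is easy to make concrete. A rough back-of-the-envelope sketch (the per-case execution time is an assumption; plug in your own numbers):

```python
# Hypothetical effort estimate for one full manual regression pass.
test_cases = 1000
minutes_per_case = 6     # assumed average manual execution time
hours_per_day = 8

total_hours = test_cases * minutes_per_case / 60
man_days = total_hours / hours_per_day
print(man_days)  # 12.5 man-days for a single pass of the whole suite
```

At 12.5 man-days per pass, a suite of that size quickly becomes a cost centre rather than a safety net, which is exactly why the number alone tells you nothing about value.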


We have automated everything

Some companies do not have any testers exploring or investigating the product, because they have automated their test case checking. If your test cases are repetitive and could be executed by a machine, by all means automate them all: you will replace manual checking with automation and save a lot of time. But what about new bugs that your automation does not cover? What if the logic behind your checks is flawed (the mental model of the system behind your test cases can be wrong)? Do they give you the value you are looking for?

Automated checks catch regressions and nothing beyond what they are coded to verify. They will never find anything new (that's why you still need to explore the product). And why does that matter? Because you cannot automate the user experience, and your end user will be another human being.
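As a toy illustration of a flawed check model (the service and all names here are invented), consider an automated check that only verifies a status code. It stays green even when the payload is broken, because the bug lives outside the check's mental model:

```python
# Hypothetical service under test: returns (status_code, body).
def get_invoice(invoice_id):
    # Bug: the amount comes back with the wrong sign,
    # but the endpoint still responds successfully.
    return 200, {"id": invoice_id, "amount": -49.99}

def shallow_check():
    """Encodes a flawed mental model: 'status 200 means the invoice is fine'."""
    status, _body = get_invoice(42)
    return status == 200          # green, bug goes unnoticed

def deeper_check():
    """Same call, but also encodes the business rule: amounts are positive."""
    status, body = get_invoice(42)
    return status == 200 and body["amount"] > 0   # red, bug surfaces

print(shallow_check())  # True  -- the suite is green, yet the product is broken
print(deeper_check())   # False -- one extra assertion reveals the bug
```

The automation did exactly what it was coded to do in both cases; only a human deciding *what* to assert made the difference.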


We have 100% code coverage

Somehow, a lot of development teams promise their clients 100% code coverage as a way of assuring good product quality. OK, but what about:

  • Data coverage
  • User experience
  • Accessibility
  • Security
  • Privacy
  • Testability
  • Performance

The most important questions are these: are you sure the tests driving your coverage are written properly and really check what they are supposed to? Do you have any way to assess the quality of those tests themselves? Who watches the watchmen?


We have the bug-free product

Do you, really?

An empty bug list does not mean the software has no issues or exploits.

In software development the bug-free product state does not exist – there are just areas of your software that no one has looked into yet.

Your user stories, no matter how good they are, will never cover all product flows. Most of the time they cover only the happy/positive paths of the client's requirements and do not check the additional paths created by new system requirements.

Remember that your software, besides the formal written requirements, also has hidden ones. They come from different places: you can search for them in the design, the code implementation, conversations with the client or developers, integration with other systems, and so on.

So the next time someone assures you there are no bugs or risks in your project, treat that claim with a healthy dose of skepticism.


We have a pass rate above 95%

Following on from the previous point: the test pass rate does not prove that the product is bug-free either.

The green dot in your test suite run summary does not mean that “everything works”. It means, “we don’t have additional information that something is not working”. And that’s a crucial difference.

But let's presume your test pass rate is 95%. So what now? Are we going to ship it?

Does this percentage give the people in charge (managers/stakeholders) enough information to make such an important decision? What if the pass rate were 94%? Would that stop the release?

The better approach would be to check what has failed in the remaining 5% of the checks. What if there is a critical or blocking issue in an important product feature? We have 95%, but the main feature is not working. Does that information change your mind about your next actions?
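A small sketch (the data and the 95% threshold are invented for illustration) of why the raw pass rate is a poor release gate: the severity of the failing 5% matters more than the percentage itself.

```python
# Hypothetical test-run summary: each result carries a severity label.
results = [{"name": f"check_{i}", "passed": True, "severity": "minor"}
           for i in range(95)]
results += [
    {"name": "login_works",    "passed": False, "severity": "blocker"},
    {"name": "tooltip_colour", "passed": False, "severity": "minor"},
    {"name": "footer_link",    "passed": False, "severity": "minor"},
    {"name": "sort_order",     "passed": False, "severity": "minor"},
    {"name": "date_format",    "passed": False, "severity": "minor"},
]

pass_rate = sum(r["passed"] for r in results) / len(results)
blockers = [r["name"] for r in results
            if not r["passed"] and r["severity"] == "blocker"]

print(f"pass rate: {pass_rate:.0%}")            # 95% -- looks shippable...
print("release?", pass_rate >= 0.95 and not blockers)
# ...but the failing 'login_works' blocker should stop the release.
```

A gate that only compares the percentage against a threshold would ship this build; a gate that also looks at *which* checks failed would not.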



I think these seven statements actually translate into the following:


  1. We have many testers
  2. We have documentation for everything
  3. We have 1000+ test cases that cover our product
  4. We’ve automated everything
  5. We have 100% code coverage
  6. We have the bug-free product and cover all user stories
  7. We have a pass rate above 95%

  1. We have people that are doing the same thing over and over again
  2. We have outdated documentation that no one uses
  3. We think that test cases are testing and their high number means something
  4. We believe that check automation will decrease our headcount and increase the quality
  5. We think code coverage is testing
  6. We don’t know how many bugs and risks are in our product
  7. We think more about KPI than true product quality


In the end, a tester/QA should always dig deeper. We shouldn't blindly follow (without questioning) these metrics or conceptions of "proper" quality, because they can do more harm than good.

But I would like to hear your opinion: do you agree or disagree? And, as always…

Have fun.

Patryk Oleksyk

Test Lead, IT consultant.

Patryk is the company's first tester: an agile testing enthusiast who finds fun and passion in exploratory testing and check automation, and who tries to be an advocate for clients and for testing. In his free time he plays indie video games or can be found working out.


