
Why are you testing your software?

15 years ago, automated tests didn’t exist in the Java ecosystem. One had to build the application and painfully test it manually by using it. I was later introduced to the practice of adding a main method to every class and putting some testing code there. That was only marginally better, as it still required running those methods manually. Then came JUnit, the reference unit-testing framework in Java, which brought test-execution automation. At that point, I had to convince the teams I was part of to use it to build an automated test harness and prevent regression bugs. Later, this became an expectation: no tests meant no changes to the code, for fear of breaking something.

More recently, however, I have sometimes had to advocate for the opposite: not writing too many tests. Yes, you read that right, and yet I’m no turncoat. The reason lies in the title of this post: why are you testing your software? The answer may sound pretty obvious, but it’s not, and it is tightly coupled to the concept of quality.

In the context of software engineering, software quality refers to two related but distinct notions that exist wherever quality is defined in a business context:

  • Software functional quality reflects how well it complies with or conforms to a given design, based on functional requirements or specifications. That attribute can also be described as the fitness for purpose of a piece of software or how it compares to competitors in the marketplace as a worthwhile product. It is the degree to which the correct software was produced.
  • Software structural quality refers to how it meets non-functional requirements that support the delivery of the functional requirements, such as robustness or maintainability. It is the degree to which the software was produced correctly.
— Wikipedia

Before answering the why, let’s consider some of the non-reasons I’ve already been confronted with:

  • Because everyone does it
  • Because the boss/the lead/colleagues/authority figures say so
  • To achieve 100% code coverage
  • To achieve more code coverage than another team/colleague
  • And so on, and so forth

All in all, those "reasons" boil down to either plain cargo culting or mistaking a metric for the goal. Which brings us back to the question, why do you test software?
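To make the coverage trap concrete, here’s a minimal JUnit 5 sketch (the PriceCalculator class is invented for the example): it drives the coverage number up while guaranteeing exactly nothing, because it asserts nothing.

import org.junit.jupiter.api.Test;

class PriceCalculatorTest {

    // Hypothetical class under test, defined inline to keep the sketch self-contained
    static class PriceCalculator {
        double discountedPrice(double price, double discount) {
            return price * (1 - discount);
        }
    }

    // This test executes every line of discountedPrice, so coverage tools report it as covered...
    @Test
    void looksThoroughButProvesNothing() {
        new PriceCalculator().discountedPrice(100.0, 0.2);
        // ...yet there is no assertion: the metric goes up, while the actual goal,
        // catching regressions, is not served at all.
    }
}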

The only valid reason for testing is that the resources spent making sure the software conforms to its functional and non-functional requirements will, over the course of time, be less than the resources spent if it is not tested.

That’s pure and simple Return On Investment. If the ROI is positive, do test; if it’s negative, don’t. It’s as simple as that.
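To put it in back-of-the-envelope terms, here’s a tiny sketch; every figure in it is a made-up assumption, only the comparison matters.

public class TestingRoi {
    public static void main(String[] args) {
        // All numbers are hypothetical estimates, for illustration only.
        double costOfTests = 40_000;             // writing and maintaining tests over the app's lifespan
        double defectCostWithTests = 30_000;     // bugs that slip through anyway
        double defectCostWithoutTests = 120_000; // incidents, hotfixes, lost business without tests

        // The ROI of testing is positive when testing plus residual defects
        // cost less than the defects that would otherwise reach production.
        boolean worthTesting = costOfTests + defectCostWithTests < defectCostWithoutTests;
        System.out.println("Testing pays off: " + worthTesting); // true with these figures
    }
}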

Perfect is the enemy of good

The real difficulty lies in estimating the cost of testing vs the cost of not testing. The following is a non-exhaustive list of ROI-influencing parameters:

Business

Some industries, e.g. medical devices, aviation, or banking, cannot allow bugs without serious consequences to the business, while this is less critical for others, such as mobile gaming.

Estimated lifespan of the app

The longer the lifespan of an app, the better the ROI, because the same amount of testing code pays off more times: nearly no tests for a one-shot, single-event app vs. more traditional testing for a business app that runs for a decade or so.

Nature of the app

Some technologies are more mature than others, allowing for easier automated testing. The testing ecosystem around webapps is richer than the one around native or mobile apps.

Architecture of the app

The more distributed the app, the harder it is to test. In particular, the migration from monoliths to microservices has some interesting side effects on the testing side: it’s easier to test each component separately, but harder to test the whole system. Also, testing specific scenarios in clustered/distributed environments, such as node failure in a cluster, increases the overall cost.

Nature and number of infrastructure dependencies

The higher the number of dependencies, the more test doubles are required to test the app in isolation, which in turn drives up testing costs. Also, some dependencies are widespread, e.g. databases and web services, with many available tools, while others, e.g. FTP servers, are not.
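As an illustration of the test-double point, here’s a minimal sketch with JUnit 5 and Mockito; RatesClient and InvoiceService are invented names, and the mock stands in for what would otherwise be a call to a remote web service.

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.jupiter.api.Test;

class InvoiceServiceTest {

    // Hypothetical infrastructure dependency: a client for a remote currency-rate web service
    interface RatesClient {
        double rateFor(String currency);
    }

    // Hypothetical class under test, converting an amount using the remote rate
    static class InvoiceService {
        private final RatesClient rates;
        InvoiceService(RatesClient rates) { this.rates = rates; }
        double totalIn(String currency, double amount) {
            return amount * rates.rateFor(currency);
        }
    }

    @Test
    void convertsUsingTheRemoteRate() {
        // The test double replaces the real web service, keeping the test isolated and fast
        RatesClient rates = mock(RatesClient.class);
        when(rates.rateFor("EUR")).thenReturn(1.1);

        assertEquals(110.0, new InvoiceService(rates).totalIn("EUR", 100.0), 0.001);
    }
}

A hand-rolled fake would do just as well here; the point is only that every such dependency adds code that exists purely to make the app testable in isolation.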

Size of the app

Of course, the bigger the app, the bigger the number of possible combinations that need to be tested.

Maturity of the developers, and the size of the team(s)

Obviously, developers range from those who don’t care about testing to those who integrate testing requirements into their code from the start. Also, just as for developers, adding more testers is subject to the law of diminishing returns.

Nature of the tests

I don’t want to start a war; suffice it to say there are many kinds of tests: unit, integration, end-to-end, performance, penetration, etc. Each one is good at one specific thing, and has its pros and cons. Get to know them and use them wisely.

Strength of the type system

Developing in dynamically typed languages requires more tests to take over the job of the compiler, compared to statically typed languages.
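As a small illustration, consider the Java snippet below (the Order class is made up): the commented-out line is rejected by the compiler for free, whereas a dynamically typed language would need a test, or a production failure, to catch the same mistake.

public class Order {
    private final int quantity;

    public Order(int quantity) {
        this.quantity = quantity;
    }

    public int total(int unitPrice) {
        return quantity * unitPrice;
    }

    public static void main(String[] args) {
        int total = new Order(3).total(10);       // fine: 30
        // int broken = new Order("3").total(10); // rejected at compile time: no test required
        System.out.println(total);
    }
}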

While it’s good to listen to others’ advice, including well-established authority figures and this post, it’s up to every delivery team to draw the line between not enough testing and too much testing according to its own context.

Nicolas Fränkel

Developer Advocate with 15+ years of experience consulting for many different customers, in a wide range of contexts (such as telecoms, banking, insurance, large retail and the public sector). Usually working on Java/Java EE and Spring technologies, but with focused interests like Rich Internet Applications, testing, CI/CD and DevOps. Also doubles as a trainer and triples as a book author.
