Our most valuable tests are those that fail more often than others.
The right test consciously addresses the right scope, runs in a predictable amount of time, and is easy to understand when it fails.
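These three qualities can be sketched in a minimal test (all names here are illustrative, not from the project discussed): one narrow scope, no I/O so it runs fast, and a failure message that explains itself without a debugging session.

```python
# Illustrative production function under test.
def apply_discount(price, percent):
    return round(price * (1 - percent / 100), 2)

# Right scope: one behavior (rounding to cents), no dependencies,
# and an assertion message that tells the reader what went wrong.
def test_apply_discount_rounds_to_cents():
    result = apply_discount(19.99, 15)
    assert result == 16.99, f"expected 16.99, got {result}"

test_apply_discount_rounds_to_cents()
```

When this test fails, the message alone points at the problem; a test whose failure requires reading the implementation has already lost part of its value.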
A little over 10 years ago, we learned and adopted test-driven development (TDD). TDD was a huge step forward for software quality, as well as for the value of your software as a company asset. Functionality and architecture wrapped in tests can be changed under control and evolve, which is one of the biggest achievements in the industry during the last decade.
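The TDD cycle can be reduced to a minimal red-green sketch (names are illustrative): the test is written first and pins the behavior down, so the implementation can later be refactored under control without fear of silent regressions.

```python
# Red: the test exists before the implementation and defines the contract.
def test_slugify_replaces_spaces():
    assert slugify("Hello World") == "hello-world"

# Green: the simplest implementation that makes the test pass.
# It can now be rewritten freely as long as the test stays green.
def slugify(title):
    return title.lower().replace(" ", "-")

test_slugify_replaces_spaces()
```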
When a codebase grows in size and complexity and dependencies start degrading team productivity, the question of how to maintain the automated test base takes center stage.
Production code changes over time due to new functionality, bug fixes, and so on, and this affects both the test code and the lifetime of the tests.
If a test fails due to changes in code, it means an error has been detected early and the test has done its job.
This is the point where a developer should fix the problem, and the key to the fix is understanding the test and why it fails. This makes good test design a key component of continuous integration: we cannot submit on a red build, and therefore cannot change and develop our application.
We generally want to run the whole test suite before every code change is submitted, so how long it takes to run plays a significant role in big projects.
If the whole test suite takes more than 15 minutes, it becomes an obstacle to making frequent changes. Teams start running only part of the suite, or try to detect the tests relevant to a particular change, and so on.
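Relevant-test detection can be as naive as matching changed modules to test files by name. Below is a hedged sketch of that idea (the naming convention `billing.py` to `test_billing.py` is an assumption, not something every project follows); real tools usually use dependency or coverage data instead.

```python
def select_tests(changed_files, test_files):
    """Naive relevant-test selection: pick test files whose name
    references a changed module (e.g. billing.py -> test_billing.py)."""
    changed = {f.rsplit("/", 1)[-1].removesuffix(".py") for f in changed_files}
    return [
        t for t in test_files
        if t.rsplit("/", 1)[-1].removeprefix("test_").removesuffix(".py") in changed
    ]

# Only the billing tests are selected for a billing-only change.
print(select_tests(["src/billing.py"],
                   ["tests/test_billing.py", "tests/test_auth.py"]))
```

The trade-off is obvious: the selection is fast but unsound, since a change in `billing.py` may break tests that do not mention it by name. That is exactly why partial runs need to be backed by a periodic full-suite run.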
The problem of redundancy in unit testing, caused by the large number of people working on a project, becomes important once the test code has grown in size. Developers write tests for their components, and some tests overlap, often covering the same code over and over again. This is not a problem from a quality point of view, but maintenance can be an issue. Here we come to the distinction between unit tests and integration tests.
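A minimal sketch of that distinction, with illustrative class names: the unit test pins one component's logic in isolation, while the integration test checks the wiring between components and therefore already executes the same arithmetic, which is where the redundancy creeps in.

```python
# Component under unit test.
class PriceCalculator:
    def total(self, items):
        return sum(price * qty for price, qty in items)

# Component that composes the calculator.
class OrderService:
    def __init__(self, calculator):
        self.calculator = calculator

    def checkout(self, items):
        return {"total": self.calculator.total(items), "items": len(items)}

# Unit test: one component, its own logic.
def test_calculator_total():
    assert PriceCalculator().total([(10.0, 2), (5.0, 1)]) == 25.0

# Integration test: the wiring. It covers total() again, so repeating
# detailed arithmetic assertions here adds maintenance cost, not quality.
def test_checkout_wires_calculator():
    order = OrderService(PriceCalculator()).checkout([(10.0, 2)])
    assert order == {"total": 20.0, "items": 1}

test_calculator_total()
test_checkout_wires_calculator()
```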
An important quality metric is code coverage. There are aspects, however, that this metric does not reflect, such as sensitivity to input data.
Many workflows depend vitally on input data: a small difference in the data may add only a couple of lines to coverage, but make a huge difference in calculations and final results. However, we often do not take these effects into account when measuring quality metrics.
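A one-line function is enough to show the effect (the function name is illustrative): both calls below execute exactly the same line, so line coverage is identical either way, yet only the second input exposes a failure.

```python
def percent_change(old, new):
    return (new - old) / old * 100

print(percent_change(100, 110))   # happy path: 10.0

try:
    percent_change(0, 110)        # same line "covered", different outcome
except ZeroDivisionError:
    print("division by zero: coverage alone did not predict this")
```

A suite with 100% line coverage of `percent_change` can still miss the zero case entirely, which is why coverage numbers need to be read together with the variety of input data the tests feed in.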
Our most valuable tests are those that fail more often than others. The right test consciously addresses the right scope, runs in a predictable amount of time, and is easy to understand when it fails.
In my talk I will speak about how good test design affects the productivity of an agile project built on a large, legacy codebase. I will focus on lessons learned from an existing project I have been involved with, giving examples of good and bad test design and their consequences for project delivery. I will give live code examples along the way.