Unit Tests at Vertec
For us as software developers, writing unit tests in an agile development process is just as much part of our daily work as implementing specific requirements for our product. In general, we use classic white-box testing. For simple classes without dependencies, we also use TDD (Test-Driven Development). TDD is well suited for these cases: with the corresponding tests written first, the production code often almost writes itself. As soon as dependencies come into play, this becomes much more difficult, and we usually write our tests after the code under test has been written. Furthermore, we rely on appropriate test categories for different requirements.
The Test Pyramid at Vertec
Our test pyramid currently has two main levels. Its base consists of simple, clearly delineated unit tests. Wherever possible, we use them to test a single class in isolation, with all dependencies mocked. These tests require frequent adaptation and are time-consuming to maintain: if new requirements introduce new dependencies, we may need to revise existing tests as well. There are also legacy classes for which implementing such tests is complicated. We are constantly working to reduce this technical debt and to consistently implement unit tests for all execution paths of a class.
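To make this level of the pyramid concrete, here is a minimal sketch of what such a test can look like. The classes, as well as the use of NUnit and Moq, are illustrative assumptions rather than actual Vertec code: a small calculator class is tested in isolation, with its single dependency mocked.

using Moq;
using NUnit.Framework;

// Hypothetical production code: a class with one dependency.
public interface IExchangeRateService
{
    decimal GetRate(string fromCurrency, string toCurrency);
}

public class InvoiceCalculator
{
    private readonly IExchangeRateService rates;

    public InvoiceCalculator(IExchangeRateService rates) => this.rates = rates;

    public decimal TotalIn(string currency, decimal amountChf) =>
        amountChf * rates.GetRate("CHF", currency);
}

// The test mocks the dependency, so only InvoiceCalculator itself is under test.
[TestFixture]
public class InvoiceCalculatorTests
{
    [Test]
    public void TotalIn_WithKnownExchangeRate_ConvertsAmount()
    {
        var rates = new Mock<IExchangeRateService>();
        rates.Setup(r => r.GetRate("CHF", "EUR")).Returns(1.05m);

        var calculator = new InvoiceCalculator(rates.Object);

        Assert.That(calculator.TotalIn("EUR", 100m), Is.EqualTo(105m));
    }
}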
On top of that sits a large number of rarely changing integration tests that verify specific requirements against our product as a whole. Because we place a very high value on backward compatibility, these tests, once written, are very durable.
All our current tests run both on the build agent as part of our CI/CD (Continuous Integration / Continuous Delivery) pipeline and locally on the developers' machines.
Looking ahead, we are currently considering a further, even more abstract level of automated testing that would exercise the Vertec server components from the outside. Because server and client communicate very closely through our sync protocol, this is much more complex than testing individual endpoints of a Web API, for example.
What do we want to achieve with our tests?
The goals we pursue with our testing strategy are wide-ranging. Of course, one of them is to find bugs early. But above all, our tests give the production code more context: just like well-designed code, meaningful tests tell us something about the purpose of an implementation. They help me, or a colleague, understand two months from now what I wanted to achieve with my implementation. That's why we have been following a naming convention for our unit tests for some time: a test's name indicates both its preconditions and its expected result. If a test turns red, the developer can often tell from the name alone where the problem lies, without first having to understand the whole test code.
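The concrete pattern is not spelled out here, so the following names are purely illustrative; one common shape for such a convention is Method_Precondition_ExpectedResult:

[Test]
public void GetRate_WithUnknownCurrency_ThrowsArgumentException() { /* ... */ }

[Test]
public void TotalIn_WithZeroAmount_ReturnsZero() { /* ... */ }

A red test named TotalIn_WithZeroAmount_ReturnsZero tells the developer immediately which scenario broke, before a single line of test code has been read.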
What I personally appreciate about good unit tests is the freedom they give me when refactoring. Extracting methods, rewriting old for loops as LINQ one-liners, converting chains of if statements into switch expressions: after each change, a single keystroke runs the tests and shows me that the externally visible behavior of the class has not changed. That is what makes the Boy Scout Rule fun.
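As a made-up illustration of this kind of refactoring (the method and its discount rules are invented for the example), a chain of if statements can be turned into a C# switch expression without changing the externally visible behavior, which the existing tests confirm after each step:

// Before: a chain of if statements.
public static decimal DiscountFor(int itemCount)
{
    if (itemCount >= 100) return 0.15m;
    if (itemCount >= 10) return 0.10m;
    if (itemCount >= 5) return 0.05m;
    return 0m;
}

// After: the same rules as a switch expression (C# 9 relational patterns).
// Passing unit tests show that the behavior is unchanged.
public static decimal DiscountFor(int itemCount) => itemCount switch
{
    >= 100 => 0.15m,
    >= 10 => 0.10m,
    >= 5 => 0.05m,
    _ => 0m
};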
In the long run, all of these aspects come down to one thing: quality. High-quality code is maintainable, extensible, and understandable, and it results in a quality product.
Polished in front, a mess in the back ...
Who doesn't know it: in production code you adhere to principles like Don't Repeat Yourself, use patterns where they make sense, and weigh every method name on a gold scale, only to write almost identical mock-setup code in ten test methods.
To avoid this, we set the same quality standards for our new test code as for our production code. I already wrote about the naming of our test methods above. We reduce duplicated mock-setup code by using a builder pattern: the objects under test are created by a builder that encapsulates the setup of their dependencies, so it can be reused in all tests for a class. In addition, the unit tests go through the same review process by a second developer as our production code.
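What such a builder can look like is best shown with the hypothetical InvoiceCalculator from the sketch above; again, all names are illustrative, not actual Vertec code:

// A test-data builder that encapsulates the mock setup for InvoiceCalculator,
// so every test of the class can reuse it instead of repeating the setup.
public class InvoiceCalculatorBuilder
{
    private readonly Mock<IExchangeRateService> rates = new Mock<IExchangeRateService>();

    public InvoiceCalculatorBuilder WithRate(string from, string to, decimal rate)
    {
        rates.Setup(r => r.GetRate(from, to)).Returns(rate);
        return this;
    }

    public InvoiceCalculator Build() => new InvoiceCalculator(rates.Object);
}

[Test]
public void TotalIn_WithKnownExchangeRate_ConvertsAmount()
{
    var calculator = new InvoiceCalculatorBuilder()
        .WithRate("CHF", "EUR", 1.05m)
        .Build();

    Assert.That(calculator.TotalIn("EUR", 100m), Is.EqualTo(105m));
}

If a new dependency is added to InvoiceCalculator, only the builder has to change; the ten test methods stay untouched.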
Conclusion
Automated tests are an integral part of our software development process and, just like the production code, they require constant maintenance and must be questioned regularly: do the tests, as we currently write them, still move us forward? What can we do better?
Is code quality your top priority? Then take a look at our vacancies or send us an unsolicited application!