TL;DR – Writing maintainable unit tests starts with treating test code differently than your production code.
In my experience, introducing unit tests to a project can come with its fair share of resistance. An argument I hear often is that test suites rarely provide an acceptable return on investment; the cost of maintenance is just too high.
Developers who feel this way aren't necessarily wrong. A poorly written test suite can create a huge maintenance overhead while providing very little valuable feedback. Such a suite could cost a company more time and money than it's worth.
But what about properly written test code? Is it even possible to write maintainable unit tests? I struggled with this question for quite a while when I first started out. And it took a lot of practice to finally come to an answer I was satisfied with.
Creating a Maintenance Nightmare
Most developers learn software testing on the job, and I was no exception. The first tests I ever wrote were for an application that was already in production. The test code looked no different than my production code. It utilized encapsulation, abstractions, and the DRY principle to be clean and concise.
Having a test suite left me feeling confident about the application. And I actually learned more about the business requirements and domain in the process.
But almost immediately the tests began to fail unexpectedly. At first it was only two or three failures triggered by minor changes. But then, as my project continued to grow, things spiraled wildly out of control.
Small changes in one class led to failures in a multitude of completely unrelated classes. And on top of that the failure results were often obfuscated by multiple assertions and generic messages.
Before I knew it I was spending more time debugging failed tests than adding anything of value to my project. Despite all of my best efforts, I had created a brittle, unmaintainable, and ultimately costly set of unit tests.
Learning to Write Maintainable Unit Tests
Few things can kill your motivation faster than seeing a large chunk of your tests suddenly go red. Many developers take an experience like mine as an opportunity to quit while they're ahead. But I couldn't accept that unit testing was just another buzzword.
I’d read classic programming books written by the greats, and they all mentioned unit testing. Surely the likes of Martin Fowler or Kent Beck couldn’t have gotten it wrong. This pushed me to keep exploring.
I spent a lot of time researching the subject, but practice is what really drove it home for me. I wrote and re-wrote countless tests searching for the right formula. Eventually I started to discover patterns that I hadn’t noticed before.
I found that tests for classes with dependencies and collaborators were the most susceptible to maintenance headaches. Classes like this tend to require complicated setup which I would encapsulate within my test classes for reuse. They also delegate logic to member variables, coupling themselves to implementations that they can’t control.
These abstractions make for clean and concise production code, but they serve to make unit testing a maintenance nightmare. Let’s take a look at a couple of examples to see why.
Don’t Let Collaborators Control Your Tests
Cascading failures occur when tests expect concrete collaborators to return specific values or behave in a certain way. These failures crop up in groups, forcing you to scour an excessive amount of code to locate the source of the problem.
Let's see cascading failures in action. Our example application is a simple game. It has a player object that returns weapon damage. There is also a player view that renders the player's damage.
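The original code samples aren't reproduced here, but a minimal sketch might look like the following. Python is assumed, and the class names (StrongWeapon, Player, PlayerView) are illustrative; only the damage value of 5 comes from the narrative.

```python
import unittest

# Production code (sketch): a concrete weapon with a tuned damage value.
class StrongWeapon:
    def get_damage(self):
        return 5  # game-balance value, subject to change

class Player:
    def __init__(self, weapon):
        self.weapon = weapon

    def get_damage(self):
        # Delegates damage to the weapon collaborator.
        return self.weapon.get_damage()

class PlayerView:
    def __init__(self, player):
        self.player = player

    def render_damage(self):
        return f"Damage: {self.player.get_damage()}"

# Tests coupled to the concrete StrongWeapon and its current value.
class PlayerTest(unittest.TestCase):
    def test_player_damage_comes_from_weapon(self):
        player = Player(StrongWeapon())
        self.assertEqual(5, player.get_damage())

class PlayerViewTest(unittest.TestCase):
    def test_view_renders_player_damage(self):
        view = PlayerView(Player(StrongWeapon()))
        self.assertEqual("Damage: 5", view.render_damage())
```

Note that both tests hard-code the number 5, indirectly binding two unrelated classes to StrongWeapon's implementation.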
The unit tests work, but they hold a dangerous expectation about the strong weapon's damage. We'll fast-forward a couple of weeks into development to see how this can hurt us. Game testers have been reporting that the strong weapon is overpowered, so we dial its damage back to 4.
This was an easy adjustment, which is a good thing because balancing our game will require a lot of tuning. But something went wrong with our tests. Two tests for completely unrelated classes have suddenly begun to fail.
After you get over the initial shock, dismay, and temptation to just call it quits for the day, you sift through these tests and discover that they both expected the strong weapon to return 5 damage. In this case the fix is simple: we just have to update every test that directly or indirectly references the strong weapon to expect the correct damage.
But our example only deals with two classes. In the real world you’ll be dealing with applications that have complex inter-class relationships and hierarchies. We’re going to have to come up with a better solution in order to write maintainable unit tests.
Tame Collaborators with Mocks
Martin Fowler defines mocks as “objects pre-programmed with expectations which form a specification of the calls they are expected to receive.” Our previous tests failed because they were coupled to an arbitrary value that could change without our control. Let's gain back that control with mocks.
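A sketch of the mock-based version, assuming Python's unittest.mock (the original may have used a different framework; the Player and PlayerView definitions are repeated here only to keep the example self-contained):

```python
import unittest
from unittest.mock import Mock

# Production code (sketch), repeated for a self-contained example.
class Player:
    def __init__(self, weapon):
        self.weapon = weapon

    def get_damage(self):
        return self.weapon.get_damage()

class PlayerView:
    def __init__(self, player):
        self.player = player

    def render_damage(self):
        return f"Damage: {self.player.get_damage()}"

class PlayerTest(unittest.TestCase):
    def test_player_damage_comes_from_weapon(self):
        weapon = Mock()
        weapon.get_damage.return_value = 10  # a value we control, not the game's tuning
        player = Player(weapon)
        self.assertEqual(10, player.get_damage())
        # Verify the delegation itself, not the collaborator's implementation.
        weapon.get_damage.assert_called_once()

class PlayerViewTest(unittest.TestCase):
    def test_view_renders_player_damage(self):
        player = Mock()
        player.get_damage.return_value = 10
        view = PlayerView(player)
        self.assertEqual("Damage: 10", view.render_damage())
```

Re-tuning the strong weapon now touches zero tests, because no test depends on its concrete value.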
Our improved player tests now use a mock weapon instead of a concrete implementation. This decoupling allows us to focus on the class under test.
We can assert that damage comes from the weapon without coupling ourselves to its implementation. We do the same for the player view tests, and our cascading failures are gone, leaving our suite green and our tests more maintainable.
Abstractions Will Hurt Your Test Code
Eventually you're going to miss something, and a unit test is going to catch it by failing. This type of failure means that your tests are doing their job. But let's not get ahead of ourselves: feedback is great, but it needs to provide value.
For our next example we’ve added power-ups to our player class that multiply weapon damage. In order to keep gameplay balanced, we’ve set a 4 power-up maximum to the damage calculation.
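The original samples aren't shown here; a plausible sketch, assuming Python and a formula of weapon damage × (1 + power-ups) with at most 4 power-ups counted (the exact formula is an assumption), might be:

```python
import unittest
from unittest.mock import Mock

# Production code (sketch, assumed formula).
class Player:
    MAX_POWER_UPS = 4  # cap assumed from the "4 power-up maximum" rule

    def __init__(self, weapon):
        self.weapon = weapon
        self.power_ups = 0

    def add_power_up(self):
        self.power_ups += 1

    def get_damage(self):
        # Each power-up adds one multiple of weapon damage, capped.
        return self.weapon.get_damage() * (1 + min(self.power_ups, self.MAX_POWER_UPS))

# Tests before any DRY refactoring: every method repeats its own setup.
class PlayerTest(unittest.TestCase):
    def test_one_power_up_doubles_damage(self):
        weapon = Mock()
        weapon.get_damage.return_value = 10
        player = Player(weapon)
        player.add_power_up()
        self.assertEqual(20, player.get_damage())

    def test_two_power_ups_triple_damage(self):
        weapon = Mock()
        weapon.get_damage.return_value = 10
        player = Player(weapon)
        player.add_power_up()
        player.add_power_up()
        self.assertEqual(30, player.get_damage())

    def test_power_ups_beyond_four_are_ignored(self):
        weapon = Mock()
        weapon.get_damage.return_value = 10
        player = Player(weapon)
        for _ in range(5):
            player.add_power_up()
        self.assertEqual(50, player.get_damage())
```

Each test asserts a literal value, so a failure's name and expected number explain the broken rule on their own.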
Looking at these tests, something might feel off even to the untrained tester: there's a lot of duplication. This seems like the perfect time to apply the DRY principle.
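A reconstructed sketch of how such a DRY refactor might look (Python assumed; the formula of weapon damage × (1 + power-ups), capped at 4 power-ups, is an assumption, as are the helper names):

```python
import unittest
from unittest.mock import Mock

# Production code (sketch, assumed formula).
class Player:
    MAX_POWER_UPS = 4

    def __init__(self, weapon):
        self.weapon = weapon
        self.power_ups = 0

    def add_power_up(self):
        self.power_ups += 1

    def get_damage(self):
        return self.weapon.get_damage() * (1 + min(self.power_ups, self.MAX_POWER_UPS))

# Tests after a DRY refactor: shared setup, a helper, and one loop.
class PlayerTest(unittest.TestCase):
    BASE_DAMAGE = 10

    def setUp(self):
        self.weapon = Mock()
        self.weapon.get_damage.return_value = self.BASE_DAMAGE

    def damage_with_power_ups(self, count):
        player = Player(self.weapon)
        for _ in range(count):
            player.add_power_up()
        return player.get_damage()

    def test_power_up_damage(self):
        # One generic assertion re-derives the production formula
        # (with the cap of 4 baked in). If the cap changes, the only
        # feedback is a pair of mismatched numbers -- not which rule broke.
        for count in range(6):
            expected = self.BASE_DAMAGE * (1 + min(count, 4))
            self.assertEqual(expected, self.damage_with_power_ups(count))
```

The loop collapses several named cases into one opaque assertion, which is exactly where the feedback problem below comes from.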
Our test code is now cleaner and more concise. But I wouldn't have created this example if there weren't some sort of drawback. Let's make a small change that will break our tests, and compare the results. Four power-ups is way too exploitable, so we're going to dial the maximum power-up limit down to 2.
Compare the feedback from the two suites. Our refactored tests are much cleaner, but their failure results do little to tell us what went wrong. The results of the initial tests, on the other hand, point us in the right direction before we even have to look at the source code.
What we gained in cleaner code we lost in feedback. Having to debug a couple of tests like this every day is a surefire way to kill your motivation. Tests should provide obvious results when they fail in order to be valuable.
The Woes of Maintaining Clean Tests
The abstraction added to the refactored tests also makes them more difficult to maintain. Every time business requirements change, you'll have to re-conceptualize the test logic before you can add to it.
Imagine patching the damage calculation 6 months after release. You’ll have to take valuable time away from the implementation in order to relearn what your test is doing. This is a classic argument against testing I hear all the time.
The solution is to treat your tests like their own universes. Each test method should be concrete and have its setup logic self-contained. You'll notice that the initial tests follow both of those rules, making them very easy to understand.
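A test following both rules might look like this (a sketch, Python assumed; the Player class and its damage formula are illustrative stand-ins):

```python
import unittest
from unittest.mock import Mock

# Minimal stand-in for the production class under test.
class Player:
    MAX_POWER_UPS = 4

    def __init__(self, weapon):
        self.weapon = weapon
        self.power_ups = 0

    def add_power_up(self):
        self.power_ups += 1

    def get_damage(self):
        return self.weapon.get_damage() * (1 + min(self.power_ups, self.MAX_POWER_UPS))

class PlayerTest(unittest.TestCase):
    def test_power_ups_are_capped_at_four(self):
        # All setup lives inside the test method: no shared fixtures,
        # no helpers to relearn six months from now, and a concrete
        # expected value instead of a re-derived formula.
        weapon = Mock()
        weapon.get_damage.return_value = 10
        player = Player(weapon)
        for _ in range(5):
            player.add_power_up()
        self.assertEqual(50, player.get_damage())
```

The test reads top to bottom as one small story, which is exactly what makes it cheap to revisit when requirements change.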
Final Thoughts On Writing Maintainable Unit Tests
Throwing out good coding techniques is a challenge. It's even tempting to use that discomfort as an argument against the merits of unit testing as a whole. But the reality is that production code and test code serve two very different purposes.
Yes, breaking OOP rules in the name of unit testing feels wrong at first. But doing so will enable you to create maintainable unit tests that serve a purpose greater than quality assurance. Their true power will become more apparent as you begin utilizing your test suite for experimentation and even documentation.
If you have any thoughts about the ideas shared in this article or any experience writing maintainable unit tests, I’d love to hear your feedback. Please feel free to leave a comment below!