How do you avoid constant test maintenance and regressions in your code? In this article we will introduce a new term, “the Coupling Pitfall”, explain the harm it does to your code, and offer a way to fix it.
What are regressions and where can we catch them?
Last week we had a production incident. An API that used to work stopped working. Luckily it was caught (accidentally) by one of our team members.
Such incidents are called “Regressions”: something worked, and after a change to the code it doesn’t. This is very bad for your app’s (and your code’s) reputation, as well as a frustrating user experience.
There are many places along the way one might catch such regressions:
The Best Case Scenario
The best place regressions can be caught is the developer’s computer – before one commits or pushes to the repository. This is a fast feedback loop that involves only the developer, while the context is still fresh. It is easy and fast to debug and fix, and it doesn’t consume other people’s time and effort.
The Worst Case Scenario
The worst place regressions can appear is in production. I don’t believe I need to explain why.
The closer a regression gets to production, the more costly its damage. It can be caught at either end, but it can also be caught during CI, code review, or any other process you have in place between implementation and production.
The odds someone will notice a broken API during a code review are slim. It is not the reviewer’s job to do so, anyway.
What about the CI? First of all, the CI takes time to run, and there’s a context switch between its failure and the developer’s return to the branch or pull request. Second, if there are no automated tests to capture the error, the CI will not help us.
What happened to us in production?
One of our developers created a component that extended a third-party library component:

export class Disclosure extends FoundationDisclosure

The extended class had some properties and methods the Disclosure class depended upon. More specifically, it had the open property. This API was projected outward via the Disclosure component, and everything worked fine.
During a refactor it was decided that the FoundationDisclosure class was not what we needed, and we should revert to a different implementation:

export class Disclosure extends FoundationElement

FoundationElement is missing the open property, so it no longer existed on Disclosure. This broke the API. Luckily this API was used in our own components, and a developer noticed the breakage while working on one of them. Can you base your reputation and code quality on luck?
The Pitfall in Developers Logic when Testing
Why didn’t the developer write tests? Because, the developer told me, the class was extending an already-tested class, so the API was already tested on the other class. If I had 1 cent for every time I heard this logic used to explain why no test was written, I’d probably have 30 to 40 cents…
Seriously though, this is one of the biggest pitfalls when trying to decide what to test: when you extend a class, you create the tightest coupling known in the computer science world (my opinions are my own).
The end consumer of your API is not aware of the class you extend. That, by definition, makes the extended class an implementation detail. It does not make your API an implementation detail.
If your class exposes an open property, then you must test it. It doesn’t matter whether you wrote the property explicitly or it was added by an extended class/mixin/function/whatnot. Your consumers expect this property, and if it is removed you get a regression.
I shall call this pitfall – The Coupling Pitfall.
How to avoid the coupling pitfall?
Now that we named our pitfall, let’s suggest a solution. We need a way to decouple our tests from our implementation details and focus on the API.
Here’s an idea: what if we wrote the test to the API before we implemented the solution?
We know how the API should work before we implement the solution. Here’s a possible test:
The test is so simple it takes two minutes to write. That is much less time than creating a new branch, opening a pull request, waiting for a review, passing all the CI checks, and finally pushing the change to production, right?
We accomplished 2 things here:
- We covered our API with a test (which we need to do anyway)
- We decoupled our tests from any implementation detail – we test only the API. This makes any refactor safe, because our tests are now sensitive only to changes in the API.
So how is it that people don’t decouple their tests from their implementation?
The production incident described here is a real-world use case that shows the importance of testing – and the importance of experience in writing tests.
The pitfall described here is not relevant only to this use case – coupling between tests and implementation is a major problem in most code bases. It creates fragile tests and frustration among developers, because changing an implementation detail requires changing the tests. In worse cases, it misses regressions.
The solution offered here is to write the test before writing the implementation – hence decoupling the two from the start.
This solution has a name that fills developers’ hearts with terror. It is called TDD 😉
Thanks a lot to Yuval Bar Levi for the kind and thorough review of this article