Roy Braam wrote an interesting article in the latest Java Magazine (1/2021), with which I wholeheartedly agree, about the dubious usefulness of end-to-end testing. To be clear: an end-to-end test is not there to ensure the integrity of your own software components, but to make sure they cooperate well with everything outside your sphere of influence. Since those concerns are opaque and fickle by nature, you would think they merit very thorough testing. In practice, however, you will benefit more from building for failure as a strategy to counter anything that could go wrong. If your automated process relies on a manually edited Excel sheet on a Windows share and someone left it open over the weekend – I’m not making this up – you’re toast anyway. Let me explain by means of a recent real-world example.
At irregular intervals (usually six times a year) team X produces a number of tab-separated files for team Y to process in a classic extract/transform/load (ETL) process. The number of files and the frequency of updates vary. They are supplied in a single zip file to be downloaded from an https address using Basic Authentication, i.e. with a username and password. A scheduled process downloads this file daily and checks whether the MD5 digest of the content differs from that of the last processed run. If so, the database is updated. Usually nothing has changed, so nothing happens.
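The digest comparison that decides whether anything needs to happen can be sketched as follows. This is a minimal illustration, not the team's actual code; the class and method names are my own.

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class DigestCheck {

    // Computes the MD5 digest of the downloaded content as a lowercase hex string.
    static String md5Hex(byte[] content) {
        try {
            MessageDigest md = MessageDigest.getInstance("MD5");
            StringBuilder sb = new StringBuilder();
            for (byte b : md.digest(content)) {
                sb.append(String.format("%02x", b));
            }
            return sb.toString();
        } catch (NoSuchAlgorithmException e) {
            // MD5 is guaranteed by the Java platform, so this should never happen.
            throw new IllegalStateException("MD5 not available", e);
        }
    }

    // True when the content differs from the last processed run,
    // i.e. the ETL should actually do some work today.
    static boolean hasChanged(byte[] content, String lastProcessedDigest) {
        return !md5Hex(content).equals(lastProcessedDigest);
    }
}
```

Note that MD5 is fine here: we only need a cheap change detector, not a cryptographically secure hash.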
This is a textbook case of an external dependency where literally anything could go wrong, since neither the server nor the contents of the file are under our direct control.
- The webserver is not available; no HTTP response
- The URL is not known to the server (HTTP NOT FOUND)
- We’re not allowed to access it (UNAUTHORIZED or FORBIDDEN)
- There’s something wrong with our request headers (BAD REQUEST)
- The file is corrupt, empty, or otherwise unreadable as a zip file
- The files are not the ones we expected; they must adhere to a strict naming scheme
- The content is malformed; the number or names of the columns are incorrect
- The content is well-formed but leads to an excessive number of rejections by the database, e.g. through integrity constraints
I could go on, but you quickly see the three categories of errors and how to handle them: network errors, content errors and processing errors (ETL logic and the database).
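One way to make those three categories explicit in code is to triage every failure into exactly one of them before alerting anyone. The sketch below is an assumption on my part – the article does not prescribe an implementation – but it shows the idea: classify the exception, then report the category.

```java
import java.io.IOException;
import java.sql.SQLException;
import java.util.zip.ZipException;

public class ErrorTriage {

    enum Category { NETWORK, CONTENT, PROCESSING }

    // Assigns a failure to one of the three categories so the alert
    // immediately tells the reader where to look.
    static Category categorize(Exception e) {
        // ZipException extends IOException, so the content check must come first:
        // an unreadable zip is a content problem, not a transport problem.
        if (e instanceof ZipException || e instanceof IllegalArgumentException) {
            return Category.CONTENT;
        }
        if (e instanceof IOException) {
            return Category.NETWORK;
        }
        if (e instanceof SQLException) {
            return Category.PROCESSING;
        }
        // Anything unclassified is treated as a processing error by default.
        return Category.PROCESSING;
    }
}
```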
Robust error handling will ensure that we are at least alerted unambiguously as to the category of the error. A quick-and-dirty solution built around the happy-flow scenario will cost you dearly in meaningless stack traces or – much worse – corrupt data getting into the system because the database didn’t complain.
Now you could choose to cover all the above scenarios in a production-like environment, but that is not easy to implement and adds considerable runtime to your test suite. Provided your code is built SOLIDly, stubs and mocks are a better option. Tests of the network code are not interested in the contents of the file. Validation of the zip contents does not rely on downloading it: you just use a local file. Checking the integrity of the CSV contents does not rely on the zip format.
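To make that concrete: validating the zip contents needs nothing but bytes, so a test can build an archive in memory and never touch the network. The naming scheme below (every entry must end in `.tsv`) is a made-up stand-in for the real one.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.util.ArrayList;
import java.util.List;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;
import java.util.zip.ZipOutputStream;

public class ZipValidation {

    // Lists the entry names of a zip supplied as raw bytes; no download involved.
    static List<String> entryNames(byte[] zipBytes) {
        List<String> names = new ArrayList<>();
        try (ZipInputStream in = new ZipInputStream(new ByteArrayInputStream(zipBytes))) {
            for (ZipEntry e; (e = in.getNextEntry()) != null; ) {
                names.add(e.getName());
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return names;
    }

    // Hypothetical naming scheme: non-empty, and every entry ends in .tsv.
    static boolean namesValid(List<String> names) {
        return !names.isEmpty() && names.stream().allMatch(n -> n.endsWith(".tsv"));
    }

    // Test helper: builds an in-memory zip with the given entry names.
    static byte[] zipOf(String... entryNames) {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ZipOutputStream out = new ZipOutputStream(bos)) {
            for (String name : entryNames) {
                out.putNextEntry(new ZipEntry(name));
                out.write("col1\tcol2\n".getBytes());
                out.closeEntry();
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return bos.toByteArray();
    }
}
```

The same separation applies one level down: the CSV integrity checks take a stream of lines and never need to know the data came out of a zip.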
Whether you need all this robustness comes down to the scale of the potential pain. In reality the above process hardly ever ran into trouble. But if you have hundreds of such processes you will encounter failures on a daily basis, and more hardening will pay off. Otherwise you will have the development team wading frustratedly through log files on a rotating basis, the kind of unpopular chore that costs the equivalent of a full-time programmer’s salary. Implementing more robust error handling also scales perfectly well. Think of intelligent wrappers around your network, ETL and database code that send useful messages to the responsible departments and persons. Another benefit of having mistakes quickly routed and solved is that they don’t have to be flagged as a generic (read: alarming) incident, which is better for everyone’s peace of mind.
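Such routing can be as simple as a lookup table from error category to the mailbox of whoever can actually fix it, with a fallback for anything unclassified. The addresses and team assignments below are invented for illustration.

```java
import java.util.Map;

public class AlertRouting {

    // Hypothetical routing table: each error category goes to the team that
    // can act on it, instead of landing in a generic incident queue.
    static final Map<String, String> ROUTES = Map.of(
            "NETWORK", "infra-team@example.org",
            "CONTENT", "team-x@example.org",       // the supplier of the files
            "PROCESSING", "team-y@example.org");   // the ETL owners

    // Falls back to a general on-call address for unknown categories.
    static String recipientFor(String category) {
        return ROUTES.getOrDefault(category, "oncall@example.org");
    }
}
```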