Your database needs unit tests: no excuses

The test pyramid is a well-known visualization aid for classifying software tests. As you climb the steps towards greater integration of components, you proceed from many detailed, fast, isolated tests towards fewer, slower, more global tests that validate the system as a whole. It makes good sense in principle, but it is harder to explain how the stratification between unit, integration and end-to-end tests is supposed to work. Opinions differ on which parts to integrate at which layer of the pyramid. You would think the database belongs in the upper strata, because it is expensive to set up and run. Yet it also makes sense to integrate it at the lower, detailed stage of the pyramid when it contains business-critical logic: logic that needs the kind of rigorous validation you cannot leave to a few slow, global integration tests. But let’s start with a recap of the definition of a unit test.

Speed, single focus and isolation

In his excellent book Unit Testing Principles, Practices, and Patterns, Vladimir Khorikov names three requirements of a unit test: speed, single focus, and isolation – if tests do not depend on each other’s state, they can run in parallel. Single focus can be seen in two ways: as a single unit of code versus a single unit of behavior. The first approach favors – well, demands – a strict correspondence between a class under test and a test class, whereas the second focuses on the public API and is not primarily concerned with validating implementation details. The unit-of-code approach requires you to cut out anything external to the class under test by using test doubles (mocks and stubs). This certainly applies to all interaction with the database. The unit-of-behavior approach is more pragmatic and only uses test doubles when using the real thing would degrade performance.
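To make the unit-of-code approach concrete, here is a minimal, self-contained sketch of a hand-rolled test double. MemberService is a hypothetical class, and PersonDAO is simplified here to return strings rather than Person objects; only the technique – injecting a stub so the test never touches a database – is the point.

```java
import java.util.List;

// Illustrative sketch: MemberService receives its DAO through the
// constructor, so a test can hand it a stub instead of a
// database-backed implementation.
interface PersonDAO {
    List<String> getAll();
}

class MemberService {
    private final PersonDAO personDAO;

    MemberService(PersonDAO personDAO) {
        this.personDAO = personDAO;
    }

    int memberCount() {
        return personDAO.getAll().size();
    }
}

public class StubExample {
    public static void main(String[] args) {
        // The stub cuts out the database entirely; the test stays
        // fast, focused and isolated.
        PersonDAO stub = () -> List.of("alice", "bob");
        MemberService service = new MemberService(stub);
        System.out.println(service.memberCount()); // prints 2
    }
}
```

Because PersonDAO has a single method, a lambda is enough; in a real codebase a mocking library would play the same role.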

Before discussing the database, let’s look at an example where it would make sense to integrate the Spring context in detailed unit tests. Consider this simple REST controller:

@RestController
public class MemberController {
    @Autowired
    private PersonDAO personDAO;

    @PreAuthorize("hasRole('ADMIN')")
    @GetMapping("/api/member")
    List<Person> getAll() {
        return personDAO.getAll();
    }
}

Suppose we want to validate that admin users receive an HTTP 200 response, and non-admin users a 403. To test this, we need to spin up a Spring context, which is easy to achieve yet slower than a plain unit test. Does that make it an integration test? From a technical standpoint, yes, but such a granular level of testing has all the hallmarks of a unit test. Moreover, a plain unit test without the Spring context could not possibly validate the business rule captured in the PreAuthorize annotation.

@SpringBootTest(webEnvironment = DEFINED_PORT)
class MemberControllerTest {

    @Test
    public void getAllMembersForAdminUserReturnsListOfOne() {
        assertThat(getMembersAsUser("bob").getBody().length).isEqualTo(1);
    }

    @Test
    public void getAllMembersForNonAdminUserIsForbidden() {
        assertThat(getMembersAsUser("alice").getStatusCode().value()).isEqualTo(403);
    }

    private ResponseEntity<Person[]> getMembersAsUser(String user) {
        // With DEFINED_PORT and a manually created TestRestTemplate,
        // the URL must be absolute and match the controller mapping.
        return new TestRestTemplate()
                .withBasicAuth(user, "secret")
                .getForEntity("http://localhost:8080/api/member", Person[].class);
    }
}
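For completeness: the test above assumes that users bob and alice exist with the right roles. A minimal sketch of such a setup, assuming Spring Security’s in-memory user store (the user names, the "secret" password and the role names are illustrative, mirroring the test):

```java
// Test-only security configuration sketch, assuming Spring Security's
// in-memory user store is acceptable for the test profile.
@TestConfiguration
public class TestSecurityConfig {

    @Bean
    public UserDetailsService userDetailsService(PasswordEncoder encoder) {
        UserDetails bob = User.withUsername("bob")
                .password(encoder.encode("secret"))
                .roles("ADMIN")   // passes the hasRole('ADMIN') check
                .build();
        UserDetails alice = User.withUsername("alice")
                .password(encoder.encode("secret"))
                .roles("USER")    // fails the check
                .build();
        return new InMemoryUserDetailsManager(bob, alice);
    }
}
```

Keeping such a configuration in the test sources means the production user store is never touched by the build.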

Coming back to our main topic: is it worthwhile to write similarly fine-grained tests for our database code? Spinning up a production-like database is even more expensive than using the Spring runtime. Here’s a simple rule to help you make that decision. A detailed test, run locally as part of the normal build, is always worthwhile if there’s a fair chance it can prevent critical bugs in production. In the case of our authorization tests, the answer is a resounding yes. It’s all too easy for a configuration typo to slip through the cracks and open up a big security leak. Testing early to prevent regressions later is mandatory for all critical, non-trivial code of your own making. Dependencies on resources that are provided and maintained outside your team are a different story. You can more easily defer those to the end-to-end stage, because emulating the third-party system on your local machine won’t always prevent things going wrong in production. But such scenarios do not make up the majority of cases.

SQL is code

Your database may look like an external dependency from a networking and deployment perspective, but when it is developed in tandem with the code that uses it, it is an integral part of the system you’re building. SQL is code. It represents important business logic. It can reside on the database server as views, triggers and stored procedures, or it can manifest itself as SQL queries interspersed with control code, or slyly hidden from view in Hibernate criteria syntax. Eventually it is all interpreted by the database server, and preferably that happens before the code moves to the user acceptance environment. SQL code in database-driven enterprise applications is different from Java: it constitutes a declarative programming model that is inherently stateful and persistent. Deferring its validation to a user-centric Gherkin scenario that happens to fire off thirty queries is inviting trouble. Success or failure of a query often depends on the prior state of the database, and database tests become fickle when they are not in full control of that state. The smaller the unit under test, the easier that state is to control.

Mind also the word interpreted in the last paragraph. While Oracle stored procedures are compiled and warn you of syntax errors, raw SQL strings in Java only backfire when they are run against the database. The proof is in the pudding, so eat it soon, not as a dessert.

Summing up, there is no good excuse not to test your database interactions in great detail. This then creates the challenge of keeping overall build time within acceptable bounds. Some strategies and frameworks that you can use for this will be the topic of a later post, but let me already summarize them:

As for organizing your code:

  • Never mix raw SQL with control logic. Abstract these interactions behind an interface and mock out the database during tests. Code that doesn’t directly reference SQL should not have to depend on a running database for testing, and it should have no compile-time dependencies on anything that has SQL in its class name.
  • Put the implementations of the database interactions described above in dedicated classes with one query per method. At this granular level it’s much easier to write detailed tests that bring the tables into a desired state and validate the outcome.
  • If you have hundreds of database tests covering multiple schemas, consider splitting up your database code into separate modules per schema with their own versioning and releases. That way you need only build and test the modules that are affected. To make that work those modules must be loosely coupled, with minimal inter-schema dependencies.
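To illustrate the second point, here is a sketch of such a dedicated query class using plain JDBC. The class name, table and column names are hypothetical; the structural point is that each method wraps exactly one SQL statement, which keeps both the query and its test small.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;
import javax.sql.DataSource;

// Hypothetical example of a dedicated query class: one SQL statement
// per method, no control logic mixed in.
public class PersonQueries {
    private final DataSource dataSource;

    public PersonQueries(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    // A single, isolated query: a detailed test only has to seed the
    // member table and assert on the returned names.
    public List<String> findMemberNamesByCity(String city) throws SQLException {
        String sql = "SELECT name FROM member WHERE city = ?";
        try (Connection con = dataSource.getConnection();
             PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setString(1, city);
            try (ResultSet rs = ps.executeQuery()) {
                List<String> names = new ArrayList<>();
                while (rs.next()) {
                    names.add(rs.getString("name"));
                }
                return names;
            }
        }
    }
}
```

Because the class depends only on a DataSource, the test build can point it at a throwaway local database while the rest of the codebase mocks it out.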

As for running/maintaining your database:

  • Run a containerized version of the database on the same machine. Memory matters. The build process takes care of creating and tearing down the container, so no test suite can leave the database in a dirty state.
  • To avoid building an image from scratch for every test run, consider pulling an up-to-date copy of the production schema from a local registry, either empty or with limited test data. Minimize the amount of this fixture data: it all adds to the maintenance load, and you are not running a performance test – yet.
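As a sketch of the containerized approach, here is what a detailed database test could look like with the Testcontainers library and its PostgreSQL module on the test classpath. The image tag, init script name and test class are illustrative; the container lifecycle calls are from the Testcontainers public API.

```java
// Sketch assuming the Testcontainers library (org.testcontainers) and
// a PostgreSQL JDBC driver on the test classpath.
@Testcontainers
class PersonQueriesIT {

    // Testcontainers starts this container before the tests and tears
    // it down afterwards, so no suite can leave it in a dirty state.
    @Container
    static final PostgreSQLContainer<?> postgres =
            new PostgreSQLContainer<>("postgres:16")
                    .withInitScript("schema-and-fixture.sql"); // illustrative seed script

    @Test
    void findsMembersByCity() throws Exception {
        try (Connection con = DriverManager.getConnection(
                postgres.getJdbcUrl(),
                postgres.getUsername(),
                postgres.getPassword())) {
            // Exercise one real query here against a database brought
            // into a known state by the init script.
        }
    }
}
```

The init script gives the test full control over the prior state of the tables, which is exactly what keeps fine-grained database tests from becoming fickle.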