Almost every application that we develop uses a database. It either stores data to a database or reads data from a database. Often both. Unit testing such code can be a tricky thing and we’ve tried multiple approaches to this problem. Eventually, we found a good compromise.

For the sake of simplicity assume we have a function that communicates to a database:

async function readPerson(personId, database) {
    const items = await database.findAll({id: personId});
    if (items.length === 0) {
        return undefined;
    } else {
        return items[0];
    }
}

It reads a person’s data from the database and returns undefined if there is no such person. It does not matter what database we use. What matters is how to unit-test such a function. Historically, we have been using mainly two different approaches:

  1. Use a spy instead of a real database and then assert that the correct methods were called on the spy with the correct arguments.
  2. Use the real database connector against a real, running database and verify that the function reads the correct data from it.

Database Spy

Let’s take a look at the first option, spying the database. We can implement a simple database spy and then use it in the test:

const testRecord = {id: 'Serral', name: 'Joona Sotala'};

class DatabaseSpy {
    findAllArguments = [];

    async findAll(filter) {
        this.findAllArguments.push(filter);
        // return an array of matching records, like the real connector would
        return [testRecord];
    }
}

it("should call database connector with correct query", async () => {
    const databaseSpy = new DatabaseSpy();
    const person = await readPerson("Serral", databaseSpy);
    expect(person).to.be.deep.equal(testRecord);
    expect(databaseSpy.findAllArguments).to.be.deep.equal([{id: 'Serral'}]);
});

We created a spy that remembers all calls to the findAll method, and we verified that the method was called correctly. What are the advantages of such an approach?

  • It’s fast. We don’t need to connect to any database. We just create a few instances and that’s it.
  • It’s deterministic. Nothing can go wrong: the databaseSpy always behaves the same way regardless of anything else, which is great.

What are the disadvantages?

  • We cannot refactor our code to fetch the same data using a different database query without changing the tests. Usually, the database language is rich enough to support multiple different ways to fetch the same data. But when we use the DatabaseSpy we don’t test behavior, we test implementation. Imagine someone notices that the database driver supports a findOne method and rewrites readPerson this way:
async function readPerson(personId, database) {
    return database.findOne({id: personId});
}

It does the same thing! The findOne method returns the first document satisfying the filter, or undefined. We just refactored the code; we didn’t change any behavior. This is exactly the situation where our unit tests should verify we didn’t break anything, because we could have. But do the tests help us? No. They won’t pass. They shout at us that we are supposed to use the findAll method. But we don’t want to use the findAll method! We want to use the findOne method because it’s more efficient! In this case, the tests do not help us. Moreover, they are in our way. We need to update the tests first, then update the production code. Which would be fine if we were changing the logic, the behavior. But we are not; we are refactoring the implementation.

We don’t know if our code really works. I know, in this example the database query is quite simple. But I bet you have all written far more complicated database queries, with a lot of JOINs or some complicated aggregation pipeline. How can we verify that such a query does what we expect? We have to test it somewhere else: open an SQL console or a Mongo shell or whatever, test the query there, and then copy & paste the verified query into our tests and then into the production code.

Isn’t there a better way?

Using Real Database in Unit Tests

Oh boy. I know how it sounds. Bear with me.

In general, why don’t we like to use a real database in unit tests? Or any I/O device such as the file system or the network? There are three main reasons:

  • It’s slow. Unit tests are supposed to be fast. Slow unit tests are like drinking non-alcoholic beer or wearing a face mask that covers just your chin during a pandemic: it’s just not the right way. We should be able to run the whole suite of unit tests every time we save our changes, and sending a query over the network to some testing database slows things down significantly.
  • It’s unreliable. The network is by definition unreliable. Querying a real database is risky because the database can shut down. Sometimes we need to be on a VPN to reach our testing database. The worst case is when we have to start the database manually on our own machine before running the tests. Urgh…
  • It’s not isolated. A running database is a shared resource; so is a file system. Any time we run tests, anyone can meddle and somehow affect them. Maybe a script clears all the tables/collections in our testing database right in the middle of our test run.

Now we know the main disadvantages of using a real database in unit tests. But still… Let’s try to write some unit tests against a real database:

it("should fetched requested person's data", async () => {
    const database = await getEmptyDatabase();
    await database.insert([
    	{id: 'uThermal', name: 'Marc Schlappi'},
    	{id: 'Serral', name: 'Joona Sotala'},
    	{id: 'Neeb ', name: 'Alex Sunderhaft'},
    ])
    const person = await readPerson("Serral", database);
    expect(person).to.be.deep.equal({id: 'Serral', name: 'Joona Sotala'});
});

We connect to a database in the getEmptyDatabase function, we fill it with some testing data, and we simply assert that our readPerson function returns the right person. What are the advantages of such a solution?

  • Suddenly, we test behavior, not implementation. In the tests, we say: if the data in the database looks like this, I expect you to fetch that. And we don’t care about the implementation!
  • We can refactor our implementation easily. Because the tests don’t depend on the implementation, we can rewrite our code to use different approaches and different db queries.
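
For completeness, here is a minimal sketch of what a getEmptyDatabase helper might look like, assuming the official mongodb Node.js driver and a thin wrapper exposing the findAll/findOne/insert methods our production code expects. The connection URL, database name and collection naming are all assumptions for illustration:

const { MongoClient } = require('mongodb');

let collectionCounter = 0;

async function getEmptyDatabase() {
    // Connect to the locally running test MongoDB (the URL is an assumption).
    const client = await MongoClient.connect('mongodb://localhost:27017');
    // Hand out a fresh, uniquely named collection for every test.
    const collection = client
        .db('unit-tests')
        .collection(`test-collection-${collectionCounter++}`);
    return {
        // Strip Mongo's autogenerated _id so tests can deep-compare plain objects.
        findAll: (filter) => collection.find(filter, {projection: {_id: 0}}).toArray(),
        // The driver returns null for a missing document; normalize it to undefined.
        findOne: async (filter) =>
            (await collection.findOne(filter, {projection: {_id: 0}})) ?? undefined,
        insert: (docs) => collection.insertMany(docs),
    };
}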

Best of Both Worlds

It looks like there is no silver bullet and both solutions have some drawbacks. Can we somehow manage to take the best things from both worlds and merge them? We gave it a try. Let’s discuss the result:

We use MongoDB for almost everything. We also use other databases such as MySQL, Cassandra or PostgreSQL, but our primary database is MongoDB. How did we overcome the disadvantages mentioned above?

We Install It Automatically In The Tests On Our Machines

When we were thinking about using a real database in our unit tests, we knew it would have to be as effortless as possible, for both the programmer writing the tests and the programmer executing them. Thus we wanted to automate everything. Our idea was to install the MongoDB engine locally on the machine running the tests.

We use the mongodb-prebuilt library to download a MongoDB binary compiled for the specific OS. You can choose from many different MongoDB versions and many different OSs. When we run the unit tests for the first time, we download the binary, store it somewhere (in ~/.mongodb or whatever) and then start the database.

On every subsequent run, the stored binary is reused. The first run takes some time because we need to download the binary, but every run after that is fast again. Starting the database takes some time, but less than two seconds (that is our timeout).

The programmer writing tests just needs to call installMongoDb() and runMongoDb() in the test setup, and that’s it. Then everyone can simply hit npm test and nothing else needs to be done. We can even choose a different MongoDB version for each project!
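
In practice, the whole setup can be as small as the sketch below. installMongoDb(), runMongoDb() and stopMongoDb() stand for our own thin wrappers around mongodb-prebuilt; the exact names, options and version shown here are assumptions, not the library’s API:

// Mocha root hooks: install the binary on the first run, start the
// database before the suite, and stop it afterwards.
const { installMongoDb, runMongoDb, stopMongoDb } = require('./test-helpers/mongodb');

before(async function () {
    this.timeout(120000); // the very first run downloads the binary
    await installMongoDb({ version: '4.0.5' }); // a no-op once the binary is cached
    await runMongoDb({ port: 27017, dbPath: './.tmp/mongo-test-data' });
});

after(async () => {
    await stopMongoDb();
});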

This is probably the most important point here. It simply cannot work if it is not automated. No manual work can be required! No more “yeah, so, you need to run yum install mongodb and…” and no more “yeah, so, you need to open an SSH tunnel to the server running the database…”

We Made It Fast By Running It Locally

I guess every database is more or less fast; that is one of the goals of every database. And when there is no network involved, it’s super fast. We can make hundreds of db queries per second. Querying a database that runs locally on our machine can hardly be the bottleneck of our unit tests, and in reality, it’s not.

We Isolated It

Every time we start the database, we configure it differently. It listens on a different port and uses a different data folder on every run, so no two test runs reuse anything. Moreover, we use a new collection for each test: each it block gets its own collection. We don’t even share collections within one test file. There is no need to; it’s fast anyway.
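
A sketch of how such a throw-away configuration might be generated. The port range is an assumption; a real implementation would probe for a genuinely free port:

const os = require('os');
const path = require('path');
const crypto = require('crypto');

function freshMongoConfig() {
    const suffix = crypto.randomBytes(4).toString('hex');
    return {
        // A random port in a (hopefully) unused range; an assumption for illustration.
        port: 27017 + Math.floor(Math.random() * 1000),
        // A unique data folder in the system temp directory for every run.
        dbPath: path.join(os.tmpdir(), `mongo-test-${suffix}`),
    };
}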

We Made It Reliable

There are not many moving parts, and not many dependencies or requirements. For the first run we need to be connected to the internet so we can download the binary, and the remote server has to be up (it’s like running npm install). We need to use one of the supported OSes. And that is all. Once we have the MongoDB binary on our machines, there is not much else that can break. Starting a database is quite easy; it’s no magic.

The key reason why it is reliable is that everything is done programmatically. No human action is required. We don’t need to start the database ourselves, nothing like that. Just hit npm test and enjoy fast tests against a real database.

The Conclusion

And that is how we unit test our code that queries a database. It’s not rocket science. To sum it up:

  1. We install MongoDB on our machines by downloading a precompiled version of MongoDB. This is done only once per “lifetime”, as the binary is cached on disk.
  2. We start MongoDB in the test setup, with a fresh data folder, a fresh port, and so on.
  3. We use this running database in the tests: every test uses a different collection, so there is no data sharing.
  4. We shut down the database once the tests are done.

I am not sure whether the same approach can work for every database out there, such as PostgreSQL, MySQL or Oracle. We’ve done it just for MongoDB. And we have had only a few issues with this setup:

  • Once we wanted to use a specific version of MongoDB on a particular OS and the combination was not supported. Eventually, we updated both the OS and MongoDB anyway. In the meantime, we used a binary precompiled for an older version of the same OS and it worked without any problems.
  • Starting the MongoDB engine is usually quick and painless, but we had a problem with it when running the tests on our CI tool, Jenkins. At that time we used some pretty old HW for Jenkins, and when many jobs were running at the same time, MongoDB timed out: it did not start within the required time. We upgraded the Jenkins HW and it has been just fine since then.

So in the end it seems like a good decision to run the tests this way. There are more benefits than troubles. We can test our code more conveniently. We can even use TDD to build complicated database queries! Imagine that! We start by writing a test for a simple case, such as an empty database, and then we implement a piece of production code. Then we insert some data into the database in the second test and write one or two more lines of production code. Then we reorder the data in the database so we can test things like sorting. And it all works! Using TDD to write a database query: don’t tell me that was not your dream!
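
To make that concrete, a TDD build-up for a hypothetical readYoungestPerson query could look like the sketch below; readYoungestPerson and the data shapes are made up for illustration:

// Step 1: an empty database returns undefined. Forces the first, trivial implementation.
it("should return undefined for an empty database", async () => {
    const database = await getEmptyDatabase();
    expect(await readYoungestPerson(database)).to.be.undefined;
});

// Step 2: a single record is returned as-is. Forces a real query.
it("should return the only person there is", async () => {
    const database = await getEmptyDatabase();
    await database.insert([{id: 'Serral', age: 27}]);
    expect(await readYoungestPerson(database)).to.be.deep.equal({id: 'Serral', age: 27});
});

// Step 3: several records force the sorting into the query itself.
it("should pick the youngest person", async () => {
    const database = await getEmptyDatabase();
    await database.insert([
        {id: 'uThermal', age: 29},
        {id: 'Serral', age: 27},
    ]);
    expect(await readYoungestPerson(database)).to.be.deep.equal({id: 'Serral', age: 27});
});

Each step drives a little more of the real query into existence, verified against the real database.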
