JavaScript Asynchronous Testing Gotchas

Asynchronous code is hard to get right, and when it comes to testing asynchronous code, it is pretty easy to mess things up if you're not careful or don't fully understand the difference between synchronous and asynchronous code.

Whether you use mocha on the back-end with node.js or jasmine on the front-end with karma, if you have never done asynchronous testing before you will run into a lot of problems getting things right.

The main problem with asynchronous testing is that, when you don't set it up correctly, specs simply end before their assertions get run.

This usually leads to false positives. It is really easy to get false positives when testing with mocha or jasmine, since both, by default, flag a test as passed when they find no expectation in it.

On the other hand, this may -or may not- lead to a false negative in another test, or to strange exceptions that seem to make no sense.

Not only is it really easy to mess up your specs when dealing with async code; on top of that, you will get non-deterministic output: sometimes the suite passes, sometimes it doesn't. And when it fails, the failing test may not be the same between runs. Typically, a test suite passes locally but fails on your CI server.

When you see such symptoms, you almost surely have a spec -or several- that is handling its asynchrony wrong.

The problem

The core of the issue is that the test suite terminates specs before their asynchronous code runs.

Just like this node.js snippet:
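A minimal sketch of that kind of snippet (assuming the script ends through process.exit() before the 1 ms timer fires):

```js
setTimeout(() => {
  console.log('you will never see this');
}, 1);

// the script exits right away, before the timer callback gets a chance to run
process.exit(0);
```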

will never log anything, code like this:
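(a sketch, assuming mocha's it and chai's expect)

```js
const { expect } = require('chai');

it('a spec with an asynchronous expectation', () => {
  setTimeout(() => {
    expect(true).to.be.false;
  }, 1);
});
```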

won’t run the expect(true).to.be.false; line before the end of the test.

On top of that, in most javascript test frameworks, specs with no expectations in them simply pass. This snippet:
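(a sketch of such a snippet; the describe and it names below match the output that follows)

```js
describe('the silliest test suite ever', () => {
  it('a spec with no expectation will always pass', () => {
    // no expectations at all: the spec is still marked as passing
  });
});
```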

will generate this output in mocha:

the silliest test suite ever
✓ a spec with no expectation will always pass
1 passing (5ms)

This default behavior is maybe not bad by itself: you can put code in a spec and, if it doesn't throw an Error, that somehow means everything is ok, so the test passes.

However, this default behavior may lead to serious issues when you’re dealing with assertions inside asynchronous callbacks:
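(a sketch of such a spec, assuming chai's expect; the names match the output below)

```js
const { expect } = require('chai');

describe('a false positive', () => {
  it('an expect inside asynchronous code will be ignored', () => {
    setTimeout(() => {
      // this assertion obviously fails, but it only runs after the spec has ended
      expect(true).to.be.false;
    }, 1);
  });
});
```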

This test, which should apparently just fail, actually passes. This is the output:

a false positive
✓ an expect inside asynchronous code will be ignored
1 passing (8ms)

Since the expectation lies within a 1 ms setTimeout, it won't run until a later turn of the event loop: when the spec finishes, no expectation has run yet, so the test just passes, since that is the default for specs with no expectations in them.

The solution

Dealing with this problem is really easy: in both mocha and jasmine, you can just pass an extra parameter to the it callback -usually called done-.

This flags the spec as asynchronous, and the test engine will wait for that parameter -which is a function- to be called before flagging the test as passed:
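(a sketch of the same spec, now taking done; the names are illustrative, and it relies on mocha attributing the uncaught AssertionError to the spec that is still pending)

```js
describe('not a false positive anymore', () => {
  it('an expect inside asynchronous code is waited for', (done) => {
    setTimeout(() => {
      // the AssertionError thrown here is reported against this spec,
      // which is still pending because done() has not been called yet
      expect(true).to.be.false;
      done();
    }, 1);
  });
});
```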

This test fails as expected.

In mocha you can also return a promise from your spec, to make the test engine wait until it resolves before finishing the spec:
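(a sketch of the promise-returning flavor)

```js
it('a failing expectation inside a returned promise', () => {
  // mocha waits for the returned promise; the AssertionError rejects it
  return Promise.resolve().then(() => {
    expect(true).to.be.false;
  });
});
```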

Mocha will make that test fail, as expected.

The gotchas

So far, this story is not the most interesting one: a bit of googling will tell you how to do asynchronous testing in javascript: you just need to ensure your expectations run before the spec ends, usually by calling done. Easy peasy.

However, this is just the tip of the iceberg. Such a silly issue, with such a straightforward, simple solution, can lead to severe problems in a test suite, because when you forget to call this done parameter properly, the suite run becomes non-deterministic and really hellish to debug.

Let’s imagine we have a suite like this:
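(a sketch of such a suite; acme here is a hypothetical callback-based client, and the values and timings are made up to match the output below)

```js
const { expect } = require('chai');
const acme = require('./acme'); // hypothetical module under test

describe('ACME.com', () => {
  // broken: the expectation lives inside a callback, but there is no done,
  // so the spec ends long before acme.get() calls back (roughly a second later)
  it('get()', () => {
    acme.get((result) => {
      expect(result).to.equal('a wrong value'); // acme.get() actually yields 'ACME'
    });
  });

  // these two are fine: they take about 100 ms each and wait for their callbacks
  it('getLolz()', (done) => {
    acme.getLolz((lolz) => {
      expect(lolz).to.have.lengthOf(3);
      done();
    });
  });

  it('getFruit()', (done) => {
    acme.getFruit((fruit) => {
      expect(fruit).to.equal('banana');
      done();
    });
  });
});
```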

Now, the first spec, get(), is obviously wrong. However, the suite will give a result like this:

ACME.com
✓ get()
✓ getLolz() (104ms)
✓ getFruit() (102ms)

3 passing (216ms)

And that is wrong. If you're lucky, and you're dealing with a suite that waits for every callback in the stack to run, you will get a warning about the first expectation failing when it eventually fails (the current version of mocha does that, for instance). If you're not, your tests will happily pass. If you're running a jasmine suite inside karma, this may depend on your karma configuration.

Well, let’s imagine those are the specs of your codebase. Everything seems to be working fine since all the tests pass. Now, we add a new function getPlayer() and we test it too:
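(a sketch of the new spec, added to the same describe block; the player value is a made-up placeholder, and what matters is that acme.getPlayer() takes long enough -close to a second- for the stray assertion from get() to fire while this spec is running)

```js
it('getPlayer()', (done) => {
  acme.getPlayer((player) => {
    expect(player).to.equal('Kasparov');
    done();
  });
});
```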

Now, depending on the testing library we are using and its version, we'll get a weird error message of some kind. For example, it is easy to get something like:

✓ get()
✓ getLolz() (112ms)
✓ getFruit() (102ms)
1) getPlayer()

And some error about 'ACME' not being equal to 'a wrong value'. Or maybe you'll get all the tests passing, with an uncaught exception warning about the first assertion.

Here, despite the error message of the exception, it would seem that the new test is broken. In our contrived example, it is pretty easy to spot that it is not, but in real tests this will not be so obvious. Especially when testing booleans, we just get an expected true to be false message, with no proper context about which spec is actually failing. We will lose a lot of time isolating getPlayer(), only to see that it actually passes when run in isolation.

What is happening is that the expectation of the first test -which lies within a callback- happens to run during the fourth test's execution, making us believe the fourth test is broken.

Here, some strange things start to happen:

1- If we remove the fourth test, the suite passes.

2- If we remove the first one, the suite passes.

3- If we add any spec between the first and the fourth that takes more than 800 ms to run, that new spec will fail.

Also, with different timing values, removing tests between the broken test and the one that seems to fail can make the suite pass.

As is easy to see, in such a scenario, simply running tests in isolation to check which one is broken is not the way to go: you will just get a lot of inconsistent results. And if you comment specs out in order to get a build through, the suite may well break when run on the CI server, since you're depending on asynchronous operations that don't always take the same time to finish.

(

In case you don't know, testing libraries usually have easy ways to ignore tests, or to run only some tests in isolation:

  • mocha and jest have it.only() and describe.only() to ensure that a spec or a describe block gets run in isolation.
  • Jasmine has fit() and fdescribe() (the ‘f’ is for ‘focus’ afaik).
  • Most libraries let you use xit() or xdescribe() to ignore a test or a describe block. Also, mocha has the .skip() method to skip a block or test.

)

In such a scenario, you must properly understand what is going on and fix the root of the problem. If the test suite fails occasionally, in such an unpredictable way, it will start failing all the time as it grows, and it's a signal that something really wrong lies within your suite.

Commenting out specs that, for whatever reason, happen to fail in such scenarios is really bad: you're probably wiping out a useful, working test, and you will face the same issue when you add more tests. On top of that, you may have a bug in your app that your broken, false-positive test is hiding.

The obvious thing to do here is to go spec by spec and check whether the async assertions really run: a missing done callback (or missing returned Promise in mocha) is a strong signal that something is broken.

Bad Promises

However, this is not the only possible problem. When working with promises, we may have broken tests due to mistakes in how we build promise chains:
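(a sketch of this kind of mistake, using the same hypothetical acme module, whose getSomething() here returns a promise resolving to 'ACME')

```js
it('getSomething()', (done) => {
  const promise = acme.getSomething();

  // side branch: this .then hangs directly off the root promise and is not
  // part of any chain the spec waits for; when the expectation throws, the
  // error only produces an unhandled rejection warning
  promise.then((result) => {
    expect(result).to.equal('abracadabra');
  });

  // main chain: done() gets called no matter what happened on the side branch
  promise.then(() => done());
});
```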

Here, we're attaching a .then right after acme.getSomething(): doing so, we're wrapping the expectation inside a plain callback -it just happens to be called inside a .then handler- and detaching it from the main promise chain. As a consequence, the failing expectation never reaches the test engine, done gets called anyway, and the test passes even though it is broken ('abracadabra' is not the expected result there).

So, checking that the done parameter is passed to the spec and then called within it is not the only thing you need to verify.

In order to fail properly, the spec should be written like so:
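(a sketch of the fixed spec: the expectation is part of the chain that ends in done, so a failed assertion rejects the chain and reaches done as an error)

```js
it('getSomething()', (done) => {
  acme.getSomething()
    .then((result) => {
      expect(result).to.equal('abracadabra');
    })
    .then(() => done()) // only reached when the expectation didn't throw
    .catch(done);       // a failed assertion travels down the chain into done(err)
});
```

In mocha you could also drop done entirely and just return the chain, as shown earlier.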

Shared state

They say tests should always be isolated: that is really good advice in general. On top of that, when dealing with asynchronous testing, properly isolating every test from the others is mandatory. If some of your tests depend upon state set by a previous test, you can't run tests in isolation. And when you can't run a failing spec in isolation, you can't tell whether it fails by itself, because the previous test messed with the state, or because another one is producing an asynchronous, delayed exception. Too many candidates: never share state between your specs, and you will never need to consider dependencies between tests when debugging a failing suite.
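For instance, instead of sharing a single instance at the module level, each spec can get its own fresh fixture in a beforeEach (a sketch, with a hypothetical AcmeClient):

```js
const { expect } = require('chai');
const { AcmeClient } = require('./acme'); // hypothetical client under test

describe('ACME.com', () => {
  let acme;

  // a brand new client for every spec: nothing leaks from one spec to the next
  beforeEach(() => {
    acme = new AcmeClient();
  });

  it('get()', (done) => {
    acme.get((result) => {
      expect(result).to.equal('ACME');
      done();
    });
  });
});
```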

So

I really hope this story helps someone out there: when I first started doing js testing, I had almost no idea about the oddities of asynchronous api's, callbacks, promises and all that stuff, and I really had a bad time dealing with it. Asynchronous coding is hard, asynchronous testing is harder, and running into these problems when doing javascript TDD is pretty common, so I really hope this can be useful to someone :-D!
