I recently had a great conversation about application testing strategy and remote API calls. The question we were trying to answer was this:
In an application that makes external API calls, when should you mock those calls in your test suite, and when should you make live calls in your tests?
My take on this issue: always always always mock external API calls. Here’s why:
They Invite Too Many False Positives
Making your tests dependent on live results from external APIs pulls something you have no control over within your test boundary.
There are many reasons an external call could fail that have nothing to do with your code. You might have DNS or connectivity issues on your CI server. The test data in that remote system could have changed. The remote system’s contract might have changed, or the service might simply be down for a moment.
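As a concrete illustration, here is a minimal sketch of mocking an external call with Python’s standard `unittest.mock`. The function and endpoint are hypothetical; the point is that the test never touches the network:

```python
import json
import urllib.request
from unittest import mock

def fetch_user(user_id):
    # Hypothetical function that calls a remote API.
    with urllib.request.urlopen(f"https://api.example.com/users/{user_id}") as resp:
        return json.load(resp)

def test_fetch_user_parses_response():
    # Replace the network call with a canned response: the test is fast,
    # deterministic, and immune to DNS, connectivity, or remote-data changes.
    fake = mock.MagicMock()
    fake.read.return_value = b'{"id": 1, "name": "Ada"}'
    fake.__enter__.return_value = fake
    with mock.patch("urllib.request.urlopen", return_value=fake):
        assert fetch_user(1)["name"] == "Ada"

test_fetch_user_parses_response()
```

Every failure mode listed above simply cannot happen here, because the only thing under test is your own code.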
They Slow Your Tests Down
In all but the very best cases, a call to a distant system is going to be an order of magnitude slower than a database call. In the average case, it’s going to be two orders slower. You don’t want to know what the worst case looks like.
Adding 100ms to a test suite’s run-time is going to be felt by your developers. Adding 100ms per test is going to lead to people not running the tests. If you’re going to absorb this cost, you’d better get something really good from it, and there’s not much benefit here.
They Provide Feedback Too Late
The remote API may be really important, and you may need to know when it changes because it’s critical to your core business. Your test suite can’t give you that.
The remote system changes, as far as you know, arbitrarily and without notice. If your tests catch that the remote API has failed, how long has your integration been broken in production? You don’t know; it could have broken at any point since the last time those tests passed.
What’s The Solution?
Instead of relying on your CI system to catch changes in the remote API, you have two options, and I recommend doing both for mission-critical integrations.
First and most importantly: monitor all external API calls and their success or failure in production. This doesn’t just mean checking the HTTP status codes of the responses, but also whether each response provided the data you expect. Enforce this at the HTTP adapter level if possible. This is the bare minimum.
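A minimal sketch of what adapter-level monitoring can look like, using only Python’s standard library. The function name, log fields, and contract check are hypothetical; a real system would ship these events to your metrics and alerting backend rather than a logger:

```python
import json
import logging
import urllib.error
import urllib.request

log = logging.getLogger("api_monitor")

def monitored_get(url, required_keys=()):
    """Fetch JSON from a remote API, recording success or failure.

    A 200 status isn't enough: we also verify the response body
    actually carries the fields the application relies on.
    """
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            data = json.load(resp)
    except (urllib.error.URLError, json.JSONDecodeError) as exc:
        log.error("api_call_failed url=%s error=%s", url, exc)
        raise
    missing = [k for k in required_keys if k not in data]
    if missing:
        log.error("api_contract_violation url=%s missing=%s", url, missing)
        raise ValueError(f"response missing keys: {missing}")
    log.info("api_call_ok url=%s", url)
    return data
```

Routing every outbound call through one wrapper like this means a contract violation shows up in production monitoring within minutes, not whenever the test suite next happens to run.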
Second, use something like Runscope to run periodic tests that ensure the contract you’re expecting is being honored. Runscope will provide detailed logs and will show you exactly what failed. These tests can be extremely specific about the precise data you need or as lenient as your heart desires.
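If a hosted tool isn’t an option, the same idea works as a small homegrown script run on a schedule (cron, a CI timer). Everything below is a hypothetical sketch: the endpoint, the expected fields, and the exit-code convention are assumptions, not anyone’s real contract:

```python
import json
import sys
import urllib.request

# (url, fields the response must contain) -- a hypothetical contract
CHECKS = [
    ("https://api.example.com/users/1", ("id", "name", "email")),
]

def run_checks(opener=urllib.request.urlopen):
    """Run every contract check and return a list of failure messages."""
    failures = []
    for url, fields in CHECKS:
        try:
            with opener(url, timeout=10) as resp:
                data = json.load(resp)
        except Exception as exc:
            failures.append(f"{url}: {exc}")
            continue
        missing = [f for f in fields if f not in data]
        if missing:
            failures.append(f"{url}: missing {missing}")
    return failures

if __name__ == "__main__":
    problems = run_checks()
    for p in problems:
        print(p)  # in practice, page someone instead of printing
    sys.exit(1 if problems else 0)
```

Unlike your unit test suite, this runs against the live service on its own schedule, so a contract break surfaces on your timeline rather than your deploy cadence.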
In both cases, you need alerting for your mission-critical integrations.
Don’t forget that you can and should test that your system handles API call failures gracefully, and you can do so without making live calls in your tests.
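Failure handling is just as easy to test with mocks as the happy path: have the fake raise, then assert your code degrades gracefully. The function name and fallback behavior below are hypothetical:

```python
import urllib.error
import urllib.request
from unittest import mock

def fetch_recommendations(user_id):
    # Hypothetical call; falls back to an empty list if the API is unreachable.
    try:
        url = f"https://api.example.com/recs/{user_id}"
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.read().decode().split(",")
    except urllib.error.URLError:
        return []

def test_survives_api_outage():
    # Make the "remote call" fail without any network involved, and
    # verify the fallback path instead of an unhandled exception.
    err = urllib.error.URLError("connection refused")
    with mock.patch("urllib.request.urlopen", side_effect=err):
        assert fetch_recommendations(1) == []

test_survives_api_outage()
```

This is a test you simply cannot write with live calls, because you can’t make someone else’s API fail on demand.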
Do monitor remote API calls. Do guarantee that they uphold their contracts. Do those things, but don’t borrow the trouble of “testing” a system you have no control over, and paying for it in test-suite speed.
Feel free to tell me how wrong I am in the comments!