How We Used Mock APIs to Supercharge Our Microservice Testing

A guide on how we did it and how you can replicate the strategy ⚡

Note: This is a case study from an earlier project that we developed for a client. Some details have been left out or modified for confidentiality.

Introduction

After a long time developing and managing a huge monolithic application, we decided to travel down the inevitable route of breaking it up into small, independent microservices. This came with quite a few challenges, many of them related to testing. We no longer managed one large application but many small ones that had to be deployed and tested individually.

In this article we will describe our microservice journey and what we did to accelerate our testing strategy by leveraging mock APIs.

Our Architecture 🗺️

Monolith Architecture

Our monolithic architecture: a single application using a wide range of AWS services such as Lambda, API Gateway, DynamoDB, Kinesis, Elasticsearch and Cognito, all deployed through CloudFormation.

The legacy architecture we were moving away from was in fact not ancient at all. We deployed everything in AWS using infrastructure as code (AWS CloudFormation), we ran serverless functions (AWS Lambda) instead of managing VMs or container clusters, and from the outside things looked pretty good. However, once you dug into the code and saw all of the entities and different services within, you quickly understood that we had created a huge monolith, with code dependencies stretched like a spider web across the different parts of the application.

The main challenges with this architecture were:

Microservices were an obvious choice for us since we could…

Our Integration Test Strategy 📈

Integration Test Strategy

Our testing workflow, which in theory worked well but in practice was hugely time-consuming.

One large pain point was the time-consuming integration tests that had to be run before a new feature could be merged to a release branch. These tests sent HTTP requests to our REST API and verified that the response was correct for each endpoint. In the monolith days we would spin up the entire monolith for the feature branch and run the tests each night; a passing run would then mark the PR as approved in GitHub.

That was quite time-consuming because of the HTTP overhead and all of the test data setup required to run the requests. We decided to trim the tests down to cover only the happy path. Unfortunately, that sometimes let issues slip through: faulty error messages, and requests with non-standard parameters that did not respond according to the specification.
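
To make that concrete, here is a minimal sketch of the kind of endpoint test we mean, assuming Node 18+ (built-in fetch and node:test). The /orders endpoint, the API_BASE_URL variable and the response shape are hypothetical examples, not our actual API:

```js
// Minimal sketch of an HTTP integration test against a deployed environment.
const test = require('node:test');
const assert = require('node:assert');

const BASE_URL = process.env.API_BASE_URL; // base URL of the deployed feature environment

test('GET /orders/123 returns the order', async () => {
  const res = await fetch(`${BASE_URL}/orders/123`);
  assert.strictEqual(res.status, 200);

  const body = await res.json();
  assert.strictEqual(body.id, '123');
  assert.ok(Array.isArray(body.items));
});
```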

All in all, we were not happy with the performance of these tests, but we did like the workflow itself. That is why we set out to improve the performance of the tests while keeping that workflow in our microservice architecture.

The Breakout 💥

For each new feature that we developed in the monolith, we formed a strategy for breaking out the related application component into a new microservice. Once a developer picked up the ticket, they started developing a new service in a new repository, with its own deployment pipeline and no shared code with the monolith. This was fairly time-consuming but helped us keep up with regular feature development in parallel with the microservice transformation work.

How We Supercharged our Testing ⚡

With our old way of running tests we had only one option for the integration tests: once a PR was created, simply spin up all our microservices, run the tests against all of them to make sure they worked together, and then tear everything down again.

However, this would make the tests even more time-consuming: deploying all of the individual services would be more complex and slower than deploying a single monolith instance. After some thought we concluded that there is no real reason to deploy every service when the change is introduced in only one of them. But the service under test sometimes had dependencies on other services. So what to do then? Deploy them anyway?

We researched different solutions to this problem and started investigating mock APIs as stand-ins for the real services. The philosophy was simple: as long as the external services respond correctly, there is no need for them to be real deployed services. This way we did not have to wait for the external dependencies to deploy, and we would not even have to pay for their infrastructure. We could also control the response structure to catch corner cases and make sure we had good test data, all without spending time and resources setting it up programmatically before running the tests.

Running mock APIs for the external dependencies also meant that we could run the entire stack locally in a very lightweight fashion. Sometimes you see teams running Docker setups with 40 microservices that are heavy and tedious to get running. With lightweight mock APIs and no real infrastructure at all, we could easily run everything locally, even on weaker workstations. By tweaking the responses during development we could also test edge cases and run the integration tests locally, which sped up the feedback loop considerably.
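
In practice, the only thing the service under test needs is a way to point its outgoing calls at the mocks. A minimal sketch of how that can look, assuming dependency base URLs are read from environment variables (the inventory service, variable name and endpoint are hypothetical examples, not our actual setup):

```js
// Sketch: the service reads its dependency's base URL from an environment
// variable, so the same code talks to a local mock, a hosted mock or the
// real service depending on how it is started. Requires Node 18+ for fetch.
const INVENTORY_API_URL =
  process.env.INVENTORY_API_URL || 'https://inventory.internal.example.com';

// Locally we start a mock on port 4001 and run the service with
//   INVENTORY_API_URL=http://localhost:4001 node index.js
async function getStockLevel(productId) {
  const res = await fetch(`${INVENTORY_API_URL}/stock/${productId}`);
  if (!res.ok) {
    throw new Error(`Inventory service responded with ${res.status}`);
  }
  return res.json();
}

module.exports = { getStockLevel };
```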

Service Tests vs. End-to-End Tests ⚖️

One challenge was deciding what to mock and what not to mock. We split the tests into two suites. The first we called service tests: these had all external dependencies completely mocked and were required to pass before a new feature was merged. They made sure that the service under test worked when the external services happily returned the data we expected.

Service Test

Illustration of what we call “Service Tests”
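
As an illustration (not our actual code), a service test could look roughly like this: the external dependency is replaced by a local Express mock while the service under test runs for real. Assumes Node 18+ and Express; the endpoints, ports and payloads are hypothetical:

```js
const test = require('node:test');
const assert = require('node:assert');
const express = require('express');

test('GET /products/42 works when the inventory dependency is mocked', async () => {
  // Mock of the external inventory service that the product service calls
  const mock = express();
  mock.get('/stock/42', (req, res) => res.json({ productId: '42', inStock: 3 }));
  const mockServer = mock.listen(4001);

  try {
    // The service under test is started separately with
    // INVENTORY_API_URL=http://localhost:4001 and listens on port 3000
    const res = await fetch('http://localhost:3000/products/42');
    assert.strictEqual(res.status, 200);

    const body = await res.json();
    assert.strictEqual(body.inStock, 3);
  } finally {
    mockServer.close();
  }
});
```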

End-to-end tests were the other tier and ran as soon as a feature branch was merged to develop. For these we had a dedicated environment up and running with all the real services fully integrated. If these tests failed (which was rare), a developer would ship a fix as soon as possible. This gave us confidence that our fleet worked as it should.

End-to-end Test

Illustration of what we call “End-to-End Tests”

Tools 🛠️

At first we ran tiny Node.js applications written in Express as mock APIs. These were quick to set up but did not really scale once we had to manage them individually for all of our services: we had to deploy them, keep them in sync with the real services and maintain their infrastructure.
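
Each of those hand-written mocks was roughly a tiny app returning canned responses for the endpoints a consumer needed, something like the sketch below (the user endpoint and payload are made-up examples):

```js
const express = require('express');
const app = express();

// Canned response for the one endpoint a consuming service needs
app.get('/users/:id', (req, res) => {
  res.json({ id: req.params.id, name: 'Test User', plan: 'premium' });
});

app.listen(4000, () => console.log('User service mock listening on port 4000'));
```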

We then looked into the different tools available to support our use case. We found many services that could generate simple HTTP responses, but nothing quite matched what we needed. We wanted something that:

That is why we decided to build our own mocking tool. The initial idea was to configure our mocks through a configuration file that we could manage in the GitHub repository of each individual service. As soon as you pushed a change to the configuration, the mock would update and be ready to use. This way we did not have to spend any time keeping things in sync or worrying about different versions of the mocks; each branch and version had its own mock that was always available. Next up was the test data needed to keep the consuming services happy. This was quickly integrated into the configuration file so that we could easily add the test data we needed and generate realistic fake data for the consumers.
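
To give a feel for the idea, here is a sketch of the kind of declarative configuration we mean, kept in each service's repository. The field names and templating syntax are purely illustrative, not Mocki's actual schema:

```js
// Illustrative only — not Mocki's actual configuration format.
module.exports = {
  name: 'user-service-mock',
  endpoints: [
    {
      method: 'GET',
      path: '/users/:id',
      response: {
        statusCode: 200,
        // Hypothetical templating for generated test data
        body: { id: '{{params.id}}', name: '{{fake.name}}', plan: 'premium' },
      },
    },
  ],
};
```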

This was the start. When we decided to build the public tool that is Mocki today, we added features such as realistic test data generation as well as simulated failures and delays. If you are interested in trying out a similar setup, head over to our start page to learn more or dive straight into the documentation.
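
To illustrate the idea behind simulated failures and delays with a generic Express sketch (not Mocki's implementation): a mock endpoint can respond slowly or with an occasional error so that consumers can exercise their timeout and retry handling. The endpoint and numbers are hypothetical:

```js
const express = require('express');
const app = express();

app.get('/stock/:id', (req, res) => {
  setTimeout(() => {
    if (Math.random() < 0.2) {
      // Roughly 20% of requests fail on purpose
      res.status(500).json({ message: 'Simulated failure' });
    } else {
      res.json({ productId: req.params.id, inStock: 3 });
    }
  }, 2000); // simulated 2 second delay
});

app.listen(4001, () => console.log('Flaky inventory mock listening on port 4001'));
```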

Final Solution 💡

New feature: Service tests run on the feature branch as soon as a PR is up. External dependencies are hosted mocks. This made the test suite much faster to finish and reduced the setup required to get started. Using mock services also lets us test corner cases with generated test data in the mock.

Feature merged: When a feature is merged to develop, a new test run is triggered against real deployed dependencies. If a test fails here, developers are notified in Slack and someone takes a look at it. This makes sure the service works when all external dependencies are the real deal.

Development: While developing, we run the microservice locally, integrated with mock services that also run locally. Since we no longer require the external dependencies to be up and running to develop features, developer productivity increased. We also no longer need to spin up our real services locally to test things out; we can simply use our lightweight local mocks or the hosted ones, which are always available.

Development

Note on costs: Not only did we improve developer efficiency, we also lowered the cost of our environments significantly, thanks to the infrastructure we no longer have to deploy for each feature environment.

Further Work 🏗️

Conclusion

Thank you so much for taking the time to read this! Feel free to reach out if you have questions or need advice on how to achieve similar benefits 🚀