How We Used Mock APIs to Supercharge Our Microservice Testing
A guide on how we did it and how you can replicate the strategy
Note: This is a case study from an earlier project that we developed for a client. Some details have been left out or modified for confidentiality.
Introduction
After a long time developing and managing a huge monolithic application, we decided to travel down the inevitable route of breaking it into small, independent microservices. This came with quite a few challenges, many of them related to testing: we no longer managed one large application but many small ones, each of which had to be deployed and tested individually.
In this article we will describe our microservice journey and what we did to accelerate our testing strategy by leveraging mock APIs.
Our Architecture
Monolithic architecture with a single application using loads of different AWS services such as Lambda, API Gateway, DynamoDB, Kinesis, Elasticsearch and Cognito, all deployed through CloudFormation.
The legacy architecture we were moving away from was in fact not really ancient. We deployed everything in AWS using infrastructure as code (AWS CloudFormation), we ran serverless functions (AWS Lambda) instead of managing VMs or container clusters, and from the outside things looked pretty good. However, once you dug into the code and saw all of the entities and services within, you quickly understood that we had created a huge monolith, with code dependencies stretching like a spider web across the different parts of the application.
The main challenges with this architecture were:
- Time-consuming deployments: A release took a lot of time and required careful monitoring to make sure nothing broke. We needed to sync everything up and deploy during a specific time window so that we had time to correct things. Many precious developer hours went into monitoring and fixing release issues.
- Developer experience: Since the code base had grown so large, it was very hard to modify existing code without knowing about the dependencies on other parts of the application. Running the application locally was a strict no-go, and deploying to the cloud to test your code could take up to 20 minutes because of the ever-increasing number of resources we had to deploy.
- Testing: For a developed feature to be merged into a release branch, all tests for the entire monolith had to pass. These tests were time-consuming and could take up to 30 minutes to complete. Waiting for them to pass after a small change slowed down both development and review. This is what we will focus on for the rest of the article.
Microservices were an obvious choice for us since we could…
- Have small, fast and independent deployments of each microservice
- Split the monolith into smaller, graspable code repositories with a clear purpose that developers could understand more quickly
- Test each service individually with small test suites that finish faster; no need to run the entire suite before releasing a microservice
Our Integration Test Strategy
Our testing workflow, which worked well in theory but was hugely time-consuming in practice.
One large pain point was the time-consuming integration tests that had to be run before a new feature could be merged into a release branch. These tests made HTTP requests against our REST API and verified that each endpoint responded as expected. In the monolith days we would spin up the entire monolith for the feature branch and run the tests each night; a passing run would then mark the PR as approved in GitHub.
This was quite time-consuming because of the HTTP overhead and all the test data setup required to run the requests, so we decided to trim the tests down to cover only the happy flow. Unfortunately, that sometimes led to faulty error messages and to requests with non-standard parameters not responding according to the specification.
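To give a feel for what these tests looked like, here is a minimal sketch of a happy-flow test, assuming a Jest-style runner and Node's built-in fetch; the /orders endpoint, payload and base URL are invented for illustration and are not our actual API.

```typescript
// Hypothetical happy-flow test against a deployed test stack.
// The base URL, /orders endpoint and payload are invented for illustration.
const BASE_URL = process.env.API_BASE_URL ?? "https://test-stack.example.com";

describe("orders API (happy flow)", () => {
  it("creates an order and returns it by id", async () => {
    // Set up test data by calling the real API over HTTP
    const createRes = await fetch(`${BASE_URL}/orders`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ productId: "abc-123", quantity: 2 }),
    });
    expect(createRes.status).toBe(201);
    const { id } = (await createRes.json()) as { id: string };

    // Verify that the endpoint responds according to the specification
    const getRes = await fetch(`${BASE_URL}/orders/${id}`);
    expect(getRes.status).toBe(200);
    const order = await getRes.json();
    expect(order).toMatchObject({ productId: "abc-123", quantity: 2 });
  });
});
```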
All in all, while we were not happy with the performance of these tests, we did like the flow itself. That is why we set out to improve the performance of the tests while keeping the workflow described above in our microservice architecture.
The Breakout
For each new feature that we developed in the monolith, we formed a strategy for breaking out the related application component into a new microservice. Once a developer picked up the ticket, he or she started developing a new service in a new repository with its own deployment pipeline and no shared code with the monolith. This was fairly time-consuming but helped us keep up with regular feature development in parallel with the microservice transformation work.
How We Supercharged our Testing
With our old way of running integration tests we had only one option: once a PR was created, spin up all our microservices, run the tests against all of them to make sure they worked together, and then tear everything down again.
However, this would make the tests even more time-consuming: deploying all of the individual services would be more complex and slower than deploying a single monolith instance. After some thinking we concluded that there is no real reason to deploy every service when the change is introduced in only one of them. But sometimes the service to be tested had dependencies on other services. So what to do then? Deploy them anyway?
We researched different solutions to this problem and started investigating the use of mock APIs instead of the real services. The philosophy was that as long as the external services respond correctly, there is no need for them to be real deployed services. This way we did not have to wait for the external dependencies to deploy, and we did not even have to pay for their infrastructure. We could also control the response structure to catch corner cases and make sure we had good test data, all without spending time and resources setting things up programmatically before running the tests.
Running mock APIs for the external dependencies also meant that we could run the entire stack locally in a very lightweight fashion. You sometimes see teams running Docker setups with 40 microservices that are heavy and tedious to get running; with lightweight mock APIs and no real infrastructure at all, we could easily run everything locally, even on weaker workstations. By tweaking the responses during development we could also test edge cases and run the integration tests locally, which sped up the feedback loop considerably.
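For this to work, the service under test needs a way to point at a mock instead of a real dependency. Here is a minimal sketch of the pattern, assuming dependency base URLs are injected through environment variables; the variable name, port and /users endpoint are invented for illustration.

```typescript
// Hypothetical example: the service under test resolves its downstream
// dependency base URL from an environment variable, so the same code can talk
// to either the real user service or a lightweight mock.
const USER_SERVICE_URL =
  process.env.USER_SERVICE_URL ?? "http://localhost:4001"; // falls back to a local mock

export async function getUserEmail(userId: string): Promise<string> {
  const res = await fetch(`${USER_SERVICE_URL}/users/${userId}`);
  if (!res.ok) {
    throw new Error(`User service responded with ${res.status}`);
  }
  const user = (await res.json()) as { email: string };
  return user.email;
}
```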
Service Tests vs. End-to-End Tests
One challenge was deciding what to mock and what not to mock. We split the tests into two suites. The first we called service tests: these had all external dependencies completely mocked and were required to pass before a new feature was merged. They made sure that the service under test worked as long as the external services happily returned the data we expected.
Illustration of what we call “Service Tests”
End-to-end tests were the other tier and ran as soon as a feature branch was merged to develop. For these we had a dedicated environment up and running with all the real services fully integrated. If they failed (which was rare), a developer would try to ship a fix as soon as possible. This gave us confidence that our fleet worked as it should.
Illustration of what we call “End-to-End Tests”
Tools
At first we ran tiny Node.js applications written in Express as mock APIs. These were quick to set up but did not really scale once we had to manage them individually for all of our services: we had to deploy them, keep them in sync with the real services, and maintain their infrastructure.
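For context, one of these hand-rolled mocks could be as small as the sketch below; the route, port and payloads are invented for illustration, not the exact code we ran.

```typescript
import express from "express";

// A tiny hand-rolled mock of a (hypothetical) user service.
// The route, port and payloads are invented for illustration.
const app = express();

app.get("/users/:id", (req, res) => {
  // Hard-coded corner case so consumers can test their error handling
  if (req.params.id === "missing") {
    res.status(404).json({ message: "User not found" });
    return;
  }
  // Happy-flow response with static test data
  res.status(200).json({ id: req.params.id, email: "test.user@example.com" });
});

app.listen(4001, () => {
  console.log("User service mock listening on http://localhost:4001");
});
```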
We then started looking into the different tools available to support our use case. We found many services that could generate simple HTTP responses, but nothing quite matched what we needed. We wanted something that:
- Worked well for microservices with many small APIs
- Required as little manual work as possible to set up
- Was version controlled
- Could be run both locally and as hosted mocks that stay in sync
- Was easy to keep in sync with the real API
That is why we decided to build our own mocking tool. The initial idea was to configure our mocks through a configuration file managed in the GitHub repository of each individual service. As soon as a change to the configuration was pushed, the mock would update and be ready to use. This way we did not have to spend any time keeping things in sync or worrying about different versions of the mocks; each branch and version had its own mock that was always available. Next up was the test data needed to keep the consuming services happy. This was quickly integrated into the configuration file so that we could easily add the test data we needed and generate realistic fake data for the consumers.
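To make the idea concrete, here is a hypothetical sketch of what such a per-service configuration could express, written as a TypeScript object purely for illustration; it is not Mocki's actual file format, and the endpoints, fields and data are invented.

```typescript
// Hypothetical shape of a per-service mock configuration kept in the service's
// repository. This illustrates the concept only; it is not Mocki's actual schema.
interface MockEndpoint {
  path: string;
  method: "get" | "post" | "put" | "delete";
  statusCode: number;
  body: unknown;
}

interface MockConfig {
  name: string;
  endpoints: MockEndpoint[];
}

export const userServiceMock: MockConfig = {
  name: "user-service",
  endpoints: [
    {
      path: "/users/123",
      method: "get",
      statusCode: 200,
      // Static test data; a real setup could generate realistic fake data instead
      body: { id: "123", email: "test.user@example.com" },
    },
    {
      path: "/users/missing",
      method: "get",
      statusCode: 404,
      body: { message: "User not found" },
    },
  ],
};
```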
This was the start. When we decided to turn it into the public tool that is Mocki today, we added features such as realistic test data generation as well as simulated failures and delays. If you are interested in trying out a similar setup, head over to our start page to learn more or dive straight into the documentation.
Final Solution
New feature: Service tests run as soon as a PR is up, with external dependencies replaced by hosted mocks. This made the test suite a lot faster to finish and reduced the setup needed to get started. Using mock services we are also able to test corner cases with test data generated in the mock.
Feature merged: When merging to develop, a new test run is triggered against real deployed dependencies. If a test fails here, developers are notified in Slack and someone takes a look at it. This makes sure that the service works when all external dependencies are the real deal.
Development: While developing, we run the microservice locally, integrated with mock services that also run locally. Since we no longer need the external dependencies to be up and running in order to develop features, developer productivity has increased. We also no longer need to spin up our real services locally to test things out; we simply use our lightweight local mocks or the hosted ones, which are always available.
Note on costs: Not only did we gain developer efficiency, we also lowered the cost of our environments significantly, thanks to the infrastructure we no longer have to deploy for every service in each feature environment.
Further Work
- Error handling and simulation: We have not yet utilized Mocki's capabilities for error handling and failure simulation, but that is something to try out in the future to investigate how our services behave when dependencies fail.
- Load testing: Using Mocki it is also possible to simulate the delays you would typically see when a service is overloaded (a hand-rolled version of the idea is sketched below). In the future we want to run chaos-engineering experiments to see which services are most affected by overloaded external dependencies and how we can reduce the risk of that affecting our users.
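As a rough illustration of delay and failure simulation, this is what the idea looks like in a hand-rolled Express mock; the route, the 2-second delay and the 10% failure rate are arbitrary values chosen for the example, and this is not Mocki's implementation.

```typescript
import express from "express";

const app = express();

// Simulate an overloaded downstream service: respond slowly and occasionally fail.
// The delay and failure rate are arbitrary values for illustration.
app.get("/users/:id", (req, res) => {
  setTimeout(() => {
    if (Math.random() < 0.1) {
      res.status(503).json({ message: "Service temporarily unavailable" });
      return;
    }
    res.status(200).json({ id: req.params.id, email: "test.user@example.com" });
  }, 2000);
});

app.listen(4001, () => {
  console.log("Slow user service mock listening on http://localhost:4001");
});
```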
Conclusion
- Mock APIs can be used to save cost and boost developer productivity
- There are tools that can help you in your journey to testing microservices efficiently
- There are many ways to use mock APIs to stress-test your application
Thank you so much for taking the time to read this! Feel free to reach out if you have questions or need advice on how to get similar benefits.