Picture this: your product owner brings you a set of new requirements, and together you review them. After some preparation and initial work, you get your design reviewed and approved by relevant stakeholders. Then, you enter your focus zone and start implementing it. While doing so, you want to gain confidence, so you add tons of unit tests to cover it, manually test it, and refactor your code based on your tests' results until you feel comfortable with the outcome.
But still, you're left wondering, "How do I get to a strong level of confidence in this code?"
In this post, I'll discuss what I did when faced with the above situation and how my team and I improved the quality of (and our confidence in) our code.
Integration tests: what are they, and why do them?
There is a straightforward answer to the second part of the above question: integration tests help ensure happy customers.
Developers take several approaches to test their code to ensure its integrity. Choosing the suitable testing method depends on your context. Nevertheless, some tests are required for every software project. These tests are often organized into a layered model known as the testing pyramid, proposed by Mike Cohn.
Personally, I think the image below (by Atlassian) emphasizes what we would like to achieve when choosing a specific testing approach by following each of the layers of the pyramid.
A test pyramid. Source: Atlassian
We chose to implement integration tests to answer the question, "Are we building the system right?"
Why did we choose integration tests? These tests determine whether the parts of the solution work together as expected. Integration test implementation is relatively easy and can prevent errors that are hard to catch later on. In addition, integration tests help to validate builds faster, reduce the time to ship, avoid human error, adhere to continuous integration (CI) practices, and minimize cost. As a result, everyone is happier in the long run because bugs are detected earlier, and larger problems are avoided. This increases our confidence and results in higher-quality code.
How did we build them?
At Igentify, our services are deployed as Docker containers. In a nutshell, containers isolate apps from their environment, solving the "it works on my machine" problem. Docker has a powerful API, which makes it easy to automate its setup and deployment.
We use the popular Testcontainers library, which provides lightweight, throwaway instances of anything that can run in a container. Testcontainers is test-friendly, open-source, and supports multiple programming languages. Since Igentify's services are deployed as containers, we already have a Dockerfile, and Testcontainers simply builds a temporary image from it on the fly.
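As a minimal sketch of this idea (assuming JUnit 5, the Testcontainers Java library, and a Dockerfile in the project root; the exposed port is an assumption, not our actual configuration), spinning up a throwaway container built from the service's own Dockerfile looks roughly like this:

```java
import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.Test;
import org.testcontainers.containers.GenericContainer;
import org.testcontainers.images.builder.ImageFromDockerfile;

import java.nio.file.Paths;

class ServiceAContainerTest {

    @Test
    void serviceStartsFromDockerfile() {
        // Build a temporary image from the service's own Dockerfile on the fly
        try (GenericContainer<?> serviceA = new GenericContainer<>(
                new ImageFromDockerfile()
                        .withDockerfile(Paths.get("./Dockerfile")))
                .withExposedPorts(8080)) { // port is an assumption
            serviceA.start();
            Assertions.assertTrue(serviceA.isRunning());
        } // container and temporary image are disposed automatically
    }
}
```

Because the container is created from the same Dockerfile we deploy, the test exercises an artifact very close to what runs in production.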
If you've read this far, you're likely interested in diving into more technical details 🤓
As an example, suppose your system is designed as follows:

- Service A is a container that talks to the "world" through a message broker, in this case RabbitMQ.
- It gets a message from the RQ_Queue, processes it, requests a pre-signed S3 URL, and stores its result in the bucket using that URL.
- When finished, it responds to the RS_Queue with the output.
At a high level, the RabbitMQ container is replaced by Testcontainers' RabbitMQ module, and its configuration is injected into Service A's context, which is initialized by the @SpringBootTest annotation. The blob service is essentially a web server that receives requests via REST calls. As part of its API, the blob service generates pre-signed URLs for uploading to cloud storage, which, in our case, is an AWS S3 bucket.
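A sketch of that wiring (the broker image tag and class name are illustrative; the `spring.rabbitmq.*` keys are standard Spring Boot properties, and @DynamicPropertySource redirects them to the container's random host and port) might look like:

```java
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.DynamicPropertyRegistry;
import org.springframework.test.context.DynamicPropertySource;
import org.testcontainers.containers.RabbitMQContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;

@Testcontainers
@SpringBootTest
class ServiceAIntegrationTest {

    // Throwaway RabbitMQ broker, started before the Spring context loads
    @Container
    static RabbitMQContainer rabbit =
            new RabbitMQContainer("rabbitmq:3-management");

    // Point Service A's AMQP configuration at the container's random host/port
    @DynamicPropertySource
    static void rabbitProperties(DynamicPropertyRegistry registry) {
        registry.add("spring.rabbitmq.host", rabbit::getHost);
        registry.add("spring.rabbitmq.port", rabbit::getAmqpPort);
        registry.add("spring.rabbitmq.username", rabbit::getAdminUsername);
        registry.add("spring.rabbitmq.password", rabbit::getAdminPassword);
    }
}
```

With this setup, the service under test talks to a real broker, not a mock, while the test owns the broker's whole lifecycle.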
So, how can a web server be controlled in a test environment? API mocking. In practice, this means you replace the real implementation with a local web server for your testing purposes. We use the well-known WireMock library, with which you simply mock the various requests and responses; you can also store files directly on your local drive, bypassing the cloud storage. Two birds with one stone.
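A hedged sketch of how that mocking could look with WireMock (the /presigned-url and /upload paths and the JSON shape are assumptions for illustration, not Igentify's actual blob-service API):

```java
import com.github.tomakehurst.wiremock.WireMockServer;

import static com.github.tomakehurst.wiremock.client.WireMock.aResponse;
import static com.github.tomakehurst.wiremock.client.WireMock.get;
import static com.github.tomakehurst.wiremock.client.WireMock.okJson;
import static com.github.tomakehurst.wiremock.client.WireMock.put;
import static com.github.tomakehurst.wiremock.client.WireMock.urlPathEqualTo;

class BlobServiceMock {

    static WireMockServer start() {
        WireMockServer server = new WireMockServer(0); // 0 = random free port
        server.start();

        // Stub the pre-signed URL endpoint: the "URL" points back at the mock
        server.stubFor(get(urlPathEqualTo("/presigned-url"))
                .willReturn(okJson(
                        "{\"url\": \"http://localhost:" + server.port() + "/upload\"}")));

        // Accept the upload itself, so files never leave the test machine
        server.stubFor(put(urlPathEqualTo("/upload"))
                .willReturn(aResponse().withStatus(200)));

        return server;
    }
}
```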
The final piece of the puzzle is to add a consumer for the RS_Queue to receive the response, and to use JUnit 5 to assert the expected output, resulting in a self-contained test environment. The input messages should be carefully selected to cover both success and failure scenarios.
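As an illustrative sketch (assuming Spring AMQP's RabbitTemplate and the Awaitility library; the queue names match the design above, but the payloads and timeout are made up), the test's final assertion could look like:

```java
import org.awaitility.Awaitility;
import org.junit.jupiter.api.Test;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.beans.factory.annotation.Autowired;

import java.time.Duration;

import static org.junit.jupiter.api.Assertions.assertEquals;

class ResponseQueueTest {

    @Autowired
    RabbitTemplate rabbitTemplate;

    @Test
    void serviceRespondsOnRsQueue() {
        // Publish a hand-picked input message to the request queue
        rabbitTemplate.convertAndSend("RQ_Queue", "{\"sample\": \"input\"}");

        // Poll RS_Queue until Service A has produced its response
        Awaitility.await().atMost(Duration.ofSeconds(30)).untilAsserted(() -> {
            Object response = rabbitTemplate.receiveAndConvert("RS_Queue");
            assertEquals("{\"status\": \"SUCCESS\"}", response); // payload is illustrative
        });
    }
}
```

Polling with a timeout matters here: the response arrives asynchronously, so a plain one-shot assertion would race against the service.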
Putting it all together, the integration test diagram looks like this:
The last step was incorporating the integration tests into our continuous integration pipeline on our Jenkins server. Currently, we run them on every build, but in the future we will trigger them once a branch is merged into the main branch. That will give us confidence that we haven't broken anything.
A Jenkins run. Source: Mememaker
In this post, we reviewed how we designed integration tests for our Java Spring Boot application with the help of the Testcontainers and WireMock libraries. We showed how we could easily work with realistic data while staying as close as possible to the production setup, giving us confidence in our code.
I believe these libraries bring a lot of immediate value with a rich set of features, and I encourage you to give them a try. Overall, integration tests are a crucial part of any software development process, as they help ensure a more robust product, resulting in happier customers.
We used integration tests while building version 7 of our Igentify digital genetic platform, which we released last month. We plan to expand the capabilities of these tests in future releases of our software. If you're interested in learning more, please let us know here.
If you’re interested in subscribing to this blog to get email alerts when we publish a new post, please fill out the form below.