Intro
If you’re using Kafka, it makes sense to build a suite of integration tests to verify you’re producing messages correctly.
In this article, I’ll present an approach to accomplish that in an isolated test environment using Docker containers.
This is the simple workflow we’ll build an integration test for:
I’ve kept the implementation as minimalistic as possible. There’s an API controller method that receives an Order
object and publishes the order into Kafka.
In strict terms, if you’re doing CQRS, this might be part of your command-handling workflow. Still, the integration testing approach shown here is general and not restricted to this specific scenario.
Concretely, this is the testing workflow we’ll go through:
- Start Kafka in a Docker container before the test execution (using the Testcontainers library).
- Start a built-in in-memory TestServer that will host the API.
- Prepare a Kafka consumer that we’ll use to read the produced message.
- Invoke the Orders API that will push a Kafka message.
- When the API call returns successfully, consume the message from Kafka and check its correctness.
One more thing I’d like to emphasize is that the testing pattern here is not in any way specific to Kafka. You can follow the same approach if you want to test the interactions with your database, cache servers, other types of message brokers, you name it.
I encourage you to review and experiment with the implementation on GitHub. I will only present the key pieces and skip some of the plumbing code as it’s irrelevant to this article (for example, the Dependency Injection setup).
Production Code
The Orders Controller is pretty simple:
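The original code listing isn’t reproduced here, but based on the description that follows, a minimal sketch of such a controller could look like this (the route and the `OrderProducer` method name are assumptions; the real implementation is in the GitHub repo):

```csharp
[ApiController]
[Route("api/[controller]")]
public class OrdersController : ControllerBase
{
    private readonly OrderProducer _orderProducer;

    public OrdersController(OrderProducer orderProducer) =>
        _orderProducer = orderProducer;

    [HttpPost]
    public async Task<IActionResult> CreateOrder([FromBody] Order order)
    {
        // Publish the incoming order to the Kafka topic and report success.
        await _orderProducer.ProduceAsync(order);
        return Ok();
    }
}
```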
The CreateOrder API method just receives an Order from the POST body and invokes the OrderProducer to publish the order to a Kafka topic.

The Producer itself is a simple wrapper around the Kafka C# SDK IProducer:
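A sketch of what such a wrapper might look like, using the Confluent.Kafka `ProducerBuilder` API (the `KafkaConfig` shape and the JSON serialization choice are assumptions):

```csharp
public class OrderProducer
{
    private readonly IProducer<string, string> _producer;
    private readonly KafkaConfig _config;

    public OrderProducer(IOptions<KafkaConfig> options)
    {
        _config = options.Value;
        _producer = new ProducerBuilder<string, string>(
            new ProducerConfig { BootstrapServers = _config.BootstrapServers })
            .Build();
    }

    public async Task ProduceAsync(Order order)
    {
        // Serialize the order and publish it, keyed by its id.
        await _producer.ProduceAsync(_config.Topic, new Message<string, string>
        {
            Key = order.Id.ToString(),
            Value = JsonSerializer.Serialize(order)
        });
    }
}
```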
One thing to note here is that the Kafka config is stored in the appsettings
and provided to the application code via the Options Pattern:
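A minimal sketch of the Options Pattern wiring (the section name and property names are assumptions, not the repo’s exact config):

```csharp
// appsettings.json (shape assumed):
// {
//   "Kafka": { "BootstrapServers": "localhost:9092", "Topic": "orders" }
// }

public class KafkaConfig
{
    public string BootstrapServers { get; set; }
    public string Topic { get; set; }
}

// In the DI setup, bind the config section so IOptions<KafkaConfig>
// can be injected into the producer:
services.Configure<KafkaConfig>(configuration.GetSection("Kafka"));
```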
This is pretty much all the production code we have in the app. The main focus here is building an integration test for it, so let’s move to that part.
Test Code
The test method itself is straightforward. It uses an HttpClient to post an order to the API and then ensures the message is correctly produced by retrieving it via the consumer.
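Based on that description, the test method might look roughly like this (the route, parameter list, and assertion details are assumptions; the actual test is in the GitHub repo):

```csharp
[Theory]
[OrdersControllerSetup]
public async Task CreateOrder_ProducesOrderMessage(
    Order order,
    HttpClient client,
    IConsumer<string, string> consumer)
{
    // Post the order to the in-memory TestServer.
    var response = await client.PostAsJsonAsync("api/orders", order);
    response.EnsureSuccessStatusCode();

    // Read the message back from Kafka and verify its content.
    var result = consumer.Consume(TimeSpan.FromSeconds(10));
    var produced = JsonSerializer.Deserialize<Order>(result.Message.Value);
    Assert.Equal(order.Id, produced.Id);
}
```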
The method body looks so simple thanks to the great extension capabilities of AutoFixture and xUnit, which allow you to extract the test setup into self-contained components that run prior to the test execution.
Pay attention to the input parameters of the test method. In the following sections, you’ll see how they are constructed and injected by the setup logic behind the OrdersControllerSetup
attribute.
The “OrdersControllerSetup” Attribute
Take a look at the OrdersControllerSetup
attribute:
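The listing itself isn’t shown here, but given the three customizations described below, the attribute plausibly composes them into a single fixture, something like (the exact composition is an assumption):

```csharp
public class OrdersControllerSetupAttribute : AutoDataAttribute
{
    public OrdersControllerSetupAttribute()
        : base(() => new Fixture().Customize(new CompositeCustomization(
            // Each customization handles one part of the environment setup.
            new TestContainersSetup(),
            new KafkaConsumerSetup(),
            new TestServerSetup())))
    {
    }
}
```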
It derives from AutoDataAttribute, which in turn derives from DataAttribute – the xUnit extension point for injecting test data into your test methods. You can read more about AutoDataAttribute here. Still, the main thing to understand is that it allows you to configure the IFixture object, which will be used by the framework to produce the input parameters for the test method.
Building the IFixture object can be split logically into a set of “Customization” classes, each responsible for a specific part of the setup.
In our case, these responsibilities are:
- Run Kafka in Docker (using Testcontainers) and create the orders topic.
- Set up the Kafka consumer that will be used to retrieve the message from the test.
- Run a TestServer and produce an HttpClient to invoke it.
The “TestContainersSetup” Customization
This is the main piece of code that deals with spinning up the Kafka (and Zookeeper) containers and creating the Orders test topic.
Spend a moment reviewing the code, and I’ll add some details in the section below it.
Let’s dive into each part individually.
Running Zookeeper
The first step is creating the Zookeeper container required for the Kafka cluster:
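A sketch of that step using the Testcontainers builder API the article references (the image name and port values are assumptions):

```csharp
// Build and start the Zookeeper container, binding its client port
// to a random free port on the host.
var zookeeperContainer = new TestcontainersBuilder<TestcontainersContainer>()
    .WithImage("confluentinc/cp-zookeeper:latest")
    .WithEnvironment("ZOOKEEPER_CLIENT_PORT", "2181")
    .WithPortBinding(2181, assignRandomHostPort: true)
    .Build();

// Customize is synchronous, so the async start is wrapped in AsyncContext.Run.
AsyncContext.Run(() => zookeeperContainer.StartAsync());

// The host port Docker assigned to Zookeeper, needed for the Kafka config.
var zookeeperPort = zookeeperContainer.GetMappedPublicPort(2181);
```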
The TestcontainersBuilder class is part of the Testcontainers framework that lets us programmatically start Docker containers.
The AsyncContext.Run method is just a useful utility from the Nito.AsyncEx library that helps with calling async methods from a synchronous context, like the Customize method in this case.
Finally, we get the port assigned to Zookeeper on the host machine, as we’ll need it to configure Kafka.
Running Kafka
The Kafka config is a little more complex, especially when it comes to the environment variables and properly configuring the Kafka listeners. I encourage you to read my article on the topic in case you’re not familiar with it.
Note that the first thing we do is call the GetAvailablePort
method. That’s a small function for obtaining a free TCP port on the host machine. We need to know the port before running the container because it’s specified in the PLAINTEXT
listener configuration.
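A sketch of this step (the image name and the exact environment variable values are assumptions; refer to the repo for the working listener setup):

```csharp
// Bind to port 0 so the OS picks a free port, then release it immediately.
private static int GetAvailablePort()
{
    var listener = new TcpListener(IPAddress.Loopback, 0);
    listener.Start();
    var port = ((IPEndPoint)listener.LocalEndpoint).Port;
    listener.Stop();
    return port;
}

// The port must be known up front because it appears in the advertised listener.
var kafkaHostPort = GetAvailablePort();

var kafkaContainer = new TestcontainersBuilder<TestcontainersContainer>()
    .WithImage("confluentinc/cp-kafka:latest")
    .WithEnvironment("KAFKA_ZOOKEEPER_CONNECT", $"host.docker.internal:{zookeeperPort}")
    .WithEnvironment("KAFKA_LISTENERS", "PLAINTEXT://0.0.0.0:9092")
    .WithEnvironment("KAFKA_ADVERTISED_LISTENERS", $"PLAINTEXT://localhost:{kafkaHostPort}")
    .WithEnvironment("KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR", "1")
    .WithPortBinding(kafkaHostPort, 9092)
    .Build();

AsyncContext.Run(() => kafkaContainer.StartAsync());
```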
Creating the Test Orders Topic
The final step of this customization is to create the actual topic, which is fairly easy with the C# SDK:
As it’s an async method, it’s invoked via AsyncContext.Run:
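A sketch of the topic creation via the Confluent.Kafka admin client (the helper name is hypothetical):

```csharp
private static async Task CreateTopicAsync(string bootstrapServers, string topic)
{
    using var adminClient = new AdminClientBuilder(
        new AdminClientConfig { BootstrapServers = bootstrapServers }).Build();

    // A single-partition, single-replica topic is enough for a one-broker test cluster.
    await adminClient.CreateTopicsAsync(new[]
    {
        new TopicSpecification { Name = topic, NumPartitions = 1, ReplicationFactor = 1 }
    });
}
```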
The KafkaTestConfig
object is injected here as it’s needed later when starting the TestServer.
The “KafkaConsumerSetup” Customization
Here is how we define the consumer and subscribe to the topic created in the previous step:
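A sketch of such a customization (the consumer group id and how KafkaTestConfig is retrieved from the fixture are assumptions):

```csharp
public class KafkaConsumerSetup : ICustomization
{
    public void Customize(IFixture fixture)
    {
        // KafkaTestConfig was injected by the Testcontainers customization.
        var config = fixture.Create<KafkaTestConfig>();

        var consumer = new ConsumerBuilder<string, string>(new ConsumerConfig
        {
            BootstrapServers = config.BootstrapServers,
            GroupId = "orders-test-consumer",
            AutoOffsetReset = AutoOffsetReset.Earliest
        }).Build();

        consumer.Subscribe(config.Topic);

        // Inject the instance so it is resolved as the test method's
        // IConsumer input argument.
        fixture.Inject(consumer);
    }
}
```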
The consumer is injected into the IFixture object, so this same consumer will be resolved as the IConsumer input argument of the test method. Again, this “magic” is enabled by the AutoDataAttribute.
The “TestServerSetup” Customization
In this customization, we start a TestServer via the CustomWebApplicationFactory.
This factory is also responsible for injecting the proper config values for the topic name and Kafka bootstrap servers:
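A sketch of how such a factory can override the config values with the containerized Kafka’s address (the Startup type and config keys are assumptions):

```csharp
public class CustomWebApplicationFactory : WebApplicationFactory<Startup>
{
    private readonly KafkaTestConfig _config;

    public CustomWebApplicationFactory(KafkaTestConfig config) => _config = config;

    protected override void ConfigureWebHost(IWebHostBuilder builder)
    {
        // Point the app at the containerized Kafka instead of the
        // values from appsettings.
        builder.ConfigureAppConfiguration((_, cfg) => cfg.AddInMemoryCollection(
            new Dictionary<string, string>
            {
                ["Kafka:BootstrapServers"] = _config.BootstrapServers,
                ["Kafka:Topic"] = _config.Topic
            }));
    }
}
```

The customization then injects `factory.CreateClient()` into the fixture, which is how the test method receives its HttpClient.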
Test Teardown
A critical aspect of running integration tests is how to clean the system once the tests are finished.
Using Docker containers is the perfect solution for building an isolated environment that you can create and destroy with a simple command.
In our case, we need to ensure that we get rid of the Kafka and Zookeeper containers after the test.
Luckily, this is handled behind the scenes by the Testcontainers framework.
Here is how: if you run the test and check your running Docker containers, you’ll see the following:
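You can check this yourself while the test is running (the Kafka and Zookeeper image names depend on the images you configured):

```shell
# List the images of the currently running containers.
docker ps --format '{{.Image}}'
# Expect your Kafka and Zookeeper images, plus testcontainers/ryuk.
```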
Notice that besides the Kafka and Zookeeper containers, there’s a third one – testcontainers-ryuk.
This container tracks the other containers started via the Testcontainers API and destroys them a short interval after the test run has completed.
This means you don’t need to take any custom actions to drop your containers, which is pretty awesome!
Summary
In this post, you learned an approach to writing integration tests for your Kafka workflow in C# using the Testcontainers framework.
Thanks for reading, and see you next time!