Intro
In this two-part series, I’ll demonstrate how you can deploy a sample .NET console app to AWS and run it periodically as a Fargate task. That’s a common use case for any kind of background service that needs to run at predefined time intervals.
Typical examples are data cleanup processes, cache loading, and ETL batch jobs.
The logic of the demo app itself is quite simple so that we can focus on the overall end-to-end workflow. Still, that’s not just a typical Hello World app. My ambition was to introduce at least a few practical aspects that might be of real help in a production system.
A few of these are:
- Set up dependency injection for proper service resolution.
- Switch between multiple environment-specific configurations based on environment variables.
- Connect to a database (Mongo) to read some data.
You can review the .NET application source code on GitHub.
The specific steps we’ll walk through in this and the following article are:
- Set up a MongoDB cluster in Atlas.
- Create a sample .NET app that reads data from Mongo and writes it to the app logs.
- Containerize the app and publish it to an ECR repository.
- Create an Elastic Container Service (ECS) cluster.
- Create a Task Definition.
- Run the task on the cluster.
- Configure a VPC with public and private subnets.
- Assign a static public IP to the Fargate task via a NAT Gateway.
- Whitelist the IP in Atlas.
- Configure the Task Definition to run periodically in Fargate via a cron expression.
In this post, we’ll cover points 1-6, while in the second one, we’ll go through points 7-10.
Let’s get started!
Set Up Mongo Cluster in Atlas
The first step is to set up our Mongo cluster in Atlas. Start by creating your account or logging in with an existing one.
Create a new Organization:
Create a new Project:
Next, select the type of Mongo cluster to deploy. For our purposes, the Shared one is fine:
Select your AWS region:
Enter a cluster name and hit the “Create Cluster” button.
Add an admin user for your database:
Then, whitelist your current IP so you can connect to the DB from your machine. Of course, at some point, we’ll also need to enable connections from our container running in AWS. This is something we’ll go through in the next article.
Click “Finish and Close” to create the cluster.
Create a Sample Database
It’s time to connect to the new Mongo cluster and create our test database. You can get the connection string from Atlas:
I am using Studio 3T as a Mongo IDE, but of course, you can use your favorite tool or just do everything in the Mongo shell. The setup is simple enough, so that shouldn’t be a problem.
Create a database FargateDemoDb and a users collection:
You can see I have created some sample data. Feel free to insert some test users yourself, but make sure they follow the same format (just set a name field), as the Mongo object is deserialized to a POCO in the C# code.
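For reference, here is a minimal sketch of what such a POCO might look like on the C# side. The class and property names are illustrative rather than copied from the repo; the key point is that the name field in Mongo maps to a C# property:

```csharp
using MongoDB.Bson;
using MongoDB.Bson.Serialization.Attributes;

// Illustrative POCO matching the documents in the "users" collection.
public class User
{
    [BsonId]
    [BsonRepresentation(BsonType.ObjectId)]
    public string Id { get; set; }

    [BsonElement("name")]
    public string Name { get; set; }
}
```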
Review the .NET Console App
Link to GitHub repo.
To run the application, you first need to set a few environment variables:
- DOTNET_ENVIRONMENT
- MONGO_PASSWORD
- MONGO_USERNAME
I use Rider to configure those for the project, but of course, you can use some other mechanism, like defining them globally on your machine.
Later on, we’ll also see how to apply these variables properly when creating the ECS Task in AWS.
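To give a rough idea of how these variables might be consumed inside the app, here is a hedged sketch. The MongoSettings helper and the Atlas-style connection string composition are my own illustration, not necessarily how the repo does it; DOTNET_ENVIRONMENT itself is picked up automatically by the .NET generic host to select the environment-specific appsettings file:

```csharp
using System;

// Illustrative helper (not from the repo): composes an Atlas-style connection
// string from the MONGO_USERNAME and MONGO_PASSWORD environment variables.
public static class MongoSettings
{
    public static string BuildConnectionString(string clusterHost)
    {
        var username = Environment.GetEnvironmentVariable("MONGO_USERNAME");
        var password = Environment.GetEnvironmentVariable("MONGO_PASSWORD");

        // clusterHost is a placeholder, e.g. "<your-cluster>.mongodb.net"
        return $"mongodb+srv://{username}:{password}@{clusterHost}/?retryWrites=true&w=majority";
    }
}
```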
With that in mind, let’s move on to the application code.
I’d advise you to walk through the project’s bootstrapping logic, which uses a HostBuilder for the console application.
This enables dependency injection, which makes the implementation a little more realistic than just coding everything quick and dirty for the demo.
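In broad strokes, the bootstrapping follows the standard generic-host pattern. The sketch below is illustrative rather than a copy of the repo’s code; it reuses the hypothetical MongoSettings helper from the previous snippet, and the actual registrations in the project may differ:

```csharp
using System.Threading.Tasks;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using MongoDB.Driver;

public static class Program
{
    public static async Task Main(string[] args)
    {
        // CreateDefaultBuilder reads DOTNET_ENVIRONMENT and loads
        // appsettings.json plus appsettings.{Environment}.json automatically.
        await Host.CreateDefaultBuilder(args)
            .ConfigureServices((_, services) =>
            {
                // Illustrative registrations; the cluster host is a placeholder.
                services.AddSingleton<IMongoClient>(_ =>
                    new MongoClient(MongoSettings.BuildConnectionString("<your-cluster-host>")));

                services.AddHostedService<ConsoleHostedService>();
            })
            .RunConsoleAsync();
    }
}
```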
The main logic is in the RunConsoleApp method of the ConsoleHostedService class:
Basically, we connect to Mongo, read all the users, and output them to the Console via the _logger.
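Conceptually, it boils down to something like the sketch below. This is simplified and not the repo’s exact code: it reuses the User POCO sketched earlier, and the hosting hookup (here a BackgroundService driving a RunConsoleApp method) may look different in the real project:

```csharp
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;
using MongoDB.Driver;

// Simplified sketch of the hosted service; the database and collection names
// match the ones created earlier in the article.
public class ConsoleHostedService : BackgroundService
{
    private readonly IMongoClient _mongoClient;
    private readonly ILogger<ConsoleHostedService> _logger;

    public ConsoleHostedService(IMongoClient mongoClient, ILogger<ConsoleHostedService> logger)
    {
        _mongoClient = mongoClient;
        _logger = logger;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        await RunConsoleApp(stoppingToken);
    }

    private async Task RunConsoleApp(CancellationToken cancellationToken)
    {
        var users = _mongoClient
            .GetDatabase("FargateDemoDb")
            .GetCollection<User>("users");

        // Read all documents from the collection and write them to the logs.
        var allUsers = await users.Find(FilterDefinition<User>.Empty).ToListAsync(cancellationToken);

        foreach (var user in allUsers)
        {
            _logger.LogInformation("User: {Name}", user.Name);
        }
    }
}
```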
Containerize the Application and Push to Elastic Container Registry
Create an ECR Repository
First, we need a new repository in AWS ECR.
Build and Push the Image
For publishing a Docker image to ECR, follow the steps in this link.
In essence, you first need to authenticate your Docker client:
aws ecr get-login-password --region <region> | docker login --username AWS --password-stdin <aws_account_id>.dkr.ecr.<region>.amazonaws.com
Then, build and tag the image:
docker build -f .\FargateDemo\Dockerfile -t fargate-demo-image .
docker tag <image_id> <aws_account_id>.dkr.ecr.<region>.amazonaws.com/fargate-demo-repo:1.0
And finally, push the image to the registry:
docker push <aws_account_id>.dkr.ecr.<region>.amazonaws.com/fargate-demo-repo:1.0
Configure a Fargate Task to Run the Container
Create an ECS Cluster
To run the container, we need a new Elastic Container Service (ECS) cluster:
Create a Task Definition
Then, create a new Task Definition.
Select the minimum memory and CPU requirements:
Hit “Add container” and select the Docker image we pushed to ECR in the previous section:
This is the place where you also need to specify the environment variables for the container:
After adding the container, finish the task creation by clicking the “Create” button.
At this point, your new task should exist in the Task Definitions section:
Run a Fargate Task
Once the task definition is ready, we can finally run a Fargate task with our container. For now, we’ll just run it on demand for testing; in the following article, we’ll see how to schedule it to run periodically at a predefined time interval.
Hit the “Run new Task” button:
Populate the required fields:
For the VPC configuration, I’m just selecting a default public subnet for now. This isn’t optimal, as our task doesn’t really need to be publicly accessible. We’ll see how to handle this properly in the next post.
Click the “Run Task” button.
The task will appear in the Running tasks list:
Once completed, it will be moved to the Stopped tasks section:
You can click on it and examine the logs:
You can see an error message saying the program couldn’t reach the Mongo instance. That’s expected: we haven’t whitelisted the task’s public IP in Atlas, so inbound connections from AWS are rejected.
That’s what we’ll handle next.
What’s Next?
In the next article, we’ll explore the networking issue above and how to fix it properly.
You’ll see how to configure a Virtual Private Cloud (VPC) with public and private subnets and assign a static public IP to the scheduled task via a NAT Gateway.
Then we can whitelist the public IP in Atlas so we can access the Mongo instance from our ECS container.
We’ll also configure the ECS task to run periodically at a predefined time interval.
See you there!