
CI/CD Pipeline for AWS Fargate

Sriharsha Polimetla · Published in NEW IT Engineering · 5 min read · Oct 17, 2019


In this post I show how to deploy a Spring Boot application running in a container, in a way that can be reused on any AWS account with only a few manual steps.

The setup lets developers focus solely on writing Java code: upon a commit to a specific branch (in this case, master), a new version of the Docker image is built from the code and deployed.

The code can be found in my GitHub account.

Why Fargate and not ECS or EC2?

Before we start going into the actual implementation steps, why am I using Fargate for my deployment?

There are other AWS services that can also be used to deploy Docker containers, among them EC2 and EC2-backed ECS.

In terms of operation, EC2 and EC2-backed ECS are similar (though not the same!). An ECS container instance is simply an EC2 instance running the ECS agent. That means whenever you use EC2-backed ECS, there are EC2 instances running, you pay for those instances, and you take on some additional maintenance work as a developer. Fargate, however, is a task-based service: the developer only defines a task definition and the number of tasks to run, and pays for the tasks.
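
To make the difference concrete, here is a minimal, hypothetical CloudFormation sketch of a Fargate service. The resource names, image URI, and subnet are illustrative placeholders, not values from this project's template:

TaskDefinition:
  Type: AWS::ECS::TaskDefinition
  Properties:
    RequiresCompatibilities:
      - FARGATE
    NetworkMode: awsvpc        # Fargate tasks require awsvpc networking
    Cpu: "256"                 # 0.25 vCPU — billed per task, not per instance
    Memory: "512"              # MiB; in practice you also set an ExecutionRoleArn to pull from ECR
    ContainerDefinitions:
      - Name: app
        Image: <account>.dkr.ecr.<region>.amazonaws.com/demo:latest   # placeholder
        PortMappings:
          - ContainerPort: 8080

Service:
  Type: AWS::ECS::Service
  Properties:
    LaunchType: FARGATE        # AWS manages the underlying hosts
    DesiredCount: 2            # the "number of instances" is just a number here
    TaskDefinition: !Ref TaskDefinition
    NetworkConfiguration:
      AwsvpcConfiguration:
        Subnets:
          - <subnet-id>        # placeholder

Note that there is no Auto Scaling group, AMI, or instance patching anywhere in this snippet; that is exactly the part Fargate takes off your hands.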

My opinion when working with deployments in the cloud is to let your cloud provider manage your infrastructure, its scaling, and its availability as much as possible, so you can focus solely on your applications. Implementing cloud-based applications is not only about getting rid of your data centers or servers, but also about taking full advantage of managed cloud services.

To summarize: prefer Fargate over EC2-backed ECS or the good old EC2 instances whenever possible, so you can focus only on the application and leave the infrastructure management to AWS.

Implementation

Requirements:

  1. AWS Account
  2. GitHub Account
  3. Maven (only for local testing)
  4. Java 8 or later (I coded with 11; only for local testing)

Manual steps for deployment:

  1. Create a Personal Access Token on GitHub: Settings — Developer Settings — Personal Access Tokens — Generate new token. Select the repo scope so the token can be used later to access your GitHub repositories. You will only see this token once, right after it is created.
  2. Create a parameter in AWS Parameter Store. I named mine /github/polimetla_access_token. The value of this parameter is the token created in the previous step. Remember to create this parameter as a String and NOT a SecureString: as of now (Oct 2019), AWS CodePipeline does not support resolving SecureString parameters. Alternatively, you can store the token as a secret in AWS Secrets Manager; a detailed article about that can be found here. (A CLI sketch for creating this parameter follows these manual steps.)
  3. Edit the file build-pipeline-docker-ecs.yaml at the line
OAuthToken: "{{resolve:ssm:/github/polimetla_access_token:1}}"

The part /github/polimetla_access_token should be changed to the name of the parameter you created in the previous step. The version number (here 1) can be seen in the Parameter Store. It starts at 1 and is incremented upon every revision of the parameter.

4. Now you are ready to deploy. Create a CloudFormation stack using your build-pipeline-docker-ecs.yaml file. You need permissions for CloudFormation, IAM (creating service roles), SSM (creating and reading parameters), CodeBuild, S3, CloudWatch, and every other service mentioned in the template.

I deployed this using a root account, so I did not have to worry about permissions while creating the stack. In case your stack fails due to permission errors, contact your admin or root user for the necessary permissions.

CloudFormation — Create Stack — Upload a new template — Choose file — Select your build-pipeline-docker-ecs.yaml file — Next

Fill out the parameters as you wish, but keep in mind that the Stack name and the Project Name should be different, and the name of DockerImageRepository should be the same as in config.json.

Look at the screenshot below.

Screenshot of my AWS console while creating the stack
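
If you prefer the CLI over the console, the equivalent call looks roughly like this; the stack name and parameter values are illustrative, and the parameter keys must match those your template actually defines:

# Illustrative sketch — adjust names and parameters to your template.
aws cloudformation create-stack \
  --stack-name my-fargate-pipeline \
  --template-body file://build-pipeline-docker-ecs.yaml \
  --parameters \
      ParameterKey=ProjectName,ParameterValue=my-fargate-project \
      ParameterKey=DockerImageRepository,ParameterValue=my-docker-repo \
  --capabilities CAPABILITY_IAM CAPABILITY_NAMED_IAM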

Now the CloudFormation stack will be created. On successful creation, it sets up a CodePipeline, which in turn creates a second stack. So once the application is successfully deployed, you should see two CloudFormation stacks.

Now we are done with the manual part of the deployment. This ideally should not take more than 10 minutes.
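
For reference, the parameter from step 2 can also be created from the CLI. This is a sketch; the token value is a placeholder:

# Plain String, not SecureString — CodePipeline could not resolve
# SecureString parameters via dynamic references as of Oct 2019.
aws ssm put-parameter \
  --name /github/polimetla_access_token \
  --type String \
  --value <your-github-personal-access-token>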

The pipeline would look like this.

CodePipeline created by the CloudFormation stack

The first build can take a while; subsequent builds are significantly quicker. This is because of the local caching enabled for the CodeBuild stage:

Cache:
  Type: LOCAL
  Modes:
    - LOCAL_CUSTOM_CACHE
    - LOCAL_DOCKER_LAYER_CACHE
    - LOCAL_SOURCE_CACHE

Note: if you intend to share a cache across multiple pipelines, use an S3-based cache instead of LOCAL.
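
A minimal sketch of what the S3 variant looks like on the CodeBuild project; the bucket name and prefix are placeholders:

Cache:
  Type: S3
  Location: <cache-bucket>/<prefix>   # shared cache, readable by multiple pipelines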

The ECR repository has an expiry rule which limits the number of stored image versions to 5:

{
  "rules": [
    {
      "rulePriority": 1,
      "description": "Maintain maximum of 5 versions",
      "selection": {
        "tagStatus": "any",
        "countType": "imageCountMoreThan",
        "countNumber": 5
      },
      "action": {
        "type": "expire"
      }
    }
  ]
}

The above JSON is stringified and used as LifecyclePolicyText.
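
In the CloudFormation template, that means the policy ends up on the repository resource, roughly like this. The logical resource name is illustrative, and referencing the DockerImageRepository stack parameter is an assumption based on the parameters above:

EcrRepository:
  Type: AWS::ECR::Repository
  Properties:
    RepositoryName: !Ref DockerImageRepository   # assumed stack parameter
    LifecyclePolicy:
      # the stringified rule set from above
      LifecyclePolicyText: '{"rules":[{"rulePriority":1,"description":"Maintain maximum of 5 versions","selection":{"tagStatus":"any","countType":"imageCountMoreThan","countNumber":5},"action":{"type":"expire"}}]}'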

After the pipeline is executed successfully, the second stack is created. Among the outputs of this stack, you will see the endpoint of the load balancer for the service you just deployed.

Stack outputs
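
These outputs can also be read via the CLI; the stack name is a placeholder for the second (service) stack:

aws cloudformation describe-stacks \
  --stack-name <service-stack-name> \
  --query "Stacks[0].Outputs"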

Additional considerations

  1. Upon a commit to the master branch, the task definition is updated with the new imageTag: config.json is edited during the CodeBuild stage (see buildspec.yaml, and the sketch after this list). You will not see this change in the source code, and the developer does not need to care about it; it is, however, important for the deployment that the task definition receives the relevant imageTag.
  2. The focus of this application is not Spring Boot, but a demo of how you can deploy Spring Boot applications to AWS ECS. That being said, the Spring Boot application here is not production-ready. It has no real persistence layer: the database is confined to the runtime, i.e. it is created when the application starts and destroyed when the application stops.
  3. The ECR repository in this case has a rule to keep only 5 image versions at a time. You may not want to do this in production.
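
As a rough illustration of the first point, a buildspec step that stamps the commit hash into config.json could look like this. The REPOSITORY_URI variable and the placeholder key in config.json are assumptions for the sketch, not the exact contents of the repository's buildspec.yaml:

version: 0.2
phases:
  post_build:
    commands:
      # derive a tag from the commit that triggered the build
      - IMAGE_TAG=$(echo "$CODEBUILD_RESOLVED_SOURCE_VERSION" | cut -c 1-7)
      - docker push "$REPOSITORY_URI:$IMAGE_TAG"
      # stamp the new tag into config.json for the deploy stage
      - sed -i "s/IMAGE_TAG_PLACEHOLDER/$IMAGE_TAG/" config.json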

Update:

I have been told about some cases where this configuration did not work. The reason was that the account lacked the service-linked role for Amazon ECS, which is required here. If any ECS action has happened in the account before, via the console or CLI, AWS creates this role automatically. Otherwise, create it with:

aws iam create-service-linked-role --aws-service-name ecs.amazonaws.com

or create a task definition manually from the ECS console.

Please also make sure to read the README on GitHub before you start.

Happy Coding!
