DEVOPS

Deploy your application to AWS ECS in minutes using GitHub Actions.

By Tejaswini Duluri
15 min read
Hello, I'm Tejaswini Duluri, a software engineer at BeyondScale specializing in microservices. I'm excited to share my journey of building a GitHub Actions workflow for deploying Python applications to AWS ECS, complete with dedicated staging and production environments. With a well-refined CI/CD pipeline in hand, I'm here to guide you through the process.

In this article, I'll give you a comprehensive overview of the key steps, drawing on my hands-on experience, so it can serve as a practical guide.

Creating a GitHub Repository
The first step in this journey, of course, is to create a GitHub repository for your project. This serves as the central hub for your code and your CI/CD pipeline configuration.

Push your project code to the GitHub repository. To keep the deployment process streamlined, I recommend organizing your code into separate branches for the staging and production environments.
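If your code currently lives on a single default branch, creating and publishing the two long-lived branches takes a moment (the branch names here simply follow the convention used in this article):

# Create the deployment branches from your current branch,
# then publish both to GitHub in one push.
git branch staging
git branch production
git push -u origin staging production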


Secrets Registration
When it comes to handling secrets and environment variables, maintaining a balance between transparency and security is crucial.

Instead of uploading the entire .env file to your remote repository, it is better to use the "Secrets and variables" manager offered by GitHub.

  • 1
    Navigate to Repository Settings
    Begin by accessing your GitHub repository's settings tab. This is where you'll initiate the setup for handling secrets and variables.
  • 2
    Access Secrets and Variables
    Inside the Settings tab, you'll find the "Secrets and variables" section. This is the control center for managing environment-related secrets and variables; here you set up values scoped to each environment. At the repository level, GitHub lets you store them as repository secrets, repository variables, environment secrets, or environment variables.
As mentioned earlier, remember to declare environment-scoped values separately for the staging and production environments. In my application, all the remaining values are declared as repository secrets.
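If you prefer the command line, the GitHub CLI can register these values for you. A quick sketch (the names are from this project, the values are placeholders, and an environment must already exist in the repository settings before you can scope values to it):

gh secret set SMB_APP_TOKEN --body "your-app-token"
gh secret set AWS_ACCESS_KEY_ID --body "AKIA..."
gh variable set REDIS_HOST --env staging --body "redis.example.internal"
gh variable set REDIS_PORT --env staging --body "6379"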
Creating Task Definition Files

Now, let's dive into the process of creating task definition files tailored to your specific environments. These task definition files play a crucial role in orchestrating deployments on AWS ECS. Here's how to go about it:
  • 1
    Directory Setup
    Begin by organizing your project's repository. Inside the repository, you'll want to establish a designated folder for your AWS-related configurations. If this folder, typically named .aws, doesn't already exist, go ahead and create it at the root level of your repository.
  • 2
    Task Definition Files
    Within the newly created .aws directory, it's time to craft your task definition files. These files should be meticulously structured to correspond to your different deployment environments, such as staging and production. For instance, you might create files named task-definition-staging.json and task-definition-prod.json.

    Sample Task Definition File:
    To provide you with a reference, I'm including a sample task definition file below. This file serves as a blueprint for defining the essential parameters required for orchestrating your containers on AWS ECS:
Replace {image arn} with your container image's Amazon ECR URI. In the workflow shown later, the render step overwrites this field automatically with the freshly pushed image on every deploy.

{
  "family": "smb-backend-td",
  "containerDefinitions": [
    {
      "name": "smb-backend",
      "image": "{image arn}",
      "cpu": 0,
      "portMappings": [
        {
          "name": "smb-backend-8000-tcp",
          "containerPort": 8000,
          "hostPort": 8000,
          "protocol": "tcp",
          "appProtocol": "http"
        }
      ],
      "essential": true,
      "environment": [],
      "mountPoints": [],
      "volumesFrom": [],
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "/ecs/smb-backend-td",
          "awslogs-region": "ap-south-1",
          "awslogs-stream-prefix": "ecs"
        },
        "secretOptions": []
      }
    }
  ],
  "taskRoleArn": "arn:aws:iam::126819498774:role/ecsTaskExecutionRole",
  "executionRoleArn": "arn:aws:iam::126819498774:role/ecsTaskExecutionRole",
  "networkMode": "awsvpc",
  "requiresCompatibilities": ["FARGATE"],
  "cpu": "1024",
  "memory": "2048",
  "runtimePlatform": {
    "cpuArchitecture": "X86_64",
    "operatingSystemFamily": "LINUX"
  }
}
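Before wiring the file into CI, you can sanity-check it by registering it once manually with the AWS CLI (after substituting a real image URI for the {image arn} placeholder, and assuming your CLI credentials point at the right account and region):

# Registers a new revision of the task definition family from the JSON file
aws ecs register-task-definition \
  --cli-input-json file://.aws/task-definition-staging.json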

Creating a GitHub Workflow File

A GitHub workflow file is a critical component that defines your CI/CD pipeline. Inside your project's repository, create a .github/workflows directory if it's not already in place. This directory serves as the home for your workflow configurations.

Within the newly created .github/workflows directory, create a YAML file that encapsulates your workflow. This file, placed inside .github/workflows, will define the various stages of your CI/CD pipeline. Either the .yml or .yaml file extension works.
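With both directories in place, the relevant part of the repository tree looks like this (the deploy-*.yml file names are just my own convention, not a GitHub requirement):

├── .aws/
│   ├── task-definition-staging.json
│   └── task-definition-prod.json
├── .github/
│   └── workflows/
│       ├── deploy-staging.yml
│       └── deploy-prod.yml
├── Dockerfile
└── ...your application code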

Typical Components of a GitHub Workflow File:
A GitHub workflow file is a versatile tool that empowers you to orchestrate your CI/CD processes with precision. Here's a breakdown of the essential elements typically found in a GitHub workflow file:
  • 1
    Name and Trigger
    The name of your workflow, which provides a clear identifier.
    The event or trigger that initiates the workflow. This trigger can be tied to various actions, such as push events, pull requests, issue comments, and more.
  • 2
    Jobs
    Workflows consist of one or more jobs, each representing a set of related tasks to execute. Depending on your configuration, jobs can run concurrently or sequentially.
  • 3
    Steps
    Within each job, a series of steps is defined. Each step is an individual task or command to execute. Steps can invoke actions, which are pre-built, reusable units of automation provided by the community or created by you, or run shell commands and other tasks.
  • 4
    Environment
    You have the flexibility to specify the runtime environment for your workflow, including the operating system, programming-language runtime, and tool versions used during execution.
  • 5
    Conditionals
    With GitHub Actions, you can set conditions that determine when a job or step should run. For instance, you can configure a step to execute only if a specific file has been modified in a pull request, ensuring optimized workflow execution.
  • 6
    Outputs
    Workflows can generate outputs that are invaluable for subsequent jobs or steps in your pipeline. These outputs facilitate seamless communication and data sharing within your workflow.
  • 7
    Artifacts
    You have the option to define artifacts that should be preserved after a job completes. These artifacts can be downloaded or used in other workflows. The short sketch after this list shows outputs, conditionals, and artifacts working together.
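Before we get to the real pipeline, here is a minimal, self-contained sketch of those last three ideas; nothing in it is specific to my project:

name: components-demo

on: push

jobs:
  build:
    runs-on: ubuntu-latest
    # Job-level outputs let later jobs consume this value
    outputs:
      version: ${{ steps.meta.outputs.version }}
    steps:
      - name: Compute a short version string
        id: meta
        run: echo "version=${GITHUB_SHA::7}" >> "$GITHUB_OUTPUT"
      - name: Write a build log
        run: echo "built ${{ steps.meta.outputs.version }}" > build.log
      - name: Preserve the log as an artifact
        uses: actions/upload-artifact@v3
        with:
          name: build-log
          path: build.log
  announce:
    needs: build
    runs-on: ubuntu-latest
    # Conditional: this job only runs for pushes to the staging branch
    if: github.ref == 'refs/heads/staging'
    steps:
      - run: echo "Would deploy version ${{ needs.build.outputs.version }}"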
Here is my sample workflow file. This one targets the staging environment (note the staging trigger branch and the dev environment); the production variant differs only in its trigger branch and environment-specific values:

name: Deploy to Amazon ECS

on:
  push:
    branches:
      - staging

env:
  AWS_REGION: ap-south-1
  ECR_REPOSITORY: smb-backend
  ECS_SERVICE: smb-backend
  ECS_CLUSTER: smb-dev-cluster
  ECS_TASK_DEFINITION: .aws/task-definition-staging.json
  CONTAINER_NAME: smb-backend

permissions:
  contents: read

jobs:
  deploy:
    name: Deploy
    runs-on: ubuntu-latest
    environment: dev

    steps:
      - name: Checkout
        uses: actions/checkout@v3

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ env.AWS_REGION }}

      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v1

      - name: Make envfile
        uses: SpicyPizza/create-envfile@v2.0
        with:
          envkey_APP_TOKEN: ${{ secrets.SMB_APP_TOKEN }}
          envkey_CLIENT_SECRET: ${{ secrets.SMB_CLIENT_SECRET }}
          envkey_JWT_PUBLIC_KEY: ${{ secrets.SMB_JWT_PUBLIC_KEY }}
          envkey_GOOGLE_CLIENT_SECRET: ${{ secrets.SMB_GOOGLE_CLIENT_SECRET }}
          envkey_AWS_CLOUDWATCH_LOG_GROUP: ${{ secrets.SMB_AWS_CLOUDWATCH_LOG_GROUP }}
          envkey_AWS_CLOUDWATCH_LOG_STREAM: ${{ secrets.SMB_AWS_CLOUDWATCH_LOG_STREAM }}
          envkey_MONGO_USER: ${{ secrets.SMB_MONGO_USER }}
          envkey_MONGO_PASSWORD: ${{ secrets.SMB_MONGO_PASSWORD }}
          envkey_MONGO_HOST: ${{ secrets.SMB_MONGO_HOST }}
          envkey_MONGO_DB: ${{ secrets.SMB_MONGO_DB }}
          envkey_AWS_ACCESS_KEY: ${{ secrets.SMB_AWS_ACCESS_KEY }}
          envkey_AWS_SECRET_KEY: ${{ secrets.SMB_AWS_SECRET_KEY }}
          envkey_EN_SECRET_KEY: ${{ secrets.SMB_EN_SECRET_KEY }}
          envkey_GOOGLE_SERVICE_TOKEN: ${{ secrets.SMB_GOOGLE_SERVICE_TOKEN }}
          envkey_MICROSOFT_SERVICE_TOKEN: ${{ secrets.SMB_MICROSOFT_SERVICE_TOKEN }}
          envkey_LOGGING_AWS_ACCESS_KEY: ${{ secrets.SMB_LOGGING_AWS_ACCESS_KEY }}
          envkey_LOGGING_AWS_SECRET_KEY: ${{ secrets.SMB_LOGGING_AWS_SECRET_KEY }}

          envkey_STRIPE_WEBHOOK_ENDPOINT_SECRET: ${{ secrets.STRIPE_WEBHOOK_ENDPOINT_SECRET }}
          envkey_STRIPE_API_KEY: ${{ secrets.STRIPE_API_KEY }}
          envkey_REDIS_HOST: ${{ vars.REDIS_HOST }}
          envkey_REDIS_PORT: ${{ vars.REDIS_PORT }}
          envkey_STRIPE_PAYMENT_CANCEL_PAGE: ${{ vars.STRIPE_PAYMENT_CANCEL_PAGE }}
          envkey_STRIPE_PAYMENT_SUCCESS_PAGE: ${{ vars.STRIPE_PAYMENT_SUCCESS_PAGE }}

          envkey_KAFKA_BROKERS: ${{ secrets.KAFKA_BROKERS }}
          envkey_KAFKA_USERNAME: ${{ secrets.KAFKA_USERNAME }}
          envkey_KAFKA_PASSWORD: ${{ secrets.KAFKA_PASSWORD }}

          file_name: .env
          fail_on_empty: false
          sort_keys: false

      - name: Build, tag, and push image to Amazon ECR for Development
        id: build-image-dev
        env:
          ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
          IMAGE_TAG: ${{ github.sha }}
        run: |
          # Build a docker container and push it to ECR
          # so that it can be deployed to ECS.
          docker build -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG .
          docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG
          echo "image=$ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG" >> $GITHUB_OUTPUT

      - name: Fill in the new image ID in the Amazon ECS task definition for Development
        id: task-def-dev
        uses: aws-actions/amazon-ecs-render-task-definition@v1
        with:
          task-definition: ${{ env.ECS_TASK_DEFINITION }}
          container-name: ${{ env.CONTAINER_NAME }}
          image: ${{ steps.build-image-dev.outputs.image }}

      - name: Deploy Amazon ECS task definition for Development
        uses: aws-actions/amazon-ecs-deploy-task-definition@v1
        with:
          task-definition: ${{ steps.task-def-dev.outputs.task-definition }}
          service: ${{ env.ECS_SERVICE }}
          cluster: ${{ env.ECS_CLUSTER }}
          wait-for-service-stability: true
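One thing the pipeline takes for granted: the docker build step expects a Dockerfile at the root of the repository. A minimal sketch for a Python service listening on port 8000 (matching the containerPort in the task definition) could look like this; the app/main.py entrypoint and the use of uvicorn are assumptions to adapt to your project:

FROM python:3.11-slim

WORKDIR /app

# Install dependencies first so Docker can cache this layer between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code, including the .env file created by the workflow
COPY . .

EXPOSE 8000

# Assumes an ASGI app object at app/main.py; adjust to your entrypoint
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]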

Once you push these two folders, .aws and .github/workflows, GitHub automatically detects the workflow and triggers the pipeline on pushes to the configured branches.
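To reuse the same workflow for production, duplicate the file and change only the trigger branch and the environment-specific values. The excerpt below shows just the lines that differ from the staging file; the production branch, cluster, and environment names are assumptions to replace with your own:

on:
  push:
    branches:
      - production

env:
  ECS_CLUSTER: smb-prod-cluster                        # assumed production cluster name
  ECS_TASK_DEFINITION: .aws/task-definition-prod.json

jobs:
  deploy:
    environment: prod                                  # assumed production environment name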

In my scenario, pushing to the staging branch triggers the pipeline, which you can follow in the 'Actions' tab of the repository on GitHub. Clicking a specific run shows the progress of each step.