How to deploy SBT Scala code to Amazon ECS via ECR
I struggled for a week to understand this process, so here, yet again, are notes for myself should I need to do it again.
The example project
Note that it won’t actually deploy as I have not attached the AWS secrets to it - you will have to clone it and try it on your own AWS.
What was the problem
The documentation for deploying Scala SBT builds from GitLab to ECS Fargate is pretty hard to find, and when you do find it, it takes quite a bit of learning to get the ideas straight. So here is what I had to learn.
Key are the templates:
# This file is a template, and might need editing before it works on your project.
# To contribute improvements to CI/CD templates, please follow the Development guide at:
# This specific template is located at:
Also key is dind (docker in docker) and the idea of services:
There are helpers provided by GitLab to publish to AWS ECS, but they are not great out of the box for an SBT build. GitLab also provides a helper which lets you run aws cli commands manually, and that is what I used.
AWS ECR - The registry
You have to create a registry, then an IAM user with permissions to manage it, and make it a programmatic user so you have access keys. The aws cli can then read the keys from environment variables, which means you can log in from the aws cli in a GitLab script. Google "GitLab push to ECR" and you should get instructions.
I first created a registry (ECR) - each docker image should have its own repository within the registry.
After creating a new IAM user I attached the AmazonEC2ContainerRegistryPowerUser permission. I also recorded the Access Key ID and Secret Access Key - these will be used in GitLab to run the aws cli.
You need to create a cluster, then a task definition, and then a service to run the task within the cluster.
Broadly speaking, a task definition defines one or more Docker images to run and deploy together. A service is where a task definition is actually run - i.e. you get running containers by having a service. The cluster contains one or more services.
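To make the relationship concrete, a minimal Fargate task definition is just JSON naming the container images to run. This is a sketch - the family, container name, image and port here are hypothetical, and a real one would also need an executionRoleArn so Fargate can pull from ECR:

```json
{
  "family": "svc1-task",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [
    {
      "name": "svc1",
      "image": "111111111111.dkr.ecr.eu-west-1.amazonaws.com/svc1:latest",
      "portMappings": [{ "containerPort": 8080 }],
      "essential": true
    }
  ]
}
```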
When you create the task the first time, just choose an httpd image or something. Also delete and recreate the task a couple of times, and update the task definition to a new revision several times too - i.e. just get some muscle memory of the console and the principles.
When you are creating the service you will see a "force new deployment" checkbox; if you check it, then a newly pushed image will force the service to restart. For instance, if you just build :latest and use that as the image, then every build will auto-restart the service.
AWS CLI from .gitlab-ci.yml
In GitLab settings, expand CI/CD, then Variables. You need the programmatic user above set up with its access keys, so set variables for:
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
AWS_DEFAULT_REGION, eg eu-west-1
AWS_ECR_ACCOUNT_ID
The top 3 variables are so the aws cli can run and log in, and the account ID is used to build the Docker tag URL for pushing to ECR.
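As a sketch of how the account ID builds the tag URL (the values here are illustrative, matching the example account used later in this post):

```shell
# Hypothetical values for illustration
AWS_ECR_ACCOUNT_ID=111111111111
AWS_DEFAULT_REGION=eu-west-1

# The ECR registry host is derived from the account ID and region...
ECR_REGISTRY="$AWS_ECR_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com"

# ...and the full image reference appends the repository name and tag
IMAGE="$ECR_REGISTRY/svc1:latest"
echo "$IMAGE"
```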
Or to put it another way:
The aws cli can pick up the ECR user with the correct permissions and log in using the access keys. Once logged in, it can tag the locally built Docker image with the correct remote repository information and push it there.
How to build the docker image
I went for the sbt-assembly plugin to build the fat jar. Others will use sbt-native-packager, via enablePlugins(JavaAppPackaging) in SBT, to package the app instead.
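For the sbt-assembly route, the build definition is small. This is a sketch - the jar name, Scala version and plugin version are my own choices, not taken from the example project:

```scala
// project/plugins.sbt (check for the latest sbt-assembly version)
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "1.2.0")

// build.sbt
name := "svc1"
scalaVersion := "2.13.8"

// Give the fat jar a stable name so the Dockerfile can COPY it
assembly / assemblyJarName := "svc1.jar"

// A simple merge strategy for duplicate files in dependency jars
assembly / assemblyMergeStrategy := {
  case PathList("META-INF", _*) => MergeStrategy.discard
  case _                        => MergeStrategy.first
}
```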
I already mentioned GitLab services, which are started alongside the job and reachable from it over the network. So to start a dind service which the docker client can connect to, and so build an image from a Dockerfile, you do this in the .gitlab-ci.yml:
- cd docker
- ls -lag
- docker build -t svc1 .
- docker login --username AWS --password-stdin $AWS_ECR_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com < aws_cred.txt
- docker tag svc1:latest $AWS_ECR_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/svc1:latest
- docker push $AWS_ECR_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/svc1:latest
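Pulled together, the docker stage might look like this - a sketch, not the exact file from the example project; the job and stage names and the docker image versions are my own:

```yaml
docker-build:
  stage: docker
  image: docker:20.10
  services:
    - docker:20.10-dind
  variables:
    DOCKER_TLS_CERTDIR: ""          # talk to dind over plain tcp
    DOCKER_HOST: tcp://docker:2375  # the dind service hostname
  script:
    - cd docker
    - ls -lag
    - docker build -t svc1 .
    - docker login --username AWS --password-stdin $AWS_ECR_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com < aws_cred.txt
    - docker tag svc1:latest $AWS_ECR_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/svc1:latest
    - docker push $AWS_ECR_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/svc1:latest
```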
The full file is available here. Doing dind as a service is like starting dockerd & in the script section - i.e. you could instead start the daemon yourself as a background process.
You can see I feed the aws_cred.txt file in on stdin. I get that from a previous stage which uses the aws cli to log me in. Why? I have no image with both dind and the aws cli in it - I guess I could build one, but I haven't.
So to aws login:
artifacts:
  paths:
    - docker/aws_cred.txt
  expire_in: 1 hour
script:
  - cd docker
  - aws ecr get-login-password --region $AWS_DEFAULT_REGION > aws_cred.txt
  - ls -lag
By making aws_cred.txt an artifact, the build preserves it between stages. When the aws cli runs it can grab credentials from environment variables, config files, etc. In this case it is using AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_DEFAULT_REGION.
So, to summarise: the aws cli logs in as the ECR user via the access keys. Once logged in it gets a login password. Because I have no Docker image with both the aws cli and dind in it, I make the login password an artifact and hand it to the next stage, which uses dind to build the Docker image (using a Dockerfile), tags it with the ECR URL for my cluster, service and task, and finally pushes it there.
What about building the fat jar? This is pretty much stolen from the template that GitLab provides here.
expire_in: 1 hour
- apt-get update -yqq
- apt-get install apt-transport-https -yqq
- echo "deb https://repo.scala-sbt.org/scalasbt/debian /" | tee -a /etc/apt/sources.list.d/sbt.list
- mkdir -p /root/.gnupg
- gpg --recv-keys --no-default-keyring --keyring gnupg-ring:/etc/apt/trusted.gpg.d/scalasbt-release.gpg --keyserver hkp://keyserver.ubuntu.com:80 2EE0EA64E40A89B84B2DF73499E82A75642AC823
- chmod 644 /etc/apt/trusted.gpg.d/scalasbt-release.gpg
- apt-get update -yqq
- apt-get install sbt -yqq
- java -version
- sbt sbtVersion
- sbt clean test assembly
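For completeness, the Dockerfile the docker build step refers to could be as small as this - a sketch; the base image, jar name and port are assumptions, not taken from the example project:

```dockerfile
# A minimal runtime image; the fat jar is copied in from the sbt stage's artifact
FROM eclipse-temurin:11-jre
COPY svc1.jar /app/svc1.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "/app/svc1.jar"]
```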
So, a 3-stage build using sbt, the aws cli and docker, combining GitLab incantations with AWS magic.
Finally back to the task in AWS
So, look in AWS ECR (the registry), in the repository you made, and you will see the image. Copy the address of the image, and create the task from scratch to use that image. Don’t forget to select forceNewDeployment when creating the service which runs the task.
Now edit your code, commit it, see the pipeline run, see the new image in ECR and the task restart.
Can I do this locally?
Yes, you can practice all this on your own kit.
# Install aws cli
# Then paste in the access keys for the ECR user id
aws configure --profile ecr_user
sbt clean assembly
docker build -t svc1 .
#docker run -dp 80:8080 svc1
#eg for ECR accountId: 111111111111, which matches the access keys above.
aws ecr get-login-password --region eu-west-1 | docker login --username AWS --password-stdin 111111111111.dkr.ecr.eu-west-1.amazonaws.com
docker tag svc1:latest 111111111111.dkr.ecr.eu-west-1.amazonaws.com/svc1:latest
docker push 111111111111.dkr.ecr.eu-west-1.amazonaws.com/svc1:latest
Fargate ports and security groups
Fargate will create everything you need to access your task from the internet. If you decide to alter the port number in your Dockerfile, you will have to update the security group like this:
From the cluster -> service -> task, click on the task. In the Network section you will see the ENI Id; click on it.
This takes you to the Network Interfaces page. Scroll right and click on Security Groups.
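If you prefer the CLI to clicking through the console, the same change can be made with aws ec2 - the group ID and port here are placeholders:

```shell
# Allow inbound TCP on the container port from anywhere
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 8080 \
  --cidr 0.0.0.0/0
```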
This is April 2022; no doubt in a year’s time none of this will be the same, but that’s progress.
Building and releasing code in 1988 was actually better than it is now. Even in C++ with PRAGMAS.