Note: Clone the project repo to follow along...
In part one of this series we walked through using Docker to containerize a simple Node.js app and verified our shiny new container worked as expected. Unfortunately, containers aren't much use if we can't get them deployed to start delighting users with the sheer awesomeness of our brilliant code.
In the container world, the path to production involves some sort of registry... While it sounds fancy, a registry isn't much different from the simple web servers you historically used to host RPMs or DEBs (essentially just another tool to help you maintain the ITIL concept of a Definitive Software Library). At heart, registries are still highly scalable web servers, conveniently tweaked to work with the Docker tool chain. You have a lot of choices when picking a registry, and the right one will depend on your requirements.
If you just want to get started quickly, the publicly hosted Docker Hub is a common starting point. Even when self-hosting, you will often pull a lot of starter images from Docker Hub. When you need more control over where your images are stored and integration with custom tooling, commercial offerings such as Sonatype's Nexus Repository are available. If you're already plugged into the AWS ecosystem, Elastic Container Registry (ECR) is a natural pairing for Elastic Container Service (ECS) apps. Since this series is focused on shipping containerized apps atop ECS, let's walk through using ECR.
Working with containers, the cloud, or almost anything in a post-Twelve-Factor-App world involves a lot of environment manipulation. While not strictly related to AWS or ECR, I wanted to take a minute to briefly show how I reduce the effort it takes to maintain per-project environment settings. In the end this slight detour will be useful in our tour of ECR, since we'll use several convenience scripts which rely on environment variables.
There are several tools that can help us here... You might have used autoenv, be familiar with Node.js' .env files (a similar idea in a different context), or even have a custom setup letting you skip this entire section. In this project we'll use direnv. The concept is very simple, but will save us a lot of time. A shell hook (read the installation docs to get this right for your specific shell) automatically sources configuration files as we change into directories, and unloads any exported variables when we leave those directories. This means we can save per-project settings in .envrc files and not have to remember all these details or waste time exporting them again and again!
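To make the hook concrete, bash users add a line like this to ~/.bashrc (other shells are covered in the direnv installation docs):

```shell
# Hook direnv into bash so .envrc files are loaded/unloaded automatically
# as you cd in and out of project directories.
# (zsh users: eval "$(direnv hook zsh)" in ~/.zshrc instead.)
eval "$(direnv hook bash)"
```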
Since this is an aside, I'm not going to re-cover how to configure or use direnv – their site has good documentation you can follow for that. I only want to point out two things. First, as soon as you create a .envrc, you should also echo .envrc >> .gitignore. While everything in .envrc isn't necessarily secret, secrets often reside there. You want to be sure it's never committed to source control!
Last but not least, things which are truly secret may not need to reside directly in .envrc. For example, you could keep truly sensitive values in something like Ansible Vault, then include commands in your direnv configuration to pull them from Vault when exporting into the environment (conveniently removing those bits when you leave the project's working directory).
For now, it's enough to know some of the "magic" you'll see happening below is thanks to direnv...
Similar to other AWS services, there are a few prerequisites needed before you can interact with ECS or ECR. I'm not going to detail all of those here. For one thing, you probably already know them if you're reading this. For another, it would make this more of a book than a blog post. In case it's useful, I want to be explicit about a few things you'll need.
Aside from an AWS account, you should have an IAM user or assumed role with administrator access. If you want to get more advanced and lock things down further, you can use the linked doc to specifically grant the permissions required to interact with ECS and ECR. The important thing is to adhere to the best practice of not using your root account, and to be aware that whatever user you do use needs permission to access ECR (or nothing we try below will work).
If you're using EC2 for other things, you may already have a Key Pair allowing you to SSH into instances. You won't have to worry about key pairs in this series, because we're going to ship our ECS app using the Fargate launch type (leveraging AWS-managed shared infrastructure). If you decide to use the EC2 launch type (hosting your containers on EC2 instances you manage), you will need an associated key pair for administrative tasks.
From a network perspective, you'll need a VPC and Security Group. For experimentation, assuming you haven't deleted it, you can just use the default VPC. For production services you'll likely want to create a dedicated VPC. Our container instances will need a security group which grants access to any ports we wish to expose. In our example this will just be HTTP over port 80/tcp.
Since we're going to use some scripts which wrap the AWS CLI, you also need that installed and configured. If you're on a Mac, the former is as simple as brew install awscli. For the latter, just run aws configure and follow the prompts.
Interacting with ECR
Finally, the good stuff! Let's get the app we containerized last time pushed into ECR. Unfortunately we can't just docker push, since we'll need to figure out the right registry and how to authenticate... Luckily, AWS makes this easy!
The default registry associated with an account can be derived from the account ID and region name. The AWS CLI provides a get-login-password command we can use to authenticate the docker CLI.
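Here's a minimal sketch of both pieces, using placeholder values for the account ID and region:

```shell
# Placeholder values -- substitute your real account ID and region.
AWS_ACCOUNT_ID=123456789012
AWS_DEFAULT_REGION=us-east-1

# The default registry hostname is derived from account ID + region:
ECR_REGISTRY="${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_DEFAULT_REGION}.amazonaws.com"
echo "$ECR_REGISTRY"

# Authentication pipes a short-lived password into docker login:
#   aws ecr get-login-password --region "$AWS_DEFAULT_REGION" \
#     | docker login --username AWS --password-stdin "$ECR_REGISTRY"
```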
Our project repo provides convenience scripts wrapping the requisite AWS commands. These rely on a few environment variables. As mentioned above, I'll be using direnv to auto-export the needed bits. Here's my .envrc:
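A minimal .envrc looks something like the following (values are placeholders – substitute your own account ID, region, and the repository URI we'll create shortly):

```shell
# .envrc -- auto-exported by direnv when you cd into the project.
# Placeholder values; use your own account ID, region, and repository URI.
export AWS_ACCOUNT_ID=123456789012
export AWS_DEFAULT_REGION=us-east-1
export REPO_URI=123456789012.dkr.ecr.us-east-1.amazonaws.com/node-aws-ecs-app
```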
If it's the first time you've configured a .envrc, you'll need to run direnv allow to enable exporting its contents. This ensures random (untrusted) projects which include .envrc files can't easily pwn you! Once allowed, future exports will be automatic until the file contents change.
Don't worry, we'll see how to get the REPO_URI... but notice how the location of our default registry is easily derived from our account ID and region? The provided ecs-login script simply wraps docker login (along with bits provided by .envrc) to simplify authentication:
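The script isn't reproduced verbatim here, but a minimal sketch of what such a wrapper does (the function name and structure are illustrative, assuming the variables from .envrc) might be:

```shell
# Sketch of an ecs-login-style wrapper (illustrative, not the repo's exact script).
# Assumes AWS_ACCOUNT_ID and AWS_DEFAULT_REGION are exported via .envrc.
ecs_login() {
  local registry="${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_DEFAULT_REGION}.amazonaws.com"
  aws ecr get-login-password --region "$AWS_DEFAULT_REGION" \
    | docker login --username AWS --password-stdin "$registry"
}
```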
Once authenticated, we're still not ready to push our image. First, we need to create a repository to hold it. This is similar to Docker Hub or other registries, where you have per-project or service-related repositories to keep images organized and appropriately secured. The ecr sub-command of the AWS CLI has a create-repository option for this. Since that requires a few arguments we don't want to remember each time, another wrapper helps... simply provide the name of the repository to create:
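Something like this sketch (illustrative, not the repo's exact script) captures the idea:

```shell
# Sketch of a create-repository wrapper (illustrative, not the repo's exact script).
# Usage: ecr_create_repo <repository-name>
# The region is picked up from AWS_DEFAULT_REGION, exported via .envrc.
# The JSON output includes a repositoryUri value worth saving.
ecr_create_repo() {
  aws ecr create-repository --repository-name "$1"
}
```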
Now you see where we got REPO_URI – take the value of repositoryUri from the output and add it to your .envrc (as you saw above in mine). Be sure to direnv allow again so REPO_URI can be used by our scripts below.
Now that we've authenticated and have a repository created, the actual push is entirely handled by Docker. I almost always get the tag format wrong at least once or have to refer to the documentation, so I use another script here as well. This one takes a little longer to run since it has to transfer our image contents over the network, and takes the name of the image to push (remember how we created node-aws-ecs-app in the last part using docker build?):
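A minimal sketch of such a push wrapper (function name and structure are illustrative) might be:

```shell
# Sketch of a push wrapper (illustrative, not the repo's exact script).
# Usage: ecr_push <image-name> [tag]   (tag defaults to "latest")
# Assumes REPO_URI is exported via .envrc.
ecr_push() {
  local image="$1" tag="${2:-latest}"
  # Re-tag the local image with the full registry/repository URI...
  docker tag "${image}:${tag}" "${REPO_URI}:${tag}"
  # ...then push it to ECR.
  docker push "${REPO_URI}:${tag}"
}
```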
By default the script uses the latest tag, but you can provide a tag name as the second argument if you want to push a different version. Since it's only a single command given the registry/image name (and is more often done by a remote service – ECS in our case), I haven't wrapped docker pull. However, if you want to verify the image you just pushed is actually available (trust but verify!), you can run docker pull "${REPO_URI}:latest" yourself.
We've officially published our sample app to ECR, so it can be pulled by ECS tasks to provide a real-world service! To clean things up and avoid any financial impact, we can easily remove our image and repository:
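The repo's cleanup scripts aren't reproduced verbatim, but sketches of the two wrappers (function names are illustrative) look roughly like:

```shell
# Sketches of cleanup wrappers (illustrative, not the repo's exact scripts).
# Usage: ecr_delete_image <repository-name> [tag]   (tag defaults to "latest")
ecr_delete_image() {
  aws ecr batch-delete-image \
    --repository-name "$1" \
    --image-ids imageTag="${2:-latest}"
}

# Usage: ecr_delete_repo <repository-name>
# --force deletes the repository even if it still contains images.
ecr_delete_repo() {
  aws ecr delete-repository \
    --repository-name "$1" \
    --force
}
```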
A note on our cleanup scripts... they are more liberal than you probably want to be in production. On the one hand, this is because we're testing in a sandbox. On the other, it's a good lesson to always read scripts before running them!
You can pass in a tag as the second argument to ecr-delete-image, but it defaults to latest and uses --image-ids to select what's removed (maybe you want to require an explicit tag name instead). The repository delete script uses --force. This means the delete image step was technically not necessary, since the delete repo script will wipe out the repo even if it still contains images (thankfully not the default behavior). This makes for easy cleanup while experimenting, but might not be what you want.
The point of these wrappers was not so you could copy/paste. I'd like you to be mindful of the wrapped commands, take time to understand how they work, and then tune them for your environment. Whether you use shell scripts, Ansible, Terraform or some other tool, once you figure out how to solve a problem with the AWS console or CLI, you can automate the minutiae to reduce future effort. Hopefully these give you a good starting point for your own scripts – just be aware of what they contain.
With only a few commands we've managed to push our sample app into the cloud. While not serving users just yet, we've got our code in a container registry (ECR) accessible by ECS. In the next installment of this series we'll look at preparing the Task Definitions which are used to define container instances we can expose as a bona fide service. Be sure to check back next time as we continue our ECS journey!
This is part two of a multi-part series, jump to part one: