Note: Clone the project repo to follow along...
This is going to be another multi-part series... I don't want to get too far ahead of myself, but the end goal is shipping a Node.js app using Amazon Web Services' Elastic Container Service (ECS). We have to crawl before we can walk (or run an app), so this first article will cover the basics of getting a simple Node app containerized (feel free to plug in whatever app you want).
If you haven't spent much time with containers, don't worry – we'll walk through everything you need to get started. By the end of this article we'll have a shiny new container we can use for our ECS experiments.
Since our focus is learning about ECS, we'll keep our Node app as simple as possible so it doesn't become a distraction. We'll use the popular Express framework, but any app should work similarly... Larger apps with more dependencies will just require additional steps (maybe we'll go over some of those in later articles). Of course we could create such a simple app without a framework, but having at least one dependency is more representative of a typical app and lets us show how to handle dependencies when containerizing.
For now, all we need from a Node perspective is a `package.json` describing our project and a `server.js` with a single endpoint we can use for testing. If you cloned the project repo to follow along, you can just change into the cloned directory and run `npm install`. Otherwise, create a new directory to keep things organized, then create the requisite files:
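For reference, a minimal `package.json` might look like the following sketch (the project name and Express version are illustrative placeholders, not the repo's exact values):

```json
{
  "name": "node-docker-demo",
  "version": "1.0.0",
  "description": "Minimal Express app for container experiments",
  "main": "server.js",
  "scripts": {
    "start": "node server.js"
  },
  "dependencies": {
    "express": "^4.17.1"
  }
}
```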
Once you create and save `package.json`, run `npm i` to initialize the project and install our one lonely dependency. With that done, create `server.js`, which will handle web requests with a static response:
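A minimal sketch of `server.js`, assuming Express is installed and that we serve on port 8080 (the port used for testing below):

```javascript
// server.js - single-endpoint Express app for testing.
const express = require('express');

const PORT = process.env.PORT || 8080;
const app = express();

// One endpoint returning a static response.
app.get('/', (req, res) => {
  res.send('Hello World');
});

app.listen(PORT, () => {
  console.log(`Listening on port ${PORT}`);
});
```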
To test, just run `npm start` and then make sure you see `Hello World` when sending HTTP requests to `localhost:8080` (I'm using HTTPie, but feel free to use your favorite request generator, such as curl or Postman):
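For example, with the server running, a quick check with curl might look like this:

```shell
$ curl http://localhost:8080
Hello World
```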
Since it's 2020, I'm not going to spend much time covering the ins and outs of containers. There are already reams of documentation for that. Suffice it to say, a container is kind of like a virtual box housing your application along with the dependencies it needs to run. In our case we'll build our container using Docker, but keep in mind this is just one option. Docker isn't containers; it's just one of the more popular tools available.
To define Docker containers, we need a `Dockerfile`. Refer to the documentation for all the gory details, but we only need a few lines to build something useful. The main keywords you'll routinely see are `FROM`, which specifies a starting point to build upon (often another container image including a runtime environment); `RUN`, for executing commands inside our container (useful for things like installing dependencies); `COPY`, how we'll get our source code into the container; `EXPOSE`, which, oddly enough, exposes container ports to the outside world; and `CMD`, the command run inside the container to serve our content.
Let's start with a working example, then explain a bit more:
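Here's a sketch along the lines described in the rest of this section; details like the `/usr/src/app` working directory are illustrative assumptions rather than requirements:

```dockerfile
# Build on the LTS Node image wrapped in Alpine Linux.
FROM node:lts-alpine

# Pull in OS-level updates for the base image.
RUN apk update && apk upgrade

ENV NODE_ENV=production
WORKDIR /usr/src/app

# Copy the manifests first so dependency installation is cached
# across source-only rebuilds.
COPY package.json package-lock.json ./
RUN npm ci

# Now copy the application source, owned by the non-root node user.
COPY --chown=node:node server.js ./

# Drop root privileges before starting the app.
USER node
EXPOSE 8080
CMD ["node", "server.js"]
```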
`FROM` pulls in the LTS version of Node wrapped in Alpine Linux from Docker Hub. Alpine is a lightweight Linux distribution I'm sure you've heard of. You can use other variations, especially when testing, but less bloat means faster load times, and Alpine is battle tested. Just as we'd do with other variants, we pull in OS updates with the `apk` package manager.
The next few lines configure `NODE_ENV` (a common trick allowing your app to respond differently in production than it does in test environments), set up a work directory, and copy `package.json` for installing dependencies. Be sure to also include `package-lock.json`, and rather than simply running `npm i` as we did above, use `npm ci`, which can be a lot faster and is generally more reliable.
Since containers use a layered filesystem, the order of operations is important. We carefully copy the source code only after installing dependencies and doing other things less likely to change, to leverage caching when rebuilding the container (something you do a lot during development, so the saved time adds up).
Lastly, we take advantage of the fact that Node images include a non-root `node` user (UID and GID 1000). A couple more lines to adjust the user and group of our source code and define `USER` let us drop root privileges before spawning our app. Remember not to place the `USER` line too early, since things like `apk` that adjust OS-level packages require root privileges!
Building the Container
With our `Dockerfile` ready, we just need a couple more things... First, similar to a `.gitignore`, you generally want a `.dockerignore` file at the top level of your project to avoid copying things you don't need into your container. For example, we don't want to include `node_modules`, since we install dependencies with `npm ci` inside the container:
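A minimal `.dockerignore` might contain entries like these (adjust for your own project):

```
node_modules
npm-debug.log
.git
```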
Now we can use `docker build` to turn our definition into a container image:
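For example, from the project directory (the `node-ecs-demo` tag is just an illustrative name):

```shell
$ docker build -t node-ecs-demo .
```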
That gave us a working instance of our test app consuming less than 100MB. Let's take a look, and fire it up to test:
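Assuming the image was tagged `node-ecs-demo` (an illustrative name), we can inspect it and run a container from it, publishing port 8080:

```shell
$ docker images node-ecs-demo
$ docker run -d -p 8080:8080 node-ecs-demo
$ curl http://localhost:8080
Hello World
```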
Now that we have our test application containerized, we're ready to focus on AWS specifics. In the next article we'll look at how to push our container image to Amazon's Elastic Container Registry (ECR). Similar to Docker Hub, this is a container repository we can use to make our images available to the ECS infrastructure responsible for running instances of our image. Check back next time to continue following along!
This is part one of a multi-part series, continue to part two: