Note: Clone the project repo to follow along...

This is going to be another multi-part series... I don't want to get too far ahead of myself, but the end goal is shipping a Node.js app using Amazon Web Services' Elastic Container Service (ECS). We have to crawl before we can walk (or run an app), so this first article will cover the basics of getting a simple Node app containerized (feel free to plug in whatever app you want).

If you haven't spent much time with containers, don't worry – we'll walk through everything you need to get started. By the end of this article we'll have a shiny new container we can use for our ECS experiments.

Test App

Since our focus is learning about ECS, we'll keep our Node app as simple as possible so it doesn't become a distraction. We'll use the popular Express framework, but any app should work similarly... Larger apps with more dependencies will just require additional steps (maybe we'll go over some of those in later articles). Of course we could create such a simple app without a framework, but having at least one dependency is more representative of a typical app and lets us show how to handle package.json.

For now, all we need from a Node perspective is a package.json describing our project and a server.js with a single endpoint we can use for testing. If you cloned the project repo to follow along, you can just change into the cloned directory and npm install. Otherwise, create a new directory to keep things organized, then create the requisite files:

  "name": "node-aws-ecs",
  "version": "0.0.1",
  "description": "Node.js on AWS ECS",
  "author": "Mike Hoskins <>",
  "license": "MIT",
  "repository": {
    "type": "git",
    "url": ""
  "main": "server.js",
  "scripts": {
    "start": "node server.js"
  "dependencies": {
    "express": "^4.17.1"

Once you create and save package.json, run npm i to initialize the project and install our one lonely dependency. With that done, create server.js, which will handle web requests with a static response:

'use strict';

const express = require('express');

// Constants
const PORT = 8080;
const HOST = '0.0.0.0';

// App
const app = express();
app.get('/', (req, res) => {
  res.send('Hello World');
});

app.listen(PORT, HOST);
console.log(`Running on http://${HOST}:${PORT}`);

To test, just run npm start and then make sure you see Hello World when sending HTTP requests to localhost:8080 (I'm using HTTPie, but feel free to use your favorite request generator, such as curl or Postman):

➜ npm start

> node-aws-ecs@0.0.1 start /Users/mhoskins/src/node-aws-ecs
> node server.js

Running on

# ...

➜ http -v :8080
GET / HTTP/1.1
Accept: */*
Accept-Encoding: gzip, deflate
Connection: keep-alive
Host: localhost:8080
User-Agent: HTTPie/2.0.0

HTTP/1.1 200 OK
Connection: keep-alive
Content-Length: 11
Content-Type: text/html; charset=utf-8
Date: Sun, 08 Mar 2020 20:40:01 GMT
ETag: W/"b-Ck1VqNd45QIvq3AZd8XYQLvEhtA"
X-Powered-By: Express

Hello World
Ready to go!

Container Definition

Since it's 2020, I'm not going to spend much time covering the ins and outs of containers – there are already reams of documentation for that. Suffice it to say, a container is a kind of lightweight, isolated box housing your application along with the dependencies it needs to run. In our case we'll build our container using Docker, but keep in mind this is just one option. Docker isn't containers, it's just one of the more popular tools available.

To define Docker containers, we need a Dockerfile. Refer to the documentation for all the gory details, but we only need a few lines to build something useful. The main keywords you'll routinely see are FROM, which specifies a starting point to build upon (often another container image that includes a runtime environment); RUN, for executing commands inside our container (useful for things like installing dependencies); COPY (how we'll get our source code into the container); EXPOSE (which, oddly enough, exposes container ports to the outside world); and CMD (the command run inside the container to serve our content).

Often, you will see ENTRYPOINT used as well. I'm not going on a tangent about the differences between CMD and ENTRYPOINT since it's not relevant here, but be aware of how they interact.
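
In case it helps, here's a tiny illustration of how the two combine – ENTRYPOINT fixes the executable, while CMD supplies default arguments that docker run can override (a hypothetical throwaway image, not part of our project):

```dockerfile
FROM alpine
# ENTRYPOINT always runs; CMD provides default arguments to it
ENTRYPOINT ["echo"]
CMD ["hello"]
# docker run <image>        -> prints "hello"
# docker run <image> world  -> prints "world"
```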

Let's start with a working example, then explain a bit more:

FROM node:lts-alpine
LABEL maintainer=""

# Dependencies
RUN apk update && \
      apk upgrade && \
      apk add curl

ENV NODE_ENV production

WORKDIR /app

# Ensure we get package-lock.json and take advantage of cache
COPY package*.json ./

RUN npm ci --only=production

RUN mkdir -p ./src
COPY --chown=999:999 server.js ./src

USER node

EXPOSE 8080

CMD [ "node", "./src/server.js" ]

Our FROM pulls in the LTS version of Node on Alpine Linux from Docker Hub. Alpine is a lightweight Linux distribution I'm sure you've heard of. You can use other variants, especially when testing, but less bloat means faster load times and Alpine is battle tested. Just as we'd do with other variants, we pull in updates with the apk commands.

The next few lines configure NODE_ENV (a common trick allowing your app to respond differently in production than it does in test environments), set up a work directory, and copy package.json for installing dependencies. Be sure to also include package-lock.json, and rather than simply running npm i as we did above, use npm ci, which can be a lot faster and is generally more reliable.
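
As a quick sketch of what that NODE_ENV trick looks like in application code (illustrative only, not part of our test app):

```javascript
// NODE_ENV is set to "production" by our Dockerfile; apps commonly
// branch on it to tune behavior per environment.
const env = process.env.NODE_ENV || 'development';
const isProduction = env === 'production';

// e.g. only emit verbose debug logging outside production
const debug = isProduction ? () => {} : console.log;
debug(`starting in ${env} mode`);
```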

Since container images use a layered filesystem, the order of operations is important. We deliberately copy source code in after installing dependencies and other things less likely to change, so rebuilds can leverage the cache (something you do a lot during development, so the saved time adds up).
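
For contrast, a cache-unfriendly ordering would look something like this sketch – because the whole source tree is copied before dependencies are installed, any source change invalidates the npm layer and forces a full reinstall on every rebuild:

```dockerfile
# Anti-pattern: copying everything first busts the dependency cache
COPY . .
RUN npm ci --only=production
```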

Lastly, we take advantage of the fact that Node images include a non-root node user (UID and GID 999). A couple more lines to adjust the user and group of our source code and set USER let us drop root privileges before spawning our app. Remember not to place the USER line too early, since things like apk that adjust OS-level packages require root privileges!
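
To make that pitfall concrete, a sketch like the following would fail at build time, since apk needs root once USER has dropped privileges:

```dockerfile
FROM node:lts-alpine
USER node
# This step fails: apk requires root, but we're now running as node
RUN apk add curl
```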

Building the Container

With our Dockerfile ready, we just need a couple more things... First, similar to a .gitignore, you generally want a .dockerignore file at the top level of your project to avoid copying things you don't need into your container image. For example, we don't want to include node_modules since we install dependencies inside the container with npm.

node_modules

# Extend as needed...

Now we can use docker build to turn our definition into a container:

➜ docker build -t node-aws-ecs-app .
Sending build context to Docker daemon    105kB
Step 1/12 : FROM node:lts-alpine
 ---> b0dc3a5e5e9e
Step 2/12 : LABEL maintainer=""
 ---> Running in 0a34cecb1b9a
Removing intermediate container 0a34cecb1b9a
 ---> 5443cc852fc9
Step 3/12 : RUN apk update && apk upgrade
 ---> Running in bfc166bf526b
v3.11.3-113-g5cfecee709 []
v3.11.3-112-gbfe51f74f3 []
OK: 11268 distinct packages available
(1/3) Upgrading musl (1.1.24-r0 -> 1.1.24-r1)
(2/3) Upgrading ca-certificates-cacert (20191127-r0 -> 20191127-r1)
(3/3) Upgrading musl-utils (1.1.24-r0 -> 1.1.24-r1)
Executing busybox-1.31.1-r9.trigger
OK: 7 MiB in 16 packages
Removing intermediate container bfc166bf526b
 ---> 62270f989cd6
Step 4/12 : ENV NODE_ENV production
 ---> Running in acb7b5990adb
Removing intermediate container acb7b5990adb
 ---> 841ce91f153b
Step 5/12 : WORKDIR /app
 ---> Running in b1386e718137
Removing intermediate container b1386e718137
 ---> a9c85df95276
Step 6/12 : COPY package*.json ./
 ---> af2ea783c184
Step 7/12 : RUN npm ci --only=production
 ---> Running in 24e173dc19ba
added 50 packages in 1.145s
Removing intermediate container 24e173dc19ba
 ---> 2047a8958f2f
Step 8/12 : RUN mkdir -p ./src
 ---> Running in 072a3271b2dd
Removing intermediate container 072a3271b2dd
 ---> 0e8289224623
Step 9/12 : COPY --chown=999:999 server.js ./src
 ---> 8dba9af96dd0
Step 10/12 : USER node
 ---> Running in bab5f0735af1
Removing intermediate container bab5f0735af1
 ---> f83e6959644f
Step 11/12 : EXPOSE 8080
 ---> Running in 1256f013b1cf
Removing intermediate container 1256f013b1cf
 ---> 27d3adce4758
Step 12/12 : CMD [ "node", "./src/server.js" ]
 ---> Running in 52b0dc7f311f
Removing intermediate container 52b0dc7f311f
 ---> abfe310d5708
Successfully built abfe310d5708
Successfully tagged node-aws-ecs-app:latest
Building the container

That gave us a working image of our test app weighing in at less than 100MB. Let's take a look, then fire it up to test:

➜ docker images|grep node-aws-ecs-app
node-aws-ecs-app        latest               abfe310d5708        2 minutes ago       90MB

➜ docker run -p 8080:8080 -d node-aws-ecs-app

➜ docker ps|grep node-aws-ecs-app
2ed68ceedaf7        node-aws-ecs-app    "docker-entrypoint.s…"   18 seconds ago      Up 18 seconds>8080/tcp   wizardly_aryabhata

➜ docker logs 2ed68ceedaf7
Running on

➜ http -v :8080
GET / HTTP/1.1
Accept: */*
Accept-Encoding: gzip, deflate
Connection: keep-alive
Host: localhost:8080
User-Agent: HTTPie/2.0.0

HTTP/1.1 200 OK
Connection: keep-alive
Content-Length: 11
Content-Type: text/html; charset=utf-8
Date: Sun, 08 Mar 2020 21:46:28 GMT
ETag: W/"b-Ck1VqNd45QIvq3AZd8XYQLvEhtA"
X-Powered-By: Express

Hello World
Testing the container

Next Steps

Now that we have our test application containerized, we're ready to focus on AWS specifics. In the next article we'll look at how to push our container image to Amazon's Elastic Container Registry (ECR). Similar to Docker Hub, this is a container repository we can use to make our images available to the ECS infrastructure responsible for running instances of our image. Check back next time to continue following along!


This is part one of a multi-part series, continue to part two:

Preparing for ECS