
Define a container

resin.io uses Docker containers to manage applications. You can use one or more containers to package your services with whichever environments and tools they need to run.

To ensure a service has everything it needs, you'll want to create a list of instructions for building a container image. Whether the build process runs on your device, on your workstation, or on our builders, the end result is a read-only image that ends up on your device. This image is used by the container engine (balena or Docker, depending on the resinOS version) to kick off a running container.


The instructions for building a container image are written in a Dockerfile - this is similar to a Makefile in that it contains a recipe, or set of instructions, for building the container.

The syntax of Dockerfiles is fairly simple - at its core there are only two kinds of valid entries in a Dockerfile: comments, prefixed with # as in script files, and instructions of the format INSTRUCTION arguments.

Typically you will only need a handful of instructions - FROM, RUN, ADD or COPY, and CMD (a short example follows the list):

  • FROM has to be the first instruction in any valid Dockerfile and defines the base image to use as the basis for the container you're building.

  • RUN simply executes commands in the container - this can either be a single command line, e.g. RUN apt-get -y update, which is run via /bin/sh -c, or the exec form [ "executable", "param1", "param2", ... ], which is executed directly.

  • ADD copies files from the build context into the container, e.g. ADD <src> <dest>. Note that if <dest> doesn't exist, it will be created for you, e.g. if you specify a folder. If <src> is a local tar archive it will be unpacked for you. <src> may also be a URL, but remote URLs are not unpacked.

  • COPY is very similar to ADD, but without the tar auto-extraction and URL functionality. According to the Dockerfile best practices, you should always use COPY unless the auto-extraction capability of ADD is needed.

  • CMD provides the default command for an executing container. This command is run when the container starts up on your device, whereas RUN commands are executed on our build servers. In a resin.io application, CMD is typically used to execute a start script or the entrypoint for the user's application. CMD should always be the last instruction in your Dockerfile. The only processes that will run inside the container are the CMD command and any processes it spawns.
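Putting these instructions together, a minimal sketch of a Dockerfile for a Node.js service might look like the following (the base image tag, package names, and file paths are illustrative):

# base image for the target device
FROM resin/raspberrypi3-node:slim

# install any native packages the service needs
RUN apt-get update && apt-get install -y alsa-utils

# copy the application source into the image
COPY . /usr/src/app

# default command run when the container starts on the device
CMD ["node", "/usr/src/app/main.js"]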

For details on other instructions, consult the official Dockerfile documentation.

Using Dockerfiles with resin.io

To deploy a single-container application to resin.io, simply place a Dockerfile at the root of your repository. A docker-compose.yml file will be automatically generated, ensuring your container has host networking, is privileged, and has /lib/modules, /lib/firmware, and /run/dbus bind mounted into the container. The default docker-compose.yml will look something like this:

version: '2.1'
networks: {}
volumes:
  resin-data: {}
services:
  main:
    build:
      context: .
    privileged: true
    restart: always
    network_mode: host
    volumes:
      - 'resin-data:/data'
    labels:
      io.resin.features.kernel-modules: '1'
      io.resin.features.firmware: '1'
      io.resin.features.dbus: '1'
      io.resin.features.supervisor-api: '1'
      io.resin.features.resin-api: '1'
      io.resin.update.strategy: download-then-kill
      io.resin.update.handover-timeout: ''

Applications with multiple containers should include a Dockerfile or package.json in each service directory. A docker-compose.yml file will need to be defined at the root of the repository, as discussed in our multicontainer documentation.
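As a rough sketch, a multicontainer docker-compose.yml with two hypothetical services, each built from its own directory, might look like this:

version: '2.1'
services:
  frontend:
    build: ./frontend
    ports:
      - "80:80"
  sensor-logger:
    build: ./sensor-logger
    restart: always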

You can also include a .dockerignore file with your project if you wish the builder to ignore certain files.

NOTE: You don't need to worry about ignoring .git as the builders already do this by default.
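For example, a .dockerignore that keeps local build output and editor clutter out of the build context might contain entries like these (the patterns are illustrative):

node_modules
npm-debug.log
*.swp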

Dockerfile templates

One of the goals of resin.io is code portability and ease of use, so you can easily manage and deploy a whole fleet of different devices. This is why Docker containers were such a natural choice. However, there are cases where Dockerfiles fall short and can't easily target multiple different device architectures.

To allow our builders to build containers for multiple architectures from one code repository, we implemented simple Dockerfile templates.

It is now possible to define a Dockerfile.template file that looks like this:


FROM resin/%%RESIN_MACHINE_NAME%%-node:slim

COPY package.json /package.json
RUN npm install

COPY src/ /usr/src/app
CMD ["node", "/usr/src/app/main.js"]

This template will build and deploy a Node.js project for any of the devices supported by resin.io, regardless of whether the device architecture is ARM or x86. In this example, you can see the build variable %%RESIN_MACHINE_NAME%%. This will be replaced by the machine name (e.g. raspberry-pi) at build time. See below for a list of machine names.

The machine name is inferred from the device type of the application you are pushing to. So if you have an Intel Edison application, the machine name will be intel-edison and an i386 architecture base image will be built.
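For example, when pushing to a Raspberry Pi 3 application, the FROM line in the template above would resolve at build time to:

FROM resin/raspberrypi3-node:slim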

Note: You need to ensure that your dependencies and Node.js modules are also multi-architecture, otherwise you will have a bad time.

Currently our builder supports the following build variables:

  • RESIN_MACHINE_NAME - The name of the Yocto machine this board is based on. It is the name you will see in most of the Docker base images, and it helps us identify a specific BSP.

  • RESIN_ARCH - The instruction set architecture of the base images associated with this device.

Note: If your application contains devices of different types, the %%RESIN_MACHINE_NAME%% build variable will not evaluate correctly for all devices. Your application containers are built once for all devices, and the %%RESIN_MACHINE_NAME%% variable will pull from the device type associated with the application, rather than the target device. In this scenario, you can use %%RESIN_ARCH%% to pull a base image that matches the shared architecture of the devices in your application.
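For instance, a Dockerfile.template targeting the shared architecture rather than a specific machine name could start with a FROM line like the following (the image name is illustrative of the architecture-named base images):

FROM resin/%%RESIN_ARCH%%-debian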

If you want to see an example of build variables in action, have a look at this basic openssh example.

Here are the supported machine names and architectures, listed as device, machine name, and architecture:

Raspberry Pi (v1 and Zero) raspberry-pi rpi
Raspberry Pi 2 raspberry-pi2 armv7hf
Raspberry Pi 3 raspberrypi3 armv7hf
BeagleBone Black beaglebone-black armv7hf
BeagleBone Green Wireless beaglebone-green-wifi armv7hf
BeagleBone Green beaglebone-green armv7hf
Intel Edison intel-edison i386
Intel NUC intel-nuc amd64
Jetson TX2 jetson-tx2 aarch64
Hummingboard hummingboard armv7hf
Nitrogen 6X nitrogen6x armv7hf
Samsung Artik 1020 artik10 armv7hf
Samsung Artik 520 artik5 armv7hf
RushUp Kitra 520 kitra520 armv7hf
Samsung Artik 710 artik710 aarch64
RushUp Kitra 710 kitra710 aarch64
UpBoard up-board amd64
Technologic TS-4900 ts4900 armv7hf
Odroid C1/C1+ odroid-c1 armv7hf
Odroid XU4 odroid-xu4 armv7hf
Variscite DART-6UL imx6ul-var-dart armv7hf
Generic ARMv7-a HF generic-armv7ahf armv7hf
Generic AARCH64 (ARMv8) generic-aarch64 aarch64

Init system

Enable the init system

Whatever you define as CMD in your Dockerfile will be PID 1 of the process tree in your container. This means the PID 1 process needs to know how to properly handle UNIX signals and reap orphaned zombie processes [1], and if it crashes, your whole container crashes with it, meaning you lose logs and debug info.

For these reasons we have built an init system into most of the resin base images listed in the Resin Base Images Wiki. The init system will handle signals, reap zombies, and properly handle udev hardware events.

There are two ways of enabling the init system in your application. You can add the following environment variable in your Dockerfile:

# enable container init system.
ENV INITSYSTEM on

You can also enable the init system from the dashboard: navigate to the Service variables menu item on the left and add INITSYSTEM with a value of on.

Once you have enabled the init system, you should see a message in your device logs confirming that it is enabled.

You shouldn't need to make any adjustments to your code or CMD - it should just work out of the box. Note that if you are using our Debian or Fedora based images you will have systemd in your containers, whereas our Alpine images use OpenRC as the init system.

Setting up a systemd service

In some cases it's useful to set up a service that starts when your container starts. To do this with systemd, make sure you have the init system enabled in your container as described above. You can then create a basic service file in your code repository called my_service.service and add something like the following, pointing ExecStart at your own start script:

[Unit]
Description=My Super Sweet Service

[Service]
ExecStart=/usr/src/app/start.sh

[Install]
WantedBy=multi-user.target
Then by adding the following to your Dockerfile your service should be added/enabled on startup:

COPY my_service.service /etc/systemd/system/my_service.service
RUN systemctl enable /etc/systemd/system/my_service.service

Check out the systemd.service documentation if you need a different service type (for example, oneshot for a process that exits once it has finished its work, or forking for a service that daemonizes itself).
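As a sketch, a oneshot unit that runs a setup script to completion at startup (the script path is hypothetical) would use a [Service] section like this:

[Service]
Type=oneshot
ExecStart=/usr/src/app/setup.sh
RemainAfterExit=yes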

Node applications

resin.io supports Node.js natively, using the package.json file located in the root of the repository to determine how to build and execute Node applications.

When you push your code to your application's git endpoint the deploy server generates a container for the environment your device operates in, deploys your code to it and runs npm install to resolve npm dependencies, reporting progress to your terminal as it goes.

If the build executes successfully, the container is shipped over to your device, where the supervisor runs it in place of any previously running containers, using npm start to execute your code (note that if no start script is specified, it defaults to running node server.js).
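For example, a package.json scripts section that defines an explicit start command (the entry point name here is illustrative) would look like this:

  "scripts": {
    "start": "node app.js"
  }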

Node.js Example

A good example of this is the text-to-speech application - here's its package.json file:

  "name": "resin-text2speech",
  "description": "Simple resin app that uses Google's TTS endpoint",
  "repository": {
    "type": "git",
    "url": ""
  "scripts": {
    "preinstall": "bash"
  "version": "0.0.3",
  "dependencies": {
    "speaker": "~0.0.10",
    "request": "~2.22.0",
    "lame": "~1.0.2"
  "engines": {
      "node": "0.10.22"

Note: We don't specify a start script here which means node will default to running server.js.

Before npm install tries to satisfy the code's dependencies, the preinstall script executes a bash script. Let's have a look at that:

# install native packages needed by the application
apt-get install -y alsa-utils libasound2-dev
# move the sound_start helper into /usr/bin so the node code can use it
mv sound_start /usr/bin/sound_start

These are shell commands that are run within the container on the build server, which is configured so that dependencies are resolved for the target architecture rather than the build server's - this can be very useful for deploying non-JavaScript code or fulfilling package dependencies that your Node code might require.

We use Raspbian as our contained operating system, so this script uses apt-get to install native packages before moving a script for our Node code to use into /usr/bin (the install script runs with root privileges within the container).

Note: With a plain Node.js project, our build server will automatically detect the Node version specified in the package.json file and build the container from a Docker image with that Node version installed. The default Node version is 0.10.22, and it is used if no version is specified. There will be an error if the specified Node version is not in our registry; you can either try another version or contact us to have it supported. More details about the Docker Node images in our registry can be found here.
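For instance, to request a different Node version you would set the engines field in package.json accordingly (the version shown is illustrative and must exist in our registry):

  "engines": {
    "node": "0.12.7"
  }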