
Container Runtime

On devices all your application code runs within a Docker container. This means that whatever you define as CMD in your Dockerfile will be PID 1 of the process tree in your container. It also means that this PID 1 process needs to know how to properly handle UNIX signals and reap orphaned zombie processes [1], and that if it crashes, your whole container crashes, losing logs and debug info.
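To illustrate what is expected of PID 1, here is a minimal sketch (a stand-in, not resin's actual init system) of a shell script acting as PID 1 that traps SIGTERM so the container shuts down cleanly on docker stop instead of being SIGKILLed after a timeout:

```shell
#!/bin/sh
# Minimal PID 1 sketch: trap SIGTERM so `docker stop` results in a
# clean shutdown rather than a forced SIGKILL after the grace period.
term_handler() {
  echo "caught SIGTERM, shutting down"
  exit 0
}
trap term_handler TERM

# The loop stands in for your real application workload.
while true; do
  sleep 1
done
```

A real init system also reaps zombies and forwards signals to child processes, which is exactly what the built-in init system described below takes care of.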

Init System

For these reasons we have built an init system into most of the resin base images listed in the Resin Base Images Wiki. The init system handles signals, reaps zombies, and also correctly handles udev hardware events.

There are two ways of enabling the init system in your application. You can either add the following environment variable in your Dockerfile:

# enable container init system.
ENV INITSYSTEM on

or you can add an environment variable from the Dashboard by navigating to the Environment Variables menu item on the left and adding the variable as shown below: Enable init system

Once you have enabled your init system you should see something like this in your device logs: init system enabled in logs

You shouldn't need to make any adjustments to your code or CMD; it should just work out of the box. Note that if you are using our Debian or Fedora based images, you will have systemd in your container, whereas if you use one of our Alpine images you will have OpenRC as your init system.

SSH Access

To help you debug and develop your application in a container, we've provided a browser-based terminal and a command-line tool called resin ssh. These give you console access to your running container on the device and allow you to test out small snippets of code or check some system logs on your device.

In order to start a terminal session for your device's container, you first need to ensure that the device is online and that it has a running application container. If your container code crashes or exits quickly, it is not possible to attach a console to it. One option to keep your container running is to enable the INITSYSTEM in your container. This can easily be done by creating a device environment variable called INITSYSTEM and setting its value to on.

Using the dashboard web terminal

To use this feature, navigate to your application and select the device you want to access. You will see a Terminal window below the Logs window:

If your device is online and has a running container, then simply click the blue >_ Start Terminal session button and a terminal session should be initiated for you in a second or two. If you would like a bigger window for the terminal, you can click the Expand button in the upper-right corner.

Using resin ssh from the CLI

If you prefer to work from the command line, you can use resin ssh to connect to your running application container. First, you will need to install the resin Command Line Interface (CLI). Once that is set up, run the following in your development machine's terminal:

$ resin ssh <device-uuid>

<device-uuid> is the unique identifier for the device you want to access, which can be found on the dashboard.

resin ssh makes use of the resin VPN connection to access a device. This allows you to access and test devices wherever they are. If you only want SSH access over the internal network, you can instead install an SSH server in your container.


One note: if you run your own SSH server in the container, you won't automatically get your environment variables in the SSH session. To bring them in, simply run . <(xargs -0 bash -c 'printf "export %q\n" "$@"' -- < /proc/1/environ). Now any operations or code you run from the SSH session will be able to access the environment variables you set on your dashboard (see the gitter discussion for more info). Alternatively, use the following command in your Dockerfile to update root's .profile so the resin variables are sourced at each tty/ssh login:

echo ". <(xargs -0 bash -c 'printf \"export %q\n\" \"\$@\"' -- < /proc/1/environ)" >> /root/.profile
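To see what this incantation does, here is a self-contained sketch that uses a stand-in file (/tmp/fake_environ, a made-up path for illustration) in place of /proc/1/environ; the real file has the same NUL-separated KEY=VALUE format:

```shell
#!/bin/bash
# Build a stand-in for /proc/1/environ: NUL-separated KEY=VALUE pairs.
printf 'RESIN_DEVICE_UUID=1234abcd\0RESIN_APP_NAME=myapp\0' > /tmp/fake_environ

# Same trick as above: turn each pair into a quoted `export` statement
# and source the result into the current shell.
. <(xargs -0 bash -c 'printf "export %q\n" "$@"' -- < /proc/1/environ 2>/dev/null \
    || xargs -0 bash -c 'printf "export %q\n" "$@"' -- < /tmp/fake_environ)

echo "$RESIN_APP_NAME"
```

The printf %q quoting ensures values containing spaces or shell metacharacters survive the round trip intact.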

Accessing the host OS

For devices running resinOS versions 2.7.5 and above, it is possible to SSH into the host OS as well as the application container. This gives you access to logs and tools for services that operate outside the scope of your application, such as NetworkManager, Docker, the VPN, and the supervisor. Like container SSH access, it requires the VPN to be active and connected.

Warning: Making changes to running services and network configurations carries the risk of losing access to your device. Before making changes to the host OS of a remote device, it is best to test locally. Changes made to the host OS will not be maintained when the OS is updated, and some changes could break the updating process. When in doubt, reach out to us for guidance.

Host OS SSH access is available via the dashboard. To start a session, click Select a target in the Terminal window, and then select Host OS:

To use this option in the CLI, add the --host or -s option to the resin ssh command:

$ resin ssh <device-uuid> -s

Host OS access via the CLI requires resin CLI version 6.12.0 or above.

The Container Environment

When you start a terminal session, either via the web terminal or the CLI, you are dropped into your application's running container. It's important to note that your container needs to be running for you to SSH into it; this is where the init system helps a lot. By default you are the root user and granted root privileges in the container.

If you're running a custom Dockerfile the location of your code will be as specified by you in the file. The recommended file path for your code is /usr/src/app as you will see in most of our demo projects. If you're running a pure node.js application (i.e. an application that has no Dockerfile or Dockerfile.template but rather a package.json), all your code will be automatically placed in /app, which has a symbolic link to /usr/src/app.
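A custom Dockerfile following that convention might look like this (a sketch; the base image, start script name, and COPY layout are illustrative, not required):

```Dockerfile
# Example device-specific resin base image; pick one for your device type
FROM resin/raspberrypi3-debian:jessie

# Put the application code in the recommended location
WORKDIR /usr/src/app
COPY . ./

# Whatever CMD runs becomes PID 1 in the container
CMD ["bash", "start.sh"]
```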

Inside the container we provide a number of RESIN_ namespaced environment variables. Below is a short description of some of these.

Variable Description
RESIN_DEVICE_UUID The unique identification number for the device. This is used to identify it on the dashboard.
RESIN_APP_ID ID number of the application the device is associated with.
RESIN_APP_NAME The name of the application the device is associated with.
RESIN_APP_RELEASE The commit hash of the deployed application version.
RESIN_DEVICE_NAME_AT_INIT The name of the device on first initialisation.
RESIN_DEVICE_TYPE The type of device the application is running on.
RESIN The RESIN=1 variable can be used by your software to detect that it is running on a device.
RESIN_SUPERVISOR_VERSION The current version of the supervisor agent running on the device, e.g. 1.13.0.
RESIN_SUPERVISOR_API_KEY Authentication key for the supervisor API. This makes sure requests to the supervisor only come from containers on the device. See the Supervisor API reference for detailed usage.
RESIN_SUPERVISOR_ADDRESS The network address of the supervisor API. Default: http://127.0.0.1:48484
RESIN_SUPERVISOR_HOST The IP address of the supervisor API. Default: 127.0.0.1
RESIN_SUPERVISOR_PORT The network port number for the supervisor API. Default: 48484
RESIN_API_KEY API key which can be used to authenticate requests to the backend. Can be used with the resin SDK on the device. WARNING: this API key gives the code full user permissions, so it can be used to delete and update anything just as you would on the Dashboard.
RESIN_HOST_OS_VERSION The version of the resin host OS.
RESIN_DEVICE_RESTART An internal mechanism for restarting containers; it can be ignored, as it is not useful to application code.

Here's an example from a Raspberry Pi 3:

root@raspberrypi3:/# printenv | grep RESIN
RESIN_HOST_OS_VERSION=Resin OS 1.24.0
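For example, a start script can branch on the RESIN flag from the table above (a minimal sh sketch):

```shell
#!/bin/sh
# RESIN=1 is set inside application containers on devices; anywhere
# else the variable is simply absent.
if [ "$RESIN" = "1" ]; then
  echo "running on a resin device: $RESIN_DEVICE_UUID"
else
  echo "running locally"
fi
```

This is handy for keeping one codebase that runs both on devices and on your development machine.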

Persistent Storage

If you want specific data or configurations to persist on the device through the update process, you will need to store them in /data. This is a special folder on the device file system which is essentially a Docker data VOLUME.

This folder is guaranteed to be maintained across updates and thus files contained in it can act as persistent storage. This is a good place to write system logs, etc.

Note that this folder is not mounted when your project is building on our build server, so you can't access it from your Dockerfile. The /data volume only exists when the container is running on the deployed devices.

Additionally, it is worth mentioning that the /data folder is created per-device and it is not kept in sync between devices in your fleet, so ensure your application takes this into account.
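For example, a start script can append logs under /data so they survive updates (a sketch; the app.log file name is arbitrary, and the LOG_DIR override exists only so the snippet can run off-device):

```shell
#!/bin/sh
# Write to the persistent /data volume; allow overriding the location
# (e.g. when testing off-device, where /data does not exist).
LOG_DIR="${LOG_DIR:-/data}"
mkdir -p "$LOG_DIR"
echo "$(date -u) application started" >> "$LOG_DIR/app.log"
```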

Exposed Ports

Devices expose all ports by default, meaning you can run applications which listen on any port without issue. There is no need for the Docker EXPOSE instruction in your Dockerfile.

Public Device URLs

Resin currently exposes port 80 for web forwarding. To enable web forwarding on a specific device, navigate to the device's actions tab on the dashboard and select the Enable a public URL for this device checkbox. For more information about device URLs you can head over to the Device Management page.

Enable device url

Running a server listening on port 80 with the public device URL enabled will allow you to serve content from the device to the world. Here is an example of an express.js server which will serve to the device's URL.

var express = require('express')
var app = express()

app.get('/', function (req, res) {
  res.send('Hello World!')
})

var server = app.listen(80, function () {
  var host = server.address().address
  var port = server.address().port

  console.log('Example app listening at http://%s:%s', host, port)
})

Access to /dev

In many projects you may need to control or have access to some external hardware via interfaces like GPIO, I2C or SPI. Your container will automatically have access to /dev and these interfaces, since the container is run in privileged mode. This means you should be able to use hardware modules just as you would in a vanilla Linux environment.

Note: If you are not using one of the Docker base images recommended in our base images wiki, then you will most likely need to handle the updating of /dev via udev yourself. You can see an example of how our base images handle this here.

Tips, Tricks and Troubleshooting

Writing to logs on the Dashboard

Anything the application writes to stdout and stderr should appear in the device's dashboard logs. Have a look at some of our example projects on github to get an idea of how to do this.

Reboot from Inside the Container

You may notice that if you issue a reboot, halt, or shutdown, your container either gets into a weird zombie state or doesn't do anything. The reason is that these commands do not propagate down to the host OS. If you need to issue a reboot from your container, you should use the supervisor API as shown:

curl -X POST --header "Content-Type:application/json" \
    "$RESIN_SUPERVISOR_ADDRESS/v1/reboot?apikey=$RESIN_SUPERVISOR_API_KEY"
Read more about the supervisor API

Note: RESIN_SUPERVISOR_API_KEY and RESIN_SUPERVISOR_ADDRESS should already be in your environment by default. You will also need curl installed in your container.

Alternatively, it is possible to reboot the device via the dbus interface as described in the next section.

Dbus communication with hostOS

In some cases it's necessary to communicate with the host OS systemd to perform actions on the host, for example changing the hostname. To do this you can use dbus. In order to ensure that you are communicating with the host OS systemd and not the systemd in your container, it is important to set DBUS_SYSTEM_BUS_ADDRESS for all dbus communication. The value of that environment variable differs between older and newer devices (based on the supervisor version); choose the line that is correct for your device's OS version (it can be found in your device dashboard):

# for supervisor versions 1.7.0 and newer (both resinOS 1.x and 2.x) use this version:
export DBUS_SYSTEM_BUS_ADDRESS=unix:path=/host/run/dbus/system_bus_socket

# for supervisor versions before 1.7.0 use this version:
export DBUS_SYSTEM_BUS_ADDRESS=unix:path=/run/dbus/system_bus_socket

Below you can find a couple of examples. All of them require either prepending the command with the DBUS_SYSTEM_BUS_ADDRESS=... setting from above, or setting the variable for all commands by running export DBUS_SYSTEM_BUS_ADDRESS=... with the correct value.

Note: To use the dbus-send command in the examples, you will need to install the dbus package in your Dockerfile if you are using the Debian image, or check under what name your chosen operating system supplies the dbus-send executable.

Change the Device hostname

DBUS_SYSTEM_BUS_ADDRESS=unix:path=/host/run/dbus/system_bus_socket \
  dbus-send \
  --system \
  --print-reply \
  --reply-timeout=2000 \
  --type=method_call \
  --dest=org.freedesktop.hostname1 \
  /org/freedesktop/hostname1 \
  org.freedesktop.hostname1.SetStaticHostname \
  string:"YOUR-NEW-HOSTNAME" boolean:true

Rebooting the Device

DBUS_SYSTEM_BUS_ADDRESS=unix:path=/host/run/dbus/system_bus_socket \
  dbus-send \
  --system \
  --print-reply \
  --dest=org.freedesktop.systemd1 \
  /org/freedesktop/systemd1 \
  org.freedesktop.systemd1.Manager.Reboot

Checking if device time is NTP synchronized

DBUS_SYSTEM_BUS_ADDRESS=unix:path=/host/run/dbus/system_bus_socket \
  dbus-send \
  --system \
  --print-reply \
  --reply-timeout=2000 \
  --type=method_call \
  --dest=org.freedesktop.timedate1 \
  /org/freedesktop/timedate1  \
  org.freedesktop.DBus.Properties.GetAll \
  string:"org.freedesktop.timedate1"
The reply would look like this:

method return time=1474008856.507103 sender=:1.12 -> destination=:1.11 serial=4 reply_serial=2
   array [
      dict entry(
         string "Timezone"
         variant             string "UTC"
      )
      dict entry(
         string "LocalRTC"
         variant             boolean false
      )
      dict entry(
         string "CanNTP"
         variant             boolean true
      )
      dict entry(
         string "NTP"
         variant             boolean true
      )
      dict entry(
         string "NTPSynchronized"
         variant             boolean true
      )
      dict entry(
         string "TimeUSec"
         variant             uint64 1474008856505839
      )
      dict entry(
         string "RTCTimeUSec"
         variant             uint64 1474008857000000
      )
   ]
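If you need the NTPSynchronized value in a script, one way is to grep it out of the textual reply shown above (a sketch; is_ntp_synced is a made-up helper name, and it simply scans for the boolean that follows the property name):

```shell
#!/bin/sh
# Returns success (0) if the GetAll reply passed as $1 reports
# NTPSynchronized as true, failure otherwise.
is_ntp_synced() {
  printf '%s\n' "$1" | grep -A1 '"NTPSynchronized"' | grep -q 'boolean true'
}

# Usage sketch:
#   reply=$(DBUS_SYSTEM_BUS_ADDRESS=... dbus-send ... )
#   is_ntp_synced "$reply" && echo "clock is NTP synced"
```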

Failed to install release agent

You may see the following warning when enabling your init system:

Failed to install release agent, ignoring: No such file or directory

This is a known issue and doesn't affect your code in any way. It was fixed in images deployed after 13-07-2016, so we recommend moving to a newer base image. You can see the fix here: release agent fix

Terminal Closes On Update

When you push updates or restart your container, the terminal session is automatically closed and you will see something like:

SSH session disconnected
SSH reconnecting...
Spawning shell...

The session should automatically restart once your container is up and running again.

Blacklisting kernel modules won't work

Since the /etc/modules you see in your container belongs to the container's filesystem and is not the same as /etc/modules in the host OS, adding kernel modules to the module blacklist in the container will have no effect. In order to unload a module, you need to explicitly run rmmod.

Inconsistency in /tmp Directory

At the time of writing there is an inconsistency in the behaviour of the /tmp directory between reboot and application restart. With the current behaviour, anything in /tmp will persist over a reboot, but will not persist over an application restart.

Setting Up a systemd service

In some cases it's useful to set up a service that starts when your container starts. To do this with systemd, make sure you have the init system enabled in your container as mentioned above. You can then create a basic service file in your code repository called my_service.service and add something like this:

[Unit]
Description=My Super Sweet Service

[Service]
# the path to your service's start script; this one is just an example
ExecStart=/usr/src/app/my_service.sh

[Install]
WantedBy=basic.target

Then by adding the following to your Dockerfile your service should be added/enabled on startup:

COPY my_service.service /etc/systemd/system/my_service.service
RUN systemctl enable /etc/systemd/system/my_service.service

You may also need to check the systemd service documentation in case you need a different service type (for example, Type=oneshot for scripts that run to completion and exit, or Type=forking for daemons that fork into the background).

Using DNS resolvers in your container

In the host OS, dnsmasq has been used to manage DNS since resinOS 1.1.2. This means that if you have dnsmasq or another DNS resolver such as bind9 running in your container, it can potentially cause problems, because such resolvers usually try to bind to all interfaces, which interferes with the host dnsmasq. To get around this, add bind-interfaces to the dnsmasq configuration in your container, or make sure your server only binds to external IPs, and there shouldn't be any conflicts.
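The corresponding dnsmasq configuration change is a single line (a sketch of the relevant fragment of /etc/dnsmasq.conf inside the container):

```
# Bind only to the addresses of interfaces that are actually up, instead
# of the wildcard address that the host's dnsmasq also listens on.
bind-interfaces
```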

Mounting external storage media

Mounting external storage media, such as SD cards or USB thumb drives, within your application (running inside Docker) works somewhat differently from mounting devices directly in Linux. Here we include a set of recommendations to help you get started.

Without the init system

If you have not enabled an init system in your application, or you chose to mount manually, you can add the mount logic to your start script. This can be made simpler by adding the storage media settings to /etc/fstab in your Dockerfile:

RUN echo "LABEL=mysdcard /mnt/storage ext4 rw,relatime,discard,data=ordered 0 2" >> /etc/fstab

Modify the settings as appropriate (device identification, mount point, file system, mount options); see the fstab man page for more information about the possible settings.

Then in your start script you need to create the mount directory and mount the device:

mkdir -p /mnt/storage && mount /mnt/storage
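A slightly more defensive variant of that start-script step checks /proc/mounts first, so container restarts don't trip over an already-mounted volume (a sketch; ensure_mounted is a made-up helper name):

```shell
#!/bin/sh
# Mount the given directory via its /etc/fstab entry, unless it is
# already listed in /proc/mounts.
ensure_mounted() {
  mkdir -p "$1"
  if grep -qs " $1 " /proc/mounts; then
    return 0        # already mounted, nothing to do
  fi
  mount "$1"        # resolved through the /etc/fstab entry added above
}
```

Call ensure_mounted /mnt/storage from your start script before touching the media.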

Using systemd

Normally systemd mounts entries from /etc/fstab on startup automatically, but running within Docker it will only mount entries that are not block devices, such as tmpfs entries. For non-block devices, adding entries to /etc/fstab is sufficient, for example in your Dockerfile:

RUN echo "tmpfs  /cache  tmpfs  rw,size=200M,nosuid,nodev,noexec  0 0" >> /etc/fstab

For block devices (SD cards, USB sticks), /etc/fstab entries would result in this error at runtime: Running in a container, ignoring fstab device entry for .... Instead, you have to use systemd .mount files. Let's assume you want to mount an external SD card to /mnt/storage. You then have to create a file named mnt-storage.mount, with content such as:

[Unit]
Description = External SD Card

[Mount]
What = LABEL=mysdcard
Where = /mnt/storage
Type = ext4
Options = rw,relatime,data=ordered

[Install]
WantedBy = multi-user.target

Above, modify the options in the [Unit] and [Mount] sections as appropriate. For more information, see the systemd.mount documentation.

Finally copy and enable these systemd settings in your Dockerfile:

COPY mnt-storage.mount /etc/systemd/system/
RUN systemctl enable mnt-storage.mount

This way your storage media will be mounted when your application starts. You can check the status of this job with the systemctl is-active mnt-storage.mount command.

Systemd is the init system on our Debian and Fedora base images.

Using OpenRC

OpenRC is the init system on our Alpine Linux base images. Its localmount service mounts entries defined in /etc/fstab. Unfortunately, in its current form the localmount service is explicitly filtered out and disabled by the -lxc keyword in /etc/init.d/localmount when running inside Docker, which modifies some of its behaviour.

To use OpenRC to automount your media, add your /etc/fstab entries in your Dockerfile, such as:

RUN echo "LABEL=mysdcard /mnt/storage ext4 rw,relatime,discard,data=ordered 0 2" >> /etc/fstab

Then start the localmount service manually in your start script:

rc-service localmount start

After running that command, the device should be mounted and ready to use in your application.

Because of the keyword filter, localmount cannot be started automatically (using rc-update add) and won't appear in the output of rc-status, even when it works correctly.

General tips for external media

Devices can be selected in many ways, for example by /dev entry, label, or UUID. From a practical point of view, we recommend using labels (LABEL=... entries). Labels can easily be made the same across multiple cards or thumb drives, while you can still identify each device by its UUID. Also, /dev entries are not static on some platforms; their values depend on the order in which the system brings up the devices.