`balenalib` is the central home for 26,000+ IoT-focused Docker images built specifically for balenaCloud and balenaOS. This set of images provides a way to get up and running quickly and easily, while still providing the option to deploy slim, secure images to the edge when you go to production.
Features Overview
- Multiple Architectures:
- armv5e
- armv6
- armv7hf
- aarch64
- amd64
- i386
- Multiple Distributions:
- Debian: jessie (8), stretch (9), buster (10), bullseye (11), and sid
- Alpine: 3.9, 3.10, 3.11, 3.12 and edge
- Ubuntu: xenial (16.04), bionic (18.04), cosmic (18.10), disco (19.04), eoan (19.10) and focal (20.04)
- Fedora: 30, 31, 32, 33 and 34
- Multiple language stacks:
- Node.js: 15.7.0, 14.16.0, 12.21.0 and 10.23.1
- Python: 2.7.18 (deprecated), 3.5.10, 3.6.12, 3.7.9, 3.8.6 and 3.9.1
- openJDK: 7-jdk/jre, 8-jdk/jre and 11-jdk/jre
- Golang: 1.16, 1.15.3 and 1.14.10
- Dotnet: 2.1-sdk/runtime/aspnet, 2.2-sdk/runtime/aspnet, 3.1-sdk/runtime/aspnet and 5.0-sdk/runtime/aspnet
- `run` and `build` variants designed for multistage builds.
- cross-build functionality for building ARM containers on x86.
- Helpful package installer script called `install_packages`, inspired by minideb.
How to Pick a Base Image
When starting out a project, it's generally easier to use a 'fatter' image, which contains many prebuilt dependencies and tools. These images help you get set up faster and work out the requirements for your project. For this reason, it's recommended to start with `build` variants and, as your project progresses, switch to a `run` variant with some Docker multistage build magic to slim your deploy image down. In most cases, your project can just use a Debian-based distribution, which is the default if not specified, but if you know the requirements of your project or prefer specific distros, Ubuntu, Alpine, and Fedora images are available. The set of `balenalib` base images follows a simple naming scheme, described below, which will help you select a base image for your specific needs.
How the Image Naming Scheme Works
With over 26,000 `balenalib` base images to choose from, it can be overwhelming to decide which image and tag are correct for your project. To pick the correct image, it helps to understand how the images are named, as the name indicates what is installed in the image. In general, the naming scheme for the `balenalib` image set follows the pattern below:
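A reconstruction of that pattern from the field descriptions and examples in this section (the exact separator layout is an assumption):

```text
balenalib/<hw>-<distro>-<lang_stack>:<lang_ver>-<distro_ver>-(build|run)-<yyyymmdd>
```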
Image Names
- `<hw>` is either the architecture or the device type and is mandatory. If using `Dockerfile.template`, you can replace this with `%%BALENA_MACHINE_NAME%%` or `%%BALENA_ARCH%%`. For a list of available device names and architectures, see the Device types.
- `<distro>` is the Linux distribution. Currently there are 4 distributions, namely Debian, Alpine, Ubuntu and Fedora. This field is optional and will default to Debian if left out.
- `<lang_stack>` is the programming language pack; currently we support Node.js, Python, OpenJDK, .NET, and Go. This field is optional, and if left out, no language pack will be installed, so you will just have the distribution and can later install and use any language in your image/container.
Image Tags
In the tags, all of the fields are optional, and if they are left out, they will default to their `latest` pointer.
- `<lang_ver>` is the version of the language stack, for example Node.js 10.10; it can also be substituted for `latest`.
- `<distro_ver>` is the version of the Linux distro; for example, in the case of Debian, there are 4 valid versions, namely `sid`, `jessie`, `buster` and `stretch`.
- For each combination of distro and stack, we have two variants called `run` and `build`. The `build` variant is much heavier, as it has a number of tools preinstalled to help with building source code. You can see an example of the tools that are included in the Debian Stretch variant here. The `run` variants are stripped down and only include a few useful runtime tools; see an example here. If no variant is specified, the image defaults to `run`.
- The last optional field on tags is the date tag `<yyyymmdd>`. If a date tag is specified, the pinned release will always be pulled from Docker Hub, even if there is a newer one available.
Note: Pinning to a date-frozen base image is highly recommended if you are running a fleet in production. This ensures that all your dependencies have a fixed version and won't get randomly updated until you decide to pin the image to a newer release.
Examples
balenalib/raspberrypi3-node:10.18
- `<hw>`: raspberrypi3 - the Raspberry Pi 3 device type.
- `<distro>`: omitted, so it defaults to Debian.
- `<lang>`: node - the Node.js runtime and npm will be installed.
- `<lang_ver>`: 10.18 - this gives us Node.js version 10.18.x, whatever is the latest patch version provided on balenalib.
- `<distro_ver>`: omitted, so it defaults to `buster`.
- `(build|run)`: omitted, so the image defaults to the slimmed-down `run` variant.
- `<yyyymmdd>`: omitted - we don't have a date-frozen image, so new updates pushed to the 10.18 tag, for example patch versions from Node.js, will automatically be inherited when they are available.
balenalib/i386-ubuntu-python:latest-bionic-build-20191029
- `<hw>`: i386 - the Intel 32-bit architecture that runs on the Intel Edison.
- `<distro>`: ubuntu.
- `<lang>`: python.
- `<lang_ver>`: `latest` points to the latest Python 2 version, which currently is 2.7.17.
- `<distro_ver>`: bionic is Ubuntu 18.04.
- `(build|run)`: `build` - to include things like `build-essential` and `gcc`.
- `<yyyymmdd>`: 20191029 is a date-frozen image, so this image will never be updated on Docker Hub.
run vs. build
For each combination of `<hw>`-`<distro>`-`<lang>` there is both a `run` and a `build` variant. These variants are provided to allow for easier multistage builds.
The `run` variant is designed to be a slim and minimal variant with only runtime essentials packaged into it. An example of the packages installed can be seen in the `Dockerfile` of `balenalib/armv7hf-debian:run`.
The `build` variant is a heavier image that includes many of the tools required for building from source, such as `build-essential`, `gcc`, etc. As an example, you can see the types of packages installed in the `balenalib/armv7hf-debian:build` variant here.
These variants make building multistage projects easier. Take, for example, installing an I2C Node.js package, which requires a number of build-time dependencies to build the native `i2c` node module, but we don't want to ship all of those down to our device. This is the perfect time to use multistage builds with the `build` and `run` variants.
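A sketch of that workflow (the device type, Node.js version, and file names here are illustrative assumptions, not prescribed values):

```Dockerfile
# Build stage: use the heavier -build variant, which ships the compilers
# and headers needed to compile native node modules such as i2c.
FROM balenalib/raspberrypi3-node:12-build AS build
WORKDIR /usr/src/app
COPY package.json ./
RUN npm install --production

# Deploy stage: copy only the compiled node_modules into the slim -run variant.
FROM balenalib/raspberrypi3-node:12-run
WORKDIR /usr/src/app
COPY --from=build /usr/src/app/node_modules ./node_modules
COPY . ./
CMD ["node", "index.js"]
```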
Supported Architectures, Distros and Languages
Currently, balenalib supports the following OS distributions and language stacks; if you would like to see others added, create an issue on the balena base images repo.
Distribution | Default (latest) | Supported Architectures |
---|---|---|
Debian | Debian GNU/Linux 10 (buster) | armv5e, armv6, armv7hf, aarch64, amd64, i386 |
Alpine | Alpine Linux v3.12 | armv6, armv7hf, aarch64, amd64, i386 |
Ubuntu | 18.04 LTS (bionic) | armv7hf, aarch64, amd64, i386 |
Fedora | Fedora 32 | armv7hf, aarch64, amd64, i386 |
Language | Default (latest) | Supported Architectures |
---|---|---|
Node.js | 15.7.0 | armv6, armv7hf, aarch64, amd64, i386 |
Python | 3.9.1 | armv5e, armv6, armv7hf, aarch64, amd64, i386 |
OpenJDK | 11-jdk | armv7hf, aarch64, amd64, i386, armv6 |
Go | 1.16 | armv7hf, aarch64, amd64, i386, armv6 |
Dotnet | 5.0-sdk | armv7hf, aarch64, amd64 |
Notes
Devices with a device type of `raspberry-pi` (Raspberry Pi 1 and Zero) will be built from `balenalib/rpi-raspbian` and will be Raspbian base images. The `raspberry-pi2` and `raspberrypi3` device types' Debian base images have the Raspbian package source added and Raspbian userland pre-installed.
Not all OS distro and language stack versions are compatible with each other. Notice that there are some combinations that are not available in the balenalib
base images.
- Node.js dropped 32-bit builds a while ago, so for i386-based Node.js images (Debian, Fedora and Ubuntu), the v8.x and v6.x series are official, while newer series (v10.x and v12.x) use unofficial builds.
- armv6 binaries were officially dropped from Node.js v12, so v12 armv6 support is now considered unofficial.
- The Node.js v6.x and v8.x series are not available for i386 Alpine Linux base images v3.9 and edge, as node crashes with a segfault error; we are investigating the issue and will add them back as soon as it is resolved.
Installing Packages
Installing software packages in balenalib containers is very easy, and in most cases you can just use the base image operating system's package manager. However, to make things even easier, every balenalib image includes a small `install_packages` script that abstracts away the specifics of the underlying package managers and adds the following useful features:
- Install the named packages, skipping prompts, etc.
- Clean up the package manager metadata afterward to keep the resulting image small.
- Retry if a package install fails. Sometimes a package fails to download due to a network issue, and retrying may fix this, which is particularly useful in an automated build pipeline.
An example of this in action is as follows:
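A sketch of the likely snippet, based on the description that follows (the base image tag is an illustrative assumption):

```Dockerfile
FROM balenalib/raspberrypi3-debian:buster
RUN install_packages wget git
```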
This will run an `apt-get update -qq`, then install `wget` and `git` via apt-get with the `-y --no-install-recommends` flags, and by default it will try this 2 times before failing. You can see the source of `install_packages` here.
How the Images Work at Runtime
Each `balenalib` base image has a default `ENTRYPOINT`, which is defined as `ENTRYPOINT ["/usr/bin/entry.sh"]`. This ensures that `entry.sh` is run before your code defined in the `CMD` of your `Dockerfile`.
On container startup, the `entry.sh` script first checks whether the `UDEV` flag is set to `true` or `false`. In the case where it is `false`, the `CMD` is then executed. In the case it is `true` (or `1`), `entry.sh` will check whether the container is running privileged; if it is, it will mount `/dev` to a devtmpfs and then start `udevd`. In the case of an unprivileged container, no mount will be performed and `udevd` will still be started, although it won't be of much use without the privilege.
At the end of a container's lifecycle, when a request to restart, reboot or shut down the container is sent to the supervisor, balenaEngine will send a `SIGTERM` (signal 15) to the containers, and 10 seconds later it will issue a `SIGKILL` if the container is still running. This timeout can also be configured via the `stop_grace_period` in your `docker-compose.yml`.
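For example, a `docker-compose.yml` fragment along these lines (the service name and timeout value are illustrative assumptions):

```yaml
services:
  main:
    build: ./main
    # Allow 30 seconds between SIGTERM and SIGKILL instead of the default 10
    stop_grace_period: 30s
```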
Working with Dynamically Plugged Devices
In many IoT projects, your containers will want to interact with some hardware, and often this hardware is plugged in at runtime, as in the case of USB or serial devices. In these cases, you will want to enable `udevd` in your container. In `balenalib` images this can easily be done either by adding `ENV UDEV=1` in your `Dockerfile` or by setting a device environment variable.
You will also need to run your container `privileged`. By default, any balenaCloud projects that don't contain a `docker-compose.yml` will run their containers `privileged`. If you are using a multicontainer project, you will need to add `privileged: true` to the service definitions of the services that need hardware access.
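A minimal sketch of such a service definition (the service name and build path are illustrative assumptions):

```yaml
services:
  gpio-reader:
    build: ./gpio-reader
    privileged: true
    environment:
      - UDEV=1
```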
When a `balenalib` container runs with `UDEV=1`, it will first detect whether it is running in a `privileged` container. If it is, it will mount the host OS `/dev` to a devtmpfs and then start `udevd`. Now, any time a new device is plugged in, the kernel will notify the container's `udevd` daemon and the relevant device nodes will appear in the container's `/dev`.
Note: The new balenalib base images make sure `udevd` runs in its own network namespace, so as not to interfere with cellular modems. These images should not have any of the past udev restrictions of the `resin/` base images.
Major Changes
When moving from the legacy `resin/...` base images to the `balenalib` ones, there are a number of breaking changes that you should take note of, namely:
- `UDEV` now defaults to `off`, so if you have code that relies on detecting dynamically plugged devices, you will need to enable this either in your Dockerfile or via a device environment variable. See Working with Dynamically Plugged Devices.
- The `INITSYSTEM` functionality has been completely removed, so applications that rely on systemd or openRC should install and set up the init system in their apps. See Installing your own Initsystem.
- Mounting of `/dev` to a devtmpfs will now only occur when `UDEV=on` and the container is running as `privileged`. `1`, `true` and `on` are valid values for `UDEV` and will be evaluated as `UDEV=on`; all other values turn `UDEV` off.
- Support for Debian Wheezy has been dropped.
- The `armel` architecture has been renamed to `armv5e`.
Installing your own Initsystem
Since the release of multicontainer on the balenaCloud platform, we now recommend the use of multiple containers and no longer recommend running an init system, particularly systemd, in the container, as it tends to cause a myriad of issues and undefined behavior, and requires the container to run fully privileged.
However, if your application relies on init system features, it is fairly easy to add this functionality to a balenalib base image. We have provided some examples for systemd and openRC. Please note that different systemd versions require different implementations, so for Debian Jessie and older, please refer to this example, and for Debian Stretch and later, please refer to this example.
Generally, for systemd, it just requires installing the systemd package, masking a number of services, and defining a new `entry.sh` and a `balena.service`. The `Dockerfile` below demonstrates this:
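A rough sketch of the approach; the base image, the exact set of masked units, and the entry script contents are assumptions, not the original file:

```Dockerfile
FROM balenalib/amd64-debian:stretch

ENV container docker

# Install systemd, then mask units that do not make sense inside a container
RUN apt-get update \
    && apt-get install -y --no-install-recommends systemd \
    && rm -rf /var/lib/apt/lists/*
RUN systemctl mask \
    getty@.service \
    display-manager.service \
    systemd-logind.service \
    systemd-remount-fs.service

# A custom entry script boots systemd, and a balena.service unit
# (not shown here) runs the application's CMD under systemd.
COPY entry.sh /usr/bin/entry.sh
ENTRYPOINT ["/usr/bin/entry.sh"]
```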
Building ARM Containers on x86 Machines
This is a unique feature of balenalib ARM base images that allows you to run them anywhere (running an ARM image on x86/x86_64 machines). A tool called `resin-xbuild` and QEMU are installed inside every balenalib ARM base image and can be triggered by `RUN ["cross-build-start"]` and `RUN ["cross-build-end"]`. QEMU will emulate any instructions between `cross-build-start` and `cross-build-end`. So this Dockerfile:
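(a representative sketch; the base image and the installed package are assumptions)

```Dockerfile
FROM balenalib/raspberrypi3-debian:buster
RUN [ "cross-build-start" ]
# Everything between the markers runs under QEMU emulation when built on x86
RUN install_packages python3
RUN [ "cross-build-end" ]
```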
can run on your x86 machine and there will be no `Exec format error`, which is the error you get when running an ARM binary on x86. This approach works only if the image is being built on x86 systems. Use the `--emulated` flag in `balena push` to trigger a QEMU-emulated build targeting the x86 architecture. More details can be found in our blog post here. You can find the full source code for the two cross-build scripts here.
By default, a container has no resource constraints and can use as much of a given resource as the host's kernel scheduler allows. Docker provides ways to control how much memory or CPU a container can use, by setting runtime configuration flags of the `docker run` command. This section provides details on when you should set such limits and the possible implications of setting them.
Many of these features require your kernel to support Linux capabilities. To check for support, you can use the `docker info` command. If a capability is disabled in your kernel, you may see a warning at the end of the output like the following:
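A typical example of such a warning, on a host without swap limit support:

```text
WARNING: No swap limit support
```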
Consult your operating system's documentation for enabling them. Learn more.
Memory
Understand the risks of running out of memory
It is important not to allow a running container to consume too much of the host machine's memory. On Linux hosts, if the kernel detects that there is not enough memory to perform important system functions, it throws an `OOME`, or Out Of Memory Exception, and starts killing processes to free up memory. Any process is subject to killing, including Docker and other important applications. This can effectively bring the entire system down if the wrong process is killed.
Docker attempts to mitigate these risks by adjusting the OOM priority on the Docker daemon so that it is less likely to be killed than other processes on the system. The OOM priority on containers is not adjusted. This makes it more likely for an individual container to be killed than for the Docker daemon or other system processes to be killed. You should not try to circumvent these safeguards by manually setting `--oom-score-adj` to an extreme negative number on the daemon or a container, or by setting `--oom-kill-disable` on a container.
For more information about the Linux kernel's OOM management, see Out of Memory Management.
You can mitigate the risk of system instability due to OOME by:
- Performing tests to understand the memory requirements of your application before placing it into production.
- Ensuring that your application runs only on hosts with adequate resources.
- Limiting the amount of memory your container can use, as described below.
- Being mindful when configuring swap on your Docker hosts. Swap is slower and less performant than memory but can provide a buffer against running out of system memory.
- Considering converting your container to a service, and using service-level constraints and node labels to ensure that the application runs only on hosts with enough memory.
Limit a container’s access to memory
Docker can enforce hard memory limits, which allow the container to use no more than a given amount of user or system memory, or soft limits, which allow the container to use as much memory as it needs unless certain conditions are met, such as when the kernel detects low memory or contention on the host machine. Some of these options have different effects when used alone or when more than one option is set.
Most of these options take a positive integer, followed by a suffix of `b`, `k`, `m`, or `g`, to indicate bytes, kilobytes, megabytes, or gigabytes.
Option | Description |
---|---|
-m or --memory= | The maximum amount of memory the container can use. If you set this option, the minimum allowed value is 4m (4 megabyte). |
--memory-swap * | The amount of memory this container is allowed to swap to disk. See --memory-swap details. |
--memory-swappiness | By default, the host kernel can swap out a percentage of anonymous pages used by a container. You can set --memory-swappiness to a value between 0 and 100, to tune this percentage. See --memory-swappiness details. |
--memory-reservation | Allows you to specify a soft limit smaller than --memory which is activated when Docker detects contention or low memory on the host machine. If you use --memory-reservation , it must be set lower than --memory for it to take precedence. Because it is a soft limit, it does not guarantee that the container doesn’t exceed the limit. |
--kernel-memory | The maximum amount of kernel memory the container can use. The minimum allowed value is 4m . Because kernel memory cannot be swapped out, a container which is starved of kernel memory may block host machine resources, which can have side effects on the host machine and on other containers. See --kernel-memory details. |
--oom-kill-disable | By default, if an out-of-memory (OOM) error occurs, the kernel kills processes in a container. To change this behavior, use the --oom-kill-disable option. Only disable the OOM killer on containers where you have also set the -m/--memory option. If the -m flag is not set, the host can run out of memory and the kernel may need to kill the host system’s processes to free memory. |
For more information about cgroups and memory in general, see the documentationfor Memory Resource Controller.
--memory-swap details
--memory-swap is a modifier flag that only has meaning if --memory is also set. Using swap allows the container to write excess memory requirements to disk when the container has exhausted all the RAM that is available to it. There is a performance penalty for applications that swap memory to disk often.
Its setting can have complicated effects:
- If --memory-swap is set to a positive integer, then both --memory and --memory-swap must be set. --memory-swap represents the total amount of memory and swap that can be used, and --memory controls the amount used by non-swap memory. So if --memory='300m' and --memory-swap='1g', the container can use 300m of memory and 700m (1g - 300m) of swap.
- If --memory-swap is set to 0, the setting is ignored, and the value is treated as unset.
- If --memory-swap is set to the same value as --memory, and --memory is set to a positive integer, the container does not have access to swap. See Prevent a container from using swap.
- If --memory-swap is unset, and --memory is set, the container can use as much swap as the --memory setting, if the host container has swap memory configured. For instance, if --memory='300m' and --memory-swap is not set, the container can use 600m in total of memory and swap.
- If --memory-swap is explicitly set to -1, the container is allowed to use unlimited swap, up to the amount available on the host system.
- Inside the container, tools like free report the host's available swap, not what's available inside the container. Don't rely on the output of free or similar tools to determine whether swap is present.
Prevent a container from using swap
If --memory and --memory-swap are set to the same value, this prevents containers from using any swap. This is because --memory-swap is the amount of combined memory and swap that can be used, while --memory is only the amount of physical memory that can be used.
--memory-swappiness details
- A value of 0 turns off anonymous page swapping.
- A value of 100 sets all anonymous pages as swappable.
- By default, if you do not set --memory-swappiness, the value is inherited from the host machine.
--kernel-memory details
Kernel memory limits are expressed in terms of the overall memory allocated to a container. Consider the following scenarios:
- Unlimited memory, unlimited kernel memory: This is the default behavior.
- Unlimited memory, limited kernel memory: This is appropriate when the amount of memory needed by all cgroups is greater than the amount of memory that actually exists on the host machine. You can configure the kernel memory to never go over what is available on the host machine, and containers which need more memory need to wait for it.
- Limited memory, unlimited kernel memory: The overall memory is limited, but the kernel memory is not.
- Limited memory, limited kernel memory: Limiting both user and kernel memory can be useful for debugging memory-related problems. If a container is using an unexpected amount of either type of memory, it runs out of memory without affecting other containers or the host machine. Within this setting, if the kernel memory limit is lower than the user memory limit, running out of kernel memory causes the container to experience an OOM error. If the kernel memory limit is higher than the user memory limit, the kernel limit does not cause the container to experience an OOM.
When you turn on any kernel memory limits, the host machine tracks "high water mark" statistics on a per-process basis, so you can track which processes (in this case, containers) are using excess memory. This can be seen per process by viewing /proc/<PID>/status on the host machine.
CPU
By default, each container's access to the host machine's CPU cycles is unlimited. You can set various constraints to limit a given container's access to the host machine's CPU cycles. Most users use and configure the default CFS scheduler. You can also configure the realtime scheduler.
Configure the default CFS scheduler
The CFS is the Linux kernel CPU scheduler for normal Linux processes. Several runtime flags allow you to configure the amount of access to CPU resources your container has. When you use these settings, Docker modifies the settings for the container's cgroup on the host machine.
Option | Description |
---|---|
--cpus=<value> | Specify how much of the available CPU resources a container can use. For instance, if the host machine has two CPUs and you set --cpus='1.5' , the container is guaranteed at most one and a half of the CPUs. This is the equivalent of setting --cpu-period='100000' and --cpu-quota='150000' . |
--cpu-period=<value> | Specify the CPU CFS scheduler period, which is used alongside --cpu-quota . Defaults to 100000 microseconds (100 milliseconds). Most users do not change this from the default. For most use-cases, --cpus is a more convenient alternative. |
--cpu-quota=<value> | Impose a CPU CFS quota on the container. This is the number of microseconds per --cpu-period that the container is limited to before being throttled, acting as an effective ceiling. For most use-cases, --cpus is a more convenient alternative. |
--cpuset-cpus | Limit the specific CPUs or cores a container can use. A comma-separated list or hyphen-separated range of CPUs a container can use, if you have more than one CPU. The first CPU is numbered 0. A valid value might be 0-3 (to use the first, second, third, and fourth CPU) or 1,3 (to use the second and fourth CPU). |
--cpu-shares | Set this flag to a value greater or less than the default of 1024 to increase or reduce the container’s weight, and give it access to a greater or lesser proportion of the host machine’s CPU cycles. This is only enforced when CPU cycles are constrained. When plenty of CPU cycles are available, all containers use as much CPU as they need. In that way, this is a soft limit. --cpu-shares does not prevent containers from being scheduled in swarm mode. It prioritizes container CPU resources for the available CPU cycles. It does not guarantee or reserve any specific CPU access. |
If you have 1 CPU, each of the following commands guarantees the container at most 50% of the CPU every second.
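Using the `--cpus` flag:

```shell
$ docker run -it --cpus=".5" ubuntu /bin/bash
```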
This is equivalent to manually specifying `--cpu-period` and `--cpu-quota`:
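With the default 100000-microsecond period, a 50% cap corresponds to a quota of 50000 microseconds:

```shell
$ docker run -it --cpu-period=100000 --cpu-quota=50000 ubuntu /bin/bash
```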
Configure the realtime scheduler
You can configure your container to use the realtime scheduler, for tasks which cannot use the CFS scheduler. You need to make sure the host machine's kernel is configured correctly before you can configure the Docker daemon or configure individual containers.
Warning
CPU scheduling and prioritization are advanced kernel-level features. Most users do not need to change these values from their defaults. Setting these values incorrectly can cause your host system to become unstable or unusable.
Configure the host machine’s kernel
Verify that CONFIG_RT_GROUP_SCHED is enabled in the Linux kernel by running zcat /proc/config.gz | grep CONFIG_RT_GROUP_SCHED or by checking for the existence of the file /sys/fs/cgroup/cpu.rt_runtime_us. For guidance on configuring the kernel realtime scheduler, consult the documentation for your operating system.
Configure the Docker daemon
To run containers using the realtime scheduler, run the Docker daemon with the --cpu-rt-runtime flag set to the maximum number of microseconds reserved for realtime tasks per runtime period. For instance, with the default period of 1000000 microseconds (1 second), setting --cpu-rt-runtime=950000 ensures that containers using the realtime scheduler can run for 950000 microseconds of every 1000000-microsecond period, leaving at least 50000 microseconds available for non-realtime tasks. To make this configuration permanent on systems which use systemd, see Control and configure Docker with systemd.
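For example, starting the daemon directly with the flag (on systemd hosts you would instead add it to the service unit):

```shell
$ sudo dockerd --cpu-rt-runtime=950000
```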
Configure individual containers
You can pass several flags to control a container's CPU priority when you start the container using docker run. Consult your operating system's documentation or the ulimit command for information on appropriate values.
Option | Description |
---|---|
--cap-add=sys_nice | Grants the container the CAP_SYS_NICE capability, which allows the container to raise process nice values, set real-time scheduling policies, set CPU affinity, and other operations. |
--cpu-rt-runtime=<value> | The maximum number of microseconds the container can run at realtime priority within the Docker daemon’s realtime scheduler period. You also need the --cap-add=sys_nice flag. |
--ulimit rtprio=<value> | The maximum realtime priority allowed for the container. You also need the --cap-add=sys_nice flag. |
The following example command sets each of these three flags on a debian:jessie container.
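Putting the three flags together:

```shell
$ docker run -it --cpu-rt-runtime=950000 \
                 --ulimit rtprio=99 \
                 --cap-add=sys_nice \
                 debian:jessie
```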
If the kernel or Docker daemon is not configured correctly, an error occurs.
GPU
Access an NVIDIA GPU
Prerequisites
Visit the official NVIDIA drivers page to download and install the proper drivers. Reboot your system once you have done so.
Verify that your GPU is running and accessible.
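One common way to check is the nvidia-smi tool that ships with the driver:

```shell
$ nvidia-smi
```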
Install nvidia-container-runtime
Follow the instructions at https://nvidia.github.io/nvidia-container-runtime/ and then run this command:
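On Debian/Ubuntu systems, the command is typically:

```shell
$ sudo apt-get install nvidia-container-runtime
```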
Ensure the nvidia-container-runtime-hook is accessible from $PATH.
Restart the Docker daemon.
Expose GPUs for use
Include the --gpus flag when you start a container to access GPU resources. Specify how many GPUs to use. For example:
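To expose every available GPU:

```shell
$ docker run -it --rm --gpus all ubuntu nvidia-smi
```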
Exposes all available GPUs and returns a result akin to the following:
Use the device option to specify GPUs. For example:
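Selecting a GPU by index (a device UUID from nvidia-smi -L can be used instead):

```shell
$ docker run -it --rm --gpus device=0 ubuntu nvidia-smi
```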
Exposes that specific GPU.
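For example (note the extra quoting needed for a comma-separated device list):

```shell
$ docker run -it --rm --gpus '"device=0,2"' ubuntu nvidia-smi
```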
Exposes the first and third GPUs.
Note
NVIDIA GPUs can only be accessed by systems running a single engine.
Set NVIDIA capabilities
You can set capabilities manually. For example, on Ubuntu you can run the following:
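A hedged sketch of that command:

```shell
$ docker run --gpus 'all,capabilities=utility' --rm ubuntu nvidia-smi
```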
This enables the utility driver capability, which adds the nvidia-smi tool to the container.
Capabilities, as well as other configurations, can be set in images via environment variables. More information on valid variables can be found at the nvidia-container-runtime GitHub page. These variables can be set in a Dockerfile.
You can also utilize CUDA images, which set these variables automatically. See the CUDA images GitHub page for more information.