Complete tutorial on building images using Docker
Everything you need to know to write Dockerfiles and run containers with Docker, with an example of setting up Ubuntu, a default user, Miniconda, and PyTorch.
- Building the image
- Escape character
- FROM
- ARG and ENV
- LABEL and EXPOSE
- ADD and COPY
- WORKDIR
- RUN
- Summary of all commands
- Ubuntu, PyTorch Dockerfile
- Useful Docker commands
- Links
Docker provides a way to run your programs as containers on top of a host operating system. A Dockerfile provides the instructions for building the images that are then run as containers. This post discusses everything you need to create Dockerfiles, with an example of setting up Ubuntu, Miniconda, and PyTorch.
Building the image
The `docker build` command is used to build an image from a Dockerfile and a context (a PATH or URL). The context refers to all the files specified in the build command. The steps to build the image are as follows:
- All the files specified by the context are sent to the Docker daemon. For this reason, you should create the Dockerfile in an empty directory to avoid unnecessary transfers.
- Files specified in `.dockerignore` are not passed to the Docker daemon. `.dockerignore` follows the same syntax as `.gitignore`.
- The Dockerfile is checked for syntax errors.
- The Docker daemon builds the image by reading the instructions from the Dockerfile.
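As a minimal sketch, a `.dockerignore` like the one below (the entries are hypothetical) keeps version-control data and large artifacts out of the context sent to the daemon:

# .dockerignore
.git
__pycache__/
*.log
data/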
Specify context
The context can be a PATH on the local file system or a remote URL referring to a Git repository. An example of specifying the build context is shown below:
> docker build .
> docker build /path/to/context
Specify context with URL
Docker also provides the ability to build an image from a Git URL. This is mostly used in continuous integration pipelines. To build an image from a URL, the Dockerfile must be present at the specified URL (not on the local file system). `docker build` will automate the following steps for you when building an image from a GitHub URL:
> git clone {GITHUB_URL}
> git checkout {BRANCH}
> cd repo
> docker build .
The command to build the image is as follows
> docker build {GITHUB_URL}#{BRANCH}
> docker build https://github.com/KushajveerSingh/Dockerfile.git#master
Note:- The above command will fail, as there is no Dockerfile at the root of that repository; the Dockerfile must be present at the specified URL.
Specify location of Dockerfile
You can use the `-f` flag to specify the path to the Dockerfile. By default, the Dockerfile is assumed to be at the root of the context.
> docker build -f /path/to/Dockerfile .
> docker build -f ubuntu_conda_pytorch/Dockerfile https://github.com/KushajveerSingh/Dockerfile.git#master
Docker automatically `cd`'s into the GitHub repository, so the path should not include the name of the repository.
Specify repository and tag
Consider the following Dockerfile
FROM ubuntu:18.04
LABEL purpose="test Dockerfile"
When we build an image, Docker assigns a content hash as the `IMAGE ID`.
> docker build .
> docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
<none> <none> e1eaeb418bbb 5 seconds ago 63.1MB
You can assign a repository and a tag to every Docker image using the `-t` flag, which makes the image easy to refer to. `REPOSITORY` can be thought of as the name of your GitHub repository, and `TAG` specifies the version of your image, for example `ubuntu:18.04`, `ubuntu:latest`.
> docker build -t test:0.1 .
> docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
test 0.1 e1eaeb418bbb 6 minutes ago 63.1MB
You can also specify multiple tags:
> docker build -t test:0.1 -t test:latest .
Docker image layers
The Docker daemon runs the instructions from top to bottom, one by one, and the result of most instructions (`FROM`, `RUN`, `ADD`, `COPY`) is committed to a new image layer. For this reason you need to be careful when using these instructions, as every use of them creates a new layer, which increases the size of the final image.
Why does Docker do this? Consider the following Dockerfile:
FROM ubuntu:18.04
RUN COMMAND_1
RUN COMMAND_2
RUN COMMAND_3
Now when we build the image, the following layers are created:
- The layers from the `ubuntu:18.04` base image
- `RUN COMMAND_1` creates a new layer
- `RUN COMMAND_2` creates a new layer
- `RUN COMMAND_3` creates a new layer
A layer is basically a change on an image, i.e. an intermediate image. Whenever we run an instruction (like `FROM`, `RUN`, `COPY`) we are making changes to the previous image, and these changes result in a new layer. Having intermediate layers helps during the build process: if you make a change in the Dockerfile, Docker only rebuilds the changed layer and the layers after it. This can save a lot of time.
Note:- Be careful when creating new layers as these would also increase the size of your image.
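The caching behavior also suggests how to order instructions: put the things that change rarely near the top. A minimal sketch (the package and file names are hypothetical):

FROM ubuntu:18.04
# Changes rarely -> this layer is reused from the cache on most rebuilds
RUN apt update && apt install -y --no-install-recommends python3 && rm -rf /var/lib/apt/lists/*
# Changes often -> only this layer and the ones after it are rebuilt
COPY app.py /app/app.py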
BuildKit
Docker supports the moby/buildkit backend for building images. BuildKit provides many benefits over the default implementation:
- Detect and skip executing unused build stages
- Parallelize building independent build stages
- Incrementally transfer only the changed files in your build context between builds
- Detect and skip transferring unused files in your build context
- Use external Dockerfile implementations with many new features
- Avoid side effects with the rest of the API (intermediate images and containers)
- Prioritize your build cache for automatic pruning
To use the BuildKit backend, you need to set the `DOCKER_BUILDKIT=1` environment variable:
> DOCKER_BUILDKIT=1 docker build .
OR
> export DOCKER_BUILDKIT=1
> docker build .
Summary
In summary, to build a Docker image you can use the following command
> DOCKER_BUILDKIT=1 docker build \
-t {REPOSITORY_1}:{TAG_1} -t {REPOSITORY_2}:{TAG_2} \
-f /path/to/Dockerfile \
/path/to/build/context
> DOCKER_BUILDKIT=1 docker build \
-t test:0.1 -t test:latest \
-f Dockerfile \
.
Escape character
The escape character is set using a parser directive. Parser directives are specified on the first line of a Dockerfile and do not add layers to the build. This one lets you change the character used to continue instructions across lines:
# escape=\
FROM ubuntu:18.04
RUN INSTRUCTION_1 \
INSTRUCTION_2
On Windows, `\` is used in file paths, so changing the escape character to something like a backtick can be useful:
# escape=`
FROM ubuntu:18.04
RUN INSTRUCTION_1 `
INSTRUCTION_2
FROM
Every Dockerfile starts with the `FROM` instruction (only parser directives, comments, and global `ARG`s may precede it). `FROM` initializes a new build stage and sets the base image for the subsequent instructions. The general syntax is
FROM [--platform=<platform>] <image>[:tag] [AS <name>]
Here [...] denotes an optional part. You can start from the special `scratch` image and build everything on top of it:
FROM scratch
Or you can build on top of a public image (like Ubuntu, PyTorch, nvidia/cuda)
FROM ubuntu:18.04
For this post, I build on top of the Ubuntu image. You can build the image and try running it using the following commands
> DOCKER_BUILDKIT=1 docker build -t test:0.1 .
> docker run -it --name temp test:0.1
You will see that it is a bare-bones install: there is no non-root user and no `sudo`. The image provides the base Ubuntu userland (the kernel itself is shared with the host), and we have to build everything on top of that.
In the next couple of sections we look at all the instructions we can use in a Dockerfile, and then we will use these instructions to build an Ubuntu/Miniconda/PyTorch image.
ARG and ENV
Environment variables are often used to declare variables in scripts or to set up variables that persist when the container is run. Docker gives us two ways to set variables: `ARG` and `ENV`.
- The `ARG` instruction defines a variable that users pass at build time to the `docker build` command using the `--build-arg <name>=<value>` flag. These variables are only available while the image is being built.
- The `ENV` instruction sets an environment variable in the Dockerfile, and the variable persists when a container is run from the resulting image.
ARG
We can specify the version of Ubuntu as an `ARG` (the Dockerfile is shown below):
ARG UBUNTU_VERSION
FROM ubuntu:$UBUNTU_VERSION
And then we can specify the version of Ubuntu when building the image:
> DOCKER_BUILDKIT=1 docker build -t test --build-arg UBUNTU_VERSION=18.04 .
We can also give `ARG` a default value:
ARG UBUNTU_VERSION=18.04
To access the value of an `ARG` you can use either the `$UBUNTU_VERSION` or the `${UBUNTU_VERSION}` syntax. The second form is useful when you want to access the value inside a string, as in the sketch below.
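A small sketch of the braces form (`CUDA` is a hypothetical variable; the `echo` is just for illustration):

FROM ubuntu:18.04
ARG CUDA=11.1
# ${...} is required here because the value is embedded in a longer string;
# $CUDA_build would be read as a variable named CUDA_build
RUN echo "cudatoolkit=${CUDA}_build"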
Using ARG
- Use `ARG` for variables that are only needed in the Dockerfile and are not needed when the container is running. In our case, the version of Ubuntu is not needed when the container is running.
- An `ARG` declared before `FROM` can only be used in `FROM`.
- An `ARG` declared after `FROM` can be used anywhere in the Dockerfile (there is an exception in multi-stage builds, i.e. when we use multiple `FROM` instructions in the same Dockerfile). The sketch after this list shows the scoping rule.
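A minimal sketch of the scoping rule; redeclaring the `ARG` without a value inside the stage pulls in the value declared before `FROM`:

ARG UBUNTU_VERSION=18.04
FROM ubuntu:$UBUNTU_VERSION
# UBUNTU_VERSION is empty here unless it is redeclared inside the stage
ARG UBUNTU_VERSION
RUN echo "This image was built from Ubuntu ${UBUNTU_VERSION}"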
ENV
This is the same as `ARG`, except that an `ENV` variable persists when a container is run from the resulting image. Some examples:
ENV PYTORCH_VERSION 1.9.0
ENV LD_LIBRARY_PATH /usr/local/nvidia/lib:/usr/local/nvidia/lib64
# Setting PATH variables
ENV PATH /home/default/miniconda3/bin:$PATH
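You can check that such variables actually persist by inspecting the environment of a container built from the image, e.g. (a sketch; `test` is the image tag used earlier):

> docker run --rm test env | grep PYTORCH_VERSION
PYTORCH_VERSION=1.9.0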
LABEL and EXPOSE
These two instructions can be considered documentation instructions. They do not change the behavior of the resulting image; they are just used to provide metadata.
LABEL
`LABEL` can be used to specify information like the author of the Dockerfile and other relevant metadata:
LABEL author="Kushajveer Singh"
LABEL email="kushajreal@gmail.com"
LABEL website="kushajveersingh.github.io"
You can get the list of labels after building the image as follows:
> DOCKER_BUILDKIT=1 docker build -t test:0.1 .
# This will output a lot of information
> docker image inspect test:0.1
# To get only the LABEL values, pass a Go-template format string
> docker image inspect --format='{{json .Config.Labels}}' test:0.1
{"author":"Kushajveer Singh","email":"kushajreal@gmail.com","website":"kushajveersingh.github.io"}
EXPOSE
The `EXPOSE` instruction informs Docker that the container listens on the specified network ports at runtime. It does not actually publish the port; it acts as documentation between the person who builds the image and the person who runs the container about which ports are intended to be published.
EXPOSE 8889
EXPOSE 80/udp
EXPOSE 80/tcp
Now the person running the container can publish the ports using the `-p` flag as follows:
> DOCKER_BUILDKIT=1 docker build -t test .
> docker run -p 80:80/tcp -p 80:80/udp test
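Alternatively, because the image documents its ports with `EXPOSE`, the runner can pass `-P` to publish all exposed ports to random high-numbered host ports and then look up the mapping (a sketch):

> docker run -d -P test
> docker port {CONTAINER_NAME}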
ADD and COPY
We discussed at the start that a context is passed to the Docker daemon before the Dockerfile is read. To add files from the context to the image, we can use `ADD` or `COPY`. The two instructions are similar, but `ADD` does some extra things that you need to be careful about. The syntax of both commands is the same:
ADD [--chown=<user>:<group>] <src>... <dest>
COPY [--chown=<user>:<group>] <src>... <dest>
And a usage example would be
COPY --chown=default:sudo /file/on/local/drive /path/on/image
COPY --chown=default:sudo script.sh /home/default
The `<dest>` path is either an absolute path or a path relative to `WORKDIR`, which we will discuss later.
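For example (a sketch; `script.sh` is a hypothetical file in the build context), a relative `<dest>` is resolved against the current `WORKDIR`:

WORKDIR /home/default
# The file ends up at /home/default/script.sh
COPY script.sh .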
Now let’s look at the differences between each instruction.
COPY
COPY [--chown=<user>:<group>] <src>... <dest>
The `COPY` instruction copies files or directories from `<src>` and adds them to the filesystem of the container at `<dest>`. The files are created with a default UID and GID of 0 unless you specify `--chown`. And that is it: `COPY` simply copies a file from `<src>` to `<dest>`.
ADD
ADD [--chown=<user>:<group>] <src>... <dest>
The `ADD` instruction also copies files or directories from `<src>` to `<dest>` like `COPY`, but it does some extra things:
- `<src>` can be a remote file URL
- If `<src>` is a local tar archive (identity, gzip, bzip2, xz), it is unpacked as a directory. If the tar file comes from a remote URL, it is not decompressed.
Note:- Due to these extra features of `ADD`, it is recommended that you use `COPY` unless you know exactly what is being added to the image by `ADD`.
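A short sketch of the two behaviors (the archive names and URL are hypothetical):

# Remote URL: downloaded to /tmp/archive.tar.gz, NOT unpacked
ADD https://example.com/archive.tar.gz /tmp/archive.tar.gz
# Local tar archive: automatically unpacked into /opt/
ADD vendor.tar.gz /opt/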
WORKDIR
The `WORKDIR` instruction sets the working directory for any `RUN`, `CMD`, `ENTRYPOINT`, `COPY`, and `ADD` instructions that follow it in the Dockerfile. You can use it multiple times to change the working directory as needed.
WORKDIR /home/default
RUN ...
# You can also provide path relative to previous WORKDIR
WORKDIR ../home
RUN
The syntax of the command is
RUN <command>
and it will run the command in a shell (by default `/bin/sh -c` on Linux). Every `RUN` instruction creates a new layer, and for this reason you should try to group multiple commands into a single logical `RUN` instruction.
To group multiple commands in a single `RUN` instruction you can use either a semicolon or `&&`.
RUN <command_1> ; \
<command_2> ; \
<command_3>
RUN <command_1> && \
<command_2> && \
<command_3>
Prefer `&&` over `;`. When you group commands with a semicolon, the next command runs regardless of whether the previous command failed. This is not the case with `&&`: if a command fails, execution stops and the following commands are not run.
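A quick sketch of the difference, using `false` (a command that always fails):

RUN false ; echo "the build continues, hiding the failure"
RUN false && echo "never reached - the build fails at this instruction"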
These are all the instructions we need to create our Docker image. I left out some instructions that we will not need, but you can check the complete Dockerfile reference for information about them (like `ENTRYPOINT`, `CMD`, `VOLUME`).
Summary of all commands
- `FROM` - Every Dockerfile starts with this instruction; it provides the base image on which we build our image
- `ARG` - Build-time variables specified with `--build-arg`, available in the Dockerfile only
- `ENV` - Environment variables that are used in the Dockerfile and persist when the container is run
- `LABEL` - Metadata such as the author name
- `EXPOSE` - Documents the ports that are intended to be published when the container is run
- `COPY` - Adds files or directories from the context to the image (only copies)
- `ADD` - Adds files or directories from the context to the image (can copy from a remote URL and automatically unpack tar archives)
- `WORKDIR` - Sets the working directory for the instructions that follow
- `RUN` - Runs any shell command
- `USER` - Sets the default user for when the container is run
Ubuntu, PyTorch Dockerfile
Base image
We will use Ubuntu as the base image. As discussed before, it provides us with a bare-bones Linux distribution, and we have to set up everything we need on top of it.
ARG UBUNTU_VERSION=18.04
FROM ubuntu:$UBUNTU_VERSION
Update Ubuntu and install utilities
Let’s check the Dockerfile before discussing what is happening
RUN apt update --fix-missing && apt install -y --no-install-recommends ca-certificates git sudo curl && \
apt clean && \
rm -rf /var/lib/apt/lists/*
The first step is updating Ubuntu. Note that the default user in the image is `root`; after we set up `sudo` and a `default` user, we would have to prefix these commands with `sudo`. The update is done using
RUN apt update --fix-missing && \
apt install -y --no-install-recommends
`--fix-missing` is optional. It is used in case we have broken dependencies, and this flag can resolve the issue in most cases. As we are starting from a clean install, it does not do much here.
In `apt install -y --no-install-recommends`, the `-y` flag helps us bypass the yes/no prompt. Each package in Ubuntu comes with three kinds of dependencies:
- main dependencies
- recommended packages
- suggested packages
By default, Ubuntu installs the main and recommended packages (to install suggested packages you need to pass the `--install-suggests` flag to `apt install`). Our main goal is to keep the size of the Docker image to a minimum, so we do not want to waste space on recommended packages. The `--no-install-recommends` flag ensures we only install the main dependencies.
Now you can install any other packages you may require, like `ca-certificates` (required by `curl`), `sudo`, `curl`, and `git`.
The second step is to clean up the packages that are no longer needed and clear any local cache. This is done using
RUN apt clean && \
rm -rf /var/lib/apt/lists/*
When we install packages, Ubuntu maintains a cache of the packages in `/var/cache`. The reason is that if something goes wrong during an upgrade and we do not have network access, we can revert to the old version from the cache. We do not need this cache in a Docker image, so we remove it with `apt clean`. Specifically, `apt clean` removes the files in `/var/cache/apt/archives/` and `/var/cache/apt/archives/partial/`, leaving behind only a `lock` file and the `partial` subdirectory.
`/var/lib/apt/lists` stores the package lists used by the apt package manager. These lists are downloaded again each time we run `apt update`, so there is no point storing them in the image, and we can safely remove them with `rm -rf /var/lib/apt/lists/*` to reduce the image size.
Note:- To install a package after `rm -rf /var/lib/apt/lists/*`, you first have to run `apt update`; only then can you install the required package.
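For example, a later layer that installs another package must refresh the lists first (a sketch; `htop` is just an example package):

RUN apt update && \
    apt install -y --no-install-recommends htop && \
    apt clean && \
    rm -rf /var/lib/apt/lists/*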
Setup sudo and default user
The next step is setting up `sudo` and a default user. The contents of the Dockerfile to do so are shown next:
ARG USERNAME=default
ARG PASSWORD=default
RUN useradd -rm -d /home/default -s /bin/bash -g root -G sudo -u 1000 $USERNAME && \
echo "${USERNAME}:${PASSWORD}" | chpasswd && \
echo "Set disable_coredump false" >> /etc/sudo.conf && \
touch /home/$USERNAME/.sudo_as_admin_successful
USER $USERNAME
WORKDIR /home/$USERNAME
Let's see what each flag of `useradd` does:
- `-r` creates a system account, i.e. an account of the kind created by the operating system during installation
- `-m` creates the home directory if it does not exist
- `-d /home/default` provides the location of the home directory
- `-s /bin/bash` specifies the user's login shell. You can skip this if you want.
- `-g root -G sudo` is interesting. The `-g` flag specifies the primary group the user belongs to, and `-G` provides a list of additional groups. By default, a new user is not a member of the `sudo` group, so we set this up explicitly. `root` has all the privileges on the system, but we still need the `sudo` group: when a user is part of it, the user can execute `sudo` commands using their own password. This is useful when there are multiple users on a system, since each user can gain `root` privileges with their own password.
- `-u 1000` provides the user ID
- `$USERNAME` is the name of the user that will be created
By default the newly created account has no password set (and in Ubuntu neither does the root account). To set a password we can use the following command:
echo "${USERNAME}:${PASSWORD}" | chpasswd
After following the above steps we have done the following:
- Added a user named `default`
- The user `default` can gain root privileges with `sudo su`, using `default` as the password
There is a bug with `sudo`, discussed here, in which you get the following warning message every time you execute a command with `sudo`:
> sudo hello > /dev/null
sudo: setrlimit(RLIMIT_CORE): Operation not permitted
This has been fixed in the latest `sudo` patch, but Ubuntu does not ship with it yet, so to silence this annoying warning you can use the following command:
echo "Set disable_coredump false" >> /etc/sudo.conf
When you run the container you will see a note about sudo as shown below
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.
This message will disappear after you run a command with `sudo` for the first time, or you can remove it up front by creating the `~/.sudo_as_admin_successful` file:
touch /home/$USERNAME/.sudo_as_admin_successful
And that is it: you have set up `sudo` and a `default` user. When you run the container you would be `root` by default, but you can make `default` the default user with the following instructions:
USER $USERNAME
WORKDIR /home/$USERNAME
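Once the image is built, a quick way to sanity-check the user setup (a sketch of an interactive session):

> docker run -it --rm test:0.1
$ whoami    # should print: default
$ sudo -v   # should accept "default" as the password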
Summary of Dockerfile till now
Our Dockerfile will have the following contents
ARG UBUNTU_VERSION=18.04
FROM ubuntu:$UBUNTU_VERSION
ARG USERNAME=default
ARG PASSWORD=default
RUN apt update --fix-missing && apt install -y --no-install-recommends ca-certificates git sudo curl && \
apt clean && \
rm -rf /var/lib/apt/lists/* && \
useradd -rm -d /home/default -s /bin/bash -g root -G sudo -u 1000 $USERNAME && \
echo "${USERNAME}:${PASSWORD}" | chpasswd && \
echo "Set disable_coredump false" >> /etc/sudo.conf && \
touch /home/$USERNAME/.sudo_as_admin_successful
USER $USERNAME
WORKDIR /home/$USERNAME
We group all the commands for updating Ubuntu and setting up `sudo` and the default user into a single `RUN` instruction to save space.
Accessing Nvidia GPU
To access the host GPUs in a Docker container, specify the `--gpus` flag when running the container:
> docker run --gpus all -it --name temp test:0.1
> docker run --gpus '"device=0,2"' -it --name temp test:0.1
This requires the Nvidia driver and the NVIDIA Container Toolkit to be set up on the host machine; you can then run `nvidia-smi` inside the Docker container to check if the GPUs are being detected.
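For a quick one-off check that does not leave a container behind, something like this works (a sketch):

> docker run --gpus all --rm test:0.1 nvidia-smi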
Installing Miniconda
Miniconda gives us an easy Python setup. Just grab the link of the latest Linux installer from the documentation page (in this case it is the Python 3.9 installer) and specify the location where you want to install Miniconda:
ARG MINICONDA_DOWNLOAD_LINK=https://repo.anaconda.com/miniconda/Miniconda3-py39_4.10.3-Linux-x86_64.sh
ARG MINICONDA_INSTALL_PATH=/home/$USERNAME
WORKDIR $MINICONDA_INSTALL_PATH
ENV PATH ${MINICONDA_INSTALL_PATH}/miniconda3/bin:$PATH
RUN curl $MINICONDA_DOWNLOAD_LINK --create-dirs -o Miniconda.sh && \
bash Miniconda.sh -b -p ./miniconda3 && \
rm Miniconda.sh && \
conda init && \
conda update -y --all
The setup of Miniconda is quite simple:
- Set the `MINICONDA_DOWNLOAD_LINK` and `MINICONDA_INSTALL_PATH` variables
- Run `curl` to download the `.sh` installer into the `MINICONDA_INSTALL_PATH` folder (`--create-dirs` creates the folder if it does not exist)
- Set the environment variable `ENV PATH ${MINICONDA_INSTALL_PATH}/miniconda3/bin:$PATH`, which allows us to run conda from anywhere in the file system
- Install Miniconda using `bash Miniconda.sh -b -p ./miniconda3`
- Remove the `.sh` file to save space using `rm Miniconda.sh`
- `conda init` can be skipped, as we already set the `PATH`
- Update the conda packages to the latest version using `conda update -y --all`
Installing PyTorch
Now we are ready to install PyTorch. Head to the PyTorch install page to grab the command you want to use to install PyTorch.
RUN pip install numpy && \
conda install pytorch torchvision cudatoolkit=11.1 -c pytorch -c nvidia
Note:- For some reason `numpy` was not installed when I ran the `conda install` command, so I had to add the `pip install numpy` command.
Remove conda/pip cache
Conda stores an index cache, lock files, tarballs, and the caches of unused packages. These can be safely removed to save space using `conda clean -afy`, so add this as the last conda command in the Dockerfile. `pip` stores its cache in `~/.cache/pip`, and this folder can be safely removed using `rm -rf ~/.cache/pip`.
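Putting the install and the cleanup into a single `RUN` instruction keeps the caches out of any committed layer. A sketch based on the commands above (with `-y` added so conda does not prompt):

RUN pip install numpy && \
    conda install -y pytorch torchvision cudatoolkit=11.1 -c pytorch -c nvidia && \
    conda clean -afy && \
    rm -rf ~/.cache/pip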
Build the image
Now we are ready to build the image.
> DOCKER_BUILDKIT=1 docker build -t test:0.1 .
And then we can run a container to try out the image
> docker run --gpus all -it --name temp test:0.1
default@a7f862b6bf73:~$
We can run `nvidia-smi` to check if the GPUs are being detected:
default@a7f862b6bf73:~$ nvidia-smi
Wed Oct 6 05:54:35 2021
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 460.91.03 Driver Version: 460.91.03 CUDA Version: 11.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 Quadro M1200 Off | 00000000:01:00.0 On | N/A |
| N/A 38C P5 N/A / N/A | 415MiB / 4043MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
+-----------------------------------------------------------------------------+
default@a7f862b6bf73:~$
And as we can see, the GPU is detected along with the Nvidia driver of the host machine. Next, we can try running some PyTorch commands:
(base) default@7d9b75595a27:~$ python
Python 3.9.7 (default, Sep 16 2021, 13:09:58)
[GCC 7.5.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.cuda.is_available()
True
>>> torch.backends.cudnn.version()
8005
>>> x = torch.tensor([1,2], device='cuda:0')
>>> x
tensor([1, 2], device='cuda:0')
We have successfully installed PyTorch and have a working Ubuntu environment. The next section discusses the various `docker` commands you need to build images, run containers, and push images to Dockerhub.
Useful Docker commands
Build images
We have already discussed the command to build images from a Dockerfile in the first section.
> DOCKER_BUILDKIT=1 docker build \
-t {REPOSITORY_1}:{TAG_1} -t {REPOSITORY_2}:{TAG_2} \
-f /path/to/Dockerfile \
/path/to/build/context
The reference for `docker build` can be accessed here.
List all images
`docker image ls` can be used to get a list of all the images on your local file system.
> docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
test 0.1 e3a710895926 19 hours ago 7.47GB
ubuntu 18.04 5a214d77f5d7 5 days ago 63.1MB
You can use this command to check the `SIZE` of an image and to get the `IMAGE ID` in case you forgot to tag the image. Docker stores images in `/var/lib/docker/` on Linux, but it is not a good idea to mess with the contents of this folder: Docker's storage is complicated, and it depends on which storage driver is being used.
The reference for `docker image ls` can be accessed here.
Deleting images
`docker image rm` can be used to delete an image by specifying its tag or `IMAGE ID`.
> docker image rm 16379d98ded4
Deleted: sha256:16379d98ded4a6a90d02f8709ca27ba4c9abe750ba948b5de9096607599e8ce0
> docker image rm test:0.3
Untagged: test:0.3
Deleted: sha256:fdc3055dbe5f7239a4371f5abb6da4fdced836ff6278b19a2d70a8b654a85d04
The reference for `docker image rm` can be accessed here.
In case you have a lot of untagged images, you can use `docker image prune` to delete all dangling images at once, instead of deleting every single image manually.
> docker image ls
REPOSITORY   TAG      IMAGE ID       CREATED        SIZE
test         0.1      e3a710895926   20 hours ago   10.8GB
test         latest   9419a081213b   5 days ago     63.1MB
<none>       <none>   4734a6f5567e   5 days ago     63.1MB
<none>       <none>   7deb7fdc3b5e   5 days ago     63.1MB
> docker image prune
WARNING! This will remove all dangling images.
Are you sure you want to continue? [y/N] y
Deleted Images:
deleted: sha256:4734a6f5567e1554bcd0fc5dc674b80e21f50816058d29a125eb697b8d8af9c5
...
>
The reference for `docker image prune` can be accessed here.
List all containers
`docker container ls -a` can be used to list all containers (running and stopped). If you want to list only running containers, use `docker container ls`.
> docker container ls -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d5367ac6cf01 test:0.2 "bash" 3 seconds ago Up 2 seconds temp_2
20ebc6bca143 test:0.2 "bash" 5 minutes ago Exited (0) 5 minutes ago temp_1
This gives us the status of all the containers. In the above example, `temp_1` is not running and `temp_2` is running. We can also see that the containers are not publishing any `PORTS`.
The reference for `docker container ls` can be accessed here.
Starting a container
`docker start` can be used to start one or more stopped containers (`docker container start` also works).
> docker start temp_1
temp_1
The reference for `docker start` can be accessed here.
Attaching to a container
To attach your terminal to a running container, `docker attach` can be used.
> docker attach temp_1
The reference for `docker attach` can be accessed here.
Stopping a container
`docker stop` or `docker kill` can be used to stop running containers; `docker stop` can be considered the graceful stop. The difference between the two commands:
- `docker stop`: stops a running container. The main process receives `SIGTERM`, and after a grace period, `SIGKILL`.
- `docker kill`: kills a running container. The main process receives `SIGKILL`, or any signal specified by the user with `--signal`.

> docker stop temp_1
The reference for `docker stop` can be accessed here, and the reference for `docker kill` can be accessed here.
Deleting containers
`docker rm` can be used to delete stopped containers.
> docker rm temp_1
In some cases (e.g. in scripts), you may want to delete a container that might not exist without raising an error. This can be done as follows:
> docker rm temp_1 || true
The reference for `docker rm` can be accessed here.
Running a container
The `docker run` command is used to create a container from an image. There are many useful flags for this command:
> docker run -it --gpus all --name temp -p 8888:8888 test:0.1
- `-it` opens a terminal connected to the container
- `--gpus all` specifies which GPUs the container gets access to
- `--name temp` sets the name of the container
- `-p 8888:8888` publishes ports. The format is `{HOST_PORT}:{CONTAINER_PORT}`. To use Jupyter notebooks inside Docker you need to publish the notebook port, as sketched after this list.
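A sketch of the Jupyter case (assuming `jupyter` is installed in the image; the token in the printed URL will differ):

> docker run -it -p 8888:8888 test:0.1
$ jupyter notebook --ip 0.0.0.0 --port 8888 --no-browser
# Then open the printed http://127.0.0.1:8888/?token=... URL on the host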
The reference for `docker run` can be accessed here.
Connect new terminal to container
If you want to connect a new terminal to a running container, then you can use the following command
> docker exec -it {container_name} bash
`docker attach {container_name}` cannot create a new terminal session; it connects to the existing session.
Delete all containers/images
To delete all the docker containers and images on your system you can use the following two commands (in the order specified)
# Remove all containers
> docker rm -vf $(docker ps -a -q)
# Remove all images
> docker rmi -f $(docker images -a -q)
Push image to Dockerhub
To push an image to Dockerhub use the following commands
> docker login --username {USERNAME}
> docker tag <REPOSITORY>:<TAG> {USERNAME}/<REPOSITORY>:<TAG>
> docker push {USERNAME}/<REPOSITORY>:<TAG>
A working example is shown below
> docker login --username kushaj
> docker tag test:0.1 kushaj/pytorch:1.9.0
> docker push kushaj/pytorch:1.9.0
The first argument to `docker tag` (`test:0.1`) is the name of the image on the local file system, and the second argument is the location on Dockerhub where you want to push the image.
The reference pages for the above commands can be found in the Docker documentation.
Links
- KushajveerSingh/Dockerfile - You can access the Dockerfiles in this repository to build your own images. Documentation for all the available Dockerfiles is provided in the README.
- Dockerhub kushaj - All the images can be found at this link. If you want to use an image but its contents are old, you can get the Dockerfile used to create it from the above GitHub link and build the image yourself. Instructions to do so are provided in the README.