Docker Tutorial: Getting started

A beginner-friendly Docker tutorial that doesn’t bite. A step-by-step guide to help you get started with Docker.

Let’s say you are writing an app. You’ve set up a development environment in a local virtual machine and everything runs smoothly. After spending hundreds of hours perfecting your application, you are happy with the final product and you are excited to push the code to the production server.

However, the app breaks on the production server because of differences in the environment: software, libraries, the operating system, and so on.

But that’s okay because you can always explain that to your clients, right?

An angry client is sometimes scarier than running out of toilet paper…

Wrong. The truth is the clients will yell at you, and you will end up spending hours fixing the server while getting depressed.

Look, none of us wants this to happen, and this is when Docker saves the day.

Docker makes it easy to set up, replicate and share environments. The idea behind it is that we encapsulate our environment, services and/or code into containers. These containers act as middlemen between the server and our code, so it doesn’t really matter where we run these containers; we can always expect our app to work.

Concepts

There are three core components in the Docker ecosystem: the image, the container and the registry.

You can think of a Docker image as the blueprint of a house, and the Dockerfile as the step-by-step instruction manual for creating that blueprint. You’ll build a Docker image using a Dockerfile.

Containers are the houses where your app lives. They are created from an image. You can think of containers as tiny VMs.

A registry is somewhere to store your Docker images (very similar to a code repository). By default, the images you build are stored in the local registry on your computer. You also have the option to upload them to a cloud registry like Docker Hub (the official Docker container registry).
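For instance, sharing a locally built image through Docker Hub is just a tag and a push. A sketch, assuming you have a Docker Hub account; `yourname` and the `1.0` version are placeholders:

```shell
# Log in to Docker Hub (prompts for your credentials)
docker login

# Tag a local image ("php-demo" is the image we build later in this
# tutorial) with your Docker Hub username and a version
docker tag php-demo yourname/php-demo:1.0

# Upload it to the registry so it can be pulled from anywhere
docker push yourname/php-demo:1.0
```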

A VM works by running a complete guest operating system, with its own kernel, on the host. The kernel is the middleman that allows communication between the OS and the hardware. Running a separate kernel for each VM requires a lot of resources.

Unlike VMs, Docker containers share the kernel of the host OS. This makes containers highly scalable and lightweight: you can deploy multiple containers in seconds instead of minutes. This is why Docker is a much more attractive solution than Vagrant and similar technologies.
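You can see the shared kernel for yourself. A minimal check, assuming Docker is installed and can pull the small alpine image:

```shell
# Kernel release as seen by the host
uname -r

# Kernel release as seen inside a container: it prints the same value,
# because the container shares the host kernel (on Linux hosts; on macOS
# and Windows it shows the kernel of Docker's Linux VM instead)
docker run --rm alpine uname -r
```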

Let’s Get Started

Go to https://docs.docker.com/ and select your Operating System. Then follow the instructions.

If you are using VSCode, I highly recommend installing the Docker extension by Microsoft.

We will be building a super simple PHP website that prints out some text on the screen:

<?php
// /src/index.php

echo "Hello there";
?>

Create a Dockerfile (with a capital D) in the root folder. Your project directory should look something like this:

.
├── Dockerfile
└── src
    └── index.php

Copy the following instructions into the Dockerfile. I will explain what is going on line by line:

FROM php:7.4.1-apache
COPY src/ /var/www/html/
EXPOSE 80

A Dockerfile typically starts with the FROM keyword, followed by an “image:tag” label which refers to a pre-built image in a registry.

You can search thousands of pre-built images on Docker Hub. In our case, we are using the official PHP image with the “7.4.1-apache” tag, which means it comes with a built-in Apache web server.

Image variants

On the image’s Docker Hub page, each row represents a variant, and from left to right the tags become less and less specific. It is good practice to pin a specific version; if we chose the latest tag instead, we could run into unexpected breaking changes when PHP gets updated. Tags containing “buster” mean the image is based on Debian Buster, and “alpine” means it is a minimal image built with only the bare essentials.

COPY lets us copy files and folders into the image. The first argument is the host path and the second argument is the container path.

The EXPOSE instruction documents that the container listens on port 80. It does not publish the port by itself, neither to the public nor to the host; we still need to forward a port of the host to port 80 of the container at run time.

Run docker build -t php-demo ./ in the project root.

  • docker build is the command to build an image.
  • The -t flag defines the image tag; in this case, I will call it php-demo.
  • ./ refers to the build context containing the Dockerfile. In this case it is the current directory.

Run docker run -p 3000:80 php-demo in the shell. You can run this command anywhere, since we have built the image and the Docker Engine automatically registers it in the local registry.

  • The -p flag forwards port 3000 of the host to port 80 of the container.

You should see Apache running in the shell. Now go to localhost:3000, and you should see this:

PHP website is up and running!
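You can also check from another terminal that the container is answering on the forwarded port. A quick check, assuming curl is installed:

```shell
# Request the page through the forwarded host port.
# Apache inside the container serves /var/www/html/index.php.
curl http://localhost:3000
# Prints: Hello there
```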

However, if we edit our index.php (e.g. change the echo text to something else), the change is not immediately reflected in the container. This is because, with the COPY instruction, index.php was copied into the image at build time. So even though we changed the file on our host computer, the file in the container remains the original.

This is a pain, as we would need to rebuild the image every time we make a change. The good news is we can easily resolve this by mounting the files instead (a bind mount, often loosely called a volume). With a mount, the container references the files on the host directly.

Let’s stop our container and recreate it. Simply hit Control + C in the terminal, then run this:

docker run -p 3000:80 -v /absolute/host/path/to/src/:/var/www/html php-demo

  • The -v flag specifies a path on the host machine to be mounted into the container, in the format host_path:container_path.
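Typing the absolute path by hand is error-prone; a common trick is to build it from the current directory. A sketch, assuming you run it from the project root:

```shell
# $(pwd) expands to the absolute path of the current directory,
# so this mounts ./src into the container's web root
docker run -p 3000:80 -v "$(pwd)/src":/var/www/html php-demo
```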

Now try making changes in index.php. You will notice that as soon as you refresh your browser, the changes are reflected immediately.

Sometimes you will want a shell inside a running container, for example to inspect files or debug:

  1. You will need the target container ID. Run docker container ls to list all active container instances.
  2. Run docker exec -it {container_id} bash to start a shell session in the container.
  • -i flag for an interactive session
  • -t flag for a TTY shell session
  • bash is the program we want to run in the container; in this case we want to start Bash.
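You don’t have to open a full shell; docker exec can also run a single command, which is handy for quick checks. Examples, where {container_id} is the ID from docker container ls:

```shell
# List the files Apache is serving, without an interactive shell
docker exec {container_id} ls /var/www/html

# Print index.php as the container sees it
docker exec {container_id} cat /var/www/html/index.php
```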

We can run arbitrary shell commands at build time by using the RUN keyword in the Dockerfile. For example, you can install wget with:

# Dockerfile
# ...
RUN apt-get update && apt-get install -y wget

Sometimes you may want to build your image based on some predefined config, for example to conditionally install software or to switch the base image tag, without changing the Dockerfile. In these cases, you can use the ARG keyword. ARG declares a build-time variable whose value is supplied with the --build-arg flag when you build. If no value is supplied, Docker simply uses the default value specified in the Dockerfile.

For example:

# Dockerfile
ARG PHP_VERSION=7.4.1-apache  # default image tag: 7.4.1-apache
FROM php:${PHP_VERSION}

ARG INSTALL_WGET=false
# conditionally install wget based on the build arg supplied
RUN if [ ${INSTALL_WGET} = true ]; then \
        apt-get update && apt-get install -y wget; \
    fi
# ...
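The conditional in that RUN line is plain POSIX shell, so you can try the same logic locally to see how the comparison behaves. A standalone sketch, where INSTALL_WGET stands in for the build arg and echo stands in for apt-get:

```shell
# Simulate the build arg; inside Docker this value comes from --build-arg
INSTALL_WGET=true

# Same test syntax as the RUN instruction: [ ... ] compares strings,
# so "true" here is just text, not a boolean
if [ "$INSTALL_WGET" = true ]; then
    echo "installing wget"
else
    echo "skipping wget"
fi
# Prints: installing wget
```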

We can override the ARG variables by using the --build-arg flag at build time.

For example, docker build --build-arg PHP_VERSION=7.4.1-fpm-buster --build-arg INSTALL_WGET=true -t php-demo ./ will build an image based on the PHP 7.4.1 FPM Buster variant and install wget.
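To confirm the build args took effect, you can run one-off commands in the rebuilt image. A quick check; the exact output depends on the tags you actually passed:

```shell
# Print the PHP version baked into the image
docker run --rm php-demo php -v

# Check that wget was installed by the conditional RUN step
docker run --rm php-demo wget --version
```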

By default, Docker runs all commands as the root user. You can explicitly change the user with the USER keyword. However, you need to create the user first.

# Adding a user
RUN useradd -ms /bin/bash sam
USER sam
RUN mkdir ./folder
# set the active user back to root
USER root
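Because the Dockerfile switches back to root at the end, processes in the container still run as root by default. You can verify which user is active; a quick check against the image built above:

```shell
# whoami replaces the default CMD and prints the active user
docker run --rm php-demo whoami
# Prints: root

# Or pick a user at run time instead of in the Dockerfile
# (sam exists because of the useradd step above)
docker run --rm --user sam php-demo whoami
# Prints: sam
```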

If you want to configure the container to act as an executable, for example to start a web server daemon like Apache or to run a one-off composer update, you can use either the CMD or the ENTRYPOINT keyword.

While both keywords can coexist in a Dockerfile, only one CMD and one ENTRYPOINT take effect; if you write two CMD (or two ENTRYPOINT) instructions, the later one overwrites the former.

CMD sets the default command that the container runs when it starts. For example, I can make the container print out “helloo!” every time it is started by doing this:

# Dockerfile
# ...
CMD ["echo", "helloo!"]

Rebuild the image and run the container again, and you should see:

Container printing “helloo!”

Notice that our Apache server no longer starts. This is because the CMD keyword we used in our Dockerfile has overwritten the original CMD instruction in the base image that starts Apache.

Now if I specify a command to run when starting the container:

docker run php-demo pwd

Printing current working directory (pwd)

Note that it overwrites our default CMD instruction. Again, Docker only honours one CMD for each container instance; the newest command always wins.

ENTRYPOINT is a little different from CMD. If I add a command when I run the container, it will not overwrite the ENTRYPOINT command; instead it is passed to it as arguments.

# Dockerfile
# ...
ENTRYPOINT ["echo", "helloo!"]

“pwd” is appended after “helloo!”; it was passed as an argument to echo rather than executed.

If I use both ENTRYPOINT and CMD at the same time, everything in CMD is passed to the ENTRYPOINT executable as default arguments. In other words, ENTRYPOINT takes priority when executing commands.

# Dockerfile
# ...
CMD ["echo", "helloo!"]
ENTRYPOINT ["echo", "entrypoint"]
# ...
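With both instructions in place, the container behaves like this (expected outputs, assuming the Dockerfile above; note that the “echo” from CMD arrives at the entrypoint as a plain argument):

```shell
# No command given: ENTRYPOINT runs with CMD as its default arguments,
# so the entrypoint echo receives: entrypoint echo helloo!
docker run --rm php-demo
# Prints: entrypoint echo helloo!

# A command on the command line replaces CMD, not ENTRYPOINT
docker run --rm php-demo pwd
# Prints: entrypoint pwd
```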

What’s next?

Check out part 2 of this tutorial, where we talk about Docker Compose and microservices!

Web Development. https://acadea.io/learn . Follow me on Youtube: https://www.youtube.com/c/acadeaio