@dominikkaegi
Last active January 27, 2022 12:32

Start Machine up

// start up the application
docker-compose up
// stop the application and remove its containers and images
docker-compose down --rmi all

Container: An isolated environment for running an application.

# ./Dockerfile
# Create a lightweight alpine linux image which runs node
FROM node:alpine
# Copy all the files from the current directory into the /app folder within the image
COPY . /app
# Set the working directory
WORKDIR /app
# Run the app with node within the container
CMD node app.js
// Build the docker image
// -t for giving a tag 
// .  for indicating where the Dockerfile can be found
docker build -t hello-docker .
// Show all docker images
docker image ls
// or
docker images
// run an image
docker run hello-docker
// Download an image from dockerhub
docker pull codewithmosh/hello-docker
// Show all running containers
docker ps
// Show all running or stopped containers
docker ps -a
// Run a container in interactive mode
docker run -it "image-name"
docker run -it ubuntu

Useful Linux Commands

// shows all the previous commands executed
history
// result
    1  node -v
    2  python
    3  echo $0
    4  whoami
    5  echo hello
    6  history
// run any of the shown commands with !number
!4    // runs whoami
// apt (Advanced Package Tool), the package manager for debian-based linux
// update the package lists
apt update
// installs the nano package
apt install nano
// uninstalls the nano package
apt remove nano

Linux Directories

* bin  // Binaries or Programs
* boot // All files related to booting
* dev // devices; in linux everything is a file, including devices, directories, network sockets etc. The files which are needed to access devices are stored in this directory
* etc // editable text configuration -> place for configuration files
* home // where the home directories for users are stored
* root // home directory of the root user
* lib // library files like software library dependencies
* var // variable, files that are updated frequently (lock file, application data)
* proc // files that represent running processes

Creating and Modifying Users

// Add user john and create a home directory for him within the /home folder
useradd -m john

// Alternatively the interactive command adduser can be used
adduser john   
// Set a password for user john
passwd john
// login as user john
su john
// deleting a user
userdel john
// Users are listed in /etc/passwd, where the user configurations are stored
cat /etc/passwd
// Output
john:x:1000:1000::/home/john:/bin/sh

username = john
x = password is stored in a different location
1000 = the user id
1000 = the group id 
/home/john = the home directory of the user
/bin/sh = the shell program which is executed if the user logs in
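Those colon-separated fields can also be split programmatically; a quick sketch with awk (standard on most distributions), using the line above:

```shell
# -F: splits on ':'; field 1 is the username, field 7 the login shell
echo 'john:x:1000:1000::/home/john:/bin/sh' | awk -F: '{ print $1, $7 }'
# → john /bin/sh
```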
// modifying a user
// modify the default shell to be bash for user john
usermod -s /bin/bash john

// check if changes applied
cat /etc/passwd | grep john

// output -> It worked 🎉
john:x:1000:1000::/home/john:/bin/bash
// Place where passwords are stored in a hashed format
// (this file is only accessible for the root user)
cat /etc/shadow

User Groups

// All users in the same group have the same kind of permissions
// Create a developer group
groupadd developers

// Find all the groups
cat /etc/group
// output
developers:x:1006:
// Every linux user has one primary group and zero or more secondary groups.
// Every file is owned by one user and one group. If a user is part of multiple groups,
// the question arises which group should own a new file the user creates.
// That's why a primary group is needed. The primary group is automatically created
// when creating a new user.


// set primary group of user john
usermod -g developers john

// set secondary group of user john
usermod -G developers john
// set multiple secondary groups
usermod -G artists,developers john

// see the groups a user is in
groups john 

Delete Files

// remove a file named file.txt
rm file.txt

// remove every file which starts with "file", e.g. file1.txt, file2.txt
rm file*

// delete directory
rm -r myDirectory

Rename / Move Files

// Rename file1.txt to hello.txt
mv file1.txt hello.txt

// Rename file1.txt to hello.txt and move it into the home directory of john
mv file1.txt /home/john/hello.txt

Show File Content

// Print the whole file content to the console
cat /etc/adduser.conf

// Reveal more and more content of the file by pressing enter
more /etc/adduser.conf

// less is a newer alternative to more which can scroll up and down (apt install less)

// up / down arrow or j and k, q to quit, enter to go to next file
less /etc/adduser.conf

// Show first 5 lines
head -n 5

// Show last 5 lines
tail -n 5
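head and tail compose well through a pipe; a small sketch that slices out the middle of a file (demo.txt is a made-up example file):

```shell
# Build a six-line demo file
printf 'a\nb\nc\nd\ne\nf\n' > demo.txt
# Lines 3-5: keep the first 5 lines, then the last 3 of those
head -n 5 demo.txt | tail -n 3
# → c
# → d
# → e
```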

Redirect

// Write whatever is in file1 into a new file2
cat file1.txt > file2.txt

// Create a file with the word "whatever" in it
echo whatever > temp.txt

// Write all the filenames in the etc folder into the filenames.txt
ls /etc > filenames.txt

Redirect Append

// Append the contents of file1.txt to file2.txt
// but do not overwrite any existing data.
cat file1.txt >> file2.txt

Search For String

// grep - Global Regular Expression Print

// Search for "hello" in file1.txt
grep hello file1.txt

// Search for "hello" in file1.txt ignoring case sensitivity
grep -i hello file1.txt
grep -i root /etc/passwd

// Search in multiple files
grep -i Hello hello1.txt hello2.txt

// Search in multiple files with a pattern
grep -i Hello file*

// Search in a directory and its subdirectories
grep -i -r hello /etc

// Search current directory and subdirectories
grep -i -r hello .
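Two more standard grep flags worth knowing alongside -i and -r: -n prints line numbers and -v inverts the match (demo.txt is a made-up file):

```shell
printf 'hello\nworld\nHello again\n' > demo.txt
# -n prefixes every match with its line number
grep -in hello demo.txt
# → 1:hello
# → 3:Hello again
# -v keeps only the lines that do NOT match
grep -iv hello demo.txt
# → world
```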

Combining Option Flags

// Instead of adding option flags one by one we can combine them into one flag
// the two commands below do exactly the same
grep -i -r hello /etc
grep -ir hello /etc

Finding Files and Directories

// recursively finds every file in the directory and subdirectory
find

// find directories
find -type d

// find files
find -type f

// find files which start with "file"
find -type f -name "file*"

// make the previous command case-insensitive
find -type f -iname "File*"

// find all the python files in this image and write them to a file
find -type f -iname "*.py" > pythonfiles.txt

Chaining Commands

// Chain multiple commands with `;`
mkdir test; cd test; echo done

// Chain commands but only execute the next one if the previous one succeeded
mkdir test && cd test && echo done

// Chain commands but only execute the second part if the first part fails
mkdir test || echo "directory exists"
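&& and || decide based on the exit status of the previous command (0 means success, anything else failure); a minimal sketch:

```shell
# false always exits non-zero, true always exits 0
false || echo "ran because the previous command failed"
# → ran because the previous command failed
true && echo "ran because the previous command succeeded"
# → ran because the previous command succeeded
# $? holds the exit status of the last command
false; echo $?
# → 1
```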

Piping

// send the output of one command as the input to another command
ls /bin | less
ls /bin | head -n 5
ls -la bin/ | grep chmod

// splitting up a chained command into multiple lines
mkdir hello;\
cd hello;\
echo done

Environment Variables

// print environment variables
printenv
// print a specific variable
printenv PATH
// print a specific variable with echo
echo $PATH
// The PATH variable lists the directories the OS will go through to find a command
// you want to execute. If it can not find the command in one of those folders
// it will throw an error.
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
// To make environment variables persist between terminal sessions, add them
// to the .bashrc file
echo DB_USER=thomas >> ~/.bashrc
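A sketch of the round trip, using a throwaway demo_rc file instead of ~/.bashrc so nothing permanent changes; note that export is needed if child processes should see the variable too:

```shell
# Persist the variable in a (demo) rc file
echo 'export DB_USER=thomas' >> demo_rc
# Source the file to load it into the current shell
. ./demo_rc
echo "$DB_USER"
# → thomas
# Because it is exported, a child process sees it as well
sh -c 'echo $DB_USER'
# → thomas
```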

Managing Processes

// see all the running processes
ps
// OUTPUT
  PID TTY          TIME CMD
    1 pts/0    00:00:00 bash
   25 pts/0    00:00:00 ps
// Two processes, one is running bash and one ps
// PID: Unique identifier to identify the process
// TTY: teletypewriter
// pts/0 -> indicates the terminal which is running those processes. If multiple people are logged in you would see pts/1 etc.
// TIME: how much CPU time the process has consumed
// kill a process
kill "processId"

// example
// create long running process in the background with sleep 100 seconds
sleep 100 &
// show all process with ps to find the id
ps
// kill the process with its id
kill 60
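Instead of reading the PID off the ps output, the shell exposes the PID of the most recent background job as $!; a sketch:

```shell
# Start a background job and capture its PID immediately
sleep 100 &
pid=$!
# Terminate it by PID
kill "$pid"
# wait returns a non-zero status for a killed job (128 + signal number)
wait "$pid" 2>/dev/null
```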

File Permissions

// Create deploy file
echo echo hello > deploy.sh

// To see the file permission run 
ls -l

// output
-rw-r--r-- 1 root root 11 Nov 29 20:31 deploy.sh

// split up into one identifier and three permission groups
- rw- r-- r--
1. The first letter is either `-` (file) or `d` (directory).
2. The first group indicates the rights of the file owner
3. The second group indicates the permissions for the group which owns the file
4. The third group indicates the permissions for everybody else
// run the bash script
./deploy.sh
// bash: ./deploy.sh: Permission denied

// Because we only have read and write permission, we can not execute
// Change file permissions with chmod

// Add the execute permission for the user
chmod u+x deploy.sh

// Add write permission for group
chmod g+w deploy.sh

// Remove read permission for others
chmod o-r deploy.sh

// Combine permission changes
// add write & execute permission and remove read permission for "others" and "group"
// on all the shell script files
chmod og+w+x-r *.sh
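chmod also accepts octal modes, where each digit is the sum of r=4, w=2, x=1 for user, group, and others; a sketch (stat -c is the GNU coreutils form):

```shell
echo 'echo hello' > deploy.sh
# 750 → rwx for the owner, r-x for the group, nothing for others
chmod 750 deploy.sh
# Print the octal permission bits back
stat -c '%a' deploy.sh
# → 750
./deploy.sh
# → hello
```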

Docker

Start a container

// find all containers, also stopped ones
docker ps -a
// start the container with its id
docker start id1234
// check out the console
docker attach id1234

Run a command in a container

// execute a command in interactive mode in container 1234, namely the bash command
docker exec -it 1234 bash
// start the process with user tom
docker exec -it -u tom 368d05d36a1d bash

Building Images

Image vs Container

Image:

Each image is a blueprint to create a container.

  • A cut-down OS
  • Third-party libraries
  • Application files
  • Environment variables

Container

Each container is an isolated environment for executing an application

  • Provides an isolated environment
  • Can be stopped & restarted
  • Just a process
docker ps
// start a new container
docker run -it ubuntu

DOCKERFILE

  • FROM - specifying the base image
  • WORKDIR - specifying the working directory, all following commands will be executed in the working directory
  • COPY - copy files and directories
  • ADD - add files and directories (also from URLs and archives)
  • RUN - execute operating system commands
  • ENV - set environment variables
  • EXPOSE - document the port the container listens on
  • USER - specify the user who runs the application, usually a user with limited privileges
  • CMD - specify the command to execute when starting the container
  • ENTRYPOINT - specify the command to execute when starting the container

DOCKERFILE FROM

It is up to you to select the right image from DockerHub. In general you always want to be specific with your version, because if you use a generic tag such as "latest" you will download a new image every time you build the docker image.

In the example below we define a node version 14 image which runs on top of the lightweight linux distribution alpine.

FROM node:14.16.0-alpine3.13

Once the Dockerfile has been added to the project, we can build the image.

// build the image and tag it with react-app, the . indicates where the Dockerfile can be found
docker build -t react-app .

Then we can run the container

// Starts the container with node
docker run -it react-app 
// Starts the container in the shell
docker run -it react-app sh

COPY APPLICATION FILES INTO THE IMAGE

We can for example copy the package.json file to a folder called app within the image. If the app folder does not exist in the image, it will be created.

FROM node:14.16.0-alpine3.13
COPY package.json /app

Normally we want to copy everything from the folder where the Dockerfile is located. We can do this by using the . syntax.

FROM node:14.16.0-alpine3.13
COPY . /app/

To use relative paths and avoid specifying the /app folder for every command, we can set the WORKDIR:

FROM node:14.16.0-alpine3.13
WORKDIR /app
COPY . .

Instead of using COPY, we could also use the ADD command. ADD can additionally copy from a remote URL, and if you add a local tar archive it decompresses it and adds the contents to the image.

Exclude Files with .dockerignore

Like .gitignore, we create a .dockerignore in which we can define files or directories which should be ignored by docker. This is particularly handy if we want to exclude certain directories from being copied to the image, for example node_modules, while keeping the simple syntax in the Dockerfile to copy everything with the .

// .dockerignore
node_modules

RUN command scripts

To run any shell commands we can use the RUN command. For example, if we want to install the node modules on every image build we can specify it as follows:

FROM node:14.16.0-alpine3.13
WORKDIR /app
COPY . .
RUN npm install

ENV to set up environment variables

We can define environment variables with the ENV instruction

FROM node:14.16.0-alpine3.13
WORKDIR /app
COPY . .
RUN npm install
ENV API_URL=http://api.myapp.com

Exposing Ports

To expose ports we can use the EXPOSE instruction. Since the app in our image will be listening on port 3000, we set it to that. EXPOSE is mostly documentation: it does not set up your container so that you can reach the application at localhost:3000 on your machine. The actual mapping only happens when we run the container.

FROM node:14.16.0-alpine3.13
WORKDIR /app
COPY . .
RUN npm install
ENV API_URL=http://api.myapp.com
EXPOSE 3000

Setting up users for Docker containers

It is good practice to set up users in the linux environment to run the applications. This is a security measure: if an attacker manages to gain access to the machine, they only have the privileges of that user and not of root.

For that we need to do two things:

  1. Set up a user group
  2. Set up a user with that group
// Create a group
addgroup appusers

// Linux lets you set up system users with -S
// System users are users which usually do not log in and only exist to execute programs
// Add a user named "app" and add it to the "appusers" group
adduser -S -G appusers app

In the Dockerfile we can run a command to create the user and then add it as the default user when starting the container with the USER keyword.

FROM node:14.16.0-alpine3.13
RUN addgroup appusers && adduser -S -G appusers app
USER app
WORKDIR /app
COPY . .
RUN npm install
ENV API_URL=http://api.myapp.com
EXPOSE 3000

It makes a difference where we define the user and switch to it, because all the following commands are executed as that user. If we only create the user at the end of the Dockerfile, the previous commands are executed by the root user. This can cause the problem that the user which should access or execute certain files does not have permission for them.

CMD and ENTRYPOINT to start Application

Once we have created the image, we can define what command should be executed once a container is spun up. This can be done with CMD or ENTRYPOINT.

CMD

The CMD defines what is executed once a container is started. The difference between RUN and CMD is that RUN is executed at build time of the image and CMD once the container is started.

There are two ways to define the command: shell mode or exec mode. Shell mode spins up another shell from /bin/sh, while exec mode executes the command directly. To use fewer resources it is best practice to use exec mode.

// Shell Mode
CMD npm start
// Exec Mode
CMD [ "npm", "start" ]

If you define multiple CMD instructions, only the last one will be executed.

CMD npm lint
CMD npm build
CMD npm start   // only this command will be executed 

To start a react webapp we add the CMD command to our Dockerfile:

FROM node:14.16.0-alpine3.13
RUN addgroup app && adduser -S -G app appuser
USER appuser
WORKDIR /app
COPY . .
RUN npm install
ENV API_URL=http://api.myapp.com
EXPOSE 3000
CMD [ "npm", "start" ]

ENTRYPOINT

With ENTRYPOINT we can achieve the same functionality as with CMD. The command defined in the entrypoint is also executed at run time of the container.

The entrypoint also has an exec and a shell mode.

// Shell Mode
ENTRYPOINT npm start
// Exec Mode
ENTRYPOINT [ "npm", "start"]

The difference to CMD is that it is a little harder to overwrite the entrypoint. If we want to, for example, start a container which has a CMD defined in interactive mode, we can simply run:

// This overwrites the CMD and starts the container with the shell
docker run -it react-app sh

With ENTRYPOINT configured, overriding takes more effort: it can only be replaced with the explicit --entrypoint flag, e.g. docker run -it --entrypoint sh react-app.

CMD or ENTRYPOINT?

It does not matter that much. If we know exactly which command should be executed when starting the container and want to protect that command from being overwritten easily, we can use ENTRYPOINT. If we do not need that extra protection, we can use CMD.
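One standard Docker behavior worth knowing here (not covered in the notes above): when both instructions are present in exec form, CMD supplies default arguments to ENTRYPOINT, and arguments passed to docker run replace only the CMD part.

```dockerfile
# ENTRYPOINT fixes the executable, CMD provides replaceable default arguments
ENTRYPOINT [ "npm" ]
CMD [ "start" ]
# docker run react-app        -> runs "npm start"
# docker run react-app test   -> runs "npm test" (CMD overridden, ENTRYPOINT kept)
```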

Optimising Builds

Takeaway:

To optimise your build you should move the instructions which do not change often to the top of your Dockerfile (for example dependency installations)

Docker is built on top of layers. Every instruction in our Dockerfile is a layer. We can see these layers by executing the docker history "image-name" command. The history output should be read from the bottom up. Docker caches every layer. If a layer has not changed compared to the previous build it will just take the cache. If a layer has changed, docker will throw the layer away and create a new one, and every layer built on top of that layer will also be created again. As soon as one layer can not be retrieved from the cache, all subsequent layers can also not be retrieved.

The reason why docker can not cache the npm modules properly in our app is that when we copy all the files of the app with COPY . ., docker checks whether anything changed. As soon as we change any file in our project, docker will throw away the cache from that layer on and start again from scratch.

docker history react-app

// result
IMAGE          CREATED          CREATED BY                                      SIZE      COMMENT
2833ce0ddfbb   3 minutes ago    ENTRYPOINT ["/bin/sh" "-c" "npm start"]         0B        buildkit.dockerfile.v0
<missing>      3 minutes ago    EXPOSE map[3000/tcp:{}]                         0B        buildkit.dockerfile.v0
<missing>      3 minutes ago    ENV API_URL=http://api.myapp.com                0B        buildkit.dockerfile.v0
+ Install Packages
<missing>      3 minutes ago    RUN /bin/sh -c npm install # buildkit           179MB     buildkit.dockerfile.v0
+ Copy Project
<missing>      4 minutes ago    COPY . . # buildkit                             1.99MB    buildkit.dockerfile.v0
<missing>      51 minutes ago   WORKDIR /app                                    0B        buildkit.dockerfile.v0
<missing>      51 minutes ago   USER appuser                                    0B        buildkit.dockerfile.v0
<missing>      51 minutes ago   RUN /bin/sh -c addgroup app && adduser -S -G…   4.86kB    buildkit.dockerfile.v0
+ Linux + Node Setup finished
<missing>      8 months ago     /bin/sh -c #(nop)  CMD ["node"]                 0B
<missing>      8 months ago     /bin/sh -c #(nop)  ENTRYPOINT ["docker-entry…   0B
<missing>      8 months ago     /bin/sh -c #(nop) COPY file:238737301d473041…   116B
<missing>      8 months ago     /bin/sh -c apk add --no-cache --virtual .bui…   7.84MB
<missing>      8 months ago     /bin/sh -c #(nop)  ENV YARN_VERSION=1.22.5      0B
<missing>      8 months ago     /bin/sh -c addgroup -g 1000 node     && addu…   103MB
<missing>      8 months ago     /bin/sh -c #(nop)  ENV NODE_VERSION=14.16.0     0B
<missing>      8 months ago     /bin/sh -c #(nop)  CMD ["/bin/sh"]              0B
<missing>      8 months ago     /bin/sh -c #(nop) ADD file:7119167b56ff1228b…   5.61MB

To prevent this cache invalidation on every file change we can change our Dockerfile a little bit. If we first only copy package.json and package-lock.json, we add a layer which is only responsible for the dependency files. Then we run npm install. If none of the dependency files changed, docker will be able to use the cache of the npm install command. Once the installation of the dependencies is completed, we copy over the rest of the application files.

FROM node:14.16.0-alpine3.13
RUN addgroup app && adduser -S -G app appuser
USER appuser
WORKDIR /app
COPY package*.json .
RUN npm install
COPY . .
ENV API_URL=http://api.myapp.com
EXPOSE 3000
CMD [ "npm", "start" ]

Because we can now use the cache for our dependencies, the image build speeds up a lot, since we do not need to download the roughly 180MB of dependency files again.

IMAGE          CREATED          CREATED BY                                      SIZE      COMMENT
4b54b4217315   13 minutes ago   ENTRYPOINT ["/bin/sh" "-c" "npm start"]         0B        buildkit.dockerfile.v0
<missing>      13 minutes ago   EXPOSE map[3000/tcp:{}]                         0B        buildkit.dockerfile.v0
<missing>      13 minutes ago   ENV API_URL=http://api.myapp.com                0B        buildkit.dockerfile.v0
+ Copy all the application files
<missing>      13 minutes ago   COPY . . # buildkit                             1.99MB    buildkit.dockerfile.v0
+ Install Dependencies
<missing>      15 minutes ago   RUN /bin/sh -c npm install # buildkit           179MB     buildkit.dockerfile.v0
+ Copy Depenceny Files
<missing>      16 minutes ago   COPY package*.json . # buildkit                 682kB     buildkit.dockerfile.v0
<missing>      55 minutes ago   WORKDIR /app                                    0B        buildkit.dockerfile.v0
<missing>      55 minutes ago   USER appuser                                    0B        buildkit.dockerfile.v0
<missing>      55 minutes ago   RUN /bin/sh -c addgroup app && adduser -S -G…   4.86kB    buildkit.dockerfile.v0
+ Linux + Node Setup finished
<missing>      8 months ago     /bin/sh -c #(nop)  CMD ["node"]                 0B
<missing>      8 months ago     /bin/sh -c #(nop)  ENTRYPOINT ["docker-entry…   0B
<missing>      8 months ago     /bin/sh -c #(nop) COPY file:238737301d473041…   116B
<missing>      8 months ago     /bin/sh -c apk add --no-cache --virtual .bui…   7.84MB
<missing>      8 months ago     /bin/sh -c #(nop)  ENV YARN_VERSION=1.22.5      0B
<missing>      8 months ago     /bin/sh -c addgroup -g 1000 node     && addu…   103MB
<missing>      8 months ago     /bin/sh -c #(nop)  ENV NODE_VERSION=14.16.0     0B
<missing>      8 months ago     /bin/sh -c #(nop)  CMD ["/bin/sh"]              0B
<missing>      8 months ago     /bin/sh -c #(nop) ADD file:7119167b56ff1228b…   5.61MB

Removing Containers

Remove Unused Images

With the command docker images we can see all the docker images. Often we have docker images lying around which we are no longer using. To delete them we can run:

docker image prune

This only removes images, which no longer have a running or stopped container. To also remove images which still have existing containers we first need to delete those containers before deleting the image.

Remove Unused Containers

We can delete all stopped containers with the container prune command:

// Have a look at the stopped containers
docker ps -a

// Delete all stopped containers
docker container prune

Remove one or more images

To remove specific images we can use image rm with either the image name or the ID.

docker image rm react-app
docker image rm c059bfaa849c

Tagging Images

docker images

// output
REPOSITORY     TAG       IMAGE ID       CREATED          SIZE
react-app      latest    4b54b4217315   45 minutes ago   298MB
alpine         latest    c059bfaa849c   7 days ago       5.59MB
hello-docker   latest    5a1fb8e91224   10 days ago      169MB
ubuntu         latest    ba6acccedd29   6 weeks ago      72.8MB

The latest tag is okay in development, but in production you always want a version number. That way you can troubleshoot exactly which version is giving you problems, whereas latest does not tell you which version is running.

Tag during build

We can add a tag while building an image:

// Define the tag after the ":"
docker build -t react-app:1 .
docker build -t react-app:buster .
docker build -t react-app:3.4.1 .

Tag after build

// Tag the react-app which currently has tag "latest"
// and change the tag to the tag "1"
docker image tag react-app:latest react-app:1

Latest Tag

The latest tag does not always point to the latest image that was built. If you want the latest tag to point to your latest build, you need to set it explicitly.

Example:

We start with one image

docker build -t react-app .

This gives us one image:

  REPOSITORY   TAG       IMAGE ID       CREATED         SIZE
+ react-app    latest    89b73529e844   2 minutes ago   298MB

We change something in the code and create a new image, but this time we tag it:

docker build -t react-app:1 .

We now have two images. The newest image is the one with tag 1, which has id b0f7e6012d83. The latest tag is still pointing to the image with id 89b73529e844. Thus although it is called latest, it is not the latest build we made of this image.

  REPOSITORY   TAG       IMAGE ID       CREATED         SIZE
+ react-app    1         b0f7e6012d83   2 seconds ago   298MB
  react-app    latest    89b73529e844   5 minutes ago   298MB

To solve this we can set the tag ourselves

docker image tag b0f7e60 react-app:latest

This makes the latest tag point to the image which we built last. The previous latest image with id 89b73529e844 still exists, but it no longer has a name or a tag.

  REPOSITORY   TAG       IMAGE ID       CREATED         SIZE
  react-app    1         b0f7e6012d83   2 minutes ago   298MB
+ react-app    latest    b0f7e6012d83   2 minutes ago   298MB
- <none>       <none>    89b73529e844   7 minutes ago   298MB

Container Commands

Run Detached Containers

To run a container in detached mode we can use the -d flag.

docker run -d react-app
// docker ps
CONTAINER ID   IMAGE       COMMAND                  CREATED          STATUS          PORTS      NAMES
5c34d7f46da5   react-app   "docker-entrypoint.s…"   21 seconds ago   Up 20 seconds   3000/tcp   distracted_lalande

To give the container a name yourself use the --name flag.

docker run -d --name blue-sky react-app
// docker ps
CONTAINER ID   IMAGE       COMMAND                  CREATED              STATUS              PORTS      NAMES
691ebc2ee899   react-app   "docker-entrypoint.s…"   3 seconds ago        Up 2 seconds        3000/tcp   blue-sky
5c34d7f46da5   react-app   "docker-entrypoint.s…"   About a minute ago   Up About a minute   3000/tcp   distracted_lalande

Container Logs

To check out the logs of a detached container we can run

docker logs "containerId"

docker logs 691ebc2ee899

If you have a container which continuously puts out logs you can follow them with the -f flag.

docker logs -f 691ebc2ee899

To look at the last n lines we can use the -n flag

docker logs -n 5 691ebc2ee899

To see timestamps we can use the -t option

docker logs -t 691ebc2ee899

Publish Port

At the moment the application within our docker container is running on port 3000. But if we access localhost:3000 nothing appears. That is because the application is running inside the container and not on our machine. To be able to connect to the application we can map ports of our machine to ports of the docker container.

In this example we are mapping port 80 of the host machine to port 3000 of the docker container.

docker run -d -p 80:3000 --name c1 react-app

Once the container runs with this mapping, port 80 on the host is reserved by the forwarding. Because it is now connected to the container, we can access the application at localhost:80.

Executing Commands in Running Containers

We can execute commands in running containers with exec

Run the "ls" command in the container with name "c1"

docker exec c1 ls

Stopping and Starting Containers

Stop

Stop the container with name "c1"

docker stop c1

Start

You can start a container with docker start. The difference between docker run and docker start is that docker run creates a new container, while docker start starts a container which has previously been stopped.

Start the container with name "c1"

docker start c1

Removing Containers

We can remove / delete containers only once they are stopped.

Remove / Delete

There are two ways to remove containers

docker container rm c1
docker rm c1

Prune

We can delete all stopped containers with the prune command

docker container prune

Persisting Data using Volumes

Every docker container has its own file system. If we create a file in one docker container, the file does not exist in another docker container; they are totally separate environments. If we remove a container, all the files and data we stored in that container are gone. Because of that we should never store data in a container's file system.

To get around this problem of losing data we can use volumes. A volume is storage outside of containers. It can be directly on the host or somewhere in the cloud.

Creating a volume

Creating a volume called "app-data"

docker volume create app-data

Inspecting volumes

Inspect the "app-data" volume

docker volume inspect app-data

Driver:

Indicates where the volume is. "local" means it is on the host. If you are using a cloud provider, you would need to find the driver for that cloud platform.

Mountpoint:

Indicates where the directory is created on the host. On Windows it would be something like C:\Program. On MacOS it shows a linux path; that is because Docker on Mac runs in a lightweight linux virtual machine. The path which you see is the path inside that virtual machine; it does not exist on the Mac.

//output
[
    {
        "CreatedAt": "2021-12-02T22:58:02Z",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/app-data/_data",
        "Name": "app-data",
        "Options": {},
        "Scope": "local"
    }
]

List all volumes

docker volume ls

Using volumes inside containers

Volumes exist outside of containers. When we run a container, we can define the volumes the container has access to, and because volumes live outside the containers, multiple containers can have access to the same volume.

We can spin up a docker container and assign it a volume.

docker run -d -p 4000:3000 -v app-data:/app/data react-app

We add a volume called app-data to the container. If the volume app-data does not exist, docker will create it. We define an absolute path /app/data in the file system of the container. If those folders do not exist already, docker will create them.

-v app-data:/app/data

The problem with letting docker create the /app/data folder is that docker creates it as the root user. Any user we define, such as the app user in our Dockerfile, will not have write permission for that folder. To prevent this problem, we create the /app/data folder at build time as the app user.

  FROM node:14.16.0-alpine3.13
  RUN addgroup app && adduser -S -G app app
  USER app
  WORKDIR /app
+ RUN mkdir data
  COPY package*.json .
  RUN npm install
  COPY . . 
  ENV API_URL=http://api.myapp.com/
  EXPOSE 3000
  CMD ["npm", "start"]

Copying Files between the Host and Containers

Copy file from Container to Host

We have a running container with a log file located at /app/log.txt

CONTAINER ID   IMAGE          COMMAND                  CREATED             STATUS             PORTS                    NAMES
93b172ccce4c   react-app      "docker-entrypoint.s…"   10 minutes ago      Up 10 minutes      0.0.0.0:6000->3000/tcp   contvol

We can copy the file from the container to a directory on the host system. The . indicates that the file should be copied into the folder in which the command is executed on the host.

docker cp 93b172ccce4c:/app/log.txt .

Copy file from Host to Container

We have a file called balloon.txt in the current folder on the host which we want to copy into the /app folder within the container.

docker cp balloon.txt 93b172ccce4c:/app

Sharing the Source Code with a Container

When we are developing, we want changes we make to the code in the IDE to be reflected automatically within the container, so that we can see whether they work. We can achieve this with a mapping.

We can create a mapping from the current directory to a directory within the container with the -v option. For example, to map the directory in which we are changing the code to the /app directory in the container, we use -v $(pwd):/app. The $(pwd) evaluates to the current directory, so the parameter $(pwd):/app maps the /app directory within the container to the current directory on your computer.

docker run -d -p 5001:3000 -v $(pwd):/app react-app

Since the container will reference your current directory for the /app directory, you need to make sure that your current directory contains all the dependencies. If, for example, you don't have node_modules installed in the directory $(pwd) points to, then the container won't be able to run the npm start command.
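The $(pwd) substitution happens in the shell before docker ever sees the command, which is easy to verify without docker:

```shell
# $(pwd) is plain shell command substitution: the shell expands it to the
# current working directory before the docker CLI receives the argument.
mkdir -p /tmp/demo && cd /tmp/demo
echo "-v $(pwd):/app"
# prints: -v /tmp/demo:/app
```

Because the expansion is done by the shell, the mapping always points to wherever you run the command from, not to a fixed path.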

Docker Compose

A Docker Compose File

A Docker Compose file in which we define three containers and one volume.

version: "3.8"

services:
  web:
    build: ./frontend
    ports:
      - 3000:3000
  api:
    build: ./backend
    ports:
      - 3001:3001
    environment:
      DB_URL: mongodb://database/vidly
  database:
    image: mongo:4.0-xenial
    ports:
      - 27017:27017
    volumes:
      - vidly:/data/db

volumes:
  vidly:
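A note on the DB_URL value: inside a composition, each service is reachable by the other services under its service name as hostname, which is why the URL uses database as the host. Compose also offers depends_on to control start order; a sketch of how the api service could declare its dependency (note that depends_on only waits for the container to start, not for MongoDB to accept connections, which is why a wait script is still useful):

```yaml
services:
  api:
    build: ./backend
    depends_on:
      - database
```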

Building a docker composition

docker-compose build

Start the application

docker-compose up

Stop and remove the containers

docker-compose down

Sharing Code with a container

If we want to share code between the host and the container, we can take the same approach as before when we only had a single container: we define a volume which is shared between the host and the container. In this case we define it for both the web and the api service.

version: "3.8"

services:
  web:
    build: ./frontend
    ports:
      - 3000:3000
+   volumes:
+     - ./frontend:/app
  api:
    build: ./backend
    ports:
      - 3001:3001
    environment:
      DB_URL: mongodb://database/vidly
+   volumes:
+     - ./backend:/app
  database:
    image: mongo:4.0-xenial
    ports:
      - 27017:27017
    volumes:
      - vidly:/data/db

volumes:
  vidly:

These changes are equivalent to running a container with a -v flag that points to the code folder on the host.

docker run -d -p 3000:3000 -v $(pwd):/app web

Running Commands / Database Migration

We can override the command which is executed with the command property. If the Dockerfile, for example, defines npm start as the command to run, we can override that by defining the command property in the compose file. This can be used to run all kinds of scripts, such as database migration scripts.

version: "3.8"

services:
  web:
    build: ./frontend
    ports:
      - 3000:3000
    volumes:
      - ./frontend:/app
  api:
    build: ./backend
    ports:
      - 3001:3001
    environment:
      DB_URL: mongodb://database/vidly
    volumes:
      - ./backend:/app
+   command: ./docker-entrypoint.sh
  database:
    image: mongo:4.0-xenial
    ports:
      - 27017:27017
    volumes:
      - vidly:/data/db

volumes:
  vidly:

In the compose file we said it should run the docker-entrypoint.sh command. In that file we define all the steps which should be taken. Since we want to run database migration scripts, we first have to make sure the database is up and running. For that we use the wait-for script, which waits until the database accepts connections on port 27017. Once that is the case, we run the database migration script npm run db:up and then finally start the backend.

# docker-entrypoint.sh
#!/bin/sh

echo "Waiting for MongoDB to start..."
./wait-for database:27017 

echo "Migrating the database..."
npm run db:up 

echo "Starting the server..."
npm start 
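The wait-for script above is a third-party helper; its core idea can be sketched in a few lines of shell (an assumption-laden sketch: it relies on nc/netcat being available inside the image):

```shell
# Sketch of a wait-for style helper: poll until a TCP port accepts connections.
wait_for() {
  host="${1%%:*}"    # part before the first ":" -> host, e.g. "database"
  port="${1##*:}"    # part after the last ":"  -> port, e.g. "27017"
  until nc -z "$host" "$port" 2>/dev/null; do
    echo "waiting for $host:$port..."
    sleep 1
  done
}

# usage (inside docker-entrypoint.sh): wait_for database:27017
```

The real script is more robust (timeouts, argument parsing), but the polling loop is the essential part: the migration only starts once the port actually responds.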

Useful Links

Revisit

Building Images

  • Sharing Images
  • Saving and Loading Images