
Docker basics

Ref docker doc and docker --help

Here is a good docker tutorial

docker image download and commit

docker pull [OPTIONS] NAME[:TAG|@DIGEST]
docker commit [OPTIONS] CONTAINER [REPOSITORY[:TAG]]
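
For example (image name, container name, and tag here are just illustrative):

docker pull ubuntu:18.04
docker commit my_container myrepo/myimage:v1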

check docker

docker ps
docker images
docker inspect 

Others

docker run
docker rmi
docker rm 
docker cp [OPTIONS] SRC_PATH|- CONTAINER:DEST_PATH
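
For instance, to copy files between the host and a container (container name and paths are hypothetical):

docker cp ./config.json mycontainer:/app/config.json
docker cp mycontainer:/var/log/app.log ./app.log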

Ctrl+p then Ctrl+q detaches from an interactive container and leaves it running in the background (daemon mode). If you want to reattach, run

docker attach [OPTIONS] CONTAINER

Setup docker

run docker without sudo

sudo groupadd docker
sudo usermod -aG docker $USER
newgrp docker 
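
After this (you may need to log out and back in for the group change to take effect), verify that docker works without sudo:

docker run hello-world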

Push to docker hub

docker login
docker tag image_id yourhubusername/REPOSITORY_NAME:tag
docker push yourhubusername/REPOSITORY_NAME

Dockerfile

a basic template, ref. dockerfile

FROM ubuntu:18.04
COPY . /app
EXPOSE 9000
RUN make /app
CMD python /app/app.py
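
To build an image from this Dockerfile and run it (the tag myapp is arbitrary):

docker build -t myapp .
docker run -p 9000:9000 myapp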

docker-compose.yml ref compose

version: "3.8"
services:
  webapp:
    build:
      context: ./dir
      dockerfile: Dockerfile-alternate
      args:
        buildno: 1
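
With such a file in place, a typical workflow is:

docker-compose up -d --build    # build images and start services in the background
docker-compose logs -f webapp   # follow the service logs
docker-compose down             # stop and remove the containers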

a full example of django dockerfile and docker-compose

Docker proxy

  1. set proxy for docker pull, ref docker-proxy
sudo mkdir -p /etc/systemd/system/docker.service.d
vim /etc/systemd/system/docker.service.d/http-proxy.conf
  2. add
[Service]
Environment="HTTP_PROXY=socks5://127.0.0.1:1080"
Environment="HTTPS_PROXY=socks5://127.0.0.1:1080"

where 127.0.0.1:1080 is your proxy forward port.

  3. Take effect
sudo systemctl daemon-reload
sudo systemctl restart docker    # or: sudo systemctl reload docker
  4. Verify and enjoy
sudo systemctl show --property=Environment docker
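
If the proxy is picked up correctly, pulling a small test image should now go through it:

docker pull hello-world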

Docker with cuda

refer nvidia-docker

setup nvidia-container-toolkit

# Add the package repositories
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list

sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit
sudo systemctl restart docker
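
A quick way to check that containers can see the GPU (the tag may differ depending on your CUDA version):

docker run --gpus all --rm nvidia/cuda:10.2-base nvidia-smi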

pull a CUDA image matching your CUDA version

docker pull nvidia/cuda:10.2-base

run docker

docker run --gpus all --ipc=host --net host -it --rm -v /etc/localtime:/etc/localtime:ro -v /dev/shm:/dev/shm -v $(pwd):/workspace --user $(id -u):$(id -g) nvidia/cuda:10.2-runtime-ubuntu18.04

-v /dev/shm:/dev/shm is needed for PyTorch distributed training with the NCCL backend

or create a Dockerfile:

ARG DOCKER_BASE_IMAGE=nvidia/cuda:10.2-base
FROM $DOCKER_BASE_IMAGE
# remove the cuda apt source lists to avoid GPG errors during apt-get
RUN rm /etc/apt/sources.list.d/cuda.list && rm /etc/apt/sources.list.d/nvidia-ml.list
RUN apt-get update && apt-get -y install sudo 

COPY pre-install.sh .
RUN ./pre-install.sh

ARG UID=1000
ARG GID=1000
ARG USER=docker
# default password for user
ARG PW=docker
RUN useradd -m ${USER} --uid=${UID} -s /bin/bash  && echo "${USER}:${PW}" | chpasswd && adduser ${USER} sudo
# Set the default user used when entering the container
USER ${USER}
WORKDIR /home/${USER}

you can write the pre-install.sh script as

#!/bin/bash
apt-get install -y iputils-ping
apt-get install -y vim
apt-get install -y wget
apt-get install -y git

build by

sudo docker build --build-arg USER=$USER \
                  --build-arg UID=$(id -u) \
                  --build-arg GID=$(id -g) \
                  -t mycuda \
                  -f Dockerfile \
                  .

run docker with

docker run --gpus all --ipc=host --net host -it -e "TERM=xterm-256color" -v /etc/localtime:/etc/localtime:ro -v /dev/shm:/dev/shm -v $(pwd):/home/docker/workspace --hostname mycontainer --user docker mycuda

Add --rm if you want the container to be removed automatically after it exits.

Re-attach to a stopped container with

docker start -ai [container name or id]
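
Stopped containers do not show up in plain docker ps; list them with:

docker ps -a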

Container using Host proxy

I set up a proxy on host port 8118 and want my container to use it to access the network. How can this be achieved?

1. Setup client proxy

  • According to the Docker documentation, a proxy is configured for Docker by setting environment variables. There are two methods. One is to create/edit the file ~/.docker/config.json and add the following to it:
{
 "proxies":
 {
   "default":
   {
     "httpProxy": "http://127.0.0.1:8118",
     "httpsProxy": "http://127.0.0.1:8118",
     "noProxy": "localhost"
   }
 }
}

After setting this config, the proxy is set up automatically whenever you rebuild a Docker image. One thing should be noted, though: while building the image you may run commands that need the network (e.g. apt update) inside the container before the proxy is fully set up, so these commands can fail with network errors. In that case, make the build use the host network by using:

docker build --network host xxx
  • Another way is to set the environment variables manually in the Dockerfile, e.g. ENV HTTP_PROXY "http://127.0.0.1:8118", or to set them on the docker CLI, for example:
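
A sketch of the CLI variant (myimage is a placeholder; 127.0.0.1:8118 is the host proxy from above, reachable here because of --net host):

docker run --net host -e HTTP_PROXY=http://127.0.0.1:8118 -e HTTPS_PROXY=http://127.0.0.1:8118 -it myimage bash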

2. Setup network mapping

  • There are two ways to map the host network into the container. One is to share the entire host network with the container by using docker run --net host.

  • The other is to map only the proxy port, e.g. docker run -p 8118:8118.

SSH to container

ssh from the host machine

One can ssh directly to a running container.

  1. make sure the container has the ssh service installed and started;

  2. inspect the container ip address by

docker inspect <container id> | grep "IPAddress"
  3. ssh to container
ssh user@container_ip_address
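
The IP address can also be extracted directly with a Go-template filter instead of grep:

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container id>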

Direct ssh from a remote machine

Map the container's ssh port to a host port by running the container with -p <hostPort>:<containerPort>, e.g.:

docker run -p 52022:22 container1 
docker run -p 53022:22 container2

Then, if ports 52022 and 53022 of the host are accessible from outside, you can ssh directly to the containers using the IP of the host (remote server), specifying the port with -p <port>. I.e.:

ssh -p 52022 myuser@RemoteServer  -> SSH to container1

ssh -p 53022 myuser@RemoteServer  -> SSH to container2

Access files inside container

There are three ways to achieve this:

  1. you can map the container's folder to the host and then access it as if it were on the host, e.g. via sftp;
  2. you can create an HTTP server with python3 -m http.server, map the server port to the host, and then access the files in a browser at http://host:port; but this is only a simple HTTP file server, so you cannot edit files the way you can with sftp (a quick sketch follows this list);
  3. you can use WebDAV for file access.
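
A quick sketch of option 2 (image name, port, and mount path are placeholders, and the image must have python3 available):

# start a container with the current directory mounted and the file-server port published
docker run -d -p 8000:8000 -v $(pwd):/data -w /data --name fileserver myimage python3 -m http.server 8000
# then browse the files at http://<host>:8000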

Here I illustrate the WebDAV solution. In my case, I run my container on a remote host. The remote host is reachable by ssh on a specific port, say 12345 (there is actually a jump server between me and the remote host). In this setup I cannot ssh directly into the container unless I map port 12345 to the container's ssh port, but then I could no longer reach the remote host itself, only the container. To avoid that, I access the files inside the container over HTTP rather than ssh, and tunnel the HTTP file-server port to my local machine over ssh.

container ----WebDAV server port 8000
    |
    |port mapping 8000->host:8000
    |           22                    12345
  host--------------------jumper----------------my computer 9980

WebDAV is a long-standing protocol that enables a webserver to act as a fileserver and support collaborative authoring of content on the web, so it satisfies my requirements.

Set up WebDAV server on container

  1. install the Python WebDAV server (wsgidav)

     pip install wsgidav
     pip install cheroot
    
  2. create the wsgidav configuration file wsgidav.yaml; refer to the wsgidav config docs for details (a minimal sketch is shown below)

  3. run wsgidav

    wsgidav --config=wsgidav.yaml --host=0.0.0.0 --port=8000 --root ./share
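
A minimal wsgidav.yaml might look like the sketch below (user name and password are placeholders; check the wsgidav configuration reference for the exact schema of your version):

host: 0.0.0.0
port: 8000
provider_mapping:
  "/": "./share"
http_authenticator:
  accept_basic: true
  accept_digest: true
simple_dc:
  user_mapping:
    "*":
      "webdav_user":
        password: "change_me"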
    

set up ssh tunnel

on your computer

ssh -f -N -L 9980:0.0.0.0:8000  -p 12345 host_username@jumper_ip

Then you can access the files inside the remote container at dav://localhost:9980/. On Linux this is quite simple: open your file explorer, select "Other Locations", type dav://localhost:9980/, fill in the user name and password set in wsgidav.yaml, and you are done! You can read/write these files just as if they were on your own computer. Enjoy!

On Windows, you may need to download a WebDAV client.
