docker.io

This covers testing docker swarm, especially to verify that the overlay network encrypts the data sent among the containers. Three nodes are needed: one manager and two workers.

VMs setup

This test relies on debvm, and on a virbr0 network bridge that is already configured.

/etc/qemu/bridge.conf should have:

allow virbr0

Allow qemu (via qemu-bridge-helper) to use the network bridge:

sudo chown root:kvm /etc/qemu/bridge.conf
sudo chmod 640 /etc/qemu/bridge.conf
sudo chmod u+s /usr/lib/qemu/qemu-bridge-helper
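
An optional check that the permissions and the setuid bit ended up as intended:

ls -l /etc/qemu/bridge.conf /usr/lib/qemu/qemu-bridge-helper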

apt-cacher-ng is also running on the host machine. Adjust the mirror URL if that is not the case for you.

export RELEASE=buster
debvm-create --size=3GB --release=$RELEASE -- http://192.168.122.1:3142/deb.debian.org/debian
mv rootfs.ext4 rootfs-manager.ext4
debvm-create --size=3GB --release=$RELEASE -- http://192.168.122.1:3142/deb.debian.org/debian
mv rootfs.ext4 rootfs-worker1.ext4
debvm-create --size=3GB --release=$RELEASE -- http://192.168.122.1:3142/deb.debian.org/debian
mv rootfs.ext4 rootfs-worker2.ext4

debvm-run -i rootfs-manager.ext4 -- -netdev bridge,id=net1,br=virbr0 -device virtio-net-pci,netdev=net1,mac=52:54:00:12:34:00 -smp 2
debvm-run -i rootfs-worker1.ext4 -- -netdev bridge,id=net1,br=virbr0 -device virtio-net-pci,netdev=net1,mac=52:54:00:12:34:01 -smp 2
debvm-run -i rootfs-worker2.ext4 -- -netdev bridge,id=net1,br=virbr0 -device virtio-net-pci,netdev=net1,mac=52:54:00:12:34:02 -smp 2

In the example documented here, the machines have these network interfaces and addresses:

# virsh net-dhcp-leases default
 Expiry Time           MAC address         Protocol   IP address          Hostname   Client ID or DUID
-----------------------------------------------------------------------------------------------------------------------------------------------
 2023-08-01 09:41:00   52:54:00:12:23:01   ipv4       192.168.122.32/24   worker-1   ff:c2:72:f6:09:00:02:00:00:ab:11:18:aa:a0:aa:74:b7:84:be
 2023-08-01 09:42:06   52:54:00:12:23:02   ipv4       192.168.122.33/24   worker-2   ff:c2:72:f6:09:00:02:00:00:ab:11:06:b4:b1:fd:ba:22:08:91
 2023-08-01 09:44:12   52:54:00:12:45:56   ipv4       192.168.122.35/24   manager    ff:c2:72:f6:09:00:02:00:00:ab:11:4a:c0:f0:69:cf:3d:9b:1e

Docker

On each VM

  • Adjust their hostnames

hostnamectl set-hostname {manager,worker-1,worker-2}

if hostnamectl is available (as in buster); otherwise, set the hostname manually, e.g. as sketched below.
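
On images without hostnamectl, one possible manual alternative (shown here for the manager; repeat with the matching name on each worker) is:

echo manager > /etc/hostname
hostname manager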

  • Install docker.io

apt install -y docker.io
  • On bookworm and trixie, you additionally need to install e2fsprogs and load the xt_u32 module

apt install -y e2fsprogs
modprobe -v xt_u32
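
xt_u32 provides the u32 iptables match that the encrypted overlay relies on; a quick check that the module is indeed loaded (an optional sanity check):

lsmod | grep xt_u32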

On manager:

  • Pull nginx (the image used for the test)

docker image pull nginx
  • Init the swarm

docker swarm init --advertise-addr=$MANAGER_IP
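
MANAGER_IP is the address of the manager VM; with the DHCP leases shown above it would be, for example:

export MANAGER_IP=192.168.122.35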

The output should give you the swarm join token:

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-4788k6slks7r8zc6v8vilr4m453v1uohv57s0ynbg95etkxj6o-cc7kalnbuo0xnbi44fzeubu7j 192.168.122.35:2377

In any case, it is possible to get the token later with:

docker swarm join-token worker
  • Create an encrypted overlay network:

docker network create -d overlay --opt encrypted nginx-net
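
To confirm the encryption option was recorded on the network (an optional check; the exact output depends on the docker version):

docker network inspect nginx-net --format '{{ .Options }}'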

On the workers:

docker swarm join --token SWMTKN-1-4788k6slks7r8zc6v8vilr4m453v1uohv57s0ynbg95etkxj6o-cc7kalnbuo0xnbi44fzeubu7j 192.168.122.35:2377

On manager:

  • Workers should be visible:

docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
kl59ohp9xlggbtktgtblwdth7 *   manager             Ready               Active              Leader              18.09.1
g2yc5o0k9162kggu9rc5nnakk     worker-1            Ready               Active                                  18.09.1
t0yhj5l6fl4i3k90lztw3oo0h     worker-2            Ready               Active                                  18.09.1

It may be useful to label the workers, in order to be able to apply service deployment constraints.

docker node update --label-add foo=bar worker-1
docker node update --label-add foo=bar worker-2
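
The labels can then be verified with, for instance:

docker node inspect worker-1 --format '{{ .Spec.Labels }}'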
  • Create a service:

docker service create --name my-nginx --publish target=80,published=80 --replicas=3 --network nginx-net nginx

Or better, with deployment constraints:

docker service create --name my-nginx --publish target=80,published=80 --replicas=2 --network nginx-net --constraint node.labels.foo==bar nginx
  • Verify the workers are serving the service:

docker service ps my-nginx

The worker nodes should be listed in the NODE column, and their current state should be Running.
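
As a quick check that the published service answers, it can be queried from the host, e.g. against the manager address from the leases above (published ports are served through the swarm routing mesh):

curl http://192.168.122.35/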

  • On the host, use tcpdump to sniff the traffic, e.g. on UDP port 4789 (docker overlay network data)

sudo tcpdump -vvv --print "udp port 4789" -i virbr0 -w capture.pcap
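
Since nginx-net was created with --opt encrypted, the traffic exchanged among the containers is expected to appear as IPsec ESP (IP protocol 50) rather than clear-text VXLAN; a complementary capture like the following should show those encrypted packets:

sudo tcpdump -vvv --print "ip proto 50" -i virbr0 -w capture-esp.pcap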

Copyright (C) 2023 Santiago Ruano Rincón