Docker Networks
One of the reasons Docker containers and services are so powerful is that you can connect them together, or connect them to non-Docker workloads.
Docker’s networking subsystem is pluggable, using drivers. Several drivers exist by default, and provide core networking functionality:
- Bridge: The default network driver. If you don’t specify a driver, this is the type of network you are creating. Bridge networks are usually used when your applications run in standalone containers that need to communicate.
- Host: For standalone containers, remove network isolation between the container and the Docker host, and use the host’s networking directly. Host networking is only available for swarm services on Docker 17.06 and higher.
- Overlay: Overlay networks connect multiple Docker daemons together and enable swarm services to communicate with each other.
1. Bridge Network
There are 2 types of bridge networks:
- Default bridge network: All containers started without a --network option are attached to the default bridge network. This can be a risk, as unrelated stacks/services/containers are then able to communicate.
- User defined bridge networks: Using a user-defined network provides a scoped network in which only containers attached to that network are able to communicate.
Containers can be attached and detached from user-defined networks on the fly.
One container can be part of multiple bridge networks.
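As a quick, hedged sketch of the practical difference (the names demo-net and web below are illustrative, not part of the walkthrough that follows): containers on a user-defined bridge can resolve each other by name, while containers on the default bridge cannot.
docker network create demo-net
docker run -d --name web --network demo-net nginx
docker run --rm --network demo-net alpine ping -c 1 web
– resolves "web" by name because both containers are on the user-defined bridge
docker run --rm alpine ping -c 1 web
– fails: the default bridge does not provide automatic name resolution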
Commands with use-case
- Create 2 containers and inspect them; the networking section of the output will show “bridge”, which is the default network. Ping from one container to the other to check connectivity.
docker network create my-net
– Create a new user-defined bridge network
docker inspect my-net
docker network connect my-net containerName/ID
– Connect the container to the new network
docker inspect containerName/ID
– Verify the new network details
- Go inside this container and check connectivity with the other container again; it should still work, because this container is still part of the default bridge network
docker network disconnect bridge containerName/ID
– Disconnect the container from the default bridge network
- Go inside this container and check connectivity with the other container again; it will not work now, because the two containers are on different bridge networks
docker create --network my-net imageName
– --network creates the container as part of the user-defined network
docker network rm my-net
– Remove the user-defined network
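Putting the steps above together, a minimal end-to-end sketch might look like this (the container names c1 and c2 are illustrative only):
docker run -dit --name c1 alpine
docker run -dit --name c2 alpine
docker network create my-net
docker network connect my-net c1
docker network connect my-net c2
docker exec c1 ping -c 2 c2
– works: both containers are attached to the user-defined network my-net
docker network disconnect my-net c2
docker exec c1 ping -c 2 c2
– fails: c2 is no longer on my-net (it is only on the default bridge)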
2. Host Network
If you use the host network mode for a container, that container’s network stack is not isolated from the Docker host (the container shares the host’s networking namespace), and the container does not get its own IP-address allocated. For instance, if you run a container which binds to port 80 and you use host networking, the container’s application is available on port 80 on the host’s IP address.
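As a quick hedged check on a Linux Docker host, a host-networked container sees the host’s own interfaces, since no separate network namespace is created for it:
docker run --rm --network host alpine ip addr
– prints the Docker host’s interfaces and addresses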
Commands with use-case
docker run -d --network host --name my_nginx nginx
– Create a container from the nginx image using the host network type
docker ps
– Verify the container; note that there is no port mapping
- Verify the nginx UI at dockerhost:80
docker inspect containerName/ID
– Inspect the network section, which shows “host” networking with no IP address assigned
ifconfig
– Shows the Docker host’s IP address on eth0
docker exec -it containerName/ID /bin/bash
hostname -i
– Verify that it is the same IP address as the Docker host
netstat -tulpn | grep :80
– Check the attached process, which is the container’s nginx
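A short hedged follow-up: because the container shares the host’s network namespace, the service is reachable on the host’s loopback address without any -p mapping, and the test container can then be cleaned up.
curl -I http://localhost:80
– run on the Docker host; returns the nginx welcome page headers even though no port was published
docker rm -f my_nginx
– remove the test container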
3. Overlay Network
The overlay network driver creates a distributed network among multiple Docker daemon hosts. This network sits on top of (overlays) the host-specific networks, allowing containers connected to it to communicate securely when encryption is enabled.
In order to create an overlay network, we first need to create a Docker swarm cluster.
Commands to create Docker swarm cluster
- Keep 2 SSH terminals open – one for the Docker manager node and another for the Docker worker node (make sure the two nodes can reach each other with ping, and that TCP port 2377, TCP and UDP port 7946, and UDP port 4789 are open between them)
- On the manager node, initialize the swarm.
docker swarm init
Make a note of the text that is printed, as it contains the token that you will use to join the worker node to the swarm.
- On the worker node, join the swarm.
docker swarm join --token <TOKEN> --advertise-addr <IP-ADDRESS-OF-WORKER-1> <IP-ADDRESS-OF-MANAGER>:2377
- On the manager node, list all the nodes
docker node ls
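If the join command is no longer on screen, it can be regenerated on the manager node at any time with the standard Docker CLI:
docker swarm join-token worker
– prints the full docker swarm join command, including the current token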
Commands for Overlay Network with use-case
- Keep 2 SSH terminals open – one for the Docker manager node and another for the Docker worker node
- Example 1
- On manager node
docker network create --driver=overlay --attachable test-net
docker network inspect test-net
docker run -it --name alpine1 --network test-net alpine
- On worker node
docker run -dit --name alpine2 --network test-net alpine
docker network ls
- Verify that test-net was created (and has the same NETWORK ID as test-net on the manager node)
- From the manager node, inside the interactive alpine1 session, ping the container running on the worker node
ping -c 2 alpine2
- The two containers communicate over the overlay network connecting the two hosts.
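A small hedged follow-up to Example 1: the same ping can be run without attaching to alpine1’s shell, and the test resources can then be cleaned up (the container and network names are the ones created above).
docker exec alpine1 ping -c 2 alpine2
– run on the manager node; exercises the overlay network from outside the interactive session
docker rm -f alpine1
– on the manager node
docker rm -f alpine2
– on the worker node
docker network rm test-net
– on the manager node, once no containers are attached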
- Example 2
- On manager node
docker network create -d overlay nginx-net
docker network inspect nginx-net
docker service create --name my-nginx --publish target=80,published=80 --replicas=5 --network nginx-net nginx
docker service ls
docker ps
- On worker node
docker ps
docker network ls
- Verify that the same network exists as was created on the manager node
- Check the nginx web server UI using the public IP of either the manager or the worker node; it should respond with the nginx welcome page
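As a hedged follow-up, the service placement can be checked and the example cleaned up from the manager node using standard Docker CLI commands:
docker service ps my-nginx
– shows which nodes are running the 5 replicas
docker service rm my-nginx
– remove the service
docker network rm nginx-net
– remove the overlay network once the service is gone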