Docker Networks

One of the reasons Docker containers and services are so powerful is that you can connect them together, or connect them to non-Docker workloads.

Docker’s networking subsystem is pluggable, using drivers. Several drivers exist by default and provide core networking functionality:

  • Bridge: The default network driver. If you don’t specify a driver, this is the type of network you are creating. Bridge networks are usually used when your applications run in standalone containers that need to communicate.
  • Host: For standalone containers, removes network isolation between the container and the Docker host, and uses the host’s networking directly. For swarm services, host networking is only available on Docker 17.06 and higher. See the Host Network section below.
  • Overlay: Overlay networks connect multiple Docker daemons together and enable swarm services to communicate with each other.
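
As a quick check, listing the networks on a fresh Docker host shows one entry per built-in driver. A sketch of what the output typically looks like (the IDs here are placeholders; yours will differ):

    docker network ls
    # NETWORK ID     NAME      DRIVER    SCOPE
    # a1b2c3d4e5f6   bridge    bridge    local
    # b2c3d4e5f6a1   host      host      local
    # c3d4e5f6a1b2   none      null      local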

1. Bridge Network

There are two types of bridge networks:

  • Default bridge network: All containers started without a --network option are attached to the default bridge network. This can be a risk, as unrelated stacks/services/containers are then able to communicate.
  • User defined bridge networks: Using a user-defined network provides a scoped network in which only containers attached to that network are able to communicate.

Containers can be attached and detached from user-defined networks on the fly.

A single container can be part of multiple bridge networks.

Commands with use-case

  • Create two containers and inspect them; in the networking section you will see “bridge”, the default network. Ping from one container to the other to check connectivity (a consolidated walkthrough follows this list)
  • docker network create my-net – Create new user defined bridge network
  • docker network inspect my-net – inspect the new network
  • docker network connect my-net containerName/ID – connect container to new network
  • docker inspect containerName/ID – to verify new network details
  • Go inside this container and check connectivity with the other container again; it should still work, because this container is still part of the default bridge network
  • docker network disconnect bridge containerName/ID – disconnect the container from the default bridge network
  • Go inside this container and check connectivity with the other container again; it will fail now, because the two containers are on different bridge networks
  • docker create --network my-net imageName – --network creates the container directly on the user-defined network
  • docker network rm my-net
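
Putting the steps above together, a minimal end-to-end session might look like the following. The names alpine1, alpine2, and my-net are illustrative, and the 172.17.0.x address is a placeholder for whatever docker inspect reports on your host:

    # Start two containers on the default bridge network
    docker run -dit --name alpine1 alpine
    docker run -dit --name alpine2 alpine

    # Look up alpine2's IP on the default bridge (e.g. 172.17.0.3)
    docker inspect alpine2 --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}'

    # On the default bridge, containers reach each other by IP only
    docker exec -it alpine1 ping -c 2 172.17.0.3

    # Create a user-defined bridge and attach alpine1 to it on the fly
    docker network create my-net
    docker network connect my-net alpine1

    # Detach alpine1 from the default bridge ...
    docker network disconnect bridge alpine1

    # ... and the ping now fails: the containers sit on different networks
    docker exec -it alpine1 ping -c 2 172.17.0.3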

2. Host Network

If you use the host network mode for a container, that container’s network stack is not isolated from the Docker host (the container shares the host’s networking namespace), and the container does not get its own IP address allocated. For instance, if you run a container which binds to port 80 and you use host networking, the container’s application is available on port 80 on the host’s IP address.

Commands with use-case

  • docker run -d --network host --name my_nginx nginx – create a container using network type host for the nginx image
  • docker ps – verify the container; note that there is no port mapping
  • Verify the nginx UI at dockerhost:80
  • docker inspect containerName/ID – inspect the network section, which shows “host” networking with no IP address assigned
  • ifconfig – note the Docker host’s IP address on eth0
  • docker exec -it containerName/ID /bin/bash
    • hostname -i – verify it reports the same IP address as the Docker host
  • netstat -tulpn | grep :80 – check which process holds port 80; it is the container’s nginx
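
Condensed into one session, the verification might look like this (my_nginx matches the container created above; the IP reported by hostname -i will be your Docker host’s address):

    # Run nginx directly on the host's network stack
    docker run -d --network host --name my_nginx nginx

    # PORTS column stays empty: there is no port mapping to do
    docker ps

    # nginx answers on the host's own port 80
    curl http://localhost:80

    # Inside the container, the reported IP is the Docker host's IP
    docker exec -it my_nginx hostname -i

    # On the host, port 80 is held by the container's nginx process
    sudo netstat -tulpn | grep :80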

3. Overlay Network

The overlay network driver creates a distributed network among multiple Docker daemon hosts. This network sits on top of (overlays) the host-specific networks, allowing containers connected to it to communicate securely when encryption is enabled.

In order to create an overlay network, we first need to create a Docker swarm cluster.

Commands to create Docker swarm cluster

  • Keep two SSH terminals open – one for the Docker manager and another for the Docker worker node (make sure the two nodes can reach each other over ping, and that TCP port 2377, TCP and UDP port 7946, and UDP port 4789 are open)
  • On manager initialize the swarm.
    docker swarm init
    Make a note of the text that is printed, as this contains the token that you will use to join worker node to the swarm.
  • On worker node, join the swarm.
    docker swarm join --token <TOKEN> --advertise-addr <IP-ADDRESS-OF-WORKER-1> <IP-ADDRESS-OF-MANAGER>:2377

  • On manager, list all the nodes
    docker node ls
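
A sketch of the whole exchange, using placeholder IPs (192.168.1.10 for the manager, 192.168.1.11 for the worker):

    # On the manager (192.168.1.10)
    docker swarm init --advertise-addr 192.168.1.10
    # The output ends with a ready-made 'docker swarm join' command
    # containing the token for worker nodes

    # On the worker (192.168.1.11), paste that command
    docker swarm join --token <TOKEN> 192.168.1.10:2377

    # Back on the manager: both nodes appear, the manager marked as Leader
    docker node ls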

Commands for Overlay Network with use-case

  • Keep two SSH terminals open – one for the Docker manager and another for the Docker worker node
  • Example 1
    • On manager node
      docker network create --driver=overlay --attachable test-net

      docker network inspect test-net

      docker run -it --name alpine1 --network test-net alpine
    • On worker node
      docker run -dit --name alpine2 --network test-net alpine

      docker network ls - verify that test-net was created (and has the same NETWORK ID as test-net on the manager node)
    • From manager node
      ping -c 2 alpine2 - run this inside alpine1; the two containers communicate over the overlay network connecting the two hosts.
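
The same walkthrough as a compact session; run each command on the node named in the comment:

    # Manager: create an attachable overlay network
    docker network create --driver=overlay --attachable test-net

    # Manager: start an interactive container attached to test-net
    docker run -it --name alpine1 --network test-net alpine

    # Worker: start a second container on test-net; the network is
    # created on the worker the moment a container attaches to it
    docker run -dit --name alpine2 --network test-net alpine
    docker network ls    # test-net shows up with the same NETWORK ID

    # Manager: inside the alpine1 shell, reach alpine2 by name
    ping -c 2 alpine2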

  • Example 2
    • On manager node
      docker network create -d overlay nginx-net

      docker network inspect nginx-net

      docker service create --name my-nginx --publish target=80,published=80 --replicas=5 --network nginx-net nginx

      docker service ls

      docker ps

    • On worker node
      docker ps

      docker network ls - verify the same network exists here as the one created on the manager node
    • Check the nginx web server UI using the public IP of either the manager or the worker node; it should respond with the nginx welcome page, as shown below
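
Because swarm routes the published port through its ingress routing mesh, any node in the cluster answers on port 80, not only the nodes running a replica. With the placeholder IPs from before:

    # From any machine that can reach the cluster
    curl http://192.168.1.10:80    # manager node
    curl http://192.168.1.11:80    # worker node
    # Both return the nginx welcome page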
