Tuesday, February 16, 2016

[ Doc ] User Guide - Network Configuration - Understand Docker container networks

Source From Here 
Introduction 
To build web applications that act in concert but do so securely, use the Docker networks feature. Networks, by definition, provide complete isolation for containers, so it is important to have control over the networks your applications run on. Docker container networks give you that control. This section provides an overview of the default networking behavior that Docker Engine delivers natively. It describes the types of networks created by default and how to create your own, user-defined networks. It also describes the resources required to create networks on a single host or across a cluster of hosts. 

Default Networks 
When you install Docker, it creates three networks automatically. You can list these networks using the docker network ls command: 
# docker network ls
NETWORK ID          NAME                DRIVER
802823da51f6        none                null
d013e91e2d69        host                host
a59835aedeb6        bridge              bridge

Historically, these three networks are part of Docker’s implementation. When you run a container, you can use the --net flag to specify which network you want to run it on. These three networks are still available to you. The bridge network represents the docker0 network present in all Docker installations. Unless you specify otherwise with the docker run --net=<NETWORK> option, the Docker daemon connects containers to this network by default. You can see this bridge as part of a host’s network stack by using the ifconfig or ip commands on the host: 
# ip addr show docker0
3: docker0: mtu 1500 qdisc noqueue state DOWN
link/ether 02:42:ff:5d:2b:07 brd ff:ff:ff:ff:ff:ff
inet 172.17.42.1/16 scope global docker0
valid_lft forever preferred_lft forever

The none network adds a container to a container-specific network stack. That container lacks a network interface. Attaching to such a container and looking at its stack you see this: 
# docker run -it --net=none docker.io/ubuntu bash
root@d280d712eb3f:/# ifconfig
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)

Note: You can detach from the container and leave it running with CTRL-p CTRL-q

The host network adds a container on the host’s network stack. You’ll find the network configuration inside the container is identical to the host. With the exception of the bridge network, you really don’t need to interact with these default networks. While you can list and inspect them, you cannot remove them. They are required by your Docker installation. However, you can add your own user-defined networks, and these you can remove when you no longer need them. Before you learn more about creating your own networks, it is worth looking at the default network a bit. 
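
For example, you can check this by starting a throwaway container on the host network and listing its interfaces (the busybox image is only illustrative):
# docker run --rm --net=host busybox ifconfig // the interface list matches the host's, including docker0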

The default bridge network in detail 
The default bridge network is present on all Docker hosts. 
# docker network inspect bridge
[
    {
        "Name": "bridge",
        "Id": "a59835aedeb632147c1588bb2d4cf1226bcdc52e4efd1d726135f214954916b8",
        "Scope": "local",
        "Driver": "bridge",
        "IPAM": {
            "Driver": "default",
            "Config": [
                {
                    "Subnet": "172.17.42.1/16",
                    "Gateway": "172.17.42.1"
                }
            ]
        },
        "Containers": {},
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        }
    }
]

The Engine automatically creates a Subnet and Gateway for the network. The docker run command automatically adds new containers to this network. 
# docker run -itd --name=container1 busybox
# docker run -itd --name=container2 busybox
# docker ps -q
66f1e67da7dd
2c380f256fde

Inspecting the bridge network again after starting two containers shows both newly launched containers in the network. Their IDs show up under the network's Containers key: 
# docker network inspect bridge
[
    {
        "Name": "bridge",
        "Id": "a59835aedeb632147c1588bb2d4cf1226bcdc52e4efd1d726135f214954916b8",
        "Scope": "local",
        "Driver": "bridge",
        "IPAM": {
            "Driver": "default",
            "Config": [
                {
                    "Subnet": "172.17.42.1/16",
                    "Gateway": "172.17.42.1"
                }
            ]
        },
        "Containers": {
            "2c380f256fde8a2d18ae4f8798adeb569debb3ff732adf20fd78be032ed7371e": {
                "EndpointID": "7de1260cad7226ad2f24422093adf2920910efaba4d2a07449807262a614b20e",
                "MacAddress": "02:42:ac:11:00:01",
                "IPv4Address": "172.17.0.1/16",
                "IPv6Address": ""
            },
            "66f1e67da7dd60719d8db09c1546ff4c7c550c152ce96772f1ab30029a751b58": {
                "EndpointID": "6f32e61934467348c367b238694c805a3da9a6e65265c0895ac421b33b17ec03",
                "MacAddress": "02:42:ac:11:00:02",
                "IPv4Address": "172.17.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        }
    }
]

The docker network inspect command above shows all the connected containers and their network resources on a given network. Containers in this default network can communicate with each other using IP addresses. Docker does not support automatic service discovery on the default bridge network. If you want containers to communicate by name on this default bridge network, you must connect them via the legacy docker run --link option (--link=[]: Add link to another container). 
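
For example, name-based communication through a legacy link might look like this (the container names are hypothetical):
# docker run -itd --name=container_db busybox
# docker run -it --link=container_db:db busybox ping -w3 db // "db" resolves via an /etc/hosts entry created by the link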

You can attach to a running container and investigate its configuration: 
# docker attach container1
/ # ifconfig // Inside container
eth0 Link encap:Ethernet HWaddr 02:42:AC:11:00:01
inet addr:172.17.0.1 Bcast:0.0.0.0 Mask:255.255.0.0
inet6 addr: fe80::42:acff:fe11:1/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:16 errors:0 dropped:0 overruns:0 frame:0
TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:1296 (1.2 KiB) TX bytes:648 (648.0 B)
...

Then use ping for about 3 seconds to test the connectivity of the containers on this bridge network. 
/ # ping -w3 172.17.0.2
PING 172.17.0.2 (172.17.0.2): 56 data bytes
64 bytes from 172.17.0.2: seq=0 ttl=64 time=0.080 ms
64 bytes from 172.17.0.2: seq=1 ttl=64 time=0.056 ms
64 bytes from 172.17.0.2: seq=2 ttl=64 time=0.072 ms

Finally, use the cat command to check the container1 network configuration: 
/ # cat /etc/hosts
172.17.0.1      2c380f256fde
127.0.0.1       localhost
::1     localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

To detach from container1 and leave it running, use CTRL-p CTRL-q. Then, attach to container2 and repeat these three commands. The default docker0 bridge network supports the use of port mapping and docker run --link to allow communications between containers in the docker0 network. These techniques are cumbersome to set up and prone to error. While they are still available to you as techniques, it is better to avoid them and define your own bridge networks instead. 
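
For example, repeating those three commands from container2 (the addresses follow the inspect output above):
# docker attach container2
/ # ifconfig // expect eth0 at 172.17.0.2
/ # ping -w3 172.17.0.1
/ # cat /etc/hosts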

User-defined networks 
You can create your own user-defined networks that better isolate containers. Docker provides some default network drivers for creating these networks. You can create a new bridge network or overlay network. You can also create a network plugin or remote network written to your own specifications. You can create multiple networks, and you can add containers to more than one network. Containers can only communicate within networks, not across them. A container attached to two networks can communicate with member containers in either network. 
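
For example, a running container can be attached to a second network with docker network connect (the backend network name is only an example):
# docker network create backend
# docker network connect backend container1 // container1 is now a member of both the default bridge and backend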

The next few sections describe each of Docker’s built-in network drivers in greater detail. 

A bridge network 
The easiest user-defined network to create is a bridge network. This network is similar to the historical, default docker0 network. It adds some new features, while some old features are no longer available. 
# docker network create --driver bridge isolated_new
c4d89e334866dcba235a81cf4f9919079ec742814f3e202b722df88e1972b6ed
# docker network inspect isolated_new
[
    {
        "Name": "isolated_new",
        "Id": "c4d89e334866dcba235a81cf4f9919079ec742814f3e202b722df88e1972b6ed",
        "Scope": "local",
        "Driver": "bridge",
        "IPAM": {
            "Driver": "default",
            "Config": [
                {}
            ]
        },
        "Containers": {},
        "Options": {}
    }
]
# docker network ls
NETWORK ID          NAME                DRIVER
802823da51f6        none                null
d013e91e2d69        host                host
a59835aedeb6        bridge              bridge
c4d89e334866        isolated_new        bridge

After you create the network, you can launch containers on it using the docker run --net=<NETWORK> option. 
# docker run --net=isolated_new -itd --name=container3 busybox
eb1b3d8120f9407d753f9ac616934e4088bb89c1ce7ef620a9146d17a365a90e
# docker network inspect isolated_new
[
    {
        "Name": "isolated_new",
        "Id": "c4d89e334866dcba235a81cf4f9919079ec742814f3e202b722df88e1972b6ed",
        "Scope": "local",
        "Driver": "bridge",
        "IPAM": {
            "Driver": "default",
            "Config": [
                {}
            ]
        },
        "Containers": {
            "eb1b3d8120f9407d753f9ac616934e4088bb89c1ce7ef620a9146d17a365a90e": {
                "EndpointID": "3ae84483662e583f0ce00ee12ca80cb37dcfac45bd2319d64793b3c2c7c413bf",
                "MacAddress": "02:42:ac:12:00:02",
                "IPv4Address": "172.18.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {}
    }
]


The containers you launch into this network must reside on the same Docker host. Each container in the network can immediately communicate with other containers in the network. The network itself, though, isolates the containers from external networks. 
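
For example, a second container launched on the same network can reach container3 right away, and containers on a user-defined network can also be resolved by name (container4 is a hypothetical name):
# docker run --net=isolated_new -it --name=container4 busybox
/ # ping -w3 container3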

Within a user-defined bridge network, linking is not supported. You can expose and publish container ports on containers in this network. This is useful if you want to make a portion of the bridge network available to an outside network. 
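
For instance, to expose a web server in this network to the outside world (the image and port values are only illustrative):
# docker run --net=isolated_new -d -p 8080:80 --name=web nginx // host port 8080 forwards to container port 80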

A bridge network is useful when you want to run a relatively small network on a single host. You can, however, create significantly larger networks by creating an overlay network. 

An overlay network 
Docker’s overlay network driver supports multi-host networking natively out-of-the-box. This support is accomplished with the help of libnetwork, a built-in VXLAN-based overlay network driver, and Docker’s libkv library. The overlay network requires a valid key-value store service. Currently, Docker’s libkv supports Consul, Etcd, and ZooKeeper (Distributed store). Before creating a network you must install and configure your chosen key-value store service. The Docker hosts that you intend to network and the service must be able to communicate. 

Each host in the network must run a Docker Engine instance. The easiest way to provision the hosts is with Docker Machine. 
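
For example, two hosts might be provisioned like this (the virtualbox driver and host names are only examples):
# docker-machine create -d virtualbox host1
# docker-machine create -d virtualbox host2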

You should open the following ports between each of your hosts:

Protocol        Port    Description
udp             4789    Data plane (VXLAN)
tcp/udp         7946    Control plane

Your key-value store service may require additional ports. Check your vendor’s documentation and open any required ports. Once you have several machines provisioned, you can use Docker Swarm to quickly form them into a swarm, which includes a discovery service as well. To create an overlay network, you configure options on the daemon on each Docker Engine for use with overlay networks. There are three options to set:

--cluster-store=PROVIDER://URL describes the location of the KV service
--cluster-advertise=HOST_IP|HOST_IFACE:PORT is the IP address or interface of the host used for clustering
--cluster-store-opt=KEY-VALUE OPTIONS holds options such as a TLS certificate or tuning discovery timers
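
Put together, the daemon on each host might be started like this (assuming a Consul key-value store at the given address; the values are illustrative):
# docker daemon --cluster-store=consul://192.168.99.100:8500 --cluster-advertise=eth0:2376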

Create an overlay network on one of the machines in the Swarm. 
# docker network create --driver overlay my-multi-host-network

This results in a single network spanning multiple hosts. An overlay network provides complete isolation for the containers. 

Then, on each host, launch containers making sure to specify the network name. 

# docker run -itd --net=my-multi-host-network busybox

Once connected, each container has access to all the containers in the network regardless of which Docker host the container was launched on. 
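
For example, with a container named web1 started on one host, a container on another Docker host in the same network can reach it (the names are hypothetical):
# docker run -itd --net=my-multi-host-network --name=web1 busybox // on host1
# docker run --net=my-multi-host-network -it busybox ping -w3 web1 // on host2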

If you would like to try this for yourself, see the Getting started for overlay networks guide.

Custom network plugin 
If you like, you can write your own network driver plugin. A network driver plugin makes use of Docker’s plugin infrastructure. In this infrastructure, a plugin is a process running on the same Docker host as the Docker daemon. Network plugins follow the same restrictions and installation rules as other plugins. All plugins make use of the plugin API. They have a lifecycle that encompasses installation, starting, stopping and activation. Once you have created and installed a custom network driver, you use it like the built-in network drivers. For example: 

# docker network create --driver weave mynet

You can inspect it, add containers to and remove them from it, and so forth. Of course, different plugins may make use of different technologies or frameworks. Custom networks can include features not present in Docker’s default networks. For more information on writing plugins, see Extending Docker and Writing a network driver plugin. 

Docker embedded DNS server 
The Docker daemon runs an embedded DNS server to provide automatic service discovery for containers connected to user-defined networks. Name resolution requests from the containers are handled first by the embedded DNS server. If the embedded DNS server is unable to resolve the request, it is forwarded to any external DNS servers configured for the container. To facilitate this, when the container is created, only the embedded DNS server, reachable at 127.0.0.11, is listed in the container’s resolv.conf file. More information can be found in the embedded DNS server in user-defined networks section. 
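
You can verify this from inside a container attached to a user-defined network (requires Engine 1.10 or later; the network name comes from the earlier example):
# docker run --net=isolated_new -it busybox cat /etc/resolv.conf // shows only nameserver 127.0.0.11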

Links 
Before the Docker network feature, you could use the Docker link feature to allow containers to discover each other. With the introduction of Docker networks, containers can be discovered by name automatically. You can still create links, but they behave differently when used in the default docker0 bridge network compared to user-defined networks. For more information, refer to Legacy Links for the link feature in the default bridge network, and to linking containers in user-defined networks for links functionality in user-defined networks. 

Related information 
Work with network commands 
Get started with multi-host networking 
Managing Data in Containers 
Docker Machine overview 
Docker Swarm overview 
Investigate the LibNetwork project
