A project usually consists of multiple container instances, and those instances usually do not all run on the same machine. Deploying a project with containers across several hosts therefore comes down to one problem: how containers on different hosts communicate with each other. This walkthrough solves it with an etcd-backed Docker overlay network.
- Role assignment:
| Component | Host(s)                        |
| --------- | ------------------------------ |
| etcd      | 192.168.30.128, 192.168.30.129 |
| flask     | 192.168.30.128                 |
| redis     | 192.168.30.129                 |
- Environment preparation (both hosts):
```
# systemctl stop firewalld && systemctl disable firewalld
# sed -i 's/=enforcing/=disabled/g' /etc/selinux/config && setenforce 0
```
- Install Docker (both hosts):
```
# curl http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -o /etc/yum.repos.d/docker.repo
# yum makecache fast
# yum install -y docker-ce
# systemctl start docker && systemctl enable docker
```
- Set up the etcd cluster:
192.168.30.128
```
# cd /software
# wget https://github.com/etcd-io/etcd/releases/download/v3.3.17/etcd-v3.3.17-linux-amd64.tar.gz
# tar xf etcd-v3.3.17-linux-amd64.tar.gz
# cd etcd-v3.3.17-linux-amd64/
# nohup ./etcd --name docker-node1 --initial-advertise-peer-urls http://192.168.30.128:2380 \
    --listen-peer-urls http://192.168.30.128:2380 \
    --listen-client-urls http://192.168.30.128:2379,http://127.0.0.1:2379 \
    --advertise-client-urls http://192.168.30.128:2379 \
    --initial-cluster-token etcd-cluster \
    --initial-cluster docker-node1=http://192.168.30.128:2380,docker-node2=http://192.168.30.129:2380 \
    --initial-cluster-state new &
```
192.168.30.129
```
# cd /software
# wget https://github.com/etcd-io/etcd/releases/download/v3.3.17/etcd-v3.3.17-linux-amd64.tar.gz
# tar xf etcd-v3.3.17-linux-amd64.tar.gz
# cd etcd-v3.3.17-linux-amd64/
# nohup ./etcd --name docker-node2 --initial-advertise-peer-urls http://192.168.30.129:2380 \
    --listen-peer-urls http://192.168.30.129:2380 \
    --listen-client-urls http://192.168.30.129:2379,http://127.0.0.1:2379 \
    --advertise-client-urls http://192.168.30.129:2379 \
    --initial-cluster-token etcd-cluster \
    --initial-cluster docker-node1=http://192.168.30.128:2380,docker-node2=http://192.168.30.129:2380 \
    --initial-cluster-state new &
```
Check the etcd cluster status (can be run on either machine):
```
# ./etcdctl cluster-health
member 1a3a8b811f89111 is healthy: got healthy result from http://192.168.30.128:2379
member deb21af19c6dc76c is healthy: got healthy result from http://192.168.30.129:2379
cluster is healthy
```
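As an optional sanity check (a sketch, not part of the captured session above), a key written through one member should be readable from the other, which confirms the two nodes really replicate data. The `/test` key is just a throwaway example; the etcdctl bundled with etcd 3.3 speaks the v2 API by default, as the `cluster-health` command above already relies on:

```
# On 192.168.30.128: write a throwaway key
./etcdctl set /test "hello"

# On 192.168.30.129: the same key should come back, proving replication works
./etcdctl get /test
```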
- Restart Docker with the cluster store options:
192.168.30.128
```
# systemctl stop docker
# /usr/bin/dockerd -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock --cluster-store=etcd://192.168.30.128:2379 --cluster-advertise=192.168.30.128:2375 &
```
192.168.30.129
```
# systemctl stop docker
# /usr/bin/dockerd -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock --cluster-store=etcd://192.168.30.129:2379 --cluster-advertise=192.168.30.129:2375 &
```
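Launching dockerd by hand like this does not survive a reboot. A more durable alternative (a sketch, assuming a Docker release that still supports `--cluster-store`, i.e. before 20.10) is to put the same settings into /etc/docker/daemon.json. Note that if the systemd unit file already passes `-H` to dockerd, the `hosts` key below conflicts with it and one of the two has to be dropped:

```
# /etc/docker/daemon.json on 192.168.30.128 (use 192.168.30.129 on the other node)
cat > /etc/docker/daemon.json <<'EOF'
{
  "hosts": ["tcp://0.0.0.0:2375", "unix:///var/run/docker.sock"],
  "cluster-store": "etcd://192.168.30.128:2379",
  "cluster-advertise": "192.168.30.128:2375"
}
EOF
systemctl restart docker
```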
- Create the overlay network:
On 192.168.30.128, create an overlay network named demo:
```
# docker network create -d overlay demo
41dad4bca4c9c8cc30ccd36861a3cf5f2ad7e86b774136bb4878cd54ffbf6e0b
# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
66e5abd85e23        bridge              bridge              local
41dad4bca4c9        demo                overlay             global
535808221d2e        host                host                local
2addad8d8857        none                null                local
# docker network inspect demo
[
    {
        "Name": "demo",
        "Id": "41dad4bca4c9c8cc30ccd36861a3cf5f2ad7e86b774136bb4878cd54ffbf6e0b",
        "Created": "2019-10-22T13:44:32.228005687+08:00",
        "Scope": "global",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "10.0.0.0/24",
                    "Gateway": "10.0.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {},
        "Labels": {}
    }
]
```
On 192.168.30.129, the demo overlay network is synchronized automatically:
```
# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
9ef262030586        bridge              bridge              local
41dad4bca4c9        demo                overlay             global
f9abc9c0c99a        host                host                local
596441310cb9        none                null                local
```
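Docker picked the 10.0.0.0/24 subnet automatically. If you need to control the addressing (for example to avoid a clash with an existing network), the subnet and gateway can be given explicitly when the overlay is created. This is only an illustration; `demo2` is a hypothetical network name, not one used in the rest of this walkthrough:

```
# Create an overlay with an explicit subnet instead of the auto-assigned one
docker network create -d overlay --subnet 10.0.1.0/24 --gateway 10.0.1.1 demo2
```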
- Test communication between containers on different hosts:
192.168.30.128
```
# docker run -d --name test1 --network demo busybox /bin/sh -c "while true; do sleep 3600; done"
# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
a7facfa39738        busybox             "/bin/sh -c 'while t…"   14 seconds ago      Up 11 seconds                           test1
```
192.168.30.129
```
# docker run -d --name test2 --network demo busybox /bin/sh -c "while true; do sleep 3600; done"
# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
e82998cf7e1a        busybox             "/bin/sh -c 'while t…"   11 seconds ago      Up 9 seconds                            test2
```
192.168.30.128
```
# docker exec -it test1 /bin/sh
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:0A:00:00:02
          inet addr:10.0.0.2  Bcast:10.0.0.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1450  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

eth1      Link encap:Ethernet  HWaddr 02:42:AC:12:00:02
          inet addr:172.18.0.2  Bcast:172.18.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:13 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1046 (1.0 KiB)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

/ # ping test2          # ping the container by name; it is resolved automatically
PING test2 (10.0.0.3): 56 data bytes
64 bytes from 10.0.0.3: seq=0 ttl=64 time=14.997 ms
64 bytes from 10.0.0.3: seq=1 ttl=64 time=1.131 ms
64 bytes from 10.0.0.3: seq=2 ttl=64 time=0.756 ms
64 bytes from 10.0.0.3: seq=3 ttl=64 time=0.998 ms
64 bytes from 10.0.0.3: seq=4 ttl=64 time=1.123 ms
64 bytes from 10.0.0.3: seq=5 ttl=64 time=1.045 ms
64 bytes from 10.0.0.3: seq=6 ttl=64 time=1.464 ms
^C
--- test2 ping statistics ---
7 packets transmitted, 7 packets received, 0% packet loss
round-trip min/avg/max = 0.756/3.073/14.997 ms
```
192.168.30.129
```
# docker exec -it test2 /bin/sh
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:0A:00:00:03
          inet addr:10.0.0.3  Bcast:10.0.0.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1450  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:728 (728.0 B)  TX bytes:728 (728.0 B)

eth1      Link encap:Ethernet  HWaddr 02:42:AC:12:00:02
          inet addr:172.18.0.2  Bcast:172.18.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:13 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1046 (1.0 KiB)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

/ # ping test1
PING test1 (10.0.0.2): 56 data bytes
64 bytes from 10.0.0.2: seq=0 ttl=64 time=4.060 ms
64 bytes from 10.0.0.2: seq=1 ttl=64 time=0.636 ms
64 bytes from 10.0.0.2: seq=2 ttl=64 time=1.101 ms
64 bytes from 10.0.0.2: seq=3 ttl=64 time=0.597 ms
64 bytes from 10.0.0.2: seq=4 ttl=64 time=0.769 ms
64 bytes from 10.0.0.2: seq=5 ttl=64 time=1.372 ms
64 bytes from 10.0.0.2: seq=6 ttl=64 time=1.109 ms
64 bytes from 10.0.0.2: seq=7 ttl=64 time=1.106 ms
^C
--- test1 ping statistics ---
8 packets transmitted, 8 packets received, 0% packet loss
round-trip min/avg/max = 0.597/1.343/4.060 ms
```
As shown above, the containers on the two different hosts can ping each other.
Also note that within an overlay network, container names must be unique even across hosts: 192.168.30.128 and 192.168.30.129 cannot both run a container with the same name at the same time; attempting to do so fails immediately with an error.
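A quick way to see which names are already registered on the overlay, and would therefore collide, is to list its endpoints from either host. This is only a sketch; the output depends on what is running at the time (here it should report test1 and test2):

```
# List the endpoints currently registered on the demo network
docker network inspect -f '{{range .Containers}}{{.Name}} {{end}}' demo
```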
- Deploy the flask-redis project:
192.168.30.129
```
# docker run -d --name redis --network demo redis
# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED              STATUS              PORTS               NAMES
7c720195b5f3        redis               "docker-entrypoint.s…"   About a minute ago   Up About a minute   6379/tcp            redis
e82998cf7e1a        busybox             "/bin/sh -c 'while t…"   32 minutes ago       Up 32 minutes                           test2
```
The redis container is the backend database; it does not need to publish any port to the outside world, it only has to be reachable by the other containers on the overlay network.
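One way to verify that split (a sketch to run on 192.168.30.129, assuming the test2 busybox container from the previous step is still running): the name `redis` answers from inside the overlay, while `docker ps` shows 6379/tcp without any host mapping:

```
# From a container on the demo network the redis name resolves and is reachable
docker exec test2 ping -c 2 redis

# On the host, redis exposes 6379/tcp but publishes nothing (no 0.0.0.0:... mapping)
docker ps --filter name=redis --format '{{.Names}}: {{.Ports}}'
```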
192.168.30.128
```
# cd /software/
# mkdir flask && cd flask
# vim app.py
from flask import Flask
from redis import Redis
import os
import socket

app = Flask(__name__)
redis = Redis(host=os.environ.get('REDIS_HOST', '127.0.0.1'), port=6379)

@app.route('/')
def hello():
    redis.incr('hits')
    return 'Hello Container World! I have been seen %s times and my hostname is %s.\n' % (redis.get('hits'), socket.gethostname())

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000, debug=True)

# vim Dockerfile
FROM python:2.7
LABEL maintainer="LZX admin@lzxlinux.com"
COPY . /app
WORKDIR /app
RUN pip install flask redis
EXPOSE 5000
CMD [ "python", "app.py" ]

# docker build -t flask-redis -f Dockerfile .
# docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
flask-redis         latest              5c87dd098394        12 seconds ago      891MB
python              2.7                 6c7366bb93f6        3 days ago          886MB
centos              latest              0f3e07c0138f        2 weeks ago         220MB
busybox             latest              19485c79a9bb        6 weeks ago         1.22MB

# docker run -d -p 5000:5000 --name flask --network demo -e REDIS_HOST=redis flask-redis
# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                    NAMES
ffef4e430ae3        flask-redis         "python app.py"          20 seconds ago      Up 18 seconds       0.0.0.0:5000->5000/tcp   flask
a7facfa39738        busybox             "/bin/sh -c 'while t…"   36 minutes ago      Up 37 minutes                                test1

# netstat -lntp | grep 5000
tcp6       0      0 :::5000                 :::*                    LISTEN      5551/docker-proxy

# docker network inspect demo
[
    {
        "Name": "demo",
        "Id": "41dad4bca4c9c8cc30ccd36861a3cf5f2ad7e86b774136bb4878cd54ffbf6e0b",
        "Created": "2019-10-22T13:44:32.228005687+08:00",
        "Scope": "global",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "10.0.0.0/24",
                    "Gateway": "10.0.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "a7facfa3973828147ff03c24c5612df920620ad154076ce264b0faab5b2e5517": {
                "Name": "test1",
                "EndpointID": "6be2066d6b6979ec0d1c7f69f38b9029062d0e852b27559f2a95772e7b8724ea",
                "MacAddress": "02:42:0a:00:00:05",
                "IPv4Address": "10.0.0.5/24",
                "IPv6Address": ""
            },
            "ep-208303f322d93949c27accb54cf8bbc3baf5042dcb5bbcad5e8d3fe74d231f42": {
                "Name": "redis",
                "EndpointID": "208303f322d93949c27accb54cf8bbc3baf5042dcb5bbcad5e8d3fe74d231f42",
                "MacAddress": "02:42:0a:00:00:02",
                "IPv4Address": "10.0.0.2/24",
                "IPv6Address": ""
            },
            "ep-83247c6a39ae8b63df6d0cafd2eb3e6b46b17f74fc98e194a911c4ea3b149eee": {
                "Name": "test2",
                "EndpointID": "83247c6a39ae8b63df6d0cafd2eb3e6b46b17f74fc98e194a911c4ea3b149eee",
                "MacAddress": "02:42:0a:00:00:03",
                "IPv4Address": "10.0.0.3/24",
                "IPv6Address": ""
            },
            "ffef4e430ae3e66c4eb1007796a9c85754c50660adba3c38eb7c3ca61f8ecfd8": {
                "Name": "flask",
                "EndpointID": "4eda8a036afd351ad9b36a2bad6fc4f22e4dbf7e4b4f7ed2a747a92786b3def2",
                "MacAddress": "02:42:0a:00:00:04",
                "IPv4Address": "10.0.0.4/24",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]
```
`docker run -e REDIS_HOST=redis` sets the environment variable `REDIS_HOST` to `redis`, so the flask application connects to the container named redis: inside the overlay network, the container name resolves to the redis container's address.
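To confirm the wiring (a sketch to run on 192.168.30.128, where the flask container lives): the variable should be present in the container's environment, and the overlay network's embedded DNS should resolve the name `redis` to the address shown for the redis endpoint in the inspect output above (10.0.0.2):

```
# The environment variable passed with -e is visible inside the container
docker exec flask env | grep REDIS_HOST

# The name "redis" resolves via the overlay's embedded DNS
docker exec flask python -c "import socket; print(socket.gethostbyname('redis'))"
```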
- Access 192.168.30.128:5000:
Open it in a browser and refresh the page a few times (screenshots omitted); the hit counter increases with every request. The same check from the command line:
```
# curl 192.168.30.128:5000
Hello Container World! I have been seen 3 times and my hostname is ffef4e430ae3.
```
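The counter itself lives in redis on 192.168.30.129. Since the official redis image bundles redis-cli, the value can be read back directly to confirm it is the same `hits` key the flask app increments (a sketch, assuming the container is still named `redis` as above):

```
# Read the hit counter straight out of redis on 192.168.30.129
docker exec redis redis-cli get hits
```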
As you can see, the hit count increments with each request. Any containers attached to this overlay network can communicate with each other, even when they are different containers on different hosts.
This demo uses only two hosts to show how a project can be deployed with containers across machines; real-world deployments are considerably more complex and varied.