MicroK8s, MetalLB, ingress-nginx - how do I route external traffic?
Kubernetes / Ubuntu newbie here!
I'm setting up a k8s cluster on a single Raspberry Pi (hoping for more in the future). I'm running MicroK8s v1.18.8 on Ubuntu Server 20.04.1 LTS (GNU/Linux 5.4.0-1018-raspi aarch64).
I'm trying to access one of my k8s services on port 80, but I can't get it set up correctly. I've also set up a static IP address for accessing the service and routed traffic from my router to that IP.
I'd like to know what I'm doing wrong, or whether there's a better way to do what I'm trying to do!
The steps I've taken:
- I ran microk8s enable dns metallb and gave MetalLB a range of IP addresses that my DHCP server does not hand out (192.168.0.90-192.168.0.99).
- I installed ingress-nginx by running kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.35.0/deploy/static/provider/baremetal/deploy.yaml. This creates an ingress-nginx-controller service of type NodePort, which doesn't work with MetalLB. As described here, I changed the service's spec.type from NodePort to LoadBalancer by running kubectl edit service ingress-nginx-controller -n ingress-nginx. MetalLB then assigned the IP 192.168.0.90 to the service.
- Then I applied the following configuration file:
apiVersion: v1
kind: Service
metadata:
  name: wow-ah-api-service
  namespace: develop
spec:
  selector:
    app: wow-ah-api
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  # Unique key of the Deployment instance
  name: wow-ah-api
  namespace: develop
spec:
  # 3 Pods should exist at all times.
  replicas: 3
  selector:
    matchLabels:
      app: wow-ah-api
  template:
    metadata:
      namespace: develop
      labels:
        # Apply this label to pods and default
        # the Deployment label selector to this value
        app: wow-ah-api
    spec:
      imagePullSecrets:
        - name: some-secret
      containers:
        - name: wow-ah-api
          # Run this image
          image: some-image
          imagePullPolicy: Always
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: wow-ah-api-ingress
  namespace: develop
spec:
  backend:
    serviceName: wow-ah-api-service
    servicePort: 3000
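Worth noting about the manifests above (an observation, not something raised in the original post): in the v1beta1 Ingress API, servicePort refers to a port exposed by the Service, and wow-ah-api-service only exposes port 80 (which it forwards to targetPort 3000 on the pods). A default backend consistent with that Service would look like:

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: wow-ah-api-ingress
  namespace: develop
spec:
  backend:
    serviceName: wow-ah-api-service
    # The Service's own port; the Service resolves it to targetPort 3000.
    servicePort: 80
```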
Here is some of the output I'm seeing:
microk8s kubectl get all --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
develop pod/wow-ah-api-6c4bff88f9-2x48v 1/1 Running 4 4h21m
develop pod/wow-ah-api-6c4bff88f9-ccw9z 1/1 Running 4 4h21m
develop pod/wow-ah-api-6c4bff88f9-rd6lp 1/1 Running 4 4h21m
ingress-nginx pod/ingress-nginx-admission-create-mnn8g 0/1 Completed 0 4h27m
ingress-nginx pod/ingress-nginx-admission-patch-x5r6d 0/1 Completed 1 4h27m
ingress-nginx pod/ingress-nginx-controller-7896b4fbd4-nglsd 1/1 Running 4 4h27m
kube-system pod/coredns-588fd544bf-576x5 1/1 Running 4 4h26m
metallb-system pod/controller-5f98465b6b-hcj9g 1/1 Running 4 4h23m
metallb-system pod/speaker-qc9pc 1/1 Running 4 4h23m
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.152.183.1 <none> 443/TCP 21h
develop service/wow-ah-api-service ClusterIP 10.152.183.88 <none> 80/TCP 4h21m
ingress-nginx service/ingress-nginx-controller LoadBalancer 10.152.183.216 192.168.0.90 80:32151/TCP,443:30892/TCP 4h27m
ingress-nginx service/ingress-nginx-controller-admission ClusterIP 10.152.183.41 <none> 443/TCP 4h27m
kube-system service/kube-dns ClusterIP 10.152.183.10 <none> 53/UDP,53/TCP,9153/TCP 4h26m
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
metallb-system daemonset.apps/speaker 1 1 1 1 1 beta.kubernetes.io/os=linux 4h23m
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
develop deployment.apps/wow-ah-api 3/3 3 3 4h21m
ingress-nginx deployment.apps/ingress-nginx-controller 1/1 1 1 4h27m
kube-system deployment.apps/coredns 1/1 1 1 4h26m
metallb-system deployment.apps/controller 1/1 1 1 4h23m
NAMESPACE NAME DESIRED CURRENT READY AGE
develop replicaset.apps/wow-ah-api-6c4bff88f9 3 3 3 4h21m
ingress-nginx replicaset.apps/ingress-nginx-controller-7896b4fbd4 1 1 1 4h27m
kube-system replicaset.apps/coredns-588fd544bf 1 1 1 4h26m
metallb-system replicaset.apps/controller-5f98465b6b 1 1 1 4h23m
NAMESPACE NAME COMPLETIONS DURATION AGE
ingress-nginx job.batch/ingress-nginx-admission-create 1/1 27s 4h27m
ingress-nginx job.batch/ingress-nginx-admission-patch 1/1 29s 4h27m
microk8s kubectl get ingress --all-namespaces
NAMESPACE NAME CLASS HOSTS ADDRESS PORTS AGE
develop wow-ah-api-ingress <none> * 192.168.0.236 80 4h23m
I keep thinking this might have something to do with my iptables configuration, but I'm not sure how to configure it to work with MicroK8s.
sudo iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
KUBE-SERVICES all -- anywhere anywhere ctstate NEW /* kubernetes service portals */
KUBE-EXTERNAL-SERVICES all -- anywhere anywhere ctstate NEW /* kubernetes externally-visible service portals */
KUBE-FIREWALL all -- anywhere anywhere
Chain FORWARD (policy ACCEPT)
target prot opt source destination
KUBE-FORWARD all -- anywhere anywhere /* kubernetes forwarding rules */
KUBE-SERVICES all -- anywhere anywhere ctstate NEW /* kubernetes service portals */
ACCEPT all -- 10.1.0.0/16 anywhere /* generated for MicroK8s pods */
ACCEPT all -- anywhere 10.1.0.0/16 /* generated for MicroK8s pods */
ACCEPT all -- 10.1.0.0/16 anywhere
ACCEPT all -- anywhere 10.1.0.0/16
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
KUBE-SERVICES all -- anywhere anywhere ctstate NEW /* kubernetes service portals */
KUBE-FIREWALL all -- anywhere anywhere
Chain KUBE-EXTERNAL-SERVICES (1 references)
target prot opt source destination
Chain KUBE-FIREWALL (2 references)
target prot opt source destination
DROP all -- anywhere anywhere /* kubernetes firewall for dropping marked packets */ mark match 0x8000/0x8000
DROP all -- !localhost/8 localhost/8 /* block incoming localnet connections */ ! ctstate RELATED,ESTABLISHED,DNAT
Chain KUBE-FORWARD (1 references)
target prot opt source destination
DROP all -- anywhere anywhere ctstate INVALID
ACCEPT all -- anywhere anywhere /* kubernetes forwarding rules */ mark match 0x4000/0x4000
ACCEPT all -- anywhere anywhere /* kubernetes forwarding conntrack pod source rule */ ctstate RELATED,ESTABLISHED
ACCEPT all -- anywhere anywhere /* kubernetes forwarding conntrack pod destination rule */ ctstate RELATED,ESTABLISHED
Chain KUBE-KUBELET-CANARY (0 references)
target prot opt source destination
Chain KUBE-PROXY-CANARY (0 references)
target prot opt source destination
Chain KUBE-SERVICES (3 references)
target prot opt source destination
Update #1
MetalLB ConfigMap (microk8s kubectl edit configmap/config -n metallb-system):
apiVersion: v1
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.0.90-192.168.0.99
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"config":"address-pools:\n- name: default\n  protocol: layer2\n  addresses:\n  - 192.168.0.90-192.168.0.99\n"},"kind":"ConfigMap","metadata":{"annotations":{},"name":"config","namespace":"metallb-system"}}
  creationTimestamp: "2020-09-19T21:18:45Z"
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:data:
        .: {}
        f:config: {}
      f:metadata:
        f:annotations:
          .: {}
          f:kubectl.kubernetes.io/last-applied-configuration: {}
    manager: kubectl
    operation: Update
    time: "2020-09-19T21:18:45Z"
  name: config
  namespace: metallb-system
  resourceVersion: "133422"
  selfLink: /api/v1/namespaces/metallb-system/configmaps/config
  uid: 774f6a73-b1e1-4e26-ba73-ef71bc2e1060
Thanks in advance for any help you can give me!
Solution
Short answer:
- You only need one IP address (and you probably already have one). You must be able to ping it from the MicroK8s machine.
- Your second step (installing ingress-nginx and changing the service type) is the mistake. Remove that step.
Long answer, with an example:
Start from a clean MicroK8s, with just one public IP (or local machine IP; in your use case I'd use 192.168.0.90).
How do you test? For example, run
curl -H "Host: blue.nginx.example.com" http://PUBLIC_IP
from outside the machine.
Run the test. It must fail.
Enable the MicroK8s dns and ingress addons:
microk8s.enable dns ingress
Run the test again. Did it fail? If it fails with the same error, you need MetalLB:
- With a public Internet IP:
microk8s.enable metallb:$(curl ipinfo.io/ip)-$(curl ipinfo.io/ip)
- With the LAN IP 192.168.0.90:
microk8s.enable metallb:192.168.0.90-192.168.0.90
Run the test again.
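For reference, enabling the addon with a single-address range produces a MetalLB config roughly equivalent to this ConfigMap (a sketch mirroring the layer2 config shown in the question, narrowed to one address):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config
  namespace: metallb-system
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      # A one-address "range" pins MetalLB to a single IP.
      - 192.168.0.90-192.168.0.90
```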
If the test does not return a 503 or 404, you can't move on to the next step; you probably have a network problem or a firewall filtering the traffic.
Ingress layer
Our test now reaches the MicroK8s ingress controller. It doesn't know what to do with the request and returns a 404 error (sometimes a 503).
That's fine. Keep going!
I'll use the example from https://youtu.be/A_PjjCM1eLA?t=984 (at 16:24),
[Kube 32] Set up Traefik Ingress on a bare-metal Kubernetes cluster
Set a kubectl alias:
alias kubectl=microk8s.kubectl
Deploy the apps:
kubectl create -f https://raw.githubusercontent.com/justmeandopensource/kubernetes/master/yamls/ingress-demo/nginx-deploy-main.yaml
kubectl create -f https://raw.githubusercontent.com/justmeandopensource/kubernetes/master/yamls/ingress-demo/nginx-deploy-blue.yaml
kubectl create -f https://raw.githubusercontent.com/justmeandopensource/kubernetes/master/yamls/ingress-demo/nginx-deploy-green.yaml
Expose the apps on the internal cluster network (type ClusterIP by default):
kubectl expose deploy nginx-deploy-main --port 80
kubectl expose deploy nginx-deploy-blue --port 80
kubectl expose deploy nginx-deploy-green --port 80
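Each kubectl expose command above generates a ClusterIP Service roughly like the following (a sketch for nginx-deploy-main; kubectl expose copies the deployment's selector labels, and the exact label key/value here is an assumption):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-deploy-main
spec:
  selector:
    # Assumed label; expose copies the deployment's own selector.
    app: nginx-deploy-main
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
```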
Run the test. Still not working...
An example ingress rule: how to configure host-based routing for nginx.example.com, blue.nginx.example.com and green.nginx.example.com, distributing requests to the exposed deployments:
kubectl create -f https://raw.githubusercontent.com/justmeandopensource/kubernetes/master/yamls/ingress-demo/ingress-resource-2.yaml
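That ingress-resource-2.yaml does host-based fan-out; it looks roughly like this (a sketch reconstructed from the hostnames and services above, using the same networking.k8s.io/v1beta1 API as the question; the actual file in the repo may differ in detail):

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-resource-2
spec:
  rules:
  # Each host is routed to the matching exposed deployment's Service.
  - host: nginx.example.com
    http:
      paths:
      - backend:
          serviceName: nginx-deploy-main
          servicePort: 80
  - host: blue.nginx.example.com
    http:
      paths:
      - backend:
          serviceName: nginx-deploy-blue
          servicePort: 80
  - host: green.nginx.example.com
    http:
      paths:
      - backend:
          serviceName: nginx-deploy-green
          servicePort: 80
```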
Run this test:
curl -H "Host: blue.nginx.example.com" http://PUBLIC_IP
Now you'll get a reply like:
<h1>I am <font color=blue>BLUE</font></h1>
You can play around with:
curl -H "Host: nginx.example.com" http://PUBLIC_IP
curl -H "Host: blue.nginx.example.com" http://PUBLIC_IP
curl -H "Host: green.nginx.example.com" http://PUBLIC_IP
Conclusions:
- We have only 1 IP address and multiple hosts.
- We serve 3 different services on the same port.
- Request distribution is done with Ingress.
Just getting started with MicroK8s - it looks promising. After combing through info sites and documentation, I was able to implement a bare-metal demo using the Traefik Ingress Controller (with Custom Resource Definitions and IngressRoutes), the Linkerd service mesh, and the MetalLB load balancer. This was done on a VirtualBox guest VM running Ubuntu 20.04; the GitHub repo also covers how to expose the external IP that MetalLB assigns to the Traefik Ingress Controller outside the guest VM. See https://github.com/msb1/microk8s-traefik-linkerd-whoami.
I prefer this implementation over the one shown in the YouTube link because it includes a working service mesh and uses Custom Resource Definitions for Ingress (which are unique to Traefik and one of the reasons to keep using Traefik over other ingress controllers).
Hope this helps others - you should be able to build great deployments with MicroK8s starting from this demo (its current focus).