How do I resolve the kubelet message "cni config uninitialized" when installing k8s 1.21.0 on a Raspberry Pi 4 running Alpine?
I am running kubeadm init --node-name k8s-node-01 --token=${TOKEN} on my Raspberry Pi 4 with Alpine for aarch64, and I get the output below on the console. Looking at the kubelet log, it seems I need to configure a CNI... but every set of instructions I keep finding assumes I already have a k8s cluster with a working kubectl configuration. I don't know how to proceed.
k8s-node-01:~# kubeadm init --node-name k8s-node-01 --token=${TOKEN}
[init] Using Kubernetes version: v1.21.0
[preflight] Running pre-flight checks
[WARNING SystemVerification]: missing optional cgroups: hugetlb
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-node-01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.201]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-node-01 localhost] and IPs [192.168.1.201 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-node-01 localhost] and IPs [192.168.1.201 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot,list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
error execution phase wait-control-plane: Couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
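One wrinkle here: the troubleshooting hints kubeadm prints assume systemd, but this Alpine install boots with OpenRC (the package list below includes kubelet-openrc and docker-openrc). As a sketch of what I mean, here is a small helper that picks the status command for whichever init system is actually present (the kubelet service name is my assumption in both cases):

```shell
# Pick the kubelet status command for the init system actually present.
# On Alpine/OpenRC the systemctl/journalctl hints above do not apply.
kubelet_status_cmd() {
    if command -v systemctl >/dev/null 2>&1; then
        echo "systemctl status kubelet"
    elif command -v rc-service >/dev/null 2>&1; then
        # OpenRC equivalent; the kubelet log then lives under /var/log/kubelet/
        echo "rc-service kubelet status"
    else
        echo "unknown init system"
    fi
}

kubelet_status_cmd
```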
Here is some output from /var/log/kubelet/kubelet.log:
I0429 20:48:52.250906 3237 kubelet.go:461] "Kubelet nodes not sync"
E0429 20:48:52.546014 3237 kubelet.go:2218] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
I0429 20:48:54.073913 3237 kubelet_node_status.go:71] "Attempting to register node" node="k8s-node-01"
E0429 20:48:54.075479 3237 kubelet_node_status.go:93] "Unable to register node with API server" err="Post \"https://192.168.1.201:6443/api/v1/nodes\": dial tcp 192.168.1.201:6443: connect: connection refused" node="k8s-node-01"
I0429 20:48:54.251246 3237 kubelet.go:461] "Kubelet nodes not sync"
I0429 20:48:54.637281 3237 cni.go:239] "Unable to update cni config" err="no networks found in /etc/cni/net.d"
E0429 20:48:56.301131 3237 certificate_manager.go:437] Failed while requesting a signed certificate from the master: cannot create certificate signing request: Post "https://192.168.1.201:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 192.168.1.201:6443: connect: connection refused
I0429 20:48:57.250927 3237 kubelet.go:461] "Kubelet nodes not sync"
E0429 20:48:57.602935 3237 kubelet.go:2218] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
E0429 20:48:57.723059 3237 controller.go:144] Failed to ensure lease exists, will retry in 7s, error: Get "https://192.168.1.201:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/k8s-node-01?timeout=10s": dial tcp 192.168.1.201:6443: connect: connection refused
E0429 20:48:59.251706 3237 kubelet.go:2298] "Error getting node" err="nodes have not yet been read at least once, cannot construct node object"
I0429 20:48:59.638137 3237 cni.go:239] "Unable to update cni config" err="no networks found in /etc/cni/net.d"
E0429 20:49:01.020298 3237 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"k8s-node-01.167a6d02c5fc74e6", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"k8s-node-01", UID:"k8s-node-01", APIVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node k8s-node-01 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"k8s-node-01"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc01ae266a80ff0e6, ext:4600481839, loc:(*time.Location)(0x3b69380)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc01ae26a11c222cb, ext:18226284051, Count:8, Type:"normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Patch "https://192.168.1.201:6443/api/v1/namespaces/default/events/k8s-node-01.167a6d02c5fc74e6": dial tcp 192.168.1.201:6443: connect: connection refused' (may retry after sleeping)
I0429 20:49:01.244928 3237 kubelet_node_status.go:71] "Attempting to register node" node="k8s-node-01"
E0429 20:49:01.247474 3237 kubelet_node_status.go:93] "Unable to register node with API server" err="Post \"https://192.168.1.201:6443/api/v1/nodes\": dial tcp 192.168.1.201:6443: connect: connection refused" node="k8s-node-01"
I0429 20:49:02.352656 3237 kubelet.go:461] "Kubelet nodes not sync"
E0429 20:49:02.658584 3237 kubelet.go:2218] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
I0429 20:49:04.353118 3237 kubelet.go:461] "Kubelet nodes not sync"
I0429 20:49:04.639286 3237 cni.go:239] "Unable to update cni config" err="no networks found in /etc/cni/net.d"
E0429 20:49:04.725136 3237 controller.go:144] Failed to ensure lease exists, error: Get "https://192.168.1.201:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/k8s-node-01?timeout=10s": dial tcp 192.168.1.201:6443: connect: connection refused
I0429 20:49:07.352911 3237 kubelet.go:461] "Kubelet nodes not sync"
E0429 20:49:07.714725 3237 kubelet.go:2218] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
I0429 20:49:08.416803 3237 kubelet_node_status.go:71] "Attempting to register node" node="k8s-node-01"
E0429 20:49:08.419563 3237 kubelet_node_status.go:93] "Unable to register node with API server" err="Post \"https://192.168.1.201:6443/api/v1/nodes\": dial tcp 192.168.1.201:6443: connect: connection refused" node="k8s-node-01"
I0429 20:49:09.353151 3237 kubelet.go:461] "Kubelet nodes not sync"
E0429 20:49:09.353408 3237 kubelet.go:2298] "Error getting node" err="nodes have not yet been read at least once, cannot construct node object"
E0429 20:49:09.561414 3237 reflector.go:138] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:45: Failed to watch *v1.Pod: Failed to list *v1.Pod: Get "https://192.168.1.201:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dk8s-node-01&limit=500&resourceVersion=0": dial tcp 192.168.1.201:6443: connect: connection refused
I0429 20:49:09.640110 3237 cni.go:239] "Unable to update cni config" err="no networks found in /etc/cni/net.d"
E0429 20:49:09.977853 3237 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: Failed to list *v1.Service: Get "https://192.168.1.201:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.1.201:6443: connect: connection refused
I0429 20:49:10.454221 3237 kubelet.go:461] "Kubelet nodes not sync"
E0429 20:49:11.022600 3237 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", ReportingInstance:""}': 'Patch "https://192.168.1.201:6443/api/v1/namespaces/default/events/k8s-node-01.167a6d02c5fc74e6": dial tcp 192.168.1.201:6443: connect: connection refused' (may retry after sleeping)
I0429 20:49:11.455253 3237 kubelet.go:461] "Kubelet nodes not sync"
E0429 20:49:11.727673 3237 controller.go:144] Failed to ensure lease exists, error: Get "https://192.168.1.201:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/k8s-node-01?timeout=10s": dial tcp 192.168.1.201:6443: connect: connection refused
E0429 20:49:12.770559 3237 kubelet.go:2218] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
Here are some details about my Raspberry Pi 4 Alpine installation.
k8s-node-01:~# uname -a
Linux k8s-node-01 5.10.29-0-rpi4 #1-Alpine SMP PREEMPT Mon Apr 12 15:55:08 UTC 2021 aarch64 Linux
k8s-node-01:~# kubectl version
Client Version: version.Info{Major:"1",Minor:"21",GitVersion:"v1.21.0",GitCommit:"cb303e613a121a29364f75cc67d3d580833a7479",GitTreeState:"archive",BuildDate:"2021-04-12T14:02:47Z",GoVersion:"go1.16.3",Compiler:"gc",Platform:"linux/arm64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
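The localhost:8080 refusal is, as far as I can tell, just kubectl running with no kubeconfig at all; kubeadm writes admin.conf under /etc/kubernetes (visible in the init output above), so once the control plane actually comes up I would point kubectl at it roughly like this (a sketch, not something that works yet on this box):

```shell
# Choose the kubeconfig kubeadm wrote, falling back to kubectl's default.
pick_kubeconfig() {
    if [ -f /etc/kubernetes/admin.conf ]; then
        echo /etc/kubernetes/admin.conf
    else
        echo "$HOME/.kube/config"
    fi
}

KUBECONFIG=$(pick_kubeconfig)
export KUBECONFIG
echo "kubectl will use $KUBECONFIG"
```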
And a list of (some of) the installed packages/versions, just in case:
k8s-node-01:~# apk list -I | sort | cut -f1 -d\
alpine-base-3.13.5-r0
alpine-baselayout-3.2.0-r8
alpine-conf-3.11.0-r2
alpine-keys-2.2-r0
cni-plugins-0.9.1-r0
containerd-1.4.4-r0
docker-20.10.3-r1
docker-cli-20.10.3-r1
docker-engine-20.10.3-r1
docker-openrc-20.10.3-r1
iptables-1.8.6-r0
iptables-openrc-1.8.6-r0
kubeadm-1.21.0-r1
kubectl-1.21.0-r1
kubelet-1.21.0-r1
kubelet-openrc-1.21.0-r1
kubernetes-1.21.0-r1
libnetfilter_conntrack-1.0.8-r0
libnetfilter_cthelper-1.0.0-r1
libnetfilter_cttimeout-1.0.0-r1
libnetfilter_queue-1.0.5-r0
linux-firmware-brcm-20201218-r0
linux-rpi4-5.10.29-r0
openrc-0.42.1-r19