
Kubernetes: kubeadm init fails

How do I resolve this kubeadm initialization failure?

Environment:
OS: CentOS Linux release 7.6.1810 (Core), kernel 3.10.0-957.1.3.el7.x86_64
Hardware: VMware VM, 8 vCPU / 32 GB RAM
Docker: 19.03.8
kubeadm: v1.20.2

Console output

    #############kubeadm init logs ##############
    [root@master01 ~]# kubeadm  init
    W0214 08:58:05.787128   60348 version.go:101] Could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get "https://dl.k8s.io/release/stable-1.txt": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
    W0214 08:58:05.787312   60348 version.go:102] falling back to the local client version: v1.20.2
    [init] Using Kubernetes version: v1.20.2
    [preflight] Running pre-flight checks
    [WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
    [WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
    [preflight] Pulling images required for setting up a Kubernetes cluster
    [preflight] This might take a minute or two, depending on the speed of your internet connection
    [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
    [certs] Using certificateDir folder "/etc/kubernetes/pki"
    [certs] Generating "ca" certificate and key
    [certs] Generating "apiserver" certificate and key
    [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master01] and IPs [10.96.0.1 10.30.201.97]
    [certs] Generating "apiserver-kubelet-client" certificate and key
    [certs] Generating "front-proxy-ca" certificate and key
    [certs] Generating "front-proxy-client" certificate and key
    [certs] Generating "etcd/ca" certificate and key
    [certs] Generating "etcd/server" certificate and key
    [certs] etcd/server serving cert is signed for DNS names [localhost master01] and IPs [10.30.201.97 127.0.0.1 ::1]
    [certs] Generating "etcd/peer" certificate and key
    [certs] etcd/peer serving cert is signed for DNS names [localhost master01] and IPs [10.30.201.97 127.0.0.1 ::1]
    [certs] Generating "etcd/healthcheck-client" certificate and key
    [certs] Generating "apiserver-etcd-client" certificate and key
    [certs] Generating "sa" key and public key
    [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
    [kubeconfig] Writing "admin.conf" kubeconfig file
    [kubeconfig] Writing "kubelet.conf" kubeconfig file
    [kubeconfig] Writing "controller-manager.conf" kubeconfig file
    [kubeconfig] Writing "scheduler.conf" kubeconfig file
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Starting the kubelet
    [control-plane] Using manifest folder "/etc/kubernetes/manifests"
    [control-plane] Creating static Pod manifest for "kube-apiserver"
    [control-plane] Creating static Pod manifest for "kube-controller-manager"
    [control-plane] Creating static Pod manifest for "kube-scheduler"
    [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
    [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
    [kubelet-check] Initial timeout of 40s passed.
    
            Unfortunately, an error has occurred:
                    timed out waiting for the condition
    
            This error is likely caused by:
                    - The kubelet is not running
                    - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
    
            If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
                    - 'systemctl status kubelet'
                    - 'journalctl -xeu kubelet'
    
            Additionally, a control plane component may have crashed or exited when started by the container runtime.
            To troubleshoot, list all containers using your preferred container runtimes CLI.
    
            Here is one example how you may list all Kubernetes containers running in docker:
                    - 'docker ps -a | grep kube | grep -v pause'
                    Once you have found the failing container, you can inspect its logs with:
                    - 'docker logs CONTAINERID'
    
    error execution phase wait-control-plane: Couldn't initialize a Kubernetes cluster
    To see the stack trace of this error execute with --v=5 or higher
    [root@master01 ~]# docker ps -a | grep kube
    [root@master01 ~]# 
    [root@master01 ~]# docker ps -a 
    CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
    [root@master01 ~]#
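Note that `docker ps -a` returns no Kubernetes containers at all, which suggests the kubelet never got far enough to start the static Pods. On CentOS 7 with Docker, a frequent cause of this exact pattern is a cgroup driver mismatch between Docker (which defaults to `cgroupfs`) and the kubelet. The following is a hedged troubleshooting sketch, not a confirmed diagnosis for this machine; the `daemon.json` change assumes Docker is the container runtime:

```shell
# First inspect why the kubelet is failing, as the kubeadm output suggests.
systemctl status kubelet
journalctl -xeu kubelet | tail -n 50

# Check which cgroup driver Docker is actually using.
docker info --format '{{.CgroupDriver}}'

# If Docker reports "cgroupfs" while the kubelet is configured for "systemd"
# (check /var/lib/kubelet/kubeadm-flags.env and /var/lib/kubelet/config.yaml),
# switching Docker to the systemd driver is a common fix:
cat > /etc/docker/daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl daemon-reload && systemctl restart docker

# Clean up the failed attempt and re-run the initialization.
kubeadm reset -f
kubeadm init
```

It is also worth verifying that swap is disabled (`swapoff -a`, and remove the swap entry from /etc/fstab) and that SELinux is permissive, since either condition can prevent the kubelet from starting on CentOS 7.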
