
Upgrading 1.18.20 to 1.19.12: "unsupported or unknown Kubernetes version" on the remaining control planes

I am performing a kubeadm upgrade from 1.18.20 to 1.19.12 and am now stuck at the step of running sudo kubeadm upgrade node on the remaining control planes. I need help/advice on what to do next.

Note: before this upgrade, I upgraded from 1.17.16 to 1.18.20 without hitting this problem.

On the first control plane, the upgrade looks fine:

[root@tncp-stg-master01 ~]# sudo kubeadm upgrade apply v1.19.12-0
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.19.12-0"
[upgrade/versions] Cluster version: v1.18.20
[upgrade/versions] kubeadm version: v1.19.12
[upgrade/version] FATAL: the --version argument is invalid due to these errors:

        - Specified version to upgrade to "v1.19.12-0" is an unstable version and such upgrades weren't allowed via setting the --allow-*-upgrades flags

Can be bypassed if you pass the --force flag
To see the stack trace of this error execute with --v=5 or higher
[root@tncp-stg-master01 ~]# sudo kubeadm upgrade apply v1.19.12
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.19.12"
[upgrade/versions] Cluster version: v1.18.20
[upgrade/versions] kubeadm version: v1.19.12
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Pulling images required for setting up a Kubernetes cluster
[upgrade/prepull] This might take a minute or two, depending on the speed of your internet connection
[upgrade/prepull] You can also perform this action in beforehand using 'kubeadm config images pull'
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.19.12"...
Static pod: kube-apiserver-tncp-stg-master01.time.com.my hash: 42d728ab72328991b6af3c5238a4708c
Static pod: kube-controller-manager-tncp-stg-master01.time.com.my hash: c5f63b2a4bb9814c91aa387d432bbeef
Static pod: kube-scheduler-tncp-stg-master01.time.com.my hash: eda45837f8d54b8750b297583fe7441a
[upgrade/etcd] Upgrading to TLS for etcd
Static pod: etcd-tncp-stg-master01.time.com.my hash: 27681be8329e7b28f532e45960cfc289
[upgrade/staticpods] Preparing for "etcd" upgrade
[upgrade/staticpods] Renewing etcd-server certificate
[upgrade/staticpods] Renewing etcd-peer certificate
[upgrade/staticpods] Renewing etcd-healthcheck-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-07-05-14-41-52/etcd.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: etcd-tncp-stg-master01.time.com.my hash: 27681be8329e7b28f532e45960cfc289
... (identical line repeated while kubeadm waited for the static pod hash to change) ...
Static pod: etcd-tncp-stg-master01.time.com.my hash: 739364b92b99a8c6e8c092c9385fa5a0
[apiclient] Found 3 Pods for label selector component=etcd
[upgrade/staticpods] Component "etcd" upgraded successfully!
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests583076120"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-07-05-14-41-52/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-tncp-stg-master01.time.com.my hash: 42d728ab72328991b6af3c5238a4708c
... (identical line repeated while kubeadm waited for the static pod hash to change) ...
Static pod: kube-apiserver-tncp-stg-master01.time.com.my hash: 709f3d0d7fb7a8c5685693801ff110cb
[apiclient] Found 3 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-07-05-14-41-52/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-tncp-stg-master01.time.com.my hash: c5f63b2a4bb9814c91aa387d432bbeef
... (identical line repeated while kubeadm waited for the static pod hash to change) ...
Static pod: kube-controller-manager-tncp-stg-master01.time.com.my hash: 039553ac73de7e2aebd99a9d9e7d4b1d
[apiclient] Found 3 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-07-05-14-41-52/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-tncp-stg-master01.time.com.my hash: eda45837f8d54b8750b297583fe7441a
... (identical line repeated while kubeadm waited for the static pod hash to change) ...
Static pod: kube-scheduler-tncp-stg-master01.time.com.my hash: b719a6c7edf46f2cbff4eef358c5f633
[apiclient] Found 3 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.19" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: coredns
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.19.12". Enjoy!

However, when I ran this step on the remaining control planes, I got the following error:

[root@tncp-stg-master02 pki]# sudo kubeadm upgrade node
[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade] Upgrading your Static Pod-hosted control plane instance to version "v1.19.12"...
Static pod: kube-apiserver-tncp-stg-master02.time.com.my hash: 2977ce343053936f861ccbdc9fbcabce
Static pod: kube-controller-manager-tncp-stg-master02.time.com.my hash: 251b3efdde07f767bb4a8380c1dc04bb
Static pod: kube-scheduler-tncp-stg-master02.time.com.my hash: eda45837f8d54b8750b297583fe7441a
[upgrade/etcd] Upgrading to TLS for etcd
error execution phase control-plane: Couldn't complete the static pod upgrade: Failed to retrieve an etcd version for the target Kubernetes version: unsupported or unknown Kubernetes version(1.19.12)
To see the stack trace of this error execute with --v=5 or higher
[root@tncp-stg-master02 pki]#

Running with --v=5:

[root@tncp-stg-master02 pki]# sudo kubeadm upgrade node --v=5
[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade] Upgrading your Static Pod-hosted control plane instance to version "v1.19.12"...
Static pod: kube-apiserver-tncp-stg-master02.time.com.my hash: 2977ce343053936f861ccbdc9fbcabce
Static pod: kube-controller-manager-tncp-stg-master02.time.com.my hash: 251b3efdde07f767bb4a8380c1dc04bb
Static pod: kube-scheduler-tncp-stg-master02.time.com.my hash: eda45837f8d54b8750b297583fe7441a
I0705 16:59:21.578428   17158 etcd.go:108] etcd endpoints read from pods: https://10.210.117.31:2379,https://10.210.117.32:2379,https://10.210.117.33:2379
I0705 16:59:21.596763   17158 etcd.go:167] etcd endpoints read from etcd: https://10.210.117.32:2379,https://10.210.117.33:2379,https://10.210.117.31:2379
I0705 16:59:21.596880   17158 etcd.go:126] update etcd endpoints: https://10.210.117.32:2379,https://10.210.117.31:2379
[upgrade/etcd] Upgrading to TLS for etcd
unsupported or unknown Kubernetes version(1.19.12)
k8s.io/kubernetes/cmd/kubeadm/app/constants.EtcdSupportedVersion
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/constants/constants.go:452
k8s.io/kubernetes/cmd/kubeadm/app/phases/upgrade.performEtcdStaticPodUpgrade
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/phases/upgrade/staticpods.go:284
k8s.io/kubernetes/cmd/kubeadm/app/phases/upgrade.StaticPodControlPlane
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/phases/upgrade/staticpods.go:455
k8s.io/kubernetes/cmd/kubeadm/app/phases/upgrade.PerformStaticPodUpgrade
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/phases/upgrade/staticpods.go:606
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/upgrade/node.runControlPlane.func1
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/upgrade/node/controlplane.go:77
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:234
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:422
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:207
k8s.io/kubernetes/cmd/kubeadm/app/cmd/upgrade.NewCmdNode.func1
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/upgrade/node.go:72
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:826
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:914
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:864
k8s.io/kubernetes/cmd/kubeadm/app.Run
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50
main.main
        _output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
        /usr/local/go/src/runtime/proc.go:203
runtime.goexit
        /usr/local/go/src/runtime/asm_amd64.s:1357
Failed to retrieve an etcd version for the target Kubernetes version
k8s.io/kubernetes/cmd/kubeadm/app/phases/upgrade.performEtcdStaticPodUpgrade
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/phases/upgrade/staticpods.go:286
k8s.io/kubernetes/cmd/kubeadm/app/phases/upgrade.StaticPodControlPlane
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/phases/upgrade/staticpods.go:455
k8s.io/kubernetes/cmd/kubeadm/app/phases/upgrade.PerformStaticPodUpgrade
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/phases/upgrade/staticpods.go:606
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/upgrade/node.runControlPlane.func1
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/upgrade/node/controlplane.go:77
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:234
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:422
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:207
k8s.io/kubernetes/cmd/kubeadm/app/cmd/upgrade.NewCmdNode.func1
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/upgrade/node.go:72
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:826
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:914
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:864
k8s.io/kubernetes/cmd/kubeadm/app.Run
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50
main.main
        _output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
        /usr/local/go/src/runtime/proc.go:203
runtime.goexit
        /usr/local/go/src/runtime/asm_amd64.s:1357
Couldn't complete the static pod upgrade
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/upgrade/node.runControlPlane.func1
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/upgrade/node/controlplane.go:78
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:234
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:422
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:207
k8s.io/kubernetes/cmd/kubeadm/app/cmd/upgrade.NewCmdNode.func1
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/upgrade/node.go:72
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:826
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:914
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:864
k8s.io/kubernetes/cmd/kubeadm/app.Run
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50
main.main
        _output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
        /usr/local/go/src/runtime/proc.go:203
runtime.goexit
        /usr/local/go/src/runtime/asm_amd64.s:1357
error execution phase control-plane
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:235
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:422
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:207
k8s.io/kubernetes/cmd/kubeadm/app/cmd/upgrade.NewCmdNode.func1
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/upgrade/node.go:72
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:826
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:914
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:864
k8s.io/kubernetes/cmd/kubeadm/app.Run
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50
main.main
        _output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
        /usr/local/go/src/runtime/proc.go:203
runtime.goexit
        /usr/local/go/src/runtime/asm_amd64.s:1357

Solution

Make sure the same version of kubeadm is installed on all nodes before running kubeadm upgrade node.
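
In practice, that means checking the kubeadm binary on every node before retrying. A minimal check, assuming the RPM-based install implied by the v1.19.12-0 package version in the session above (use apt-get on Debian/Ubuntu instead):

# run on every control plane and worker node
kubeadm version -o short    # should print v1.19.12 everywhere

# if a node still reports v1.18.20, upgrade its kubeadm package first
yum install -y kubeadm-1.19.12-0 --disableexcludes=kubernetes
kubeadm version -o short    # confirm before rerunning 'kubeadm upgrade node'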

The unsupported or unknown Kubernetes version error occurs when kubeadm cannot map the target Kubernetes version to an etcd version; see the sources.
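
As a quick diagnostic (a sketch, not part of the original answer): asking kubeadm for the image list of the target version exercises the same version-to-etcd mapping, so an outdated binary should fail here with the same message:

# succeeds only if this kubeadm binary can map v1.19.12 to an etcd version
kubeadm config images list --kubernetes-version v1.19.12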

Each kubeadm release only supports a limited range of Kubernetes versions. For example, the current kubeadm master maps only versions 1.13 through 1.23 to etcd versions (sources).

If you run the same kubeadm version on all nodes, this error should not occur.
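
Once every node reports the same kubeadm version, the failed step can simply be rerun; kubeadm skips components whose manifests are already up to date. For example:

# on each remaining control plane
sudo kubeadm upgrade node

# then check the cluster; note the VERSION column reflects each kubelet,
# which changes only after the kubelet package itself is upgraded and restarted
kubectl get nodes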
