
Rook Ceph pods fail to start after enabling the Istio sidecar


We are running into a problem deploying rook-ceph on Kubernetes when the istio sidecar is enabled. The OSDs never come up because the crashcollector pods are not initialized properly; they stay stuck as shown below:

rook-ceph        csi-cephfsplugin-7jcr9                             3/3     Running            0          63m
rook-ceph        csi-cephfsplugin-c4dnd                             3/3     Running            0          63m
rook-ceph        csi-cephfsplugin-provisioner-8658f67749-6gzkk      7/7     Running            2          63m
rook-ceph        csi-cephfsplugin-provisioner-8658f67749-bgdpx      7/7     Running            1          63m
rook-ceph        csi-cephfsplugin-zj9xm                             3/3     Running            0          63m
rook-ceph        csi-rbdplugin-58xf4                                3/3     Running            0          63m
rook-ceph        csi-rbdplugin-87rjn                                3/3     Running            0          63m
rook-ceph        csi-rbdplugin-provisioner-94f699d86-rh2r6          7/7     Running            1          63m
rook-ceph        csi-rbdplugin-provisioner-94f699d86-xkv6h          7/7     Running            1          63m
rook-ceph        csi-rbdplugin-tvjvz                                3/3     Running            0          63m
rook-ceph        rook-ceph-crashcollector-node1-f7f6c6f8d-lfs6d     0/2     Init:0/3           0          63m
rook-ceph        rook-ceph-crashcollector-node2-998bb8769-pspnn     0/2     Init:0/3           0          51m
rook-ceph        rook-ceph-crashcollector-node3-6c48c99c8-7bbl6     0/2     Init:0/3           0          40m
rook-ceph        rook-ceph-mon-a-7966994c76-z9phm                   2/2     Running            0          51m
rook-ceph        rook-ceph-mon-b-8cbf8579f-g6nd9                    2/2     Running            0          51m
rook-ceph        rook-ceph-mon-c-d65968cc4-wcpmr                    2/2     Running            0          40m
rook-ceph        rook-ceph-operator-5c47844cf-z9jcb                 2/2     Running            1          67m

When we run kubectl describe on one of these pods, we see the following warning:

Warning  FailedMount  59m                  kubelet, node1  Unable to attach or mount volumes: unmounted volumes=[rook-ceph-crash-collector-keyring], unattached volumes=[rook-config-override rook-ceph-log rook-ceph-crash-collector-keyring istio-data istio-podinfo istiod-ca-cert istio-envoy rook-ceph-crash default-token-htvcq]: timed out waiting for the condition

We also noticed that the secret "rook-ceph-crash-collector-keyring" never gets created.
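One quick way to confirm this (assuming the default rook-ceph namespace) is to query the secret directly; while the cluster is stuck it returns something like a NotFound error:

kubectl -n rook-ceph get secret rook-ceph-crash-collector-keyring
# Error from server (NotFound): secrets "rook-ceph-crash-collector-keyring" not found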

After a lot of debugging we found that the "mon" pods cannot be reached through their service endpoints, while all other communication (the Kubernetes API, services in other namespaces, and so on) works fine.

When we exec into a "mon" pod and run curl, it connects if we use the hostname:

sh-4.4# curl -f rook-ceph-mon-b-8cbf8579f-g6nd9:6789
Warning: Binary output can mess up your terminal. Use "--output -" to tell
Warning: curl to output it to your terminal anyway, or consider "--output
Warning: <FILE>" to save to a file.

But using the service name does not work:

sh-4.4# curl -f rook-ceph-mon-a:6789
curl: (56) Recv failure: Connection reset by peer
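If you want to dig into how the client-side sidecar handles the mon port, istioctl can dump the Envoy configuration for that pod (a debugging sketch; the pod name is taken from the listing above and the flags assume a reasonably recent istioctl):

istioctl proxy-status
istioctl -n rook-ceph proxy-config listeners rook-ceph-mon-b-8cbf8579f-g6nd9 --port 6789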

We also noticed a potential clue in the rook-ceph-operator logs that the mons never reach quorum:

2021-02-13 06:11:23.532494 I | op-k8sutil: deployment "rook-ceph-mon-a" did not change, nothing to update
2021-02-13 06:11:23.532658 I | op-mon: waiting for mon quorum with [a c b]
2021-02-13 06:11:24.123965 I | op-mon: mons running: [a c b]
2021-02-13 06:11:44.354283 I | op-mon: mons running: [a c b]
2021-02-13 06:12:04.553052 I | op-mon: mons running: [a c b]
2021-02-13 06:12:24.760423 I | op-mon: mons running: [a c b]
2021-02-13 06:12:44.953344 I | op-mon: mons running: [a c b]
2021-02-13 06:13:05.153151 I | op-mon: mons running: [a c b]
2021-02-13 06:13:25.354678 I | op-mon: mons running: [a c b]
2021-02-13 06:13:45.551489 I | op-mon: mons running: [a c b]
2021-02-13 06:14:05.910343 I | op-mon: mons running: [a c b]
2021-02-13 06:14:26.188100 I | op-mon: mons running: [a c b]
2021-02-13 06:14:46.377549 I | op-mon: mons running: [a c b]
2021-02-13 06:15:06.563272 I | op-mon: mons running: [a c b]
2021-02-13 06:15:27.119178 I | op-mon: mons running: [a c b]
2021-02-13 06:15:47.372562 I | op-mon: mons running: [a c b]
2021-02-13 06:16:07.565653 I | op-mon: mons running: [a c b]
2021-02-13 06:16:27.751456 I | op-mon: mons running: [a c b]
2021-02-13 06:16:47.952091 I | op-mon: mons running: [a c b]
2021-02-13 06:17:08.168884 I | op-mon: mons running: [a c b]
2021-02-13 06:17:28.358448 I | op-mon: mons running: [a c b]
2021-02-13 06:17:48.559239 I | op-mon: mons running: [a c b]
2021-02-13 06:18:08.767715 I | op-mon: mons running: [a c b]
2021-02-13 06:18:28.987579 I | op-mon: mons running: [a c b]
2021-02-13 06:18:49.242784 I | op-mon: mons running: [a c b]
2021-02-13 06:19:09.456809 I | op-mon: mons running: [a c b]
2021-02-13 06:19:29.671632 I | op-mon: mons running: [a c b]
2021-02-13 06:19:49.871453 I | op-mon: mons running: [a c b]
2021-02-13 06:20:10.062897 I | op-mon: mons running: [a c b]
2021-02-13 06:20:30.258163 I | op-mon: mons running: [a c b]
2021-02-13 06:20:50.452097 I | op-mon: mons running: [a c b]
2021-02-13 06:21:10.655282 I | op-mon: mons running: [a c b]
2021-02-13 06:21:25.854570 E | ceph-cluster-controller: Failed to reconcile. Failed to reconcile cluster "rook-ceph": Failed to configure local ceph cluster: Failed to create cluster: Failed to start ceph monitors: Failed to start mon pods: Failed to check mon quorum a: Failed to wait for mon quorum: exceeded max retry count waiting for monitors to reach quorum

It looks like the mons are no longer reachable through their service endpoints, and that stalls the whole initialization.
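If the rook-ceph-tools toolbox is deployed (the deployment name below assumes the standard toolbox manifest), quorum can also be checked from inside the cluster, which helps separate a Ceph problem from a pure networking one:

kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph status
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph quorum_status --format json-pretty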

These are the services running in the rook-ceph namespace:

NAME                       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
csi-cephfsplugin-metrics   ClusterIP   10.233.30.235   <none>        8080/TCP,8081/TCP   83m
csi-rbdplugin-metrics      ClusterIP   10.233.61.8     <none>        8080/TCP,8081/TCP   83m
rook-ceph-mon-a            ClusterIP   10.233.2.224    <none>        6789/TCP,3300/TCP   83m
rook-ceph-mon-b            ClusterIP   10.233.39.129   <none>        6789/TCP,3300/TCP   72m
rook-ceph-mon-c            ClusterIP   10.233.51.59    <none>        6789/TCP,3300/TCP   61m
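As a plain-Kubernetes sanity check, the mon Services can also be verified to have pod endpoints behind them:

kubectl -n rook-ceph get endpoints rook-ceph-mon-a rook-ceph-mon-b rook-ceph-mon-c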

Other notes: we are using the latest versions of istio, rook-ceph, etc. The cluster was created with Kubespray and runs on three Ubuntu Bionic nodes, with Calico as the network plugin.

Let us know if you need more details. Thanks in advance.

Workaround

I have narrowed the problem down to the "rook-ceph-mon" pods. If we exclude the rook-ceph-mon and rook-ceph-osd-prepare pods from sidecar injection (which should be fine, since the latter is a one-off scheduled job), everything works.

In the Istio configuration I added the following to exclude the mon and prepare pods from sidecar injection, and everything has been working since:

neverInjectSelector:
  - matchExpressions:
      - {key: mon, operator: Exists}
  - matchExpressions:
      - {key: job-name, operator: Exists}
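For context, this selector is part of the sidecar injector configuration; with an istioctl/operator based install it can be set under the IstioOperator values (a sketch showing only the relevant fields and assuming that install path; on older installs the same block lives in the istio-sidecar-injector ConfigMap):

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    sidecarInjectorWebhook:
      neverInjectSelector:
        - matchExpressions:
            - {key: mon, operator: Exists}
        - matchExpressions:
            - {key: job-name, operator: Exists}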

The other thing I had to do was change the mTLS mode from "STRICT" to "PERMISSIVE".
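For reference, a minimal sketch of that change as a namespace-scoped PeerAuthentication, assuming you only want to relax mTLS for the rook-ceph namespace rather than mesh-wide:

apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: rook-ceph
spec:
  mtls:
    mode: PERMISSIVE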

The pod listing now looks like this (note that the mons have no sidecar):

rook-ceph        csi-cephfsplugin-444gk                             3/3     Running     0          16m
rook-ceph        csi-cephfsplugin-9cdkz                             3/3     Running     0          16m
rook-ceph        csi-cephfsplugin-n6k5x                             3/3     Running     0          16m
rook-ceph        csi-cephfsplugin-provisioner-8658f67749-ms985      7/7     Running     2          16m
rook-ceph        csi-cephfsplugin-provisioner-8658f67749-v2g8x      7/7     Running     2          16m
rook-ceph        csi-rbdplugin-lsfhl                                3/3     Running     0          16m
rook-ceph        csi-rbdplugin-mbf67                                3/3     Running     0          16m
rook-ceph        csi-rbdplugin-provisioner-94f699d86-5fvrf          7/7     Running     2          16m
rook-ceph        csi-rbdplugin-provisioner-94f699d86-zl7js          7/7     Running     2          16m
rook-ceph        csi-rbdplugin-swnvt                                3/3     Running     0          16m
rook-ceph        rook-ceph-crashcollector-node1-779c58d4c4-rx7jd    2/2     Running     0          9m20s
rook-ceph        rook-ceph-crashcollector-node2-998bb8769-h4dbx     2/2     Running     0          12m
rook-ceph        rook-ceph-crashcollector-node3-88695c488-gskgb     2/2     Running     0          9m34s
rook-ceph        rook-ceph-mds-myfs-a-6f94b9c496-276tw              2/2     Running     0          9m35s
rook-ceph        rook-ceph-mds-myfs-b-66977b55cb-rqvg9              2/2     Running     0          9m21s
rook-ceph        rook-ceph-mgr-a-7f478d8d67-b4nxv                   2/2     Running     1          12m
rook-ceph        rook-ceph-mon-a-57b6474f8f-65c9z                   1/1     Running     0          16m
rook-ceph        rook-ceph-mon-b-978f77998-9dqdg                    1/1     Running     0          15m
rook-ceph        rook-ceph-mon-c-756fbf5c66-thcjq                   1/1     Running     0          13m
rook-ceph        rook-ceph-operator-5c47844cf-gzms8                 2/2     Running     2          19m
rook-ceph        rook-ceph-osd-0-7d48c6b97d-t725c                   2/2     Running     0          12m
rook-ceph        rook-ceph-osd-1-54797bdd48-zgkrw                   2/2     Running     0          12m
rook-ceph        rook-ceph-osd-2-7898d6cc4-wc2c2                    2/2     Running     0          12m
rook-ceph        rook-ceph-osd-prepare-node1-mczj7                  0/1     Completed   0          12m
rook-ceph        rook-ceph-osd-prepare-node2-tzrk6                  0/1     Completed   0          12m
rook-ceph        rook-ceph-osd-prepare-node3-824lx                  0/1     Completed   0          12m

Something strange happens when the sidecar is enabled on rook-ceph-mon that makes it unreachable through its service endpoint.

I know this is a workaround. Looking forward to a better answer.


When you inject a sidecar, you have to take into account that the istio-proxy needs a few seconds to become ready.

In some cases Jobs/CronJobs have no retries, so they fail because of this network issue; in other cases they run fine but never finish, because the sidecar container has to be terminated for the Job to complete, so the pod stays stuck at 1/2.

This behavior can also show up in Deployments and applications that do not implement retries.
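If you are on Istio 1.7 or later, holdApplicationUntilProxyStarts mitigates the startup race by not starting the application containers until the proxy is ready (it does not solve the Job-never-terminating part); a minimal mesh-wide sketch:

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    defaultConfig:
      holdApplicationUntilProxyStarts: true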

Here you can find plenty of examples of how to work around it: https://github.com/istio/istio/issues/11659

I use this:

# wait until the istio-proxy is ready before doing anything that needs the network
until curl -fsI http://localhost:15021/healthz/ready; do sleep 1; done

WHATEVER TASK YOU NEED TO DO

# capture the task's exit code, then tell the sidecar to exit so the Job can finish
RC=$?
curl -fsI -X POST http://localhost:15020/quitquitquit
exit $RC

In my case I only had problems with the rook-ceph-osd-prepare-* pods, so I decided to set an annotation so that no sidecar is injected into them. In your case with the crashcollector, upgrading the Istio version might be enough.
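Concretely that is the per-pod sidecar.istio.io/inject: "false" annotation; in Rook it can be propagated through the CephCluster annotations section (a sketch showing only that block; the prepareosd key assumes your Rook version supports per-component annotations):

apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  annotations:
    prepareosd:
      sidecar.istio.io/inject: "false"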

My versions are Kubernetes 1.20, Istio 1.10 and Ceph 15.2.8.
