Why does a deployment scaled to 1 show 2/2 pods?
I have a deployment with scale=1, but when I run get pods I see 2/2 ... When I scale the deployment down to 0 and back up to 1, I get 2 back again... How is this possible? As shown below, prometheus-server has 2:
PS C:\dev\> kubectl.exe get pods -n monitoring
NAME READY STATUS RESTARTS AGE
grafana-6c79d58dd-5k8cs 1/1 Running 0 3d21h
prometheus-alertmanager-5584c7b8d-k7zrn 2/2 Running 0 3d21h
prometheus-kube-state-metrics-6b46f67bf6-kt5dq 1/1 Running 0 3d21h
prometheus-node-exporter-fj5zv 1/1 Running 0 3d21h
prometheus-node-exporter-vgjtt 1/1 Running 0 3d21h
prometheus-node-exporter-xfm5h 1/1 Running 0 3d21h
prometheus-node-exporter-zp9mw 1/1 Running 0 3d21h
prometheus-pushgateway-6c9764ff46-s295t 1/1 Running 0 3d21h
prometheus-server-b647558d5-jxgtl 2/2 Running 0 2m18s
The deployment is:
PS C:\dev> kubectl.exe describe deployment prometheus-server -n monitoring
Name: prometheus-server
Namespace: monitoring
CreationTimestamp: Thu, 16 Jul 2020 11:46:58 +0300
Labels: app=prometheus
app.kubernetes.io/managed-by=Helm
chart=prometheus-11.7.0
component=server
heritage=Helm
release=prometheus
Annotations: deployment.kubernetes.io/revision: 1
meta.helm.sh/release-name: prometheus
meta.helm.sh/release-namespace: monitoring
Selector: app=prometheus,component=server,release=prometheus
Replicas: 1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: app=prometheus
chart=prometheus-11.7.0
component=server
heritage=Helm
release=prometheus
Service Account: prometheus-server
Containers:
prometheus-server-configmap-reload:
Image: jimmidyson/configmap-reload:v0.3.0
Port: <none>
Host Port: <none>
Args:
--volume-dir=/etc/config
--webhook-url=http://127.0.0.1:9090/-/reload
Environment: <none>
Mounts:
/etc/config from config-volume (ro)
prometheus-server:
Image: prom/prometheus:v2.19.0
Port: 9090/TCP
Host Port: 0/TCP
Args:
--storage.tsdb.retention.time=15d
--config.file=/etc/config/prometheus.yml
--storage.tsdb.path=/data
--web.console.libraries=/etc/prometheus/console_libraries
--web.console.templates=/etc/prometheus/consoles
--web.enable-lifecycle
Liveness: http-get http://:9090/-/healthy delay=30s timeout=30s period=15s #success=1 #failure=3
Readiness: http-get http://:9090/-/ready delay=30s timeout=30s period=5s #success=1 #failure=3
Environment: <none>
Mounts:
/data from storage-volume (rw)
/etc/config from config-volume (rw)
Volumes:
config-volume:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: prometheus-server
Optional: false
storage-volume:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: prometheus-server
ReadOnly: false
Conditions:
Type Status Reason
---- ------ ------
Progressing True NewReplicaSetAvailable
Available True MinimumReplicasAvailable
OldReplicaSets: prometheus-server-b647558d5 (1/1 replicas created)
NewReplicaSet: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 5m32s deployment-controller Scaled down replica set prometheus-server-b647558d5 to 0
Normal ScalingReplicaSet 5m14s deployment-controller Scaled up replica set prometheus-server-b647558d5 to 1
The strange part is that, as shown above, Kubernetes itself reports only 1 replica, even when I scale it manually. I have no idea what is going on :/
Solution
Two containers, one pod. The 2/2 in the READY column means 2 of 2 containers in the pod are ready, not 2 pods. You can also see them both listed under Containers: in the describe output. One is Prometheus itself; the other is a sidecar that triggers a config reload when the config file changes, since Prometheus does not do that on its own.
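You can confirm this directly from the pod's spec instead of reading the describe output. A minimal sketch, assuming kubectl access to the same cluster; the pod name below is taken from the output above and will differ in your environment:

```shell
# List the names of the containers defined in the pod's spec.
# A READY value of 2/2 means 2 of 2 containers are ready, not 2 pods.
kubectl get pod prometheus-server-b647558d5-jxgtl -n monitoring \
  -o jsonpath='{.spec.containers[*].name}'
# e.g. prometheus-server-configmap-reload prometheus-server
```

The same jsonpath query works against the Deployment (`.spec.template.spec.containers[*].name`) if you want to see what the pod template defines rather than inspect a running pod.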