
Changing Pulumi's timeout when deploying Kubernetes resources

How to change Pulumi's timeout when deploying Kubernetes resources

When I deploy resources to Kubernetes with Pulumi and I make a mistake, Pulumi waits for the Kubernetes resources to become healthy:

     Type                                                                               Name                               Status                  Info
 +   pulumi:pulumi:Stack                                                                aws-load-balancer-controller-dev   **creating failed**     1 error
 +   ├─ jaxxstorm:aws:loadbalancercontroller                                            foo                                created
 +   ├─ kubernetes:yaml:ConfigFile                                                      foo-crd                            created
 +   │  └─ kubernetes:apiextensions.k8s.io/v1beta1:CustomResourceDefinition             targetgroupbindings.elbv2.k8s.aws  created                 1 warning
 +   ├─ kubernetes:core/v1:Namespace                                                    foo-namespace                      created
 +   ├─ kubernetes:core/v1:Service                                                      foo-webhook-service                **creating failed**     1 error
 +   ├─ kubernetes:rbac.authorization.k8s.io/v1:Role                                    foo-role                           created
 +   ├─ pulumi:providers:kubernetes                                                     k8s                                created
 +   ├─ aws:iam:Role                                                                    foo-role                           created
 +   │  └─ aws:iam:Policy                                                               foo-policy                         created
 +   ├─ kubernetes:core/v1:Secret                                                       foo-tls-secret                     created
 +   ├─ kubernetes:rbac.authorization.k8s.io/v1:ClusterRole                             foo-clusterrole                    created
 +   ├─ kubernetes:admissionregistration.k8s.io/v1beta1:ValidatingWebhookConfiguration  foo-validating-webhook             created                 1 warning
 +   ├─ kubernetes:admissionregistration.k8s.io/v1beta1:MutatingWebhookConfiguration    foo-mutating-webhook               created                 1 warning
 +   └─ kubernetes:core/v1:ServiceAccount                                               foo-serviceAccount                 **creating failed**     1 error
 ^C
Diagnostics:
  kubernetes:core/v1:ServiceAccount (foo-serviceAccount):
    error: resource aws-load-balancer-controller/foo-serviceaccount was not successfully created by the Kubernetes API server: ServiceAccount "foo-serviceaccount" is invalid: metadata.labels: Invalid value: "arn:aws:iam::616138583583:role/foo-role-10b9499": a valid label must be an empty string or consist of alphanumeric characters, '-', '_' or '.', and must start and end with an alphanumeric character (e.g. 'MyValue', or 'my_value', or '12345', regex used for validation is '(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])?')

  kubernetes:core/v1:Service (foo-webhook-service):
    error: 2 errors occurred:
        * resource aws-load-balancer-controller/foo-webhook-service-4lpopjpr was successfully created, but the Kubernetes API server reported that it failed to fully initialize or become live: Resource operation was cancelled for "foo-webhook-service-4lpopjpr"
        * Service does not target any Pods. Selected Pods may not be ready, or field '.spec.selector' may not match labels on any Pods
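(As an aside, the ServiceAccount failure above is ordinary Kubernetes label validation: an IAM role ARN contains ':' and '/', which label values forbid. A quick sketch, using the regex quoted in the error message, shows why it is rejected:)

```typescript
// The label-value regex quoted verbatim in the error message above.
// Kubernetes label values may only contain alphanumerics, '-', '_', '.',
// must start and end with an alphanumeric, and are capped at 63 chars.
const labelValueRe = /^(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])?$/;

function isValidLabelValue(value: string): boolean {
  return value.length <= 63 && labelValueRe.test(value);
}

console.log(isValidLabelValue("my_value")); // true
console.log(isValidLabelValue("arn:aws:iam::616138583583:role/foo-role-10b9499")); // false - ':' and '/' not allowed
```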

Is there a way to disable this behaviour, so that I don't have to send a signal to Pulumi to kill it?

Solution

Pulumi has special await logic for Kubernetes resources. You can read more about this here.

Pulumi waits for Kubernetes resources to become "healthy". The definition of "healthy" varies depending on the resource being created, but generally Pulumi waits for the resource to:

  • Exist
  • Report a ready status (if the resource has one)

You can skip this logic by adding an annotation to the resource, like so:

pulumi.com/skipAwait: "true"

You can also change the timeout, i.e. how long Pulumi will wait, with an annotation like the following:

pulumi.com/timeoutSeconds: 600

These annotations can be added to any Kubernetes resource you manage with Pulumi. A Service resource, for example, might look like this (using Pulumi's TypeScript SDK):

const service = new k8s.core.v1.Service(`${name}-service`, {
  metadata: {
    namespace: "my-service",
    annotations: {
      "pulumi.com/timeoutSeconds": "60", // only wait 1 minute before Pulumi times out
      "pulumi.com/skipAwait": "true", // or: don't use the await logic at all
    },
  },
  spec: {
    ports: [{
      port: 443,
      targetPort: 9443,
    }],
    selector: {
      "app.kubernetes.io/name": "my-deployment",
      "app.kubernetes.io/instance": "foo",
    },
  },
});
