
Getting the Kubernetes API server health status from Go (with main.go, Dockerfile, deploy.yaml, screenshots, cronjob.yaml, and rbac.yaml)

I have a Go program, and I need to add a call to the Kubernetes API server's health (livez) API to fetch its health status.

https://kubernetes.io/docs/reference/using-api/health-checks/

The program runs on the same cluster as the API server and needs to read the /livez status. I tried to find this API in the client-go library but couldn't find a way to do it...

https://github.com/kubernetes/client-go

Is it possible to do this from a Go program running on the same cluster as the API server?

Solution

Update (final answer)

Added

The OP asked me to update my answer to show configuration for a "fine-tuned" or "specific" service account, without using cluster-admin.

As far as I can tell, by default every pod has permission to read /healthz. For example, the following CronJob works fine without using a ServiceAccount at all:

# cronjob
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: is-healthz-ok-no-svc
spec:
  schedule: "*/5 * * * *" # at every fifth minute
  jobTemplate:
    spec:
      template:
        spec:
######### serviceAccountName: health-reader-sa
          containers:
            - name: is-healthz-ok-no-svc
              image: oze4/is-healthz-ok:latest
          restartPolicy: OnFailure
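If unauthenticated access to the health endpoints is disabled in your cluster (for example, with `--anonymous-auth=false` on the API server), a narrowly-scoped ClusterRole can grant access to just the health endpoints instead of cluster-admin. A sketch (the name `health-reader` is a placeholder of my own), bound to the pod's ServiceAccount via a ClusterRoleBinding in place of a cluster-admin binding:

```yaml
# cluster role granting read-only access to the health endpoints only
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: health-reader
rules:
  - nonResourceURLs: ["/healthz", "/livez", "/readyz"]
    verbs: ["get"]
```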


Original

I went ahead and wrote a proof of concept for this. You can find the full repo here, but the code is below.

main.go

package main

import (
    "errors"
    "fmt"
    "os"

    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/rest"
)

func main() {
    client, err := newInClusterClient()
    if err != nil {
        panic(err.Error())
    }

    path := "/healthz"
    // Note: in newer client-go releases DoRaw takes a context,
    // e.g. DoRaw(context.TODO()).
    content, err := client.Discovery().RESTClient().Get().AbsPath(path).DoRaw()
    if err != nil {
        fmt.Printf("ErrorBadRequest : %s\n", err.Error())
        os.Exit(1)
    }

    contentStr := string(content)
    if contentStr != "ok" {
        fmt.Printf("ErrorNotOk : response != 'ok' : %s\n", contentStr)
        os.Exit(1)
    }

    fmt.Println("Success : ok!")
    os.Exit(0)
}

func newInClusterClient() (*kubernetes.Clientset, error) {
    config, err := rest.InClusterConfig()
    if err != nil {
        return nil, errors.New("Failed loading client config")
    }
    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        return nil, errors.New("Failed getting clientset")
    }
    return clientset, nil
}
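The health endpoints reply with the bare string `ok` when healthy; with the `?verbose` query parameter they instead return a per-check report whose last line reads like `healthz check passed`. A small self-contained sketch of how one might interpret either form (the helper name `isHealthy` is my own, not part of client-go):

```go
package main

import (
	"fmt"
	"strings"
)

// isHealthy interprets the raw body returned by /healthz, /livez, or
// /readyz. A healthy server replies "ok"; with ?verbose it returns a
// per-check report ending in a line like "livez check passed".
func isHealthy(body string) bool {
	trimmed := strings.TrimSpace(body)
	if trimmed == "ok" {
		return true
	}
	return strings.HasSuffix(trimmed, "check passed")
}

func main() {
	fmt.Println(isHealthy("ok"))                                         // plain response
	fmt.Println(isHealthy("[+]ping ok\n[+]etcd ok\nlivez check passed")) // verbose response
}
```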

dockerfile

FROM golang:latest
RUN mkdir /app
ADD . /app
WORKDIR /app
RUN go build -o main .
CMD ["/app/main"]
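The `golang:latest` base image works but ships the whole Go toolchain. A possible alternative, assuming the same main.go, is a multi-stage build that copies only the compiled binary into a small runtime image (a sketch, not the repo's actual Dockerfile):

```dockerfile
# build stage: compile a static binary
FROM golang:latest AS build
WORKDIR /app
COPY . .
RUN CGO_ENABLED=0 go build -o main .

# runtime stage: ship only the binary
FROM alpine:latest
COPY --from=build /app/main /main
CMD ["/main"]
```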

deploy.yaml

(as a CronJob)

# cronjob
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: is-healthz-ok
spec:
  schedule: "*/5 * * * *" # at every fifth minute
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: is-healthz-ok
          containers:
            - name: is-healthz-ok
              image: oze4/is-healthz-ok:latest
          restartPolicy: OnFailure
---
# service account
apiVersion: v1
kind: ServiceAccount
metadata:
  name: is-healthz-ok
  namespace: default
---
# cluster role binding
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: is-healthz-ok
subjects:
  - kind: ServiceAccount
    name: is-healthz-ok
    namespace: default
roleRef:
  kind: ClusterRole
  ##########################################################################
  # Instead of assigning cluster-admin you can create your own ClusterRole #
  # I used cluster-admin because this is a homelab                         #
  ##########################################################################
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
---

Screenshots

Successful CronJob runs (screenshot not reproduced here)


Update 1

The OP was asking how to deploy "in-cluster-client-config", so I'm providing an example deployment (the one I am using).

You can find the repo here.

Example deployment (I'm using a CronJob, but it could be anything):

cronjob.yaml

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: remove-terminating-namespaces-cronjob
spec:
  schedule: "0 */1 * * *" # at minute 0 of each hour aka once per hour
  #successfulJobsHistoryLimit: 0
  #failedJobsHistoryLimit: 0
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: svc-remove-terminating-namespaces
          containers:
          - name: remove-terminating-namespaces
            image: oze4/service.remove-terminating-namespaces:latest
          restartPolicy: OnFailure

rbac.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: svc-remove-terminating-namespaces
  namespace: default
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: crb-namespace-reader-writer
subjects:
- kind: ServiceAccount
  name: svc-remove-terminating-namespaces
  namespace: default
roleRef:
  kind: ClusterRole
  ##########################################################################
  # Instead of assigning cluster-admin you can create your own ClusterRole #
  # I used cluster-admin because this is a homelab                         #
  ##########################################################################
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
---

Original answer

It sounds like what you're looking for is the "in-cluster-client-config" from client-go.

It's important to remember that when using "in-cluster-client-config", the API calls in your Go code use the service account of "that" pod. Just want to make sure you're testing with an account that has permission to read "/livez".

I tested the following code and was able to get the "livez" status.

package main

import (
    "flag"
    "fmt"
    "path/filepath"

    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/rest"
    "k8s.io/client-go/tools/clientcmd"
    "k8s.io/client-go/util/homedir"
)

func main() {
    // I find it easiest to use "out-of-cluster" for testing
    // client, err := newOutOfClusterClient()

    client, err := newInClusterClient()
    if err != nil {
        panic(err.Error())
    }

    livez := "/livez"
    content, _ := client.Discovery().RESTClient().Get().AbsPath(livez).DoRaw()

    fmt.Println(string(content))
}

func newInClusterClient() (*kubernetes.Clientset, error) {
    config, err := rest.InClusterConfig()
    if err != nil {
        return nil, err
    }
    return kubernetes.NewForConfig(config)
}

// I find it easiest to use "out-of-cluster" for testing
func newOutOfClusterClient() (*kubernetes.Clientset, error) {
    var kubeconfig *string
    if home := homedir.HomeDir(); home != "" {
        kubeconfig = flag.String("kubeconfig", filepath.Join(home, ".kube", "config"), "(optional) absolute path to the kubeconfig file")
    } else {
        kubeconfig = flag.String("kubeconfig", "", "absolute path to the kubeconfig file")
    }
    flag.Parse()

    // use the current context in kubeconfig
    config, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
    if err != nil {
        return nil, err
    }

    // create the clientset
    client, err := kubernetes.NewForConfig(config)
    if err != nil {
        return nil, err
    }

    return client, nil
}
