How to fix gRPC and gRPC-Web backends not connecting through the Kubernetes nginx ingress
I have set up a gRPC server in AWS EKS behind the nginx-ingress-controller and a Network Load Balancer. Envoy handles the gRPC traffic, so the path looks like this: NLB >> Ingress >> Envoy >> gRPC. The problem is that when we make a request from BloomRPC, the request never reaches Envoy.
What you expected to happen: requests from outside should reach the gRPC service. I need both gRPC and gRPC-Web to work over SSL, and I am looking for the best solution for this.
How to reproduce it (as minimally and precisely as possible): spin up normal gRPC and gRPC-Web services and connect to the gRPC service through Envoy. Below are my Envoy conf and the nginx-ingress-controller setup. I also tried the nginx-ingress-controller:0.30.0 image, since it should help connect HTTP/2 and gRPC via nginx ingress rules.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/use-http2: "enabled"
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
  name: tk-ingress
spec:
  tls:
  - hosts:
    - test.domain.com
    secretName: tls-secret
  rules:
  - host: test.domain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: envoy
            port:
              number: 80
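For completeness, the Service sitting between the Ingress and Envoy maps the Ingress backend port 80 to the Envoy listener port 8803 from the conf below (a sketch; the selector label is assumed, only the name and ports come from my setup):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: envoy            # name referenced by the Ingress backend
spec:
  selector:
    app: envoy           # assumed label on the Envoy pods
  ports:
  - name: grpc
    port: 80             # port the Ingress backend points at
    targetPort: 8803     # Envoy listener port (see listener_0 below)
```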
Envoy conf:
admin:
  access_log_path: /dev/stdout
  address:
    socket_address: { address: 0.0.0.0, port_value: 8801 }
static_resources:
  # https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/config/listener/v3/listener.proto#config-listener-v3-listener
  listeners:
  - name: listener_0
    address:
      socket_address:
        address: 0.0.0.0
        port_value: 8803
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          # https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/extensions/filters/network/http_connection_manager/v3/http_connection_manager.proto#extensions-filters-network-http-connection-manager-v3-httpconnectionmanager
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          access_log:
          # https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/extensions/access_loggers/file/v3/file.proto
          #
          # You can also configure this extension with the qualified
          # name envoy.access_loggers.http_grpc
          # https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/extensions/access_loggers/grpc/v3/als.proto
          - name: envoy.access_loggers.file
            typed_config:
              # https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/extensions/access_loggers/file/v3/file.proto#extensions-access-loggers-file-v3-fileaccesslog
              "@type": type.googleapis.com/envoy.extensions.access_loggers.file.v3.FileAccessLog
              # Console output
              path: /dev/stdout
          route_config:
            name: local_route
            virtual_hosts:
            - name: local_service
              domains:
              - "test.domain.com"
              routes:
              - match:
                  prefix: /
                  grpc: {}
                route:
                  cluster: tkmicro
              cors:
                allow_origin_string_match:
                - prefix: "*"
                allow_methods: GET,PUT,DELETE,POST,OPTIONS
                # custom-header-1 is just an example. The grpc-web
                # repository was missing the grpc-status-details-bin header,
                # which is used in the richer error model.
                # https://grpc.io/docs/guides/error/#richer-error-model
                allow_headers: accept-language,accept-encoding,user-agent,referer,sec-fetch-mode,origin,access-control-request-headers,access-control-request-method,accept,cache-control,pragma,connection,host,name,x-grpc-web,x-user-agent,grpc-timeout,content-type,channel,api-key,lang
                expose_headers: grpc-status-details-bin,grpc-status,grpc-message,authorization
                max_age: "1728000"
          http_filters:
          - name: envoy.filters.http.grpc_web
            # This line is optional, but adds clarity to the configuration.
            typed_config:
              # https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/extensions/filters/http/grpc_web/v3/grpc_web.proto
              "@type": type.googleapis.com/envoy.extensions.filters.http.grpc_web.v3.GrpcWeb
          - name: envoy.filters.http.cors
            typed_config:
              # https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/extensions/filters/http/cors/v3/cors.proto
              "@type": type.googleapis.com/envoy.extensions.filters.http.cors.v3.Cors
          - name: envoy.filters.http.grpc_json_transcoder
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.grpc_json_transcoder.v3.GrpcJsonTranscoder
              proto_descriptor: "/home/ubuntu/envoy/sync.pb"
              ignore_unknown_query_parameters: true
              services:
              - "com.tk.system.sync.Synchronizer"
              print_options:
                add_whitespace: true
                always_print_primitive_fields: true
                always_print_enums_as_ints: true
                preserve_proto_field_names: true
          - name: envoy.filters.http.router
            typed_config:
              # https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/extensions/filters/http/router/v3/router.proto
              "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
      transport_socket:
        name: envoy.transport_sockets.tls
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext
          common_tls_context:
            alpn_protocols: "h2"
  clusters:
  # https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/config/cluster/v3/cluster.proto#config-cluster-v3-cluster
  - name: tkmicro
    type: LOGICAL_DNS
    connect_timeout: 0.25s
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: tkmicro
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: 172.20.120.201
                port_value: 8081
    http2_protocol_options: {}  # Force HTTP/2
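As background on what the grpc_web filter translates between the browser and the backend: both gRPC and gRPC-Web carry each message in the same length-prefixed frame (one flag byte plus a 4-byte big-endian payload length). A minimal sketch in Python, independent of any gRPC library:

```python
import struct

def frame_grpc_message(payload: bytes, compressed: bool = False) -> bytes:
    """Wrap a serialized protobuf message in the gRPC length-prefixed
    frame: 1 flag byte (0 = uncompressed, 1 = compressed) followed by
    a 4-byte big-endian payload length, then the payload itself."""
    flag = 1 if compressed else 0
    return struct.pack(">BI", flag, len(payload)) + payload

def unframe_grpc_message(frame: bytes) -> bytes:
    """Inverse of frame_grpc_message: strip the 5-byte prefix and
    return the payload, checking the declared length."""
    flag, length = struct.unpack(">BI", frame[:5])
    payload = frame[5:5 + length]
    if len(payload) != length:
        raise ValueError("truncated gRPC frame")
    return payload
```

For example, `frame_grpc_message(b"abc")` yields `b"\x00\x00\x00\x00\x03abc"`. The filter's real job on top of this is carrying the frames over HTTP/1.1-friendly content types and moving trailers into the body, which is why plain gRPC clients like BloomRPC still need an end-to-end HTTP/2 path.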
Anything else we need to know?: I get this error from BloomRPC:
"error": "14 UNAVAILABLE: Trying to connect an http1.x server"
Environment: Kubernetes version (kubectl version): GitVersion "v1.21.1". Cloud provider or hardware configuration: AWS EKS.