GKE NGINX Ingress controller's Ingress not binding to external IP
I'm trying to add an NGINX Ingress controller to a GKE cluster alongside the existing HAProxy Ingress controller (which has some issues with rewrite rules).

First, I tried exposing the controller's Service as type LoadBalancer. Traffic reached the ingress and the backends, but managed certificates didn't work with it.

So instead I tried forwarding traffic to the GKE cluster IPs through an L7 load balancer (URL map), and created an Ingress object for the ingress controller itself.

The problem is that this Ingress object never seems to bind to the external IP, and requests to the domain return a "default backend - 404" response.
$ kubectl -n ingress-controller get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
haproxy-ingress NodePort 172.16.xxx.xxx <none> 80:31579/TCP,443:31769/TCP 595d
ingress-default-backend ClusterIP 172.16.xxx.xxx <none> 8080/TCP 595d
nginx-ingress-svc NodePort 172.16.xxx.xxx <none> 80:32416/TCP,443:31299/TCP 2d17h
$ kubectl -n ingress-controller get ing
NAME CLASS HOSTS ADDRESS PORTS AGE
haproxy-l7-ing <none> * 34.xxx.xxx.aaa 80 594d
ingress-nginx-ing nginx * 172.xxx.xxx.xxx 80 2d16h
$ gcloud compute addresses list --global --project my-project
NAME ADDRESS/RANGE TYPE PURPOSE NETWORK REGION SUBNET STATUS
my-ext-ip 34.xxx.xxx.aaa EXTERNAL IN_USE
my-test-ext-ip 34.xxx.xxx.bbb EXTERNAL IN_USE
In this situation, I'd expect ingress-nginx-ing to bind to 34.xxx.xxx.bbb (my-test-ext-ip), just like haproxy-l7-ing is bound to 34.xxx.xxx.aaa (my-ext-ip), but it doesn't.

The load balancers:
$ gcloud compute forwarding-rules list --global --project my-project
NAME REGION IP_ADDRESS IP_PROTOCOL TARGET
haproxy-http-fwdrule 34.xxx.xxx.aaa TCP haproxy-http-proxy
haproxy-https-fwdrule 34.xxx.xxx.aaa TCP haproxy-https-proxy
nginx-http-fwdrule 34.xxx.xxx.bbb TCP nginx-http-proxy
nginx-https-fwdrule 34.xxx.xxx.bbb TCP nginx-https-proxy
$ gcloud compute target-http-proxies list --global --project my-project
NAME URL_MAP
haproxy-http-proxy haproxy-http-urlmap
nginx-http-proxy nginx-https-urlmap
$ gcloud compute target-https-proxies list --global --project my-project
NAME SSL_CERTIFICATES URL_MAP
haproxy-https-proxy default-cert,mcrt-xxxxxx-xxxxxx haproxy-https-urlmap
nginx-https-proxy mcrt-xxxxxx-xxxxxx nginx-https-urlmap
$ gcloud compute url-maps list --global --project my-project
NAME DEFAULT_SERVICE
haproxy-https-urlmap backendServices/k8s-be-xxxxxx--xxxxxx
haproxy-http-urlmap
nginx-https-urlmap backendServices/nginx-lb-backendservice
$ gcloud compute backend-services list --global --project my-project
NAME BACKENDS PROTOCOL
k8s-be-xxxxxx--xxxxxx asia-southeast1-a/instanceGroups/k8s-ig--xxxxxx HTTP
nginx-lb-backendservice asia-southeast1-a/instanceGroups/k8s-ig--xxxxxx HTTP
The backend asia-southeast1-a/instanceGroups/k8s-ig--xxxxxx points to the GKE cluster.

The Kubernetes YAML looks like this:
---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx
  namespace: ingress-controller
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  controller: k8s.io/ingress-nginx
---
kind: Service
apiVersion: v1
metadata:
  name: nginx-ingress-svc
  namespace: ingress-controller
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  externalTrafficPolicy: Local
  type: NodePort
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: http
      protocol: TCP
      appProtocol: http
    - name: https
      port: 443
      targetPort: https
      protocol: TCP
      appProtocol: https
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-nginx-ing
  namespace: ingress-controller
  labels:
    app: ingress-nginx
    tier: ingress
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  annotations:
    # kubernetes.io/ingress.allow-http: 'false'
    kubernetes.io/ingress.global-static-ip-name: 'my-test-ext-ip'
    ingress.kubernetes.io/url-map: nginx-https-urlmap
    networking.gke.io/managed-certificates: 'my-managed-cert'
    ingress.gcp.kubernetes.io/pre-shared-cert: 'default-cert'
spec:
  ingressClassName: nginx
  defaultBackend:
    service:
      name: nginx-ingress-svc
      port:
        number: 80
Any idea what I might be missing here?
Thanks!

Update

I adjusted some of the load balancer configuration, creating my own backend service and health check, as follows:
$ gcloud compute backend-services describe nginx-lb-backendservice --global
affinityCookieTtlSec: 0
backends:
- balancingMode: RATE
capacityScaler: 1.0
group: https://www.googleapis.com/compute/v1/projects/my-project/zones/asia-southeast1-a/instanceGroups/k8s-ig--xxxxxx
maxRatePerInstance: 1.0
cdnPolicy:
cacheKeyPolicy:
includeHost: true
includeProtocol: true
includeQueryString: false
cacheMode: USE_ORIGIN_HEADERS
negativeCaching: false
requestCoalescing: true
serveWhileStale: 0
signedUrlCacheMaxAgeSec: '0'
connectionDraining:
drainingTimeoutSec: 0
creationTimestamp: '2022-01-07T00:48:38.900-08:00'
description: '{"kubernetes.io/service-name":"ingress-controller/nginx-ingress-svc","kubernetes.io/service-port":"80"}'
enableCDN: true
fingerprint: ****
healthChecks:
- https://www.googleapis.com/compute/v1/projects/mtb-development-project/global/healthChecks/nginx-lb-backend-healthcheck
id: '7699213954898870409'
kind: compute#backendService
loadBalancingScheme: EXTERNAL
logConfig:
enable: true
sampleRate: 1.0
name: nginx-lb-backendservice
port: 31579
portName: port31579
protocol: HTTP
selfLink: https://www.googleapis.com/compute/v1/projects/my-project/global/backendServices/nginx-lb-backendservice
sessionAffinity: NONE
timeoutSec: 30
Then I added this annotation to the Ingress ingress-nginx-ing:

ingress.kubernetes.io/url-map: nginx-https-urlmap

The backend status is HEALTHY, but somehow ingress-nginx-ing still doesn't bind to the reserved external IP.

Also, unlike the HAProxy Ingress, none of these annotations get attached to it: ingress.kubernetes.io/backends, ingress.kubernetes.io/https-forwarding-rule, ingress.kubernetes.io/https-target-proxy.

Sending HTTP(S) requests to my host, mydomain/whatever (which resolves to 34.xxx.xxx.bbb), still returns the "default backend - 404" response.
Update #2 (Success!)

I tried boredabdel's answer, removing ingressClassName: nginx from ingress-nginx-ing, and it worked.

After deleting the manually created LB objects and adjusting the auto-generated health check based on the new warnings, traffic reaches the API as expected.

(The source of confusion was having both the kubernetes.io/ingress.class annotation and the ingressClassName from the examples.)
Managed certificates only work with an L7 (HTTP) LoadBalancer, not TCP.
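For reference, a Google-managed certificate is declared as its own ManagedCertificate resource, which the Ingress then references via the networking.gke.io/managed-certificates annotation already shown above. A minimal sketch (the domain name is a placeholder, and the namespace is assumed from the objects in the question):

```yaml
# Hypothetical ManagedCertificate backing 'my-managed-cert' referenced
# in the Ingress annotations; the domain is a placeholder.
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: my-managed-cert
  namespace: ingress-controller
spec:
  domains:
    - mydomain.example.com
```

The certificate only provisions once the Ingress is served by the GCE L7 LoadBalancer and the domain resolves to its IP.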
My understanding is that you want to use NGINX as the Ingress controller on GKE, but expose it behind an L7 LoadBalancer so you can use Google-managed certificates?

If so, the problem I see in your YAML is that you are trying to expose the NGINX ingress itself using the nginx IngressClass, which won't work.

What you need to do is expose NGINX using GKE's default IngressClass, called gce. It's the default when you omit the class from the Ingress object. So your setup would roughly look like this:

HTTP LB (via an Ingress with the gce IngressClass) -> nginx Service -> NGINX pods -> app Service -> app pods
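Concretely, the front Ingress from the question would drop ingressClassName: nginx (and any kubernetes.io/ingress.class annotation) so the gce default applies. A rough sketch, reusing only the names from the objects above:

```yaml
# Sketch of the front Ingress with no class set, so GKE's default
# (gce) Ingress controller provisions the HTTP(S) LB for it.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-nginx-ing
  namespace: ingress-controller
  annotations:
    kubernetes.io/ingress.global-static-ip-name: 'my-test-ext-ip'
    networking.gke.io/managed-certificates: 'my-managed-cert'
spec:
  defaultBackend:
    service:
      name: nginx-ingress-svc
      port:
        number: 80
```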
We do have an example here.
However, there are a few things to keep in mind. The NGINX Ingress Controller does almost the same thing as GKE's default Ingress controller: both set up an HTTP(S) LoadBalancer in front of your apps. With this setup you end up with two layers of load balancing, the Google HTTP LB provisioned via the Ingress plus NGINX itself. That means two TCP terminations and potentially increased latency. Just something to keep in mind.