mTLS origination for egress traffic with custom mTLS between istio-proxy and egress gateway
Our security department's requirements for egress traffic are very strict: every application inside a pod must go through a proxy with mTLS authentication (an application proxy), using the application's own dedicated certificate. They proposed squid with tunneling to handle the double mTLS (one leg for the proxy and another for the specific destination application server), but that would force the application itself to support SSL. Istio could step in and do the job, but the out-of-the-box ISTIO_MUTUAL mode (between istio-proxy and the egress gateway) is not an option for us.
So I took the example Configure mutual TLS origination for egress traffic and modified it as follows (changes are marked with #- and #+):
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: istio-egressgateway
spec:
  selector:
    istio: egressgateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    hosts:
    - my-nginx.mesh-external.svc.cluster.local
    tls:
      #mode: ISTIO_MUTUAL #-
      mode: MUTUAL #+
      credentialName: egress-gateway-credential #+
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: egressgateway-for-nginx
spec:
  host: istio-egressgateway.istio-system.svc.cluster.local
  subsets:
  - name: nginx
    trafficPolicy:
      loadBalancer:
        simple: ROUND_ROBIN
      portLevelSettings:
      - port:
          number: 443
        tls:
          #mode: ISTIO_MUTUAL #-
          mode: MUTUAL #+
          credentialName: egress-app-credential #+
          sni: my-nginx.mesh-external.svc.cluster.local
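For reference, the routing piece of the example (which I did not change) is a VirtualService along these lines; this is reconstructed from the docs example, so treat the exact fields as approximate:
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: direct-nginx-through-egress-gateway
spec:
  hosts:
  - my-nginx.mesh-external.svc.cluster.local
  gateways:
  - istio-egressgateway
  - mesh
  http:
  - match:
    - gateways:
      - mesh
      port: 80
    route:
    - destination:
        host: istio-egressgateway.istio-system.svc.cluster.local
        subset: nginx
        port:
          number: 443
      weight: 100
  - match:
    - gateways:
      - istio-egressgateway
      port: 443
    route:
    - destination:
        host: my-nginx.mesh-external.svc.cluster.local
        port:
          number: 443
      weight: 100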
The secrets were created like this:
kubectl create -n istio-system secret generic egress-app-credential \
--from-file=tls.key=client.app.key \
--from-file=tls.crt=client.app.crt \
--from-file=ca.crt=some-root.crt
kubectl create -n istio-system secret generic egress-gateway-credential \
--from-file=tls.key=egress.key \
--from-file=tls.crt=egress.crt \
--from-file=ca.crt=some-root.crt
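To double-check that both secrets carry the expected keys (tls.key, tls.crt, ca.crt) before pointing Istio at them, a quick inspection like the following should be enough (plain kubectl, nothing Istio-specific):
kubectl describe secret egress-gateway-credential -n istio-system
kubectl describe secret egress-app-credential -n istio-system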
I believe this is logically correct, but apparently it is not, because I get the following error:
kubectl exec "$(kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name})" -c sleep -- curl -vsS http://my-nginx.mesh-external.svc.cluster.local
* Trying 10.98.10.231:80...
* Connected to my-nginx.mesh-external.svc.cluster.local (10.98.10.231) port 80 (#0)
> GET / HTTP/1.1
> Host: my-nginx.mesh-external.svc.cluster.local
> User-Agent: curl/7.77.0-DEV
> Accept: */*
>
upstream connect error or disconnect/reset before headers. reset reason: connection termination* Mark bundle as not supporting multiuse
< HTTP/1.1 503 Service Unavailable
< content-length: 95
< content-type: text/plain
< date: Mon, 07 Jun 2021 11:01:08 GMT
< server: envoy
<
{ [95 bytes data]
* Connection #0 to host my-nginx.mesh-external.svc.cluster.local left intact
Additional information (istio-egressgateway logs for the requests above):
- ISTIO_MUTUAL (the original example, stock Istio setup)
Client pod logs:
istio-proxy [2021-06-08T09:18:02.777Z] "GET / HTTP/1.1" 200 - via_upstream - "-" 0 612 2 1 "-" "curl/7.77.0-DEV" "148be8db-5675-40eb-a246-26f51a5c73d2" "my-nginx.mesh-external.svc.cluster.local" "172.17.0.7:8443" outbound|443|nginx|istio-egressgateway.istio-system.svc.cluster.local 172.17.0.5:37858 10.111.175.215:80 172.17.0.5:50610 - -
Egress pod logs:
[2021-06-07T11:20:52.907Z] "GET / HTTP/1.1" 200 - via_upstream - "-" 0 612 2 1 "172.17.0.5" "curl/7.77.0-DEV" "f163fbb1-8c9d-4960-9814-fc7bf11549ff" "my-nginx.mesh-external.svc.c
- Custom MUTUAL setup (IP 172.17.0.8 is the istio-egress pod):
Client pod logs:
[2021-06-07T12:02:20.626Z] "GET / HTTP/1.1" 503 UC upstream_reset_before_response_started{connection_termination} - "-" 0 95 1 - "-" "curl/7.77.0-DEV" "5fb31226-21fd-4c10-882c-f72bed3483e7" "my-nginx.mesh-external.svc.cluster.local" "172.17.0.8:8443" outbound|443|nginx|istio-egressgateway.istio-system.svc.cluster.local 172.17.0.5:49588 10.98.10.231:80 172.17.0.5:41028 - -
Egress pod logs:
[2021-06-07T11:20:38.018Z] "- - -" 0 NR filter_chain_not_found - "-" 0 0 0 - "-" "-" "-" "-" "-" - - 172.17.0.8:8443 172.17.0.5:44558 - -
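(The filter_chain_not_found on the gateway suggests the TLS ClientHello arriving on 8443 does not match any filter chain of the gateway listener, for example by SNI or ALPN. One way to inspect what the listener actually expects, purely as a debugging aid, is something like:)
istioctl proxy-config listeners "$(kubectl get pod -l istio=egressgateway -n istio-system -o jsonpath={.items..metadata.name})" -n istio-system --port 8443 -o json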
Any help is appreciated, as I have been struggling with this on my own; maybe I made a logical mistake somewhere.
EDIT:
Regarding port number 8443:
istioctl x describe pod istio-egressgateway-79fcc9c54b-bnbzm -n istio-system
Pod: istio-egressgateway-79fcc9c54b-bnbzm
Pod Ports: 8080 (istio-proxy), 8443 (istio-proxy), 15090 (istio-proxy)
Suggestion: add 'version' label to pod for Istio telemetry.
--------------------
Service: istio-egressgateway
Port: http2 80/HTTP2 targets pod port 8080
Port: https 443/HTTPS targets pod port 8443
Tested on:
- 1.10
- 1.9.2
OK, I finally solved it. The key point here is the part of the DestinationRule spec that says:
- credentialName -> NOTE: This field is currently applicable only at gateways. Sidecars will continue to use the certificate paths.
So I modified the following manifests:
The client deployment from sleep.yml (mounting the certificates):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sleep
  # putting it here does not work
  # annotations:
  #   sidecar.istio.io/userVolumeMount: '[{"name":"app-certs", "mountPath":"/etc/istio/egress-app-credential", "readonly":true}]'
  #   sidecar.istio.io/userVolume: '[{"name":"app-certs", "secret":{"secretName":"egress-app-credential"}}]'
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sleep
  template:
    metadata:
      annotations: #+
        sidecar.istio.io/userVolumeMount: '[{"name":"app-certs", "mountPath":"/etc/istio/egress-app-credential", "readonly":true}]' #+
        sidecar.istio.io/userVolume: '[{"name":"app-certs", "secret":{"secretName":"egress-app-credential"}}]' #+
      labels:
        app: sleep
...
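(A quick sanity check that the sidecar container actually received the mount:)
kubectl exec "$(kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name})" -c istio-proxy -- ls /etc/istio/egress-app-credential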
The egressgateway-for-nginx DestinationRule:
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: egressgateway-for-nginx
spec:
  host: istio-egressgateway.istio-system.svc.cluster.local
  subsets:
  - name: nginx
    trafficPolicy:
      loadBalancer:
        simple: ROUND_ROBIN
      portLevelSettings:
      - port:
          number: 443
        tls:
          # mode: ISTIO_MUTUAL #-
          mode: MUTUAL #+
          clientCertificate: /etc/istio/egress-app-credential/tls.crt #+
          privateKey: /etc/istio/egress-app-credential/tls.key #+
          caCertificates: /etc/istio/egress-app-credential/ca.crt #+
          sni: my-nginx.mesh-external.svc.cluster.local
Now all the certificates are correctly deployed on my client pod:
istioctl proxy-config secret "$(kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name})"
RESOURCE NAME TYPE STATUS VALID CERT SERIAL NUMBER NOT AFTER NOT BEFORE
file-cert:/etc/istio/egress-app-credential/tls.crt~/etc/istio/egress-app-credential/tls.key Cert Chain ACTIVE true 1 2022-05-06T09:19:24Z 2021-05-06T09:19:24Z
default Cert Chain ACTIVE true 200416862686144849012679224886550934182 2021-06-10T07:41:17Z 2021-06-09T07:41:17Z
file-root:/etc/istio/egress-app-credential/ca.crt CA ACTIVE true 422042020503057064387036627903001284930102376872 2022-05-06T08:07:57Z 2021-05-06T08:07:57Z
ROOTCA CA ACTIVE true 11126135119553711053963756442081214010 2031-06-06T07:45:55Z 2021-06-08T07:45:55Z
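In addition, the egress-gateway cluster on the sidecar can be dumped to confirm the DestinationRule's file-based TLS settings were applied (the output layout varies between Istio versions, so take this only as a pointer):
istioctl proxy-config cluster "$(kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name})" --fqdn istio-egressgateway.istio-system.svc.cluster.local -o json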
Testing it with
kubectl exec "$(kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name})" -c sleep -- curl -sS http://my-nginx.mesh-external.svc.cluster.local
gives the expected result.