How to debug mTLS communication with istio in kubernetes?

I am new to Istio, so this may be a simple question, but I am confused about a few things. I am using Istio 1.8.0 with Kubernetes 1.19. Sorry for the multiple questions, but I would appreciate it if you could help me clarify the best approach.

  1. After injecting Istio, I assumed I would no longer be able to reach services directly from inside a pod, but as you can see below, I can. Maybe I misunderstood, but is this the expected behaviour? Also, how can I verify that the services talk to each other over mTLS through the Envoy proxies? I am using STRICT mode; should I deploy a PeerAuthentication in the namespace where the microservices run to enforce this?

     kubectl get peerauthentication --all-namespaces
     NAMESPACE      NAME      AGE
     istio-system   default   26h
    
  2. How can I restrict traffic so that, for example, the api-dev service cannot reach auth-dev but can reach backend-dev?

  3. Some microservices need to talk to a database, which also runs in the database namespace. We also have some services that we do not want to inject with Istio but that use the same database. Does the database also need to be deployed in the namespace where we do Istio injection? If so, does that mean I need to deploy another database instance for the remaining services?


    $ kubectl get ns --show-labels
    NAME              STATUS   AGE    LABELS
    database          Active   317d   name=database
    hub-dev           Active   15h    istio-injection=enabled
    dev               Active   318d   name=dev


    capel0068340585:~ semural$ kubectl get pods -n hub-dev
    NAME                                     READY   STATUS    RESTARTS   AGE
    api-dev-5b9cdfc55c-ltgqz                  3/3     Running   0          117m
    auth-dev-54bd586cc9-l8jdn                 3/3     Running   0          13h
    backend-dev-6b86994697-2cxst              2/2     Running   0          120m
    cronjob-dev-7c599bf944-cw8ql              3/3     Running   0          137m
    mp-dev-59cb8d5589-w5mxc                   3/3     Running   0          117m
    ui-dev-5884478c7b-q8lnm                   2/2     Running   0          114m
    redis-hub-master-0                        2/2     Running   0           2m57s
    
    $ kubectl get svc -n hub-dev
    NAME                    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
    api-dev                ClusterIP   xxxxxxxxxxxxx      <none>        80/TCP    13h
    auth-dev               ClusterIP   xxxxxxxxxxxxx      <none>        80/TCP    13h
    backend-dev            ClusterIP   xxxxxxxxxxxxx      <none>        80/TCP    14h
    cronjob-dev            ClusterIP   xxxxxxxxxxxxx      <none>        80/TCP    14h
    mp-dev                 ClusterIP   xxxxxxxxxxxxx      <none>        80/TCP    13h
    ui-dev                 ClusterIP   xxxxxxxxxxxxx      <none>        80/TCP    13h
    redis-hub-master       ClusterIP   xxxxxxxxxxxxx      <none>        6379/TCP  3m47s
    


----------


    $ kubectl exec -ti ui-dev-5884478c7b-q8lnm -n hub-dev sh
    Defaulting container name to oneapihub-ui.
    Use 'kubectl describe pod/ui-dev-5884478c7b-q8lnm -n hub-dev' to see all of the containers in this pod.
    /usr/src/app $ curl -vv  http://hub-backend-dev
    *   Trying 10.254.78.120:80...
    * TCP_NODELAY set
    * Connected to backend-dev (10.254.78.120) port 80 (#0)
    > GET / HTTP/1.1
    > Host: backend-dev
    > User-Agent: curl/7.67.0
    > Accept: */*
    >
    * Mark bundle as not supporting multiuse
    < HTTP/1.1 404 Not Found
    < content-security-policy: default-src 'self'

    <
    <!DOCTYPE html>
    <html lang="en">
    <head>
    <meta charset="utf-8">
    <title>Error</title>
    </head>
    <body>
    <pre>Cannot GET /</pre>
    </body>
    </html>
    * Connection #0 to host oneapihub-backend-dev left intact
    /usr/src/app $
  1. According to the documentation, if you use STRICT mTLS, workloads should only accept encrypted traffic.

Peer Authentication

Peer authentication policies specify the mutual TLS mode Istio enforces on target workloads. The following modes are supported:

  • PERMISSIVE: Workloads accept both mutual TLS and plain text traffic. This mode is most useful during migrations when workloads without sidecar cannot use mutual TLS. Once workloads are migrated with sidecar injection, you should switch the mode to STRICT.
  • STRICT: Workloads only accept mutual TLS traffic.
  • DISABLE: Mutual TLS is disabled. From a security perspective, you shouldn’t use this mode unless you provide your own security solution.

When the mode is unset, the mode of the parent scope is inherited. Mesh-wide peer authentication policies with an unset mode use the PERMISSIVE mode by default.

It is also worth looking here, as banzaicloud describes it very well.


You can enable strict mTLS mode globally, but also per specific namespace or workload.
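
For example, a namespace-scoped policy could look like the sketch below (the namespace name hub-dev is taken from your output; apply it to the namespace where your injected microservices run):

    # Enforce mutual TLS for all workloads in the hub-dev namespace
    apiVersion: security.istio.io/v1beta1
    kind: PeerAuthentication
    metadata:
      name: default
      namespace: hub-dev
    spec:
      mtls:
        mode: STRICT

With this in place, sidecars in hub-dev will reject plain-text traffic, so a pod without a sidecar should no longer be able to call the injected services directly.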


  2. You can do that with an Istio Authorization Policy.

Istio Authorization Policy enables access control on workloads in the mesh.

Here is an example.

The following is another example that sets action to “DENY” to create a deny policy. It denies requests from the “dev” namespace to the “POST” method on all workloads in the “foo” namespace.

    apiVersion: security.istio.io/v1beta1
    kind: AuthorizationPolicy
    metadata:
      name: httpbin
      namespace: foo
    spec:
      action: DENY
      rules:
      - from:
        - source:
            namespaces: ["dev"]
        to:
        - operation:
            methods: ["POST"]
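
Applied to your case, a sketch could look like this (it assumes the auth-dev pods carry the label app: auth-dev and that api-dev runs under a service account named api-dev — adjust both to your actual labels and service accounts). Note that source principals require mTLS to be identified, which fits your STRICT setup:

    # Deny requests from api-dev's identity to the auth-dev workloads;
    # everything else (e.g. api-dev -> backend-dev) stays allowed.
    apiVersion: security.istio.io/v1beta1
    kind: AuthorizationPolicy
    metadata:
      name: deny-api-to-auth
      namespace: hub-dev
    spec:
      selector:
        matchLabels:
          app: auth-dev        # assumed label on the auth-dev pods
      action: DENY
      rules:
      - from:
        - source:
            principals: ["cluster.local/ns/hub-dev/sa/api-dev"]  # assumed service account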

  3. You can set up the database without injection, and then add it to Istio's internal registry with a ServiceEntry object, so that it can communicate with the rest of the Istio services.

ServiceEntry enables adding additional entries into Istio’s internal service registry, so that auto-discovered services in the mesh can access/route to these manually specified services. A service entry describes the properties of a service (DNS name, VIPs, ports, protocols, endpoints). These services could be external to the mesh (e.g., web APIs) or mesh-internal services that are not part of the platform’s service registry (e.g., a set of VMs talking to services in Kubernetes). In addition, the endpoints of a service entry can also be dynamically selected by using the workloadSelector field. These endpoints can be VM workloads declared using the WorkloadEntry object or Kubernetes pods. The ability to select both pods and VMs under a single service allows for migration of services from VMs to Kubernetes without having to change the existing DNS names associated with the services.
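
A minimal sketch for a mesh-internal database might look like the following (the hostname and port are assumptions for illustration — substitute the actual DNS name and port of your database service in the database namespace):

    apiVersion: networking.istio.io/v1beta1
    kind: ServiceEntry
    metadata:
      name: shared-db
      namespace: database
    spec:
      hosts:
      - db.database.svc.cluster.local   # assumed DNS name of the database service
      location: MESH_INTERNAL
      resolution: DNS
      ports:
      - number: 5432                    # assumed port; use your database's actual port
        name: tcp-db
        protocol: TCP

This way the database stays in its own namespace, un-injected services keep using it as before, and you do not need a second database instance.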

There are examples in the Istio documentation:


To answer the main question of how to debug mTLS communication:

The most basic test is to try calling an injected pod from a non-injected pod, for example with curl. There is Istio documentation about that.

You can also use istioctl x describe; more about it here.
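
For example, using a pod name from your output (exact flags may differ slightly between Istio versions):

    # Show, among other things, whether the workload's traffic is subject to mTLS
    istioctl x describe pod ui-dev-5884478c7b-q8lnm -n hub-dev

    # Inspect the certificates Envoy has loaded for this sidecar
    istioctl proxy-config secret ui-dev-5884478c7b-q8lnm -n hub-dev

If the describe output reports that the policy is STRICT and the proxy has a valid workload certificate, traffic between the sidecars is being encrypted.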


I am not sure what is wrong with curl -vv http://hub-backend-dev, but since it returns a 404, I suspect a problem in your Istio configuration, such as a misconfigured VirtualService.