Run ingress nginx as a reverse proxy for Kibana with an App ID OAuth2 provider

I've read many similar questions here and in blog posts online, and I've tried a number of configuration changes, but I can't seem to get this working. I'm using ECK to manage an Elastic and Kibana stack on IBM Cloud IKS (classic).

I want to use App ID as the OAuth2 provider, with ingress nginx handling authentication. I have that part partially working: I get the SSO login prompt and authenticate there successfully, but instead of being redirected to the Kibana application landing page I get the Kibana login page. I'm using Helm to manage the Elastic, Kibana and Ingress resources, so I've templated the resources and pasted the YAML manifests here with some dummy values.

helm template --name-template=es-kibana-ingress es-k-stack -s templates/kibana.yaml --set ingress.enabled=true --set ingress.host="CLUSTER.REGION.containers.appdomain.cloud" --set ingress.secretName="CLUSTER_SECRET" --set app_id.enabled=true --set app_id.instanceName=APPID_INSTANCE_NAME > kibana_template.yaml

apiVersion: kibana.k8s.elastic.co/v1beta1
kind: Kibana
metadata:
  name: es-kibana-ingress-es-k-stack
spec:
  config:
    server.rewriteBasePath: true
    server.basePath: /kibana-es-kibana-ingress
    server.publicBaseUrl: https://CLUSTER.REGION.containers.appdomain.cloud/kibana-es-kibana-ingress
  version: 7.16.3
  count: 1
  elasticsearchRef:
    name: es-kibana-ingress-es-k-stack
  podTemplate:
      spec:
        containers:
        - name: kibana
          readinessProbe:
            httpGet:
              scheme: HTTPS
              path: /kibana-es-kibana-ingress
              port: 5601

helm template --name-template=es-kibana-ingress es-k-stack -s templates/ingress.yaml --set ingress.enabled=true --set ingress.host="CLUSTER.REGION.containers.appdomain.cloud" --set ingress.secretName="CLUSTER_SECRET" --set app_id.enabled=true --set app_id.instanceName=APPID_INSTANCE_NAME > kibana_ingress_template.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: es-kibana-ingress
  namespace: es-kibana-ingress
  annotations:
    kubernetes.io/ingress.class: "public-iks-k8s-nginx"
    kubernetes.io/tls-acme: "true"
    nginx.ingress.kubernetes.io/proxy-ssl-verify: "false"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/auth-signin: https://$host/oauth2-APPID_INSTANCE_NAME/start?rd=$escaped_request_uri
    nginx.ingress.kubernetes.io/auth-url: https://$host/oauth2-APPID_INSTANCE_NAME/auth
    nginx.ingress.kubernetes.io/configuration-snippet: |
      auth_request_set $name_upstream_1 $upstream_cookie__oauth2_APPID_INSTANCE_NAME_1;
      auth_request_set $access_token $upstream_http_x_auth_request_access_token;
      auth_request_set $id_token $upstream_http_authorization;
      access_by_lua_block {
        if ngx.var.name_upstream_1 ~= "" then
          ngx.header["Set-Cookie"] = "_oauth2_APPID_INSTANCE_NAME_1=" .. ngx.var.name_upstream_1 .. ngx.var.auth_cookie:match("(; .*)")
        end
        if ngx.var.id_token ~= "" and ngx.var.access_token ~= "" then
          ngx.req.set_header("Authorization", "Bearer " .. ngx.var.access_token .. " " .. ngx.var.id_token:match("%s*Bearer%s*(.*)"))
        end
      }
    nginx.ingress.kubernetes.io/proxy-buffer-size: 16k
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  tls:
  - hosts:
    - CLUSTER.REGION.containers.appdomain.cloud
    secretName: CLUSTER_SECRET
  rules:
  - host: CLUSTER.REGION.containers.appdomain.cloud
    http:
      paths:
      - backend:
          service:
name: es-kibana-ingress-es-k-stack-kb-http
            port:
              number: 5601
        path: /kibana-es-kibana-ingress
        pathType: ImplementationSpecific
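The configuration-snippet annotation above is the usual oauth2-proxy cookie-splitting workaround. A rough Python translation of its two Lua `match` calls (function names and sample values here are illustrative, not part of the deployment) may make the logic easier to follow:

```python
import re

def rebuild_set_cookie(chunk_value: str, auth_cookie: str) -> str:
    # Mirrors the first Lua branch: oauth2-proxy splits an oversized session
    # cookie into _oauth2_..._0 / _oauth2_..._1 chunks, and the snippet
    # re-emits the second chunk with the attributes ("; Path=/; Secure...")
    # taken from the first cookie.
    m = re.search(r"(; .*)", auth_cookie)  # Lua: auth_cookie:match("(; .*)")
    attrs = m.group(1) if m else ""
    return "_oauth2_APPID_INSTANCE_NAME_1=" + chunk_value + attrs

def build_authorization(access_token: str, upstream_auth_header: str) -> str:
    # Mirrors the second branch: forward "Bearer <access_token> <id_token>",
    # where the id token is stripped out of the upstream Authorization header.
    m = re.search(r"\s*Bearer\s*(.*)", upstream_auth_header)  # Lua: "%s*Bearer%s*(.*)"
    id_token = m.group(1) if m else ""
    return "Bearer " + access_token + " " + id_token

print(rebuild_set_cookie("CHUNK1", "_oauth2_x_0=abc; Path=/; Secure"))
# → _oauth2_APPID_INSTANCE_NAME_1=CHUNK1; Path=/; Secure
print(build_authorization("ACCESS", "Bearer IDTOKEN"))
# → Bearer ACCESS IDTOKEN
```

In other words, the snippet only reassembles the second session-cookie chunk and rewrites the Authorization header; it does not touch what Kibana itself does with the request.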

helm template --name-template=es-kibana-ingress ~/Git/xdr_datalake/helm/xdr-es-k-stack/ -s templates/elasticsearch.yaml --set ingress.enabled=true --set ingress.host="CLUSTER.REGION.containers.appdomain.cloud" --set ingress.secretName="CLUSTER_SECRET" --set app_id.enabled=true --set app_id.instanceName=APPID_INSTANCE_NAME > elastic_template.yaml

apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: es-kibana-ingress-es-k-stack
spec:
  version: 7.16.3
  nodeSets:
  - name: master
    count: 1
    config:
      node.store.allow_mmap: true
      node.roles: ["master"]
      xpack.ml.enabled: true
      reindex.remote.whitelist: [CLUSTER.REGION.containers.appdomain.cloud:443]
      indices.query.bool.max_clause_count: 3000
      xpack:
        license.self_generated.type: basic
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 20Gi
        storageClassName: ibmc-file-retain-gold-custom-terraform
    podTemplate:
      spec:
        affinity:
          podAntiAffinity:
            preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    elasticsearch.k8s.elastic.co/cluster-name: es-kibana-ingress-es-k-stack
                topologyKey: kubernetes.io/hostname
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    elasticsearch.k8s.elastic.co/cluster-name: es-kibana-ingress-es-k-stack
                topologyKey: topology.kubernetes.io/zone
        initContainers:
        - name: sysctl
          securityContext:
            privileged: true
          command: ['sh', '-c', 'sysctl -w vm.max_map_count=262144']
        volumes:
        - name: elasticsearch-data
          emptyDir: {}
        containers:
        - name: elasticsearch
          resources:
            limits:
              cpu: 4
              memory: 6Gi
            requests:
              cpu: 2
              memory: 3Gi
          env:
            - name: NAMESPACE
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.namespace
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.name
            - name: NETWORK_HOST
              value: _site_
            - name: MAX_LOCAL_STORAGE_NODES
              value: "1"
            - name: DISCOVERY_SERVICE
              value: elasticsearch-discovery
            - name: HTTP_CORS_ALLOW_ORIGIN
              value: '*'
            - name: HTTP_CORS_ENABLE
              value: "true"
  - name: data
    count: 1
    config:
      node.roles: ["data", "ingest", "ml", "transform"]
      reindex.remote.whitelist: [CLUSTER.REGION.containers.appdomain.cloud:443]
      indices.query.bool.max_clause_count: 3000
      xpack:
        license.self_generated.type: basic
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 20Gi
        storageClassName: ibmc-file-retain-gold-custom-terraform
    podTemplate:
      spec:
        affinity:
          podAntiAffinity:
            preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    elasticsearch.k8s.elastic.co/cluster-name: es-kibana-ingress-es-k-stack
                topologyKey: kubernetes.io/hostname
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    elasticsearch.k8s.elastic.co/cluster-name: es-kibana-ingress-es-k-stack
                topologyKey: topology.kubernetes.io/zone
        initContainers:
        - name: sysctl
          securityContext:
            privileged: true
          command: ['sh', '-c', 'sysctl -w vm.max_map_count=262144']
        volumes:
        - name: elasticsearch-data
          emptyDir: {}
        containers:
        - name: elasticsearch
          resources:
            limits:
              cpu: 4
              memory: 6Gi
            requests:
              cpu: 2
              memory: 3Gi
          env:
            - name: NAMESPACE
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.namespace
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.name
            - name: NETWORK_HOST
              value: _site_
            - name: MAX_LOCAL_STORAGE_NODES
              value: "1"
            - name: DISCOVERY_SERVICE
              value: elasticsearch-discovery
            - name: HTTP_CORS_ALLOW_ORIGIN
              value: '*'
            - name: HTTP_CORS_ENABLE
              value: "true"

Any pointers would be much appreciated. I'm sure it's something small I'm missing, but I can't find it anywhere online. I suspect I'm missing some token or Authorization header rewrite, but I can't figure it out.

So this was my misunderstanding. The above setup had worked fine on a previous self-managed ELK stack; the difference is that ECK enables security by default. So even when the nginx reverse proxy is set up to provide the SAML integration correctly (as above), you still get the Kibana login page.

To get around this, I set up a file realm for authentication and gave the Kibana admin user a username/password:

helm template --name-template=es-kibana-ingress xdr-es-k-stack -s templates/crd_kibana.yaml --set ingress.enabled=true --set ingress.host="CLUSTER.REGION.containers.appdomain.cloud" --set ingress.secretName="CLUSTER_SECRET" --set app_id.enabled=true --set app_id.instanceName=APPID_INSTANCE_NAME --set kibana.kibanaUser="kibanaUSER" --set kibana.kibanaPass="kibanaPASS"

apiVersion: kibana.k8s.elastic.co/v1beta1
kind: Kibana
metadata:
  name: es-kibana-ingress-xdr-datalake
  namespace: default
spec:
  config:
    server.rewriteBasePath: true
    server.basePath: /kibana-es-kibana-ingress
    server.publicBaseUrl: https://CLUSTER.REGION.containers.appdomain.cloud/kibana-es-kibana-ingress
    server.host: "0.0.0.0"
    server.name: kibana
    xpack.security.authc.providers:
      anonymous.anonymous1:
        order: 0
        credentials:
          username: kibanaUSER
          password: kibanaPASS
  version: 7.16.3
  http:
    tls:
      selfSignedCertificate:
        disabled: true
  count: 1
  elasticsearchRef:
    name: es-kibana-ingress-xdr-datalake
  podTemplate:
      spec:
        containers:
        - name: kibana
          readinessProbe:
            timeoutSeconds: 30
            httpGet:
              scheme: HTTP
              path: /kibana-es-kibana-ingress/app/dev_tools
              port: 5601
          resources:
            limits:
              cpu: 3
              memory: 1Gi
            requests:
              cpu: 3
              memory: 1Gi
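The file realm itself isn't shown in the manifests above. For reference, ECK lets you declare file-realm users on the Elasticsearch resource via `spec.auth.fileRealm`, which points at a Kubernetes Secret containing `users` / `users_roles` entries in Elasticsearch's file-realm format. A sketch (the secret name below is a placeholder, not from my deployment) might look like:

```yaml
# Illustrative only: kibana-filerealm-secret is a placeholder name. The
# secret's "users" key holds "user:bcrypt-hash" lines and "users_roles"
# maps roles to users, in Elasticsearch file-realm format.
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: es-kibana-ingress-xdr-datalake
spec:
  auth:
    fileRealm:
    - secretName: kibana-filerealm-secret
```

The secret can be created with something like `kubectl create secret generic kibana-filerealm-secret --from-file=users --from-file=users_roles`.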

helm template --name-template=es-kibana-ingress xdr-es-k-stack -s templates/crd_elasticsearch.yaml --set ingress.enabled=true --set ingress.host="CLUSTER.REGION.containers.appdomain.cloud" --set ingress.secretName="CLUSTER_SECRET" --set app_id.enabled=true --set app_id.instanceName=APPID_INSTANCE_NAME --set kibana.kibanaUser="kibanaUSER" --set kibana.kibanaPass="kibanaPASS"

You may notice I removed the self-signed certificates; that was due to an issue connecting Kafka to Elasticsearch on the cluster. We have since decided to use Istio for internal network connectivity, but if you don't have that problem you can keep them. I also had to update the Ingress slightly to use this new HTTP backend (previously HTTPS):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: es-kibana-ingress-kibana
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "public-iks-k8s-nginx"
    kubernetes.io/tls-acme: "true"
    nginx.ingress.kubernetes.io/proxy-ssl-verify: "false"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTP"
    nginx.ingress.kubernetes.io/auth-signin: https://$host/oauth2-APPID_INSTANCE_NAME/start?rd=$escaped_request_uri
    nginx.ingress.kubernetes.io/auth-url: https://$host/oauth2-APPID_INSTANCE_NAME/auth
    nginx.ingress.kubernetes.io/configuration-snippet: |
      auth_request_set $name_upstream_1 $upstream_cookie__oauth2_APPID_INSTANCE_NAME_1;
      auth_request_set $access_token $upstream_http_x_auth_request_access_token;
      auth_request_set $id_token $upstream_http_authorization;
      access_by_lua_block {
        if ngx.var.name_upstream_1 ~= "" then
          ngx.header["Set-Cookie"] = "_oauth2_APPID_INSTANCE_NAME_1=" .. ngx.var.name_upstream_1 .. ngx.var.auth_cookie:match("(; .*)")
        end
        if ngx.var.id_token ~= "" and ngx.var.access_token ~= "" then
          ngx.req.set_header("Authorization", "Bearer " .. ngx.var.access_token .. " " .. ngx.var.id_token:match("%s*Bearer%s*(.*)"))
        end
      }
    nginx.ingress.kubernetes.io/proxy-buffer-size: 16k
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  tls:
  - hosts:
    - CLUSTER.REGION.containers.appdomain.cloud
    secretName: CLUSTER_SECRET
  rules:
  - host: CLUSTER.REGION.containers.appdomain.cloud
    http:
      paths:
      - backend:
          service:
            name: es-kibana-ingress-xdr-datalake-kb-http
            port:
              number: 5601
        path: /kibana-es-kibana-ingress
        pathType: ImplementationSpecific

Hopefully this helps someone else down the line.