Debugging istio rate limiting handler
I'm trying to apply rate limiting to some of our internal services (inside the mesh).
I used the example from the docs and generated the redis rate-limiting configurations, which include a (redis) handler, a quota instance, a quota spec, a quota spec binding, and a rule that applies the handler.
This is the redis handler:
apiVersion: config.istio.io/v1alpha2
kind: handler
metadata:
  name: redishandler
  namespace: istio-system
spec:
  compiledAdapter: redisquota
  params:
    redisServerUrl: <REDIS>:6379
    connectionPoolSize: 10
    quotas:
    - name: requestcountquota.instance.istio-system
      maxAmount: 10
      validDuration: 100s
      rateLimitAlgorithm: FIXED_WINDOW
      overrides:
      - dimensions:
          destination: s1
        maxAmount: 1
      - dimensions:
          destination: s3
        maxAmount: 1
      - dimensions:
          destination: s2
        maxAmount: 1
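As a sanity check (just a sketch; <REDIS> is the same placeholder host as above and the throwaway pod names are arbitrary), I can confirm that the Redis the handler points at is reachable from inside the cluster and see whether any quota keys ever get written:

# Placeholder host <REDIS>; throwaway pod names are arbitrary.
kubectl run redis-check -it --rm --restart=Never --image=redis:alpine -- \
  redis-cli -h <REDIS> -p 6379 ping
# List whatever keys exist (key naming is internal to the adapter, so this is only a presence check):
kubectl run redis-scan -it --rm --restart=Never --image=redis:alpine -- \
  redis-cli -h <REDIS> -p 6379 --scan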
The quota instance (I'm only interested in limiting by destination at the moment):
apiVersion: config.istio.io/v1alpha2
kind: instance
metadata:
  name: requestcountquota
  namespace: istio-system
spec:
  compiledTemplate: quota
  params:
    dimensions:
      destination: destination.labels["app"] | destination.service.host | "unknown"
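Since the overrides in the handler match on destination: s1/s2/s3 and this instance derives destination from destination.labels["app"] first (falling back to the service host only if the label is missing), the destination pods have to actually carry those app labels. A quick check, assuming the services live in default as in the binding below:

# Confirm the pods behind s1/s2/s3 carry app=s1, app=s2, app=s3 labels.
kubectl -n default get pods --show-labels | grep 'app='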
A quota spec, charging 1 per request if I understand this correctly:
apiVersion: config.istio.io/v1alpha2
kind: QuotaSpec
metadata:
  name: request-count
  namespace: istio-system
spec:
  rules:
  - quotas:
    - charge: 1
      quota: requestcountquota
A quota spec binding that binds all participating services. I also tried service: "*", which also did nothing.
apiVersion: config.istio.io/v1alpha2
kind: QuotaSpecBinding
metadata:
  name: request-count
  namespace: istio-system
spec:
  quotaSpecs:
  - name: request-count
    namespace: istio-system
  services:
  - name: s2
    namespace: default
  - name: s3
    namespace: default
  - name: s1
    namespace: default
  # - service: '*'  # Uncomment this to bind *all* services to request-count
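To confirm the QuotaSpec and QuotaSpecBinding were actually accepted by the API server (the CRD group names below are taken from the apiVersion used in the manifests above):

kubectl -n istio-system get quotaspecs.config.istio.io request-count -o yaml
kubectl -n istio-system get quotaspecbindings.config.istio.io request-count -o yaml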
The rule that applies the handler. Currently on all occasions (I also tried with match clauses, but that didn't change anything either):
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  name: quota
  namespace: istio-system
spec:
  actions:
  - handler: redishandler
    instances:
    - requestcountquota
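Likewise, a check that the handler, instance and rule resources exist and were not rejected (again assuming the config.istio.io/v1alpha2 CRDs from the manifests above):

kubectl -n istio-system get handlers.config.istio.io,instances.config.istio.io,rules.config.istio.io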
The VirtualService definitions are pretty similar for all participants:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: s1
spec:
  hosts:
  - s1
  http:
  - route:
    - destination:
        host: s1
The problem is that nothing really happens and no rate limiting takes place. I tested with curl from pods inside the mesh. The redis instance is empty (no keys on db 0, which I assume is what the rate limiting would use), so I know it can't practically rate-limit anything.
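For reference, this is roughly how I exercise the limit from inside the mesh (a sketch; the app=sleep test pod label is just whatever client pod happens to be available, and s1.default is one of the services above):

# Fire 20 requests and print only the status codes; with maxAmount: 1 per 100s
# I would expect 429s after the first request, but I only ever see 200.
TEST_POD=$(kubectl -n default get pod -l app=sleep -o jsonpath='{.items[0].metadata.name}')
kubectl -n default exec "$TEST_POD" -- sh -c \
  'for i in $(seq 1 20); do curl -s -o /dev/null -w "%{http_code}\n" http://s1.default/; done'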
The handler seems to be configured properly (how can I make sure?) because I had some errors in it that were reported in the mixer (policy). There are still some errors, but none that I associate with this problem or with the configuration. The only line in which the redis handler is mentioned is this:
2019-12-17T13:44:22.958041Z info adapters adapter closed all scheduled daemons and workers {"adapter": "redishandler.istio-system"}
But it's unclear whether that is a problem. I assume it isn't.
These are the rest of the lines from the reload once I deploy:
2019-12-17T13:44:22.601644Z info Built new config.Snapshot: id='43'
2019-12-17T13:44:22.601866Z info adapters getting kubeconfig from: "" {"adapter": "kubernetesenv.istio-system"}
2019-12-17T13:44:22.601881Z warn Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
2019-12-17T13:44:22.602718Z info adapters Waiting for kubernetes cache sync... {"adapter": "kubernetesenv.istio-system"}
2019-12-17T13:44:22.903844Z info adapters Cache sync successful. {"adapter": "kubernetesenv.istio-system"}
2019-12-17T13:44:22.903878Z info adapters getting kubeconfig from: "" {"adapter": "kubernetesenv.istio-system"}
2019-12-17T13:44:22.903882Z warn Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
2019-12-17T13:44:22.904808Z info Setting up event handlers
2019-12-17T13:44:22.904939Z info Starting Secrets controller
2019-12-17T13:44:22.904991Z info Waiting for informer caches to sync
2019-12-17T13:44:22.957893Z info Cleaning up handler table, with config ID:42
2019-12-17T13:44:22.957924Z info adapters deleted remote controller {"adapter": "kubernetesenv.istio-system"}
2019-12-17T13:44:22.957999Z info adapters adapter closed all scheduled daemons and workers {"adapter": "prometheus.istio-system"}
2019-12-17T13:44:22.958041Z info adapters adapter closed all scheduled daemons and workers {"adapter": "redishandler.istio-system"}
2019-12-17T13:44:22.958065Z info adapters shutting down daemon... {"adapter": "kubernetesenv.istio-system"}
2019-12-17T13:44:22.958050Z info adapters shutting down daemon... {"adapter": "kubernetesenv.istio-system"}
2019-12-17T13:44:22.958096Z info adapters shutting down daemon... {"adapter": "kubernetesenv.istio-system"}
2019-12-17T13:44:22.958182Z info adapters shutting down daemon... {"adapter": "kubernetesenv.istio-system"}
2019-12-17T13:44:23.958109Z info adapters adapter closed all scheduled daemons and workers {"adapter": "kubernetesenv.istio-system"}
2019-12-17T13:55:21.042131Z info transport: loopyWriter.run returning. connection error: desc = "transport is closing"
2019-12-17T14:14:00.265722Z info transport: loopyWriter.run returning. connection error: desc = "transport is closing"
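These lines come from the policy-side mixer; to pull them myself and grep for anything the redisquota adapter complains about, I'm using something like this (assuming the demo profile's policy deployment is named istio-policy with a mixer container):

kubectl -n istio-system logs deploy/istio-policy -c mixer --tail=200 | grep -iE 'redis|quota|error'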
I'm using the demo profile with disablePolicyChecks: false to enable rate limiting. This is on istio 1.4.0, deployed on EKS.
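To double-check that the flag actually landed in the mesh config and that the policy component is running at all (assuming the mesh config lives in the istio configmap in istio-system, as in Istio 1.4):

kubectl -n istio-system get cm istio -o yaml | grep -i disablePolicyChecks
kubectl -n istio-system get deploy istio-policy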
I also tried memquota (this is our staging environment) with low limits and nothing seems to work either. I never got a 429 no matter how far I went over the configured rate limit.
I don't know how to debug this and see where the configuration is wrong, causing it to do nothing.
Any help is appreciated.
I too spent hours trying to decipher the documentation and get the sample working.
According to the documentation, they recommend that we enable policy checks:
https://istio.io/docs/tasks/policy-enforcement/rate-limiting/
However, when that did not work, I did an "istioctl profile dump", searched for "policy", and tried a few settings.
I used a Helm install and passed the following, and was then able to get the described behaviour:
--set global.disablePolicyChecks=false \
--set values.pilot.policy.enabled=true \  ===> this made it work, but it is not in the documentation.
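For completeness, here is a sketch of the equivalent istioctl invocation (Istio 1.4 syntax; adjust the profile to whatever you are running), since values.pilot.policy.enabled is the setting that is missing from the docs:

istioctl manifest apply --set profile=demo \
  --set values.global.disablePolicyChecks=false \
  --set values.pilot.policy.enabled=true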