Edits to the config-autoscaler ConfigMap automatically revert a few minutes after they are applied
I am trying to tune the autoscaler for a Google Kubernetes Engine cluster that has the cloud-run addon enabled. When I edit the ConfigMap, the API server accepts the change, but a few minutes later the ConfigMap reverts to its original version. Is there any way to tune the autoscaler when the cloud-run cluster addon is in use?
Steps to reproduce:
- Edit the ConfigMap:
kubectl edit cm config-autoscaler -n knative-serving
# ...
configmap/config-autoscaler configured
- Check the result:
kubectl get -n knative-serving configmap config-autoscaler -o json | jq '.data'
#=>
{
"container-concurrency-target-default": "1",
"container-concurrency-target-percentage": "0.5",
"enable-scale-to-zero": "false",
"max-scale-up-rate": "1000",
"panic-threshold-percentage": "200.0",
"panic-window-percentage": "10.0",
"scale-to-zero-grace-period": "90s",
"stable-window": "600s",
"target-burst-capacity": "-1",
"tick-interval": "2s"
}
- Wait a few minutes and check again:
kubectl get -n knative-serving configmap config-autoscaler -o json | jq '.data'
#=>
{
"_example": "################################\n# #\n# EXAMPLE CONFIGURATION #\n# #\n################################\n# This block is not actually functional configuration,\n# but serves to illustrate the available configuration\n# options and document them in a way that is accessible\n# to users that `kubectl edit` this config map.\n#\n# These sample configuration options may be copied out of\n# this example block and unindented to be in the data block\n# to actually change the configuration.\n# The Revision ContainerConcurrency field specifies the maximum number\n# of requests the Container can handle at once. Container concurrency\n# target percentage is how much of that maximum to use in a stable\n# state. E.g. if a Revision specifies ContainerConcurrency of 10, then\n# the Autoscaler will try to maintain 7 concurrent connections per pod\n# on average. A value of 0.7 is chosen because the Autoscaler panics\n# when concurrency exceeds 2x the desired set point. So we will panic\n# before we reach the limit.\ncontainer-concurrency-target-percentage: \"1.0\"\n# The container concurrency target default is what the Autoscaler will\n# try to maintain when the Revision specifies unlimited concurrency.\n# Even when specifying unlimited concurrency, the autoscaler will\n# horizontally scale the application based on this target concurrency.\n#\n# A value of 100 is chosen because it's enough to allow vertical pod\n# autoscaling to tune resource requests. E.g. maintaining 1 concurrent\n# \"hello world\" request doesn't consume enough resources to allow VPA\n# to achieve efficient resource usage (VPA CPU minimum is 300m).\ncontainer-concurrency-target-default: \"100\"\n# When operating in a stable mode, the autoscaler operates on the\n# average concurrency over the stable window.\nstable-window: \"60s\"\n# When observed average concurrency during the panic window reaches\n# panic-threshold-percentage the target concurrency, the autoscaler\n# enters panic mode. When operating in panic mode, the autoscaler\n# scales on the average concurrency over the panic window which is\n# panic-window-percentage of the stable-window.\npanic-window-percentage: \"10.0\"\n# Absolute panic window duration.\n# Deprecated in favor of panic-window-percentage.\n# Existing revisions will continue to scale based on panic-window\n# but new revisions will default to panic-window-percentage.\npanic-window: \"6s\"\n# The percentage of the container concurrency target at which to\n# enter panic mode when reached within the panic window.\npanic-threshold-percentage: \"200.0\"\n# Max scale up rate limits the rate at which the autoscaler will\n# increase pod count. It is the maximum ratio of desired pods versus\n# observed pods.\nmax-scale-up-rate: \"10\"\n# Scale to zero feature flag\nenable-scale-to-zero: \"true\"\n# Tick interval is the time between autoscaling calculations.\ntick-interval: \"2s\"\n# Dynamic parameters (take effect when config map is updated):\n# Scale to zero grace period is the time an inactive revision is left\n# running before it is scaled to zero (min: 30s).\nscale-to-zero-grace-period: \"30s\"\n"
}
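To observe the revert as it happens, one option (a minimal sketch, not part of the original report) is to stream update events for the ConfigMap and print its data keys; the reappearance of the _example key marks the moment the original version is restored:
# Watch the ConfigMap and print the keys in .data on each update event.
kubectl get configmap config-autoscaler -n knative-serving -w -o json | jq --unbuffered '.data | keys'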
Am I missing something, or is it simply not possible to edit the ConfigMap that governs the Knative Serving autoscaler? If not, what are my options?
You cannot make lasting changes to this ConfigMap, because the GKE reconciler will revert anything you change. Any resource labeled addonmanager.kubernetes.io/mode: Reconcile is restored as part of GKE's managed components.
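As a quick confirmation (a minimal sketch, not from the original answer, assuming the standard addon-manager labeling), you can inspect the ConfigMap's labels directly:
# Print the labels on the ConfigMap; a managed addon resource should
# carry addonmanager.kubernetes.io/mode: Reconcile among them.
kubectl get configmap config-autoscaler -n knative-serving -o jsonpath='{.metadata.labels}'
For context, the addon manager distinguishes two modes: Reconcile, where any drift from the shipped manifest is overwritten, and EnsureExists, where the resource is only recreated if it is deleted, so user edits would persist.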