Kubernetes: Replication Controller still there after deletion
I manage a K8s cluster, provisioned with Terraform:
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:17:39Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:05:37Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
I wanted to delete the whole stack, so I removed the code and ran an apply. It threw an error because of a timeout; I retried and the apply succeeded.
But now I am left with 2 ReplicationControllers (both empty):
portal-api 0 0 0 2h
portal-app 0 0 0 2h
There are no Services left, and no horizontal_pod_scheduler either; but my replication_controller resources are still there.
I tried to delete them:
$ kubectl delete rc portal-api
error: timed out waiting for "portal-api" to be synced
The same thing happens if I try to force the deletion:
$ kubectl delete rc portal-api --cascade=false --force=true
$
$ kubectl get rc
[...]
portal-api 0 0 0 2h
portal-app 0 0 0 2h
[...]
I can also still see its configuration (with a deletionTimestamp filled in):
$ kubectl edit rc portal-api
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
kind: ReplicationController
metadata:
  creationTimestamp: 2018-12-05T14:00:15Z
  deletionGracePeriodSeconds: 0
  deletionTimestamp: 2018-12-05T15:22:00Z
  finalizers:
  - orphan
  generation: 3
  labels:
    App: portal-api
  name: portal-api
  namespace: default
  resourceVersion: "32590661"
  selfLink: /api/v1/namespaces/default/replicationcontrollers/portal-api
  uid: 171f605e-f896-11e8-b761-02d4b8553a0e
spec:
  replicas: 0
  selector:
    App: portal-api
  template:
    metadata:
      creationTimestamp: null
      labels:
        App: portal-api
    spec:
      automountServiceAccountToken: false
      containers:
      - env:
        - name: AUTHORITY_MGR
          value: http://system-authority-manager-service
        image: gitlab.********************:4567/apps/portal/api:prd
        imagePullPolicy: Always
        name: portal-api
        ports:
        - containerPort: 3300
          protocol: TCP
        resources:
          limits:
            cpu: "1"
            memory: 512Mi
          requests:
            cpu: 500m
            memory: 256Mi
      terminationGracePeriodSeconds: 30
status:
  replicas: 0
Could someone help me fix this? Any ideas?
Thanks,
Use kubectl edit rc portal-api to remove the finalizers section from the resource:

finalizers:
- orphan
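As an illustration only (not part of the original answer), here is a minimal Python sketch of what that edit amounts to: the finalizers list is removed from the object's metadata, after which the garbage collector is free to finish the deletion. The dict below is a hypothetical, trimmed-down version of the ReplicationController manifest above.

```python
import copy

def strip_finalizers(obj):
    """Return a copy of a Kubernetes object with metadata.finalizers removed,
    mirroring the effect of deleting the finalizers block in kubectl edit."""
    patched = copy.deepcopy(obj)
    patched.get("metadata", {}).pop("finalizers", None)
    return patched

# Hypothetical, trimmed-down version of the stuck ReplicationController.
rc = {
    "apiVersion": "v1",
    "kind": "ReplicationController",
    "metadata": {
        "name": "portal-api",
        "namespace": "default",
        "finalizers": ["orphan"],
    },
}

patched = strip_finalizers(rc)
print("finalizers" in patched["metadata"])  # False: nothing blocks deletion now
```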
This is about Garbage Collection and how to delete objects that once had an owner and no longer do.
When you delete an object, you can specify whether the object’s dependents are also deleted automatically. Deleting dependents automatically is called cascading deletion. There are two modes of cascading deletion: background and foreground.
If you delete an object without deleting its dependents automatically, the dependents are said to be orphaned.
You can read the documentation on Controlling how the garbage collector deletes dependents for how foreground and background cascading deletion work.
Setting the cascading deletion policy

To control the cascading deletion policy, set the propagationPolicy field on the deleteOptions argument when deleting an Object. Possible values include "Orphan", "Foreground", or "Background".

Prior to Kubernetes 1.9, the default garbage collection policy for many controller resources was orphan. This included ReplicationController, ReplicaSet, StatefulSet, DaemonSet, and Deployment. For kinds in the extensions/v1beta1, apps/v1beta1, and apps/v1beta2 group versions, unless you specify otherwise, dependent objects are orphaned by default. In Kubernetes 1.9, for all kinds in the apps/v1 group version, dependent objects are deleted by default.

kubectl also supports cascading deletion. To delete dependents automatically using kubectl, set --cascade to true. To orphan dependents, set --cascade to false. The default value for --cascade is true.

Here's an example that orphans the dependents of a ReplicaSet:
kubectl delete replicaset my-repset --cascade=false
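To make the propagationPolicy mechanism quoted above concrete, here is a small sketch (my own illustration, assuming you are calling the API server directly rather than via kubectl) of the DeleteOptions body you would send with the DELETE request:

```python
import json

VALID_POLICIES = ("Orphan", "Foreground", "Background")

def delete_options(policy):
    """Build the DeleteOptions body that sets the cascading deletion policy."""
    if policy not in VALID_POLICIES:
        raise ValueError("propagationPolicy must be one of %s" % (VALID_POLICIES,))
    return {
        "kind": "DeleteOptions",
        "apiVersion": "v1",
        "propagationPolicy": policy,
    }

# Example body for e.g.
# DELETE /api/v1/namespaces/default/replicationcontrollers/portal-api
print(json.dumps(delete_options("Background")))
```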