Envoy Pod to Pod communication within a Service in K8
With Envoy configured, is it possible to send an HTTP REST request to another K8s Pod that belongs to the same Service in Kubernetes?
IMPORTANT NOTE: I have another question that directed me to ask this with the Envoy-specific tag.
E.g.
Service name = UserService, 2 Pods (replicas = 2)
Pod 1 --> Pod 2 // using the pod IP, not the load-balanced hostname
Pod 2 --> Pod 1
The connection is a REST GET 1.2.3.4:7079/user/1
The host + port values are taken from kubectl get ep.
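For reference, that lookup looks like the following (a sketch; the sample output shape is illustrative and reuses this question's example IP and port):

kubectl get ep <service-name>
# NAME            ENDPOINTS                    AGE
# <service-name>  1.2.3.4:7079,1.2.3.5:7079    2d

Each entry under ENDPOINTS is a pod IP paired with the Service's targetPort, which is where the 1.2.3.4:7079 address above comes from.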
Both pod IPs work successfully from outside the pods, but when I kubectl exec -it into a pod and issue the request via curl, it returns 404 Not Found for the endpoint.
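To make the repro concrete, this is the shape of the failing call (a sketch; the pod name is a placeholder and the IP is the example one from above):

# From outside the pods (e.g. from the master) this succeeds:
curl http://1.2.3.4:7079/user/1

# From inside the other pod of the same Service, the same call returns 404:
kubectl exec -it <pod-2> -- curl -v http://1.2.3.4:7079/user/1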
Q: I want to know whether it is possible to make a request to another K8s Pod within the same Service?
A: It is absolutely possible.
Q: Why can I successfully ping 1.2.3.4, but cannot hit the REST API?
Q: With Envoy configured, can one Pod make a request directly to another Pod's IP?
Please let me know which config files or outputs are needed to proceed, as I am a beginner with K8s. Thanks.
My configuration files are below.
#values.yml
replicaCount: 1
image:
  repository: "docker.hosted/app"
  tag: "0.1.0"
  pullPolicy: Always
  pullSecret: "a_secret"
service:
  name: http
  type: NodePort
  externalPort: 7079
  internalPort: 7079
ingress:
  enabled: false
deployment.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: {{ template "app.fullname" . }}
  labels:
    app: {{ template "app.name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    metadata:
      labels:
        app: {{ template "app.name" . }}
        release: {{ .Release.Name }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          env:
            - name: MY_POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: MY_POD_PORT
              value: "{{ .Values.service.internalPort }}"
          ports:
            - containerPort: {{ .Values.service.internalPort }}
          livenessProbe:
            httpGet:
              path: /actuator/alive
              port: {{ .Values.service.internalPort }}
            initialDelaySeconds: 60
            periodSeconds: 10
            timeoutSeconds: 1
            successThreshold: 1
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /actuator/ready
              port: {{ .Values.service.internalPort }}
            initialDelaySeconds: 60
            periodSeconds: 10
            timeoutSeconds: 1
            successThreshold: 1
            failureThreshold: 3
          resources:
{{ toYaml .Values.resources | indent 12 }}
    {{- if .Values.nodeSelector }}
      nodeSelector:
{{ toYaml .Values.nodeSelector | indent 8 }}
    {{- end }}
      imagePullSecrets:
        - name: {{ .Values.image.pullSecret }}
service.yml
apiVersion: v1
kind: Service
metadata:
  name: {{ template "app.fullname" . }}
  labels:
    app: {{ template "app.name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.externalPort }}
      targetPort: {{ .Values.service.internalPort }}
      protocol: TCP
      name: {{ .Values.service.name }}
  selector:
    app: {{ template "app.name" . }}
    release: {{ .Release.Name }}
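As a quick sanity check of the two paths (a sketch; it assumes the release name msg-messaging-room from Edit 2 below, the default namespace, and standard cluster DNS):

# Load-balanced route via the Service's cluster DNS name:
curl http://msg-messaging-room.default.svc.cluster.local:7079/user/1

# Direct route to a single pod, using an IP from `kubectl get ep` (the case that 404s here):
curl http://1.2.3.4:7079/user/1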
Executed from the master:
Executed from inside a pod of the same microservice:
EDIT 2:
Output from 'kubectl get -o yaml deployment':
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: 2019-01-29T20:34:36Z
  generation: 1
  labels:
    app: msg-messaging-room
    chart: msg-messaging-room-0.0.22
    heritage: Tiller
    release: msg-messaging-room
  name: msg-messaging-room
  namespace: default
  resourceVersion: "25447023"
  selfLink: /apis/extensions/v1beta1/namespaces/default/deployments/msg-messaging-room
  uid: 4b283304-2405-11e9-abb9-000c29c7d15c
spec:
  progressDeadlineSeconds: 600
  replicas: 2
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: msg-messaging-room
      release: msg-messaging-room
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: msg-messaging-room
        release: msg-messaging-room
    spec:
      containers:
      - env:
        - name: KAFKA_HOST
          value: confluent-kafka-cp-kafka-headless
        - name: KAFKA_PORT
          value: "9092"
        - name: MY_POD_IP
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: status.podIP
        - name: MY_POD_PORT
          value: "7079"
        image: msg-messaging-room:0.0.22
        imagePullPolicy: Always
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /actuator/alive
            port: 7079
            scheme: HTTP
          initialDelaySeconds: 60
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: msg-messaging-room
        ports:
        - containerPort: 7079
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /actuator/ready
            port: 7079
            scheme: HTTP
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      imagePullSecrets:
      - name: secret
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 2
  conditions:
  - lastTransitionTime: 2019-01-29T20:35:43Z
    lastUpdateTime: 2019-01-29T20:35:43Z
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: 2019-01-29T20:34:36Z
    lastUpdateTime: 2019-01-29T20:36:01Z
    message: ReplicaSet "msg-messaging-room-6f49b5df59" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 1
  readyReplicas: 2
  replicas: 2
  updatedReplicas: 2
Output from 'kubectl get -o yaml svc $the_service':
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2019-01-29T20:34:36Z
  labels:
    app: msg-messaging-room
    chart: msg-messaging-room-0.0.22
    heritage: Tiller
    release: msg-messaging-room
  name: msg-messaging-room
  namespace: default
  resourceVersion: "25446807"
  selfLink: /api/v1/namespaces/default/services/msg-messaging-room
  uid: 4b24bd84-2405-11e9-abb9-000c29c7d15c
spec:
  clusterIP: 1.2.3.172.201
  externalTrafficPolicy: Cluster
  ports:
  - name: http
    nodePort: 31849
    port: 7079
    protocol: TCP
    targetPort: 7079
  selector:
    app: msg-messaging-room
    release: msg-messaging-room
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}
What I posted on the other question is that I disabled Istio injection before installing the service, then re-enabled it after installing, and now everything works. The commands that worked for me were:
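The exact commands are not reproduced here; a plausible shape for them (assuming the default namespace and Istio's standard istio-injection namespace label) is:

# Turn sidecar injection off for the namespace, install the chart, then re-enable it:
kubectl label namespace default istio-injection=disabled --overwrite
helm install ...   # install the service as usual
kubectl label namespace default istio-injection=enabled --overwrite

Note that the label only affects pods created after it is set, so pods started while injection was disabled keep running without sidecars until they are recreated, which is presumably why the direct pod-IP calls then worked.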
For the Pod-to-Pod part:
Adding another Service (a headless one) lets you reach another Pod via curl while still keeping Istio enabled. (Most likely the direct pod-IP request failed because the injected Envoy sidecar intercepts outbound TCP and only routes destinations it knows about, while ICMP ping is never intercepted, which would explain why ping succeeded but the REST call did not.)
For example, add:
kind: Service
metadata:
  name: {{ template "app.fullname" . }}-headless
  labels:
    ... [same as other service]
spec:
  clusterIP: None
  ... [same as other service]
As a headless Service, it exposes the Pods themselves as endpoints rather than a clusterIP of its own.
If you don't need load balancing you can use just the headless Service, but if you need both, you can keep the first Service for external traffic and use the headless Service for pod-to-pod communication.
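Putting the pieces together, a full version of that headless Service might look like this (a sketch; the labels, selector, and port block are copied from the chart's service.yml above):

apiVersion: v1
kind: Service
metadata:
  name: {{ template "app.fullname" . }}-headless
  labels:
    app: {{ template "app.name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  clusterIP: None
  ports:
    - port: {{ .Values.service.externalPort }}
      targetPort: {{ .Values.service.internalPort }}
      protocol: TCP
      name: {{ .Values.service.name }}
  selector:
    app: {{ template "app.name" . }}
    release: {{ .Release.Name }}

Because clusterIP is None, cluster DNS resolves the headless name to the individual pod IPs instead of a single virtual IP, so curling that name (or an IP it returns) reaches a specific pod.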