I cannot access kibana on nginx ingress and route53
I have deployed the nginx ingress controller with an internal load balancer and ExternalDNS on my EKS cluster, and I am trying to expose Kibana through a hostname registered in a Route53 private hosted zone (my-hostname.com).
But when I open it in a browser over the VPN, it says the site can't be reached, so I need to know what I am doing wrong.
Here are all the resources:
Ingress controller:
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: default-http-backend
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: default-http-backend
spec:
  selector:
    matchLabels:
      app: default-http-backend
  template:
    metadata:
      labels:
        app: default-http-backend
    spec:
      containers:
      - name: default-http-backend
        image: gcr.io/google_containers/defaultbackend:1.3
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: internal-ingress
  name: internal-ingress-controller
spec:
  replicas: 1
  selector:
    matchLabels:
      app: internal-ingress
  template:
    metadata:
      labels:
        app: internal-ingress
    spec:
      containers:
      - args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
        - --configmap=$(POD_NAMESPACE)/internal-ingress-configuration
        - --tcp-services-configmap=$(POD_NAMESPACE)/internal-tcp-services
        - --udp-services-configmap=$(POD_NAMESPACE)/internal-udp-services
        - --annotations-prefix=nginx.ingress.kubernetes.io
        - --ingress-class=internal-ingress
        - --publish-service=$(POD_NAMESPACE)/internal-ingress
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.11.0
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: internal-ingress-controller
        ports:
        - containerPort: 80
          name: http
          protocol: TCP
        - containerPort: 443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "3600"
    service.beta.kubernetes.io/aws-load-balancer-internal: "0.0.0.0/0"
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'
  labels:
    app: internal-ingress
  name: internal-ingress
spec:
  externalTrafficPolicy: Cluster
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    app: internal-ingress
  sessionAffinity: None
  type: LoadBalancer
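A quick sanity check for this part (my sketch, not from the original post; it assumes kubectl access to the cluster and dig on a host inside the VPC, and uses the Service name from the manifest above): the Service should have been given an internal ELB hostname, and that hostname should resolve from inside the VPC.

  # EXTERNAL-IP should show an "internal-..." ELB DNS name once AWS provisions it
  kubectl get svc internal-ingress

  # Read the hostname back and confirm it resolves (run from inside the VPC):
  dig +short "$(kubectl get svc internal-ingress \
    -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')"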
ExternalDNS:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: external-dns
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: external-dns
rules:
- apiGroups: [""]
  resources: ["services","endpoints","pods"]
  verbs: ["get","watch","list"]
- apiGroups: ["extensions"]
  resources: ["ingresses"]
  verbs: ["get","watch","list"]
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["list","watch"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: external-dns-viewer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: external-dns
subjects:
- kind: ServiceAccount
  name: external-dns
  namespace: default
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  labels:
    app: external-dns-private
  name: external-dns-private
spec:
  replicas: 1
  selector:
    matchLabels:
      app: external-dns-private
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: external-dns-private
    spec:
      serviceAccountName: external-dns
      containers:
      - args:
        - --source=ingress
        - --domain-filter=my-hostname.com
        - --provider=aws
        - --registry=txt
        - --txt-owner-id=dev.k8s.nexus
        - --annotation-filter=kubernetes.io/ingress.class=internal-ingress
        - --aws-zone-type=private
        image: registry.opensource.zalan.do/teapot/external-dns:latest
        name: external-dns-private
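To confirm ExternalDNS actually picked up the Ingress and wrote the record, its logs are the quickest check (my sketch; the deployment name is taken from the manifest above):

  # Look for "CREATE kibana.my-hostname.com"-style change messages,
  # or AWS permission errors if the IAM role cannot touch Route53
  kubectl logs deployment/external-dns-private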
Ingress resource:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: "internal-ingress"
  labels:
    app: app
  name: app-private
spec:
  rules:
  - host: kibana.my-hostname.com
    http:
      paths:
      - backend:
          serviceName: kibana
          servicePort: 5601
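Because the controller is started with --publish-service, it should stamp the internal ELB hostname into this Ingress's status, and that status is what ExternalDNS publishes to Route53. Worth verifying (a sketch, using the resource name above):

  # ADDRESS should show the internal-...elb.amazonaws.com hostname of the LB
  kubectl get ingress app-private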
Kibana service:
apiVersion: v1
kind: Service
metadata:
  name: kibana
spec:
  selector:
    app: kibana
  ports:
  - name: client
    port: 5601
    protocol: TCP
  type: ClusterIP
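To rule out Kibana itself, you can bypass DNS and the load balancer entirely with a port-forward (my sketch, assuming kubectl access):

  # If Kibana loads on http://localhost:5601, the app and Service are fine
  # and the problem is upstream (ingress, load balancer, or DNS)
  kubectl port-forward svc/kibana 5601:5601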
I checked the record sets of my private hosted zone and kibana.my-hostname.com has been added, but I still cannot access it.
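A check that narrows this down (my sketch, not from the original post; dig on the VPN client, a throwaway busybox pod for the in-cluster view):

  # From the VPN client / workstation -- with a private zone this
  # will typically return nothing:
  dig +short kibana.my-hostname.com

  # From inside the cluster, where DNS forwards to the VPC resolver,
  # the record should resolve:
  kubectl run dnstest -it --rm --restart=Never --image=busybox -- \
    nslookup kibana.my-hostname.com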
Route53 only answers queries for a private hosted zone when they come from inside the VPCs associated with it. You cannot resolve the domain from outside those VPCs.
To work around this, either change your zone to public, or use a VPN with Simple AD that forwards requests to your private zone, as described here.
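If the cluster's VPC simply isn't associated with the zone, fixing the association is enough for in-VPC resolution (a sketch; ZONE_ID, the region, and the VPC ID are placeholders, not values from the post):

  # See which VPCs the private zone currently answers for
  aws route53 get-hosted-zone --id ZONE_ID

  # Associate an additional VPC with the private zone
  aws route53 associate-vpc-with-hosted-zone \
    --hosted-zone-id ZONE_ID \
    --vpc VPCRegion=us-east-1,VPCId=vpc-0123456789abcdef0

Note that this only helps hosts inside the VPC; a VPN client still needs its DNS queries forwarded into the VPC, as described above.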