VPN to access cluster services / pods : cannot ping anything except openvpn server

I am trying to set up a VPN to access my cluster's workloads without exposing any public endpoints.

The service is deployed with the OpenVPN helm chart; kubernetes is deployed with Rancher v2.3.2.

What works / what doesn't:

My files:

vars.yml

---
replicaCount: 1
nodeSelector:
  openvpn: "true"
openvpn:
  OVPN_K8S_POD_NETWORK: "10.42.0.0"
  OVPN_K8S_POD_SUBNET: "255.255.0.0"
  OVPN_K8S_SVC_NETWORK: "10.43.0.0"
  OVPN_K8S_SVC_SUBNET: "255.255.0.0"
persistence:
  storageClass: "local-path"
service:
  externalPort: 444
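For reference, deploying the chart with the values above might look like this; the repo URL, chart name, and release name are assumptions, so adjust them to wherever your OpenVPN chart actually lives:

```shell
# Sketch: install the OpenVPN chart using the vars.yml above
# (repo/chart/release names are assumed, not taken from the question)
helm repo add stable https://charts.helm.sh/stable
helm install openvpn stable/openvpn -f vars.yml --namespace openvpn
```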

The connection works, but I cannot reach any IP inside the cluster. The only IP I can reach is the openvpn cluster IP.

openvpn.conf:

server 10.240.0.0 255.255.0.0
verb 3

key /etc/openvpn/certs/pki/private/server.key
ca /etc/openvpn/certs/pki/ca.crt
cert /etc/openvpn/certs/pki/issued/server.crt
dh /etc/openvpn/certs/pki/dh.pem

key-direction 0
keepalive 10 60
persist-key
persist-tun

proto tcp
port  443
dev tun0
status /tmp/openvpn-status.log

user nobody
group nogroup

push "route 10.42.2.11 255.255.255.255"
push "route 10.42.0.0 255.255.0.0"
push "route 10.43.0.0 255.255.0.0"

push "dhcp-option DOMAIN-SEARCH openvpn.svc.cluster.local"
push "dhcp-option DOMAIN-SEARCH svc.cluster.local"
push "dhcp-option DOMAIN-SEARCH cluster.local"

client.ovpn

client
nobind
dev tun

remote xxxx xxx tcp
CERTS CERTS

dhcp-option DOMAIN openvpn.svc.cluster.local
dhcp-option DOMAIN svc.cluster.local
dhcp-option DOMAIN cluster.local
dhcp-option DOMAIN online.net

I really don't know how to debug this.

I am on Windows.
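One place to start debugging is the server side rather than the client: check whether the openvpn pod can actually forward and NAT the tunnelled traffic. A rough sketch, where the deployment name and namespace are placeholders to adapt to your setup:

```shell
# Placeholders: adjust deployment name / namespace to your release
# 1 means the pod can forward packets from tun0 into the pod network; 0 means it cannot
kubectl exec -n openvpn deploy/openvpn -- cat /proc/sys/net/ipv4/ip_forward

# Look for a MASQUERADE rule covering the VPN subnet (10.240.0.0/16 here)
kubectl exec -n openvpn deploy/openvpn -- iptables -t nat -L POSTROUTING -n
```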

Output of the route command on the client:

Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         livebox.home    255.255.255.255 U     0      0        0 eth0
192.168.1.0     0.0.0.0         255.255.255.0   U     256    0        0 eth0
192.168.1.17    0.0.0.0         255.255.255.255 U     256    0        0 eth0
192.168.1.255   0.0.0.0         255.255.255.255 U     256    0        0 eth0
224.0.0.0       0.0.0.0         240.0.0.0       U     256    0        0 eth0
255.255.255.255 0.0.0.0         255.255.255.255 U     256    0        0 eth0
224.0.0.0       0.0.0.0         240.0.0.0       U     256    0        0 eth1
255.255.255.255 0.0.0.0         255.255.255.255 U     256    0        0 eth1
0.0.0.0         10.240.0.5      255.255.255.255 U     0      0        0 eth1
10.42.2.11      10.240.0.5      255.255.255.255 U     0      0        0 eth1
10.42.0.0       10.240.0.5      255.255.0.0     U     0      0        0 eth1
10.43.0.0       10.240.0.5      255.255.0.0     U     0      0        0 eth1
10.240.0.1      10.240.0.5      255.255.255.255 U     0      0        0 eth1
127.0.0.0       0.0.0.0         255.0.0.0       U     256    0        0 lo  
127.0.0.1       0.0.0.0         255.255.255.255 U     256    0        0 lo  
127.255.255.255 0.0.0.0         255.255.255.255 U     256    0        0 lo  
224.0.0.0       0.0.0.0         240.0.0.0       U     256    0        0 lo  
255.255.255.255 0.0.0.0         255.255.255.255 U     256    0        0 lo  

And finally, ifconfig:

        inet 192.168.1.17  netmask 255.255.255.0  broadcast 192.168.1.255
        inet6 2a01:cb00:90c:5300:603c:f8:703e:a876  prefixlen 64  scopeid 0x0<global>
        inet6 2a01:cb00:90c:5300:d84b:668b:85f3:3ba2  prefixlen 128  scopeid 0x0<global>
        inet6 fe80::603c:f8:703e:a876  prefixlen 64  scopeid 0xfd<compat,link,site,host>
        ether 00:d8:61:31:22:32  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.240.0.6  netmask 255.255.255.252  broadcast 10.240.0.7
        inet6 fe80::b9cf:39cc:f60a:9db2  prefixlen 64  scopeid 0xfd<compat,link,site,host>
        ether 00:ff:42:04:53:4d  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 1500
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0xfe<compat,link,site,host>
        loop  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

I don't know if this is the right answer, but I solved the problem by adding a sidecar to the pod that sets net.ipv4.ip_forward=1.

For anyone looking for a working example, this goes into your openvpn deployment, alongside your container definitions:

initContainers:
- args:
  - -w
  - net.ipv4.ip_forward=1
  command:
  - sysctl
  image: busybox
  name: openvpn-sidecar
  securityContext:
    privileged: true
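Because the init container shares the pod's network namespace, the setting should still be visible from the main container once the pod is running. A quick check (deployment name is a placeholder):

```shell
# Should print 1 after the init container has run (deployment name assumed)
kubectl exec deploy/openvpn -- cat /proc/sys/net/ipv4/ip_forward
```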

Alternatively, you can set the ipForwardInitContainer option to "true" in values.yaml.
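Assuming your chart version supports that option (the name is taken from the text above, so verify it against your chart's values), the fragment would be:

```yaml
# Enables the chart's built-in ip-forward init container,
# replacing the hand-written initContainers block above
ipForwardInitContainer: true
```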