GlusterFS volume with replica 3 arbiter 1 mounted in Kubernetes PODs contains zero size files

I'm planning to migrate from replica 3 to replica 3 with arbiter 1, but I've hit a strange problem on my third node, which acts as the arbiter.
When I mount the new volume endpoint on the node where the Gluster arbiter POD is running, I see odd behavior: some files are fine, but others have zero size. When I mount the same share on another node, all files are fine.
GlusterFS runs as a Kubernetes daemonset, and I use heketi to manage GlusterFS automatically from Kubernetes.
I'm running glusterfs 4.1.5 and Kubernetes 1.11.1.
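
For reference, this is roughly how the volume can be mounted manually on a node for testing; the mount point below is just an example, and any of the brick hosts can serve as the volfile server:

mkdir -p /mnt/gluster-test
mount -t glusterfs 192.168.2.70:/vol_3ffdfde93880e8aa39c4b4abddc392cf /mnt/gluster-test
ls -l /mnt/gluster-test   # some files show size 0 on the arbiter node, all fine elsewhere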

gluster volume info vol_3ffdfde93880e8aa39c4b4abddc392cf

Type: Replicate
Volume ID: e67d2ade-991a-40f9-8f26-572d0982850d
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: 192.168.2.70:/var/lib/heketi/mounts/vg_426b28072d8d0a4c27075930ddcdd740/brick_35389ca30d8f631004d292b76d32a03b/brick
Brick2: 192.168.2.96:/var/lib/heketi/mounts/vg_3a9b2f229b1e13c0f639db6564f0d820/brick_953450ef6bc25bfc1deae661ea04e92d/brick
Brick3: 192.168.2.148:/var/lib/heketi/mounts/vg_7d1e57c2a8a779e69d22af42812dffd7/brick_b27af182cb69e108c1652dc85b04e44a/brick (arbiter)
Options Reconfigured:
user.heketi.id: 3ffdfde93880e8aa39c4b4abddc392cf
user.heketi.arbiter: true
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
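
To see whether the zero-size copies exist on the bricks themselves or only through the FUSE mount, the same file can be checked directly on each brick and its replication xattrs inspected (the file name below is a placeholder; on the arbiter brick a size of 0 is expected, since the arbiter stores only metadata):

ls -l /var/lib/heketi/mounts/vg_7d1e57c2a8a779e69d22af42812dffd7/brick_b27af182cb69e108c1652dc85b04e44a/brick/somefile
getfattr -d -m . -e hex /var/lib/heketi/mounts/vg_7d1e57c2a8a779e69d22af42812dffd7/brick_b27af182cb69e108c1652dc85b04e44a/brick/somefile   # non-zero trusted.afr.* values indicate pending heals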

Status output:

gluster volume status vol_3ffdfde93880e8aa39c4b4abddc392cf
Status of volume: vol_3ffdfde93880e8aa39c4b4abddc392cf
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 192.168.2.70:/var/lib/heketi/mounts/v
g_426b28072d8d0a4c27075930ddcdd740/brick_35
389ca30d8f631004d292b76d32a03b/brick        49152     0          Y       13896
Brick 192.168.2.96:/var/lib/heketi/mounts/v
g_3a9b2f229b1e13c0f639db6564f0d820/brick_95
3450ef6bc25bfc1deae661ea04e92d/brick        49152     0          Y       12111
Brick 192.168.2.148:/var/lib/heketi/mounts/
vg_7d1e57c2a8a779e69d22af42812dffd7/brick_b
27af182cb69e108c1652dc85b04e44a/brick       49152     0          Y       25045
Self-heal Daemon on localhost               N/A       N/A        Y       25069
Self-heal Daemon on worker1-aws-va          N/A       N/A        Y       12134
Self-heal Daemon on 192.168.2.70            N/A       N/A        Y       13919

Task Status of Volume vol_3ffdfde93880e8aa39c4b4abddc392cf
------------------------------------------------------------------------------
There are no active volume tasks
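
Since the empty files only show up on the node running the arbiter pod, one thing worth verifying is that the FUSE client on that node can actually reach the brick ports of the two data bricks as well (ports taken from the status output above):

nc -zv 192.168.2.70 49152
nc -zv 192.168.2.96 49152
nc -zv 192.168.2.148 49152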

Heal output:

gluster volume heal vol_3ffdfde93880e8aa39c4b4abddc392cf info
Brick 192.168.2.70:/var/lib/heketi/mounts/vg_426b28072d8d0a4c27075930ddcdd740/brick_35389ca30d8f631004d292b76d32a03b/brick
Status: Connected
Number of entries: 0

Brick 192.168.2.96:/var/lib/heketi/mounts/vg_3a9b2f229b1e13c0f639db6564f0d820/brick_953450ef6bc25bfc1deae661ea04e92d/brick
Status: Connected
Number of entries: 0

Brick 192.168.2.148:/var/lib/heketi/mounts/vg_7d1e57c2a8a779e69d22af42812dffd7/brick_b27af182cb69e108c1652dc85b04e44a/brick
Status: Connected
Number of entries: 0
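
Heal info reports no pending entries; for completeness, split-brain can be checked explicitly and a full heal triggered with the standard commands:

gluster volume heal vol_3ffdfde93880e8aa39c4b4abddc392cf info split-brain
gluster volume heal vol_3ffdfde93880e8aa39c4b4abddc392cf full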

Any ideas how to resolve this issue?

The problem was fixed after updating the glusterfs-client and glusterfs-common packages on the Kubernetes workers to a newer version.
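
For reference, a minimal sketch of that upgrade on a Debian/Ubuntu-based worker, assuming the upstream Gluster 4.1 PPA is available (repository name and exact steps depend on the distribution); the volume has to be remounted, or the consuming pods restarted, afterwards so the new client is actually used:

add-apt-repository ppa:gluster/glusterfs-4.1   # assumption: Ubuntu worker using the upstream Gluster PPA
apt-get update
apt-get install --only-upgrade glusterfs-client glusterfs-common
glusterfs --version   # confirm the client now matches the server version (4.1.x)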