How to migrate Clickhouse's Zookeeper to new instances?

I host ClickHouse (v20.4.3.16) with 2 replicas on Kubernetes, and it uses a 3-replica Zookeeper (v3.5.5), also hosted on the same Kubernetes cluster.

I need to migrate the Zookeeper used by ClickHouse to another installation, still with 3 replicas, but running v3.6.2.

What I tried to do was to copy the znodes to the new Zookeeper using zk-shell and then switch ClickHouse over to the new endpoints. Every insert after that only produces warnings like:

2021.01.13 13:03:36.454415 [ 135 ] {885576c1-832e-4ac6-82d8-45fbf33b7790} <Warning> default.check_in_availability: Tried to add obsolete part 202101_0_0_0 covered by 202101_0_1159_290 (state Committed)

and the new data is never inserted.

I have read everything about data replication and deduplication, and I am sure I am inserting new data; all the tables use a time field (event_time, update_timestamp, and so on), yet it simply does not work.

When I attach ClickHouse back to the old Zookeeper, the problem does not occur when inserting the same data.

Is there something that needs to be done before changing the Zookeeper endpoints? Am I missing something obvious?

You cannot use the zk-shell copy approach, because it does not copy the auto-increment values that are used for part block numbers.
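Those block numbers are allocated from sequence counters kept under each replicated table's block_numbers znode, and a plain znode copy does not carry the counters over, so fresh inserts get numbers that are already covered by existing parts, which is exactly what the warning above shows. As a rough illustration, here is a minimal sketch of how to inspect those nodes through ClickHouse's system.zookeeper table; the table name is taken from the log above, while the /clickhouse/tables/... path is only a placeholder for whatever zookeeper_path your table reports.

```bash
# Minimal sketch: inspect the block-number nodes of the affected table via
# ClickHouse. The /clickhouse/tables/... path below is a placeholder; use the
# zookeeper_path reported by system.replicas for your own table.

# 1. Look up the table's path in ZooKeeper.
clickhouse-client --query "
    SELECT zookeeper_path
    FROM system.replicas
    WHERE database = 'default' AND table = 'check_in_availability'"

# 2. List the per-partition block-number nodes. The child version (cversion)
#    effectively drives the allocated sequence numbers; after a plain copy it
#    starts again from 0, so new inserts receive block numbers that are
#    already covered by existing parts.
clickhouse-client --query "
    SELECT name, cversion, numChildren
    FROM system.zookeeper
    WHERE path = '/clickhouse/tables/01/check_in_availability/block_numbers'"
```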

There is also a simpler way: you can migrate the ZK cluster by adding the new ZK nodes as followers.

Here is a plan for ZK 3.4.9 (no dynamic reconfiguration):
1. Configure the 3 new ZK nodes as a cluster of 6 nodes (3 old + 3 new) and start them. No changes are needed on the 3 old ZK nodes at this point (see the zoo.cfg sketch after this list).
    Note: the new servers would not connect and download a snapshot right away, so I had to start one of them in a cluster of 4 nodes first.
2. Make sure the 3 new ZK nodes have connected to the old ZK cluster as followers (run echo stat | nc localhost 2181 on the 3 new ZK nodes).
3. Confirm that the leader has 5 synced followers (run echo mntr | nc localhost 2181 on the leader and look for zk_synced_followers).
4. Change the zookeeper section in the configs on the CH nodes: remove the 3 old ZK servers and add the 3 new ZK servers (see the config sketch further below).
5. Restart all CH nodes (CH must restart to connect to different ZK servers).
6. Make sure there are no connections from CH to the 3 old ZK nodes (run echo stat | nc localhost 2181 on the 3 old nodes and check their Clients section).
7. Remove the 3 old ZK nodes from zoo.cfg on the 3 new ZK nodes.
8. Stop data loading in CH (this is to minimize errors when CH loses ZK).
9. Restart the 3 new ZK nodes. They should form a cluster of 3 nodes.
10. When CH reconnects to ZK, start data loading again.
11. Turn off the 3 old ZK nodes.
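To make step 1 a little more concrete, below is a hypothetical sketch of the configuration on the 3 new ZK nodes together with the checks from steps 2-3. Hostnames, the zoo.cfg location and the myid values are placeholders for your own setup (on Kubernetes they would normally come from the StatefulSet/ConfigMap), and on ZK 3.5/3.6 the stat and mntr four-letter commands have to be allowed via 4lw.commands.whitelist.

```bash
# Hypothetical zoo.cfg server list for the 3 NEW ZK nodes (step 1): they see a
# 6-node ensemble (3 old + 3 new) and therefore join the old cluster as
# followers. The 3 old nodes keep their existing 3-server configuration.
# Each new node also needs its own myid file (4, 5 and 6 respectively).
cat >> /conf/zoo.cfg <<'EOF'
server.1=zk-old-0:2888:3888
server.2=zk-old-1:2888:3888
server.3=zk-old-2:2888:3888
server.4=zk-new-0:2888:3888
server.5=zk-new-1:2888:3888
server.6=zk-new-2:2888:3888
4lw.commands.whitelist=stat,mntr
EOF

# Step 2: on each new node, it should report "Mode: follower".
echo stat | nc localhost 2181 | grep Mode

# Step 3: on the leader, expect zk_synced_followers to reach 5.
echo mntr | nc localhost 2181 | grep zk_synced_followers
```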

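For steps 4-5, the ClickHouse side of the switch might look roughly like the sketch below: only the 3 new ZK servers are listed in the zookeeper section, and ClickHouse is restarted afterwards. The file name and hostnames are placeholders; with ClickHouse running on Kubernetes this section would typically be edited in the ConfigMap or the clickhouse-operator spec rather than written on the node directly.

```bash
# Hypothetical config.d override listing only the 3 new ZK servers.
# The replace attribute prevents merging with a <zookeeper> section that may
# already exist in the main config.xml.
cat > /etc/clickhouse-server/config.d/zookeeper.xml <<'EOF'
<yandex>
    <zookeeper replace="replace">
        <node><host>zk-new-0</host><port>2181</port></node>
        <node><host>zk-new-1</host><port>2181</port></node>
        <node><host>zk-new-2</host><port>2181</port></node>
    </zookeeper>
</yandex>
EOF

# Step 5: restart ClickHouse so it connects to the new ensemble, then confirm
# the new ZK session works by reading the ZK root through ClickHouse itself.
clickhouse-client --query "SELECT name FROM system.zookeeper WHERE path = '/'"
```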
Source: Altinity KB