Failover and strong consistency in Couchbase

We have a three-node Couchbase cluster with two replicas and durability level MAJORITY. This means that, before success is acknowledged, the mutation is held on the active node (node A) and replicated to at least one of the two replicas (node B).
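
For reference, here is roughly what such a write looks like from the application side. This is a minimal sketch using the Java SDK 3.x; the connection string, credentials, bucket, and document names are hypothetical:

```java
import com.couchbase.client.core.msg.kv.DurabilityLevel;
import com.couchbase.client.java.Cluster;
import com.couchbase.client.java.Collection;
import com.couchbase.client.java.json.JsonObject;
import com.couchbase.client.java.kv.UpsertOptions;

public class DurableUpsert {
    public static void main(String[] args) {
        // Hypothetical connection details for the three-node cluster.
        Cluster cluster = Cluster.connect("couchbase://node-a.example.com", "app-user", "app-password");
        Collection collection = cluster.bucket("app-bucket").defaultCollection();

        // With durability level MAJORITY on a bucket with two replicas,
        // the SDK only reports success once the mutation is held in memory
        // on a majority of the nodes responsible for the vbucket
        // (the active plus at least one replica).
        collection.upsert("order::1001",
                JsonObject.create().put("status", "paid"),
                UpsertOptions.upsertOptions().durability(DurabilityLevel.MAJORITY));

        cluster.disconnect();
    }
}
```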

In terms of consistency, what happens if node A becomes unavailable and the hard failover process promotes the replica on node C before node A has managed to replicate the mutation to node C?

According to the documentation on Protection Guarantees and Automatic Failover, the write is durable, but will it be immediately available?

@ingenthr answered here:

Assuming the order of events is that the client gets the acknowledgment of the durability and then the hard failover of your node A is triggered, during the failover the cluster manager and the underlying data service will determine whether node B or node C should be promoted to active for that vbucket (a.k.a. partition) so as to satisfy all promised durability. That was actually one of the trickier bits of the implementation.

“Immediately” is pretty much correct. Technically it does take some time to do the promotion of the vbucket, but this should be very short as it’s just metadata checks and state changes and doesn’t involve any data movement. Clients will need to be updated with the new topology as well. How long that takes is a function of the environment and what else is going on, but I’d expect single-digit seconds or even under a second. Assuming you’re using a modern SDK 3.x API client with best-effort retries, it will be mostly transparent to your application, but not entirely transparent since you’re doing a hard failover. Non-idempotent operations, for example, may bubble up as errors.
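
To make that last point concrete: during the short promotion window, an in-flight durable write can fail or time out in a way where the client cannot tell whether it was applied. Below is a sketch of one way an application might handle that with a Java SDK 3.x client; the retry policy, attempt limit, and names are assumptions for illustration, not an official Couchbase pattern:

```java
import java.time.Duration;

import com.couchbase.client.core.error.AmbiguousTimeoutException;
import com.couchbase.client.core.error.DurabilityAmbiguousException;
import com.couchbase.client.core.msg.kv.DurabilityLevel;
import com.couchbase.client.java.Collection;
import com.couchbase.client.java.json.JsonObject;
import com.couchbase.client.java.kv.UpsertOptions;

public class FailoverAwareWrites {

    // Retry a durable upsert a few times; this is only safe because the
    // upsert is idempotent (re-applying it leaves the same document state).
    // A non-idempotent operation (e.g. a counter increment) would instead
    // have to surface the ambiguous outcome to the application.
    static void upsertWithRetry(Collection collection, String id, JsonObject doc) {
        int attempts = 0;
        while (true) {
            try {
                collection.upsert(id, doc,
                        UpsertOptions.upsertOptions()
                                .durability(DurabilityLevel.MAJORITY)
                                .timeout(Duration.ofSeconds(5)));
                return; // durability acknowledged
            } catch (DurabilityAmbiguousException | AmbiguousTimeoutException e) {
                // The mutation may or may not have been applied, e.g. the
                // acknowledgment was lost while node A was being failed over
                // and the new topology had not yet reached the client.
                if (++attempts >= 5) {
                    throw e;
                }
            }
        }
    }
}
```

The key design point is that blind retries are only appropriate for idempotent mutations; anything non-idempotent needs to be verified (for example, by reading the current document) before being re-issued.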