In MongoDB 3.0 replication, how do elections happen when a secondary goes down?
The situation: I have set up MongoDB replication on two machines.
- One machine is a server that holds the primary node and the arbiter. This server is the live server and is always on. Its local IP used in replication is 192.168.0.4.
- The second one is a PC on which the secondary node resides; it is on for only a few hours a day. Its local IP used in replication is 192.168.0.5.
My expectation: I want the live server to be the main point of data interaction for my application, regardless of the state of the PC (whether it is reachable or not, since the PC's node is the secondary), so I want to make sure the server's node is always the primary.
Here is the result of rs.config():
liveSet:PRIMARY> rs.config()
{
    "_id" : "liveSet",
    "version" : 2,
    "members" : [
        {
            "_id" : 0,
            "host" : "192.168.0.4:27017",
            "arbiterOnly" : false,
            "buildIndexes" : true,
            "hidden" : false,
            "priority" : 10,
            "tags" : {
            },
            "slaveDelay" : 0,
            "votes" : 1
        },
        {
            "_id" : 1,
            "host" : "192.168.0.5:5051",
            "arbiterOnly" : false,
            "buildIndexes" : true,
            "hidden" : false,
            "priority" : 1,
            "tags" : {
            },
            "slaveDelay" : 0,
            "votes" : 1
        },
        {
            "_id" : 2,
            "host" : "192.168.0.4:5052",
            "arbiterOnly" : true,
            "buildIndexes" : true,
            "hidden" : false,
            "priority" : 1,
            "tags" : {
            },
            "slaveDelay" : 0,
            "votes" : 1
        }
    ],
    "settings" : {
        "chainingAllowed" : true,
        "heartbeatTimeoutSecs" : 10,
        "getLastErrorModes" : {
        },
        "getLastErrorDefaults" : {
            "w" : 1,
            "wtimeout" : 0
        }
    }
}
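For reference, this configuration has three voting members, so a strict majority of two voters must be reachable for a member to be elected or to stay primary. A quick mongo shell sketch that derives this from the config above:

// Count the voting members in the current replica set config and print
// the strict majority needed for an election (floor(voters / 2) + 1).
var cfg = rs.conf();
var voters = cfg.members.filter(function (m) { return m.votes > 0; }).length;
print("voting members: " + voters + ", majority needed: " + (Math.floor(voters / 2) + 1));
// With the config above this prints: voting members: 3, majority needed: 2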
Also, in case it matters, I have set the storage engine to WiredTiger.
What I actually get, and the problem: when I turn the PC off, or kill its mongod process, the node on the server turns into a secondary.
Here is the output on the server, in a shell connected to the primary, at the moment I kill the PC's mongod process:
liveSet:PRIMARY>
2015-11-29T10:46:29.471+0430 I NETWORK Socket recv() errno:10053 An established connection was aborted by the software in your host machine. 127.0.0.1:27017
2015-11-29T10:46:29.473+0430 I NETWORK SocketException: remote: 127.0.0.1:27017 error: 9001 socket exception [RECV_ERROR] server [127.0.0.1:27017]
2015-11-29T10:46:29.475+0430 I NETWORK DBClientCursor::init call() failed
2015-11-29T10:46:29.479+0430 I NETWORK trying reconnect to 127.0.0.1:27017 (127.0.0.1) failed
2015-11-29T10:46:29.481+0430 I NETWORK reconnect 127.0.0.1:27017 (127.0.0.1) ok
liveSet:SECONDARY>
I have two doubts:
- Consider this part of the documentation:
Replica sets use elections to determine which set member will become primary. Elections occur after initiating a replica set, and also any time the primary becomes unavailable.
An election takes place when the primary becomes unavailable (or at initiation, but that part is not relevant to our case), yet the primary was always available, so why did an election happen?
- Consider this part of the same documentation:
If a majority of the replica set is inaccessible or unavailable, the replica set cannot accept writes and all remaining members become read-only.
Considering the 'members become read-only' part, I had two nodes up and one down, so this should not have affected our replication.
Now my question is: how do I keep the node on the server as the primary when the node on the PC is not reachable?
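For what it's worth, member priorities can be changed with rs.reconfig(); a sketch, run against the primary, that would make the PC's member (index 1 in the config above) ineligible to ever become primary:

// Fetch the current config, zero out the PC member's priority, and apply it.
var cfg = rs.conf();
cfg.members[1].priority = 0; // the PC's node, 192.168.0.5:5051
rs.reconfig(cfg);

Note that priority 0 only prevents that member from being elected; it does not by itself keep the primary from stepping down when it cannot see a majority of voters.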
Update:
Here is the output of rs.status(). Thanks to Wan Bachtiar, the behavior is now obvious: the arbiter was not reachable.
liveSet:PRIMARY> rs.status()
{
    "set" : "liveSet",
    "date" : ISODate("2015-11-30T04:33:03.864Z"),
    "myState" : 1,
    "members" : [
        {
            "_id" : 0,
            "name" : "192.168.0.4:27017",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 1807553,
            "optime" : Timestamp(1448796026, 1),
            "optimeDate" : ISODate("2015-11-29T11:20:26Z"),
            "electionTime" : Timestamp(1448857488, 1),
            "electionDate" : ISODate("2015-11-30T04:24:48Z"),
            "configVersion" : 2,
            "self" : true
        },
        {
            "_id" : 1,
            "name" : "192.168.0.5:5051",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 496,
            "optime" : Timestamp(1448796026, 1),
            "optimeDate" : ISODate("2015-11-29T11:20:26Z"),
            "lastHeartbeat" : ISODate("2015-11-30T04:33:03.708Z"),
            "lastHeartbeatRecv" : ISODate("2015-11-30T04:33:02.451Z"),
            "pingMs" : 1,
            "configVersion" : 2
        },
        {
            "_id" : 2,
            "name" : "192.168.0.4:5052",
            "health" : 0,
            "state" : 8,
            "stateStr" : "(not reachable/healthy)",
            "uptime" : 0,
            "lastHeartbeat" : ISODate("2015-11-30T04:33:00.008Z"),
            "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"),
            "configVersion" : -1
        }
    ],
    "ok" : 1
}
liveSet:PRIMARY>
As stated in the documentation, if a majority of the replica set is inaccessible or unavailable, the replica set cannot accept writes and all remaining members become read-only.
In this case the primary has to step down if both the arbiter and the secondary are unreachable. rs.status() should be able to determine the health of the replica set members.
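For example, a small mongo shell loop prints each member's state at a glance (a sketch; the field names match the rs.status() output above):

// Print name, state and health for every member as seen from this node.
rs.status().members.forEach(function (m) {
    print(m.name + " -> " + m.stateStr + " (health: " + m.health + ")");
});

In the output above this shows health: 0 for 192.168.0.4:5052, i.e. the arbiter is down, so once the PC's secondary also disappears the primary can only see 1 of 3 voters and steps down.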
One other thing you should keep an eye on is the size of the primary's oplog. The size of the oplog determines how long a replica set member can be down and still be able to catch up when it comes back online. The bigger the oplog, the longer you can deal with a member being down, as the oplog can hold more operations. If a member does fall too far behind, you must resynchronise it by removing its data files and performing an initial sync.
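A quick way to check this from the mongo shell on the primary:

// Reports the configured oplog size and the time window covered by the
// entries currently in local.oplog.rs.
db.printReplicationInfo()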
See Check the size of the Oplog for more info.
Regards,
Wan.