ActiveMQ Artemis master slave error when backup becomes live

I have a master/slave setup with 1 master and 2 slaves. When I kill the master, one of the slaves tries to become live but fails with the following exception:

2022/03/08 16:13:28.746 | mb | ERROR | 1-156 | o.a.a.a.c.server                         |                                      | AMQ224000: Failure in initialisation: java.lang.IndexOutOfBoundsException: length(32634) exceeds src.readableBytes(32500) where src is: UnpooledHeapByteBuf(ridx: 78, widx: 32578, cap: 32578/32578)
    at io.netty.buffer.AbstractByteBuf.checkReadableBounds(AbstractByteBuf.java:643)
    at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1095)
    at org.apache.activemq.artemis.core.message.impl.CoreMessage.reloadPersistence(CoreMessage.java:1207)
    at org.apache.activemq.artemis.core.message.impl.CoreMessagePersister.decode(CoreMessagePersister.java:85)
    at org.apache.activemq.artemis.core.message.impl.CoreMessagePersister.decode(CoreMessagePersister.java:28)
    at org.apache.activemq.artemis.spi.core.protocol.MessagePersister.decode(MessagePersister.java:120)
    at org.apache.activemq.artemis.core.persistence.impl.journal.AbstractJournalStorageManager.decodeMessage(AbstractJournalStorageManager.java:1336)
    at org.apache.activemq.artemis.core.persistence.impl.journal.AbstractJournalStorageManager.lambda$loadMessageJournal(AbstractJournalStorageManager.java:1035)
    at org.apache.activemq.artemis.utils.collections.SparseArrayLinkedList$SparseArray.clear(SparseArrayLinkedList.java:114)
    at org.apache.activemq.artemis.utils.collections.SparseArrayLinkedList.clearSparseArrayList(SparseArrayLinkedList.java:173)
    at org.apache.activemq.artemis.utils.collections.SparseArrayLinkedList.clear(SparseArrayLinkedList.java:227)
    at org.apache.activemq.artemis.core.persistence.impl.journal.AbstractJournalStorageManager.loadMessageJournal(AbstractJournalStorageManager.java:990)
    at org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl.loadJournals(ActiveMQServerImpl.java:3484)
    at org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl.initialisePart2(ActiveMQServerImpl.java:3149)
    at org.apache.activemq.artemis.core.server.impl.SharedNothingBackupActivation.run(SharedNothingBackupActivation.java:325)
    at org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$ActivationThread.run(ActiveMQServerImpl.java:4170)

I also see a lot of messages like this:

2022/03/08 16:13:28.745 | AMQ224009: Cannot find message 36,887,402,768
2022/03/08 16:13:28.745 | AMQ224009: Cannot find message 36,887,402,768

Master settings:

<ha-policy>
   <replication>
      <master>
         <check-for-live-server>true</check-for-live-server>
      </master>
   </replication>
</ha-policy>
<connectors>
   <connector name="connector-server-0">tcp://172.16.134.51:62616</connector>
   <connector name="connector-server-1">tcp://172.16.134.52:62616</connector>
   <connector name="connector-server-2">tcp://172.16.134.28:62616</connector>
</connectors>
<acceptors>
   <acceptor name="netty-acceptor">tcp://172.16.134.51:62616</acceptor>
   <acceptor name="invm">vm://0</acceptor>
</acceptors>
<cluster-connections>
   <cluster-connection name="my-cluster">
      <connector-ref>connector-server-0</connector-ref>
      <retry-interval>500</retry-interval>
      <use-duplicate-detection>true</use-duplicate-detection>
      <message-load-balancing>ON_DEMAND</message-load-balancing>
      <max-hops>1</max-hops>
      <static-connectors>
         <connector-ref>connector-server-1</connector-ref>
         <connector-ref>connector-server-2</connector-ref>
      </static-connectors>
   </cluster-connection>
</cluster-connections>

Slave 1 settings:

<ha-policy>
   <replication>
      <slave>
         <allow-failback>true</allow-failback>
      </slave>
   </replication>
</ha-policy>
<connectors>
   <connector name="connector-server-0">tcp://172.16.134.51:62616</connector>
   <connector name="connector-server-1">tcp://172.16.134.52:62616</connector>
   <connector name="connector-server-2">tcp://172.16.134.28:62616</connector>
</connectors>
<acceptors>
   <acceptor name="netty-acceptor">tcp://172.16.134.52:62616</acceptor>
   <acceptor name="invm">vm://0</acceptor>
</acceptors>
<cluster-connections>
   <cluster-connection name="cluster">
      <connector-ref>connector-server-1</connector-ref>
      <retry-interval>500</retry-interval>
      <use-duplicate-detection>true</use-duplicate-detection>
      <message-load-balancing>ON_DEMAND</message-load-balancing>
      <max-hops>1</max-hops>
      <static-connectors>
         <connector-ref>connector-server-0</connector-ref>
         <connector-ref>connector-server-2</connector-ref>
      </static-connectors>
   </cluster-connection>
</cluster-connections>

Slave 2 settings:

<ha-policy>
   <replication>
      <slave>
         <allow-failback>true</allow-failback>
      </slave>
   </replication>
</ha-policy>
<connectors>
   <connector name="connector-server-0">tcp://172.16.134.51:62616</connector>
   <connector name="connector-server-1">tcp://172.16.134.52:62616</connector>
   <connector name="connector-server-2">tcp://172.16.134.28:62616</connector>
</connectors>
<acceptors>
   <acceptor name="netty-acceptor">tcp://172.16.134.28:62616</acceptor>
   <acceptor name="invm">vm://0</acceptor>
</acceptors>
<cluster-connections>
  <cluster-connection name="cluster">
      <connector-ref>connector-server-2</connector-ref>
      <retry-interval>500</retry-interval>
      <use-duplicate-detection>true</use-duplicate-detection>
      <message-load-balancing>ON_DEMAND</message-load-balancing>
      <max-hops>1</max-hops>
      <static-connectors>
         <connector-ref>connector-server-0</connector-ref>
         <connector-ref>connector-server-1</connector-ref>
      </static-connectors>
   </cluster-connection>
</cluster-connections>

Can you tell me what is incorrect in my setup? I'm using activemq-artemis version 2.17.0.

I recommend you upgrade to the latest release and try again.

Also, I recommend simplifying your configuration and using just a single live/backup pair. A broker will only replicate its data to one other broker, so the second backup will sit completely idle until either the live broker or the current backup fails.
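
For example, here is a minimal sketch of that change, reusing the connector names and addresses from your own configuration: the live broker would list only its single backup in static-connectors, the backup would list only the live broker, and the third broker would be removed entirely.

Live broker (172.16.134.51):

<cluster-connection name="my-cluster">
   <connector-ref>connector-server-0</connector-ref>
   <retry-interval>500</retry-interval>
   <use-duplicate-detection>true</use-duplicate-detection>
   <message-load-balancing>ON_DEMAND</message-load-balancing>
   <max-hops>1</max-hops>
   <static-connectors>
      <!-- only the single backup broker -->
      <connector-ref>connector-server-1</connector-ref>
   </static-connectors>
</cluster-connection>

Backup broker (172.16.134.52):

<cluster-connection name="cluster">
   <connector-ref>connector-server-1</connector-ref>
   <retry-interval>500</retry-interval>
   <use-duplicate-detection>true</use-duplicate-detection>
   <message-load-balancing>ON_DEMAND</message-load-balancing>
   <max-hops>1</max-hops>
   <static-connectors>
      <!-- only the live broker -->
      <connector-ref>connector-server-0</connector-ref>
   </static-connectors>
</cluster-connection>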

Finally, using a single live/backup pair with the replication ha-policy is very dangerous because of split-brain. I strongly recommend that you either use shared-storage or, once you move to the latest release, configure pluggable quorum voting with ZooKeeper to mitigate the risk of split-brain.
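
For reference, here is a rough sketch of what the pluggable quorum configuration can look like on recent releases (2.18+). The ZooKeeper connect-string below is just a placeholder for your own ensemble, and you should verify the exact element names against the documentation of the version you actually deploy; as far as I recall the ZooKeeper-based manager is the default implementation, so only the connect-string needs to be supplied.

Live (primary) broker:

<ha-policy>
   <replication>
      <primary>
         <manager>
            <properties>
               <!-- placeholder: replace with your ZooKeeper ensemble -->
               <property key="connect-string" value="zk1:2181,zk2:2181,zk3:2181"/>
            </properties>
         </manager>
      </primary>
   </replication>
</ha-policy>

Backup broker:

<ha-policy>
   <replication>
      <backup>
         <manager>
            <properties>
               <property key="connect-string" value="zk1:2181,zk2:2181,zk3:2181"/>
            </properties>
         </manager>
         <allow-failback>true</allow-failback>
      </backup>
   </replication>
</ha-policy>

Alternatively, with shared-storage the ha-policy is simply <shared-store><master/></shared-store> on the live broker and <shared-store><slave/></shared-store> on the backup, with both brokers pointing their journal directories at the same shared filesystem.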