"Unable to announce backup" 在 Openshift 中部署 ActiveMQ Artemis master/slave 对时发出警告

"Unable to announce backup" warning when an ActiveMQ Artemis master/slave pair is deployed in Openshift

I'm trying to deploy an ActiveMQ Artemis cluster in master/slave mode in OpenShift, but I keep getting this warning:

2019-01-09 07:50:40,192 WARN  [org.apache.activemq.artemis.core.server] AMQ222137: Unable to announce backup, retrying: ActiveMQConnectionTimedOutException[errorType=CONNECTION_TIMEDOUT message=AMQ119012: Timed out waiting to receive initial broadcast from cluster]
   at org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.createSessionFactory(ServerLocatorImpl.java:759) [artemis-core-client-2.6.3.jar:2.6.3]
   at org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.connect(ServerLocatorImpl.java:635) [artemis-core-client-2.6.3.jar:2.6.3]
   at org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.connect(ServerLocatorImpl.java:617) [artemis-core-client-2.6.3.jar:2.6.3]
   at org.apache.activemq.artemis.core.server.cluster.BackupManager$BackupConnector.run(BackupManager.java:246) [artemis-server-2.6.3.jar:2.6.3]
   at org.apache.activemq.artemis.utils.actors.OrderedExecutor.doTask(OrderedExecutor.java:42) [artemis-commons-2.6.3.jar:2.6.3]
   at org.apache.activemq.artemis.utils.actors.OrderedExecutor.doTask(OrderedExecutor.java:31) [artemis-commons-2.6.3.jar:2.6.3]
   at org.apache.activemq.artemis.utils.actors.ProcessorBase.executePendingTasks(ProcessorBase.java:66) [artemis-commons-2.6.3.jar:2.6.3]
   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [rt.jar:1.8.0_181]
   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [rt.jar:1.8.0_181]
   at org.apache.activemq.artemis.utils.ActiveMQThreadFactory.run(ActiveMQThreadFactory.java:118) [artemis-commons-2.6.3.jar:2.6.3]

This configuration works without issue when I run both brokers on the same machine. However, when I Dockerize it and deploy one broker per container, it fails.

Here is the configuration of broker 1:

<?xml version="1.0"?>

<configuration xmlns="urn:activemq" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xi="http://www.w3.org/2001/XInclude" xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">
  <core xmlns="urn:activemq:core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:activemq:core ">
    <name>0.0.0.0</name>
    <persistence-enabled>true</persistence-enabled>
    <journal-type>ASYNCIO</journal-type>
    <paging-directory>data/paging</paging-directory>
    <bindings-directory>data/bindings</bindings-directory>
    <journal-directory>data/journal</journal-directory>
    <large-messages-directory>data/large-messages</large-messages-directory>
    <journal-datasync>true</journal-datasync>
    <journal-min-files>2</journal-min-files>
    <journal-pool-files>10</journal-pool-files>
    <journal-file-size>10M</journal-file-size>
    <journal-buffer-timeout>36000</journal-buffer-timeout>
    <journal-max-io>4096</journal-max-io>
    <disk-scan-period>5000</disk-scan-period>
    <max-disk-usage>90</max-disk-usage>
    <critical-analyzer>true</critical-analyzer>
    <critical-analyzer-timeout>120000</critical-analyzer-timeout>
    <critical-analyzer-check-period>60000</critical-analyzer-check-period>
    <critical-analyzer-policy>HALT</critical-analyzer-policy>
    <ha-policy>
     <shared-store>
      <master>
       <failover-on-shutdown>true</failover-on-shutdown>
      </master>
     </shared-store>
    </ha-policy>

    <connectors>
         <connector name="netty-connector">tcp://0.0.0.0:61616</connector>
    </connectors>

    <acceptors>
         <acceptor name="netty-acceptor">tcp://0.0.0.0:61616</acceptor>
    </acceptors>

    <broadcast-groups>
       <broadcast-group name="bg-group1">
          <group-address>${udp-address:231.7.7.7}</group-address>
          <group-port>9876</group-port>
          <broadcast-period>1000</broadcast-period>
          <connector-ref>netty-connector</connector-ref>
       </broadcast-group>
    </broadcast-groups>

    <discovery-groups>
       <discovery-group name="dg-group1">
          <group-address>${udp-address:231.7.7.7}</group-address>
          <group-port>9876</group-port>
          <refresh-timeout>60000</refresh-timeout>
       </discovery-group>
    </discovery-groups>

    <cluster-connections>
       <cluster-connection name="my-cluster">
          <connector-ref>netty-connector</connector-ref>
          <discovery-group-ref discovery-group-name="dg-group1"/>
       </cluster-connection>
    </cluster-connections>

    <security-settings>
      <security-setting match="#">
        <permission type="createNonDurableQueue" roles="amq"/>
        <permission type="deleteNonDurableQueue" roles="amq"/>
        <permission type="createDurableQueue" roles="amq"/>
        <permission type="deleteDurableQueue" roles="amq"/>
        <permission type="createAddress" roles="amq"/>
        <permission type="deleteAddress" roles="amq"/>
        <permission type="consume" roles="amq"/>
        <permission type="browse" roles="amq"/>
        <permission type="send" roles="amq"/>
        <!-- we need this otherwise ./artemis data imp wouldn't work -->
        <permission type="manage" roles="amq"/>
      </security-setting>
    </security-settings>
    <address-settings>
      <!-- if you define auto-create on certain queues, management has to be auto-create -->
      <address-setting match="activemq.management#">
        <dead-letter-address>DLQ</dead-letter-address>
        <expiry-address>ExpiryQueue</expiry-address>
        <redelivery-delay>0</redelivery-delay>
        <!-- with -1 only the global-max-size is in use for limiting -->
        <max-size-bytes>-1</max-size-bytes>
        <message-counter-history-day-limit>10</message-counter-history-day-limit>
        <address-full-policy>PAGE</address-full-policy>
        <auto-create-queues>true</auto-create-queues>
        <auto-create-addresses>true</auto-create-addresses>
        <auto-create-jms-queues>true</auto-create-jms-queues>
        <auto-create-jms-topics>true</auto-create-jms-topics>
      </address-setting>
      <!--default for catch all-->
      <address-setting match="#">
        <dead-letter-address>DLQ</dead-letter-address>
        <expiry-address>ExpiryQueue</expiry-address>
        <redelivery-delay>0</redelivery-delay>
        <!-- with -1 only the global-max-size is in use for limiting -->
        <max-size-bytes>-1</max-size-bytes>
        <message-counter-history-day-limit>10</message-counter-history-day-limit>
        <address-full-policy>PAGE</address-full-policy>
        <auto-create-queues>true</auto-create-queues>
        <auto-create-addresses>true</auto-create-addresses>
        <auto-create-jms-queues>true</auto-create-jms-queues>
        <auto-create-jms-topics>true</auto-create-jms-topics>
      </address-setting>
    </address-settings>
    <addresses>
      <address name="DLQ">
        <anycast>
          <queue name="DLQ"/>
        </anycast>
      </address>
      <address name="ExpiryQueue">
        <anycast>
          <queue name="ExpiryQueue"/>
        </anycast>
      </address>
    </addresses>
  </core>
</configuration>

Here is the configuration of broker 2:

<?xml version="1.0"?>

<configuration xmlns="urn:activemq" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xi="http://www.w3.org/2001/XInclude" xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">
  <core xmlns="urn:activemq:core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:activemq:core ">
    <name>0.0.0.0</name>
    <persistence-enabled>true</persistence-enabled>
    <journal-type>ASYNCIO</journal-type>
    <paging-directory>data/paging</paging-directory>
    <bindings-directory>data/bindings</bindings-directory>
    <journal-directory>data/journal</journal-directory>
    <large-messages-directory>data/large-messages</large-messages-directory>
    <journal-datasync>true</journal-datasync>
    <journal-min-files>2</journal-min-files>
    <journal-pool-files>10</journal-pool-files>
    <journal-file-size>10M</journal-file-size>
    <journal-max-io>4096</journal-max-io>
    <disk-scan-period>5000</disk-scan-period>
    <max-disk-usage>90</max-disk-usage>
    <!-- should the broker detect dead locks and other issues -->
    <critical-analyzer>true</critical-analyzer>
    <critical-analyzer-timeout>120000</critical-analyzer-timeout>
    <critical-analyzer-check-period>60000</critical-analyzer-check-period>
    <critical-analyzer-policy>HALT</critical-analyzer-policy>
    <ha-policy>
     <shared-store>
      <slave>
       <failover-on-shutdown>true</failover-on-shutdown>
      </slave>
     </shared-store>
    </ha-policy>

    <connectors>
         <connector name="netty-connector">tcp://0.0.0.0:61617</connector>
    </connectors>

    <acceptors>
         <acceptor name="netty-acceptor">tcp://0.0.0.0:61617</acceptor>
    </acceptors>

    <broadcast-groups>
       <broadcast-group name="bg-group1">
          <group-address>${udp-address:231.7.7.7}</group-address>
          <group-port>9876</group-port>
          <broadcast-period>1000</broadcast-period>
          <connector-ref>netty-connector</connector-ref>
       </broadcast-group>
    </broadcast-groups>

    <discovery-groups>
       <discovery-group name="dg-group1">
          <group-address>${udp-address:231.7.7.7}</group-address>
          <group-port>9876</group-port>
          <refresh-timeout>60000</refresh-timeout>
       </discovery-group>
    </discovery-groups>

    <cluster-connections>
       <cluster-connection name="my-cluster">
          <connector-ref>netty-connector</connector-ref>
          <discovery-group-ref discovery-group-name="dg-group1"/>
       </cluster-connection>
    </cluster-connections>

    <security-settings>
      <security-setting match="#">
        <permission type="createNonDurableQueue" roles="amq"/>
        <permission type="deleteNonDurableQueue" roles="amq"/>
        <permission type="createDurableQueue" roles="amq"/>
        <permission type="deleteDurableQueue" roles="amq"/>
        <permission type="createAddress" roles="amq"/>
        <permission type="deleteAddress" roles="amq"/>
        <permission type="consume" roles="amq"/>
        <permission type="browse" roles="amq"/>
        <permission type="send" roles="amq"/>
        <!-- we need this otherwise ./artemis data imp wouldn't work -->
        <permission type="manage" roles="amq"/>
      </security-setting>
    </security-settings>
    <address-settings>
      <!-- if you define auto-create on certain queues, management has to be auto-create -->
      <address-setting match="activemq.management#">
        <dead-letter-address>DLQ</dead-letter-address>
        <expiry-address>ExpiryQueue</expiry-address>
        <redelivery-delay>0</redelivery-delay>
        <!-- with -1 only the global-max-size is in use for limiting -->
        <max-size-bytes>-1</max-size-bytes>
        <message-counter-history-day-limit>10</message-counter-history-day-limit>
        <address-full-policy>PAGE</address-full-policy>
        <auto-create-queues>true</auto-create-queues>
        <auto-create-addresses>true</auto-create-addresses>
        <auto-create-jms-queues>true</auto-create-jms-queues>
        <auto-create-jms-topics>true</auto-create-jms-topics>
      </address-setting>
      <!--default for catch all-->
      <address-setting match="#">
        <dead-letter-address>DLQ</dead-letter-address>
        <expiry-address>ExpiryQueue</expiry-address>
        <redelivery-delay>0</redelivery-delay>
        <!-- with -1 only the global-max-size is in use for limiting -->
        <max-size-bytes>-1</max-size-bytes>
        <message-counter-history-day-limit>10</message-counter-history-day-limit>
        <address-full-policy>PAGE</address-full-policy>
        <auto-create-queues>true</auto-create-queues>
        <auto-create-addresses>true</auto-create-addresses>
        <auto-create-jms-queues>true</auto-create-jms-queues>
        <auto-create-jms-topics>true</auto-create-jms-topics>
      </address-setting>
    </address-settings>
    <addresses>
      <address name="DLQ">
        <anycast>
          <queue name="DLQ"/>
        </anycast>
      </address>
      <address name="ExpiryQueue">
        <anycast>
          <queue name="ExpiryQueue"/>
        </anycast>
      </address>
    </addresses>
  </core>
</configuration>

Enabling DEBUG logging shows:

2019-01-10 15:40:37,753 DEBUG [org.apache.activemq.artemis.core.server.cluster.BackupManager] DiscoveryBackupConnector [group=DiscoveryGroupConfiguration{name='dg-group1', refreshTimeout=60000, discoveryInitialWaitTimeout=10000}]:: announcing TransportConfiguration(name=netty-connector, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?port=61616&host=10-130-0-119 to ServerLocatorImpl (identity=backupLocatorFor='ActiveMQServerImpl::serverUUID=0cef6ba4-14ee-11e9-83b3-0a580a820077') [initialConnectors=[], discoveryGroupConfiguration=DiscoveryGroupConfiguration{name='dg-group1', refreshTimeout=60000, discoveryInitialWaitTimeout=10000}]

There are a couple of issues here...

First, your connector configurations are incorrect. On broker 1 you're using:

<connector name="netty-connector">tcp://0.0.0.0:61616</connector>

And on broker 2 you're using:

<connector name="netty-connector">tcp://0.0.0.0:61617</connector>

This connector information is sent by each member of the cluster to tell the other members how to connect back to the node that sent it. In your case, broker 1 is telling broker 2 that it can connect back to broker 1 using tcp://0.0.0.0:61616. Of course, that is not correct, because the meta-address 0.0.0.0 doesn't actually point to broker 1. When broker 2 tries to use that URL it fails, as you've seen.

The reason this works when you run both brokers on the same host is that 0.0.0.0 resolves to the same thing as localhost.

You need to use a valid IP address or hostname in your connector configurations so that the cluster can form properly.
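
For example, a minimal sketch for broker 1, assuming a hypothetical hostname "broker1" (such as an OpenShift service name) that broker 2 can actually resolve:

<connectors>
     <!-- "broker1" is a placeholder; use a hostname or IP reachable from the other broker -->
     <connector name="netty-connector">tcp://broker1:61616</connector>
</connectors>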

Second, the DEBUG logging you provided indicates that multicast traffic isn't working between your two Docker instances. I recommend you try a static cluster to rule out multicast problems in your environment. On broker 1 you could use something like this:

<connectors>
     <connector name="netty-connector">tcp://broker1:61616</connector>
     <connector name="broker2-connector">tcp://broker2:61617</connector>
</connectors>

...

<cluster-connections>
   <cluster-connection name="my-cluster">
      <connector-ref>netty-connector</connector-ref>
      <static-connectors>
         <connector-ref>broker2-connector</connector-ref>
      </static-connectors>
   </cluster-connection>
</cluster-connections>

And on broker 2 you could use something like this:

<connectors>
     <connector name="netty-connector">tcp://broker2:61617</connector>
     <connector name="broker1-connector">tcp://broker1:61616</connector>
</connectors>

...

<cluster-connections>
   <cluster-connection name="my-cluster">
      <connector-ref>netty-connector</connector-ref>
      <static-connectors>
         <connector-ref>broker1-connector</connector-ref>
      </static-connectors>
   </cluster-connection>
</cluster-connections>

Of course, you'll need to use the actual IP addresses or hostnames of your nodes in those connectors.
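
If you'd rather not hard-code those values, note that broker.xml supports property substitution; you're already using it for ${udp-address:231.7.7.7}. A sketch under that assumption for broker 1, with hypothetical broker1-host and broker2-host properties passed to the JVM at startup (e.g. via JAVA_ARGS in etc/artemis.profile):

<connectors>
     <!-- hypothetical properties, e.g. started with -Dbroker1-host=10.130.0.119 -Dbroker2-host=10.130.0.120 -->
     <connector name="netty-connector">tcp://${broker1-host:localhost}:61616</connector>
     <connector name="broker2-connector">tcp://${broker2-host:localhost}:61617</connector>
</connectors>

This way a single broker.xml template can be reused across environments, with only the startup properties changing per deployment.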