HA configuration issues, ActiveMQ Artemis

I am trying to configure HA on ActiveMQ Artemis 2.13. I want to start with a simple primary/backup pair. I have read the documentation on clustering and HA several times, but I am still not sure what I am doing. I have also studied the replicated-failback Java example.

On the client side, do I have to specify connection information for both the primary and the backup node? The example confuses me because it looks like the URLs/connection info are passed to the Java program as input parameters, and I am not sure where they come from.
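To illustrate what I mean, here is a minimal sketch of that pattern. This is not the actual Artemis example code; the class name and the fallback URL are my own:

// Minimal sketch: the broker URL arrives as a program argument (args[0])
// and is used to build a JMS connection factory.
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class UrlFromArgs {
    public static void main(String[] args) throws Exception {
        // e.g. java UrlFromArgs tcp://192.168.56.105:61616
        String brokerUrl = args.length > 0 ? args[0] : "tcp://localhost:61616";
        ConnectionFactory factory = new ActiveMQConnectionFactory(brokerUrl);
        try (Connection connection = factory.createConnection()) {
            connection.start();
            System.out.println("Connected to " + brokerUrl);
        }
    }
}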

In the primary's console everything looks normal, except that I now have a "broadcast group" and a "cluster connection". The backup only shows those two.

On the primary, the attribute "Failover on server shutdown" is false...

Below is the HA configuration I have set up:

Primary (192.168.56.105) broker.xml:

<configuration xmlns="urn:activemq"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xmlns:xi="http://www.w3.org/2001/XInclude"
               xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">

    <core xmlns="urn:activemq:core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="urn:activemq:core ">
        <name>0.0.0.0</name>
        <persistence-enabled>true</persistence-enabled>
        <journal-type>NIO</journal-type>
        <paging-directory>data/paging</paging-directory>
        <bindings-directory>data/bindings</bindings-directory>
        <journal-directory>data/journal</journal-directory>
        <large-messages-directory>data/large-messages</large-messages-directory>
        <journal-datasync>true</journal-datasync>
        <journal-min-files>2</journal-min-files>
        <journal-pool-files>10</journal-pool-files>
        <journal-device-block-size>4096</journal-device-block-size>
        <journal-file-size>10M</journal-file-size>
        <journal-buffer-timeout>2884000</journal-buffer-timeout>
        <journal-max-io>1</journal-max-io>
        <disk-scan-period>5000</disk-scan-period>
        <max-disk-usage>90</max-disk-usage>
        <critical-analyzer>true</critical-analyzer>
        <critical-analyzer-timeout>120000</critical-analyzer-timeout>
        <critical-analyzer-check-period>60000</critical-analyzer-check-period>
        <critical-analyzer-policy>HALT</critical-analyzer-policy>      
        <page-sync-timeout>2884000</page-sync-timeout>
        
        <acceptors>
            <!-- Acceptor for every supported protocol -->
            <acceptor name="artemis">tcp://0.0.0.0:61616?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;amqpMinLargeMessageSize=102400;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpDuplicateDetection=true</acceptor>
        
    <!-- AMQP Acceptor.  Listens on default AMQP port for AMQP traffic.-->
         <acceptor name="amqp">tcp://0.0.0.0:5672?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=AMQP;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpMinLargeMessageSize=102400;amqpDuplicateDetection=true</acceptor>

         <!-- STOMP Acceptor. -->
         <acceptor name="stomp">tcp://0.0.0.0:61613?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=STOMP;useEpoll=true</acceptor>

         <!-- HornetQ Compatibility Acceptor.  Enables HornetQ Core and STOMP for legacy HornetQ clients. -->
         <acceptor name="hornetq">tcp://0.0.0.0:5445?anycastPrefix=jms.queue.;multicastPrefix=jms.topic.;protocols=HORNETQ,STOMP;useEpoll=true</acceptor>

         <!-- MQTT Acceptor -->
         <acceptor name="mqtt">tcp://0.0.0.0:1883?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=MQTT;useEpoll=true</acceptor>

        </acceptors>

        <connectors>
            <connector name="artemis">tcp://192.168.56.105:61616</connector>
        </connectors>

        <broadcast-groups>
            <broadcast-group name="broadcast-group-1">
                <group-address>231.7.7.7</group-address> 
                <group-port>9876</group-port>
                <connector-ref>artemis</connector-ref>
            </broadcast-group>
        </broadcast-groups>

        <discovery-groups>
            <discovery-group name="discovery-group-1">
                <group-address>231.7.7.7</group-address>
                <group-port>9876</group-port>
            </discovery-group>
        </discovery-groups>

        <cluster-user>cluster.user</cluster-user>
        <cluster-password>password</cluster-password>

        <ha-policy>
            <replication>
                <master>
                    <check-for-live-server>true</check-for-live-server>
                </master>
            </replication>
        </ha-policy>

        <cluster-connections>
            <cluster-connection name="cluster-1">
                <connector-ref>artemis</connector-ref>
                <discovery-group-ref discovery-group-name="discovery-group-1"/>
            </cluster-connection> 
        </cluster-connections>

        <security-settings>
            <security-setting match="#">
                <permission type="createNonDurableQueue" roles="amq"/>
                <permission type="deleteNonDurableQueue" roles="amq"/>
                <permission type="createDurableQueue" roles="amq"/>
                <permission type="deleteDurableQueue" roles="amq"/>
                <permission type="createAddress" roles="amq"/>
                <permission type="deleteAddress" roles="amq"/>
                <permission type="consume" roles="amq"/>
                <permission type="browse" roles="amq"/>
                <permission type="send" roles="amq"/>
                <!-- we need this otherwise ./artemis data imp wouldn't work -->
                <permission type="manage" roles="amq"/>
            </security-setting>
        </security-settings>

        <address-settings>
            <!-- if you define auto-create on certain queues, management has to be auto-create -->
            <address-setting match="activemq.management#">
                <dead-letter-address>DLQ</dead-letter-address>
                <expiry-address>ExpiryQueue</expiry-address>
                <redelivery-delay>0</redelivery-delay>
                <!-- with -1 only the global-max-size is in use for limiting -->
                <max-size-bytes>-1</max-size-bytes>
                <message-counter-history-day-limit>10</message-counter-history-day-limit>
                <address-full-policy>PAGE</address-full-policy>
                <auto-create-queues>true</auto-create-queues>
                <auto-create-addresses>true</auto-create-addresses>
                <auto-create-jms-queues>true</auto-create-jms-queues>
                <auto-create-jms-topics>true</auto-create-jms-topics>
            </address-setting>
            <!--default for catch all-->
            <address-setting match="#">
                <dead-letter-address>DLQ</dead-letter-address>
                <expiry-address>ExpiryQueue</expiry-address>
                <redelivery-delay>0</redelivery-delay>
                <!-- with -1 only the global-max-size is in use for limiting -->
                <max-size-bytes>-1</max-size-bytes>
                <message-counter-history-day-limit>10</message-counter-history-day-limit>
                <address-full-policy>PAGE</address-full-policy>
                <auto-create-queues>true</auto-create-queues>
                <auto-create-addresses>true</auto-create-addresses>
                <auto-create-jms-queues>true</auto-create-jms-queues>
                <auto-create-jms-topics>true</auto-create-jms-topics>
            </address-setting>
        </address-settings>
        <addresses>
            <address name="DLQ">
                <anycast>
                    <queue name="DLQ" />
                </anycast>
            </address>
            <address name="ExpiryQueue">
                <anycast>
                    <queue name="ExpiryQueue" />
                </anycast>
            </address>

        </addresses>
    </core>
</configuration>

Backup (192.168.56.106) broker.xml:

<configuration xmlns="urn:activemq"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xmlns:xi="http://www.w3.org/2001/XInclude"
               xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">

   <core xmlns="urn:activemq:core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="urn:activemq:core ">

      <name>0.0.0.0</name>
      <persistence-enabled>true</persistence-enabled>
      <journal-type>NIO</journal-type>
      <paging-directory>data/paging</paging-directory>
      <bindings-directory>data/bindings</bindings-directory>
      <journal-directory>data/journal</journal-directory>
      <large-messages-directory>data/large-messages</large-messages-directory>
      <journal-datasync>true</journal-datasync>
      <journal-min-files>2</journal-min-files>
      <journal-pool-files>10</journal-pool-files>
      <journal-device-block-size>4096</journal-device-block-size>
      <journal-file-size>10M</journal-file-size>
      <journal-buffer-timeout>2868000</journal-buffer-timeout>
      <journal-max-io>1</journal-max-io>
      <disk-scan-period>5000</disk-scan-period>
      <max-disk-usage>90</max-disk-usage>
      <critical-analyzer>true</critical-analyzer>
      <critical-analyzer-timeout>120000</critical-analyzer-timeout>
      <critical-analyzer-check-period>60000</critical-analyzer-check-period>
      <critical-analyzer-policy>HALT</critical-analyzer-policy>      
      <page-sync-timeout>2868000</page-sync-timeout>

      <acceptors>
         <!-- Acceptor for every supported protocol -->
         <acceptor name="artemis">tcp://0.0.0.0:61616?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;amqpMinLargeMessageSize=102400;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpDuplicateDetection=true</acceptor>
        <!-- AMQP Acceptor.  Listens on default AMQP port for AMQP traffic.-->
         <acceptor name="amqp">tcp://0.0.0.0:5672?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=AMQP;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpMinLargeMessageSize=102400;amqpDuplicateDetection=true</acceptor>

         <!-- STOMP Acceptor. -->
         <acceptor name="stomp">tcp://0.0.0.0:61613?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=STOMP;useEpoll=true</acceptor>

         <!-- HornetQ Compatibility Acceptor.  Enables HornetQ Core and STOMP for legacy HornetQ clients. -->
         <acceptor name="hornetq">tcp://0.0.0.0:5445?anycastPrefix=jms.queue.;multicastPrefix=jms.topic.;protocols=HORNETQ,STOMP;useEpoll=true</acceptor>

         <!-- MQTT Acceptor -->
         <acceptor name="mqtt">tcp://0.0.0.0:1883?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=MQTT;useEpoll=true</acceptor>

      </acceptors>

      <connectors>
          <connector name="artemis">tcp://192.168.56.106:61616</connector>
      </connectors>

      <security-settings>
         <security-setting match="#">
            <permission type="createNonDurableQueue" roles="amq"/>
            <permission type="deleteNonDurableQueue" roles="amq"/>
            <permission type="createDurableQueue" roles="amq"/>
            <permission type="deleteDurableQueue" roles="amq"/>
            <permission type="createAddress" roles="amq"/>
            <permission type="deleteAddress" roles="amq"/>
            <permission type="consume" roles="amq"/>
            <permission type="browse" roles="amq"/>
            <permission type="send" roles="amq"/>
            <!-- we need this otherwise ./artemis data imp wouldn't work -->
            <permission type="manage" roles="amq"/>
         </security-setting>
      </security-settings>

      <address-settings>
         <!-- if you define auto-create on certain queues, management has to be auto-create -->
         <address-setting match="activemq.management#">
            <dead-letter-address>DLQ</dead-letter-address>
            <expiry-address>ExpiryQueue</expiry-address>
            <redelivery-delay>0</redelivery-delay>
            <!-- with -1 only the global-max-size is in use for limiting -->
            <max-size-bytes>-1</max-size-bytes>
            <message-counter-history-day-limit>10</message-counter-history-day-limit>
            <address-full-policy>PAGE</address-full-policy>
            <auto-create-queues>true</auto-create-queues>
            <auto-create-addresses>true</auto-create-addresses>
            <auto-create-jms-queues>true</auto-create-jms-queues>
            <auto-create-jms-topics>true</auto-create-jms-topics>
         </address-setting>
         <!--default for catch all-->
         <address-setting match="#">
            <dead-letter-address>DLQ</dead-letter-address>
            <expiry-address>ExpiryQueue</expiry-address>
            <redelivery-delay>0</redelivery-delay>
            <!-- with -1 only the global-max-size is in use for limiting -->
            <max-size-bytes>-1</max-size-bytes>
            <message-counter-history-day-limit>10</message-counter-history-day-limit>
            <address-full-policy>PAGE</address-full-policy>
            <auto-create-queues>true</auto-create-queues>
            <auto-create-addresses>true</auto-create-addresses>
            <auto-create-jms-queues>true</auto-create-jms-queues>
            <auto-create-jms-topics>true</auto-create-jms-topics>
         </address-setting>
      </address-settings>

      <addresses>
         <address name="DLQ">
            <anycast>
               <queue name="DLQ" />
            </anycast>
         </address>
         <address name="ExpiryQueue">
            <anycast>
               <queue name="ExpiryQueue" />
            </anycast>
         </address>
      </addresses>

        <broadcast-groups>
        <broadcast-group name="broadcast-group-1">
            <group-address>231.7.7.7</group-address>
            <group-port>9876</group-port>
            <connector-ref>artemis</connector-ref>
        </broadcast-group>
    </broadcast-groups>

    <discovery-groups>
        <discovery-group name="discovery-group-1">
            <group-address>231.7.7.7</group-address>
            <group-port>9876</group-port>
        </discovery-group>
    </discovery-groups>

    <cluster-user>cluster.user</cluster-user>
    <cluster-password>password</cluster-password>

    <ha-policy>
        <replication>
            <slave>
                <allow-failback>true</allow-failback>
            </slave>
        </replication>
    </ha-policy>

    <cluster-connections>
        <cluster-connection name="cluster-1">
            <connector-ref>artemis</connector-ref>
            <discovery-group-ref discovery-group-name="discovery-group-1"/>
        </cluster-connection>
    </cluster-connections>
   </core>
</configuration>

I only have the default addresses and queues (DLQ and ExpiryQueue) in the broker.xml files.

Also, here are screenshots of what is shown in the console. The backup server is missing a lot of things.

Primary:

Backup:

The main thing needed to configure a cluster is the cluster-connection. The cluster-connection references a connector, which specifies how other nodes can connect to it. In other words, the connector-ref configured on the cluster-connection should match an acceptor configured on the broker. One problem you have is that both nodes have connectors using 0.0.0.0. That is a meta-address meant specifically for acceptors; it is meaningless to remote clients. Your connectors should use the real IP address or hostname of the server the broker is running on.

Your configuration should probably look something like this:

Primary (192.168.56.105) broker.xml:

<acceptors>    
    <acceptor name="netty-acceptor">tcp://0.0.0.0:61616</acceptor>
</acceptors>

<connectors>
    <connector name="netty-connector">tcp://192.168.56.105:61616</connector>
</connectors>

<broadcast-groups>
    <broadcast-group name="broadcast-group-1">
        <group-address>231.7.7.7</group-address>
        <group-port>9876</group-port>
        <connector-ref>netty-connector</connector-ref>
    </broadcast-group>
</broadcast-groups>

<discovery-groups>
    <discovery-group name="discovery-group-1">
        <group-address>231.7.7.7</group-address>
        <group-port>9876</group-port>
    </discovery-group>
</discovery-groups>

<cluster-user>cluster.user</cluster-user>
<cluster-password>password</cluster-password>

<ha-policy>
    <replication>
        <master>
            <check-for-live-server>true</check-for-live-server>
        </master>
    </replication>
</ha-policy>

<cluster-connections>
    <cluster-connection name="cluster-1">
        <connector-ref>netty-connector</connector-ref>
        <discovery-group-ref discovery-group-name="discovery-group-1"/>
    </cluster-connection>
</cluster-connections>

Backup (192.168.56.106) broker.xml:

<acceptors>
    <acceptor name="netty-acceptor">tcp://0.0.0.0:61616</acceptor>
</acceptors>

<connectors>
    <connector name="netty-connector">tcp://192.168.56.106:61616</connector>
</connectors>

<broadcast-groups>
    <broadcast-group name="broadcast-group-1">
        <group-address>231.7.7.7</group-address>
        <group-port>9876</group-port>
        <connector-ref>netty-connector</connector-ref>
    </broadcast-group>
</broadcast-groups>

<discovery-groups>
    <discovery-group name="discovery-group-1">
        <group-address>231.7.7.7</group-address>
        <group-port>9876</group-port>
    </discovery-group>
</discovery-groups>

<cluster-user>cluster.user</cluster-user>
<cluster-password>password</cluster-password>

<ha-policy>
    <replication>
        <slave>
            <allow-failback>true</allow-failback>
        </slave>
    </replication>
</ha-policy>

<cluster-connections>
    <cluster-connection name="cluster-1">
        <connector-ref>netty-connector</connector-ref>
        <discovery-group-ref discovery-group-name="discovery-group-1"/>
    </cluster-connection>
</cluster-connections>

Clients using any of the supported protocols can use the acceptor listening on 61616. Strictly speaking, none of the additional acceptor elements are needed.
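For example, a plain AMQP client can point at port 61616 on the primary. A minimal sketch, assuming the Qpid JMS client is on the classpath (it is not part of your setup above; everything besides the JmsConnectionFactory class is illustrative):

// Minimal sketch: an AMQP (Qpid JMS) client using the multi-protocol
// acceptor on port 61616 rather than the dedicated AMQP acceptor on 5672.
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import org.apache.qpid.jms.JmsConnectionFactory;

public class AmqpOn61616 {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new JmsConnectionFactory("amqp://192.168.56.105:61616");
        try (Connection connection = factory.createConnection()) {
            connection.start();
            System.out.println("Connected over AMQP via the multi-protocol acceptor");
        }
    }
}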

It is normal for the backup to show limited information in the web console, because the backup is passive while the primary is active. Once the primary goes down, the backup will activate and its web console will become fully functional.
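As for specifying connection information on the client: with the Core JMS client you can list both nodes and enable HA in the URL so the client learns the live/backup topology and reconnects to the backup on failover. A minimal sketch, assuming the Artemis Core JMS client and the two IPs from your question (the parameter values are reasonable defaults, not something your setup requires):

// Minimal sketch: a Core JMS client given both nodes up front; ha=true makes it
// follow the live/backup topology, and reconnectAttempts=-1 keeps retrying
// during a failover instead of giving up.
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class HaClientSketch {
    public static void main(String[] args) throws Exception {
        String url = "(tcp://192.168.56.105:61616,tcp://192.168.56.106:61616)?ha=true&reconnectAttempts=-1";
        ConnectionFactory factory = new ActiveMQConnectionFactory(url);
        try (Connection connection = factory.createConnection()) {
            connection.start();
            System.out.println("Connected; failover is handled by the Core client");
        }
    }
}

Since both brokers broadcast on 231.7.7.7:9876, UDP discovery (a udp:// URL) should also work for the Core client, but verify that against the clustering documentation for 2.13.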