Wildfly Domain shared Topic with HornetQ
I have a WildFly cluster in domain mode with two nodes, one server per node, both belonging to the same server group.
I need a shared topic: when a client subscribes to the topic on server A, it should also be notified of messages published to the same topic on server B. Right now each client is only notified by the server it is connected to.
I'm running on AWS machines with Linux, where multicast addresses are not available, so I use a static (point-to-point) cluster configuration that lists every host address explicitly.
Here is my domain.xml:
<subsystem xmlns="urn:jboss:domain:messaging:2.0">
<hornetq-server>
<cluster-password>mypassword</cluster-password>
<journal-file-size>102400</journal-file-size>
<connectors>
<http-connector name="http-connector" socket-binding="http">
<param key="http-upgrade-endpoint" value="http-acceptor"/>
</http-connector>
<http-connector name="http-connector-throughput" socket-binding="http">
<param key="http-upgrade-endpoint" value="http-acceptor-throughput"/>
<param key="batch-delay" value="50"/>
</http-connector>
<http-connector name="cnode1" socket-binding="node1">
<param key="http-upgrade-endpoint" value="http-acceptor"/>
</http-connector>
<http-connector name="cnode2" socket-binding="node2">
<param key="http-upgrade-endpoint" value="http-acceptor"/>
</http-connector>
<in-vm-connector name="in-vm" server-id="0"/>
</connectors>
<acceptors>
<http-acceptor http-listener="default" name="http-acceptor"/>
<http-acceptor http-listener="default" name="http-acceptor-throughput">
<param key="batch-delay" value="50"/>
<param key="direct-deliver" value="false"/>
</http-acceptor>
<in-vm-acceptor name="in-vm" server-id="0"/>
</acceptors>
<cluster-connections>
<cluster-connection name="my-cluster">
<address>jms</address>
<connector-ref>http-connector</connector-ref>
<static-connectors>
<connector-ref>cnode1</connector-ref>
<connector-ref>cnode2</connector-ref>
</static-connectors>
</cluster-connection>
</cluster-connections>
<security-settings>
<security-setting match="#">
<permission type="send" roles="guest"/>
<permission type="consume" roles="guest"/>
<permission type="createNonDurableQueue" roles="guest"/>
<permission type="deleteNonDurableQueue" roles="guest"/>
</security-setting>
</security-settings>
<address-settings>
<address-setting match="#">
<dead-letter-address>jms.queue.DLQ</dead-letter-address>
<expiry-address>jms.queue.ExpiryQueue</expiry-address>
<max-size-bytes>10485760</max-size-bytes>
<page-size-bytes>2097152</page-size-bytes>
<message-counter-history-day-limit>10</message-counter-history-day-limit>
<redistribution-delay>1000</redistribution-delay>
</address-setting>
</address-settings>
<jms-connection-factories>
<connection-factory name="InVmConnectionFactory">
<connectors>
<connector-ref connector-name="in-vm"/>
</connectors>
<entries>
<entry name="java:/ConnectionFactory"/>
</entries>
</connection-factory>
<connection-factory name="RemoteConnectionFactory">
<connectors>
<connector-ref connector-name="http-connector"/>
</connectors>
<entries>
<entry name="java:jboss/exported/jms/RemoteConnectionFactory"/>
</entries>
<ha>true</ha>
<block-on-acknowledge>true</block-on-acknowledge>
<reconnect-attempts>-1</reconnect-attempts>
</connection-factory>
<pooled-connection-factory name="hornetq-ra">
<transaction mode="xa"/>
<connectors>
<connector-ref connector-name="in-vm"/>
</connectors>
<entries>
<entry name="java:/JmsXA"/>
<entry name="java:jboss/DefaultJMSConnectionFactory"/>
</entries>
</pooled-connection-factory>
</jms-connection-factories>
<jms-destinations>
...
<jms-topic name="MyNotificationTopic">
<entry name="java:/jms/topic/MyNotificationTopic"/>
</jms-topic>
...
</jms-destinations>
</hornetq-server>
</subsystem>
...
<socket-binding-group name="full-ha-sockets" default-interface="public">
<socket-binding name="management-http" interface="management" port="${jboss.management.http.port:9990}"/>
...
<outbound-socket-binding name="mail-smtp">
<remote-destination host="localhost" port="25"/>
</outbound-socket-binding>
<outbound-socket-binding name="node1">
<remote-destination host="172.19.223.x" port="8080"/>
</outbound-socket-binding>
<outbound-socket-binding name="node2">
<remote-destination host="172.19.223.y" port="8080"/>
</outbound-socket-binding>
</socket-binding-group>
</socket-binding-groups>
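One detail worth checking with this configuration: only `RemoteConnectionFactory` is bound under `java:jboss/exported`, while the topic entry `java:/jms/topic/MyNotificationTopic` is local-only. WildFly's remote JNDI lookups can only see names under `java:jboss/exported`, so if the subscribers are standalone remote JMS clients (rather than applications deployed in the servers), the topic would also need an exported entry, for example:

```xml
<jms-topic name="MyNotificationTopic">
    <entry name="java:/jms/topic/MyNotificationTopic"/>
    <!-- remote clients would look this up as "jms/topic/MyNotificationTopic" -->
    <entry name="java:jboss/exported/jms/topic/MyNotificationTopic"/>
</jms-topic>
```

In-VM consumers (e.g. MDBs using `java:/JmsXA`) do not need the exported entry.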
Make sure the first outbound socket binding declaration references the node running the domain controller (meaning the domain controller should run on 172.19.223.x in your configuration). I don't know why; maybe it's a WildFly bug. We spent two weeks on this problem and it still puzzles me...
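To verify that messages are redistributed across the cluster, a remote client can subscribe through one node while a publisher sends through the other. Below is a minimal sketch of the client side, assuming WildFly 8.x's `http-remoting` JNDI transport and the `jms/RemoteConnectionFactory` name exported in the domain.xml above; the host and port are placeholders for one of the cluster nodes.

```java
import java.util.Properties;
import javax.naming.Context;

public class TopicProbe {

    // JNDI environment for a remote WildFly 8.x lookup over http-remoting.
    static Properties jndiEnv(String host, int port) {
        Properties env = new Properties();
        env.put(Context.INITIAL_CONTEXT_FACTORY,
                "org.jboss.naming.remote.client.InitialContextFactory");
        env.put(Context.PROVIDER_URL, "http-remoting://" + host + ":" + port);
        return env;
    }

    public static void main(String[] args) {
        // Placeholder address taken from the question's configuration.
        Properties env = jndiEnv("172.19.223.x", 8080);
        System.out.println(env.get(Context.PROVIDER_URL));

        // With the WildFly client libraries on the classpath, the probe
        // would continue roughly like this (javax.jms is not part of the
        // JDK, so these lines are indicative only):
        //
        //   Context ctx = new InitialContext(env);
        //   TopicConnectionFactory cf =
        //       (TopicConnectionFactory) ctx.lookup("jms/RemoteConnectionFactory");
        //   ... subscribe to the topic via node A, publish via node B,
        //   and check that the subscriber receives the message after the
        //   configured redistribution-delay (1000 ms above).
    }
}
```

If the subscriber connected to node A never sees messages published on node B, the cluster connection (rather than the client) is the place to look.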