Apache Artemis - retained messages are not the same across a symmetric cluster

Using Artemis 2.12 with the MQTT protocol.

I have set up two brokers, A and B, in a symmetric cluster. There is one producer per broker: Pa is connected to broker A and Pb to broker B. Likewise, there is one consumer per broker subscribed to topic foo: Ca is connected to broker A and Cb to broker B.

When producer Pa publishes a retained message (retainedMessagePa) on topic foo to broker A, both Ca and Cb receive it. If Ca disconnects and reconnects to broker A with a clean session, it receives the same retained message (retainedMessagePa) again. However, if Cb disconnects and reconnects to broker B with a clean session, it does not receive the retained message at all. Similarly, if producer Pb publishes a retained message (retainedMessagePb) on topic foo to broker B while Ca and Cb are connected to their brokers, both receive it. But once either of them reconnects with a clean session, each one only gets the last retained message that was published to its own broker: Ca receives (retainedMessagePa) and Cb receives (retainedMessagePb).
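
For reference, the behaviour is reproducible with a plain MQTT client. The sketch below is written against paho-mqtt 1.x (an assumption, not part of the original setup) and uses the BrokerA/BrokerB hostnames from the connector configuration further down; it publishes the retained message as Pa and then connects Cb to broker B with a clean session:

import time
import paho.mqtt.client as mqtt

# Pa publishes a retained message on topic foo to broker A.
pa = mqtt.Client(client_id="Pa", clean_session=True)
pa.connect("BrokerA", 1883)          # hostname taken from the connector config below
pa.loop_start()
info = pa.publish("foo", payload="retainedMessagePa", qos=1, retain=True)
info.wait_for_publish()              # make sure the broker has accepted the message
pa.loop_stop()
pa.disconnect()

# Cb connects to broker B with a clean session and subscribes to foo.
def on_message(client, userdata, msg):
    print("received %r on %s (retain flag=%s)" % (msg.payload, msg.topic, msg.retain))

cb = mqtt.Client(client_id="Cb", clean_session=True)
cb.on_message = on_message
cb.connect("BrokerB", 1883)
cb.subscribe("foo", qos=1)
cb.loop_start()
time.sleep(2)                        # expected: retainedMessagePa; observed on broker B: nothing
cb.loop_stop()
cb.disconnect()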

How can I configure the two brokers so that they hold the same retained messages? Here is my broker.xml, which is identical on both brokers except for the lines marked below:

<configuration xmlns="urn:activemq"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">

    <core xmlns="urn:activemq:core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="urn:activemq:core ">

        <name>BrokerA</name> <!-- Changed on each Broker -->

        <persistence-enabled>true</persistence-enabled>

        <journal-type>NIO</journal-type>
        <paging-directory>./data/paging</paging-directory>
        <bindings-directory>./data/bindings</bindings-directory>
        <journal-directory>./data/journal</journal-directory>
        <large-messages-directory>./data/large-messages</large-messages-directory>
        <journal-datasync>true</journal-datasync>
        <journal-min-files>2</journal-min-files>
        <journal-pool-files>-1</journal-pool-files>
        <journal-buffer-size>10485760</journal-buffer-size>
        <journal-buffer-timeout>1308000</journal-buffer-timeout>

        <disk-scan-period>5000</disk-scan-period>
        <max-disk-usage>95</max-disk-usage>

        <critical-analyzer>true</critical-analyzer>
        <critical-analyzer-timeout>120000</critical-analyzer-timeout>
        <critical-analyzer-check-period>60000</critical-analyzer-check-period>
        <critical-analyzer-policy>HALT</critical-analyzer-policy>

        <acceptors>
            <acceptor name="artemis">tcp://0.0.0.0:61616</acceptor>
            <acceptor name="mqtt">tcp://0.0.0.0:1883?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=MQTT;useEpoll=true</acceptor>
        </acceptors>

        <connectors>        
            <connector name="L1">tcp://BrokerA:61616</connector>    
            <connector name="L2">tcp://BrokerB:61616</connector>
        </connectors>

        <cluster-connections>
            <cluster-connection name="cluster">
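                <!-- "#" is a wildcard match, so every address participates in the
                     cluster; ON_DEMAND forwards messages only to nodes that have a
                     matching consumer, and max-hops=1 restricts forwarding to
                     direct neighbours -->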
                <address>#</address>
                <connector-ref>L1</connector-ref> <!-- Changed on each Broker -->
                <retry-interval>1000</retry-interval>
                <reconnect-attempts>-1</reconnect-attempts>
                <message-load-balancing>ON_DEMAND</message-load-balancing>
                <max-hops>1</max-hops>
                <static-connectors>
                    <connector-ref>L2</connector-ref> <!-- Changed on each Broker -->
                </static-connectors>
            </cluster-connection>
        </cluster-connections>

        <cluster-user>admin</cluster-user>
        <cluster-password>admin</cluster-password>  

        <security-settings>
            <security-setting match="#">
                <permission type="createNonDurableQueue" roles="admin"/>
                <permission type="deleteNonDurableQueue" roles="admin"/>
                <permission type="createDurableQueue" roles="admin"/>
                <permission type="deleteDurableQueue" roles="admin"/>
                <permission type="createAddress" roles="admin"/>
                <permission type="deleteAddress" roles="admin"/>
                <permission type="consume" roles="admin"/>
                <permission type="browse" roles="admin"/>
                <permission type="send" roles="admin"/>
                <!-- we need this otherwise ./artemis data imp wouldn't work -->
                <permission type="manage" roles="admin"/>
            </security-setting>
        </security-settings>

        <address-settings>
            <!-- if you define auto-create on certain queues, management has to be auto-create -->
            <address-setting match="activemq.management#">
                <dead-letter-address>DLQ</dead-letter-address>
                <expiry-address>ExpiryQueue</expiry-address>
                <redelivery-delay>0</redelivery-delay>
                <!-- with -1 only the global-max-size is in use for limiting -->
                <max-size-bytes>-1</max-size-bytes>
                <message-counter-history-day-limit>10</message-counter-history-day-limit>
                <address-full-policy>PAGE</address-full-policy>
                <auto-create-queues>true</auto-create-queues>
                <auto-create-addresses>true</auto-create-addresses>
                <auto-create-jms-queues>true</auto-create-jms-queues>
                <auto-create-jms-topics>true</auto-create-jms-topics>
            </address-setting>
            <!--default for catch all-->
            <address-setting match="#">
                <dead-letter-address>DLQ</dead-letter-address>
                <expiry-address>ExpiryQueue</expiry-address>
                <redelivery-delay>0</redelivery-delay>
                <!-- with -1 only the global-max-size is in use for limiting -->
                <max-size-bytes>-1</max-size-bytes>
                <message-counter-history-day-limit>10</message-counter-history-day-limit>
                <address-full-policy>PAGE</address-full-policy>
                <auto-create-queues>true</auto-create-queues>
                <auto-create-addresses>true</auto-create-addresses>
                <auto-create-jms-queues>true</auto-create-jms-queues>
                <auto-create-jms-topics>true</auto-create-jms-topics>
            </address-setting>
        </address-settings>

        <addresses>
            <address name="DLQ">
                <anycast>
                    <queue name="DLQ" />
                </anycast>
            </address>
            <address name="ExpiryQueue">
                <anycast>
                    <queue name="ExpiryQueue" />
                </anycast>
            </address>

        </addresses>

    </core>
</configuration>
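
To see the asymmetry directly, the same clean-session probe can be pointed at each broker in turn. Again a sketch with paho-mqtt 1.x, assuming the BrokerA/BrokerB hostnames resolve to the two nodes:

import time
import paho.mqtt.client as mqtt

def last_retained(host, topic="foo", wait=2.0):
    # Connect with a clean session, subscribe, and collect whatever the
    # broker delivers as a retained message within the wait window.
    seen = []
    cli = mqtt.Client(client_id="probe-" + host, clean_session=True)
    cli.on_message = lambda c, u, msg: seen.append((msg.payload, msg.retain))
    cli.connect(host, 1883)
    cli.subscribe(topic, qos=1)
    cli.loop_start()
    time.sleep(wait)
    cli.loop_stop()
    cli.disconnect()
    return seen

for host in ("BrokerA", "BrokerB"):
    # Each broker reports only the last retained message published to itself.
    print(host, last_retained(host))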

I tried setting <redistribution-delay>0</redistribution-delay> together with <message-load-balancing>ON_DEMAND</message-load-balancing>; it did not help.
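
redistribution-delay is configured per address-setting; applied to the catch-all block of the address-settings above, the change looks like this:

            <address-setting match="#">
                <redistribution-delay>0</redistribution-delay>
                <!-- ...plus the existing settings from the catch-all block... -->
            </address-setting>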

I also tried <message-load-balancing>STRICT</message-load-balancing>, which did not help either.
