ActiveMQ Artemis master/slave pair fails when deployed on OpenShift

Problem description

When I use this configuration with both brokers on the same machine, there is no problem. But when I scale out and deploy one broker per container, it fails.

Here is the configuration of broker 1:

<?xml version="1.0"?> <configuration xmlns="urn:activemq" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xi="http://www.w3.org/2001/XInclude" xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd"> <core xmlns="urn:activemq:core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:activemq:core "> <name>0.0.0.0</name> <persistence-enabled>true</persistence-enabled> <journal-type>ASYNCIO</journal-type> <paging-directory>data/paging</paging-directory> <bindings-directory>data/bindings</bindings-directory> <journal-directory>data/journal</journal-directory> <large-messages-directory>data/large-messages</large-messages-directory> <journal-datasync>true</journal-datasync> <journal-min-files>2</journal-min-files> <journal-pool-files>10</journal-pool-files> <journal-file-size>10M</journal-file-size> <journal-buffer-timeout>36000</journal-buffer-timeout> <journal-max-io>4096</journal-max-io> <disk-scan-period>5000</disk-scan-period> <max-disk-usage>90</max-disk-usage> <critical-analyzer>true</critical-analyzer> <critical-analyzer-timeout>120000</critical-analyzer-timeout> <critical-analyzer-check-period>60000</critical-analyzer-check-period> <critical-analyzer-policy>HALT</critical-analyzer-policy> <ha-policy> <shared-store> <master> <failover-on-shutdown>true</failover-on-shutdown> </master> </shared-store> </ha-policy> <connectors> <connector name="netty-connector">tcp://0.0.0.0:61616</connector> </connectors> <acceptors> <acceptor name="netty-acceptor">tcp://0.0.0.0:61616</acceptor> </acceptors> <broadcast-groups> <broadcast-group name="bg-group1"> <group-address>${udp-address:231.7.7.7}</group-address> <group-port>9876</group-port> <broadcast-period>1000</broadcast-period> <connector-ref>netty-connector</connector-ref> </broadcast-group> </broadcast-groups> <discovery-groups> <discovery-group name="dg-group1"> <group-address>${udp-address:231.7.7.7}</group-address> <group-port>9876</group-port> <refresh-timeout>60000</refresh-timeout> </discovery-group> </discovery-groups> <cluster-connections> <cluster-connection name="my-cluster"> <connector-ref>netty-connector</connector-ref> <discovery-group-ref discovery-group-name="dg-group1"/> </cluster-connection> </cluster-connections> <security-settings> <security-setting match="#"> <permission type="createNonDurableQueue" roles="amq"/> <permission type="deleteNonDurableQueue" roles="amq"/> <permission type="createDurableQueue" roles="amq"/> <permission type="deleteDurableQueue" roles="amq"/> <permission type="createAddress" roles="amq"/> <permission type="deleteAddress" roles="amq"/> <permission type="consume" roles="amq"/> <permission type="browse" roles="amq"/> <permission type="send" roles="amq"/> <!-- we need this otherwise ./artemis data imp wouldn't work --> <permission type="manage" roles="amq"/> </security-setting> </security-settings> <address-settings> <!-- if you define auto-create on certain queues, management has to be auto-create --> <address-setting match="activemq.management#"> <dead-letter-address>DLQ</dead-letter-address> <expiry-address>ExpiryQueue</expiry-address> <redelivery-delay>0</redelivery-delay> <!-- with -1 only the global-max-size is in use for limiting --> <max-size-bytes>-1</max-size-bytes> <message-counter-history-day-limit>10</message-counter-history-day-limit> <address-full-policy>PAGE</address-full-policy> <auto-create-queues>true</auto-create-queues> <auto-create-addresses>true</auto-create-addresses> <auto-create-jms-queues>true</auto-create-jms-queues> 
<auto-create-jms-topics>true</auto-create-jms-topics> </address-setting> <!--default for catch all--> <address-setting match="#"> <dead-letter-address>DLQ</dead-letter-address> <expiry-address>ExpiryQueue</expiry-address> <redelivery-delay>0</redelivery-delay> <!-- with -1 only the global-max-size is in use for limiting --> <max-size-bytes>-1</max-size-bytes> <message-counter-history-day-limit>10</message-counter-history-day-limit> <address-full-policy>PAGE</address-full-policy> <auto-create-queues>true</auto-create-queues> <auto-create-addresses>true</auto-create-addresses> <auto-create-jms-queues>true</auto-create-jms-queues> <auto-create-jms-topics>true</auto-create-jms-topics> </address-setting> </address-settings> <addresses> <address name="DLQ"> <anycast> <queue name="DLQ"/> </anycast> </address> <address name="ExpiryQueue"> <anycast> <queue name="ExpiryQueue"/> </anycast> </address> </addresses> </core> </configuration>

Here is the configuration of broker 2:

<?xml version="1.0"?>

<configuration xmlns="urn:activemq" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xi="http://www.w3.org/2001/XInclude" xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">
  <core xmlns="urn:activemq:core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:activemq:core ">
    <name>0.0.0.0</name>
    <persistence-enabled>true</persistence-enabled>
    <journal-type>ASYNCIO</journal-type>
    <paging-directory>data/paging</paging-directory>
    <bindings-directory>data/bindings</bindings-directory>
    <journal-directory>data/journal</journal-directory>
    <large-messages-directory>data/large-messages</large-messages-directory>
    <journal-datasync>true</journal-datasync>
    <journal-min-files>2</journal-min-files>
    <journal-pool-files>10</journal-pool-files>
    <journal-file-size>10M</journal-file-size>
    <journal-max-io>4096</journal-max-io>
    <disk-scan-period>5000</disk-scan-period>
    <max-disk-usage>90</max-disk-usage>
    <!-- should the broker detect dead locks and other issues -->
    <critical-analyzer>true</critical-analyzer>
    <critical-analyzer-timeout>120000</critical-analyzer-timeout>
    <critical-analyzer-check-period>60000</critical-analyzer-check-period>
    <critical-analyzer-policy>HALT</critical-analyzer-policy>
    <ha-policy>
     <shared-store>
      <slave>
       <failover-on-shutdown>true</failover-on-shutdown>
      </slave>
     </shared-store>
    </ha-policy>

    <connectors>
         <connector name="netty-connector">tcp://0.0.0.0:61617</connector>
    </connectors>

    <acceptors>
         <acceptor name="netty-acceptor">tcp://0.0.0.0:61617</acceptor>
    </acceptors>

    <broadcast-groups>
       <broadcast-group name="bg-group1">
          <group-address>${udp-address:231.7.7.7}</group-address>
          <group-port>9876</group-port>
          <broadcast-period>1000</broadcast-period>
          <connector-ref>netty-connector</connector-ref>
       </broadcast-group>
    </broadcast-groups>

    <discovery-groups>
       <discovery-group name="dg-group1">
          <group-address>${udp-address:231.7.7.7}</group-address>
          <group-port>9876</group-port>
          <refresh-timeout>60000</refresh-timeout>
       </discovery-group>
    </discovery-groups>

    <cluster-connections>
       <cluster-connection name="my-cluster">
          <connector-ref>netty-connector</connector-ref>
          <discovery-group-ref discovery-group-name="dg-group1"/>
       </cluster-connection>
    </cluster-connections>

    <security-settings>
      <security-setting match="#">
        <permission type="createNonDurableQueue" roles="amq"/>
        <permission type="deleteNonDurableQueue" roles="amq"/>
        <permission type="createDurableQueue" roles="amq"/>
        <permission type="deleteDurableQueue" roles="amq"/>
        <permission type="createAddress" roles="amq"/>
        <permission type="deleteAddress" roles="amq"/>
        <permission type="consume" roles="amq"/>
        <permission type="browse" roles="amq"/>
        <permission type="send" roles="amq"/>
        <!-- we need this otherwise ./artemis data imp wouldn't work -->
        <permission type="manage" roles="amq"/>
      </security-setting>
    </security-settings>
    <address-settings>
      <!-- if you define auto-create on certain queues, management has to be auto-create -->
      <address-setting match="activemq.management#">
        <dead-letter-address>DLQ</dead-letter-address>
        <expiry-address>ExpiryQueue</expiry-address>
        <redelivery-delay>0</redelivery-delay>
        <!-- with -1 only the global-max-size is in use for limiting -->
        <max-size-bytes>-1</max-size-bytes>
        <message-counter-history-day-limit>10</message-counter-history-day-limit>
        <address-full-policy>PAGE</address-full-policy>
        <auto-create-queues>true</auto-create-queues>
        <auto-create-addresses>true</auto-create-addresses>
        <auto-create-jms-queues>true</auto-create-jms-queues>
        <auto-create-jms-topics>true</auto-create-jms-topics>
      </address-setting>
      <!--default for catch all-->
      <address-setting match="#">
        <dead-letter-address>DLQ</dead-letter-address>
        <expiry-address>ExpiryQueue</expiry-address>
        <redelivery-delay>0</redelivery-delay>
        <!-- with -1 only the global-max-size is in use for limiting -->
        <max-size-bytes>-1</max-size-bytes>
        <message-counter-history-day-limit>10</message-counter-history-day-limit>
        <address-full-policy>PAGE</address-full-policy>
        <auto-create-queues>true</auto-create-queues>
        <auto-create-addresses>true</auto-create-addresses>
        <auto-create-jms-queues>true</auto-create-jms-queues>
        <auto-create-jms-topics>true</auto-create-jms-topics>
      </address-setting>
    </address-settings>
    <addresses>
      <address name="DLQ">
        <anycast>
          <queue name="DLQ"/>
        </anycast>
      </address>
      <address name="ExpiryQueue">
        <anycast>
          <queue name="ExpiryQueue"/>
        </anycast>
      </address>
    </addresses>
  </core>
</configuration>

After enabling DEBUG logging, the following showed up in the log:

2019-01-10 15:40:37,753 DEBUG [org.apache.activemq.artemis.core.server.cluster.BackupManager] DiscoveryBackupConnector [group=DiscoveryGroupConfiguration{name='dg-group1', refreshTimeout=60000, discoveryInitialWaitTimeout=10000}]:: announcing TransportConfiguration(name=netty-connector, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?port=61616&host=10-130-0-119 to ServerLocatorImpl (identity=backupLocatorFor='ActiveMQServerImpl::serverUUID=0cef6ba4-14ee-11e9-83b3-0a580a820077') [initialConnectors=[], discoveryGroupConfiguration=DiscoveryGroupConfiguration{name='dg-group1', refreshTimeout=60000, discoveryInitialWaitTimeout=10000}]

1 Answer

There are a couple of problems here...

First, your connector configurations are incorrect. On broker 1 you're using this:

<connector name="netty-connector">tcp://0.0.0.0:61616</connector>

And on broker 2 you're using this:

<connector name="netty-connector">tcp://0.0.0.0:61617</connector>

Each member of the cluster sends its connector information to the other members to tell them how to connect back to the node that sent it. For example, in your case broker 1 is telling broker 2 that it can connect back to broker 1 using

tcp://0.0.0.0:61616

Of course, this is not correct, because the meta-address 0.0.0.0 doesn't actually point to broker 1, so when broker 2 tries to use this URL it fails, as you saw. The reason this works when you run both brokers on the same host is that 0.0.0.0 resolves to the same thing as localhost. You need to use a valid IP address or hostname in the connector configuration so that the cluster can form properly.
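As a minimal sketch of the fix, assuming the master pod is reachable inside the cluster under the hypothetical hostname artemis-master (substitute whatever address or OpenShift service name the other pods can actually resolve), broker 1 could keep binding its acceptor to all interfaces while advertising a routable address in its connector:

    <!-- the acceptor only controls which local interfaces the broker listens on,
         so binding to the 0.0.0.0 meta-address is fine here -->
    <acceptors>
         <acceptor name="netty-acceptor">tcp://0.0.0.0:61616</acceptor>
    </acceptors>

    <!-- the connector is what gets broadcast to other cluster members,
         so it must be an address those members can actually reach;
         "artemis-master" is a hypothetical hostname -->
    <connectors>
         <connector name="netty-connector">tcp://artemis-master:61616</connector>
    </connectors>

Broker 2 would do the same with its own hostname and port 61617.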

Second, the logging you provided indicates that the multicast traffic between the two Docker instances isn't working. I recommend you try static clustering to remove the multicast problem from the environment: on broker 1 you would list a connector pointing at broker 2 in a static-connectors element, and on broker 2 you would do the reverse (see the sketch below). Of course, you need to use the actual IP address or hostname of each node in those connectors.
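A minimal sketch of a static cluster configuration for broker 1, assuming the two containers can reach each other under the hypothetical hostnames broker1 and broker2:

    <connectors>
         <!-- this broker's own connector, advertised to the cluster -->
         <connector name="netty-connector">tcp://broker1:61616</connector>
         <!-- a connector pointing at the other cluster member -->
         <connector name="broker2-connector">tcp://broker2:61617</connector>
    </connectors>

    <cluster-connections>
       <cluster-connection name="my-cluster">
          <connector-ref>netty-connector</connector-ref>
          <!-- a static list of cluster members replaces the discovery-group-ref -->
          <static-connectors>
             <connector-ref>broker2-connector</connector-ref>
          </static-connectors>
       </cluster-connection>
    </cluster-connections>

Broker 2 would mirror this, advertising tcp://broker2:61617 as its own connector and listing a connector for tcp://broker1:61616 in its static-connectors. With static clustering in place, the broadcast-groups and discovery-groups sections are no longer needed and can be removed.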
    