ActiveMQ Artemis: connection accumulation

How to resolve connection accumulation in ActiveMQ Artemis

Is there a way to make ActiveMQ Artemis time out stale connections? I'm seeing connections accumulate, and eventually I get "newSocketStream(..) failed: Too many open files" errors, which I believe are caused by those connections.

How should I diagnose this?
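As a first check, it can help to count the file descriptors and TCP connections actually held by the broker process, to confirm that client connections (rather than journal files, etc.) are what is exhausting the limit. A rough sketch, assuming the broker runs as a normal Linux process and listens on the acceptor ports from broker.xml; the pgrep pattern is an assumption about how the broker was launched. The broker's ActiveMQServerControl MBean also exposes listConnectionsAsJSON() over JMX if you prefer that route.

# PID of the Artemis broker (adjust the pattern to your installation)
ARTEMIS_PID=$(pgrep -f 'org.apache.activemq.artemis.boot.Artemis')

# Total file descriptors currently held by the broker process
ls /proc/$ARTEMIS_PID/fd | wc -l

# Established connections on the acceptor ports, grouped by client address
ss -tan | grep -E ':(61616|5500) ' | grep ESTAB | awk '{print $5}' | cut -d: -f1 | sort | uniq -c | sort -rn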

2021-01-28 01:20:39,492 WARN  [io.netty.channel.DefaultChannelPipeline] An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.: io.netty.channel.unix.Errors$NativeIoException: accept(..) failed: Too many open files

2021-01-28 01:20:39,656 WARN  [io.netty.channel.DefaultChannelPipeline] An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.: io.netty.channel.unix.Errors$NativeIoException: accept(..) failed: Too many open files

... ERROR [org.apache.activemq.artemis.core.client] AMQ214016: Failed to create netty connection: io.netty.channel.ChannelException: Unable to create Channel from class class io.netty.channel.epoll.EpollSocketChannel
    at io.netty.channel.ReflectiveChannelFactory.newChannel(ReflectiveChannelFactory.java:46) [netty-all-4.1.48.Final.jar:4.1.48.Final]
    at io.netty.bootstrap.AbstractBootstrap.initAndRegister(AbstractBootstrap.java:310) [netty-all-4.1.48.Final.jar:4.1.48.Final]
    at io.netty.bootstrap.Bootstrap.doResolveAndConnect(Bootstrap.java:155) [netty-all-4.1.48.Final.jar:4.1.48.Final]
    at io.netty.bootstrap.Bootstrap.connect(Bootstrap.java:139) [netty-all-4.1.48.Final.jar:4.1.48.Final]
    at org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnector.createConnection(NettyConnector.java:818) [artemis-core-client-2.14.0.jar:2.14.0]
    at org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnector.createConnection(NettyConnector.java:785) [artemis-core-client-2.14.0.jar:2.14.0]
    at org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl.openTransportConnection(ClientSessionFactoryImpl.java:1076) [artemis-core-client-2.14.0.jar:2.14.0]
    at org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl.createTransportConnection(ClientSessionFactoryImpl.java:1125) [artemis-core-client-2.14.0.jar:2.14.0]
    at org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl.establishNewConnection(ClientSessionFactoryImpl.java:1336) [artemis-core-client-2.14.0.jar:2.14.0]
    at org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl.getConnection(ClientSessionFactoryImpl.java:931) [artemis-core-client-2.14.0.jar:2.14.0]
    at org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl.getConnectionWithRetry(ClientSessionFactoryImpl.java:820) [artemis-core-client-2.14.0.jar:2.14.0]
    at org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl.connect(ClientSessionFactoryImpl.java:252) [artemis-core-client-2.14.0.jar:2.14.0]
    at org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl.connect(ClientSessionFactoryImpl.java:268) [artemis-core-client-2.14.0.jar:2.14.0]
    at org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl$StaticConnector$Connector.tryConnect(ServerLocatorImpl.java:1813) [artemis-core-client-2.14.0.jar:2.14.0]
    at org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl$StaticConnector.connect(ServerLocatorImpl.java:1682) [artemis-core-client-2.14.0.jar:2.14.0]
    at org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.connect(ServerLocatorImpl.java:536) [artemis-core-client-2.14.0.jar:2.14.0]
    at org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.connect(ServerLocatorImpl.java:524) [artemis-core-client-2.14.0.jar:2.14.0]
    at org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl$4.run(ServerLocatorImpl.java:482) [artemis-core-client-2.14.0.jar:2.14.0]
    at org.apache.activemq.artemis.utils.actors.OrderedExecutor.doTask(OrderedExecutor.java:42) [artemis-commons-2.14.0.jar:2.14.0]
    at org.apache.activemq.artemis.utils.actors.OrderedExecutor.doTask(OrderedExecutor.java:31) [artemis-commons-2.14.0.jar:2.14.0]
    at org.apache.activemq.artemis.utils.actors.ProcessorBase.executePendingTasks(ProcessorBase.java:65) [artemis-commons-2.14.0.jar:2.14.0]
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [java.base:]
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [java.base:]
    at org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1.run(ActiveMQThreadFactory.java:118) [artemis-commons-2.14.0.jar:2.14.0]
Caused by: java.lang.reflect.InvocationTargetException
    at jdk.internal.reflect.GeneratedConstructorAccessor17.newInstance(Unknown Source)
    at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) [java.base:]
    at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490) [java.base:]
    at io.netty.channel.ReflectiveChannelFactory.newChannel(ReflectiveChannelFactory.java:44) [netty-all-4.1.48.Final.jar:4.1.48.Final]
    ... 23 more
Caused by: io.netty.channel.ChannelException: io.netty.channel.unix.Errors$NativeIoException: newSocketStream(..) failed: Too many open files
    at io.netty.channel.unix.Socket.newSocketStream0(Socket.java:421) [netty-all-4.1.48.Final.jar:4.1.48.Final]
    at io.netty.channel.epoll.LinuxSocket.newSocketStream(LinuxSocket.java:319) [netty-all-4.1.48.Final.jar:4.1.48.Final]
    at io.netty.channel.epoll.LinuxSocket.newSocketStream(LinuxSocket.java:323) [netty-all-4.1.48.Final.jar:4.1.48.Final]
    at io.netty.channel.epoll.EpollSocketChannel.<init>(EpollSocketChannel.java:45) [netty-all-4.1.48.Final.jar:4.1.48.Final]
    ... 27 more
Caused by: io.netty.channel.unix.Errors$NativeIoException: newSocketStream(..) failed: Too many open files

This question looks similar: SocketException : TOO MANY OPEN FILES

For my use case, I receive orders from a website, process them into an ERP, and then feed status back to the website and other systems. Sending messages back to the website API is somewhat slow, so during an event there can be around 700 messages queued up.

The website uses AMQP, while my message routing is broken out by JMS.

Here is the ulimit for the user that runs the broker:

core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 63805
max locked memory       (kbytes, -l) 16384
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 63805
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
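Note that the shell's ulimit does not necessarily match what the broker process actually received (for example when it is started by systemd or from a different login shell), so it may be worth checking the limit of the running process directly. A small sketch, again assuming the pgrep pattern matches your broker process:

cat /proc/$(pgrep -f 'org.apache.activemq.artemis.boot.Artemis')/limits | grep 'open files'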

My JVM memory settings: -Xms1024M -Xmx8G

Here is my broker.xml:

<configuration xmlns="urn:activemq"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xmlns:xi="http://www.w3.org/2001/XInclude"
               xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">

   <core xmlns="urn:activemq:core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="urn:activemq:core ">

      <name>0.0.0.0</name>
      <persistence-enabled>true</persistence-enabled>
      <journal-type>NIO</journal-type>
      <paging-directory>/nfs/amqprod/data/paging</paging-directory>
      <bindings-directory>/nfs/amqprod/data/bindings</bindings-directory>
      <journal-directory>/nfs/amqprod/data/journal</journal-directory>
      <large-messages-directory>/nfs/amqprod/data/large-messages</large-messages-directory>
      <journal-datasync>true</journal-datasync>
      <journal-min-files>2</journal-min-files>
      <journal-pool-files>10</journal-pool-files>
      <journal-device-block-size>4096</journal-device-block-size>
      <journal-file-size>10M</journal-file-size>
      <journal-buffer-timeout>2628000</journal-buffer-timeout>
      <journal-max-io>1</journal-max-io>
      <disk-scan-period>5000</disk-scan-period>
      <max-disk-usage>90</max-disk-usage>
      <critical-analyzer>true</critical-analyzer>
      <critical-analyzer-timeout>120000</critical-analyzer-timeout>
      <critical-analyzer-check-period>60000</critical-analyzer-check-period>
      <critical-analyzer-policy>HALT</critical-analyzer-policy>
      <page-sync-timeout>2628000</page-sync-timeout>
      <jmx-management-enabled>true</jmx-management-enabled>
      <global-max-size>2G</global-max-size>

      <acceptors>

<!-- keystores will be found automatically if they are on the classpath -->
         <acceptor name="netty-ssl-acceptor">tcp://0.0.0.0:5500?sslEnabled=true;keyStorePath={path}/keystore.ks;keyStorePassword={pasword};protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE</acceptor>

         <!-- Acceptor for every supported protocol -->
         <acceptor name="artemis">tcp://0.0.0.0:61616?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;amqpMinLargeMessageSize=102400;protocols=CORE,OPENWIRE;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpDuplicateDetection=true</acceptor>


      </acceptors>

      <!-- HA -->
      <connectors>
        <connector name="artemis">tcp://{Primary IP}:61616</connector>
        <connector name="artemis-backup">tcp://{Secondary IP}:61616</connector>
      </connectors>

      <cluster-user>activemq</cluster-user>
      <cluster-password>{cluster password}</cluster-password>

      <ha-policy>
        <shared-store>
          <master>
            <failover-on-shutdown>true</failover-on-shutdown>
          </master>
        </shared-store>
      </ha-policy>

      <cluster-connections>
        <cluster-connection name="cluster-1">
          <connector-ref>artemis</connector-ref>
          <!--<discovery-group-ref discovery-group-name="discovery-group-1"/>-->
          <static-connectors>
            <connector-ref>artemis-backup</connector-ref>
          </static-connectors>
        </cluster-connection>
       </cluster-connections>
      <!-- HA -->

      <security-settings>
         <security-setting match="#">
            <permission type="createNonDurableQueue" roles="amq"/>
            <permission type="deleteNonDurableQueue" roles="amq"/>
            <permission type="createDurableQueue" roles="amq"/>
            <permission type="deleteDurableQueue" roles="amq"/>
            <permission type="createAddress" roles="amq"/>
            <permission type="deleteAddress" roles="amq"/>
            <permission type="consume" roles="amq"/>
            <permission type="browse" roles="amq"/>
            <permission type="send" roles="amq"/>
            <!-- we need this otherwise ./artemis data imp wouldn't work -->
            <permission type="manage" roles="amq"/>
          </security-setting>
          <security-setting match="SiteCore.#">
            <!--<permission type="createDurableQueue" roles="ecom"/>
            <permission type="deleteDurableQueue" roles="ecom"/>
            <permission type="createAddress" roles="ecom"/>-->
            <permission type="consume" roles="ecom,amq"/>
            <permission type="browse" roles="ecom,amq"/>
            <permission type="send" roles="ecom,amq"/>
         </security-setting>
         <security-setting match="eCommerce.#">
            <!--<permission type="createDurableQueue" roles="ecom"/>
            <permission type="deleteDurableQueue" roles="ecom"/>
            <permission type="createAddress" roles="ecom"/>-->
            <permission type="consume" roles="ecom,amq"/>
        </security-setting>
    
      </security-settings>

      <address-settings>
         <!-- if you define auto-create on certain queues, management has to be auto-create -->
         <address-setting match="activemq.management#">
            <dead-letter-address>DLQ</dead-letter-address>
            <expiry-address>ExpiryQueue</expiry-address>
            <redelivery-delay>0</redelivery-delay>
            <!-- with -1 only the global-max-size is in use for limiting -->
            <max-size-bytes>-1</max-size-bytes>
            <message-counter-history-day-limit>10</message-counter-history-day-limit>
            <address-full-policy>PAGE</address-full-policy>
            <auto-create-queues>true</auto-create-queues>
            <auto-create-addresses>true</auto-create-addresses>
            <auto-create-jms-queues>true</auto-create-jms-queues>
            <auto-create-jms-topics>true</auto-create-jms-topics>
         </address-setting>
         <!--default for catch all-->
         <address-setting match="#">
            <dead-letter-address>DLQ</dead-letter-address>
            <expiry-address>ExpiryQueue</expiry-address>
            <redelivery-delay>0</redelivery-delay>
            <!-- with -1 only the global-max-size is in use for limiting -->
            <max-size-bytes>-1</max-size-bytes>
            <message-counter-history-day-limit>10</message-counter-history-day-limit>
            <address-full-policy>PAGE</address-full-policy>
            <auto-create-queues>true</auto-create-queues>
            <auto-create-addresses>true</auto-create-addresses>
            <auto-create-jms-queues>true</auto-create-jms-queues>
            <auto-create-jms-topics>true</auto-create-jms-topics>
         </address-setting>
      </address-settings>

<addresses>
  <address name="DLQ">
    <anycast>
      <queue name="DLQ" />
    </anycast>
  </address>
  <address name="ExpiryQueue">
    <anycast>
      <queue name="ExpiryQueue" />
    </anycast>
  </address>
  <address name="test1.test">
    <multicast>
      <queue name="test1.test.A">
        <filter string="JMSType='A'" />
      </queue>
      <queue name="test1.test.B" />
    </multicast>
  </address>
  <address name="SiteCore.test.address">
    <multicast>
      <queue name="SiteCore.test.queue" />
    </multicast>
  </address>
  <address name="AX.User.Import.Topic">
    <multicast>
      <queue name="AX.User.Import.Queue" />
    </multicast>
  </address>
  <address name="Boomi.API.eCommerce.SOF.Order.Audit.Queue">
    <anycast>
      <queue name="Boomi.API.eCommerce.SOF.Order.Audit.Queue" />
    </anycast>
  </address>
  <address name="eCommerce.Customer.Information.Topic">
    <anycast>
      <queue name="eCommerce.Customer.Information.Queue" />
    </anycast>
  </address>
  <address name="MAO.SOF.Item.Delete.Topic">
    <multicast>
      <queue name="MAO.SOF.Item.Delete.Queue" />
    </multicast>
  </address>
  <address name="MAO.SOF.Item.Import.Topic">
    <multicast>
      <queue name="MAO.SOF.Item.Import.Queue" />
    </multicast>
  </address>
  <address name="MAO.SOF.Location.Import.Topic">
    <multicast>
      <queue name="MAO.SOF.Location.Import.Queue" />
    </multicast>
  </address>
  <address name="MAO.SOF.Order.Fulfillment.Status.Topic">
    <multicast>
      <queue name="MAO.SOF.Order.Fulfillment.Status.Inventory.Reserve.Queue" />
      <queue name="MAO.SOF.Order.Fulfillment.Status.Sitecore.Queue" />
    </multicast>
  </address>
  <address name="MAO.SOF.User.Import.Topic">
    <multicast>
      <queue name="MAO.SOF.User.Import.Queue" />
    </multicast>
  </address>
  <address name="Marketing.NRM.New.Neighbor.Topic">
    <multicast>
      <queue name="Marketing.NRM.New.Neighbor.Responsys.Queue" />
    </multicast>
  </address>  
  <address name="Pet.Services.Event.Topic">
    <multicast>
      
      <queue name="Pet.Services.Event.Appointment.Booked.Queue">
        <filter string="JMSType='appointment-booked'" />
      </queue>
      <queue name="Pet.Services.Event.Appointment.Canceled.Queue">
        <filter string="JMSType='appointment-canceled'" />
      </queue>
      <queue name="Pet.Services.Event.Appointment.Rescheduled.Queue">
        <filter string="JMSType='appointment-rescheduled'" />
      </queue>
      <queue name="Pet.Services.Event.Client.Created.Queue">
        <filter string="JMSType='client-created'" /> 
      </queue>
      <queue name="Pet.Services.Event.Client.Deleted.Queue">
        <filter string="JMSType='client-deleted'" />
      </queue>
      <queue name="Pet.Services.Event.Client.Updated.Queue">
        <filter string="JMSType='client-updated'" />
      </queue>
    </multicast>
  </address>
  <address name="Process.Tracking.General.Topic">
    <multicast>
      <queue name="Process.Tracking.General.DB.Writer.Queue" />
    </multicast>
  </address>
  <address name="PSP.Utilities.Email.Send.Queue">
    <multicast>
      <queue name="PSP.Utilities.Email.Send.Queue" />
    </multicast>
  </address>
  <address name="SiteCore.Sales.Order.Submission.Topic">
    <multicast>
      <queue name="SiteCore.Sales.Order.Submission.Queue" />
    </multicast>
  </address>
  <!--<address name="SiteCore.SOF.Order.Fulfillment.Submission.Error.Queue">
    <anycast>
      <queue name="SiteCore.SOF.Order.Fulfillment.Submission.Error.Queue" />
    </anycast>
  </address>-->
  <address name="SiteCore.SOF.Order.Fulfillment.Submission.Topic">
    <multicast>
      <queue name="SiteCore.SOF.Order.Fulfillment.Submission.ActiveOmni.Queue" />
      <queue name="SiteCore.SOF.Order.Fulfillment.Submission.Inventory.Reserve.Queue" />
    </multicast>
  </address>
</addresses>

      <!-- Uncomment the following if you want to use the Standard LoggingActiveMQServerPlugin pluging to log in events
      <broker-plugins>
         <broker-plugin class-name="org.apache.activemq.artemis.core.server.plugin.impl.LoggingActiveMQServerPlugin">
            <property key="LOG_ALL_EVENTS" value="true"/>
            <property key="LOG_CONNECTION_EVENTS" value="true"/>
            <property key="LOG_SESSION_EVENTS" value="true"/>
            <property key="LOG_CONSUMER_EVENTS" value="true"/>
            <property key="LOG_DELIVERING_EVENTS" value="true"/>
            <property key="LOG_SENDING_EVENTS" value="true"/>
            <property key="LOG_INTERNAL_EVENTS" value="true"/>
         </broker-plugin>
      </broker-plugins>
      -->

   </core>
</configuration>

Here is the client that I believe is causing me all the trouble:

using (new TimeMeasure("PSP.Commerce.Foundation.Common.Services.BoomiUserService:SendMessage"))
{
    _connection = await GetOrSetQueueConnection();
    var session = new Session(_connection);
    var sender = new SenderLink(session, typeof(T).Name, topicName);
    var serializedData = JsonConvert.SerializeObject(message, Formatting.None, new JsonSerializerSettings {NullValueHandling = NullValueHandling.Ignore});
    var serializedMessage = new Message(serializedData)
    {
        Properties = new Properties
        {
            CreationTime = DateTime.Now
        }
    };
    Log.Info($"Message with body {serializedData} sent to {topicName} during attempt {currentAttempt}/{maxNumberOfAttempts}", this);
    await sender.SendAsync(serializedMessage);
    await session.CloseAsync();
    await _connection.CloseAsync();  // This line was missing
    return true;
}

Solution

ActiveMQ Artemis already enforces a default 60-second connection timeout for any AMQP client using an acceptor where amqpIdleTimeout is not set. See the documentation for details. Therefore, any "stale" connection should be dropped within 60 seconds, and you will see log messages indicating that the connection was cleaned up.
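If you also want the broker to offer AMQP clients a shorter idle timeout explicitly, amqpIdleTimeout (in milliseconds) can be appended to the acceptor URL. The line below is only an illustrative variation of the existing port-5500 acceptor, not a required change:

<acceptor name="netty-ssl-acceptor">tcp://0.0.0.0:5500?sslEnabled=true;keyStorePath={path}/keystore.ks;keyStorePassword={password};protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE;amqpIdleTimeout=30000</acceptor>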

It is worth noting that, apart from network problems that break connections, the most common cause of stale connections is poorly written clients that do not manage their resources properly.
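For reference, here is a minimal sketch of a better-behaved sender with AMQP.Net Lite: it reuses one connection and one SenderLink per topic instead of opening a new session and link for every message, and it closes the links, session and connection exactly once when disposed. The class name, address string and caching scheme are illustrative assumptions, not part of the original service; error handling and thread-safety around reconnects are intentionally left out.

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;
using Amqp; // AMQP.Net Lite

public class ArtemisSender : IAsyncDisposable
{
    // Placeholder broker address and credentials -- adjust for your environment.
    private readonly Address _address = new Address("amqp://user:password@broker-host:5500");
    private readonly ConcurrentDictionary<string, SenderLink> _senders = new ConcurrentDictionary<string, SenderLink>();
    private Connection _connection;
    private Session _session;

    // Lazily open a single connection/session and cache one SenderLink per topic.
    private async Task<SenderLink> GetOrCreateSenderAsync(string topicName)
    {
        if (_connection == null || _connection.IsClosed)
        {
            _connection = await Connection.Factory.CreateAsync(_address);
            _session = new Session(_connection);
            _senders.Clear();
        }
        return _senders.GetOrAdd(topicName, name => new SenderLink(_session, $"sender-{name}", name));
    }

    public async Task SendAsync(string topicName, string serializedData)
    {
        var sender = await GetOrCreateSenderAsync(topicName);
        await sender.SendAsync(new Message(serializedData));
    }

    // Close links, session and connection exactly once, when the sender is disposed.
    public async ValueTask DisposeAsync()
    {
        foreach (var sender in _senders.Values)
        {
            await sender.CloseAsync();
        }
        if (_session != null) await _session.CloseAsync();
        if (_connection != null) await _connection.CloseAsync();
    }
}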

Generally speaking, I think an open-files ulimit of 1024 is quite low for a modern system. I would recommend raising it substantially.
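A rough sketch of how that is usually done on Linux; the artemis user name and the 65536 value are assumptions, and if the broker is started by systemd the limit in the service unit takes precedence over /etc/security/limits.conf:

# /etc/security/limits.conf (applies to login sessions of the broker user)
artemis soft nofile 65536
artemis hard nofile 65536

# Or in the broker's systemd unit, followed by `systemctl daemon-reload` and a service restart
[Service]
LimitNOFILE=65536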
