What causes the gradients and total error to increase in stochastic gradient descent mode?

I am trying to understand the intuition behind neural network learning. I understand the math behind it and have tried working through it analytically. While coding a Multilayer Perceptron from scratch in Python, I ran into the problem of the total error increasing. I have commented my code to explain the operations, and I have also posted the resulting outputs and plots for three different training runs. I also used NumPy vector operations wherever possible to keep the code short.

Summary:

  • The main code generates the data for binary classification and trains the model using the stochastic gradient descent method with Dense class objects and their methods
  • The network has three layers: input (4 nodes), a hidden layer (6 nodes), and output (2 nodes)
  • The Dense class is the implementation of a layer of the network

The Dense class represents one layer of the MLP network:

It contains the following:

  • Constructor: randomly initializes the weights and biases
  • sigmoid method: activates the layer's linear combinations, also called the activation potentials
  • d_sigmoid method: evaluates the first derivative of the sigmoid
  • forward_pass method: performs the layer's forward propagation
  • backward_pass method: performs the layer's backward propagation (a sketch of the formulas these methods implement follows this list)
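
For reference, here is a hedged transcription (my own notation, inferred from the comments and code below, not taken from the original post) of the quantities the Dense class works with, with $x$ the inputs to a layer, $W$ its weight matrix, $b$ its biases, $v$ the activation potentials, $o$ the outputs, $d$ the one-hot target and $\eta$ the learning rate:

$$v = W^\top x + b, \qquad o = \sigma(v), \qquad \sigma'(v) = o\,(1 - o)$$

$$\delta_{\text{output}} = (o - d)\,\sigma'(v), \qquad \delta_{\text{hidden}} = \sigma'(v)\,\sum_k W_{\cdot k}\,\delta_k^{\text{next}}$$

$$W \leftarrow W + \eta\, x\,\delta^\top, \qquad b \leftarrow b + \eta\,\delta$$

(the error at the output nodes is taken as predicted minus desired, and the updates are applied with a plus sign, exactly as in the code below).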
#Importing numpy for vector operations
import numpy as np
np.random.seed(78)
class Dense():
    def __init__(self,n_inputs,n_nodes):
        
        #Weights associated with all n_nodes for n_inputs synaptic inputs
        #Each column in the weight matrix is weight vector associated with one neuron
        
        self.weights = np.random.uniform(low=0,high=1,size=n_inputs*n_nodes).reshape(n_inputs,n_nodes)
        
        #Biases associated with each neuron, shape (n_nodes,)
        #There are n_nodes entries and each one represents the bias associated with one neuron in the layer
        #This one dimension array will be added to linear combinations of all neurons  
        #assuming the synaptic connection associated with biases is '1'
        
        self.biases = np.ones(n_nodes)

    def sigmoid(self):
        #Activates the activation_potential -> ndarray
        #Save the activated ndarray to outputs, as I will need it later
        
        self.outputs = 1 / (1 + np.exp(-self.activation_potentials))
        
        return self.outputs

    def d_sigmoid(self):
        #Derivative of the activation potential -> ndarray (for all neurons, the value of the first derivative of the activation function at activation_potential)
        #Will be used in local gradients calculation

        #Vector value of shape (n_nodes,) 
        return self.outputs * (1 - self.outputs)
        
    def forward_pass(self,inputs):
        #Calculate activation potential of the current layer neurons -> ndarray 
        #Which is inputs times weights add a bias
        #Stores it to activation_potentials
        
        self.activation_potentials = np.dot(self.weights.T,inputs) + self.biases
        
        #Return the outputs of the layer by calling sigmoid(), which activates the activation potentials
        return self.sigmoid()

    def backward_pass(self,learning_rate,inputs_to_layer,target,prev_loc_grads = [],prev_weights = []):

        #The term 'previous layer' in this method refers to the layer next to the current layer (the one that called this method), because the backward signal propagates from the output layer to the input layer

        #inputs_to_layer: ndarray of inputs to the current layer (the layer on which this method was called)
        #target: ndarray of the target after one-hot encoding
        #prev_loc_grads: local gradients of the layer next to the current layer; I call it 'previous' because this is backpropagation and the flow goes from the output layer towards the input layer
        #prev_weights: weight matrix of the layer next to the current layer; I call it 'previous' because of the backward signal flow


        #No previous local gradients means backward_pass was called on the output layer object
        if not len(prev_loc_grads):
            #At Output layer local gradient of each node is error at that node * derivative of activation function
            #While error at a node is (predicted - actual value)
            #Next line perform element wise subtraction of two array (the predicted and desired)
            self.error_at_end_nodes = self.outputs - target
            
            #Calculate the local gradients
            self.loc_gradients = self.error_at_end_nodes * self.d_sigmoid()
        else:
            # Local gradients of nodes in a hidden layer are (derivative of activation * sum of all (local gradients of the next layer's neurons * weights of the synaptic connections to those neurons))

            # Calculating the sum of all (local gradients of the next layer's neurons * weights of the synaptic connections to those neurons)
            temp = np.zeros(prev_weights.shape[0])
            for i in range(prev_loc_grads.size):
                temp += prev_loc_grads[i] * prev_weights[:,i]

            #Local gradients of the hidden layer
            self.loc_gradients = self.d_sigmoid() * temp
            
        #Update Weights,based on learning rate,local gradients and inputs to layer
        self.weights = self.weights + (learning_rate * np.outer(inputs_to_layer,self.loc_gradients))
        
        # The inputs_to_layer term is omitted here because the bias is (conceptually) multiplied by an input from a neuron with a fixed activation of 1
        self.biases = self.biases + learning_rate * self.loc_gradients


        return self.weights,self.biases
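
Before the main code, a minimal shape-check sketch of the Dense class (my own illustrative example, not part of the original code; it assumes the class above is saved as dense.py, as the main code below imports it), using the same 4-6-2 layer sizes described in the summary:

#A quick shape check of the Dense layer above (illustrative sketch only)
import numpy as np
from dense import Dense

hidden = Dense(4,6)                 #hidden layer: 4 synaptic inputs, 6 neurons
output = Dense(6,2)                 #output layer: 6 synaptic inputs, 2 neurons

x = np.random.uniform(size=4)       #one 4-feature sample
h = hidden.forward_pass(x)          #activations of the hidden layer, shape (6,)
o = output.forward_pass(h)          #activations of the output layer, shape (2,)
print(h.shape, o.shape)             #expected: (6,) (2,)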

The main code, containing the training loop and the training data:

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OneHotEncoder

import matplotlib.pyplot as plt
from dense import Dense
import numpy as np
import random


def main():
    print("In Main .. Starting ..")
    
    #Generating Data with 10 samples containing 4 features and one target (binary classification)
    data = make_classification(n_samples=10,n_features=4,n_classes=2)

    
    X_train = data[0]
    y_train = data[1]

    #Normalizing the data
    X_train = X_train/X_train.max()
    
    print('Training Data: (X) : ',X_train)
    print('Training Data: (y): ',y_train)
    
    #Encoding data using one hot encoding technique 
    enc = OneHotEncoder(sparse = False)
    desired = enc.fit_transform(y_train.reshape(-1,1))
    print("Targets after One Hot Encoding: ",desired)
    

    #-------------------------------Experimental Code-----------------------------------------
    
    #network is a list of all layers in the network; each time a layer is created as a Dense() object, it is appended to network
    network = []
    print("----------------------------Layer 1-------------------------")
    network.append(Dense(4,6))
    print('Created Only Hidden Layer: N_nodes: {},N_inputs: {}'.format(6,4))
    print("----------------------------Layer 2-------------------------")
    network.append(Dense(6,2)) 
    print('Created Output Layer: N_nodes: {},N_inputs: {}'.format(2,6))    
    
    epoch = 1
    error = []
    
    #The main training loop, exits after 10 epochs
    while epoch <= 10:
        
        #Random shuffling of training data
        temp = list(zip(X_train,y_train))
        random.shuffle(temp)
        X_train,y_train=zip(*temp)

        #Variables to store the temporary state of the operations
        weights = 0
        biases = 0
        
        #List to store the intermediate weights and biases for mean square error calculation at the end of each epoch
        wb = []

        print('--------------------------Epoch: {}-------------------------'.format(epoch),'\n')
        
        #Select one feature vector from feature matrix and corresponding target vector from desired matrix (Which was obtained from one hot encoding)
        for x,y in zip(X_train,desired):     
            
            #previous_inputs list keeps track of inputs from layer to layer in each epoch
            previous_inputs = []    
            
            #At the start of each sample, the list contains only the inputs from the input nodes, which are the features of the current training sample
            previous_inputs.append(x)

            #This loop iterates over all layers in network and perform forward pass
            for layer in network:
                #forward_pass performs the forward propagation of the layer on the last element of the previous_inputs list, and returns the output of the layer, which is appended to the list as an ndarray because it will be used as the input to the next layer
                previous_inputs.append(layer.forward_pass(previous_inputs[-1]))

            #Ignore the output of the last layer, as I use the previous_inputs list in reverse order in backward_pass in the next loop
            previous_inputs = previous_inputs[:-1]

            #Next loop reverses the network and previous_inputs lists to perform backward propagation of all layers, from the output layer all the way to the input layer
            for layer,inputs in zip(network[::-1],previous_inputs[::-1]):
                
                #If the layer is not the output layer, perform backward propagation using the code inside the if statement
                if layer != network[-1]:
                    
                    #Call backward_pass with learning rate = 0.0001, the inputs to the current layer, the target vector 'y',
                    #prev_loc_gradients (local gradients of the layer next to the current layer),
                    #and prev_weights (weights of the layer next to the current layer)
                    #Store the updated weights and biases for mean square error calculation at the end of the epoch
                    weights,biases = layer.backward_pass(0.0001,inputs,y,prev_loc_gradients,prev_weights)
                
                #otherwise, perform the backward pass for the output layer using the code in the else block
                else:
                    weights,biases = layer.backward_pass(0.0001,inputs,y)
                
                #Store local gradients and weights of the current layer for the next layer's backward pass
                prev_loc_gradients = layer.loc_gradients
                prev_weights = layer.weights
                
                #Add updated weights and biases to wb,will be using it in next loop
                wb.append((weights,biases))
            
            #error_i is the sum of errors over all training examples with the updated weights and biases
            error_i = 0
            
            #This loop calculates Total Error on new weights and biases,by considering the whole training data
            for x_val,y_val in zip(X_train,desired):
                
                previous_inputs = []    
                previous_inputs.append(x_val)
                
                #Perform  forward pass on new weights and biases
                for layer in network:
                    #Forward Pass
                    previous_inputs.append(layer.forward_pass(previous_inputs[-1]))
                
                #add the error of the prediction for the current training sample to the previous errors
                error_i += np.power((previous_inputs[-1] - y_val),2).sum()
            
            #Append the total error for the current sample to the error list, and repeat the process for the next sample; do this for all samples
            error.append(error_i)
        
        #Increase epoch by one, then perform forward and backward passes on the next sample and recalculate the error over all samples; do this while the loop condition is true
        epoch += 1
    
    #Plot the errors after training completes
    plt.plot(error)
    plt.show()
    #-------------------------------Experimental Code-----------------------------------------

if __name__ == "__main__":
    main()
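
For reference, the total-error computation that the inner loop above performs can also be written as a standalone helper (a sketch under the same assumptions as the code above; total_error is my own name, not something from the original code):

#Sum of squared errors over the whole data set with the current weights and biases (illustrative sketch)
import numpy as np

def total_error(network, X, targets):
    err = 0.0
    for x, y in zip(X, targets):
        out = x
        for layer in network:
            out = layer.forward_pass(out)   #propagate the sample through every layer
        err += np.power(out - y, 2).sum()   #squared error of this sample's prediction
    return err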

Finally, I obtained the following outputs and plots, depending on the number of epochs and the size of the training data:

  • For 1 epoch and 10 training samples
In Main .. Starting ..
Training Data: (X) :  [[-0.26390333 -0.12430637 -0.38741338  0.20075948]
 [ 0.63580037 -1.05223163 -0.58551008  0.68911107]
 [-0.54448011  0.08334418 -0.4174701   0.11937366]
 [ 0.22123838 -0.54513245 -0.40486294  0.39508491]
 [-0.3489578  -0.2067747  -0.55992358  0.30225496]
 [ 0.46346633  0.29702914  0.76883225 -0.42087526]
 [ 0.05631264  0.04373764  0.10200898 -0.05777301]
 [ 0.19738736 -0.26007568 -0.10694419  0.15615838]
 [ 0.12548086 -0.17220663 -0.07570972  0.10523554]
 [-0.52398487  1.          0.63178402 -0.68315832]]
Training Data: (y):  [0 1 0 1 0 1 1 1 0 0]
Targets after One Hot Encoding: [[1. 0.]
 [0. 1.]
 [1. 0.]
 [0. 1.]
 [1. 0.]
 [0. 1.]
 [0. 1.]
 [0. 1.]
 [1. 0.]
 [1. 0.]]
----------------------------Layer 1-------------------------
Created Only Hidden Layer: N_nodes: 6,N_inputs: 4
----------------------------Layer 2-------------------------
Created Output Layer: N_nodes: 2,N_inputs: 6
--------------------------Epoch: 1-------------------------

[Plot: total error for 1 epoch and 10 training samples]

  • For 10 epochs and 10 training samples
In Main .. Starting ..
Training Data: (X) :  [[-0.26390333 -0.12430637 -0.38741338  0.20075948]
 [ 0.63580037 -1.05223163 -0.58551008  0.68911107]
 [-0.54448011  0.08334418 -0.4174701   0.11937366]
 [ 0.22123838 -0.54513245 -0.40486294  0.39508491]
 [-0.3489578  -0.2067747  -0.55992358  0.30225496]
 [ 0.46346633  0.29702914  0.76883225 -0.42087526]
 [ 0.05631264  0.04373764  0.10200898 -0.05777301]
 [ 0.19738736 -0.26007568 -0.10694419  0.15615838]
 [ 0.12548086 -0.17220663 -0.07570972  0.10523554]
 [-0.52398487  1.          0.63178402 -0.68315832]]
Training Data: (y):  [0 1 0 1 0 1 1 1 0 0]
Targets after One Hot Encoding: [[1. 0.]
 [0. 1.]
 [1. 0.]
 [0. 1.]
 [1. 0.]
 [0. 1.]
 [0. 1.]
 [0. 1.]
 [1. 0.]
 [1. 0.]]
----------------------------Layer 1-------------------------
Created Only Hidden Layer: N_nodes: 6,N_inputs: 6
--------------------------Epoch: 1-------------------------

--------------------------Epoch: 2-------------------------

--------------------------Epoch: 3-------------------------

--------------------------Epoch: 4-------------------------

--------------------------Epoch: 5-------------------------

--------------------------Epoch: 6-------------------------

--------------------------Epoch: 7-------------------------

--------------------------Epoch: 8-------------------------

--------------------------Epoch: 9-------------------------

--------------------------Epoch: 10-------------------------

[Plot: total error for 10 epochs and 10 training samples]

  • For 50 epochs and 1000 samples
In Main .. Starting ..
Training Data: (X) :  [[ 0.10845729  0.03110484 -0.10935314 -0.01435112]
 [-0.27863109 -0.17048214 -0.04769305  0.04802046]
 [-0.10521553 -0.07933533 -0.07228399  0.01997508]
 ...
 [-0.25583767 -0.24504791 -0.36494096  0.0549903 ]
 [ 0.06933997 -0.29438308 -1.21018002  0.02951967]
 [-0.02084834  0.06847175  0.29115171 -0.00640819]]
Training Data: (y):  [1 0 0 1 0 0 1 0 1 1 0 0 0 0 0 1 1 0 1 1 1 0 1 1 1 0 1 0 0 1 1 0 0 1 1 1 0
 1 1 0 0 1 0 0 0 1 0 1 0 1 1 1 0 1 1 1 0 1 1 0 0 0 0 1 1 0 1 0 1 0 1 1 1 0
 1 0 1 1 0 0 0 1 1 1 1 1 0 0 0 1 1 1 1 1 1 1 1 0 1 1 1 0 0 0 1 1 0 0 1 0 1
 0 0 0 1 0 1 1 0 1 1 0 1 1 0 0 1 0 0 0 0 0 1 0 0 1 0 0 1 1 1 1 1 1 0 0 0 0
 0 0 0 0 0 0 0 0 0 1 1 0 1 0 0 0 1 1 0 1 0 1 0 0 0 0 1 1 1 1 0 0 1 0 0 1 1
 1 0 1 1 0 1 0 1 1 0 0 0 1 1 0 1 0 1 0 1 1 0 1 1 0 1 0 1 0 0 0 0 0 0 0 0 0
 0 1 0 1 0 1 0 0 0 0 0 0 0 1 0 1 0 0 0 1 0 0 1 0 1 0 1 0 0 1 0 1 1 0 0 1 1
 0 0 0 1 1 1 0 1 0 0 0 1 0 0 0 1 1 1 1 1 1 0 1 0 1 0 0 1 0 1 1 1 1 0 1 0 0
 1 1 0 1 0 1 1 1 0 0 1 1 1 0 1 0 0 0 0 1 1 1 1 1 1 0 0 1 0 1 1 0 1 0 1 1 0
 1 0 0 1 1 1 0 0 1 0 1 0 0 0 0 0 1 1 0 0 0 0 0 0 1 0 1 1 1 1 1 0 0 0 1 0 0
 0 0 0 0 1 1 0 0 0 1 0 1 0 0 1 0 0 1 1 0 1 1 1 0 0 0 0 0 1 1 0 1 1 0 1 0 0
 1 1 0 0 1 1 0 0 0 1 1 1 0 1 0 0 1 0 0 0 0 1 0 0 1 1 1 0 1 1 0 1 1 0 1 1 0
 1 1 0 0 1 0 1 1 0 0 0 0 0 1 0 1 0 1 1 1 0 0 0 0 1 0 1 0 1 1 1 1 0 1 0 0 1
 0 0 0 1 0 0 1 1 0 1 0 0 0 1 0 1 1 1 0 1 1 1 1 1 0 1 1 0 0 1 1 0 0 0 1 0 1
 0 0 1 1 0 0 0 0 0 0 1 1 0 1 1 0 1 1 1 1 0 0 0 0 1 0 1 0 0 0 1 0 1 0 0 0 1
 1 1 1 1 1 0 1 0 1 0 0 1 1 0 1 0 0 0 0 0 1 1 1 1 1 1 0 1 0 0 0 0 0 0 0 1 0
 0 1 1 1 1 0 1 0 0 1 1 0 1 1 1 1 0 0 1 0 1 0 0 1 0 0 0 0 0 1 1 1 1 1 1 1 0
 1 1 0 1 0 1 0 1 0 1 0 1 0 1 1 1 0 1 1 1 1 0 0 0 0 0 1 0 0 1 1 0 1 1 0 0 0
 1 0 1 0 1 1 0 1 1 1 0 1 0 1 0 1 0 1 1 1 0 0 0 1 0 0 0 1 1 0 0 0 1 1 0 1 1
 1 1 1 1 1 1 1 0 1 0 0 1 0 0 1 1 0 0 1 1 1 1 0 1 0 0 0 0 0 1 1 1 1 0 0 1 0
 1 0 1 0 1 0 0 1 0 0 0 1 1 1 0 1 0 0 1 1 1 1 0 0 1 0 0 0 0 1 0 1 0 0 0 1 0
 1 0 0 1 1 0 1 1 0 0 0 0 1 1 0 1 0 1 0 1 1 1 0 0 1 1 0 1 0 0 0 1 1 0 0 0 0
 0 1 0 0 1 0 0 0 1 1 1 0 0 1 0 0 0 1 1 1 0 0 0 1 0 0 1 0 1 0 0 1 0 1 1 1 0
 0 1 1 0 0 1 0 0 0 1 1 1 0 1 0 1 1 0 0 1 1 1 0 0 0 1 1 1 1 1 0 0 0 1 1 0 1
 0 1 0 0 0 1 1 1 0 0 1 1 1 0 1 0 1 1 0 0 0 0 0 0 1 0 1 0 1 0 1 1 1 0 1 1 1
 1 1 0 1 1 0 0 1 0 1 0 1 1 1 0 1 0 0 0 0 0 1 0 1 1 1 1 0 0 1 0 1 1 0 0 1 1
 1 1 1 0 1 1 1 1 0 1 1 1 1 1 1 1 1 0 0 0 1 1 1 0 1 0 1 1 0 1 0 1 1 1 1 0 0
 0]
Targets after One Hot Encoding: [[0. 1.]
 [1. 0.]
 [1. 0.]
 ...
 [1. 0.]
 [1. 0.]
 [1. 0.]]
----------------------------Layer 1-------------------------
Created Only Hidden Layer: N_nodes: 6,N_inputs: 6
--------------------------Epoch: 1------------------------- 

--------------------------Epoch: 2------------------------- 

--------------------------Epoch: 3------------------------- 

--------------------------Epoch: 4------------------------- 

--------------------------Epoch: 5------------------------- 

--------------------------Epoch: 6------------------------- 

--------------------------Epoch: 7------------------------- 

--------------------------Epoch: 8------------------------- 

--------------------------Epoch: 9------------------------- 

--------------------------Epoch: 10------------------------- 

--------------------------Epoch: 11------------------------- 

--------------------------Epoch: 12------------------------- 

--------------------------Epoch: 13------------------------- 

--------------------------Epoch: 14------------------------- 

--------------------------Epoch: 15------------------------- 

--------------------------Epoch: 16------------------------- 

--------------------------Epoch: 17------------------------- 

--------------------------Epoch: 18------------------------- 

--------------------------Epoch: 19------------------------- 

--------------------------Epoch: 20------------------------- 

--------------------------Epoch: 21------------------------- 

--------------------------Epoch: 22------------------------- 

--------------------------Epoch: 23------------------------- 

--------------------------Epoch: 24------------------------- 

--------------------------Epoch: 25------------------------- 

--------------------------Epoch: 26------------------------- 

--------------------------Epoch: 27------------------------- 

--------------------------Epoch: 28------------------------- 

--------------------------Epoch: 29------------------------- 

--------------------------Epoch: 30------------------------- 

--------------------------Epoch: 31------------------------- 

--------------------------Epoch: 32------------------------- 

--------------------------Epoch: 33------------------------- 

--------------------------Epoch: 34------------------------- 

--------------------------Epoch: 35------------------------- 

--------------------------Epoch: 36------------------------- 

--------------------------Epoch: 37------------------------- 

--------------------------Epoch: 38------------------------- 

--------------------------Epoch: 39------------------------- 

--------------------------Epoch: 40------------------------- 

--------------------------Epoch: 41------------------------- 

--------------------------Epoch: 42------------------------- 

--------------------------Epoch: 43------------------------- 

--------------------------Epoch: 44------------------------- 

--------------------------Epoch: 45------------------------- 

--------------------------Epoch: 46------------------------- 

--------------------------Epoch: 47------------------------- 

--------------------------Epoch: 48------------------------- 

--------------------------Epoch: 49------------------------- 

--------------------------Epoch: 50------------------------- 

[Plot: total error for 50 epochs and 1000 samples]

I do not understand what is causing the error to increase, since the goal is to reduce it. What am I missing?

