Why does my backpropagation algorithm's performance stall, and how can I fix it?
I am learning how to write neural networks, and I am currently working on a backpropagation algorithm with one input layer, one hidden layer, and one output layer. The algorithm runs, and when I throw some test data
x_train = np.array([[1.,2.,-3.,10.],[0.3,-7.8,1.,2.]])
y_train = np.array([[10,-3,6,1],[1,1,1]])
at my algorithm, using the default of 3 hidden units and the default learning rate of 10e-4,
Backprop.train(x_train, y_train, tol = 10e-1)
x_pred = Backprop.predict(x_train)
I get good results:
Tolerances: [10e-1,10e-2,10e-3,10e-4,10e-5]
Iterations: [2678,5255,7106,14270,38895]
Mean absolute error: [0.42540,0.14577,0.04264,0.01735,0.00773]
Sum of squared errors: [1.85383,0.21345,0.01882,0.00311,0.00071].
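For reference, the metrics above can be computed from the predictions roughly like this (a sketch of how such numbers are obtained, not code from my class; it assumes numpy is imported as np and that x_train and y_train have compatible shapes):
x_pred = Backprop.predict(x_train)
mae = np.mean(np.abs(x_pred - y_train))    # mean absolute error
sse = np.sum((x_pred - y_train) ** 2)      # sum of squared errors
print(Backprop.iterations, mae, sse)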
Each time, the sum of squared errors drops by about one decimal place, just as I expect. But when I use test data like this
X_train = np.random.rand(20,7)
Y_train = np.random.rand(20,2)
Tolerances: [10e+1,10e-0,10e-1,10e-3]
Iterations: [11,19,63,80,7931]
Mean absolute error: [0.30322,0.25076,0.25292,0.24327,0.24255]
Sum of squared errors: [4.69919,3.43997,3.50411,3.38170,3.16057]
nothing really changes. I have checked my hidden units, gradients, and weight matrices; they all differ from one another, and the gradients really do shrink, just as intended by the stopping rule I set up in my backpropagation algorithm:
if ( np.sum(E_hidden**2) + np.sum(E_output**2) ) < tol:
    learning = False
where E_hidden and E_output are my gradient matrices. My question is: how can the metrics effectively stay the same for some data even though the gradients are shrinking, and what can I do about it?
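One way to see the two quantities side by side would be to log them inside the training loop, right after the weight updates (a sketch of a possible addition, not something that is already in the code below):
# Inside the while loop of Backprop.train (sketch): compare the stopping quantity
# with the actual fit on the training data every few hundred iterations.
grad_sq = np.sum(E_hidden**2) + np.sum(E_output**2)    # what the stopping rule checks
sse = np.sum((y_pred.T - y_train) ** 2)                 # how far the predictions actually are
if m % 500 == 0:
    print(f"iteration {m}: squared-gradient sum = {grad_sq:.5f}, SSE = {sse:.5f}")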
My backpropagation code looks like this:
import numpy as np

class Backprop:

    def sigmoid(r):
        return (1 + np.exp(-r)) ** (-1)

    def train(x_train, y_train, hidden_units = 3, learning_rate = 10e-4, tol = 10e-3):
        # We need y_train to be 2D. There should be as many rows as there are x_train vectors.
        N = x_train.shape[0]
        I = x_train.shape[1]
        J = hidden_units
        K = y_train.shape[1]    # Number of output units

        # Add the bias units to x_train
        bias = -np.ones(N).reshape(-1, 1)    # Make it 2D so we can stack it
        # Make the row vectors column vectors for easier use when applying matrices.
        x_train = np.hstack((x_train, bias)).T    # x_train.shape = (I+1, N) -> N column vectors of length I+1

        # Create our weight matrices
        W_input = np.random.rand(J, I + 1)     # W_input.shape = (J, I+1)
        W_hidden = np.random.rand(K, J + 1)    # W_hidden.shape = (K, J+1)

        m = 0
        learning = True
        while learning:

            ##### ----- Phase 1: Forward Propagation ----- #####

            # Create the total input to the hidden units
            u_hidden = W_input @ x_train    # u_hidden.shape = (J, N) -> for every training vector we get J hidden states

            # Create the hidden units
            h = Backprop.sigmoid(u_hidden)    # h.shape = (J, N)

            # Create the total input to the output units
            bias = -np.ones(N)
            h = np.vstack((h, bias))    # h.shape = (J+1, N)
            u_output = W_hidden @ h     # u_output.shape = (K, N). For every training vector we get K output states.

            # In the code itself the following is not necessary because, as we remember from the above, the output
            # activation function is the identity function, but let's do it anyway for the sake of clarity.
            y_pred = u_output.copy()    # Now y_pred has the same shape as y_train

            ##### ----- Phase 2: Backward Propagation ----- #####

            # We will calculate the delta terms now and begin with the delta term of the output unit.
            # We will transpose several times now. Before, having column vectors was convenient because matrix
            # multiplication is more intuitive then. But now we need to work with indices and need the right
            # dimensions. Yes, loops are inefficient, but they provide much more clarity so that we can easily
            # connect the theory above with our code.

            # We don't need delta_output right now, because we will update W_hidden with a loop. But we need it
            # for the delta term of the hidden unit.
            delta_output = y_pred.T - y_train

            # Calculate our error gradient for the output units
            E_output = np.zeros((K, J + 1))
            for k in range(K):
                for j in range(J + 1):
                    for n in range(N):
                        E_output[k, j] += (y_pred.T[n, k] - y_train[n, k]) * h.T[n, j]

            # Calculate our change in W_hidden
            W_delta_output = -learning_rate * E_output

            # Update the old weights
            W_hidden = W_hidden + W_delta_output

            # Let's calculate the delta term of the hidden unit
            delta_hidden = np.zeros((N, J + 1))
            for n in range(N):
                for j in range(J + 1):
                    for k in range(K):
                        delta_hidden[n, j] += h.T[n, j] * (1 - h.T[n, j]) * delta_output[n, k] * W_delta_output[k, j]

            # Calculate our error gradient for the hidden units, but exclude the hidden bias unit, because W_input
            # and the hidden bias unit don't share any relation at all.
            E_hidden = np.zeros((J, I + 1))
            for j in range(J):
                for i in range(I + 1):
                    for n in range(N):
                        E_hidden[j, i] += delta_hidden[n, j] * x_train.T[n, i]

            # Calculate our change in W_input
            W_delta_hidden = -learning_rate * E_hidden
            W_input = W_input + W_delta_hidden

            if (np.sum(E_hidden**2) + np.sum(E_output**2)) < tol:
                learning = False

            m += 1    # Iteration count

        Backprop.weights = [W_input, W_hidden]
        Backprop.iterations = m
        Backprop.errors = [E_hidden, E_output]

    ##### ----- #####

    def predict(x):
        N = x.shape[0]
        # x1 = Backprop.weights[1][:,:-1] @ Backprop.sigmoid(Backprop.weights[0][:,:-1] @ x.T)
        # Trying this, we see we really do need to add a bias here as well if we also train with a bias.

        # Add the bias units to x
        bias = -np.ones(N).reshape(-1, 1)    # Make it 2D so we can stack it
        # Make the row vectors column vectors for easier use when applying matrices.
        x = np.hstack((x, bias)).T
        h = Backprop.weights[0] @ x
        u = Backprop.sigmoid(h)    # We need to transform the data using the sigmoidal function
        h = np.vstack((u, bias.reshape(1, -1)))
        return (Backprop.weights[1] @ h).T
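As a side note, the three gradient loops in train can be cross-checked against an equivalent matrix form (my own rewrite using the same array names and shapes, not part of the original code):
# Equivalent matrix form of the gradients computed by the loops above (sketch).
# delta_output.shape = (N, K), h.shape = (J+1, N), x_train.shape = (I+1, N)
E_output_vec = delta_output.T @ h.T                                     # (K, J+1)
delta_hidden_vec = h.T * (1 - h.T) * (delta_output @ W_delta_output)    # (N, J+1)
E_hidden_vec = delta_hidden_vec[:, :J].T @ x_train.T                    # (J, I+1)
# Up to floating-point error these should satisfy:
# np.allclose(E_output_vec, E_output) and np.allclose(E_hidden_vec, E_hidden)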
Solution
I found the answer. If, in Backprop.predict, I write
output = (Backprop.weights[1] @ h).T
return output
instead of the line above, everything works fine.
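Put together, the predict method with that change looks like this (the rest of the body is unchanged):
def predict(x):
    N = x.shape[0]
    # Add the bias units to x, exactly as in train
    bias = -np.ones(N).reshape(-1, 1)
    x = np.hstack((x, bias)).T
    h = Backprop.weights[0] @ x
    u = Backprop.sigmoid(h)
    h = np.vstack((u, bias.reshape(1, -1)))
    # Assign to a variable first instead of returning the expression directly
    output = (Backprop.weights[1] @ h).T
    return output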