Backpropagation in a Convolutional Neural Network

How do I get backpropagation working in a convolutional neural network?

I want to write my own convolutional neural network in Python using only the NumPy library, so I followed these two tutorials on how a CNN works and how to train one: https://victorzhou.com/blog/intro-to-cnns-part-1/ and https://towardsdatascience.com/training-a-convolutional-neural-network-from-scratch-2235c2a25754. But for some reason the training just doesn't work: nobody has replied to my comments on the posts, and my math teacher doesn't understand it either. Please help me here. I understand calculus; what I need is a clear explanation of why his code and mine don't work, or of how backpropagation actually works. I have a generative adversarial CNN due in a month, so I really need to get this solved. Any and all help is greatly appreciated; thank you all.
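
One way to check whether a backprop implementation is correct is a numerical gradient check: compare each analytic gradient against a central-difference estimate of the loss. Below is a minimal sketch of such a check, assuming only a callable that maps a parameter array to a scalar loss; the helper name numerical_grad_check is just for illustration and is not part of either tutorial.

import numpy as np

def numerical_grad_check(f,x,analytic_grad,eps=1e-5):
    '''
    Compares an analytic gradient against a central-difference estimate.
    - f is a callable taking the array x and returning a scalar loss
    - x is the parameter array the gradient was computed at (modified in
      place during the check, then restored)
    - analytic_grad has the same shape as x
    Returns the largest absolute difference; it should be tiny (~1e-8)
    if the backprop is right.
    '''
    num_grad = np.zeros_like(x,dtype=float)
    it = np.nditer(x,flags=['multi_index'])
    while not it.finished:
        idx = it.multi_index
        old = x[idx]
        x[idx] = old + eps
        loss_plus = f(x)
        x[idx] = old - eps
        loss_minus = f(x)
        x[idx] = old  # restore the original value
        num_grad[idx] = (loss_plus - loss_minus) / (2 * eps)
        it.iternext()
    return np.max(np.abs(num_grad - analytic_grad))

For example, x could be softmax.weights, with f a small wrapper that runs a forward pass on one fixed image and returns the cross-entropy loss (the wrapper can ignore its argument, since the check edits the weights in place); the result is then compared against the d_L_d_w computed inside softmax.backprop.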

Here is his code:

'''main.py'''
import numpy as np
from conv import Conv3x3
from maxpool import MaxPool2
from softmax import Softmax

# We only use the first 1k examples of each set in the interest of time.
# Feel free to change this if you want.
with np.load("mnist.npz") as mnist:
    train_images = mnist["training_images"][:1000]
    train_labels = mnist["training_labels"][:1000]
    test_images = mnist["test_images"][:1000]
    test_labels = mnist["test_labels"][:1000]

conv = Conv3x3(8)                  # 28x28x1 -> 26x26x8
pool = MaxPool2()                  # 26x26x8 -> 13x13x8
softmax = Softmax(13 * 13 * 8,10) # 13x13x8 -> 10

def forward(image,label):
  '''
  Completes a forward pass of the CNN and calculates the accuracy and
  cross-entropy loss.
  - image is a 2d numpy array
  - label is a digit
  '''
  # We transform the image from [0,255] to [-0.5,0.5] to make it easier
  # to work with. This is standard practice.
  out = conv.forward((image / 255) - 0.5)
  out = pool.forward(out)
  out = softmax.forward(out)

  # Calculate cross-entropy loss and accuracy. np.log() is the natural log.
  loss = -np.log(out[label])
  acc = 1 if np.argmax(out) == label else 0

  return out,loss,acc

def train(im,label,lr=.005):
  '''
  Completes a full training step on the given image and label.
  Returns the cross-entropy loss and accuracy.
  - image is a 2d numpy array
  - label is a digit
  - lr is the learning rate
  '''
  # Forward
  out,loss,acc = forward(im,label)

  # Calculate initial gradient
  gradient = np.zeros(10)
  gradient[label] = -1 / out[label]
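  # Why -1 / out[label]: the loss is L = -ln(out[label]), so by the chain rule
  # dL/d(out[label]) = -1 / out[label], while the gradient with respect to
  # every other output is 0, because those outputs never appear in the loss.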

  # Backprop
  gradient = softmax.backprop(gradient,lr)
  gradient = pool.backprop(gradient)
  gradient = conv.backprop(gradient,lr)
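  # Shape check for the chain above: the initial gradient has shape (10,);
  # softmax.backprop reshapes it back to the pool output, (13, 13, 8);
  # pool.backprop upsamples that to the conv output, (26, 26, 8); and
  # conv.backprop consumes it and returns None, since conv is the first layer.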

  return loss,acc

print('MNIST CNN initialized!')

# Train the CNN for 3 epochs
for epoch in range(3):
  print('--- Epoch %d ---' % (epoch + 1))

  # Shuffle the training data
  permutation = np.random.permutation(len(train_images))
  train_images = train_images[permutation]
  train_labels = train_labels[permutation]

  # Train!
  loss = 0
  num_correct = 0
  for i,(im,label) in enumerate(zip(train_images,train_labels)):
    if i > 0 and i % 100 == 99:
      print(
        '[Step %d] Past 100 steps: Average Loss %.3f | Accuracy: %d%%' %
        (i + 1,loss / 100,num_correct)
      )
      loss = 0
      num_correct = 0

    l,acc = train(im,label)
    loss += l
    num_correct += acc

# Test the CNN
print('\n--- Testing the CNN ---')
loss = 0
num_correct = 0
for im,label in zip(test_images,test_labels):
  _,l,acc = forward(im,label)
  loss += l
  num_correct += acc

num_tests = len(test_images)
print('Test Loss:',loss / num_tests)
print('Test Accuracy:',num_correct / num_tests)


'''conv.py'''
import numpy as np

'''
Note: In this implementation,we assume the input is a 2d numpy array for simplicity,because that's
how our MNIST images are stored. This works for us because we use it as the first layer in our
network,but most CNNs have many more Conv layers. If we were building a bigger network that needed
to use Conv3x3 multiple times,we'd have to make the input be a 3d numpy array.
'''

class Conv3x3:
  # A Convolution layer using 3x3 filters.

  def __init__(self,num_filters):
    self.num_filters = num_filters

    # filters is a 3d array with dimensions (num_filters,3,3)
    # We divide by 9 to reduce the variance of our initial values
    self.filters = np.random.randn(num_filters,3,3) / 9

  def iterate_regions(self,image):
    '''
    Generates all possible 3x3 image regions using valid padding.
    - image is a 2d numpy array.
    '''
    h,w = image.shape

    for i in range(h - 2):
      for j in range(w - 2):
        im_region = image[i:(i + 3),j:(j + 3)]
        yield im_region,i,j

  def forward(self,input):
    '''
    Performs a forward pass of the conv layer using the given input.
    Returns a 3d numpy array with dimensions (h,w,num_filters).
    - input is a 2d numpy array
    '''
    self.last_input = input

    h,w = input.shape
    output = np.zeros((h - 2,w - 2,self.num_filters))

    for im_region,i,j in self.iterate_regions(input):
      output[i,j] = np.sum(im_region * self.filters,axis=(1,2))

    return output

  def backprop(self,d_L_d_out,learn_rate):
    '''
    Performs a backward pass of the conv layer.
    - d_L_d_out is the loss gradient for this layer's outputs.
    - learn_rate is a float.
    '''
    d_L_d_filters = np.zeros(self.filters.shape)

    for im_region,i,j in self.iterate_regions(self.last_input):
      for f in range(self.num_filters):
        d_L_d_filters[f] += d_L_d_out[i,j,f] * im_region

    # Update filters
    self.filters -= learn_rate * d_L_d_filters

    # We aren't returning anything here since we use Conv3x3 as the first layer in our CNN.
    # Otherwise,we'd need to return the loss gradient for this layer's inputs,just like every
    # other layer in our CNN.
    return None
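
The comment above is worth spelling out: if Conv3x3 were not the first layer, backprop would also have to return the loss gradient for its inputs. For a valid 3x3 cross-correlation, every input pixel collects gradient from each output window that covered it. Here is a minimal sketch of a method that could be added to the class for that purpose; this is my own reconstruction under the tutorial's single-channel-input assumption, not part of the original code:

  def backprop_input(self,d_L_d_out):
    '''
    Sketch of the input gradient this layer would return if it were stacked
    under other layers. d_L_d_out has shape (h - 2,w - 2,num_filters);
    the result has the 2d input shape (h,w).
    '''
    d_L_d_input = np.zeros(self.last_input.shape)

    for im_region,i,j in self.iterate_regions(self.last_input):
      for f in range(self.num_filters):
        # Output pixel (i,j,f) was produced by filter f over the 3x3 window
        # starting at (i,j), so its gradient flows back into that window
        # through the filter weights.
        d_L_d_input[i:(i + 3),j:(j + 3)] += d_L_d_out[i,j,f] * self.filters[f]

    return d_L_d_input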


'''maxpool.py'''


import numpy as np

class MaxPool2:
  # A Max Pooling layer using a pool size of 2.

  def iterate_regions(self,image):
    '''
    Generates non-overlapping 2x2 image regions to pool over.
    - image is a 2d numpy array
    '''
    h,w,_ = image.shape
    new_h = h // 2
    new_w = w // 2

    for i in range(new_h):
      for j in range(new_w):
        im_region = image[(i * 2):(i * 2 + 2),(j * 2):(j * 2 + 2)]
        yield im_region,i,j

  def forward(self,input):
    '''
    Performs a forward pass of the maxpool layer using the given input.
    Returns a 3d numpy array with dimensions (h / 2,w / 2,num_filters).
    - input is a 3d numpy array with dimensions (h,w,num_filters)
    '''
    self.last_input = input

    h,w,num_filters = input.shape
    output = np.zeros((h // 2,w // 2,num_filters))

    for im_region,i,j in self.iterate_regions(input):
      output[i,j] = np.amax(im_region,axis=(0,1))

    return output

  def backprop(self,d_L_d_out):
    '''
    Performs a backward pass of the maxpool layer.
    Returns the loss gradient for this layer's inputs.
    - d_L_d_out is the loss gradient for this layer's outputs.
    '''
    d_L_d_input = np.zeros(self.last_input.shape)

    for im_region,i,j in self.iterate_regions(self.last_input):
      h,w,f = im_region.shape
      amax = np.amax(im_region,axis=(0,1))

      for i2 in range(h):
        for j2 in range(w):
          for f2 in range(f):
            # If this pixel was the max value,copy the gradient to it.
            if im_region[i2,j2,f2] == amax[f2]:
              d_L_d_input[i * 2 + i2,j * 2 + j2,f2] = d_L_d_out[i,j,f2]

    return d_L_d_input
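
To make the triple loop above concrete: within each 2x2 region, only the position that held the maximum influenced the pooled output, so it alone receives the incoming gradient. A tiny standalone example:

import numpy as np

# One 2x2 region from a single filter map, and the gradient arriving at the
# pooled output pixel this region produced.
region = np.array([[0.1,0.9],
                   [0.4,0.2]])
d_out = 5.0

# Only the max (0.9, at position (0,1)) affected the output, so it receives
# the whole gradient; the other three positions get zero.
d_region = np.where(region == region.max(),d_out,0.0)
print(d_region)
# [[0. 5.]
#  [0. 0.]]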


'''softmax.py'''


import numpy as np

class Softmax:
  # A standard fully-connected layer with softmax activation.

  def __init__(self,input_len,nodes):
    # We divide by input_len to reduce the variance of our initial values
    self.weights = np.random.randn(input_len,nodes) / input_len
    self.biases = np.zeros(nodes)

  def forward(self,input):
    '''
    Performs a forward pass of the softmax layer using the given input.
    Returns a 1d numpy array containing the respective probability values.
    - input can be any array with any dimensions.
    '''
    self.last_input_shape = input.shape

    input = input.flatten()
    self.last_input = input

    input_len,nodes = self.weights.shape

    totals = np.dot(input,self.weights) + self.biases
    self.last_totals = totals

    exp = np.exp(totals)
    return exp / np.sum(exp,axis=0)

  def backprop(self,d_L_d_out,learn_rate):
    '''
    Performs a backward pass of the softmax layer.
    Returns the loss gradient for this layer's inputs.
    - d_L_d_out is the loss gradient for this layer's outputs.
    - learn_rate is a float.
    '''
    # We know only 1 element of d_L_d_out will be nonzero
    for i,gradient in enumerate(d_L_d_out):
      if gradient == 0:
        continue

      # e^totals
      t_exp = np.exp(self.last_totals)

      # Sum of all e^totals
      S = np.sum(t_exp)

      # Gradients of out[i] against totals
      d_out_d_t = -t_exp[i] * t_exp / (S ** 2)
      d_out_d_t[i] = t_exp[i] * (S - t_exp[i]) / (S ** 2)

      # Gradients of totals against weights/biases/input
      d_t_d_w = self.last_input
      d_t_d_b = 1
      d_t_d_inputs = self.weights

      # Gradients of loss against totals
      d_L_d_t = gradient * d_out_d_t

      # Gradients of loss against weights/biases/input
      d_L_d_w = d_t_d_w[np.newaxis].T @ d_L_d_t[np.newaxis]
      d_L_d_b = d_L_d_t * d_t_d_b
      d_L_d_inputs = d_t_d_inputs @ d_L_d_t

      # Update weights / biases
      self.weights -= learn_rate * d_L_d_w
      self.biases -= learn_rate * d_L_d_b

      return d_L_d_inputs.reshape(self.last_input_shape)
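
The d_out_d_t lines above are exactly the step my own version gets stuck on, and they can be sanity-checked numerically: for out = softmax(totals), the analytic derivatives of out[i] with respect to each total should match a central-difference estimate. A minimal standalone sketch:

import numpy as np

totals = np.random.randn(10)
i = 3  # the output component being differentiated (the true-label index)

def softmax(t):
    e = np.exp(t)
    return e / np.sum(e)

t_exp = np.exp(totals)
S = np.sum(t_exp)

# Analytic gradients of out[i] against totals, as computed in backprop() above.
d_out_d_t = -t_exp[i] * t_exp / (S ** 2)
d_out_d_t[i] = t_exp[i] * (S - t_exp[i]) / (S ** 2)

# Central-difference estimate for comparison.
eps = 1e-6
num = np.zeros(10)
for k in range(10):
    tp,tm = totals.copy(),totals.copy()
    tp[k] += eps
    tm[k] -= eps
    num[k] = (softmax(tp)[i] - softmax(tm)[i]) / (2 * eps)

print(np.max(np.abs(d_out_d_t - num)))  # should be tiny, around 1e-10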

Here is my personal adaptation:

When it comes to backpropagation and gradient descent, neither version works.



'''main.py'''

import numpy as np




class conv_layer:
    def __init__(self,num_filter,filter_dimention):
        self.num_filter = num_filter
        self.filter_dimention = filter_dimention
        self.filters = np.random.randn(num_filter,filter_dimention,filter_dimention) / filter_dimention**2


    def iterate_regions(self,image):
        h,w = image.shape
        for i in range(h - (self.filter_dimention - 1)):
            for j in range(w - (self.filter_dimention - 1)):
                img_region = image[i:(i + self.filter_dimention),j:(j + self.filter_dimention)]
                yield img_region,i,j


    def feedforward(self,input):

        self.last_input = input

        h,w = input.shape
        output = np.zeros((h - (self.filter_dimention - 1),w - (self.filter_dimention - 1),self.num_filter))

        for img_region,i,j in self.iterate_regions(input):
            output[i,j] = np.sum(img_region * self.filters,axis=(1,2))
        return output

    def backprop(self,d_L_d_out,learn_rate):

        d_L_d_filters = np.zeros(self.filters.shape)

        for im_region,i,j in self.iterate_regions(self.last_input):
            for f in range(self.num_filter):
                d_L_d_filters[f] += d_L_d_out[i,j,f] * im_region

        self.filters -= learn_rate * d_L_d_filters

        return None




class max_pooling_layer:
    def __init__(self,pool_size):
        self.pool_size = pool_size

    def iterate_regions(self,image):
        h,w,_ = image.shape
        new_h = h // self.pool_size
        new_w = w // self.pool_size

        for i in range(new_h):
            for j in range(new_w):
                img_region = image[(i * self.pool_size):(i * self.pool_size +
                                                         self.pool_size),(j * self.pool_size):(j * self.pool_size +
                                                         self.pool_size)]
                yield img_region,i,j

    def feedforward(self,input):
        self.last_input = input
        h,w,num_filters = input.shape
        output = np.zeros(
            (h // self.pool_size,w // self.pool_size,num_filters))

        for img_region,i,j in self.iterate_regions(input):
            output[i,j] = np.amax(img_region,axis=(0,1))

        return output

    def backprop(self,d_L_d_out):
        d_L_d_input = np.zeros(self.last_input.shape)

        for im_region,i,j in self.iterate_regions(self.last_input):
            h,w,f = im_region.shape
            amax = np.amax(im_region,axis=(0,1))

            for i2 in range(h):
                for j2 in range(w):
                    for f2 in range(f):
                        if im_region[i2,j2,f2] == amax[f2]:
                            d_L_d_input[i * 2 + i2,j * 2 + j2,f2] = d_L_d_out[i,j,f2]

        return d_L_d_input




class soft_max_layer:
    def __init__(self,input_len,nodes):

        self.weights = np.random.randn(input_len,nodes) / input_len
        self.biases = np.zeros(nodes)

    def feedforward(self,input):

        self.last_input_shape = input.shape

        input = input.flatten()
        self.last_input = input

        input_len,nodes = self.weights.shape

        totals = np.dot(input,self.weights) + self.biases
        self.last_totals = totals

        exp = np.exp(totals)
        return exp / np.sum(exp,axis=0)

    def backprop(self,d_L_d_out,learn_rate):
        for i,gradient in enumerate(d_L_d_out):

            if gradient == 0:
                pass
            else:
                S = np.sum(np.exp(d_L_d_out))
                N = np.e**self.last_totals[i]
                # (incomplete: this is as far as I got; nothing is computed
                # or returned for d_L_d_inputs, so no gradient ever reaches
                # the earlier layers)




class Convolutional_Neural_Network:
    def __init__(self,num_filter,filter_dimention,pool_size):
        self.conv_layer = conv_layer(num_filter,filter_dimention)
        self.max_pooling_layer = max_pooling_layer(pool_size)
        self.soft_max_layer = soft_max_layer(13 * 13 * 8,10)

    def feedforward(self,input):
        out = self.conv_layer.feedforward(input)
        out = self.max_pooling_layer.feedforward(out)
        out = self.soft_max_layer.feedforward(out)
        return out
    
    def calculate_accuracy(self,inputs,outputs):
        accuracy = 0
        for i,(input,output) in enumerate(zip(inputs,outputs)):
            if np.argmax(self.feedforward(input)) == np.argmax(output):
                accuracy += 1
        accuracy /= len(inputs)
        return accuracy

    def train_network(self,inputs,outputs):

        for image,output in zip(inputs,outputs):
            
            output = output.astype(int)
            out = self.feedforward((image / 255) - 0.5)
            
            gradient = np.zeros(10)
            gradient[output] = -1 / out[output]

            gradient = self.soft_max_layer.backprop(gradient,0.05)
            gradient = self.max_pooling_layer.backprop(gradient)
            gradient = self.conv_layer.backprop(gradient,0.05)


with np.load('mnist.npz') as data:

    train_size = 1

    training_images = data['training_images'][:train_size]
    training_images = np.reshape(training_images,(train_size,28,28))
    training_labels = data['training_labels'][:train_size]

print("start....")
CNN = Convolutional_Neural_Network(8,3,2)

print(CNN.feedforward(training_images[0]).reshape((10,1)))
print(training_labels[0])

epochs = 5
for i in range(epochs):
    CNN.train_network(training_images,training_labels)
    
    accuracy = CNN.calculate_accuracy(training_images,training_labels)
    print("\n" + "EPOCH " + str(i + 1) + " DONE | out of " + str(epochs) + " Accuracy: " + str(accuracy * 100) + "%")
    print(CNN.feedforward(training_images[0]).reshape((10,1)))
    print(training_labels[0])
