What is wrong with my backpropagation? (Java)


protected List<Double> weight;      // weights of the connections coming from the previous layer
protected List<ABPPerceptron> next; // neurons of the next layer
protected List<ABPPerceptron> prev; // neurons of the previous layer
protected Double bias;              // bias input value
protected Double w0;                // weight of the bias input
protected Double theta;             // learning rate
protected Double net;               // weighted input sum
protected Double out;               // activation output
protected Double upgrade;           // delta (error term) computed during backpropagation

public ABPPerceptron(int k,Double bias,Double theta) {
    if (bias == 0D)
        System.out.println("bias must be not 0!!");

    this.theta = theta;
    this.bias = bias;
    weight = new ArrayList<>();
    Random rand = new Random();
    for (int i = 0; i < k; i++) {
        weight.add(rand.nextDouble());
    }
    w0 = rand.nextDouble();
    System.out.println("weights: " + w0 + " " + Arrays.toString(weight.toArray()));
}

// forward pass: net = w0 * bias + sum over the previous layer of (output * weight), then out = evaluate(net)
public void calculate() {
    net = bias * w0;
    for (int i = 0; i < prev.size(); i++) {
        net += prev.get(i).getout() * weight.get(i);
    }
    out = evaluate(net);
}

// backpropagation for an output neuron: the delta ("upgrade") is (expected - out) * f'(out);
// the bias weight and each incoming weight are then adjusted by theta * delta * (corresponding input)
public void outLayerLearn(Double expected) {
    upgrade = (expected - out) * derivate(out);

    w0 += theta * upgrade * bias;
    for (int i = 0; i < weight.size(); i++) {
        weight.set(i,weight.get(i) + (theta * upgrade * prev.get(i).out));
    }
}

// backpropagation for a hidden neuron: the delta is the sum of the next layer's deltas,
// each weighted by the connection leaving this neuron, multiplied by f'(out);
// the weights are then adjusted the same way as in outLayerLearn
public void hiddenLayerLearn() {
    upgrade = 0D;
    int f = next.get(0).prev.indexOf(this); // index of this neuron in next.get(0)'s prev list (assumed the same for every next neuron)
    for (ABPPerceptron n : next) {
        upgrade += n.upgrade * n.weight.get(f);
    }
    upgrade *= derivate(out);
    w0 += theta * upgrade * bias;
    for (int i = 0; i < weight.size(); i++) {
        weight.set(i,weight.get(i) + (theta * upgrade * prev.get(i).out));
    }
}
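
evaluate and derivate are not included in the listing above. Since derivate is applied to out rather than to net, the sigmoid pair consistent with that usage would be the logistic function and its derivative written in terms of the output. A minimal sketch, assuming that is what the class uses:

    protected Double evaluate(Double net) {
        // logistic sigmoid activation
        return 1.0 / (1.0 + Math.exp(-net));
    }

    protected Double derivate(Double out) {
        // sigmoid derivative expressed through the output: f'(net) = out * (1 - out)
        return out * (1.0 - out);
    }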

This is part of my neuron class. The network calls outLayerLearn on all output neurons and hiddenLayerLearn on every other neuron except the last one, the input layer. I always call calculate before each epoch.
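
The per-sample order this implies is: forward calculate on every non-input neuron, then outLayerLearn on the output layer, then hiddenLayerLearn on the hidden layers from back to front. A sketch of such a driving loop, assuming the network stores its neurons in a List<List<ABPPerceptron>> with the input layer at index 0 (this structure is only an illustration, not the actual network class):

    // hypothetical driver (uses java.util.List), assuming layers.get(0) is the input layer
    // and layers.get(layers.size() - 1) is the output layer
    void learnOneSample(List<List<ABPPerceptron>> layers, List<Double> expected) {
        // forward pass: calculate every neuron behind the input layer
        for (int l = 1; l < layers.size(); l++) {
            for (ABPPerceptron n : layers.get(l)) {
                n.calculate();
            }
        }
        // backward pass: output layer first...
        List<ABPPerceptron> outputLayer = layers.get(layers.size() - 1);
        for (int i = 0; i < outputLayer.size(); i++) {
            outputLayer.get(i).outLayerLearn(expected.get(i));
        }
        // ...then the hidden layers, from the last hidden layer back towards the input
        for (int l = layers.size() - 2; l >= 1; l--) {
            for (ABPPerceptron n : layers.get(l)) {
                n.hiddenLayerLearn();
            }
        }
    }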

My problem is that even after 10000 epochs I cannot get correct results.

After setting up the XOR inputs and outputs, my main method looks like this. The XOR network has 2 input, 2 hidden, and 1 output sigmoid neuron.

    int epochs = 10000;
    for (int i = 0; i < epochs; i++) {
        for (int j = 0; j < inputs.size(); j++) {
            xor.calculate(inputs.get(j));
            List<Double> e = new ArrayList<Double>();
            e.add(expected.get(j));
            xor.learn(inputs.get(j),e);
        }
    }
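
For completeness, inputs and expected are not shown; for XOR they hold the four input patterns and their targets. A sketch, with the types inferred from how the loop above uses them (inputs as List<List<Double>>, expected as List<Double>):

    // XOR truth table (uses java.util.Arrays and java.util.List)
    List<List<Double>> inputs = Arrays.asList(
            Arrays.asList(0D, 0D),
            Arrays.asList(0D, 1D),
            Arrays.asList(1D, 0D),
            Arrays.asList(1D, 1D));
    List<Double> expected = Arrays.asList(0D, 1D, 1D, 0D);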
