How to implement custom multi-layer gradient descent with tf.GradientTape
I am trying to write a custom gradient-descent step for a multi-layer network using tf.GradientTape. The TensorFlow tutorial only gives a regression example without hidden layers. Please help. My network has one hidden layer (L01). I want to know how to compute the gradient of the loss with respect to w02, and the gradient of the hidden activation L01 with respect to w01 (a sketch showing the two gradients separately follows the code below).
import tensorflow as tf

# input_dim, output_dim, and learning_rate are assumed to be defined
# elsewhere; example values are given here so the snippet runs as-is
input_dim, output_dim = 4, 1
learning_rate = 0.01

# ---------------------------------------------
# model development
w01 = tf.Variable(tf.random.uniform(shape=(input_dim, 10)))
b01 = tf.Variable(tf.zeros(shape=(10,)))
w02 = tf.Variable(tf.random.uniform(shape=(10, output_dim)))
b02 = tf.Variable(tf.zeros(shape=(output_dim,)))
def compute_predictions(features):
    # forward pass: hidden layer L01, then output layer L02
    L01 = tf.nn.sigmoid(tf.matmul(features, w01) + b01)
    L02 = tf.nn.sigmoid(tf.matmul(L01, w02) + b02)
    return L02

def compute_diff01(features):
    # helper that returns only the hidden activation L01
    # (not used by the training loop below)
    L01 = tf.nn.sigmoid(tf.matmul(features, w01) + b01)
    return L01

def compute_loss(y, predictions):
    # mean squared error
    return tf.reduce_mean(tf.square(y - predictions))
# ---------------------------------------------
# model training
def train_on_batch(x, y):
    # persistent=True lets us call tape.gradient() more than once
    with tf.GradientTape(persistent=True) as tape:
        predictions = compute_predictions(x)
        loss = compute_loss(y, predictions)
    # gradient of the loss w.r.t. the output-layer parameters
    dloss_dw02, dloss_db02 = tape.gradient(loss, [w02, b02])
    # gradient of the loss w.r.t. the hidden-layer parameters;
    # the tape backpropagates through L02 and L01 automatically
    dloss_dw01, dloss_db01 = tape.gradient(loss, [w01, b01])
    del tape  # release the resources held by the persistent tape
    w01.assign_sub(learning_rate * dloss_dw01)
    b01.assign_sub(learning_rate * dloss_db01)
    w02.assign_sub(learning_rate * dloss_dw02)
    b02.assign_sub(learning_rate * dloss_db02)
    return loss
# `dataset` is assumed to yield (x, y) batches, e.g.
# dataset = tf.data.Dataset.from_tensor_slices((features, labels)).batch(32)
for epoch in range(10):
    for step, (x, y) in enumerate(dataset):
        loss = train_on_batch(x, y)
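
If the goal is to look at the two gradients separately, as asked above, a persistent tape can also differentiate an intermediate tensor such as L01 directly, provided the forward pass is recomputed inside the tape. A minimal sketch, assuming the variables defined earlier; note that because L01 is not a scalar, tape.gradient(L01, w01) returns the gradient of the sum of L01's entries rather than a full Jacobian (tape.jacobian gives the latter):

def inspect_gradients(x, y):
    with tf.GradientTape(persistent=True) as tape:
        # recompute the forward pass inside the tape so the
        # intermediate tensor L01 is recorded
        L01 = tf.nn.sigmoid(tf.matmul(x, w01) + b01)
        L02 = tf.nn.sigmoid(tf.matmul(L01, w02) + b02)
        loss = tf.reduce_mean(tf.square(y - L02))
    # gradient of the loss w.r.t. the output-layer weights
    dloss_dw02 = tape.gradient(loss, w02)
    # gradient of the hidden activation w.r.t. the hidden-layer weights
    dL01_dw01 = tape.gradient(L01, w01)
    del tape
    return dloss_dw02, dL01_dw01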
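For completeness, the same update can be written with a built-in optimizer instead of the manual assign_sub calls: a single tape.gradient call over all four variables backpropagates through both layers at once. This is a sketch under the same assumptions as above, not the only way to do it:

optimizer = tf.keras.optimizers.SGD(learning_rate=learning_rate)

def train_on_batch_sgd(x, y):
    trainable_vars = [w01, b01, w02, b02]
    with tf.GradientTape() as tape:
        predictions = compute_predictions(x)
        loss = compute_loss(y, predictions)
    # one call returns the gradients for every layer
    grads = tape.gradient(loss, trainable_vars)
    # equivalent to the four assign_sub updates in train_on_batch
    optimizer.apply_gradients(zip(grads, trainable_vars))
    return loss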