Exploding loss in TensorFlow 2.0 linear regression example using GradientTape

Votes: 0 · Answers: 2

I am trying to build a small educational example of multivariate linear regression, but the loss keeps increasing until it explodes instead of getting smaller. Any idea why?

import tensorflow as tf
import numpy as np

data = np.array(
    [
        [100, 35, 35, 12, 0.32],
        [101, 46, 35, 21, 0.34],
        [130, 56, 46, 3412, 12.42],
        [131, 58, 48, 3542, 13.43]
    ]
)

x = data[:, 1:-1]
y_target = data[:, -1]

# w, b and linear_model were missing from the original snippet;
# repeated here from the self-answer below so it runs top to bottom
w = tf.Variable([1, 1, 1], dtype=tf.float64)
b = tf.Variable(1, dtype=tf.float64)

def linear_model(x):
    return b + tf.tensordot(x, w, axes=1)

def loss_function(y, pred):
    return tf.reduce_mean(tf.square(y - pred))

def train(b, w, x, y, lr=0.012):
    with tf.GradientTape() as t:
        current_loss = loss_function(y, linear_model(x))
    # gradients of the loss w.r.t. the weights and the bias
    grad_weight, grad_bias = t.gradient(current_loss, [w, b])
    w.assign_sub(lr * grad_weight)
    b.assign_sub(lr * grad_bias)

epochs = 80
for epoch_count in range(epochs):
    real_loss = loss_function(y_target, linear_model(x))
    train(b, w, x, y_target, lr=0.12)
    print(f"Epoch count {epoch_count}: Loss value: {real_loss.numpy()}")

This happens even if I initialize the weights with the "correct" values (found via a scikit-learn regressor):

w = tf.Variable([-1.76770250e-04, 3.46688912e-01, 2.43827475e-03], dtype=tf.float64)
b = tf.Variable(-11.837184241807234, dtype=tf.float64)
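
For what it's worth, one plausible cause (my own guess, not confirmed in the thread) is the scale of the features: the fourth column runs into the thousands while the others stay below ~60, so with lr=0.12 plain gradient descent overshoots and the updates diverge. A minimal sketch, assuming it is acceptable to standardize x for this educational example, that keeps the same update rule:

import tensorflow as tf
import numpy as np

# Standardize each feature column to zero mean and unit variance
# (assumption: the unscaled fourth column is what blows up the gradients)
x_norm = (x - x.mean(axis=0)) / x.std(axis=0)

w = tf.Variable(tf.zeros(3, dtype=tf.float64))
b = tf.Variable(0.0, dtype=tf.float64)

for epoch in range(80):
    with tf.GradientTape() as t:
        pred = b + tf.tensordot(x_norm, w, axes=1)
        loss = tf.reduce_mean(tf.square(y_target - pred))
    dw, db = t.gradient(loss, [w, b])
    w.assign_sub(0.12 * dw)
    b.assign_sub(0.12 * db)
    print(f"Epoch {epoch}: loss {loss.numpy():.4f}")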
Tags: tensorflow, linear-regression, linear-algebra, gradienttape
2 Answers
1 vote

Here's how you can use a TF2 optimizer for a toy example (as per the comment). I know this isn't an answer, but I didn't want to post it in the comment section because it would mess up the indentation and all that.

import tensorflow as tf

tf_x = tf.Variable(tf.constant(2.0, dtype=tf.float32), name='x')
optimizer = tf.optimizers.SGD(learning_rate=0.1)

# Minimizing y = x**2 with GradientTape and a TF2 optimizer
x_series, y_series = [], []
for step in range(5):
    x_series.append(tf_x.numpy().item())
    with tf.GradientTape() as tape:
        tf_y = tf_x ** 2
    y_series.append(tf_y.numpy().item())

    gradients = tape.gradient(tf_y, tf_x)
    optimizer.apply_gradients(zip([gradients], [tf_x]))
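
For reference: the gradient of tf_y = tf_x ** 2 is 2 * tf_x, so each SGD step computes x - 0.1 * 2x = 0.8x, and x_series should show tf_x shrinking geometrically towards the minimum at 0:

print(x_series)  # roughly [2.0, 1.6, 1.28, 1.024, 0.8192]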

0 votes

Based on @thushv89's input, I'm providing here an interim solution using a working TF2 optimizer, although it doesn't 100% answer my question:

import tensorflow as tf
import numpy as np

data = np.array(
    [
        [100, 35, 35, 12, 0.32],
        [101, 46, 35, 21, 0.34],
        [130, 56, 46, 3412, 12.42],
        [131, 58, 48, 3542, 13.43]
    ]
)

x = data[:, 1:-1]
y_target = data[:, -1]

w = tf.Variable([1, 1, 1], dtype=tf.float64)
b = tf.Variable(1, dtype=tf.float64)

def linear_model(x):
    return b + tf.tensordot(x, w, axes=1)

optimizer = tf.keras.optimizers.Adam()
loss_object = tf.keras.losses.MeanSquaredLogarithmicError()

def train_step(x, y):
    with tf.GradientTape() as tape:
        predicted = linear_model(x)
        loss_value = loss_object(y, predicted)
    print(f"Loss Value: {loss_value}")
    grads = tape.gradient(loss_value, [b, w])
    optimizer.apply_gradients(zip(grads, [b, w]))

def train(epochs):
    for epoch in range(epochs):
        train_step(x, y_target)
    print('Epoch {} finished'.format(epoch))

train(epochs=1000)
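
A further tweak (my suggestion, not from the thread): decorating the training step with tf.function compiles it and makes the 1000-epoch loop noticeably faster. The Python print inside the original train_step would then only fire at trace time, so logging moves out into the eager loop below. This sketch reuses linear_model, loss_object, optimizer, b and w from the snippet above:

@tf.function
def compiled_train_step(x, y):
    with tf.GradientTape() as tape:
        loss_value = loss_object(y, linear_model(x))
    grads = tape.gradient(loss_value, [b, w])
    optimizer.apply_gradients(zip(grads, [b, w]))
    return loss_value

for epoch in range(1000):
    loss_value = compiled_train_step(x, y_target)
    if epoch % 100 == 0:
        print(f"Epoch {epoch}: loss {loss_value.numpy():.6f}")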