Learning rate too low for linear regression


I am new to machine learning, and I am trying to build a multiple linear regression model in Python for the Boston dataset from the scikit-learn package.

I am using SGD (stochastic gradient descent) to optimize the model. It seems I have to use a very small learning rate (0.000000001) for the model to learn anything. With a higher learning rate, the model fails to learn and diverges to NaN or inf.

So, here are my questions:

  1. Is it OK to use such a small learning rate, or is there a problem with the code below?
  2. The validation loss seems to decrease, then increase for a while, then decrease again. Is my model running into overfitting, but luckily escaping it thanks to the instability of SGD compared to BGD (batch gradient descent)?

Thank you!

Here is my code:

from sklearn import datasets
import numpy as np
import matplotlib.pyplot as plt

def loss(x, y, w):
    # root-mean-squared error between predictions and targets
    predict_y = x @ w
    return np.sqrt(np.mean(np.square(y - predict_y)))

def status(w):
    # print the current weight vector (bias plus 13 feature weights)
    w_ = np.squeeze(w)
    print("w = [" + ", ".join(str(v) for v in w_) + "]")

    training_loss = loss(training_x, training_y, w)
    validation_loss = loss(validation_x, validation_y, w)
    print("Training Loss = " + str(training_loss))
    print("Validation Loss = " + str(validation_loss))

    training_predict_y = training_x @ w
    validation_predict_y = validation_x @ w

    print("{:^40s}|{:^40s}".format("training", "validation"))
    print("{:^20s}{:^20s}|{:^20s}{:^20s}".format("predict_y", "true_y", "predict_y", "true_y"))
    for i in range(10):
        print("{:^20f}{:^20f}|{:^20f}{:^20f}".format(float(training_predict_y[i]), float(training_y[i]), float(validation_predict_y[i]), float(validation_y[i])))
    print()

def plot(title, data):
    plt.title(title)
    plt.plot(range(len(data)), data)
    plt.savefig(title + ".png", dpi = 300)
    plt.show()

np.random.seed(2020) # for consistency

# data
dataset = datasets.load_boston() # note: load_boston was removed in scikit-learn 1.2
x = dataset.data
y = dataset.target

# reformat the data
x_ = np.concatenate((np.ones((x.shape[0], 1)), x), axis=1) # prepend a column of ones (x0 = 1) for the bias term
y_ = np.expand_dims(y, axis=1)

# divide data into training set and validation set
training_x = x_[ 0:406, : ]
training_y = y_[ 0:406, : ]

validation_x = x_[ 406:506, : ]
validation_y = y_[ 406:506, : ]

# initialize w
w = np.random.rand(x_.shape[1], 1)
print("Before Training...")
status(w)

# hyperparameter
epochs = 100000
lr = 0.000000001

training_losses = []
validation_losses = []
data_num = training_x.shape[0]
for epoch in range(epochs):    
    for i in range(data_num):
        sample = training_x[ i:i + 1, : ]
        true_y = training_y[ i:i + 1, : ]

        predict_y = sample @ w

        # calculate gradient
        gradient = -(2 / sample.shape[0]) * sample.T @ (true_y - predict_y)

        # update w
        w = w - lr * gradient

    training_loss = loss(training_x, training_y, w)
    validation_loss = loss(validation_x, validation_y, w)
    training_losses.append(training_loss)
    validation_losses.append(validation_loss)

print("After Training...")
status(w)

plot("Training Loss - SGD", training_losses)
plot("Validation Loss - SGD", validation_losses)

Here is the loss curve on the validation set: [figure: Validation Loss - SGD]

numpy machine-learning linear-regression learning-rate
1 Answer

The problem is the np.sqrt() in your cost (loss) function. Judging from how you compute the gradient, you are trying to use a mean-squared-error loss, so the np.sqrt() should be removed.
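A minimal sketch of that fix, reusing the question's variable names: loss() becomes plain MSE, whose gradient is exactly the -(2 / n) * x.T @ (y - predict_y) already computed in the training loop.

def loss(x, y, w):
    # mean squared error; its gradient w.r.t. w is -(2 / n) * x.T @ (y - x @ w),
    # matching the update rule used in the training loop
    predict_y = x @ w
    return np.mean(np.square(y - predict_y))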

In any case, with SGD (stochastic gradient descent) some convergence and divergence is normal, especially at the beginning; it just means the current sample may be hard to optimize.
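One common way to reduce this per-sample noise (not something the original answer shows, just standard SGD practice) is to visit the samples in a fresh random order each epoch. A sketch against the question's loop and variable names:

rng = np.random.default_rng(2020)
for epoch in range(epochs):
    # shuffle the visiting order every epoch instead of always going 0..N-1
    for i in rng.permutation(data_num):
        sample = training_x[i:i + 1, :]
        true_y = training_y[i:i + 1, :]
        predict_y = sample @ w
        gradient = -(2 / sample.shape[0]) * sample.T @ (true_y - predict_y)
        w = w - lr * gradient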

As for the very small learning rate: sometimes it is not actually a problem, because some problems require very carefully sized steps to reach the optimum when you implement gradient descent yourself. You can avoid the issue altogether by using a more powerful optimizer such as L-BFGS or BFGS.
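For example, here is a sketch (not from the original post) of fitting the same weights with L-BFGS via scipy.optimize.minimize, assuming the training_x and training_y arrays defined in the question:

from scipy.optimize import minimize

def mse(w_flat, x, y):
    # scipy optimizes over a flat 1-D parameter vector
    w = w_flat.reshape(-1, 1)
    return np.mean(np.square(y - x @ w))

def mse_grad(w_flat, x, y):
    # analytic MSE gradient, returned flat to match w_flat
    w = w_flat.reshape(-1, 1)
    n = x.shape[0]
    return (-(2 / n) * x.T @ (y - x @ w)).ravel()

result = minimize(mse, np.zeros(training_x.shape[1]),
                  args=(training_x, training_y),
                  jac=mse_grad, method="L-BFGS-B")
w_lbfgs = result.x.reshape(-1, 1)  # no learning rate to tune at all

Another common remedy, also not mentioned in the original answer: standardize the features first (the Boston columns span very different scales), which typically lets plain SGD work with a much larger learning rate.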
