Why does this simple (Keras) machine learning code give the wrong answer?


I am trying to learn some time-series neural-network ML and have been getting strange results, so I am trying to model the simplest non-trivial case I can think of: predicting n+1 as the next number in the sequence 0, 1, 2, 3, ..., n (using an LSTM model).

The training data for each data point is the series of immediately preceding numbers, and I assumed the model should solve this easily as long as each training window has length >= 2 (since it is an arithmetic sequence).

The code below returns a constant for all the test data, regardless of the size of the training series. Can someone explain what I am doing wrong?

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import math

import statistics

dim = 5

data = pd.Series(range(0,200))

# Setting 80 percent data for training
training_data_len = math.ceil(len(data) * .8)

# Normalize data
train_data = data[:training_data_len]

#Split dataset
train_data = data[:training_data_len]
test_data = data[training_data_len:]
print(train_data.shape, test_data.shape)

# Selecting values
dataset_train = train_data.values 
# Reshaping 1D to 2D array
dataset_train = np.reshape(dataset_train, (-1,1)) 

# Selecting values
dataset_test = test_data.values
# Reshaping 1D to 2D array
dataset_test = np.reshape(dataset_test, (-1,1))  

X_train = []
y_train = []
for i in range(dim, len(dataset_train)):
    X_train.append(dataset_train[i-dim:i, 0])
    y_train.append(dataset_train[i, 0])


X_test = []
y_test = []
for i in range(dim, len(dataset_test)):
    X_test.append(dataset_test[i-dim:i, 0])
    y_test.append(dataset_test[i, 0])


# The data is converted to Numpy array
X_train, y_train = np.array(X_train), np.array(y_train)

#Reshaping
X_train = np.reshape(X_train, (X_train.shape[0], X_train.shape[1],1))
y_train = np.reshape(y_train, (y_train.shape[0],1))
print("X_train :",X_train.shape,"y_train :",y_train.shape)


# The data is converted to numpy array
X_test, y_test = np.array(X_test), np.array(y_test)

#Reshaping
X_test = np.reshape(X_test, (X_test.shape[0], X_test.shape[1],1))
y_test = np.reshape(y_test, (y_test.shape[0],1))
print("X_test :",X_test.shape,"y_test :",y_test.shape)

# importing libraries
from keras.models import Sequential
from keras.layers import LSTM
from keras.layers import Dense
from keras.layers import SimpleRNN
from keras.layers import Dropout
from keras.layers import GRU, Bidirectional
from keras.optimizers import SGD
from sklearn import metrics
from sklearn.metrics import mean_squared_error

#Initialising the model
regressorLSTM = Sequential()

#Adding LSTM layers
regressorLSTM.add(LSTM(dim, 
                       return_sequences = True, 
                       input_shape = (X_train.shape[1],1)))
regressorLSTM.add(LSTM(dim, 
                       return_sequences = False))

#Adding the output layer
regressorLSTM.add(Dense(1))

#Compiling the model
regressorLSTM.compile(optimizer = 'adam',
                      loss = 'mean_squared_error',
                      metrics = ["accuracy"])

#Fitting the model
regressorLSTM.fit(X_train, 
                  y_train, 
                  batch_size = 1, 
                  epochs = 4)
regressorLSTM.summary()


# predictions with X_test data
y_LSTM = regressorLSTM.predict(X_test)

#Plot for LSTM predictions
plt.plot(train_data.index[dim:], train_data[dim:], label = "train_data", color = "b")
plt.plot(test_data.index, test_data, label = "test_data", color = "g")
plt.plot(test_data.index[dim:], y_LSTM, label = "y_LSTM", color = "orange")
plt.legend()
plt.xlabel("X")
plt.ylabel("Y")
plt.show()
1 Answer

I would suggest plotting the loss curves (and any metrics) for the training and test sets separately, to see whether the model is underfitting or overfitting, for example with something like the sketch below.
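
A minimal sketch, assuming the regressorLSTM model and the X_train/y_train, X_test/y_test arrays from your code above are already defined; passing the test set as validation_data makes Keras record both curves in the returned History object:

import matplotlib.pyplot as plt

# Re-fit while also evaluating on the test set after every epoch.
history = regressorLSTM.fit(X_train, y_train,
                            validation_data=(X_test, y_test),
                            batch_size=1,
                            epochs=4)

# Plot training vs. test (validation) loss per epoch.
plt.plot(history.history["loss"], label="train loss")
plt.plot(history.history["val_loss"], label="test loss")
plt.xlabel("epoch")
plt.ylabel("MSE")
plt.legend()
plt.show()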

I think your model must be underfitting, because you only ran 4 epochs. I suggest increasing that to 100 or even more and seeing whether the result changes (a rough sketch follows below). There are further techniques, such as enlarging the dataset or increasing the model's complexity, but that depends entirely on the experiments you run.
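
A rough sketch of that direction, reusing the X_train/y_train and X_test/y_test arrays from your code; the 32-unit layers, batch size of 16 and 100 epochs are illustrative choices on my part, not tuned values:

from keras.models import Sequential
from keras.layers import LSTM, Dense

# Slightly larger model than in the question (32 units instead of dim=5).
model = Sequential()
model.add(LSTM(32, return_sequences=True, input_shape=(X_train.shape[1], 1)))
model.add(LSTM(32))
model.add(Dense(1))
model.compile(optimizer="adam", loss="mean_squared_error")

# Train for substantially more epochs than the original 4.
history = model.fit(X_train, y_train,
                    validation_data=(X_test, y_test),
                    batch_size=16,
                    epochs=100)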

Hope this helps.
