ValueError: Unrecognized keyword arguments passed to LSTM: {'batch_input_shape'} in Keras


I am trying to build and train a stateful LSTM model with Keras in TensorFlow, but I keep running into a ValueError when specifying the batch_input_shape argument.

Error message:

ValueError: Unrecognized keyword arguments passed to LSTM: {'batch_input_shape': (1, 1, 14)}

Here is a simplified version of my code:

import pandas as pd
from sklearn.preprocessing import MinMaxScaler
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, LSTM

# Load your data
file_path = 'path_to_your_file.csv'
data = pd.read_csv(file_path)

# Create 'date' column with the first day of each month
data['date'] = pd.to_datetime(data['tahun'].astype(str) + '-' + data['bulan'].astype(str) + '-01')
data['date'] = data['date'] + pd.offsets.MonthEnd(0)
data.set_index('date', inplace=True)

# Group by 'date' and sum the 'amaun_rm' column
df_sales = data.groupby('date')['amaun_rm'].sum().reset_index()

# Create a new dataframe to model the difference
df_diff = df_sales.copy()
df_diff['prev_amaun_rm'] = df_diff['amaun_rm'].shift(1)
df_diff = df_diff.dropna()
df_diff['diff'] = df_diff['amaun_rm'] - df_diff['prev_amaun_rm']

# Create new dataframe from transformation from time series to supervised
df_supervised = df_diff.drop(['prev_amaun_rm'], axis=1)
for inc in range(1, 13):
    field_name = 'lag_' + str(inc)
    df_supervised[field_name] = df_supervised['diff'].shift(inc)

# Adding moving averages
df_supervised['ma_3'] = df_supervised['amaun_rm'].rolling(window=3).mean().shift(1)
df_supervised['ma_6'] = df_supervised['amaun_rm'].rolling(window=6).mean().shift(1)
df_supervised['ma_12'] = df_supervised['amaun_rm'].rolling(window=12).mean().shift(1)
df_supervised = df_supervised.dropna().reset_index(drop=True)
df_supervised = df_supervised.fillna(df_supervised.mean())

# Split the data into train and test sets
train_set, test_set = df_supervised[0:-6].values, df_supervised[-6:].values
scaler = MinMaxScaler(feature_range=(-1, 1))
scaler = scaler.fit(train_set)
train_set_scaled = scaler.transform(train_set)
test_set_scaled = scaler.transform(test_set)

# Split into input and output
X_train, y_train = train_set_scaled[:, 1:], train_set_scaled[:, 0]
X_test, y_test = test_set_scaled[:, 1:], test_set_scaled[:, 0]
X_train = X_train.reshape((X_train.shape[0], 1, X_train.shape[1]))
X_test = X_test.reshape((X_test.shape[0], 1, X_test.shape[1]))

# Check the shape of X_train
print("X_train shape:", X_train.shape)  # Should output (44, 1, 14)

# Define the LSTM model
model = Sequential()
model.add(LSTM(4, stateful=True, batch_input_shape=(1, X_train.shape[1], X_train.shape[2])))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')

# Train the model
model.fit(X_train, y_train, epochs=100, batch_size=1, verbose=1, shuffle=False)

# Summarize the model
model.summary()

What I have tried:

  • I verified the shape of X_train, which is (44, 1, 14).
  • I tried using input_shape instead of batch_input_shape, which led to a different error (see the sketch after this list).
  • I made sure my TensorFlow and Keras versions are compatible.
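
For reference, the input_shape variant looked roughly like this (a hypothetical reconstruction of that attempt, not the exact code that was run):

# Hypothetical reconstruction of the input_shape attempt.
# With stateful=True the batch size is still unspecified here,
# which is likely why this variant raised a different error.
model = Sequential()
model.add(LSTM(4, stateful=True, input_shape=(X_train.shape[1], X_train.shape[2])))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')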

System information:

  • Python version: 3.12
  • TensorFlow version: 2.17.0
  • Keras version: 3.4.1
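
For completeness, the installed versions can be confirmed at runtime with:

import tensorflow as tf
import keras

# Print the runtime versions (here: TensorFlow 2.17.0, Keras 3.4.1)
print("TensorFlow:", tf.__version__)
print("Keras:", keras.__version__)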

Question: How do I correctly specify batch_input_shape for my stateful LSTM model in Keras so that I avoid this error? Are there any specific version requirements or additional configuration needed?

python keras lstm
1 Answer

To provide batch_input_shape, you need to add an InputLayer as the first layer of the Sequential model.

Here is the modified code:

from tensorflow.keras.layers import InputLayer

model = Sequential()
model.add(InputLayer(batch_input_shape=(1, X_train.shape[1], X_train.shape[2])))
model.add(LSTM(4, stateful=True))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')

# Train the model
model.fit(X_train, y_train, epochs=100, batch_size=1, verbose=1, shuffle=False)

# Summarize the model
model.summary()
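
If your Keras 3 install rejects the legacy batch_input_shape keyword on InputLayer as well, a minimal sketch using the Keras 3-native batch_shape argument (same (1, timesteps, features) batch shape) would be:

from tensorflow.keras.layers import InputLayer

# Keras 3 exposes the fixed batch dimension through `batch_shape`;
# a fixed batch size (1 here) is required because the LSTM is stateful.
model = Sequential()
model.add(InputLayer(batch_shape=(1, X_train.shape[1], X_train.shape[2])))
model.add(LSTM(4, stateful=True))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')

Either way, the key point is that a stateful LSTM needs the batch size fixed in the input specification rather than only passed to fit(), because the layer carries its internal state across batches of exactly that size.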