ValueError about ndim when running model.predict(X)

Problem description · votes: 0 · answers: 2

I am training my model on my data with this code:

import numpy as np
import tensorflow as tf

tf.keras.backend.clear_session()
tf.random.set_seed(50)
np.random.seed(50)

train_set = windowed_dataset(x_train, window_size=30, batch_size=15,
                             shuffle_buffer=shuffle_buffer_size)
model = tf.keras.models.Sequential([
  tf.keras.layers.Conv1D(filters=100, kernel_size=5,
                         strides=1, padding="causal",
                         activation="relu",
                         input_shape=[None, 1]),
  tf.keras.layers.LSTM(100, return_sequences=True),
  tf.keras.layers.LSTM(100, return_sequences=True),
  #tf.keras.layers.Dense(30, activation="relu"),
  #tf.keras.layers.Dense(30, activation="relu"),
  tf.keras.layers.Dense(1),
  tf.keras.layers.Lambda(lambda x: x * 400)
])


optimizer = tf.keras.optimizers.Adam(
    learning_rate=0.00001, beta_1=0.9, beta_2=0.999, epsilon=1e-07, amsgrad=True,
    name='Adam'
)
model.compile(loss=tf.keras.losses.Huber(),
              optimizer=optimizer,
              metrics=["mae"])
history = model.fit(train_set,epochs=100)
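
The helper windowed_dataset is not shown in the question. The sketch below is an assumption, not the asker's actual code, but it shows what such a helper typically looks like for sequence-to-sequence training and is consistent with the (15, 30, 1) batches discussed in the answers:

import tensorflow as tf

def windowed_dataset(series, window_size, batch_size, shuffle_buffer):
    # Hypothetical helper: yields batches of (window, shifted window) pairs
    # shaped (batch_size, window_size, 1), matching input_shape=[None, 1].
    series = tf.expand_dims(series, axis=-1)                  # (N,) -> (N, 1)
    ds = tf.data.Dataset.from_tensor_slices(series)
    ds = ds.window(window_size + 1, shift=1, drop_remainder=True)
    ds = ds.flat_map(lambda w: w.batch(window_size + 1))      # windows as tensors
    ds = ds.shuffle(shuffle_buffer)
    ds = ds.map(lambda w: (w[:-1], w[1:]))                    # inputs, sequence targets
    return ds.batch(batch_size).prefetch(1)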

Here is model.summary():

Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv1d (Conv1D)              (None, 30, 100)           600       
_________________________________________________________________
lstm (LSTM)                  (None, 30, 100)           80400     
_________________________________________________________________
lstm_1 (LSTM)                (None, 30, 100)           80400     
_________________________________________________________________
dense (Dense)                (None, 30, 1)             101       
_________________________________________________________________
lambda (Lambda)              (None, 30, 1)             0         
=================================================================
Total params: 161,501
Trainable params: 161,501
Non-trainable params: 0
_________________________________________________________________
None

I am trying to run this code:

model.predict(
    x_valid, batch_size=None, verbose=0, steps=None, callbacks=None, max_queue_size=10,
    workers=1, use_multiprocessing=False
)

and it throws this error message:

ValueError: Input 0 of layer sequential is incompatible with the layer: expected ndim=3, found ndim=2. Full shape received: [None, 1]

I tried to reshape x_valid with np.array(x_valid).reshape(300, 1), but that did not work.

I eventually solved it by expanding the dimensions three times:

test_input = x_valid[425]                        # a single scalar value (0-D)
test_input = np.expand_dims(test_input, axis=0)  # shape (1,)
test_input = np.expand_dims(test_input, axis=0)  # shape (1, 1)
test_input = np.expand_dims(test_input, axis=0)  # shape (1, 1, 1): (batch, timesteps, features)

print(model.predict(test_input))
# OUTPUT [[[71.46894]]]
python numpy tensorflow python-3.7 tensorflow2.0
2 Answers
1 vote

The problem comes from the test data having the wrong number of dimensions. x_input has shape (15, 30, 1), so it follows that the test data must also have a 3-dimensional shape (e.g. (1, 1, 1)). In your code the test data is a 1-dimensional array, so you should apply test_input = np.expand_dims(test_input, axis=0) TWICE to expand it into a 3-dimensional array.
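
A minimal sketch of what this answer suggests, assuming the test point is a 1-D array holding a single value (x_valid is taken from the question and is not shown there; the index 425 mirrors the asker's code):

import numpy as np

test_input = np.asarray([x_valid[425]], dtype="float32")  # 1-D array, shape (1,)
test_input = np.expand_dims(test_input, axis=0)           # (1, 1)
test_input = np.expand_dims(test_input, axis=0)           # (1, 1, 1): (batch, timesteps, features)

print(model.predict(test_input))                          # 3-D output, e.g. [[[71.46894]]]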


1 vote

Your problem comes from the fact that you need to add a batch dimension in order to predict on a single data point.

This is necessary when working with TensorFlow and Keras: even when you predict on a single sample, you need to add a batch dimension of size 1.

What you need to do is (see the sketch after this list):

  1. Take one item from the test set (e.g. test_input = x_valid[0])
  2. Build a batch of size 1, i.e. test_input = np.expand_dims(test_input, axis=0)
  3. Now predict with the model, i.e. prediction = model.predict(test_input)
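
A compact sketch of these three steps, under the assumption that each item of x_valid is already a window of shape (timesteps, 1); x_valid itself is not shown in the question, and if an item is a bare scalar it has to be expanded further, as the asker did:

import numpy as np

# 1. Take one item from the test set (assumed shape: (timesteps, 1))
test_input = x_valid[0]

# 2. Add a batch dimension of size 1 -> (1, timesteps, 1)
test_input = np.expand_dims(test_input, axis=0)

# 3. Predict; the output is (1, timesteps, 1) because the last LSTM
#    uses return_sequences=True
prediction = model.predict(test_input)
print(prediction)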