Model is not learning, and validation accuracy takes a hit

Problem description · Votes: 0 · Answers: 3

I am building a sign classifier for 3 simple signs. I capture images and then apply MediaPipe to extract keypoints for the face, hands, and pose. All keypoints are stacked into a single array, so I save an array of 1662 values as a .npy file. Each sign spans 30 frames, because the signs are dynamic. In total I have 540 training samples and 60 validation samples. I applied an LSTM model on top of this, but my model is not learning anything. What am I doing wrong? My code and training accuracies are shown below.

import os
import numpy as np
from tensorflow import keras

Data_Path_train = "training_data"
Data_Path_val = "validation_data"
actions = np.array(['No sign', 'Yes', 'No'])

label_map = {label:num for num, label in enumerate(actions)}
print(label_map)

# NOTE: batch_size is accepted but never used below — each yield is a single sample
def data_generator(data_path, actions, sequence_length, batch_size, validation):
    while True:
        for action in actions:
            for sequence in os.listdir(os.path.join(data_path, action)):
                # skip non-numeric sequence names (i.e., augmented sequences)
                if not sequence.isdigit():
                    continue
                if validation and int(sequence) % 5 == 0:
                    continue
                if not validation and int(sequence) % 5 != 0:
                    continue
                sequence = int(sequence)
                window = []
                for frame_num in range(sequence_length):
                    res = np.load(os.path.join(data_path, action, str(sequence), "{}.npy".format(frame_num)))
                    window.append(res)
                x = np.array(window)
                y = keras.utils.to_categorical(label_map[action], num_classes=len(actions))
                yield x[np.newaxis, :, :], y[np.newaxis, :]

sequence_length = 30
batch_size = 60

train_gen = data_generator(Data_Path_train, actions, sequence_length, batch_size, validation=False)
val_gen = data_generator(Data_Path_val, actions, sequence_length, batch_size, validation=True)

from keras.models import Sequential
from keras.layers import LSTM, Dense

# Define the input shape
input_shape = (sequence_length, 1662)

# Define the model architecture
model = Sequential()
model.add(LSTM(64, input_shape=input_shape, return_sequences=True))
model.add(LSTM(64, return_sequences=False))
model.add(Dense(len(actions), activation='softmax'))


# Compile the model
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
history = model.fit(train_gen, 
                    steps_per_epoch=540*len(actions)//batch_size, 
                    validation_data=val_gen, 
                    validation_steps=60*len(actions)//batch_size, 
                    epochs=30, verbose=1, shuffle=True)
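One arithmetic point worth noting about the `fit()` call: the generator yields exactly one `(x, y)` pair per step (its `batch_size` argument is never used), so each epoch covers only `steps_per_epoch` samples of the training set, not all 540:

```python
# The generator yields one sample per step, so with these settings an
# "epoch" sees only steps_per_epoch samples, matching the 27/27 in the log.
batch_size = 60
num_actions = 3
steps_per_epoch = 540 * num_actions // batch_size
print(steps_per_epoch)  # 27
```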

Results:

Epoch 1/30
27/27 [==============================] - 2s 56ms/step - loss: 1.3371 - accuracy: 0.5926 - val_loss: 1.7992 - val_accuracy: 0.3333
Epoch 2/30
27/27 [==============================] - 1s 21ms/step - loss: 2.3097 - accuracy: 0.1852 - val_loss: 2.4106 - val_accuracy: 0.0000e+00
Epoch 3/30
27/27 [==============================] - 1s 21ms/step - loss: 1.6771 - accuracy: 0.0000e+00 - val_loss: 1.3552 - val_accuracy: 0.0000e+00
Epoch 4/30
27/27 [==============================] - 1s 21ms/step - loss: 1.3889 - accuracy: 0.0741 - val_loss: 1.3459 - val_accuracy: 0.0000e+00
Epoch 5/30
27/27 [==============================] - 1s 25ms/step - loss: 1.2858 - accuracy: 0.0000e+00 - val_loss: 1.2403 - val_accuracy: 0.3333
Epoch 6/30
27/27 [==============================] - 1s 27ms/step - loss: 1.2816 - accuracy: 0.0370 - val_loss: 1.0072 - val_accuracy: 0.6667
Epoch 7/30
27/27 [==============================] - 1s 22ms/step - loss: 1.1760 - accuracy: 0.0000e+00 - val_loss: 1.0123 - val_accuracy: 0.0000e+00
Epoch 8/30
27/27 [==============================] - 1s 22ms/step - loss: 1.2672 - accuracy: 0.0370 - val_loss: 1.1520 - val_accuracy: 0.0000e+00
Epoch 9/30
27/27 [==============================] - 1s 22ms/step - loss: 1.1903 - accuracy: 0.0000e+00 - val_loss: 1.2613 - val_accuracy: 0.0000e+00
Epoch 10/30
27/27 [==============================] - 1s 20ms/step - loss: 1.2038 - accuracy: 0.0370 - val_loss: 1.1799 - val_accuracy: 0.3333
Epoch 11/30
27/27 [==============================] - 1s 21ms/step - loss: 1.2164 - accuracy: 0.0000e+00 - val_loss: 1.0480 - val_accuracy: 0.6667
Epoch 12/30
27/27 [==============================] - 1s 21ms/step - loss: 1.1805 - accuracy: 0.0000e+00 - val_loss: 1.0215 - val_accuracy: 1.0000
Epoch 13/30
27/27 [==============================] - 1s 20ms/step - loss: 1.1981 - accuracy: 0.0370 - val_loss: 1.0742 - val_accuracy: 0.0000e+00
Epoch 14/30
27/27 [==============================] - 1s 21ms/step - loss: 1.1769 - accuracy: 0.0000e+00 - val_loss: 1.1262 - val_accuracy: 0.0000e+00
Epoch 15/30
27/27 [==============================] - 1s 20ms/step - loss: 1.1846 - accuracy: 0.0370 - val_loss: 1.1544 - val_accuracy: 0.0000e+00
Epoch 16/30
27/27 [==============================] - 1s 21ms/step - loss: 1.1534 - accuracy: 0.0370 - val_loss: 1.0768 - val_accuracy: 0.6667
Epoch 17/30
27/27 [==============================] - 1s 22ms/step - loss: 1.1903 - accuracy: 0.0370 - val_loss: 0.9775 - val_accuracy: 1.0000
Epoch 18/30
27/27 [==============================] - 1s 21ms/step - loss: 1.1363 - accuracy: 0.0000e+00 - val_loss: 1.0444 - val_accuracy: 0.0000e+00
Epoch 19/30
27/27 [==============================] - 1s 20ms/step - loss: 1.1847 - accuracy: 0.0741 - val_loss: 1.1029 - val_accuracy: 0.0000e+00
Epoch 20/30
27/27 [==============================] - 1s 20ms/step - loss: 1.1653 - accuracy: 0.0000e+00 - val_loss: 1.1406 - val_accuracy: 0.0000e+00
Epoch 21/30
27/27 [==============================] - 1s 20ms/step - loss: 1.1462 - accuracy: 0.0741 - val_loss: 1.0914 - val_accuracy: 0.6667
Epoch 22/30
27/27 [==============================] - 1s 21ms/step - loss: 1.1722 - accuracy: 0.0000e+00 - val_loss: 1.0319 - val_accuracy: 1.0000
Epoch 23/30
27/27 [==============================] - 1s 21ms/step - loss: 1.1600 - accuracy: 0.0370 - val_loss: 1.0185 - val_accuracy: 1.0000
Epoch 24/30
27/27 [==============================] - 1s 21ms/step - loss: 1.1582 - accuracy: 0.0000e+00 - val_loss: 1.0609 - val_accuracy: 1.0000
Epoch 25/30
27/27 [==============================] - 1s 20ms/step - loss: 1.1511 - accuracy: 0.0741 - val_loss: 1.1442 - val_accuracy: 0.0000e+00
Epoch 26/30
27/27 [==============================] - 1s 23ms/step - loss: 1.1811 - accuracy: 0.0000e+00 - val_loss: 1.1506 - val_accuracy: 0.0000e+00
Epoch 27/30
27/27 [==============================] - 1s 21ms/step - loss: 1.1193 - accuracy: 0.1481 - val_loss: 0.9981 - val_accuracy: 1.0000
Epoch 28/30
27/27 [==============================] - 1s 21ms/step - loss: 1.1849 - accuracy: 0.0000e+00 - val_loss: 1.0087 - val_accuracy: 1.0000
Epoch 29/30
27/27 [==============================] - 1s 21ms/step - loss: 1.1146 - accuracy: 0.0000e+00 - val_loss: 1.0342 - val_accuracy: 0.0000e+00
Epoch 30/30
27/27 [==============================] - 1s 23ms/step - loss: 1.1934 - accuracy: 0.1481 - val_loss: 1.0982 - val_accuracy: 0.6667

(plots of the training and validation accuracy curves)

deep-learning lstm
3 Answers
4 votes

If you are working with images, you could try other architectures such as CNNs, since those methods are designed for image data, whereas LSTMs are suited to sequential, quantitative data. Picking the wrong kind of method can also be one reason a model overfits, because some activations, loss functions, and optimizers work best on image datasets while others suit text datasets. Checking these items can improve the model's accuracy and error.
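As a rough sketch of this suggestion, adapted to the data in the question (layer sizes are hypothetical; since each sample is a 30-frame sequence of 1662 keypoint values rather than a raw image, a 1D convolution over the time axis is the closer analogue of a CNN here):

```python
from tensorflow.keras import Input, Sequential
from tensorflow.keras.layers import Conv1D, GlobalMaxPooling1D, Dense

# Convolve over the 30-frame time axis of the (30, 1662) keypoint sequences
cnn = Sequential([
    Input(shape=(30, 1662)),
    Conv1D(64, kernel_size=3, activation='relu'),
    Conv1D(64, kernel_size=3, activation='relu'),
    GlobalMaxPooling1D(),            # pool over time to a fixed-size vector
    Dense(3, activation='softmax'),  # one unit per sign class
])
cnn.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
```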


4 votes

Since you are using an LSTM model, the sample order must stay fixed rather than random, which is why you should set shuffle to False.
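A minimal, self-contained sketch of this suggestion, using random stand-in data with the same shapes as in the question (note that Keras only honours the `shuffle` flag for array inputs; when feeding a generator, as in the question, the ordering has to be controlled inside the generator itself):

```python
import numpy as np
from tensorflow.keras import Input, Sequential
from tensorflow.keras.layers import LSTM, Dense

# Random stand-in for the real keypoint data: 12 samples of shape (30, 1662)
x = np.random.rand(12, 30, 1662).astype("float32")
y = np.eye(3)[np.random.randint(0, 3, size=12)]

model = Sequential([
    Input(shape=(30, 1662)),
    LSTM(8),
    Dense(3, activation='softmax'),
])
model.compile(loss='categorical_crossentropy', optimizer='adam')

# shuffle=False keeps the sample order fixed across epochs (array inputs only)
history = model.fit(x, y, epochs=1, batch_size=4, shuffle=False, verbose=0)
```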


2 votes

shuffle should be False; sample order matters a great deal for an LSTM.
