Accuracy and validation accuracy stay constant in a Keras CNN model

Question

Given a dataset of 250 grayscale images of normal screws and 50 of anomalous screws (scratched on the side, scratched head, tip cut off, etc.), I need to build a model to classify them.

I am building a CNN model with Keras, but my accuracy is not improving.

The file structure is:

- archive
  - training
    - not-good : 50 images in total, 5 types of anomalies of 10 images each
    - good : 250 images
  - test : 180 UNLABELLED images; kind of useless (?), so I am not using this folder for now.

My code is:

from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import Conv2D, MaxPool2D, Dropout, Flatten, Dense
from keras.losses import binary_crossentropy
from keras.optimizers import SGD
import os
from keras import backend as K

TRAIN_PATH = "archive/training/"
BATCH_SIZE = 20

# greyscale image, so only 1 channel
img_shape = (224, 224, 1) if K.image_data_format() == "channels_last" else (1, 224, 224)

model = Sequential()
model.add(Conv2D(16, kernel_size=(3, 3), activation="relu", input_shape=img_shape, padding="same")) # 224x224 grey-scale images

model.add(Conv2D(32, (3, 3), activation="relu", padding='same'))
model.add(MaxPool2D(pool_size=(2, 2)))
model.add(Dropout(0.25))

model.add(Conv2D(32, (3, 3), activation="relu", padding='same'))
model.add(MaxPool2D(pool_size=(2, 2)))
model.add(Dropout(0.25))

model.add(Conv2D(64, (3, 3), activation="relu", padding='same'))
model.add(MaxPool2D(pool_size=(2, 2)))
model.add(Dropout(0.25))

model.add(Flatten())
model.add(Dense(64, activation="relu"))
model.add(Dropout(0.5))
model.add(Dense(1, activation="sigmoid")) # binary classification hence 1 & sigmoid

model.compile(loss=binary_crossentropy, optimizer=SGD(learning_rate=0.0002), metrics=["accuracy"])

model.summary()

train_datagen = ImageDataGenerator(
    rescale=1./255,
    shear_range=0.1,
    zoom_range=0.1,
    horizontal_flip=True,
    vertical_flip=True,
    validation_split=0.2
)

train_generator = train_datagen.flow_from_directory(
    TRAIN_PATH,
    target_size=(224, 224),
    batch_size=BATCH_SIZE,
    class_mode="binary",
    subset="training",
    # save_format="jpeg",
    # save_to_dir="generated/train",
    color_mode="grayscale"
)

validation_generator = train_datagen.flow_from_directory(
    TRAIN_PATH,
    target_size=(224, 224),
    batch_size=BATCH_SIZE,
    class_mode='binary',
    subset='validation',
    color_mode="grayscale"
)

print(f"Len of train: {len(train_generator)}") # = ceil(train_generator.samples / BATCH_SIZE)

mapping = train_generator.class_indices  # {'good': 0, 'not-good': 1}
# 250 good samples and 50 not-good samples so ratio is 0.83:0.17
class_weights = {mapping["not-good"] : 0.17, mapping["good"] : 0.83}
NUM_EPOCHS = 10
hist = model.fit(
    train_generator,
    steps_per_epoch=train_generator.samples//BATCH_SIZE,
    epochs=NUM_EPOCHS,
    validation_data=validation_generator,
    validation_steps=validation_generator.samples//BATCH_SIZE,
    class_weight=class_weights # imbalanced dataset
)
...

The output I get is:

Epoch 1/10
12/12 [==============================] - 5s 370ms/step - loss: 0.6805 - accuracy: 0.6083 - val_loss: 0.6787 - val_accuracy: 0.8500
Epoch 2/10
12/12 [==============================] - 4s 343ms/step - loss: 0.6461 - accuracy: 0.7375 - val_loss: 0.6623 - val_accuracy: 0.8500
Epoch 3/10
12/12 [==============================] - 4s 343ms/step - loss: 0.6001 - accuracy: 0.8208 - val_loss: 0.6552 - val_accuracy: 0.8000
Epoch 4/10
12/12 [==============================] - 4s 347ms/step - loss: 0.5753 - accuracy: 0.8167 - val_loss: 0.6348 - val_accuracy: 0.8500
Epoch 5/10
12/12 [==============================] - 4s 357ms/step - loss: 0.5530 - accuracy: 0.8333 - val_loss: 0.6116 - val_accuracy: 0.9000
Epoch 6/10
12/12 [==============================] - 5s 377ms/step - loss: 0.5348 - accuracy: 0.8333 - val_loss: 0.6121 - val_accuracy: 0.8500
Epoch 7/10
12/12 [==============================] - 5s 374ms/step - loss: 0.5128 - accuracy: 0.8333 - val_loss: 0.6018 - val_accuracy: 0.8500
Epoch 8/10
12/12 [==============================] - 5s 410ms/step - loss: 0.5175 - accuracy: 0.8333 - val_loss: 0.6079 - val_accuracy: 0.8000
Epoch 9/10
12/12 [==============================] - 5s 396ms/step - loss: 0.4984 - accuracy: 0.8333 - val_loss: 0.5837 - val_accuracy: 0.8500
Epoch 10/10
12/12 [==============================] - 5s 433ms/step - loss: 0.4770 - accuracy: 0.8333 - val_loss: 0.5562 - val_accuracy: 0.9000

I tried thresholding the images beforehand with cv2.BINARY as well as cv2.MORPH, but to no avail. I assume the accuracy is stuck at 0.8333 because the dataset is inherently imbalanced: 250 good images versus 50 anomalous ones corresponds to 83%.
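Indeed, 0.8333 is exactly the accuracy a model would score by predicting "good" for every single image, which suggests it may have collapsed to the majority class:

```python
# Accuracy of a degenerate classifier that always predicts the majority class
n_good, n_not_good = 250, 50
majority_baseline = n_good / (n_good + n_not_good)
print(round(majority_baseline, 4))  # 0.8333
```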

I have tried tweaking the batch size, the learning rate, and the type of optimizer, and even adding extra layers to the model (and more neurons per layer), but nothing seems to help.

Note 1: I have also tried validation_split and train_test_split, without success.

Note 2: This code does not produce a test dataset, but that is a separate issue.

python keras deep-learning conv-neural-network overfitting-underfitting
1 Answer

With the accuracy plateauing while the loss steadily decreases, this looks like a case of overfitting. Your observation that 83% corresponds to the skewed dataset is likely correct.
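On that note, the class_weight dict in your code gives the minority class ("not-good") the smaller weight (0.17), which is the inverse of what you want: the rarer class should be weighted more heavily so its errors cost more. A common heuristic, the same one behind scikit-learn's compute_class_weight("balanced", ...), weights each class by total / (n_classes * count). A minimal sketch with your class counts:

```python
# Weight each class by total / (n_classes * count): the rarer the class,
# the larger its weight (mirrors sklearn's "balanced" heuristic).
counts = {"good": 250, "not-good": 50}
total = sum(counts.values())
class_weights = {name: total / (len(counts) * n) for name, n in counts.items()}
print(class_weights)  # {'good': 0.6, 'not-good': 3.0}
```

You would then map these names through train_generator.class_indices before passing the dict to model.fit.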

  1. First, the validation set should have its own ImageDataGenerator (i.e. val_datagen = ImageDataGenerator(rescale=1./255)).

Note that we pass no other arguments to this ImageDataGenerator: the validation set stands in for "real-world data" and should be used as-is, without augmentation.

  2. Second, the validation set should not be drawn from the same directory as the training set. That means you must create a separate validation directory, either manually or in code (e.g. using os or train_test_split()).

See below for roughly what this would look like:

VALIDATION_PATH = '/path/to/validation_images/'

train_generator = train_datagen.flow_from_directory(
    TRAIN_PATH,
    <your_params> ...
)

validation_generator = val_datagen.flow_from_directory(
    VALIDATION_PATH,
    <your_params> ...
)

Finally, it is always good practice to set seeds, for reproducibility and easier troubleshooting.

import os, random
import numpy as np
import tensorflow as tf

os.environ['PYTHONHASHSEED'] = str(1)
random.seed(1)
np.random.seed(1)
tf.random.set_seed(1)  # seeds Keras weight initialization and shuffling