No matter how much data I add, Keras runs out of training data on the second epoch


I am trying to build an image classifier. When I run this, it trains fine on the first epoch, but at the start of the second epoch it runs out of training data and gives me a warning, then it starts the third epoch, and once that finishes it gives me the warning again on epoch 4, and so on.

I believe the problem is in the ImageDataGenerator, because it is not being restarted after each epoch (which I thought it always does), but I haven't found any way to restart it after each epoch.
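For what it's worth, the generator itself reports how many batches it can yield per epoch; a minimal check (a sketch only, using the train_generator defined in the code below, since flow_from_directory returns a Keras Sequence) would be something like:

# sketch, not part of my script: a flow_from_directory generator is a Keras
# Sequence, so len() is the number of batches it yields per epoch
print(len(train_generator))        # batches per epoch
print(train_generator.samples)     # total training images found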

Here is the code:

import os
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator, load_img, img_to_array
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dense, Flatten, Dropout
from tensorflow.keras.optimizers import Adam
import numpy as np
import matplotlib.pyplot as plt
import shutil

os.environ["CUDA_VISIBLE_DEVICES"] = "-1"  # run on CPU only
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'  # suppress TensorFlow INFO/WARNING logs

train = 'train'
toclassify = 'toclassify'
results = 'results'
resultsyes = os.path.join(results, 'yes')
resultsno = os.path.join(results, 'no')


os.makedirs(resultsyes, exist_ok=True)
os.makedirs(resultsno, exist_ok=True)


size = (128, 128)
batch_size = 32  
epochs = 10


datagen = ImageDataGenerator(
    rescale=1./255,
    rotation_range=20,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
    validation_split=0.2
)

train_generator = datagen.flow_from_directory(
    train,
    target_size=size,
    batch_size=batch_size,
    class_mode='binary',
    subset='training',
    color_mode='grayscale'
)

validation_generator = datagen.flow_from_directory(
    train,
    target_size=size,
    batch_size=batch_size,
    class_mode='binary',
    subset='validation',
    color_mode='grayscale'
)

print(f"for training: {train_generator.samples}")
print(f"for validation: {validation_generator.samples}")


steps = train_generator.samples // batch_size
steps_v = validation_generator.samples // batch_size #validation steps


model = Sequential([
    Conv2D(32,(3, 3),activation='relu',input_shape=(size[0],size[1],1)),
    MaxPooling2D((2,2)),
    Dropout(0.1),
    Conv2D(64,(3, 3),activation='relu'),
    MaxPooling2D((2,2)),
    Dropout(0.1),
    Conv2D(128,(3, 3),activation='relu'),
    MaxPooling2D((2,2)),
    Flatten(),
    Dense(512,activation='relu'),
    Dense(1,activation='sigmoid')
])

model.compile(optimizer='adam',loss='binary_crossentropy',metrics=['accuracy'])

history = model.fit(
    train_generator,
    steps_per_epoch=steps,
    validation_data=validation_generator,
    validation_steps=steps_v,
    epochs=epochs
)

Edit 1: here is the console log:

Epoch 1/10
D:\anaconda\Lib\site-packages\keras\src\trainers\data_adapters\py_dataset_adapter.py:121: UserWarning: Your `PyDataset` class should call `super().__init__(**kwargs)` in its constructor. `**kwargs` can include `workers`, `use_multiprocessing`, `max_queue_size`. Do not pass these arguments to `fit()`, as they will be ignored.
  self._warn_if_super_not_called()
14/14 ━━━━━━━━━━━━━━━━━━━━ 8s 353ms/step - accuracy: 0.5409 - loss: 1.9809 - val_accuracy: 0.5729 - val_loss: 0.6883
Epoch 2/10
 1/14 ━━━━━━━━━━━━━━━━━━━━ 2s 221ms/step - accuracy: 0.5312 - loss: 0.7447
2024-06-14 11:08:49.711665: W tensorflow/core/framework/local_rendezvous.cc:404] Local rendezvous is aborting with status: OUT_OF_RANGE: End of sequence
         [[{{node IteratorGetNext}}]]
D:\anaconda\Lib\contextlib.py:158: UserWarning: Your input ran out of data; interrupting training. Make sure that your dataset or generator can generate at least `steps_per_epoch * epochs` batches. You may need to use the `.repeat()` function when building your dataset.
  self.gen.throw(typ, value, traceback)
2024-06-14 11:08:49.828859: W tensorflow/core/framework/local_rendezvous.cc:404] Local rendezvous is aborting with status: OUT_OF_RANGE: End of sequence
         [[{{node IteratorGetNext}}]]
14/14 ━━━━━━━━━━━━━━━━━━━━ 0s 11ms/step - accuracy: 0.5312 - loss: 0.7447 - val_accuracy: 0.5909 - val_loss: 0.6761

Any help would be greatly appreciated <3

What I have tried:

  1. Changing the batch size
  2. Adding/removing images
  3. Subtracting 1 from steps and steps_v, hoping the generator would restart before the next epoch
  4. Searching Google for an answer to this problem
  5. Using AI tools

What I expected:

  1. I expected it not to give me the warning and skip the whole epoch, because that messes up the pyplot afterwards (sketched right after this list) and I can't tell whether the model is overfitting; I don't think it is normal for it to behave like this.
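The plot I mean is roughly this (a minimal sketch using the history object returned by model.fit() above; the exact styling doesn't matter):

import matplotlib.pyplot as plt  # already imported at the top of the script

# training curves I use to judge overfitting
plt.plot(history.history['accuracy'], label='train accuracy')
plt.plot(history.history['val_accuracy'], label='val accuracy')
plt.plot(history.history['loss'], label='train loss')
plt.plot(history.history['val_loss'], label='val loss')
plt.xlabel('epoch')
plt.legend()
plt.show()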
python tensorflow keras
1 Answer

What worked for me was not specifying steps_per_epoch and validation_steps in model.fit(). Just let Keras compute them by itself.
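With the code from the question, that would mean the fit call simply becomes (a sketch; the generators and model stay exactly as defined above):

history = model.fit(
    train_generator,
    validation_data=validation_generator,
    epochs=epochs
)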
