AttributeError: 'NoneType' object has no attribute 'items' when training a DL dataset with ImageDataGenerator

Problem description

I am trying to train a ResNet50 model with transfer learning on a dataset of 40,000 images. I prepare the data with ImageDataGenerator and then use flow_from_directory to build the training and validation datasets (validation_split=0.2). The model is compiled with the Adam() optimizer.

Later, when training the model, I get this error:

AttributeError: 'NoneType' object has no attribute 'items'

I have tried shuffle=True, the repeat function, and manual filtering, but none of them seem to work.

The code is:

# Import libraries (consolidated on tensorflow.keras; mixing `keras` and
# `tensorflow.keras` imports in the same script is a common source of bugs)
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import tensorflow as tf

from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.callbacks import EarlyStopping

# Define the path to the dataset and the batch size
path = r"C:\Users\Rajarshi\Downloads\Compressed\Concrete Crack Images for Classification"
batch_size = 32

# Step 1: Set up data generators
#image_generator = ImageDataGenerator(validation_split=0.2)  # Remove rescale argument
image_generator = ImageDataGenerator(horizontal_flip=True,
                                     rescale=1./255,
                                     zoom_range=0.2, 
                                     validation_split=0.2)
                                     
#image_generator.preprocessing_function = custom_preprocessing  # Apply custom preprocessing
try:
    train_data = image_generator.flow_from_directory(batch_size=batch_size,
                                                     directory=path,
                                                     shuffle=True,
                                                     target_size=(224, 224),
                                                     subset="training",
                                                     class_mode="categorical")

    validation_data = image_generator.flow_from_directory(batch_size=batch_size,
                                                          directory=path,
                                                          shuffle=True,
                                                          target_size=(224, 224),
                                                          subset="validation",
                                                          class_mode="categorical")

except OSError as e:
    print(f"Error encountered while generating data: {e}")
    print("Please check your data directory path and structure.")

print(train_data.samples, validation_data.samples)  # verify data generation (DirectoryIterator has no .shape attribute)
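
# --- Model definition (omitted from the original post) ---
# The compile/fit calls below reference `model` and `num_epochs`, which the
# post never defines. A minimal sketch, assuming a frozen ResNet50 ImageNet
# base with a two-class softmax head; the names and values here are
# placeholders, not the asker's actual code:
base_model = ResNet50(weights='imagenet', include_top=False,
                      input_shape=(224, 224, 3))
base_model.trainable = False  # freeze the pretrained backbone

x = Flatten()(base_model.output)
predictions = Dense(2, activation='softmax')(x)  # matches class_mode="categorical"
model = Model(inputs=base_model.input, outputs=predictions)

num_epochs = 10  # placeholder; the post does not give a value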

# model compile
model.compile(loss='categorical_crossentropy',  # Adjust loss function based on your problem
              optimizer=Adam(),  # Adjust optimizer based on preference
              #optimizer='rmsprop',
              metrics=['accuracy'])


# Training Model

model.fit(
    train_data,
    steps_per_epoch=train_data.samples // train_data.batch_size,  # Calculate steps per epoch
    epochs=num_epochs,
    validation_data=validation_data,
    validation_steps=validation_data.samples // validation_data.batch_size  # Calculate validation steps
)

(Screenshots in the original post: model training error; dataset-generation error check.)

I have also tried the 'rmsprop' optimizer.

python-3.x deep-learning resnet image-classification imagedatagenerator
1 Answer

The problem arises in these lines:

steps_per_epoch=train_data.samples // train_data.batch_size
validation_steps=validation_data.samples // validation_data.batch_size

This is because ImageDataGenerator is deprecated in the latest TensorFlow release (2.16.1).

I ran into the same problem, and the solution I found was to downgrade TensorFlow to an earlier version with the following command in Jupyter:

!pip install tensorflow==2.15
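
Alternatively, instead of downgrading, recent TensorFlow releases point to tf.keras.utils.image_dataset_from_directory as the replacement for ImageDataGenerator. Below is a minimal, untested sketch of the same 80/20 split, reusing the path, image size, and batch size from the question (the seed value is arbitrary, but a fixed seed is needed for a consistent split):

import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory(
    path,                       # same dataset root as in the question
    validation_split=0.2,
    subset="training",
    seed=42,                    # any fixed seed; keeps the split reproducible
    image_size=(224, 224),
    batch_size=32,
    label_mode="categorical",   # one-hot labels, like class_mode="categorical"
)
val_ds = tf.keras.utils.image_dataset_from_directory(
    path,
    validation_split=0.2,
    subset="validation",
    seed=42,                    # must match the training seed
    image_size=(224, 224),
    batch_size=32,
    label_mode="categorical",
)

# Rescaling replaces ImageDataGenerator's rescale=1./255
normalize = tf.keras.layers.Rescaling(1.0 / 255)
train_ds = train_ds.map(lambda x, y: (normalize(x), y))
val_ds = val_ds.map(lambda x, y: (normalize(x), y))

With these tf.data datasets, model.fit(train_ds, validation_data=val_ds, epochs=num_epochs) works without steps_per_epoch or validation_steps, since the dataset knows its own length. Augmentations such as horizontal_flip and zoom_range would move into preprocessing layers like tf.keras.layers.RandomFlip and tf.keras.layers.RandomZoom.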