Hello, I have a question about an error. I am training on a dataset of 16,667 labeled images and I am running into the error below. What should I do? I have already split my dataset into train and test sets, and I want to try the VGG16 model.
My classes have the labels 1, 2, 3.
import numpy as np
from sklearn.preprocessing import LabelBinarizer
from sklearn.model_selection import train_test_split
from tensorflow import keras
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import Input, AveragePooling2D, Flatten, Dense, Dropout
from tensorflow.keras.models import Model

lb = LabelBinarizer()
trainY=np.load(r'data_label.npy')
#labels=trainY
trainY = lb.fit_transform(trainY)
#labels = to_categorical(labels)
trainX=np.load(r'resized_images.npy')
#trainX = np.expand_dims(trainX, axis=3)
trainX, testX, trainY, testY = train_test_split(trainX, trainY,test_size=0.30, stratify=trainY, random_state=42)
#trainY = np.asarray(trainY).astype.reshape((-1,1))
#testX = np.asarray(testY).astype.reshape((-1,1))
print(trainX.shape, trainY.shape)
#model.fit(trainX.reshape(11736, 224, 224, 1))
trainAug = ImageDataGenerator(
    rotation_range=15,
    fill_mode="nearest")
trainX = np.expand_dims(trainX, axis=3)
baseModel = VGG16(weights=None, include_top=False,
    input_tensor=Input(shape=(224, 224, 1)))
headModel = baseModel.output
headModel = AveragePooling2D(pool_size=(4, 4))(headModel)
headModel = Flatten(name="flatten")(headModel)
headModel = Dense(64, activation="relu")(headModel)
headModel = Dropout(0.5)(headModel)
headModel = Dense(2, activation="softmax")(headModel)
#headModel.add(Flatten())
model = Model(inputs=baseModel.input, outputs=headModel)
model.summary()
for layer in baseModel.layers:
    layer.trainable = False
INIT_LR = 1e-3
EPOCHS = 20
BS = 32
print("[INFO] compiling model...")
opt = keras.optimizers.Adam(learning_rate=0.01)
model.compile(loss="binary_crossentropy", optimizer=opt,
    metrics=["accuracy"])
#annealer = ReduceLROnPlateau(monitor='val_accuracy', factor=0.5, patience=5, verbose=1, min_lr=1e-3)
print("[INFO] training head...")
print('shape', trainY.shape)
#trainY=(11736,1)
print(trainX)
np.squeeze(trainY, axis=1).shape
H = model.fit_generator(
    trainAug.flow(trainX, trainY, batch_size=BS),
    steps_per_epoch=len(trainX) // BS,
    validation_data=(testX, testY),
    validation_steps=len(testX) // BS, epochs=EPOCHS)
#print('shape', trainY.shape)
The first epoch runs, and then I get this error. What should I do?
ValueError: logits and labels must have the same shape, received ((None, 2) vs (None, 1)).
Your trainY has shape (11736, 1), but your last layer has 2 logits: headModel = Dense(2, activation="softmax")(headModel). What do your labels look like? Based on your earlier post, your trainY appears to contain distinct numeric values corresponding to the classes 0, 1, 2, ..., n-1, where n is the number of classes. Therefore, you need trainY to have shape (11736, n) and to use:
headModel = Dense(n, activation="softmax")(headModel)
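To make the shape requirement concrete, here is a minimal sketch (with toy labels standing in for the real data_label.npy) showing that LabelBinarizer turns the labels 1, 2, 3 into a (n_samples, 3) one-hot matrix, which then has to match the number of units in the final Dense layer:

```python
import numpy as np
from sklearn.preprocessing import LabelBinarizer

# Toy labels standing in for the real data_label.npy (values 1, 2, 3).
labels = np.array([1, 2, 3, 1, 2, 3])

lb = LabelBinarizer()
one_hot = lb.fit_transform(labels)

# With three distinct classes, LabelBinarizer emits one column per class,
# so the target shape is (n_samples, 3) -- matching Dense(3, activation="softmax").
print(one_hot.shape)  # (6, 3)
print(lb.classes_)    # [1 2 3]

# Note: with exactly two classes, LabelBinarizer would instead return a
# single column of shape (n_samples, 1), which only matches a single output unit.
```

Once the targets are (n_samples, 3), the head should be Dense(3, activation="softmax"), and since there are more than two one-hot classes, categorical_crossentropy is the appropriate loss rather than binary_crossentropy.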