Developing a deep learning image recognition system with pretrained models

Question · 0 votes · 2 answers

I want to use pretrained models such as Xception, VGG16, ResNet50, etc. in my deep learning image recognition project, so that I can train quickly on my training set while keeping high accuracy. I have not been able to find working code for my model. First, as required by the VGG16 model, I changed the input shape of my training data from (256, 256, 3) to (224, 224, 3). I am using the Keras programming environment. My model code is as follows:

train_x = np.expand_dims(train_X, axis=2)
train_y = np.expand_dims(train_Y, axis=2)
print(train_X.shape) # output - (670, 224, 224, 3)
print(train_Y.shape) # output - (670, 224, 224, 1)
print(train_x.shape) # output - (670, 224, 1, 224, 3)
print(train_y.shape) # output - (670, 224, 1, 224, 1) 
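
For reference, np.expand_dims inserts a new length-1 axis at the given position, which is why axis=2 turns the 4-D arrays above into 5-D shapes; a minimal sketch:

import numpy as np

a = np.zeros((670, 224, 224, 3))
print(np.expand_dims(a, axis=2).shape)    # (670, 224, 1, 224, 3) - axis inserted in the middle
print(np.expand_dims(a, axis=-1).shape)   # (670, 224, 224, 3, 1) - axis appended at the end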


def vgg16_(IMG_WIDTH=224, IMG_HEIGHT=224, IMG_CHANNELS=3):
    inputs = Input(shape=(len(train_x[0]), 1))

    # Block 1
    x = Conv2D(64, (3, 3), activation='relu', padding='same', name='block1_conv1')(inputs)
    x = Conv2D(64, (3, 3), activation='relu', padding='same', name='block1_conv2')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block1_pool')(x)

    # Block 2
    x = Conv2D(128, (3, 3), activation='relu', padding='same', name='block2_conv1')(x)
    x = Conv2D(128, (3, 3), activation='relu', padding='same', name='block2_conv2')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block2_pool')(x)

    # Block 3
    x = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv1')(x)
    x = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv2')(x)
    x = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv3')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block3_pool')(x)

    # Block 4
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv1')(x)
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv2')(x)
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv3')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block4_pool')(x)

    # Block 5
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv1')(x)
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv2')(x)
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv3')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block5_pool')(x)

    # Regression head
    x = Flatten()(x)
    x = Dropout(0.2)(x)
    x = Dense(100, activation='tanh')(x)
    x = Reshape([len(train_x[0]), 1])(x)

    model = Model(inputs, x)
    model.compile(loss='mse', optimizer='rmsprop')
    return model

Unfortunately, when fitting this model on the training data I get this error:

ValueError: Input 0 is incompatible with layer block1_conv1: expected ndim=4, found ndim=3

What should I do to get the correct output?

In addition, I tried running the following code, changing only the output layer, and got this error:

ValueError: Error when checking target: expected predictions to have 2 dimensions, but got array with shape (670, 224, 224, 1)

model_vgg16_conv = VGG16(input_shape=(IMG_WIDTH, IMG_HEIGHT, 3), weights='imagenet', include_top=False, pooling='max')
model_vgg16_conv.summary()

# Create your own input format
input = Input(shape=(IMG_WIDTH, IMG_HEIGHT, 3), name='image_input')

# Use the generated model
output_vgg16_conv = model_vgg16_conv(input)

# Add the fully-connected layers
x = Flatten(name='flatten')(output_vgg16_conv)
x = Dense(512, activation='relu', name='fc1')(x)
x = Dense(128, activation='relu', name='fc2')(x)
x = Dense(1, activation='sigmoid', name='predictions')(x)

# Create your own model
my_model = Model(inputs=input, outputs=x)

# In the summary, the layers from the VGG part appear as a single block, but their weights will be fit during training
my_model.summary()

my_model.compile(loss='categorical_crossentropy',
                 optimizer='adam',
                 metrics=['accuracy'])
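
A quick way to see the mismatch behind the second error is to compare the model's output shape with the target shape (a small diagnostic sketch, using the arrays printed above):

# Diagnostic sketch: compare what the model produces with what the targets look like
print(my_model.output_shape)   # (None, 1) - a single value per image from the Dense(1) head
print(train_Y.shape)           # (670, 224, 224, 1) - one value per pixel (a segmentation-style target)
# A Dense head collapses the spatial dimensions, so a per-pixel target cannot match it.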

How can I fix this?

python tensorflow machine-learning deep-learning keras
2 Answers
0 votes

I guess your input layer definition is wrong. It should be this:

inputs = Input(shape=(IMG_WIDTH, IMG_HEIGHT, IMG_CHANNELS))

The input will be an image of size (224, 224, 3), so why set the input layer shape to (len(train_x[0]), 1)?
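
For example, with an input layer that carries the full image dimensions, the first convolution builds without the ndim error (a minimal sketch, assuming the tensorflow.keras API):

from tensorflow.keras.layers import Input, Conv2D

IMG_WIDTH, IMG_HEIGHT, IMG_CHANNELS = 224, 224, 3
inputs = Input(shape=(IMG_WIDTH, IMG_HEIGHT, IMG_CHANNELS))   # Keras adds the batch axis: (None, 224, 224, 3)
x = Conv2D(64, (3, 3), activation='relu', padding='same', name='block1_conv1')(inputs)
print(x.shape)   # (None, 224, 224, 64) - the 4-D tensor Conv2D expects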


0 votes

It looks like the error is in your Input tensor shape:

inputs = Input(shape=(len(train_x[0]), 1))

len(train_x[0]) will be 224, because len takes the size along the first axis. Instead, it should be:

inputs = Input(shape=train_x[0].shape)
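
To illustrate with the shapes printed in the question (train_X is the original array, train_x the expanded one):

import numpy as np

train_X = np.zeros((670, 224, 224, 3))      # stand-in with the shape printed in the question
train_x = np.expand_dims(train_X, axis=2)   # (670, 224, 1, 224, 3)

print(len(train_x[0]))       # 224 - len only reports the first axis of a sample
print(train_x[0].shape)      # (224, 1, 224, 3) - the full per-sample shape
print(train_X[0].shape)      # (224, 224, 3) - the per-sample shape of the original images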