I'm trying to add data augmentation as a layer in my model, but I'm running into what I believe is a shape problem. I also tried specifying the input shape in the first augmentation layer:

preprocessing.RandomFlip('horizontal', input_shape=(224, 224, 3))

When I take the data_augmentation layer out of the model, it runs fine.
data_augmentation_layer = keras.Sequential([
    preprocessing.RandomFlip('horizontal'),
    preprocessing.RandomRotation(0.2),
    preprocessing.RandomZoom(0.2),
    preprocessing.RandomWidth(0.2),
    preprocessing.RandomHeight(0.2),
    preprocessing.RandomContrast(0.2)
], name='data_augmentation')

model = keras.Sequential([
    data_augmentation_layer,
    Conv2D(filters=32,
           kernel_size=1,
           strides=1,
           input_shape=(224, 224, 3)),
    Activation(activation='relu'),
    MaxPool2D(),
    Conv2D(filters=32,
           kernel_size=1,
           strides=1),
    Activation(activation='relu'),
    MaxPool2D(),
    Flatten(),
    Dense(1, activation='sigmoid')
])
This is the error I get:

The last dimension of the inputs to a Dense layer should be defined. Found None. Full input shape received: (None, None)
Call arguments received:
• inputs=tf.Tensor(shape=(None, 224, 224, 3), dtype=float32)
• training=True
• mask=None
The RandomWidth and RandomHeight layers cause this error, because they produce None dimensions; see the comment here:
[...] RandomHeight will lead to a None shape on the height dimension, as not all outputs from the layer will be the same height (by design). That is ok for things like the Conv2D layer, which can accept variable-shaped image input (with None shapes on some dimensions).

This will not work for subsequently calling Flatten followed by Dense, because the flattened batches will also be of variable size (because of the variable height), and the Dense layer needs a fixed shape for the last dimension. You could probably pad the output of Flatten before the Dense, but if you want this architecture, you may just want to avoid image augmentation layers that lead to variable output shapes.
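To see the None dimensions concretely, you can build just the augmentation stack and inspect its static output shape (a minimal sketch, assuming TF 2.6+, where these layers live directly under tf.keras.layers):

```python
import tensorflow as tf

# Build only the augmentation layers and check the static output shape.
aug = tf.keras.Sequential([
    tf.keras.layers.RandomHeight(0.2, input_shape=(224, 224, 3)),
    tf.keras.layers.RandomWidth(0.2),
])

# Height and width are reported as None, because each call can
# produce a different size (by design).
print(aug.output_shape)  # (None, None, None, 3)
```

This is exactly the shape that later reaches Flatten, which is why the flattened last dimension cannot be determined.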
So, instead of using a Flatten layer, you could use a GlobalMaxPool2D layer, which does not need to know the other dimensions beforehand:
import tensorflow as tf

data_augmentation_layer = tf.keras.Sequential([
    tf.keras.layers.RandomFlip('horizontal',
                               input_shape=(224, 224, 3)),
    tf.keras.layers.RandomRotation(0.2),
    tf.keras.layers.RandomZoom(0.2),
    tf.keras.layers.RandomWidth(0.2),
    tf.keras.layers.RandomHeight(0.2),
    tf.keras.layers.RandomContrast(0.2)
], name='data_augmentation')

model = tf.keras.Sequential([
    data_augmentation_layer,
    tf.keras.layers.Conv2D(filters=32,
                           kernel_size=1,
                           strides=1),
    tf.keras.layers.Activation(activation='relu'),
    tf.keras.layers.MaxPool2D(),
    tf.keras.layers.Conv2D(filters=32,
                           kernel_size=1,
                           strides=1),
    tf.keras.layers.Activation(activation='relu'),
    tf.keras.layers.GlobalMaxPool2D(),
    tf.keras.layers.Dense(1, activation='sigmoid')
])
print(model.summary())
Model: "sequential_4"
_________________________________________________________________
Layer (type)                                Output Shape          Param #
=================================================================
data_augmentation (Sequential)              (None, None, None, 3)   0
conv2d_8 (Conv2D)                           (None, None, None, 32)  128
activation_8 (Activation)                   (None, None, None, 32)  0
max_pooling2d_6 (MaxPooling2D)              (None, None, None, 32)  0
conv2d_9 (Conv2D)                           (None, None, None, 32)  1056
activation_9 (Activation)                   (None, None, None, 32)  0
global_max_pooling2d_1 (GlobalMaxPooling2D) (None, 32)              0
dense_4 (Dense)                             (None, 1)               33
=================================================================
Total params: 1,217
Trainable params: 1,217
Non-trainable params: 0
_________________________________________________________________
None
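As a quick smoke test (a sketch with placeholder random data, not a real dataset), a reduced version of this architecture compiles and trains a step despite the variable spatial dimensions:

```python
import numpy as np
import tensorflow as tf

# Smaller input size than the original, just to keep the check fast.
model = tf.keras.Sequential([
    tf.keras.layers.RandomWidth(0.2, input_shape=(64, 64, 3)),
    tf.keras.layers.RandomHeight(0.2),
    tf.keras.layers.Conv2D(filters=8, kernel_size=1),
    tf.keras.layers.GlobalMaxPool2D(),  # collapses the unknown H/W dims
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy')

# Placeholder batch of random images and labels, just to prove it runs.
x = np.random.rand(4, 64, 64, 3).astype('float32')
y = np.random.randint(0, 2, size=(4, 1))
model.fit(x, y, epochs=1, verbose=0)  # no Dense shape error
```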
Writing the model this way also solves the problem, since RandomTranslation keeps the output shape fixed (unlike RandomWidth and RandomHeight):
from tensorflow.keras import Sequential
from tensorflow.keras.layers import (RandomFlip, RandomRotation, RandomZoom,
                                     RandomTranslation, RandomContrast)

model = Sequential()
# Data augmentation layers; all of these preserve the (224, 224, 3) shape
model.add(RandomFlip("horizontal", input_shape=(224, 224, 3)))
model.add(RandomRotation(0.2))
model.add(RandomZoom(0.3))
model.add(RandomTranslation(height_factor=0, width_factor=0.2))
model.add(RandomContrast(0.2))
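For contrast with the earlier stack (a small sketch, again assuming TF 2.6+), the static output shape of these layers stays fully defined, which is why Flatten followed by Dense works downstream:

```python
import tensorflow as tf

aug = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal", input_shape=(224, 224, 3)),
    tf.keras.layers.RandomRotation(0.2),
    tf.keras.layers.RandomZoom(0.3),
    tf.keras.layers.RandomTranslation(height_factor=0, width_factor=0.2),
    tf.keras.layers.RandomContrast(0.2),
])

# All spatial dimensions stay fixed, unlike RandomWidth/RandomHeight.
print(aug.output_shape)  # (None, 224, 224, 3)
```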