How do I pass an ImageDataGenerator to a U-net segmentation model? My data pipeline looks like this:
from tensorflow.keras.preprocessing.image import ImageDataGenerator

data_generator = ImageDataGenerator(
    rescale=1. / 255.
)
train_dataset_images = data_generator.flow_from_directory(
    directory=image_directory,
    target_size=(256, 256),
    class_mode=None,
    batch_size=32,
    seed=custom_seed
)
train_dataset_masks = data_generator.flow_from_directory(
    directory=mask_directory,
    target_size=(256, 256),
    batch_size=32,
    class_mode=None,
    color_mode='grayscale',
    seed=custom_seed
)
train_generator = zip(train_dataset_images, train_dataset_masks)
When I run this, I get a ValueError saying "expected 1 input, but received 2", so I tried combining the two iterators with these functions:
def combine_generator(image_generator, mask_generator):
    while True:
        image_batch = next(image_generator)
        mask_batch = next(mask_generator)
        yield (image_batch, mask_batch)
and
def combine_generator(image_gen, mask_gen):
    for img, mask in zip(image_gen, mask_gen):
        yield img, mask
Neither of these seems to work.
Create two subfolders inside the train directory, one for the images and one for the mask images, and pass each of them to image_datagen.flow_from_directory().
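flow_from_directory() expects one more level of subfolders inside whatever directory it is pointed at, so a layout roughly like the following works (the folder and file names here are only placeholders):

train/
    images/
        img/
            0001.png
            0002.png
    masks/
        img/
            0001.png
            0002.png

image_directory would then be train/images and mask_directory would be train/masks.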
Mask images are usually grayscale. Create a generator that yields batches of images and masks together, as sketched below.
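A minimal sketch of such a generator, assuming both iterators are created with the same seed so that images and masks stay paired (the directory paths and the seed value are placeholders from the question):

from tensorflow.keras.preprocessing.image import ImageDataGenerator

image_directory = 'train/images'   # hypothetical paths matching the layout above
mask_directory = 'train/masks'
custom_seed = 42                   # any fixed seed; it keeps the two iterators in step

data_generator = ImageDataGenerator(rescale=1. / 255.)

image_iter = data_generator.flow_from_directory(
    directory=image_directory,
    target_size=(256, 256),
    class_mode=None,          # no labels: the iterator yields only image batches
    batch_size=32,
    seed=custom_seed,
)
mask_iter = data_generator.flow_from_directory(
    directory=mask_directory,
    target_size=(256, 256),
    class_mode=None,
    color_mode='grayscale',   # single-channel masks
    batch_size=32,
    seed=custom_seed,
)

def combine_generator(image_iter, mask_iter):
    # fit() expects each item from the generator to be an (inputs, targets) tuple
    while True:
        yield next(image_iter), next(mask_iter)

train_generator = combine_generator(image_iter, mask_iter)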
Then build a U-net model and pass the generator to the fit function; the masks do not have to be one-hot encoded. I have built a U-net myself, please refer to this gist.
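For reference, here is a small U-net sketch in the same spirit (this is not the gist; the depth, filter counts, and the binary-crossentropy loss are assumptions for a single-channel mask), together with the fit call, reusing train_generator from the sketch above:

from tensorflow.keras import layers, models

def build_unet(input_shape=(256, 256, 3)):
    inputs = layers.Input(shape=input_shape)

    # Encoder
    c1 = layers.Conv2D(16, 3, activation='relu', padding='same')(inputs)
    c1 = layers.Conv2D(16, 3, activation='relu', padding='same')(c1)
    p1 = layers.MaxPooling2D()(c1)

    c2 = layers.Conv2D(32, 3, activation='relu', padding='same')(p1)
    c2 = layers.Conv2D(32, 3, activation='relu', padding='same')(c2)
    p2 = layers.MaxPooling2D()(c2)

    # Bottleneck
    b = layers.Conv2D(64, 3, activation='relu', padding='same')(p2)
    b = layers.Conv2D(64, 3, activation='relu', padding='same')(b)

    # Decoder with skip connections
    u2 = layers.Conv2DTranspose(32, 2, strides=2, padding='same')(b)
    u2 = layers.concatenate([u2, c2])
    c3 = layers.Conv2D(32, 3, activation='relu', padding='same')(u2)

    u1 = layers.Conv2DTranspose(16, 2, strides=2, padding='same')(c3)
    u1 = layers.concatenate([u1, c1])
    c4 = layers.Conv2D(16, 3, activation='relu', padding='same')(u1)

    # Single-channel sigmoid output: a binary mask, so no one-hot encoding is needed
    outputs = layers.Conv2D(1, 1, activation='sigmoid')(c4)
    return models.Model(inputs, outputs)

model = build_unet()
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# The combined generator is infinite, so steps_per_epoch must be given;
# number of training images divided by the batch size is a typical choice.
model.fit(train_generator, steps_per_epoch=image_iter.samples // 32, epochs=10)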