Layer count mismatch when loading weights from file in Keras transfer learning


I'm new to convolutional neural networks and machine learning in general, so apologies in advance if my question is very basic, but I have been searching for a while without finding a solution. For a semantic segmentation task, I pretrained a U-Net model with 4 convolutional blocks in Keras on a patch dataset with input_shape = (128, 128, 3). I now need to train the same model on a new dataset (input_shape = (128, 128, 5)). So far I believe I have successfully added the convolution the U-Net model needs in order to accept 5-channel tensors, and I have saved the model and its weights in an h5 file. However, on the line marked with a comment in the second code block I get the error "keras load_weights() ValueError: Layer count mismatch when loading weights from file. Model expected 4 layers, found 23 saved layers".

I have searched the Keras documentation and looked at other related questions on Stack Overflow, but they do not fully answer my question. After calling model.summary(), I found that when I build the new model from the pretrained model's weights, an adapted input convolution layer, and a new output layer, I end up with a model that has only 4 layers (the input layer, the 1×1 adapter convolution, the pretrained model as a single layer, and the output layer).

Here is the code for my transfer-learning model:

pretrained_unet = load_model(".\projects\unet_v2.h5")
#freezing before the layer with the same input size
pretrained_unet = freeze_up_to(pretrained_unet, "dropout_5")

## Create a new input layer for the Unet
input_unet = Input(shape=(128, 128, 5))
input_layer = Conv2D(3, (3, 3), padding='same', activation='relu', input_shape= (128,128,5))

# Convert 5-channel input to 3-channel input
filtered_unet_input = Conv2D(3, (1, 1), activation='relu')(input_unet)
filtered_unet_input = pretrained_unet(filtered_unet_input)
#filtered_unet_input = tf.convert_to_tensor(input_unet[:,:,:3])
# Combine the UNET head and Unet
combined_output = Dense(n_classes, activation="softmax")(filtered_unet_input)
combined_model = keras.Model(input_unet,combined_output)
#combined_output = pretrained_unet(filtered_unet_input)

combined_model.summary()
_________________________________________________________________
 Layer (type)                Output Shape              Param #
=================================================================
 input_1 (InputLayer)        [(None, 128, 128, 5)]     0

 conv2d_1 (Conv2D)           (None, 128, 128, 3)       18

 UNet (Functional)           (None, 128, 128, 2)       8557570

 dense (Dense)               (None, 128, 128, 2)       6

=================================================================
Total params: 8,557,594
Trainable params: 110,874
Non-trainable params: 8,446,720
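
For reference, a quick check (a sketch, assuming combined_model is still in scope as built above) confirms that the whole pretrained U-Net is nested inside the single "UNet" (Functional) layer:

# The outer model only exposes 4 layers; the original U-Net's layers
# are hidden inside the single nested Functional layer.
print(len(combined_model.layers))                    # 4
print(len(combined_model.get_layer("UNet").layers))  # 23 in my case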

But my pretrained model has 23 layers, so the layer counts do not match and the weights cannot be assigned correctly. Here is the code for my pretrained model:

import tensorflow as tf
from keras.callbacks import ModelCheckpoint, EarlyStopping, TensorBoard
from keras.models import load_model, Model
from keras.layers import Input, Dense, Conv2D

# Visualization
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
import matplotlib.patches as patches

# Working with arrays
import numpy as np
from tabulate import tabulate

# External files with functions to load the dataset,
# create a CNN model, and a data generator.
from importlib import reload
import utils
import models
import data_generator
from data_generator import DataGenerator

reload(utils)
reload(models)
reload(data_generator)

from utils import *
from models import *
from data_generator import *


# Module to track model's metrics
import wandb
import random

def fix_gpu():
    config = tf.compat.v1.ConfigProto()
    config.gpu_options.allow_growth = True
    session = tf.compat.v1.InteractiveSession(config=config)


fix_gpu()

PROJECT_DIR = "."  # os.getcwd()
SEED = 42
color2index = {(0, 0, 0): 0,
               (255, 255, 255): 1
               }

n_classes = len(color2index)

BATCH_SIZE = 8
PATCH_SIZE = 128
STEP_SIZE = 128
EPOCHS = 7

#define the function to freeze desired layers of the UNET model
def freeze_up_to(model, freeze_layer_name):
  """Function to freeze some layers of the model

  Args:
      model (keras.Model): a keras.Model
      freeze_layer_name (str): layer name of "model". All layers up to this layer will be freezed.

  Returns:
      keras.Model: a keras.Model with some layers freezed.
  """
  # Getting layer number based on layer name
  for id_layer, layer in enumerate(model.layers):
    if layer.name == freeze_layer_name:
      layer_number = id_layer
      break

  # Freeze the layers
  for layer in model.layers[:layer_number]:
    layer.trainable = False

  return model

# Path to the dataset folder
DATA_PATH = join("./projects", "data/train")
print(DATA_PATH)
list_images, list_masks = read_dataset(DATA_PATH)

splits = train_val_test_dataset(list_images,
                                list_masks,
                                val_size=0.25,
                                seed=SEED)
#data generation of the train dataset (images and masks)
data_gen_train = DataGenerator(batch_size=BATCH_SIZE,
                               patch_size=PATCH_SIZE,
                               step_size=STEP_SIZE,
                               list_images=splits["images_train"],
                               list_masks=splits["masks_train"],
                               n_classes=n_classes,
                               colormap_gt=color2index
                               )
#data generation of the validation dataset (images and masks)
data_gen_val = DataGenerator(batch_size=BATCH_SIZE,
                             patch_size=PATCH_SIZE,
                             step_size=STEP_SIZE,
                             list_images=splits["images_val"],
                             list_masks=splits["masks_val"],
                             n_classes=n_classes,
                             colormap_gt=color2index
                             )

print("Number of patches for training: {}".format(len(data_gen_train) * BATCH_SIZE))
print("\nNumber of patches for validation: {}".format(len(data_gen_val) * BATCH_SIZE))

a, b = data_gen_train[0]
imgs, labels = data_gen_train[0]

show_batch(imgs, labels, color2index)

unet = get_unet(img_size=PATCH_SIZE,
                n_classes=n_classes)

unet.summary()

# Checkpoint
autosave = ModelCheckpoint("./projects/unet_v2.h5",
                           mode="max",
                           save_best_only=True,
                           monitor="val_iou",
                           verbose=1)

# Early stopping
early_stopping = EarlyStopping(monitor='val_iou',
                               patience=30,
                               verbose=1,
                               mode='max')

# Train the UNET model
unet.fit(
    data_gen_train,
    validation_data=data_gen_val,
    epochs=EPOCHS,
    callbacks=[autosave, early_stopping]
)

# Save the trained UNET model
unet.save("./unet_v2.h5")

# Load the pretrained UNET model
pretrained_unet = load_model("./projects/unet_v2.h5")

#freezing before the layer with the same input size
pretrained_unet = freeze_up_to(pretrained_unet, "dropout_5")

## Create a new input layer for the Unet
input_unet = Input(shape=(128, 128, 5))
input_layer = Conv2D(3, (3,3), padding='same', activation='relu', input_shape= (128,128,5))

# Convert 5-channel input to 3-channel input
filtered_unet_input = Conv2D(3, (1, 1), activation='relu')(input_unet)
filtered_unet_input = pretrained_unet(filtered_unet_input)

# Combine the UNET head and Unet
combined_output = Dense(n_classes, activation="softmax")(filtered_unet_input)
combined_model = keras.Model(input_unet,combined_output)


# Path to the new dataset folder
NEW_DATA_PATH = join("/projects", "data/DOP")  
new_list_images, new_list_masks = read_dataset(NEW_DATA_PATH)

new_splits = train_val_test_dataset(new_list_images, new_list_masks, val_size=0.25, seed=SEED)

# Prepare data generators for the new dataset
data_gen_new_train = DataGenerator(
    batch_size=BATCH_SIZE,
    patch_size=PATCH_SIZE,
    step_size=STEP_SIZE,
    list_images=new_splits["images_train"],
    list_masks=new_splits["masks_train"],
    n_classes=n_classes,
    colormap_gt=color2index
)

data_gen_new_val = DataGenerator(
    batch_size=BATCH_SIZE,
    patch_size=PATCH_SIZE,
    step_size=STEP_SIZE,
    list_images=new_splits["images_val"],
    list_masks=new_splits["masks_val"],
    n_classes=n_classes,
    colormap_gt=color2index
)

combined_model.compile(optimizer= Adam(),
                loss="categorical_crossentropy",
                metrics=["accuracy", 
                         keras.metrics.OneHotMeanIoU(num_classes=n_classes, 
                                       name="iou")])
# Fine-tune the combined model
combined_model.fit(
    data_gen_new_train,
    validation_data=data_gen_new_val,
    epochs=8,  # You may want to use fewer epochs for fine-tuning
    callbacks=[autosave, early_stopping]
)

# Loading the weights of the trained model.
# I get the error on this line:
combined_model.load_weights("unet_v2.h5")

# Evaluate the model on test data
data_gen_test = DataGenerator(
    batch_size=BATCH_SIZE,
    patch_size=PATCH_SIZE,
    step_size=STEP_SIZE,
    list_images=new_splits["images_test"],
    list_masks=new_splits["masks_test"],
    n_classes=n_classes,
    colormap_gt=color2index
)

scores_train = combined_model.evaluate(data_gen_new_train, verbose=0)
scores_val = combined_model.evaluate(data_gen_new_val, verbose=0)
scores_test = combined_model.evaluate(data_gen_test, verbose=0)

In case it is needed, here is the method from the Models module that defines the U-Net architecture:

def get_unet(img_size, n_classes):
  """Function to create a U-Net architecture

  Args:
      img_size (int): size of the input image
      n_classes (int): number of classes

  Returns:
      keras.Model: a keras model created using
        the functional API
  """
  # Input
  input = Input(shape=(img_size,img_size,3))
  # Downsampling
  f1,p1 = downsampling(input, 64, times=2)
  f2,p2 = downsampling(p1, 128, times=2)
  f3,p3 = downsampling(p2, 256, times=2)
  # Bottleneck
  bottleneck = conv_block(p3, 512, times=2)
  #Upsampling
  u7 = upsampling(bottleneck, 256, layer_concat=f3)
  u8 = upsampling(u7, 128, layer_concat=f2)
  u9 = upsampling(u8, 64, layer_concat=f1)
  # Output
  output = Conv2D(filters=n_classes,
                  kernel_size=1,
                  padding="same",
                  activation="softmax")(u9)
  model = Model(inputs=input, outputs=output, name="UNet")

  model.compile(optimizer= Adam(),
                loss="categorical_crossentropy",
                metrics=["accuracy", 
                         keras.metrics.OneHotMeanIoU(num_classes=n_classes, 
                                       name="iou")])

  return model

Do you know of an alternative way to load the model so that it is represented as all of its constituent layers rather than as a single layer? And the same for the weights? Or maybe the problem is the way I am building the new model from the pretrained one. I would really appreciate any guidance.

python keras deep-learning transfer-learning
1 Answer
• Why the error occurs: When you save a model as a whole in Keras, the entire model architecture is saved together with its weights. The error "Layer count mismatch when loading weights from file" appears because the architecture of your new model is different from the architecture of the original model.

Specifically, when you adapt the U-Net model to accept 5-channel instead of 3-channel input, you introduce new layers (such as the Conv2D layer that converts the 5-channel input to 3 channels), which changes the overall structure of the model.

As a result, Keras cannot map the saved weights directly onto your new model.

Here are some possible solutions:

1 - If you want to keep the modified model that handles 5-channel input, you can try saving and loading the weights of specific layers manually. For example:

pretrained_unet.load_weights('unet_v2.h5', by_name=True)

The by_name=True argument ensures that only the weights of layers with matching names and compatible shapes between the saved model and the new model are loaded.
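
A minimal sketch of how this could be applied here (reusing the question's own get_unet, n_classes and file path, and assuming the rebuilt U-Net gets the same auto-generated layer names as the saved one): load the weights by name into the 3-channel U-Net before wrapping it, instead of calling load_weights on combined_model afterwards.

# Rebuild the original 3-channel U-Net and restore only the layers whose
# names and shapes match those stored in unet_v2.h5.
pretrained_unet = get_unet(img_size=128, n_classes=n_classes)
pretrained_unet.load_weights("./projects/unet_v2.h5", by_name=True)

# The U-Net now already carries the pretrained weights, so the combined
# 5-channel model built on top of it needs no further load_weights call.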

2 - Another approach is to preprocess your 5-channel input to reduce it to 3 channels before feeding it into the original U-Net model.

This way you do not need to modify the architecture and can use the pretrained model and its weights directly. Here is some example code to get you started:

input_unet = Input(shape=(128, 128, 5))
processed_input = Conv2D(3, (1, 1), activation='relu')(input_unet)
unet_output = pretrained_unet(processed_input)

This approach lets you use the original U-Net model without modifying its structure, while avoiding the layer mismatch error.
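
A possible completion of that sketch (following the question's own setup rather than a required implementation): since the pretrained U-Net already ends in a softmax over n_classes, no extra Dense head is needed, and the adapter plus U-Net can be wrapped and compiled directly.

# Wrap the 1x1 "channel adapter" and the pretrained U-Net into one model.
combined_model = keras.Model(inputs=input_unet, outputs=unet_output)
combined_model.compile(optimizer=Adam(),
                       loss="categorical_crossentropy",
                       metrics=["accuracy"])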

3 - Consider doing transfer learning by freezing the intermediate layers of the pretrained model and training only the new layers you introduce. This approach lets you take advantage of the pretrained weights without requiring an identical architecture. It would look something like this:

for layer in pretrained_unet.layers[:-x]:  # Freeze all layers except the last x layers
    layer.trainable = False
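
One detail to keep in mind (general Keras behaviour, not specific to this code): changes to layer.trainable only take effect when the model is compiled, so recompile the model that wraps these layers before calling fit, for example:

# Recompile so the updated trainable flags are actually used in training.
combined_model.compile(optimizer=Adam(),
                       loss="categorical_crossentropy",
                       metrics=["accuracy"])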

Good luck, and let me know if you need more help (:
