How do I get the output of a specific layer in a Sequential model?

Problem description  Votes: 1  Answers: 1

I would like to build a basic UNet in Keras using the Sequential model, with simple modular functions implementing the downsampling path and the upsampling path.

I tried writing the code, but I get the following error:

Traceback (most recent call last):
  File "C:\Users\User\Desktop\Seq_UNet.py", line 56, in <module>
    model = UNet(128,128,1)
  File "C:\Users\User\Desktop\Seq_UNet.py", line 36, in UNet
    x1, l1 = ConvBlock(16)(s)
TypeError: 'tuple' object is not callable

How do I get the output of a Sequential model at a specific layer?

import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2' 
import keras

def ConvBlock(num_features):
    model = keras.Sequential([
        keras.layers.Conv2D(num_features,(3,3),activation='relu',kernel_initializer='he_normal', padding='same'),
        keras.layers.BatchNormalization(epsilon=1e-06,  momentum=0.9, weights=None),
        keras.layers.Dropout(0.1),
        keras.layers.Conv2D(num_features,(3,3),activation='relu',kernel_initializer='he_normal', padding='same'),
        keras.layers.MaxPooling2D((2,2)) ])
    return model, model.layers[3]

def ConvTransposeBlock(num_features, layer):
    model = keras.Sequential([
        keras.layers.Conv2DTranspose(num_features,(2,2),strides=(2,2),padding='same'),
        keras.layers.concatenate([model.layer[0],layer]), # The past layer (model.layer[0])
        keras.layers.Conv2D(num_features,(3,3),activation='relu',kernel_initializer='he_normal', padding='same'),
        keras.layers.BatchNormalization(epsilon=1e-06,  momentum=0.9, weights=None),
        keras.layers.Dropout(0.2),
        keras.layers.Conv2D(num_features,(3,3),activation='relu',kernel_initializer='he_normal', padding='same') ])
    return model 

# Initialization
IMG_Width  = 128
IMG_Height = 128
IMG_Channels = 1

# Model
def UNet(IMG_Width=IMG_Width, IMG_Height=IMG_Height,IMG_Channels=IMG_Channels):
    # Input Layer
    inputs = keras.layers.Input((IMG_Width,IMG_Height,IMG_Channels))
    # Convert integer inputs to floating point
    s = keras.layers.Lambda(lambda x: x / 255)(inputs) 
    # Contraction path:
    x1, l1 = ConvBlock(16)(s)
    x2, l2 = ConvBlock(32)(x1)
    x3, l3 = ConvBlock(64)(x2)
    x4, l4= ConvBlock(128)(x3)
    x5,_ = ConvBlock(256)(x4)
    # Expansion path:
    x6 = ConvTransposeBlock(128,l4)(x5)
    x7 = ConvTransposeBlock( 64,l3)(x6)
    x8 = ConvTransposeBlock( 32,l2)(x7)
    x9 = ConvTransposeBlock( 16,l1)(x8)

    outputs = keras.layers.Conv2D(1,(1,1), activation='sigmoid')(x9)

    model = keras.Model(inputs=[inputs],outputs=[outputs])

    optimizer = keras.optimizers.Adam(lr=1e-4)
    model.compile(optimizer=optimizer,loss=bce_dice_loss,metrics=['accuracy'])

    return model

model = UNet(128,128,1)
python keras keras-layer sequential
1 Answer

1 vote

Edit this line: inputs = keras.layers.Input((IMG_Width,IMG_Height,IMG_Channels,IMG_Channels))

to: inputs = keras.layers.Input((IMG_Width,IMG_Height,IMG_Channels))

You wrote IMG_Channels twice.


1 vote

On this line, return model, model.layer[3], you forgot to write the "s". It must be return model, model.layers[3].
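
Beyond that typo, the title question (how to get the output of a specific layer inside a Sequential block) can be answered with a small sub-model. The sketch below is not part of the original answer; it assumes a standalone block with a fixed input shape rather than the exact ConvBlock from the question, and uses the standard keras.Model(inputs=..., outputs=...) feature-extraction idiom to expose the tensor just before the pooling layer.

import keras

# A standalone block with an explicit Input so the graph is built immediately
# (assumption: a simplified version of the question's ConvBlock).
block = keras.Sequential([
    keras.layers.Input((128, 128, 1)),
    keras.layers.Conv2D(16, (3, 3), activation='relu', padding='same'),
    keras.layers.Conv2D(16, (3, 3), activation='relu', padding='same'),
    keras.layers.MaxPooling2D((2, 2)),
])

# Sub-model exposing the output of the layer just before MaxPooling2D.
# block.layers[-2] is the second Conv2D; its .output attribute is available
# because the Sequential model above was built with an explicit Input.
skip_extractor = keras.Model(inputs=block.input,
                             outputs=block.layers[-2].output)

x = keras.layers.Input((128, 128, 1))
pooled = block(x)          # downsampled output of the whole block
skip = skip_extractor(x)   # pre-pooling feature map, sharing block's weights

In practice, many UNet implementations sidestep the problem by writing each block as a plain function over tensors with the functional API and returning both the pooled output and the pre-pooling skip tensor, so no (model, layer) tuple is ever called.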
