ValueError: Invalid input shape when combining image, mask, and CSV data in a Keras model


I am building a deep learning model in Keras that combines three inputs: images, masks, and CSV data. The goal is to predict the presence and type of brain hemorrhage from medical images (CT scans) together with CSV data. The model uses an Attention U-Net for image segmentation and dense layers for processing the CSV data.

The input data consists of:

  1. Images: grayscale CT scans with shape (256, 256, 1).

  2. Masks: binary segmentation masks with shape (256, 256, 1).

  3. CSV data: numerical data for the various hemorrhage types, with shape (6,).

I get the following error when I try to train the model with model.fit():

ValueError: Invalid input shape for input Tensor("functional_1/Cast:0", shape=(None, 6), dtype=float32). Expected shape (None, 256, 256, 1), but input has incompatible shape (None, 6)

It looks like the CSV data with shape (None, 6) is being fed into the model input that expects image data of shape (None, 256, 256, 1). I have double-checked the input shapes and used train_test_split to split the data, but the problem persists.

Here is a simplified version of my code:

# Model definition
inputs, outputs = attention_unet()  # UNet model for images

csv_input = layers.Input(shape=(6,), name='csv_input')  # CSV data input

mask_input = layers.Input(shape=(256, 256, 1), name='mask_input')  # Mask input

# CSV data processing
csv_x = layers.Dense(64, activation='relu')(csv_input)
csv_x = layers.Dense(32, activation='relu')(csv_x)

# Combine CSV with U-Net output
flatten_outputs = layers.Flatten()(outputs)
combined = layers.Concatenate()([flatten_outputs, csv_x])

# Final output
final_output = layers.Dense(1, activation='sigmoid', name='final_output')(combined)

# Model compilation
model = models.Model(inputs=[inputs, csv_input, mask_input], outputs=[outputs, final_output])
model.compile(optimizer=Adam(),
              loss={'final_output': 'binary_crossentropy', 'outputs': 'categorical_crossentropy'},
              metrics=['accuracy', dice_coef, jaccard_index])

# Data shapes
print(f"Shape of X_train: {X_train.shape}")  # Shape of images: (batch_size, 256, 256, 1)print(f"Shape of csv_train: {csv_train.shape}")  # Shape of CSV: (batch_size, 6)print(f"Shape of y_train: {y_train.shape}")  # Shape of masks: (batch_size, 256, 256, 1)

# Model training
model.fit({'input_layer': X_train, 'csv_input': csv_train, 'mask_input': y_train},
          {'final_output': csv_train, 'outputs': y_train},
          validation_data=({'input_layer': X_val, 'csv_input': csv_val, 'mask_input': y_val},
                           {'final_output': csv_val, 'outputs': y_val}),
          epochs=50,
          batch_size=32,
          callbacks=callbacks)

Steps I have tried:

  • I printed the shapes of the training data and they all look correct:
    • Images: (batch_size, 256, 256, 1)
    • CSV: (batch_size, 6)
    • Masks: (batch_size, 256, 256, 1)
  • I checked the order of the arrays passed to model.fit() to make sure each one goes to the correct layer (see the name-check sketch after this list).
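
Below is a small check (a sketch, not part of my actual pipeline) that prints the names Keras actually assigned to the model's inputs and outputs, so the dictionary keys passed to model.fit() can be compared against them:

# Inspect the real input/output tensor names and shapes of the built model.
for tensor in model.inputs:
    print("input:", tensor.name, tensor.shape)
for tensor in model.outputs:
    print("output:", tensor.name, tensor.shape)
model.summary()  # also lists every layer name and output shape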

How can I resolve this shape mismatch and correctly combine the image, mask, and CSV data in my model?

python tensorflow machine-learning keras unet-neural-network
1 Answer

This happens when you try to combine image data (shape (None, 256, 256, 1)) with CSV data (shape (None, 6)) and the arrays are not routed to the correct model inputs: the (None, 6) CSV array ends up at the input that expects images. Giving each input layer an explicit name and making the keys passed to model.fit() match those names fixes it.

This code should work for you:

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models, optimizers
from tensorflow.keras.losses import binary_crossentropy, categorical_crossentropy

# Assuming `attention_unet` is your function that returns the U-Net model
def attention_unet():
    inputs = layers.Input(shape=(256, 256, 1), name='image_input')  # Input for images
    # Build your U-Net architecture here
    # outputs = ... (this would be your segmentation output)
    outputs = layers.Conv2D(1, (1, 1), activation='sigmoid')(inputs)  # Example output
    return inputs, outputs

# Model definition
image_input, segmentation_output = attention_unet()  # UNet model for images

# CSV data input
csv_input = layers.Input(shape=(6,), name='csv_input')

# Mask input for segmentation task
mask_input = layers.Input(shape=(256, 256, 1), name='mask_input')

# CSV data processing
csv_x = layers.Dense(64, activation='relu')(csv_input)
csv_x = layers.Dense(32, activation='relu')(csv_x)

# Flatten the U-Net's segmentation output to combine with CSV data
flatten_segmentation_output = layers.Flatten()(segmentation_output)

# Combine flattened segmentation output with CSV processed data
combined = layers.Concatenate()([flatten_segmentation_output, csv_x])

# Classification output (predicting hemorrhage type)
classification_output = layers.Dense(1, activation='sigmoid', name='final_output')(combined)

# Define the model with inputs and outputs
model = models.Model(inputs=[image_input, csv_input, mask_input],
                     outputs=[segmentation_output, classification_output])

# Compile the model
model.compile(optimizer=optimizers.Adam(),
              loss={'final_output': 'binary_crossentropy', 'conv2d': 'categorical_crossentropy'},  # replace 'conv2d' with the correct name of the segmentation output
              metrics=['accuracy'])

# Print model summary to verify the architecture
model.summary()

# Example data shapes (for illustration purposes):
X_train = np.random.rand(32, 256, 256, 1)  # Images
csv_train = np.random.rand(32, 6)  # CSV Data
y_train = np.random.rand(32, 256, 256, 1)  # Masks

X_val = np.random.rand(32, 256, 256, 1)  # Validation images
csv_val = np.random.rand(32, 6)  # Validation CSV data
y_val = np.random.rand(32, 256, 256, 1)  # Validation masks

# Training the model
model.fit({'image_input': X_train, 'csv_input': csv_train, 'mask_input': y_train},
          {'final_output': csv_train, 'conv2d': y_train},  # 'conv2d' needs to match the segmentation output's layer name
          validation_data=({'image_input': X_val, 'csv_input': csv_val, 'mask_input': y_val},
                           {'final_output': csv_val, 'conv2d': y_val}),
          epochs=50, batch_size=32)
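
One detail to double-check (an assumption on my part, since the full label setup is not shown): final_output is a single sigmoid unit, so its training target should have shape (batch_size, 1), not the six-column CSV array passed above. If the six CSV columns are the per-type labels you actually want to predict, widen the classification head instead. A minimal sketch of the two options:

# Option 1 (hypothetical): collapse the six columns into one binary "any hemorrhage" label
has_hemorrhage = (csv_train.max(axis=1) > 0).astype("float32").reshape(-1, 1)  # shape (batch_size, 1)
# then train with {'final_output': has_hemorrhage, 'conv2d': y_train}

# Option 2 (hypothetical): predict all six hemorrhage types at once
classification_output = layers.Dense(6, activation='sigmoid', name='final_output')(combined)
# then {'final_output': csv_train, 'conv2d': y_train} matches the (batch_size, 6) targets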