AutoEncoder training: mean_squared_error requires broadcastable shapes

Problem description · Votes: 0 · Answers: 0

I am trying to train an autoencoder to generate faces using the TensorFlow API in Python. For this I am using a dataset of roughly 10,000 images, each of shape (256, 256, 3).

The problem is that when I train my model with the `fit` function, I get the following error:

return backend.mean(tf.math.squared_difference(y_pred, y_true), axis=-1)
Node: 'mean_squared_error/SquaredDifference'
required broadcastable shapes
     [[{{node mean_squared_error/SquaredDifference}}]] [Op:__inference_train_function_1327]
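For context, this error comes from `tf.math.squared_difference` failing to broadcast `y_pred` against `y_true`. A minimal standalone reproduction (the shapes here are my assumption: image-like predictions versus label-like targets):

```python
import tensorflow as tf

# Hypothetical shapes: a batch of predicted images vs. label-shaped targets
y_pred = tf.zeros((32, 256, 256, 3))  # model output: a batch of images
y_true = tf.zeros((32, 1))            # targets shaped like class labels

try:
    tf.math.squared_difference(y_pred, y_true)
    raised = False
except tf.errors.InvalidArgumentError:
    raised = True  # "required broadcastable shapes"
print(raised)
```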

Here is my script:

  • Dataset loading


import tensorflow as tf

FACE_TRAINING_DIRECTORY = "data/Faces_Autoencoder"
IMAGE_SIZE = (256,256)
BATCH_SIZE = 32

LATENT_DIM = 100

TRAIN_DATA = tf.keras.utils.image_dataset_from_directory(
    directory=FACE_TRAINING_DIRECTORY+"/train/",
    image_size=IMAGE_SIZE,
    batch_size= BATCH_SIZE,
    seed=100
)

TEST_DATA = tf.keras.utils.image_dataset_from_directory(
    directory=FACE_TRAINING_DIRECTORY+"/validation/",
    image_size=IMAGE_SIZE,
    batch_size= BATCH_SIZE,
    seed=100
)
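One detail worth noting here: `image_dataset_from_directory` yields `(image, label)` pairs, so `fit` uses the integer labels as `y_true`. For reconstruction training, the target usually needs to be the image itself, which can be done with a `map`. A standalone sketch using an in-memory stand-in dataset (the random tensors are placeholders, not the real data):

```python
import tensorflow as tf

# Stand-in for the directory loader's output: (image, label) batches
images = tf.random.uniform((8, 64, 64, 3))  # placeholder images
labels = tf.zeros((8,), dtype=tf.int32)     # placeholder class labels
ds = tf.data.Dataset.from_tensor_slices((images, labels)).batch(4)

# Remap so the target is the image itself, matching an autoencoder's output
ds_ae = ds.map(lambda x, y: (x, x))

x, y = next(iter(ds_ae))
print(x.shape == y.shape)  # True: input and target now share a shape
```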
  • Model

from tensorflow.keras import Model
from tensorflow.keras.layers import (InputLayer, Conv2D, Flatten, Dense,
                                     Reshape, Conv2DTranspose)

class Autoencoder(Model):
    def __init__(self, latent_dim):
        super(Autoencoder, self).__init__()

        self.__latent_dim = latent_dim

        # Defining the Encoder
        self.encoder = tf.keras.Sequential(
            [
                InputLayer(input_shape=(256, 256, 3)),
                Conv2D(filters=8, padding="same", kernel_size=(3, 3), strides=2),
                Flatten(),
                Dense(LATENT_DIM)
            ]
        )

        self.decoder = tf.keras.Sequential(
            [
                InputLayer(input_shape=(LATENT_DIM,)),
                Dense(128 * 128 * 8),
                Reshape((128, 128, 8)),
                Conv2DTranspose(filters=8, padding="same", kernel_size=(3, 3), strides=2),
                Conv2DTranspose(filters=3, padding="same", kernel_size=(3, 3), strides=1),
            ]
        )


    
    def call(self,x):
        
        encoded = self.encoder(x)
        decoded = self.decoder(encoded)

        return decoded
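To double-check the encoder/decoder symmetry, the same layer stack can be traced shape-by-shape in isolation. This sketch is scaled down to 64×64 purely to keep it light; the strides and padding mirror the model above:

```python
import tensorflow as tf
from tensorflow.keras.layers import (InputLayer, Conv2D, Flatten, Dense,
                                     Reshape, Conv2DTranspose)

LATENT = 100  # latent size, as in the model above

enc = tf.keras.Sequential([
    InputLayer(input_shape=(64, 64, 3)),
    Conv2D(8, (3, 3), strides=2, padding="same"),   # -> (32, 32, 8)
    Flatten(),                                      # -> (8192,)
    Dense(LATENT),                                  # -> (100,)
])
dec = tf.keras.Sequential([
    InputLayer(input_shape=(LATENT,)),
    Dense(32 * 32 * 8),
    Reshape((32, 32, 8)),
    Conv2DTranspose(8, (3, 3), strides=2, padding="same"),  # -> (64, 64, 8)
    Conv2DTranspose(3, (3, 3), strides=1, padding="same"),  # -> (64, 64, 3)
])

out = dec(enc(tf.zeros((1, 64, 64, 3))))
print(out.shape)  # (1, 64, 64, 3): the decoder output mirrors the input
```

So with these strides and paddings, the output shape does match the input shape, which points the search elsewhere.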

Here is a summary of the encoder part: Encoder's summary

Here is a summary of the decoder part: Decoder's summary

autoencoder = Autoencoder(LATENT_DIM)
autoencoder.compile(optimizer='adam', loss=tf.losses.MeanSquaredError())
autoencoder.build(input_shape=(BATCH_SIZE, 256, 256, 3)) 
  • Training

autoencoder.fit(
  TRAIN_DATA,
  epochs=10,
  verbose=1
)

This is where the error is thrown:

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
Cell In[15], line 1
----> 1 autoencoder.fit(
      2   TRAIN_DATA,
      3   epochs=5,
      4   verbose=1
      5 )

File ~/.local/lib/python3.10/site-packages/keras/utils/traceback_utils.py:70, in filter_traceback..error_handler(*args, **kwargs)
     67     filtered_tb = _process_traceback_frames(e.__traceback__)
     68     # To get the full stack trace, call:
     69     # `tf.debugging.disable_traceback_filtering()`
---> 70     raise e.with_traceback(filtered_tb) from None
     71 finally:
     72     del filtered_tb

File ~/.local/lib/python3.10/site-packages/tensorflow/python/eager/polymorphic_function/polymorphic_function.py:912, in Function._call(self, *args, **kwds)
    909   self._lock.release()
    910   # In this case we have created variables on the first call, so we run the
    911   # defunned version which is guaranteed to never create variables.
--> 912   return self._no_variable_creation_fn(*args, **kwds)  # pylint: disable=not-callable
    913 elif self._variable_creation_fn is not None:
    914   # Release the lock early so that multiple threads can perform the call
    915   # in parallel.
    916   self._lock.release()

TypeError: 'NoneType' object is not callable

My first assumption was that the model's input and output shapes differ, but I am not sure why that would be.

I have tried modifying the model's architecture, but the same problem persists. I now think the issue lies in the way I load the dataset.

python tensorflow deep-learning tensorflow2.0 autoencoder