GPyTorch: RuntimeError: Flattening the training labels failed

Question · 0 votes · 1 answer

I am trying to do GPR with GPyTorch. My function seems to fail when I pass the test data through the model, but it runs fine on the training data. The training tensor has shape (116, 6) and the test tensor has shape (51, 6). The exact error I get is: "RuntimeError: Flattening the training labels failed. The most common cause of this error is that the shapes of the prior mean and the training labels are mismatched. The shape of the train targets is torch.Size([116, 1]), while the reported shape of the mean is torch.Size([116])."

I convert to tensors as follows:

# convert X_train and X_test to torch arrays

X_train_tensor = torch.tensor(X_train.values, dtype=torch.float64)
X_test_tensor = torch.tensor(X_test.values, dtype=torch.float64)
y_train_tensor = torch.tensor(y_train.values, dtype=torch.float32).reshape(-1, 1)
y_test_tensor = torch.tensor(y_test.values, dtype=torch.float32).reshape(-1, 1)

My original class looks like this:

class ExactGPModel(gpytorch.models.ExactGP):
    # Initialize the parameters
    def __init__(self, X_train_tensor, y_train_tensor, likelihood):
        super(ExactGPModel, self).__init__(X_train_tensor, y_train_tensor, likelihood)
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())

    def forward(self, x):
        mean_x = self.mean_module(x)
        covar_x = self.covar_module(x)
        return gpytorch.distributions.MultivariateNormal(mean_x, covar_x)

ChatGPT suggested changing the last line of the class to the following, but that did not work:

        # return gpytorch.distributions.MultivariateNormal(mean_x.view(-1), covar_x)

Tags: python pytorch gpytorch

1 Answer
0 votes

This error occurs during model initialization.

`ExactGP` models expect `train_inputs` of size `n*d` and `train_targets` of size `n`; that is, you cannot pass the targets as an `n*1` column vector. To fix this bug, simply change your conversion code to:

# convert X_train and X_test to torch tensors; keep the targets 1-D
X_train_tensor = torch.tensor(X_train.values, dtype=torch.float64)
X_test_tensor = torch.tensor(X_test.values, dtype=torch.float64)
y_train_tensor = torch.tensor(y_train.values, dtype=torch.float32).flatten()
y_test_tensor = torch.tensor(y_test.values, dtype=torch.float32).flatten()
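To see why this works, here is a minimal sketch (using a dummy tensor in place of your `y_train` data) comparing the shape of an `n*1` column vector with the shape of the flattened 1-D tensor that `ExactGP` expects:

```python
import torch

# dummy targets shaped like an (n, 1) column, as produced by .reshape(-1, 1)
y = torch.arange(5, dtype=torch.float32).reshape(-1, 1)

print(y.shape)            # torch.Size([5, 1]) -- triggers the flattening error
print(y.flatten().shape)  # torch.Size([5])    -- 1-D, matching the prior mean
```

The prior mean returned by `ConstantMean` has shape `torch.Size([n])`, so the targets must be 1-D for the shapes to line up.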