Loss function does not have requires_grad=True in PyTorch

Problem description

Hello, I have the following code (a simplified version of my code, but enough to reproduce the error):

import numpy as np
from numpy import linalg as LA
import torch
import torch.optim as optim 
import torch.nn as nn

def func(x,pars):
    a = pars[0]
    b = pars[1]
    c = pars[2]
    d = pars[3]

    x = x.int()

    # build a 3x3 matrix from the four parameters
    H = torch.tensor([[a,b,1],[2,3,c],[4,d,7]])

    # eigendecomposition via NumPy (eigh treats H as Hermitian)
    eigenvalues, eigenvectors = np.linalg.eigh(H)

    # pick the eigenvalues at the requested indices
    trans_freq = eigenvalues[x]

    return torch.tensor(trans_freq)

x_index = torch.tensor([1,2])
y_vals = torch.tensor([0.5,12])

params = torch.tensor([1.,2.,3.,4.])
params.requires_grad=True
opt = optim.SGD([params], lr=100)

mse_loss = nn.MSELoss()

for i in range(10):
  opt.zero_grad()
  loss = mse_loss(func(x_index,params),y_vals)
  print(x_index.requires_grad)
  print(params.requires_grad)
  print(y_vals.requires_grad)
  print(loss.requires_grad)
  loss.backward()
  opt.step() 
  print(loss)

The output is:

False
True
False
False

and I get this error:

RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn

coming from the line loss.backward(). Indeed, the loss does not have requires_grad=True, but why is that (setting it manually inside the for loop does not work either)? What should I do? Thanks!

python python-3.x pytorch gradient loss-function
1 Answer

You need to add the line loss.requires_grad = True right before loss.backward():

for i in range(10):
  opt.zero_grad()
  loss = mse_loss(func(x_index,params),y_vals)
  print(x_index.requires_grad)
  print(params.requires_grad)
  print(y_vals.requires_grad)
  print(loss.requires_grad)

  loss.requires_grad = True     # Here you need to add this line
  loss.backward()
  opt.step() 
  print(loss)
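
Note, though, that loss.requires_grad = True only silences the error: the graph is already cut inside func, because torch.tensor([[a,b,1],...]) copies the parameter values into a new leaf tensor, np.linalg.eigh runs outside of torch entirely, and torch.tensor(trans_freq) creates yet another leaf with no grad_fn. After backward(), params.grad is still None, so opt.step() never moves the parameters and the printed loss stays constant. Below is a minimal graph-preserving sketch of func, under the assumption that the goal is to fit pars through the eigenvalues; it swaps np.linalg.eigh for torch.linalg.eigh and builds H with torch.stack:

def func(x, pars):
    a, b, c, d = pars[0], pars[1], pars[2], pars[3]
    one = torch.ones((), dtype=pars.dtype)

    # torch.stack keeps a, b, c, d connected to the autograd graph;
    # torch.tensor([[a, b, 1], ...]) would copy their values and detach them
    H = torch.stack([
        torch.stack([a, b, one]),
        torch.stack([2 * one, 3 * one, c]),
        torch.stack([4 * one, d, 7 * one]),
    ])

    # differentiable eigendecomposition; like np.linalg.eigh, it reads
    # only the lower triangle of H by default
    eigenvalues, eigenvectors = torch.linalg.eigh(H)

    # plain indexing keeps the graph; do not wrap the result in torch.tensor()
    return eigenvalues[x.long()]

With this version the loss has a grad_fn on its own, loss.backward() fills params.grad, and no manual requires_grad flag is needed (the lr=100 from the question will likely still need tuning).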