I want to freeze layers in a PyTorch EfficientNet model, but my usual approach doesn't work.
from torchvision.models import efficientnet_b0
from torch import nn
from torch import optim
efficientnet_b0_fine = efficientnet_b0(pretrained=True)
for param in efficientnet_b0_fine.parameters():
    param.requires_grad = False
efficientnet_b0_fine.fc = nn.Linear(512, 10)
optimizer = optim.Adam(efficientnet_b0_fine.parameters(), lr=0.0001)
loss_function = nn.CrossEntropyLoss()
training(net=efficientnet_b0_fine, n_epochs=epochs, optimizer=optimizer, loss_function=loss_function, train_dl=train_dl)
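For context, inspecting the model shows where the classification head actually lives (a quick check; as far as I can tell, torchvision's EfficientNet exposes its head as classifier, not fc):

from torchvision.models import efficientnet_b0

model = efficientnet_b0(pretrained=True)
# The head is a Sequential(Dropout, Linear); for efficientnet_b0 the Linear
# has in_features=1280, out_features=1000.
print(model.classifier)
# There is no fc attribute, so assigning model.fc only adds a new, unused module.
print(hasattr(model, "fc"))  # False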
The error I get says:
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
The training function looks like this:
for xb, yb in train_dl:
    optimizer.zero_grad()
    xb = xb.to(device)
    yb = yb.to(device)
    y_hat = net(xb)
    loss = loss_function(y_hat, yb)
    loss.backward()
    optimizer.step()
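A quick check right before training makes the problem visible (a minimal sketch; efficientnet_b0_fine and train_dl are the objects from the snippet above):

# Which parameters can still learn?
trainable = [name for name, p in efficientnet_b0_fine.named_parameters()
             if p.requires_grad]
print(trainable)  # ['fc.weight', 'fc.bias']: only the new layer, which forward() never uses

# The forward pass never touches those parameters:
xb, yb = next(iter(train_dl))
out = efficientnet_b0_fine(xb)
print(out.shape)          # still (batch, 1000), the original head is intact
print(out.requires_grad)  # False, hence the RuntimeError in loss.backward()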
It would be great if one of you has a solution for this "element 0 of tensors does not require grad and does not have a grad_fn" error!
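My current suspicion is that EfficientNet has no fc attribute at all, so my new Linear gets registered but is never used in the forward pass, and the loss ends up detached from every trainable parameter. Here is a minimal sketch of what I plan to try instead (assuming the head really is classifier[1], a Linear with in_features=1280 for efficientnet_b0):

from torch import nn, optim
from torchvision.models import efficientnet_b0

model = efficientnet_b0(pretrained=True)

# Freeze the entire backbone.
for param in model.parameters():
    param.requires_grad = False

# Replace the real head: torchvision stores it as classifier = Sequential(Dropout, Linear).
in_features = model.classifier[1].in_features  # 1280 for efficientnet_b0
model.classifier[1] = nn.Linear(in_features, 10)  # a fresh layer requires grad by default

# Hand the optimizer only the parameters that should learn.
optimizer = optim.Adam((p for p in model.parameters() if p.requires_grad), lr=0.0001)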