PyTorch: updating two models at the same time

Problem description

I'm new to PyTorch and haven't had any luck following similar threads. I'm trying to jointly train two models in the same loop, where each model's update uses a different computation that takes in some combined loss built from the outputs of both model_a and model_b. However, I'm not sure how to train them simultaneously. Any advice would be appreciated!

self.optimiser_a.zero_grad()
loss_a = calc_loss_a(output_a, output_b, ground_truth)
loss_a.backward()
self.optimiser_a.step()

self.optimiser_b.zero_grad()
loss_b = calc_loss_b(output_a, output_b, ground_truth)
loss_b.backward()
self.optimiser_b.step()

The error I get from the above is:

RuntimeError: Trying to backward through the graph a second time, but the saved intermediate results have already been freed. Specify retain_graph=True when calling .backward() or autograd.grad() the first time.

Following the suggestion in some threads, I tried using retain_graph=True, but then I get this error:

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [4, 10]], which is output 0 of TBackward, is at version 2; expected version 1 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
1 Answer

Try doing things in a slightly different order, so that neither optimiser steps (and modifies parameters in place) until both backward passes have finished:

# first, clear all prev gradient info
self.optimiser_a.zero_grad()
self.optimiser_b.zero_grad()

# compute the losses
loss_a = calc_loss_a(output_a, output_b, ground_truth)
loss_b = calc_loss_b(output_a, output_b, ground_truth)

# backprop; both losses share parts of the same graph, so keep it
# alive for the second backward (or call (loss_a + loss_b).backward())
loss_a.backward(retain_graph=True)
loss_b.backward()

# finally, step
self.optimiser_a.step()
self.optimiser_b.step()
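
For completeness, here is a minimal self-contained sketch of that pattern, using two hypothetical toy linear models and made-up losses standing in for calc_loss_a / calc_loss_b (names, shapes, and loss formulas are illustrative only, not taken from the question):

# Minimal sketch: jointly training two models whose losses mix both outputs.
# ModelA/ModelB are toy stand-ins; the losses below are placeholders.
import torch
import torch.nn as nn

model_a = nn.Linear(10, 4)
model_b = nn.Linear(10, 4)
optimiser_a = torch.optim.SGD(model_a.parameters(), lr=1e-2)
optimiser_b = torch.optim.SGD(model_b.parameters(), lr=1e-2)

x = torch.randn(8, 10)            # dummy batch
ground_truth = torch.randn(8, 4)  # dummy targets

for _ in range(3):                # a few joint training steps
    output_a = model_a(x)
    output_b = model_b(x)

    # clear all previous gradient info before any backward pass
    optimiser_a.zero_grad()
    optimiser_b.zero_grad()

    # combined losses that mix both models' outputs (placeholders for
    # calc_loss_a / calc_loss_b)
    loss_a = ((output_a - ground_truth) ** 2).mean() + 0.1 * output_b.mean()
    loss_b = ((output_b - ground_truth) ** 2).mean() + 0.1 * output_a.mean()

    # the losses share the same forward graph, so keep it alive for the
    # second backward; gradients from both losses accumulate in .grad
    loss_a.backward(retain_graph=True)
    loss_b.backward()

    # step only after both backward passes, so no parameter is modified
    # in place while the graph still needs its saved values
    optimiser_a.step()
    optimiser_b.step()

Note that retain_graph=True is needed on the first backward because both losses reach back into the same forward graph; an equivalent alternative is a single (loss_a + loss_b).backward(), since gradients accumulate additively either way.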