I am trying to use a PyTorch optimizer to optimise parameter values, but the parameters differ greatly in scale: one is in the thousands while the others lie between 0 and 1. In the example below there are two parameters, one whose optimal value is 0.1 and another whose optimal value is 20. How can I change the code so that a sensible learning rate (say, 1e-3 and 0.1 respectively) is applied to each parameter?
import torch as pt

# Objective function
def f(x, y):
    return (10 - 100 * x) ** 2 + (y - 20) ** 2

# Optimal parameters
print("Optimal value:", f(0.1, 20))

# Initial parameters
hp = pt.Tensor([1, 10])
print("Initial value", f(*hp))

# Optimiser
hp.requires_grad = True
optimizer = pt.optim.Adam([hp])

n = 5
for i in range(n):
    optimizer.zero_grad()
    loss = f(*hp)
    loss.backward()
    optimizer.step()

hp.requires_grad = False
print("Final parameters:", hp)
print("Final value:", f(*hp))
The torch.optim.Optimizer class accepts a list of dictionaries in its params argument, one dictionary per parameter group. In each dictionary you define params plus any other options to use for that group. Any option you leave out of a dictionary falls back to the default passed to the Optimizer constructor. See the official documentation for more information.
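As a small self-contained illustration of that fallback behaviour (this snippet is not from the question): a group that overrides lr uses its own value, while a group that omits it inherits the constructor default.

```python
import torch

p1 = torch.zeros(1, requires_grad=True)
p2 = torch.zeros(1, requires_grad=True)

# Group 0 overrides lr; group 1 omits it and inherits the default lr=0.1
opt = torch.optim.Adam(
    [{"params": [p1], "lr": 1e-3}, {"params": [p2]}],
    lr=0.1,
)
print(opt.param_groups[0]["lr"])  # → 0.001
print(opt.param_groups[1]["lr"])  # → 0.1
```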
Note that the parameters must be separate tensors here: entries of a single tensor cannot be placed in different parameter groups, so hp becomes a tuple of two one-element tensors. Here is the updated code:
import torch as pt

# Objective function
def f(x, y):
    return (10 - 100 * x) ** 2 + (y - 20) ** 2

# Optimal parameters
print("Optimal value:", f(0.1, 20))

# Initial parameters
hp = pt.Tensor([1]), pt.Tensor([10])
print("Initial value", f(*hp))

# Optimiser
for param in hp:
    param.requires_grad = True
# eps and betas are shared between the two groups
optimizer = pt.optim.Adam([{"params": [hp[0]], "lr": 1e-3}, {"params": [hp[1]], "lr": 0.1}])

n = 5
for i in range(n):
    optimizer.zero_grad()
    loss = f(*hp)
    loss.backward()
    optimizer.step()

for param in hp:
    param.requires_grad = False
print("Final parameters:", hp)
print("Final value:", f(*hp))
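If you later need to change the per-group rates, for example to decay them as training progresses, you can mutate optimizer.param_groups directly; the built-in schedulers in torch.optim.lr_scheduler work this way under the hood. A minimal sketch (with a made-up single-group optimizer, not the one above):

```python
import torch

w = torch.zeros(2, requires_grad=True)
opt = torch.optim.Adam([{"params": [w], "lr": 0.1}])

# Halve every group's learning rate, e.g. at the end of an epoch
for group in opt.param_groups:
    group["lr"] *= 0.5

print(opt.param_groups[0]["lr"])  # → 0.05
```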