I'm using scipy.optimize.fmin to optimize the Rosenbrock function:
import scipy.optimize
import numpy as np

def rosen(x):
    """The Rosenbrock function"""
    return sum(100.0 * (x[1:] - x[:-1]**2.0)**2.0 + (1 - x[:-1])**2.0)

x0 = np.array([1.3, 0.7, 0.8, 1.9, 1.2])
scipy.optimize.fmin(rosen, x0, full_output=True)
This returns a tuple describing the solution (the parameters that minimize the function, the function's minimum value, the number of iterations, and the number of function calls).
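(For reference, with full_output=True the tuple also includes a warning flag, so it can be unpacked as follows; the variable names here are just illustrative:)

xopt, fopt, n_iter, n_funcalls, warnflag = scipy.optimize.fmin(
    rosen, x0, full_output=True)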
But I'd like to be able to plot the values at each step. For example, I'd plot the iteration number along the x axis and the running minimum along the y axis.
fmin accepts an optional callback function that is called after each iteration, so you can just write a simple callback that records the value at each step:
def save_step(k):
    steps.append(k)

steps = []
scipy.optimize.fmin(rosen, x0, full_output=True, callback=save_step)
print(np.array(steps)[:10])
Output:
[[ 1.339 0.721 0.824 1.71 1.236 ]
[ 1.339 0.721 0.824 1.71 1.236 ]
[ 1.339 0.721 0.824 1.71 1.236 ]
[ 1.339 0.721 0.824 1.71 1.236 ]
[ 1.2877696 0.7417984 0.8013696 1.587184 1.3580544 ]
[ 1.28043136 0.76687744 0.88219136 1.3994944 1.29688704]
[ 1.28043136 0.76687744 0.88219136 1.3994944 1.29688704]
[ 1.28043136 0.76687744 0.88219136 1.3994944 1.29688704]
[ 1.35935594 0.83266045 0.8240753 1.02414244 1.38852256]
[ 1.30094767 0.80530982 0.85898166 1.0331386 1.45104273]]
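To get the plot described in the question, you can evaluate rosen at each recorded point and plot the running minimum over the iterations; here's a minimal sketch, assuming matplotlib is installed:

import matplotlib.pyplot as plt

values = [rosen(s) for s in steps]           # objective value at each recorded step
running_min = np.minimum.accumulate(values)  # best value seen so far
plt.plot(running_min)
plt.xlabel('iteration')
plt.ylabel('running minimum of rosen(x)')
plt.show()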
Building on Randy's answer: because some optimization methods pass extra arguments when they invoke the callback (e.g. 'trust-constr'), I found this solution more robust:
import numpy

steps = []

def save_step(*args):
    # Keep only ndarray arguments (the current point); ignore extras
    # such as the state object that 'trust-constr' also passes.
    for arg in args:
        if isinstance(arg, numpy.ndarray):
            steps.append(arg)
An example of usage:
import numpy as np
from scipy.optimize import minimize

def f(x):
    x1 = x[0]
    x2 = x[1]
    return -(-4 * x1 * x1 - 4 * x2 * x2 + 4 * x1 * x2 + 8 * x1 + 20 * x2)

def gradient(x):
    x1 = x[0]
    x2 = x[1]
    return np.array([
        -(-8 * x1 + 4 * x2 + 8),
        -(-8 * x2 + 4 * x1 + 20)
    ])

start_point = np.zeros(2)
result = minimize(
    fun=f,
    x0=start_point,
    method='trust-constr',
    jac=gradient,
    callback=save_step
)
print(result)
print(steps)
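Since this problem is two-dimensional, the recorded iterates can also be plotted as a trajectory in the plane; a minimal sketch, again assuming matplotlib is installed:

import matplotlib.pyplot as plt

xs = np.array(steps)                # one row per callback invocation
plt.plot(xs[:, 0], xs[:, 1], 'o-')  # path of the iterates in the (x1, x2) plane
plt.xlabel('x1')
plt.ylabel('x2')
plt.show()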