Steepest descent and finding optimal step size
I am trying to run steepest descent on a function of 2 variables. It works with a known step size of 0.3, but I want a way to choose the step size automatically, i.e. a function that finds a good step. I found something called the Armijo–Goldstein condition, but I don't understand it, and the formula confuses me. So I'm asking for your help if you know how to balance this, because I think everything related to the step size in my code is wrong. I guess it has to compute the step size separately as it descends in x and y.
x, y = f.optimal_range()  ## getting a random start
step = 0.3  ## <--- This number should be random between 0 and 1, but my step size calculations are wrong, so I can't do it.
while f.fprime_x(x) != 0:  ## loop to find the zero of the derivative with respect to x
    fx = -f.fprime_x(x)
    x = x + (step * fx)
    print(x)
    if not f.delta_check(step, x, y):  ## <--- Here's the problem. By the definition the step has to be made smaller if it doesn't pass the check, but if I make it smaller it enters an eternal loop around the minimum.
        step = step * 1.001
while f.fprime_y(y) != 0:  ## loop to find the zero of the derivative with respect to y
    fy = -f.fprime_y(y)
    y = y + (step * fy)
    print(x, y)
    if not f.delta_check(step, x, y):
        step = step * 1.001
print(f"\n\nMIN ({x}, {y})")
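As an aside, `f.fprime_x(x) != 0` almost never becomes false in floating-point arithmetic, so these loops tend to spin forever even after the descent has converged. A minimal sketch of the same fixed-step descent with a tolerance-based stopping rule instead; the quadratic function, its hand-derived partials, and all constants here are made-up stand-ins for the question's `f`:

```python
import math

# Hypothetical example: f(x, y) = (x - 1)^2 + 2*(y + 3)^2, minimum at (1, -3).
def func(x, y):
    return (x - 1) ** 2 + 2 * (y + 3) ** 2

def fprime_x(x):
    return 2 * (x - 1)

def fprime_y(y):
    return 4 * (y + 3)

def descend(x, y, step=0.3, tol=1e-8, max_iter=10_000):
    """Fixed-step steepest descent that stops when the gradient norm is
    small, instead of testing fprime(x) != 0 on floats."""
    for _ in range(max_iter):
        gx, gy = fprime_x(x), fprime_y(y)
        if math.hypot(gx, gy) < tol:  # gradient ~ 0 -> converged
            break
        x -= step * gx
        y -= step * gy
    return x, y

x_min, y_min = descend(0.0, 0.0)
print(x_min, y_min)  # converges near (1, -3)
```

The same tolerance idea applies unchanged to the question's two separate x/y loops.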
Here is the delta / step-size check function:
def delta_check(delta, x, y):
    ux = -fprime_x(x)
    uy = -fprime_y(y)
    f_del_xy = func(x + (delta * ux), y + (delta * uy))
    return f_del_xy <= func(delta * ux, delta * uy) + delta
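For comparison, the standard Armijo sufficient-decrease test compares the new function value against the *current* value minus a fixed fraction of the decrease the gradient predicts, and on failure the step is only ever shrunk, never grown. A hedged sketch, with a sample quadratic standing in for the question's `func`/`fprime_x`/`fprime_y` (all names and constants here are illustrative assumptions):

```python
# Sample quadratic stands in for the question's func / fprime_x / fprime_y.
def func(x, y):
    return x ** 2 + 3 * y ** 2

def fprime_x(x):
    return 2 * x

def fprime_y(y):
    return 6 * y

def armijo_check(step, x, y, c=0.5):
    """Armijo sufficient-decrease test: accept `step` only if moving along
    the negative gradient lowers func by at least c * step * ||grad||^2."""
    gx, gy = fprime_x(x), fprime_y(y)
    needed = c * step * (gx * gx + gy * gy)
    return func(x - step * gx, y - step * gy) <= func(x, y) - needed

# Backtracking: shrink (never grow) the step until the check passes.
step, x, y = 1.0, 5.0, 5.0
while not armijo_check(step, x, y):
    step *= 0.5
print(step)  # 0.125 for this particular function and start
```

Note the two differences from `delta_check` above: the right-hand side is `func(x, y)` at the current point (not `func(delta * ux, delta * uy)`, which evaluates the function at an unrelated point), and the slack term scales with the squared gradient norm rather than being a flat `+ delta`.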
Here is a conceptual Armijo–Goldstein implementation. However, there's no way to test it without sample data and a sample function.
import numpy as np

# both should be less than, but usually close to, 1
c = 0.8    # how much imperfection in the function improvement we'll settle for
tau = 0.8  # how much the step is decreased at each backtracking iteration

x = np.array(f.optimal_range())  # assume everything is a vector; x is an n-dimensional coordinate

# NOTE: the part below is repeated for every x update
step = 0.3  # alpha in Armijo–Goldstein terms
gradient = np.array([f.fprime_x(x[0]), f.fprime_y(x[1]), ...])
# in the simplest case (plain gradient descent) p points along the negative gradient,
# but in general it doesn't have to, e.g. because of added momentum
p = -gradient / ((gradient**2).sum() ** 0.5)
m = gradient.dot(p)  # "happy case" improvement per unit step (negative for a descent direction)
t = -c * m           # improvement we'll consider good enough
# func(*x) might be worth precomputing
while func(*x) - func(*(x + step * p)) < step * t:  # shrink until the decrease is good enough
    step *= tau
# update x and repeat
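Putting the pieces together as a runnable sketch on a sample two-variable quadratic; the function, its gradient, the starting point, and the iteration cap are all assumptions for illustration, so swap in your own `func` and derivatives:

```python
import numpy as np

# Sample quadratic with its minimum at (2, -1); stands in for the real func.
def func(x0, x1):
    return (x0 - 2) ** 2 + 5 * (x1 + 1) ** 2

def grad(x):
    return np.array([2 * (x[0] - 2), 10 * (x[1] + 1)])

c, tau = 0.8, 0.8            # Armijo constant and backtracking shrink factor
x = np.array([10.0, 10.0])   # assumed starting point

for _ in range(200):                 # outer steepest-descent iterations
    g = grad(x)
    if np.linalg.norm(g) < 1e-12:    # gradient ~ 0: at the minimum
        break
    p = -g / np.linalg.norm(g)       # unit steepest-descent direction
    t = -c * g.dot(p)                # required decrease per unit of step
    step = 0.3                       # reset alpha for each line search
    while func(*x) - func(*(x + step * p)) < step * t:
        step *= tau                  # backtrack until sufficient decrease
    x = x + step * p

print(x)  # close to (2, -1)
```

Resetting `step` to its initial value before each line search is what avoids the question's eternal-loop problem: the step is free to shrink near the minimum without staying small forever.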