How to tell if Newton's Method fails
I am writing a basic Newton's method algorithm for an unconstrained optimization problem, but the results are not what I expect. The objective function is simple enough that the algorithm should clearly converge to (1, 1). A gradient descent algorithm I wrote earlier confirms this, here:
def grad_descent(x, t, count, magnitude):
    xvalues, gradvalues, fvalues = [], [], []  # assumed initialization; omitted in the post
    while magnitude > 1e-6:                    # assumed stopping condition; omitted in the post
        xvalues.append(x)
        gradvalues.append(np.array([dfx1(x), dfx2(x)]))
        fvalues.append(f(x))
        temp = x - t*dfx(x)
        x = temp
        magnitude = mag(dfx(x))
        count += 1
    return xvalues, gradvalues, fvalues, count
Here is my attempt at a Newton's method algorithm:
def newton(x, t, count, magnitude):
    xvalues = []
    gradvalues = []
    fvalues = []
    temp = x - f(x)/dfx(x)
    while count < 10:
        xvalues.append(x)
        gradvalues.append(dfx(x))
        fvalues.append(f(x))
        temp = x - t*f(x)/dfx(x)
        x = temp
        magnitude = mag(dfx(x))
        count += 1
        if count > 100:
            break
    return xvalues, gradvalues, fvalues, count
Here are the objective function and the gradient:
f = lambda x: 100*np.square(x[1]-np.square(x[0])) + np.square((1-x[0]))
dfx = lambda x: np.array([-400*x[0]*x[1]+400*np.power(x[0],3)+2*x[0]-2, 200*(x[1]-np.square(x[0]))])
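Two sanity checks worth running on the definitions above (my additions, not from the original post): the analytic gradient can be compared against central finite differences, and the shapes in the update `f(x)/dfx(x)` can be inspected. Since `f(x)` is a scalar and `dfx(x)` a length-2 array, the quotient broadcasts elementwise; it is not a Newton direction:

```python
import numpy as np

f = lambda x: 100*np.square(x[1]-np.square(x[0])) + np.square(1-x[0])
dfx = lambda x: np.array([-400*x[0]*x[1]+400*np.power(x[0],3)+2*x[0]-2,
                          200*(x[1]-np.square(x[0]))])

def fd_grad(x, h=1e-6):
    """Central-difference approximation of the gradient of f."""
    g = np.zeros_like(x, dtype=float)
    for i in range(len(x)):
        e = np.zeros_like(x, dtype=float)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2*h)
    return g

x0 = np.array([-1.1, 1.1])
print(dfx(x0))                 # analytic gradient: [-52.6, -22.]
print(fd_grad(x0))             # should agree to several digits
print((f(x0)/dfx(x0)).shape)   # (2,): an elementwise quotient, not a Newton step
```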
And the initial conditions. Note that alpha and beta are not used in the Newton's method version:
x0, t0, alpha, beta, count = np.array([-1.1, 1.1]), 1, .15, .7, 1
magnitude = mag(np.array([dfx1(x0), dfx2(x0)]))
Calling the function:
xvalues, gradvalues, fvalues, iterations = newton(x0, t0, count, magnitude)
This produces very strange results. Here are the first 10 iterations of the x values, the gradient values, and the objective values at the corresponding x inputs:
[array([-1.1, 1.1]), array([-0.99315589, 1.35545455]), array([-1.11651296, 1.11709035]), array([-1.01732476, 1.35478987]), array([-1.13070578, 1.13125051]), array([-1.03603697, 1.35903467]), array([-1.14368874, 1.14364506]), array([-1.05188162, 1.36561528]), array([-1.15600558, 1.15480705]), array([-1.06599492, 1.37360245])]
[array([-52.6, -22. ]), array([142.64160215, 73.81918332]), array([-62.07323963, -25.90216846]), array([126.11789251, 63.96803995]), array([-70.85773749, -29.44900758]), array([114.31050737, 57.13241151]), array([-79.48668009, -32.87577304]), array([104.93863096, 51.83206539]), array([-88.25737032, -36.308371 ]), array([97.03403558, 47.45145765])]
[5.620000000000003, 17.59584998020613, 6.156932949106968, 14.29937453260906, 6.7080172227439725, 12.305727666787176, 7.297442528545537, 10.926625703722639, 7.944104584786208, 9.89743708419569]
And the final output:
final_value = print('Final set of x values: ', xvalues[-1])
final_grad = print('Final gradient values: ', gradvalues[-1])
final_f = print('Final value of the object function with optimized inputs: ', fvalues[-1])
final_grad_mag = print('Final magnitude of the gradient with optimized inputs: ', mag(np.array([dfx1(xvalues[-1]), dfx2(xvalues[-1])])))
total_iterations = print('Total iterations: ', iterations)
A 3D plot of the iterates is shown here. The plotting code:
x = np.array([i[0] for i in xvalues])
y = np.array([i[1] for i in xvalues])
z = np.array(fvalues)
fig = plt.figure()
ax = fig.gca(projection='3d')
ax.scatter(x, y, z, label='Newton Method')
ax.legend()
Is this happening because the initial guess is already very close to the optimum, or is there a bug in my algorithm that I am not seeing? Any advice would be appreciated. It looks like the solution might even be oscillating, but it is hard to tell.
I think I have found part of the problem: I was using the wrong Newton update. Before, I was using:
x_{k+1} = x_k - f(x_k)/∇f(x_k)
The correct update is:
x_{k+1} = x_k - [f''(x_k)]^{-1} f'(x_k)
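One way to verify the corrected update is to compute a single step by hand at the starting point x0 = (-1.1, 1.1). This sketch uses `np.linalg.solve` on the analytic gradient and Hessian of the objective above, rather than forming an explicit inverse:

```python
import numpy as np

dfx = lambda x: np.array([-400*x[0]*x[1] + 400*x[0]**3 + 2*x[0] - 2,
                          200*(x[1] - x[0]**2)])
hess = lambda x: np.array([[-400*x[1] + 1200*x[0]**2 + 2, -400*x[0]],
                           [-400*x[0], 200.0]])

x0 = np.array([-1.1, 1.1])
step = np.linalg.solve(hess(x0), dfx(x0))  # solves H d = grad; more stable than inv(H) @ grad
x1 = x0 - step
print(x1)  # approximately [-1.0087, 1.0091]
```

This matches the first iterate of the Matlab run later in the post, so the update itself is sound.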
When I make this change the results are still wrong, but slightly better. The new functions are here:
f = lambda x: 100*np.square(x[1]-np.square(x[0])) + np.square((1-x[0]))
dfx1 = lambda x: -400*x[0]*x[1]+400*np.power(x[0],3)+2*x[0]-2
dfx2 = lambda x: 200*(x[1]-np.square(x[0]))
dfx = lambda x: np.array([-400*x[0]*x[1]+400*np.power(x[0],3)+2*x[0]-2, 200*(x[1]-np.square(x[0]))])
dfx11 = lambda x: -400*(x[1])+1200*np.square(x[0])+2
dfx12 = lambda x: -400*x[0]
dfx21 = lambda x: -400*x[0]
dfx22 = lambda x: 200
hessian = lambda x: np.array(([dfx11(x0), dfx12(x0)], [dfx21(x0), dfx22(x0)]))
inv_hessian = lambda x: inv(np.array(([dfx11(x0), dfx12(x0)], [dfx21(x0), dfx22(x0)])))
def newton(x, t, count, magnitude):
    xvalues = []
    gradvalues = []
    fvalues = []
    temp = x - (inv_hessian(x).dot(dfx(x)))
    while count < 25:
        xvalues.append(x)
        gradvalues.append(dfx(x))
        fvalues.append(f(x))
        temp = x - (inv_hessian(x).dot(dfx(x)))
        x = temp
        magnitude = mag(dfx(x))
        count += 1
        if count > 100:
            break
    return xvalues, gradvalues, fvalues, count
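Note that part of the problem in this version is visible in the `hessian` and `inv_hessian` lambdas above: they evaluate at the fixed starting point `x0` rather than at their argument `x`, so every iteration reuses the Hessian from the initial guess. The final working code later in the post evaluates at `x`. A self-contained sketch of the corrected lambdas:

```python
import numpy as np
from numpy.linalg import inv

dfx11 = lambda x: -400*x[1] + 1200*np.square(x[0]) + 2
dfx12 = lambda x: -400*x[0]
dfx21 = lambda x: -400*x[0]
dfx22 = lambda x: 200
# Evaluate at the argument x, not at the global x0:
hessian = lambda x: np.array([[dfx11(x), dfx12(x)], [dfx21(x), dfx22(x)]])
inv_hessian = lambda x: inv(hessian(x))

print(hessian(np.array([1.0, 1.0])))  # [[802., -400.], [-400., 200.]]
```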
The closest the solution gets to converging is after the first step, when it reaches (-1.05, 1.1). After that it still diverges. I have never used Newton's method before, so I am not sure whether this is as accurate as the algorithm is supposed to be.
I am now sure something is wrong with the Python code. I decided to implement the algorithm in Matlab, and it seems to work fine. Here is the code:
clear; clc;
x = [-1.1, 1.1]';
t = 1;
count = 1;
xvalues = [];
temp = x - inv([(-400*x(2)+1200*x(1)^2+2), -400*x(1); -400*x(1), 200]);
disp(x - inv([(-400*x(2)+1200*x(1)^2+2), -400*x(1); -400*x(1), 200])*[-400*x(1)*x(2)+400*x(1)^3+2*x(1)-2; 200*(x(2)-x(1)^2)])
while count < 10
    xvalues(count,:) = x;
    temp = x - inv([(-400*x(2)+1200*x(1)^2+2), -400*x(1); -400*x(1), 200]) * [-400*x(1)*x(2)+400*x(1)^3+2*x(1)-2; 200*(x(2)-x(1)^2)];
    x = temp;
    count = count + 1;
end
disp(xvalues)
Output:
-1.1000 1.1000
-1.0087 1.0091
-0.2556 -0.5018
-0.2446 0.0597
0.9707 -0.5348
0.9708 0.9425
1.0000 0.9991
1.0000 1.0000
1.0000 1.0000
I finally figured out what was going on. It came down to the data structures Python was storing my variables in, so I cast all the values to 'float32' and initialized the variables being iterated. The working code is here:
Note: lambda functions are anonymous functions, useful for one-line expressions.
f = lambda x: 100*np.square(x[1]-np.square(x[0])) + np.square((1-x[0]))
dfx = lambda x: np.array([-400*x[0]*x[1]+400*np.power(x[0],3)+2*x[0]-2, 200*(x[1]-np.square(x[0]))], dtype='float32')
dfx11 = lambda x: -400*(x[1])+1200*np.square(x[0])+2
dfx12 = lambda x: -400*x[0]
dfx21 = lambda x: -400*x[0]
dfx22 = lambda x: 200
hessian = lambda x: np.array([[dfx11(x), dfx12(x)], [dfx21(x), dfx22(x)]], dtype='float32')
inv_hessian = lambda x: inv(hessian(x))
mag = lambda x: math.sqrt(sum(i**2 for i in x))
def newton(x, t, count, magnitude):
    xvalues = []
    gradvalues = []
    fvalues = []
    temp = np.zeros((2, 1))
    while magnitude > .000005:
        xvalues.append(x)
        gradvalues.append(dfx(x))
        fvalues.append(f(x))
        deltaX = np.array(np.dot(-inv_hessian(x), dfx(x)))
        temp = np.array(x + t*deltaX)
        x = temp
        magnitude = mag(deltaX)
        count += 1
    return xvalues, gradvalues, fvalues, count
x0, t0, alpha, beta, count = np.array([[-1.1], [1.1]]), 1, .15, .7, 1
xvalues, gradvalues, fvalues, iterations = newton(x0, t0, count, magnitude)
final_value = print('Final set of x values: ', xvalues[-1])
final_grad = print('Final gradient values: ', gradvalues[-1])
final_f = print('Final value of the object function with optimized inputs: ', fvalues[-1])
final_grad_mag = print('Final magnitude of the gradient with optimized inputs: ', mag(np.array([dfx1(xvalues[-1]), dfx2(xvalues[-1])])))
total_iterations = print('Total iterations: ', iterations)
print(xvalues)
Output:
Final set of x values: [[0.99999995]
[0.99999987]]
Final gradient values: [[ 9.1299416e-06]
[-4.6193604e-06]]
Final value of the object function with optimized inputs: [5.63044182e-14]
Final magnitude of the gradient with optimized inputs: 1.02320249276675e-05
Total iterations: 9
[array([[-1.1],
[ 1.1]]), array([[-1.00869558],
[ 1.00913081]]), array([[-0.25557778],
[-0.50186648]]), array([[-0.24460602],
[ 0.05971173]]), array([[ 0.97073805],
[-0.53472879]]), array([[0.97083687],
[0.94252417]]), array([[0.99999957],
[0.99914868]]), array([[0.99999995],
[0.99999987]])]
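For reference, the whole thing can be condensed into a self-contained script. This is a sketch of the same algorithm; the 1e-6 tolerance, the 50-iteration cap, and the use of `np.linalg.solve` instead of an explicit inverse are my choices, not from the post:

```python
import numpy as np

f = lambda x: 100*(x[1]-x[0]**2)**2 + (1-x[0])**2
grad = lambda x: np.array([-400*x[0]*(x[1]-x[0]**2) + 2*(x[0]-1),
                           200*(x[1]-x[0]**2)])
hess = lambda x: np.array([[-400*x[1] + 1200*x[0]**2 + 2, -400*x[0]],
                           [-400*x[0], 200.0]])

def newton(x, tol=1e-6, max_iter=50):
    """Pure Newton iteration; returns the list of iterates."""
    xs = [x]
    for _ in range(max_iter):
        step = np.linalg.solve(hess(x), grad(x))  # H d = grad, no explicit inverse
        x = x - step
        xs.append(x)
        if np.linalg.norm(step) < tol:
            break
    return xs

xs = newton(np.array([-1.1, 1.1]))
print(xs[-1], len(xs) - 1)  # converges to approximately [1., 1.]
```

Using float64 throughout (NumPy's default) and solving the linear system each step sidesteps both the dtype issue and the stale-Hessian issue at once.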