Iterative solution for achieving convergence

I am trying to compute n in the equation f = n^2 iteratively, so that f approaches 3 from below, accurate to 3 decimal places. To illustrate, I wrote the code below, which successfully does what I want. One problem with this code is that in the real case I launch an external script instead of f = n^2, and with this method the whole iteration becomes so slow that it is unusable: it takes far too many loop cycles to converge to the given accuracy. So I am looking for a faster way to achieve convergence, where by "faster" I mean converging in fewer cycles. A binary search algorithm was recommended to me; could you help create it for this small example f = n^2?

Here is my code, which demonstrates what I want:

n = 1; % initial value
f=n^2; % function to calculate
tol = 3;   % tolerance
accuracy = 0.0000001; % accuracy

while f <= tol
    n = n+accuracy;
    f=n^2;
end
n = n-accuracy;
f=n^2;
disp(['n = ' num2str(n,10)])
disp(['f = ' num2str(f,10)])

Update: here is my attempt at setting up the function and its derivative for the Newton-Raphson algorithm. Unfortunately, it is probably not correct, so I would like to ask for your help.

n = 7.76; % initial guess
f_error = [7.73 9 12.404 7.659 8.66];
df_error = diff(f_error)
xNew = n - f_error./df_error

Newton-Raphson method:

f_error(x) = x^(1/2) - N
f'_error(x) = (1/2)*x^(-1/2)
xNew = x - f_error(x)/f'_error(x)
repeat:
xNew = x - (x^(1/2) - N) / ((1/2)*x^(-1/2))

This approaches the target value far faster than trying to scan through every representable floating-point value in between:

n = 3; % initial guess
nNew = n;
val = 3; % value
f=val^2; % function to calculate

accuracy = 0.0001; % accuracy
error = 10000;
count = 0;

while abs(error) > accuracy
    nNew=n-(n^(1/2) - val)/((1/2)*n^(-1/2));
    error = nNew-n
    n=nNew
    count = count +1
end

disp(['c = ' num2str(count,10)])
disp(['n = ' num2str(n,10)])
disp(['f = ' num2str(f,10)])

Output:

c = 5 <--- only 5 iterations
n = 9
f = 9

Your method could not finish the computation within the time limit of the https://octave-online.net server:

n = 1; % initial value
f=n^2; % function to calculate
tol = 3;   % tolerance
accuracy = 0.0000001; % accuracy
count = 0;

while f <= tol
    n = n+accuracy;
    f=n^2;
    count = count+1;
end
n = n-accuracy;
f=n^2;
disp(['c = ' num2str(count,10)])
disp(['n = ' num2str(n,10)])
disp(['f = ' num2str(f,10)])

Output:

!!! OUT OF TIME !!!
c = 847309
n = 1.0847309
f = 1.176641126

It moved only 0.1766 in 5 seconds, so it would need at least 1-2 minutes to finish, while the Newton-Raphson method needs only 5 iterations. The n = n + accuracy approach also fails for large n values because of rounding error: if the next binary-representable floating-point value after n is greater than n + accuracy, the loop never advances and runs forever.
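
The binary search (bisection) that was suggested to you would also avoid the scan: it halves a bracketing interval every cycle, so reaching 1e-7 accuracy on an interval of width 1 takes roughly log2(1e7), about 24 iterations. Here is a minimal sketch under my own assumptions (the starting bracket [1, 2] and the hard-coded target 3 are mine, not taken from your script):

% bisection (binary search) sketch: find n so that f = n^2 approaches 3 from below
lo = 1;                 % assumed lower bound: 1^2 = 1 is below 3
hi = 2;                 % assumed upper bound: 2^2 = 4 is above 3
tol = 3;                % target value, named as in your script
accuracy = 0.0000001;   % same accuracy as the scanning loop
count = 0;

while (hi - lo) > accuracy
    mid = (lo + hi)/2;
    if mid^2 <= tol     % f(mid) is still at or below the target
        lo = mid;       % keep the lower half-bracket so n approaches from below
    else
        hi = mid;
    end
    count = count + 1;
end

n = lo;
f = n^2;
disp(['c = ' num2str(count,10)])
disp(['n = ' num2str(n,10)])
disp(['f = ' num2str(f,10)])

Bisection only needs the sign of mid^2 - tol, so it also works with a black-box f as long as you can bracket the solution, but Newton-Raphson still needs fewer calls.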


The Newton-Raphson method above uses a square root. Many x86 CPUs have a hardware instruction for sqrt, so it is most likely fast, but you can also compute the square root itself with another Newton-Raphson iteration:

f_error(x) = x^2 - N
f'_error(x) = 2*x
xNew = x - f_error(x)/f'_error(x)
xNew = x - (1/2)*x + (1/2)*(N/x) = (1/2)*(x + N/x)
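
A minimal sketch of this square-root iteration (the loop structure and the example value N = 3 are my additions):

N = 3;                      % value whose square root is wanted
x = N;                      % simple initial guess
for k = 1:50
    xNew = 0.5*(x + N/x);   % the update derived above
    if abs(xNew - x) < 1e-12
        break               % converged
    end
    x = xNew;
end
disp(['sqrt(' num2str(N) ') ~= ' num2str(x,15)])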

But it still contains the division N/x. If the CPU has no division, you can use yet another Newton-Raphson iteration for the reciprocal:

f_error(x) = 1/x - N
f'_error(x) = -1/(x*x)
xNew = x - f_error(x)/f'_error(x)
xNew = x*(2-N*x)
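
A minimal sketch of this reciprocal iteration (again my own loop; note that the initial guess has to lie between 0 and 2/N, otherwise it diverges):

N = 3;                      % value whose reciprocal is wanted
x = 0.3;                    % rough initial guess, assumed to be inside (0, 2/N)
for k = 1:50
    xNew = x*(2 - N*x);     % only multiplications and a subtraction
    if abs(xNew - x) < 1e-15
        break               % converged
    end
    x = xNew;
end
disp(['1/' num2str(N) ' ~= ' num2str(x,15)])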

Applying this to the other two methods gives a fully multiplication-based version, but it would probably be slower, because the iterations get nested three levels deep and the total work grows to hundreds of iterations.


Edit: for an unknown f function running on a remote server, you can use a simple two-point derivative. This is the simplest discrete derivative and has a larger error than the 3-point, 5-point, etc. variants:

% remote server function that has unknown algorithm
% so only numerical derivative will be used
function result = blackBoxFunction(x)
    result = x^(1/2); 
end

% numerical (discrete two-point 1-dimensional) derivative
% beware: the error will be h^2 so you should keep h small
% add more points (with coefficients) for h^3 or better error
function result = twoPointDerivative(x,y,h)
    result = (y-x)/(2*h); 
end

n = 3; % initial guess
nNew = n;
val = 3; % value
f=val^2; % function to calculate

accuracy = 0.0001; % accuracy
error = 10000;
count = 0;

% Newton-Raphson with a blackbox function
% f_error(n) = blackBoxFunction(n) - val
% f'_error(n) = derivative_of_blackbox(n)
% nNew = n - f_error(n)/f'_error(n)
% repeat:
% nNew = n - (blackBoxFunction(n)-val)/derivative_of_blackbox(n) 

while abs(error) > accuracy
    point1 = blackBoxFunction(n-accuracy)
    point2 = blackBoxFunction(n+accuracy)
    nNew=n-(blackBoxFunction(n)-val)/twoPointDerivative(point1,point2,accuracy);
    error = nNew-n
    n=nNew
    count = count +1
end

disp(['c = ' num2str(count,10)])
disp(['n = ' num2str(n,10)])
disp(['f = ' num2str(f,10)])

Output:

c = 5
n = 9
f = 9

If you need fewer than 5 iterations, you should start from a better initial guess, for example a second-order polynomial approximation of x^2. (For the x-square problem here the answer is obviously c1 + c2*x + c3*x*x with c1 = 0, c2 = 0, c3 = 1, but in general you can use series expansions, genetic algorithms, and other heuristics to find a set of coefficients that minimizes the number of Newton-Raphson iterations over the whole input range, not just for the value 3.)

For example, if you have cos(x) as the black-box function and you want to reach acos(x), you can make effective use of a polynomial initial guess:

% remote server function that has unknown algorithm
% so only numerical derivative will be used
function result = blackBoxFunction(x)
    result = cos(x); 
end

% numerical (discrete two-point 1-dimensional) derivative
% beware: the error will be h^2 so you should keep h small
% add more points (with coefficients) for h^3 or better error
function result = twoPointDerivative(x,y,h)
    result = (y-x)/(2*h); 
end

val = 0.04; % value that is the input of this problem: what is acos(0.04) ??
n = 1 - (val^2)/2 + (val^4)/24; % initial guess by polynomial
nNew = n;
f=acos(val); % function to calculate

accuracy = 0.0001; % accuracy
error = 10000;
count = 0;

% Newton-Raphson with a blackbox function
% f_error(n) = blackBoxFunction(n) - val
% f'_error(n) = derivative_of_blackbox(n)
% nNew = n - f_error(n)/f'_error(n)
% repeat:
% nNew = n - (blackBoxFunction(n)-val)/derivative_of_blackbox(n) 

while abs(error) > accuracy
    point1 = blackBoxFunction(n-accuracy)
    point2 = blackBoxFunction(n+accuracy)
    nNew=n-(blackBoxFunction(n)-val)/twoPointDerivative(point1,point2,accuracy);
    error = nNew-n
    n=nNew
    count = count +1
end

disp(['c = ' num2str(count,10)])
disp(['n = ' num2str(n,10)])
disp(['f = ' num2str(f,10)])

Output:

c = 3 <-- 2 fewer iterations !!
n = 1.530785652
f = 1.530785652

With a five-point derivative:

% remote server function that has unknown algorithm
% so only numerical derivative will be used
function result = blackBoxFunction(x)
    result = cos(x); 
end

% numerical (discrete five-point 1-dimensional) derivative
% beware: the error will be h^4 so you should keep h small
% add more points (with coefficients) for h^5 or better error
function result = fivePointDerivative(x,h,f)
    result = (f(x-2*h)-8*f(x-h)+8*f(x+h)-f(x+2*h))/(12*h); 
end

val = -1; % value that is the input of this problem: what is acos(-1) ?? (it is PI)
n = 1 - (val^2)/2 + (val^4)/24; % initial guess by polynomial
nNew = n;
f=acos(val); % function to calculate

accuracy = 0.00000000001; % accuracy
error = 1;
count = 0;

% Newton-Raphson with a blackbox function
% f_error(n) = blackBoxFunction(n) - val
% f'_error(n) = derivative_of_blackbox(n)
% nNew = n - f_error(n)/f'_error(n)
% repeat:
% nNew = n - (blackBoxFunction(n)-val)/derivative_of_blackbox(n) 
h_diff=0.0000001
while abs(error) > accuracy

    nNew=n-(blackBoxFunction(n)-val)/fivePointDerivative(n,h_diff,@blackBoxFunction);
    error = nNew-n
    n=nNew
    count = count +1
end

disp(['c = ' num2str(count,15)])
disp(['n = ' num2str(n,15)])
disp(['f = ' num2str(f,15)])

Output:

c = 29
n = 3.14159265683019
f = 3.14159265358979
              ^ 3e-9 error

Also, if the black-box function is very expensive, this simple numerical derivative becomes slow, but you can still look into non-uniform discrete-derivative methods that re-use black-box outputs from earlier iterations to minimize the run-time.
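
For example, the secant method re-uses the previous residual, so each iteration costs only one new black-box call instead of the extra calls spent on the numerical derivative. A minimal sketch (my own variable names and starting guesses; it reuses the cos black box and the acos(0.04) target from above):

blackBox = @(x) cos(x);               % same black-box example as above
val = 0.04;                           % target: solve cos(n) = 0.04
n0 = 1; e0 = blackBox(n0) - val;      % first guess and its stored residual
n1 = 2; e1 = blackBox(n1) - val;      % second guess and its stored residual
accuracy = 0.0001;
count = 0;

while abs(n1 - n0) > accuracy
    nNew = n1 - e1*(n1 - n0)/(e1 - e0);   % secant step: slope from the stored residuals
    n0 = n1;  e0 = e1;                    % shift the stored pair
    n1 = nNew;
    e1 = blackBox(n1) - val;              % only one new black-box call per cycle
    count = count + 1;
end

disp(['c = ' num2str(count,10)])
disp(['n = ' num2str(n1,10)])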