Find the root of a multivariable equation using scipy.optimize

I have the following functions:

import numpy as np
import scipy.optimize as optimize

def x(theta1, theta2, w, h, L1, L2):
    sint1 = np.sin(theta1)
    cost1 = np.cos(theta1)
    sint2 = np.sin(theta2)
    cost2 = np.cos(theta2)

    i1 = L1 * (cost1 + cost2) + w
    j1 = L1 * (sint1 - sint2) - h
    D = np.sqrt((L1*(cost2-cost1)+w)**2+(L1*(sint2-sint1)+h)**2)
    a = (0.25)*np.sqrt((4*L2**2-D**2)*D**2)

    return i1/2 + 2*j1*a/(D**2)

def y(theta1, theta2, w, h, L1, L2):
    sint1 = np.sin(theta1)
    cost1 = np.cos(theta1)
    sint2 = np.sin(theta2)
    cost2 = np.cos(theta2)

    i2 = L1 * (sint1 + sint2) + h
    j2 = L1 * (cost1 - cost2) - w
    D = np.sqrt((L1*(cost2-cost1)+w)**2+(L1*(sint2-sint1)+h)**2)
    a = (0.25)*np.sqrt((4*L2**2-D**2)*D**2)

    return i2/2 - 2*j2*a/(D**2)

def det_jacobiano(theta, w, h, L1, L2,eps):
    theta1,theta2 = theta
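    # finite-difference approximations of the partial derivatives of x and y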
    dxdt1 = (-x(theta1+eps, theta2, w, h, L1, L2)+4*x(theta1, theta2, w, h, L1, L2)-3*x(theta1-eps, theta2, w, h, L1, L2))/(2*eps)
    dxdt2 = (-x(theta1, theta2+eps, w, h, L1, L2)+4*x(theta1, theta2, w, h, L1, L2)-3*x(theta1, theta2-eps, w, h, L1, L2))/(2*eps)
    dydt1 = (-y(theta1+eps, theta2, w, h, L1, L2)+4*y(theta1, theta2, w, h, L1, L2)-3*y(theta1-eps, theta2, w, h, L1, L2))/(2*eps)
    dydt2 = (-y(theta1, theta2+eps, w, h, L1, L2)+4*y(theta1, theta2, w, h, L1, L2)-3*y(theta1, theta2-eps, w, h, L1, L2))/(2*eps)  
    return dxdt1*dydt2 - dxdt2*dydt1

I want to find the values of theta1 and theta2 that make det_jacobiano zero. As you can see, det_jacobiano evaluates the functions x and y.

When I try to find the root with scipy.optimize:

initial_guess = [2.693, 0.4538]
result = optimize.root(det_jacobiano, initial_guess,tol=1e-8,args=(20,0,100,100,1e-10),method='lm')

I get the error: TypeError: Improper input: N=2 must not exceed M=1

Root finding is the numerical-analysis equivalent of solving a system of equations. The same basic constraint applies: you need as many equations as you have unknowns.

All of the root-finding routines in scipy expect the first parameter to be a function of N variables that returns N values; in essence, that first parameter is meant to be the equivalent of a system of N equations in N unknowns. Your problem is that det_jacobiano takes 2 variables but returns only a single value, so the error is complaining that you have 2 unknowns (N=2) but only 1 equation (M=1).
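For contrast, here is a minimal sketch of the shape optimize.root does accept (a toy 2-equation system, unrelated to your Jacobian): two unknowns in, two residuals out.

import numpy as np
import scipy.optimize as optimize

def residuals(theta):
    t1, t2 = theta
    # two equations in two unknowns, so root() gets N=2 values for N=2 variables
    return [np.cos(t1) - t2, t1 - np.sin(t2)]

sol = optimize.root(residuals, [1.0, 1.0], method='lm')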

With your current formulation you cannot use a root-finding method, but you can still minimize. Change the last line of det_jacobiano to:

return np.abs(dxdt1*dydt2 - dxdt2*dydt1)
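(If you would rather not edit det_jacobiano itself, a thin wrapper does the same job; abs_det below is just a name I made up for this sketch, and you would pass it to minimize in place of det_jacobiano.)

def abs_det(theta, w, h, L1, L2, eps):
    # |det J| is zero exactly where det J is zero, so minimizing it finds the same points
    return np.abs(det_jacobiano(theta, w, h, L1, L2, eps))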

Then use optimize.minimize:

result = optimize.minimize(det_jacobiano, initial_guess, tol=1e-8, args=(20,0,100,100,1e-10), method='Nelder-Mead')

Output:

 final_simplex: (array([[ 1.47062275, -3.46178428],
       [ 1.47062275, -3.46178428],
       [ 1.47062275, -3.46178428]]), array([ 0.,  0.,  0.]))
           fun: 0.0
       message: 'Optimization terminated successfully.'
          nfev: 330
           nit: 137
        status: 0
       success: True
             x: array([ 1.47062275, -3.46178428])

result.fun holds the final minimized value (which is indeed 0.0, as you wanted), and result.x holds the values of theta1 and theta2 that produce that 0.0.
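As a quick sanity check (a sketch, assuming the modified det_jacobiano from above is in scope), you can evaluate it at result.x and confirm it is effectively zero:

theta_opt = result.x
print(result.fun)                                         # 0.0, the minimized |det J|
print(det_jacobiano(theta_opt, 20, 0, 100, 100, 1e-10))   # should also be ~0 at the minimizer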