Performing scipy.optimize.minimize with Martin Eastwood's MPE formula
I am having a hard time performing scipy.optimize.minimize with Martin Eastwood's interpolation formula:
z = (x^w1 / (x^w2 + y^w3)) * w4 * 17
(the multiplier is 16 instead of 17 where x[3], x[4], x[16] and x[18] enter the formula)
My data set (17/12/12 preml.ge)
x=np.array([33,43,28,26,28,30,26,24,15,21,23,28,19,18,19,22,15,19,18,15])
y=np.array([15,24,17,16,21,25,22,21,13,20,23,29,25,24,26,32,24,31,32,30])
z=np.array([36,42,29,24,27,29,23,27,24,23,22,20,25,16,17,15,18, 9,15,10])
data=np.array([x, y, z])
Ten years ago, Martin Eastwood (an avid blogger) found:
w1=1.122777, w2=1.072388, w3=1.127248, w4=2.499973
where RMSE=3.657522858 for my problem.
What I would like to know is which method I can use to obtain w-parameters like these for estimating the dependence above, but such a method has been hard for me to find. I need your help.
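For reference, what I have in mind is minimizing a single scalar error (for example the RMSE) over w1..w4. The sketch below uses the x, y, z arrays above; the objective, the factor 17 and the starting guess are my own assumptions, not Eastwood's procedure:

import numpy as np
import scipy.optimize

# Sketch: fit w1..w4 by minimizing the RMSE of the model from the question.
# Assumes the x, y, z arrays defined above; the start point (1, 1, 1, 1) is arbitrary.
def rmse(w):
    pred = (x**w[0] / (x**w[1] + y**w[2])) * w[3] * 17
    return np.sqrt(np.mean((z - pred)**2))

res = scipy.optimize.minimize(rmse, x0=(1, 1, 1, 1), method='Nelder-Mead')
print(res.x, res.fun)  # fitted parameters and the corresponding RMSE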
Observation 1
Using least-squares is better, because the method looks at the deviation of each sample individually rather than only at the final sum.
import scipy.optimize
import numpy as np
import matplotlib.pyplot as plt
x=np.array([33,43,28,26,28,30,26,24,15,21,23,28,19,18,19,22,15,19,18,15])
y=np.array([15,24,17,16,21,25,22,21,13,20,23,29,25,24,26,32,24,31,32,30])
z=np.array([36,42,29,24,27,29,23,27,24,23,22,20,25,16,17,15,18, 9,15,10])
pred = lambda w: (x**w[0]/(x**w[1]+y**w[2]))*w[3]  # note: the factor 17 from the question is not included here
w_given = 1.122777, 1.072388, 1.127248, 2.499973
w,_ = scipy.optimize.leastsq(lambda w: (z - pred(w)), (1,1,1,1))       # fit from a naive starting guess
w_guided,_ = scipy.optimize.leastsq(lambda w: (z - pred(w)), w_given)  # fit starting from the given w
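As an aside (not part of the original code), the same residual-based fit can be written with the newer scipy.optimize.least_squares interface, which also supports parameter bounds and robust loss functions; a minimal sketch:

# Sketch: the same residual vector, fitted via scipy.optimize.least_squares.
res = scipy.optimize.least_squares(lambda w: z - pred(w), x0=(1, 1, 1, 1))
w_alt = res.x  # should land close to the w fitted above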
Let's visualize the fits:
plt.plot(z, pred(w), '.')
# 17 introduced here arbitrarily
plt.plot(z, pred(w_given)*17, '+')
plt.plot(z, pred(w_guided), '+')
plt.plot(np.sort(z), np.sort(z), '--')
plt.legend(['dumb guess', 'given w (scaled)', 'init with given w', 'target'])
Check that the fitted results are better than the initial guess (sanity check):
(np.mean((z - pred(w))**2),
np.mean((z - pred(w_guided))**2),
np.mean((z - pred(w_given)*17)**2),
np.mean((z - pred(w_given)*16)**2))
(10.987132120174204,
10.987132121290418,
15.064715846376691,
17.341093598858798)
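To relate these mean-squared errors to the RMSE = 3.657522858 quoted in the question, take square roots (a quick check, assuming the same definition of RMSE):

# Square roots of the mean-squared errors above.
print(np.sqrt(10.987132120174204))   # ~3.31, fitted from the naive start
print(np.sqrt(10.987132121290418))   # ~3.31, fitted starting from the given w
print(np.sqrt(15.064715846376691))   # ~3.88, given w scaled by 17
print(np.sqrt(17.341093598858798))   # ~4.16, given w scaled by 16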