Gradient Descent - Difference between theta as a list and as a numpy array

I have implemented a gradient descent algorithm that produces different results depending on whether my theta is a list or a numpy array: the program works correctly when theta is a Python list, but with theta = np.zeros((2, 1)) it goes wrong and my theta grows extremely fast.

num_iter = 1500
alpha = 0.01
theta = [0, 0]
#theta = np.zeros((2, 1), dtype=np.float64)
print(theta)
def gradient_descent(x, y, theta, alpha, iteration):
    m = y.size
    temp = np.zeros_like(theta, np.float64)
    for i in range(iteration):
        h = x @ theta
        temp[0] = (alpha/m)*(np.sum(h - y))
        temp[1] = (alpha/m)*(np.sum((h - y)*x[:,1]))
        theta[0] -= temp[0]
        theta[1] -= temp[1]

        print("theta0 {}, theta1 {}, Cost {}".format(theta[0], theta[1], compute_cost(x, y, theta)))
    return theta


theta = gradient_descent(X, y, theta, alpha, num_iter)

Output when theta is a numpy array:

theta0 [5.663961], theta1 [63.36898425], Cost 15846739.108595487
theta0 [-495.73201075], theta1 [-4010.76967073], Cost 65114528414.94523
theta0 [31736.05800912], theta1 [259011.3427287], Cost 271418872442062.44
.
.
.
theta0 [nan], theta1 [nan], Cost nan
theta0 [nan], theta1 [nan], Cost nan
theta0 [nan], theta1 [nan], Cost nan

Output when theta is a list:

theta0 0.05839135051546392, theta1 0.6532884974555672, Cost 6.737190464870008
theta0 0.06289175271039384, theta1 0.7700097825599365, Cost 5.9315935686049555
.
.
.
theta0 -3.6298120050247746, theta1 1.166314185951815, Cost 4.483411453374869
theta0 -3.6302914394043593, theta1 1.166362350335582, Cost 4.483388256587725

Your two thetas have different shapes: theta = [0, 0] behaves like an array of shape (2,), while theta = np.zeros((2, 1)) has shape (2, 1). So if x has shape (n, 2), then x @ theta yields shape (n,) in the first case but (n, 1) in the second. Since y has shape (n,), h - y then broadcasts to an (n, n) matrix in the second case, so np.sum adds n² terms instead of n and the updates blow up.

For example,

t1 = [0,0]
t2 = np.zeros((2,1))
t3 = np.zeros((2,))
x = np.arange(6).reshape(3,2)

x @ t1
# array([0, 0, 0])

x @ t2
# array([[0.],
#        [0.],
#        [0.]]) 

x @ t3
# array([0., 0., 0.])
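To see why the (2, 1) shape makes the updates explode, look at what broadcasting does to h - y. A small sketch (the size-3 y is just illustrative):

```python
import numpy as np

y = np.arange(3.0)        # shape (3,)
h_col = np.zeros((3, 1))  # shape (3, 1), as produced by x @ np.zeros((2, 1))
h_flat = np.zeros(3)      # shape (3,), as produced by a list or (2,) theta

# (3, 1) - (3,) broadcasts to a full (3, 3) outer difference,
# so np.sum then adds 9 residual terms instead of 3.
print((h_col - y).shape)   # (3, 3)
print((h_flat - y).shape)  # (3,)  -- elementwise, as intended
```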

Changing to theta = np.zeros((2,)) is, I think, a quick fix.
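For completeness, here is a minimal runnable sketch of the loop with the (2,) fix applied. The toy data and the compute_cost helper are my own assumptions for illustration, since the originals aren't shown:

```python
import numpy as np

def compute_cost(x, y, theta):
    # Hypothetical cost helper: mean squared error with the usual 1/(2m) factor.
    m = y.size
    return np.sum((x @ theta - y) ** 2) / (2 * m)

def gradient_descent(x, y, theta, alpha, iterations):
    m = y.size
    for _ in range(iterations):
        h = x @ theta                        # shape (m,) when theta has shape (2,)
        grad0 = (alpha / m) * np.sum(h - y)
        grad1 = (alpha / m) * np.sum((h - y) * x[:, 1])
        theta[0] -= grad0
        theta[1] -= grad1
    return theta

# Toy data (an assumption, just to make the sketch runnable): y = 1 + 2*x1, no noise.
x1 = np.linspace(0, 1, 50)
X = np.column_stack([np.ones_like(x1), x1])
y = 1 + 2 * x1

theta = gradient_descent(X, y, np.zeros((2,)), alpha=0.1, iterations=5000)
print(theta)  # converges toward [1., 2.]
```

With theta of shape (2,), h - y stays shape (n,) and the cost decreases monotonically instead of diverging.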