Gradient Descent diverges, learning rate too high
I have the code below, which performs one step of gradient descent, but theta diverges. What is going wrong?
from numpy import arange, array
from numpy.random import uniform

X = arange(100)
Y = 50 + 4*X + uniform(-20, 20, X.shape)  # noisy samples of the line y = 50 + 4x
theta = array([0.0, 0.0])
alpha = 0.001

# one step of gradient descent on the mean squared error
theta0 = theta[0] - alpha * sum( theta[0]+theta[1]*x-y    for x,y in zip(X,Y))/len(X)
theta1 = theta[1] - alpha * sum((theta[0]+theta[1]*x-y)*x for x,y in zip(X,Y))/len(X)
theta = [theta0, theta1]
The learning rate is too high. X is not normalized (it runs from 0 to 99), so the theta1 gradient is scaled by values of order x^2 ~ 10^4. For this quadratic loss, gradient descent is stable only when alpha < 2/lambda_max of the Hessian [[1, mean(X)], [mean(X), mean(X^2)]], which here is roughly 6e-4. With alpha = 0.001 every step overshoots the minimum and theta grows without bound. Reducing the learning rate fixes it:

alpha = 0.0001
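
For completeness, here is a minimal sketch of running the same update in a loop with the smaller learning rate (the fixed seed, the vectorised gradient, and the iteration count are my choices, not part of the original question):

import numpy as np

rng = np.random.default_rng(0)              # fixed seed so the run is reproducible
X = np.arange(100)
Y = 50 + 4*X + rng.uniform(-20, 20, X.shape)

theta = np.array([0.0, 0.0])
alpha = 0.0001                              # the reduced learning rate

for _ in range(300_000):                    # many steps: the intercept direction converges slowly
    residual = theta[0] + theta[1]*X - Y    # prediction error, vectorised over all points
    grad = np.array([residual.mean(), (residual * X).mean()])
    theta = theta - alpha * grad

print(theta)                                # ends up near [50, 4]

Running the same loop with alpha = 0.001 instead makes theta grow geometrically until it overflows, matching the divergence you observed.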