Gradient descent vectorization
I have implemented this gradient descent in NumPy:
import numpy as np

def gradientDescent(X, y, theta, alpha, iterations):
    m = len(y)  # number of training examples
    for i in range(iterations):
        h = np.dot(X, theta)   # predictions for the current theta
        loss = h - y           # residuals
        theta = theta - (alpha / m) * np.dot(X.T, loss)  # update theta
    return theta
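For context, this is roughly how I call it (the shapes, sample values, and learning rate below are just an illustration, not my real data):

import numpy as np

# Synthetic example: X is an m x n design matrix with a bias column, y has length m
m, n = 100, 2
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(m), rng.random(m)])         # intercept column plus one feature
y = 3.0 + 2.0 * X[:, 1] + 0.1 * rng.standard_normal(m)  # noisy linear target
theta = np.zeros(n)                                      # initial parameters

theta = gradientDescent(X, y, theta, alpha=0.1, iterations=1000)
print(theta)  # should end up near [3.0, 2.0]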
While the rest of the code is fully vectorized, there is still a for loop that I cannot find a way to eliminate: since theta must be updated at every step, I don't see how to vectorize the loop itself or write it more efficiently.
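Written out, what each iteration computes is the recurrence

theta_(k+1) = theta_k - (alpha / m) * X^T (X theta_k - y)

so every iteration needs the theta produced by the one before it, which is why the iterations look inherently sequential to me.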
Thanks for the help.