Gradient descent vectorization

I have implemented this gradient descent in NumPy:

import numpy as np

def gradientDescent(X, y, theta, alpha, iterations):
    m = len(y)  # number of training examples

    for i in range(iterations):
        h = np.dot(X, theta)    # predictions for the current theta
        loss = h - y            # residuals
        theta = theta - (alpha / m) * np.dot(X.T, loss)  # update theta

    return theta

      

While the other parts of the code are fully vectorized, there is still a for loop that I find impossible to eliminate: since theta must be updated at every step, I don't see how I could vectorize it or write it more efficiently.

Thanks for the help.



1 answer


You cannot vectorize the for loop, because each iteration updates state that the next iteration depends on. Vectorization is applicable when each iteration can compute its result independently of the others; here, every gradient step needs the theta produced by the previous step, so the steps must run in sequence.
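To illustrate: the work *inside* each step is already fully vectorized, and only the step-to-step dependency remains sequential. Here is a minimal sketch with made-up data (the small X and y below are hypothetical, just for demonstration) showing that the per-step update is a single vectorized expression:

```python
import numpy as np

# Hypothetical toy data: y = x, so the fit should approach theta = [0, 1]
X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([1.0, 2.0, 3.0])
theta = np.zeros(2)
alpha, m = 0.1, len(y)

for _ in range(1000):
    # Fully vectorized within a step: one matrix product for all
    # predictions, one for the gradient. The loop itself cannot be
    # removed, because this theta feeds the next iteration.
    theta = theta - (alpha / m) * X.T.dot(X.dot(theta) - y)
```

The loop body touches every training example at once, so the Python-level loop runs only `iterations` times, not `iterations * m` times; that is usually all the speedup that is available here.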









