Gradient descent vectorization

I have implemented this gradient descent in NumPy:

import numpy as np

def gradientDescent(X, y, theta, alpha, iterations):
    m = len(y)

    for i in range(iterations):
        h =, theta)          # predictions for all m samples
        loss = h - y                   # residuals
        theta = theta - (alpha / m) *, loss)  # update theta

    return theta


While the rest of the code is fully vectorized, there is still a for loop that I cannot seem to eliminate. In particular, since theta has to be updated at every step, I don't see how to vectorize it or write it more efficiently.

Thanks for the help.



1 answer

You cannot vectorize the for loop, because each iteration depends on the theta produced by the previous one: it updates shared state. Vectorization applies when each element of the computation can produce an independent result, so the work can be done in one array operation instead of a Python loop. Here, only the work inside one iteration (the matrix-vector products over all m samples) can be vectorized, and it already is.

