How to vectorize the SVM loss

I would like to calculate the SVM loss without looping, but I can't figure out how. I need a little enlightenment.

L_i = \sum_{j \neq y_i} \max(0, S_j - S_{y_i} + 1)
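
For a single example, the formula just sums the hinge margins over the wrong classes. A minimal sketch with made-up scores (the numbers are purely illustrative):

import numpy as np

# Hypothetical scores for one example over 4 classes; class 2 is the correct one.
scores = np.array([3.2, 5.1, 2.8, 1.9])
y_i = 2

margins = scores - scores[y_i] + 1.0   # delta = 1
margins[y_i] = 0.0                     # skip the j == y_i term
loss_i = np.maximum(0, margins).sum()  # max(0,1.4) + max(0,3.3) + max(0,0.1) = 4.8

Here is my attempt so far:
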
import numpy as np

def svm_loss_vectorized(W, X, y, reg):
    loss = 0.0
    scores = np.dot(X, W)
    correct_scores = scores[y]
    deltas = np.ones(scores.shape)
    margins = scores - correct_scores + deltas

    margins[margins < 0] = 0  # max(0, .) via boolean array indexing
    margins[np.arange(scores.shape[0]), y] = 0  # don't count j == y_i
    loss = np.sum(margins)

    # Average over the training set
    num_train = X.shape[0]
    loss /= num_train

    # L2 regularization
    loss += 0.5 * reg * np.sum(W * W)
    return loss

It should output the same loss as the following function.

def svm_loss_naive(W, X, y, reg):
    num_classes = W.shape[1]
    num_train = X.shape[0]
    loss = 0.0

    for i in range(num_train):
        scores = X[i].dot(W)
        correct_class_score = scores[y[i]]
        for j in range(num_classes):
            if j == y[i]:
                continue
            margin = scores[j] - correct_class_score + 1 # note delta = 1
            if margin > 0:
                loss += margin
    loss /= num_train # mean
    loss += 0.5 * reg * np.sum(W * W) # l2 regularization
    return loss

1 answer


Here's a vectorized approach. The key step is picking out each row's correct-class score with integer-array indexing, scoresv[np.arange(N), y], and broadcasting it against the full score matrix:

delta = 1
N = X.shape[0]
M = W.shape[1]
scoresv = X.dot(W)
marginv = scoresv - scoresv[np.arange(N), y][:,None] + delta

mask0 = np.zeros((N,M),dtype=bool)
mask0[np.arange(N),y] = 1
mask = (marginv<0) | mask0
marginv[mask] = 0

loss_out = marginv.sum()/N # mean over the N training examples
loss_out += 0.5 * reg * np.sum(W * W) # l2 regularization

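That indexing step is also where the attempt in the question goes astray: with a 2D array, scores[y] selects whole rows ordered by the labels, whereas scores[np.arange(N), y] picks one correct-class entry per row. A minimal sketch with made-up numbers:

import numpy as np

scores = np.array([[3.2, 5.1, 2.8],
                   [1.0, 4.0, 2.0],
                   [0.5, 0.1, 3.0]])
y = np.array([2, 1, 0])   # correct class per example
N = scores.shape[0]

print(scores[y])
# rows 2, 1, 0 of the matrix: whole rows reordered by the labels -- not what we want

print(scores[np.arange(N), y])
# [2.8 4.  0.5] -- one correct-class score per example

print(scores[np.arange(N), y][:, None].shape)
# (3, 1) -- [:, None] adds an axis so the subtraction broadcasts row-wise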
Also, we could optimize np.sum(W * W) with np.tensordot, for example:

float(np.tensordot(W, W, axes=((0,1),(0,1))))
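
As a quick sanity check that both expressions compute the same squared Frobenius norm (W here is just a random stand-in):

import numpy as np

W = np.random.randn(3073, 10)  # random stand-in for the weight matrix
a = np.sum(W * W)
b = float(np.tensordot(W, W, axes=((0, 1), (0, 1))))
print(np.isclose(a, b))  # True: both give the squared Frobenius norm of W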

Runtime test

The proposed approach as a function is:

def svm_loss_vectorized_v2(W, X, y, reg):
    delta = 1
    N = X.shape[0]
    M = W.shape[1]
    scoresv = X.dot(W)
    marginv = scoresv - scoresv[np.arange(N), y][:,None] + delta

    mask0 = np.zeros((N,M),dtype=bool)
    mask0[np.arange(N),y] = 1
    mask = (marginv<=0) | mask0
    marginv[mask] = 0

    loss_out = marginv.sum()/N # mean over the N training examples
    loss_out += 0.5 * reg * float(np.tensordot(W,W,axes=((0,1),(0,1))))
    return loss_out

Timing:

In [86]: W= np.random.randn(3073,10)
    ...: X= np.random.randn(500,3073)
    ...: y= np.random.randint(0,10,(500))
    ...: reg = 4.56
    ...: 

In [87]: svm_loss_naive(W, X, y, reg)
Out[87]: 70380.938069371899

In [88]: svm_loss_vectorized_v2(W, X, y, reg)
Out[88]: 70380.938069371914

In [89]: %timeit svm_loss_naive(W, X, y, reg)
100 loops, best of 3: 10.2 ms per loop

In [90]: %timeit svm_loss_vectorized_v2(W, X, y, reg)
100 loops, best of 3: 2.94 ms per loop
