Element-wise addition of a scipy sparse matrix and a sparse vector with broadcasting

I am trying to figure out the best way to perform element-wise addition (and subtraction) of a sparse matrix and a sparse vector. I found this trick on SO:

import numpy as np
import scipy.sparse as sp

mat = sp.csc_matrix([[1,0,0],[0,1,0],[0,0,1]])
vec = sp.csr_matrix([[1,2,1]])
mat.data += np.repeat(vec.toarray()[0], np.diff(mat.indptr))

But unfortunately it only updates the stored (nonzero) values:

print(mat.todense())
[[2 0 0]
 [0 3 0]
 [0 0 2]]
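
The reason, as far as I can tell, is that mat.data only holds the explicitly stored entries, and np.diff(mat.indptr) gives the number of stored entries per column of the CSC matrix, so the repeated vector values line up with (and only ever touch) those stored entries:

import numpy as np
import scipy.sparse as sp

mat = sp.csc_matrix([[1,0,0],[0,1,0],[0,0,1]])
print(mat.data)             # [1 1 1] -- only the stored (nonzero) entries
print(mat.indptr)           # [0 1 2 3] -- column boundaries into data
print(np.diff(mat.indptr))  # [1 1 1] -- stored entries per column
# The implicit zeros never appear in mat.data, so they are never updated.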


The accepted answer on the SO thread:

from numpy.lib.stride_tricks import as_strided

def sum(X, v):
    rows, cols = X.shape
    # Overlapping length-2 windows over indptr: row i -> (indptr[i], indptr[i+1])
    row_start_stop = as_strided(X.indptr, shape=(rows, 2),
                                strides=2*X.indptr.strides)
    for row, (start, stop) in enumerate(row_start_stop):
        data = X.data[start:stop]
        data -= v[row]   # in-place view, so only stored entries are touched

sum(mat, vec.A[0])


It does the same thing. Unfortunately I'm out of ideas, so I was hoping you could help me find a better way to solve this.

EDIT: I expect it to do the same as the dense version:

np.eye(3) + np.asarray([[1,2,1]])
array([[ 2.,  2.,  1.],
       [ 1.,  3.,  1.],
       [ 1.,  2.,  2.]])


Thanks!



2 answers


Some tests with a 10x10 sparse mat and vec:

In [375]: mat=sparse.rand(10,10,.1) 
In [376]: mat
Out[376]: 
<10x10 sparse matrix of type '<class 'numpy.float64'>'
    with 10 stored elements in COOrdinate format>

In [377]: mat.A
Out[377]: 
array([[ 0.        ,  0.        ,  0.        ,  0.        ,  0.        ,
         0.        ,  0.        ,  0.        ,  0.        ,  0.        ],
       [ 0.        ,  0.        ,  0.        ,  0.        ,  0.        ,
         0.15568621,  0.59916335,  0.        ,  0.        ,  0.        ],
       ...
       [ 0.        ,  0.        ,  0.15552687,  0.        ,  0.        ,
         0.47483064,  0.        ,  0.        ,  0.        ,  0.        ]])

In [378]: vec=sparse.coo_matrix([0,1,0,2,0,0,0,3,0,0]).tocsr(); vec
Out[378]: 
<1x10 sparse matrix of type '<class 'numpy.int32'>'
    with 3 stored elements in Compressed Sparse Row format>


maxymoo's solution:

def addvec(mat,vec):
    Mc = mat.tocsc()
    for i in vec.nonzero()[1]:
        Mc[:,i] = sparse.csc_matrix(Mc[:,i].todense() + vec[0,i])
    return Mc


And a variant that uses the lil format, which should be more efficient when changing the sparsity structure:

def addvec2(mat,vec):
    Ml = mat.tolil()
    vec = vec.tocoo()
    for i,v in zip(vec.col, vec.data):
        Ml[:,i] = sparse.coo_matrix(Ml[:,i].A + v)
    return Ml
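
Why lil? Assigning into a CSC column in a way that changes the sparsity structure triggers scipy's SparseEfficiencyWarning, while lil is designed for exactly that kind of change. A quick sketch of mine to illustrate (not part of the original timings):

import warnings
from scipy import sparse

M = sparse.csc_matrix((10, 10))
with warnings.catch_warnings(record=True) as w:
    warnings.simplefilter("always")
    M[:, 0] = 1.0        # inserting new nonzeros into a CSC matrix
print(w[0].category)     # scipy's SparseEfficiencyWarning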


The sum has 38 nonzero terms, up from 10 in the original mat. It adds 3 dense columns from vec. That's a big change in sparsity.
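
As a quick sanity check (my addition, using the mat and vec from above), both versions should agree with the dense broadcast:

import numpy as np

assert np.allclose(addvec(mat, vec).A, mat.A + vec.A)
assert np.allclose(addvec2(mat, vec).A, mat.A + vec.A)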

In [382]: addvec(mat,vec)
Out[382]: 
<10x10 sparse matrix of type '<class 'numpy.float64'>'
    with 38 stored elements in Compressed Sparse Column format>

In [383]: _.A
Out[383]: 
array([[ 0.        ,  1.        ,  0.        ,  2.        ,  0.        ,
         0.        ,  0.        ,  3.        ,  0.        ,  0.        ],
       [ 0.        ,  1.        ,  0.        ,  2.        ,  0.        ,
         0.15568621,  0.59916335,  3.        ,  0.        ,  0.        ],
       ...
       [ 0.        ,  1.        ,  0.15552687,  2.        ,  0.        ,
         0.47483064,  0.        ,  3.        ,  0.        ,  0.        ]])


Same output with addvec2:

In [384]: addvec2(mat,vec)
Out[384]: 
<10x10 sparse matrix of type '<class 'numpy.float64'>'
    with 38 stored elements in LInked List format>


And the timing for addvec2 is better by more than 2x:



In [385]: timeit addvec(mat,vec)
100 loops, best of 3: 6.51 ms per loop

In [386]: timeit addvec2(mat,vec)
100 loops, best of 3: 2.54 ms per loop


and dense equivalents:

In [388]: sparse.coo_matrix(mat+vec.A)
Out[388]: 
<10x10 sparse matrix of type '<class 'numpy.float64'>'
    with 38 stored elements in COOrdinate format>

In [389]: timeit sparse.coo_matrix(mat+vec.A)
1000 loops, best of 3: 716 µs per loop

In [390]: timeit sparse.coo_matrix(mat.A+vec.A)
1000 loops, best of 3: 338 µs per loop


A version that saves on temporary dense-matrix memory runs in about the same time:

In [393]: timeit temp=mat.A; temp+=vec.A; sparse.coo_matrix(temp)
1000 loops, best of 3: 334 µs per loop
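
Wrapped as a function, that dense round-trip looks like this (a minimal sketch of the same computation):

def addvec_dense(mat, vec):
    temp = mat.A               # densify the sparse matrix
    temp += vec.A              # NumPy broadcasting adds vec to every row
    return sparse.coo_matrix(temp)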


So the dense version is 5-7x better than my sparse version.

For a really big mat, memory problems might chew into the dense version's performance, but an iterative sparse solution won't shine either.

I can squeeze better performance out of addvec2 by indexing Ml more efficiently: Ml.data[3], Ml.rows[3] is much faster than Ml[3,:] or Ml[:,3].

def addvec3(mat,vec):
    # Work on the transpose so each column of mat becomes a lil row,
    # reachable directly through Mtl.rows[i] / Mtl.data[i].
    Mtl = mat.T.tolil()
    vec = vec.tocoo()
    n = mat.shape[0]
    for i,v in zip(vec.col, vec.data):
        t = np.zeros((n,)) + v         # dense row filled with v
        t[Mtl.rows[i]] += Mtl.data[i]  # add the existing stored values
        t = sparse.coo_matrix(t)
        Mtl.rows[i] = t.col            # write the new row back in lil form
        Mtl.data[i] = t.data
    return Mtl.T

In [468]: timeit addvec3(mat,vec)
1000 loops, best of 3: 1.8 ms per loop


A small improvement, but not as much as I hoped. Squeezing a little more:

def addvec3(mat,vec):
    Mtl = mat.T.tolil()
    vec = vec.tocoo()
    t0 = np.zeros((mat.shape[0],))
    r0 = np.arange(mat.shape[0])
    for i,v in zip(vec.col, vec.data):
        t = t0 + v                     # every entry of the new row starts at v
        t[Mtl.rows[i]] += Mtl.data[i]  # add the existing stored values
        Mtl.rows[i] = r0               # the row is now fully dense
        Mtl.data[i] = t
    return Mtl.T

In [531]: timeit mm=addvec3(mat,vec)
1000 loops, best of 3: 1.37 ms per loop
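
And a quick check (my addition, assuming the mat and vec from above) that this addvec3 still matches the dense broadcast:

mm = addvec3(mat, vec)
assert np.allclose(mm.A, mat.A + vec.A)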




So your original matrix is sparse and the vector is sparse, but in the resulting matrix the columns corresponding to nonzero coordinates in your vector will be dense.

So we can just materialize those columns as dense matrices:



def addvec(mat, vec):
    for i in vec.nonzero()[1]:
        mat[:, i] = sp.csc_matrix(mat[:, i].todense() + vec[0, i])
    return mat
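
For example, with the mat and vec from the question (a minimal usage sketch; note that mat should be in a format that supports column assignment, such as CSC, and that the function modifies it in place):

import scipy.sparse as sp

mat = sp.csc_matrix([[1,0,0],[0,1,0],[0,0,1]])
vec = sp.csr_matrix([[1,2,1]])

print(addvec(mat, vec).todense())
# [[2 2 1]
#  [1 3 1]
#  [1 2 2]]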

