How to vectorize / tensorize operations in numpy with irregular array shapes

I would like to perform an operation

A[i, j, k] = sum_p X[i, j, p] * alpha[i, p, k]

If X had a regular shape I could use np.einsum; I believe the syntax would be

np.einsum('ijp,ipk->ijk',X, alpha)


Unfortunately my X data has an irregular structure along axis 1 (0-indexed).

To give a little more context, X[i][j][p] refers to the p-th function of the j-th member of the i-th group. The groups have different sizes, so X is a list of lists of different lengths, whose innermost lists all have the same length.

alpha has a regular structure and so can be stored as a standard numpy array (it arrives one-dimensional, and I then use alpha.reshape(a, b, c), where a, b, c are task-specific integers).
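To make the layout concrete, here is a minimal sketch of the shapes involved (the group sizes and the dimensions p and k below are made up purely for illustration):

import numpy as np

num_groups = 3                # i indexes groups
group_sizes = [4, 2, 5]       # irregular: j indexes members of group i
p, k = 6, 7                   # made-up inner dimensions

# X[i][j] is a length-p vector; the j axis is ragged
X = [[np.random.rand(p) for _ in range(size)] for size in group_sizes]

# alpha is regular: one (p, k) matrix per group, arriving flat
alpha = np.random.rand(num_groups * p * k).reshape(num_groups, p, k)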

I would like to avoid storing X as a list of lists of lists, or as a list of np.arrays of different shapes, and writing something like

A = []
for i in range(num_groups):
    temp = np.empty((group_sizes[i], alpha.shape[2]), dtype=float)
    for j in range(group_sizes[i]):
        temp[j] = np.einsum('p,pk->k', X[i][j], alpha[i, :, :])
    A.append(temp)


Is there some nice numpy function / data structure for this, or will I have to settle for an only partially vectorized implementation?

1 answer


I know this sounds obvious, but if you can afford the memory, I would start by just checking the performance you get from padding the data to a single size, i.e. simply adding zeros and performing the operation on one regular array. Sometimes a simpler solution is faster than a presumably more optimal one that makes more calls into Python / C.
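For instance, a minimal padding sketch (assuming X, group_sizes and alpha as described in the question; X_padded, max_size and A_padded are names introduced here just for illustration):

import numpy as np

max_size = max(group_sizes)
p = len(X[0][0])

# Pad every group to the same number of members with zero rows
X_padded = np.zeros((len(X), max_size, p))
for i, group in enumerate(X):
    X_padded[i, :len(group)] = group

# A single regular einsum; the padded rows just produce zero rows,
# which can be sliced away again per group
A_padded = np.einsum('ijp,ipk->ijk', X_padded, alpha)
A = [A_padded[i, :size] for i, size in enumerate(group_sizes)]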

If that doesn't work, then your best bet, as suggested by Tom Wiley, is probably a bucketing strategy. Assuming X is your list of lists of lists and alpha is an array, you can start by collecting the sizes along the second index (which you probably already have):

X_sizes = np.array([len(x_i) for x_i in X])


And sort them:

idx_sort = np.argsort(X_sizes)
X_sizes_sorted = X_sizes[idx_sort]


Then you choose a number of buckets, which depends on the size of your workload. Say you have chosen BUCKETS = 4. You just need to split the data so that each part covers roughly the same total amount of work:

sizes_cumsum = np.cumsum(X_sizes_sorted)
total = sizes_cumsum[-1]
bucket_idx = []
for i in range(BUCKETS):
    low = np.round(i * total / float(BUCKETS))
    high = np.round((i + 1) * total / float(BUCKETS))
    # Parentheses are needed because & binds tighter than the comparisons;
    # using > low and <= high ensures the last element is not dropped
    m = (sizes_cumsum > low) & (sizes_cumsum <= high)
    idx = np.where(m)[0]
    # Make relative to X, not idx_sort
    idx = idx_sort[idx]
    bucket_idx.append(idx)

And then you do the calculations for each bucket:

bucket_results = []
for idx in bucket_idx:
    # The last index in the bucket has the biggest size (indices come from the sorted order)
    bucket_size = X_sizes[idx[-1]]
    # Fill the padded bucket array
    X_bucket = np.zeros((len(idx), bucket_size, len(X[0][0])), dtype=alpha.dtype)
    for i, X_i in enumerate(idx):
        X_bucket[i, :X_sizes[X_i]] = X[X_i]
    # Compute
    res = np.einsum('ijp,ipk->ijk', X_bucket, alpha[idx])
    bucket_results.append(res)


Filling the array X_bucket will probably be the slow part here. Again, if you can afford the memory, it would be more efficient to keep X in a single padded array and then just slice X[idx, :bucket_size, :].
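For example, a sketch under the assumption that such a padded array (called X_padded here) is built once up front, reusing X, X_sizes and the idx / bucket_size variables from the loop above:

# Build the fully padded array once, outside the bucket loop
max_size = X_sizes.max()
p = len(X[0][0])
X_padded = np.zeros((len(X), max_size, p), dtype=alpha.dtype)
for i, group in enumerate(X):
    X_padded[i, :X_sizes[i]] = group

# Inside the bucket loop, slicing then replaces the per-element copy:
#     X_bucket = X_padded[idx, :bucket_size, :]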

Finally, you can put the results back into a list, in the original order:

result = [None] * len(X)
for res, idx in zip(bucket_results, bucket_idx):
    for r, X_i in zip(res, idx):
        # Trim the padding off each group's result
        result[X_i] = r[:X_sizes[X_i]]


Sorry I'm not giving a complete function, but I'm not sure exactly what your input and expected output look like, so I've just laid out the pieces and you can use them however you see fit.
