Loop over tensor

I am trying to loop over a variable-size tensor in TensorFlow, the way I would in plain Python:

# X is of shape [m, n]
for x in X:
    process(x)


I tried to use tf.scan. The point is that I want to process each sub-tensor, so I tried nested scans, but I was not able to make it work, because tf.scan works with an accumulator: if no initializer is given, the first element of elems is used as the initializer, which I don't want. As an example, say I want to add 1 to each element of my tensor (this is just an example), processing it element by element. If I run the code below, 1 is only added to part of the tensor, because the scan treats the first row as the initializer, along with the first element of each sub-tensor.

import numpy as np
import tensorflow as tf

batch_x = np.random.randint(0, 10, size=(5, 10))
x = tf.placeholder(tf.float32, shape=[None, 10])

def inner_loop(x_in):
    return tf.scan(lambda _, x_: x_ + 1, x_in)

outer_loop = tf.scan(lambda _, input_: inner_loop(input_), x, back_prop=True)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    rs = sess.run(outer_loop, feed_dict={x: batch_x})
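For reference, the accumulator behavior described above can be emulated in plain Python. This is only a sketch of tf.scan's initializer handling (a plain-Python analogue, not TensorFlow itself): without an initializer, the first element is consumed as the starting accumulator and never passed through the callback, while an explicit initializer lets every element be processed.

```python
# Plain-Python analogue of tf.scan's accumulator handling (a sketch,
# not the real TensorFlow implementation).
def scan(fn, elems, initializer=None):
    if initializer is None:
        # Without an initializer, the first element becomes the
        # starting accumulator -- it is never passed through fn.
        acc, rest = elems[0], elems[1:]
        outputs = [acc]
    else:
        acc, rest = initializer, elems
        outputs = []
    for e in rest:
        acc = fn(acc, e)
        outputs.append(acc)
    return outputs

# With no initializer the first element is skipped by fn:
print(scan(lambda _, x: x + 1, [10, 20, 30]))     # [10, 21, 31]
# With an explicit initializer every element goes through fn:
print(scan(lambda _, x: x + 1, [10, 20, 30], 0))  # [11, 21, 31]
```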


Any suggestions?



2 answers


Most of TensorFlow's built-in functions are elementwise and broadcast over whole tensors, so often you can simply pass the tensor to the function directly. Like:

outer_loop = inner_loop(x)
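A minimal NumPy sketch of why this works: elementwise ops apply to the whole array at once, so no explicit loop over rows or elements is needed.

```python
import numpy as np

# Elementwise ops broadcast over the whole array -- no explicit
# loop over rows or individual elements is needed.
batch_x = np.arange(6).reshape(2, 3)
result = batch_x + 1  # adds 1 to every element of the 2-D array
```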


However, if you have some function that cannot be applied this way (I am really curious to see such a function, though), you can use tf.map_fn.



Let's say your function should just add 1 to each tensor element (or whatever):

inputs = tf.placeholder...

def my_elementwise_func(x):
    return x + 1

def recursive_map(inputs):
    # .shape.ndims is the static rank; tf.shape() would return a tensor
    if inputs.shape.ndims > 0:
        return tf.map_fn(recursive_map, inputs)
    else:
        return my_elementwise_func(inputs)

result = recursive_map(inputs)  
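The recursion itself can be checked with ordinary nested lists; this plain-Python sketch (an analogue of the TensorFlow version above, not TensorFlow code) applies the function only at the innermost, scalar level:

```python
def my_elementwise_func(x):
    return x + 1

def recursive_map(inputs):
    # Recurse into nested lists until we reach a scalar element.
    if isinstance(inputs, list):
        return [recursive_map(x) for x in inputs]
    return my_elementwise_func(inputs)

print(recursive_map([[1, 2], [3, 4]]))  # [[2, 3], [4, 5]]
```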




To iterate over a tensor you can try tf.unstack:

Unpacks the given dimension of a rank-R tensor into rank-(R-1) tensors.

So adding 1 to each tensor would look something like this:

import numpy as np
import tensorflow as tf

# Note: tf.unstack needs the unpacked dimension to be statically known,
# so the first dimension cannot be None here.
x = tf.placeholder(tf.float32, shape=(5, 10))
x_unpacked = tf.unstack(x)  # defaults to axis 0, returns a list of tensors

processed = []  # this will be the list of processed tensors
for t in x_unpacked:
    # do whatever
    result_tensor = t + 1
    processed.append(result_tensor)

output = tf.stack(processed, 0)  # stack the rows back into a [5, 10] tensor

with tf.Session() as sess:
    print(sess.run([output], feed_dict={x: np.zeros((5, 10))}))




Obviously, you can additionally unstack each tensor from the list to process it, down to individual elements. However, to avoid a lot of nested unstacking, you can first flatten x with tf.reshape(x, [-1]) and then unstack it, like

flattened_unpacked = tf.unstack(tf.reshape(x, [-1]))
for elem in flattened_unpacked:
    process(elem)
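The flatten-process-restore round trip looks like this in NumPy (a sketch of the same idea outside the TensorFlow graph; the final reshape restores the original 2-D shape):

```python
import numpy as np

x = np.arange(6.0).reshape(2, 3)
flat = x.reshape(-1)               # like tf.reshape(x, [-1])
processed = [e + 1 for e in flat]  # process each scalar element
# Restore the original 2-D shape after elementwise processing:
out = np.array(processed).reshape(x.shape)
```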


In this case, each elem is a scalar.
