Does TensorFlow's tf.while_loop automatically handle dependencies when executed in parallel?

I am interested in implementing a recursive neural network in TensorFlow, along the lines of what has been done in How to implement a recursive neural network in TensorFlow? ...

However, in that implementation, the parallel_iterations argument of tf.while_loop was fixed at 1. I'm afraid this might be too slow. Since the tree I'm going to use has parts that are independent of each other, I would hope that I can set parallel_iterations to a higher value. However, the tree I feed as input inevitably contains some dependencies, and I am afraid that setting it to a higher value might break them.

So my question is: does TensorFlow's tf.while_loop already resolve dependencies automatically, so that it only exploits the concurrency between iterations that are independent of each other?

The TensorFlow documentation for this method states the following:

For correct programs, while_loop should return the same result for any parallel_iterations > 0.

But I'm not sure what they mean by "correct programs".

1 answer


Yes, you can set parallel_iterations to a higher value.

According to this issue, ops will run in parallel as soon as all their inputs have been computed:



while_loop implements non-strict semantics: an iteration can begin as soon as one of the ops for that iteration is ready (i.e., all of its inputs are available) to execute. Thus, while_loop can easily execute multiple iterations in parallel. For example, even if the accumulated value for a step is not yet available, the step can still start and perform any operations that do not depend on the accumulated value.

So there should be no problem: dependencies are respected automatically, and only ops that are actually independent of each other run in parallel.
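To build intuition for those non-strict semantics, here is a minimal pure-Python sketch (not TensorFlow code; the tree shape and function name are made up for illustration). Each node fires as soon as its inputs are available, in an arbitrary order, and the final result is the same for every schedule, which is what makes the program "correct" for any degree of parallelism:

```python
import random

def evaluate(tree, seed):
    """Hypothetical dataflow evaluator.

    tree maps node -> ("leaf", value) or ("sum", [child, ...]).
    A node is "ready" once all of its children have been computed;
    we pick a ready node at random to simulate an arbitrary
    parallel schedule.
    """
    rng = random.Random(seed)
    values = {}
    pending = set(tree)
    while pending:
        ready = [n for n in pending
                 if tree[n][0] == "leaf"
                 or all(c in values for c in tree[n][1])]
        node = rng.choice(ready)
        kind, payload = tree[node]
        values[node] = (payload if kind == "leaf"
                        else sum(values[c] for c in payload))
        pending.discard(node)
    return values

tree = {
    "a": ("leaf", 1), "b": ("leaf", 2), "c": ("leaf", 3),
    "ab": ("sum", ["a", "b"]),      # depends on a and b
    "root": ("sum", ["ab", "c"]),   # depends on ab and c
}

# Every random schedule produces the same final value,
# because each node waits for its own inputs.
results = {evaluate(tree, seed)["root"] for seed in range(20)}
print(results)  # {6}
```

The leaves a, b, c are mutually independent and could all run concurrently, while ab and root are forced to wait for their inputs, just as tf.while_loop only overlaps iterations whose ops have no pending dependencies.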
