The "tensor" type of the error object doesn't repeat itself when I use tf.contrib.rnn.LayerNormBasicLSTMCell

My TensorFlow version is 1.0.0. The code below runs fine with tf.contrib.rnn.GRUCell(n_hidden_units), but with tf.contrib.rnn.LayerNormBasicLSTMCell(n_hidden_units) it fails with "TypeError: 'Tensor' object is not iterable."

    with tf.variable_scope('init_name', initializer=tf.orthogonal_initializer()):
        cell = tf.contrib.rnn.LayerNormBasicLSTMCell(n_hidden_units)
        init_state = tf.get_variable('init_state', [1, n_hidden_units],
                                     initializer=tf.constant_initializer(0.0))
        init_state = tf.tile(init_state, [train_batch_size, 1])

        outputs, states = tf.nn.dynamic_rnn(
            cell, X, dtype=tf.float32,
            sequence_length=true_lenth, initial_state=init_state)

      

And the error:

    /usr/anaconda3/lib/python3.5/site-packages/tensorflow/python/ops/rnn.py in <lambda>()
        681
        682     input_t = nest.pack_sequence_as(structure=inputs, flat_sequence=input_t)
    --> 683     call_cell = lambda: cell(input_t, state)
        684
        685     if sequence_length is not None:

    /usr/anaconda3/lib/python3.5/site-packages/tensorflow/contrib/rnn/python/ops/rnn_cell.py in __call__(self, inputs, state, scope)
       1228
       1229     with vs.variable_scope(scope or "layer_norm_basic_lstm_cell"):
    -> 1230       c, h = state
       1231       args = array_ops.concat([inputs, h], 1)
       1232       concat = self._linear(args)

    /usr/anaconda3/lib/python3.5/site-packages/tensorflow/python/framework/ops.py in __iter__(self)
        514       TypeError: when invoked.
        515     """
    --> 516     raise TypeError("'Tensor' object is not iterable.")
        517
        518   def __bool__(self):

    TypeError: 'Tensor' object is not iterable.

      

Can anyone help me? Thank you very much.

1 answer


LayerNormBasicLSTMCell requires the initial state to be a tuple of two tensors (c, h), each of shape [batch_size, num_units], rather than a single tensor.

You can make your code work by doing:



    cell = tf.contrib.rnn.LayerNormBasicLSTMCell(n_hidden_units)
    # The initial state is a pair (c, h), each of shape [batch_size, n_hidden_units].
    init_state = (tf.zeros([train_batch_size, n_hidden_units]),
                  tf.zeros([train_batch_size, n_hidden_units]))

    outputs, states = tf.nn.dynamic_rnn(
        cell, X, dtype=tf.float32,
        sequence_length=true_lenth, initial_state=init_state)
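
If you want to keep the trainable initial state from the question instead of zeros, one option (a sketch, not part of the original answer; the names init_c and init_h are just illustrative) is to create two learned rows and wrap them in tf.contrib.rnn.LSTMStateTuple, which matches the (c, h) structure the cell expects:

    with tf.variable_scope('init_name', initializer=tf.orthogonal_initializer()):
        cell = tf.contrib.rnn.LayerNormBasicLSTMCell(n_hidden_units)
        # One learned row per state component, tiled across the batch.
        init_c = tf.get_variable('init_c', [1, n_hidden_units],
                                 initializer=tf.constant_initializer(0.0))
        init_h = tf.get_variable('init_h', [1, n_hidden_units],
                                 initializer=tf.constant_initializer(0.0))
        init_state = tf.contrib.rnn.LSTMStateTuple(
            tf.tile(init_c, [train_batch_size, 1]),
            tf.tile(init_h, [train_batch_size, 1]))

        outputs, states = tf.nn.dynamic_rnn(
            cell, X, dtype=tf.float32,
            sequence_length=true_lenth, initial_state=init_state)

If you only need an all-zeros initial state, cell.zero_state(train_batch_size, tf.float32) builds the correct LSTMStateTuple for you.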

      
