How to use gradients as input in TensorFlow

I am trying to feed the gradients of one layer of my network (taken with respect to that layer's weights) as input to the next layer. My code is too long to post in full, but below is the problematic part (everything worked until I inserted it).

weights = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope="conv5/weights:0")
deriv = tf.gradients(net, weights)
deriv = tf.squeeze(deriv)
deriv = tf.expand_dims(deriv, 0)
batch_deriv = tf.tile(deriv, [batch_size, 1, 1])
h = tf.reduce_sum(net, axis=1)  # maybe normalize?
new_feature_map = tf.concat([h, batch_deriv], axis=1)
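
For reference, this is the shape plumbing the snippet above is meant to perform, sketched in NumPy with made-up shapes (the real shapes depend on `conv5` and `net`; everything here is purely illustrative):

```python
import numpy as np

batch_size = 4
# Hypothetical shapes, chosen only so the ops line up:
net = np.random.randn(batch_size, 6, 2, 5)   # e.g. (batch, T, M, C)
raw_deriv = np.random.randn(1, 3, 5)         # what tf.gradients might return

deriv = np.squeeze(raw_deriv)                # (3, 5)     ~ tf.squeeze
deriv = np.expand_dims(deriv, 0)             # (1, 3, 5)  ~ tf.expand_dims(deriv, 0)
batch_deriv = np.tile(deriv, (batch_size, 1, 1))  # (4, 3, 5) ~ tf.tile, one copy per sample

h = net.sum(axis=1)                          # (4, 2, 5)  ~ tf.reduce_sum(net, axis=1)
new_feature_map = np.concatenate([h, batch_deriv], axis=1)  # (4, 5, 5)
print(new_feature_map.shape)
```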

      

I am getting the following error (it is raised inside the optimizer, not in my model, and I don't know why):

 File "/usr/lib/pycharm-community/helpers/pydev/pydevd.py", line 1596, in <module>
    globals = debugger.run(setup['file'], None, None, is_module)
  File "/usr/lib/pycharm-community/helpers/pydev/pydevd.py", line 974, in run
    pydev_imports.execfile(file, globals, locals)  # execute the script
  File ".../train.py", line 272, in <module>
    train()
  File ".../train.py", line 133, in train
    train_op = optimizer.minimize(loss, global_step=batch)
  File ".../local/lib/python2.7/site-packages/tensorflow/python/training/optimizer.py", line 315, in minimize
    grad_loss=grad_loss)
  File ".../local/lib/python2.7/site-packages/tensorflow/python/training/optimizer.py", line 386, in compute_gradients
    colocate_gradients_with_ops=colocate_gradients_with_ops)
  File ".../local/lib/python2.7/site-packages/tensorflow/python/ops/gradients_impl.py", line 551, in gradients
    out_grads[i] = control_flow_ops.ZerosLikeOutsideLoop(op, i)
  File ".../local/lib/python2.7/site-packages/tensorflow/python/ops/control_flow_ops.py", line 1305, in ZerosLikeOutsideLoop
    pred = op_ctxt.pred
AttributeError: 'NoneType' object has no attribute 'pred'

      

Does anyone know why this is happening and how to solve it?

UPDATE

It seems that changing the line

new_feature_map = tf.concat([h, batch_deriv], axis=1) 

      

to

new_feature_map = tf.concat([h, tf.stop_gradient(batch_deriv)], axis=1)

      

"fixed" this (this is already a dowsnt bug), but I have no idea why. Also, during the evaluation (over the test suite), it seems that no gradients are being computed.

Does anyone know how to solve this?

python tensorflow

