Accessing the gradient values of the Keras model output with respect to the inputs

I made a fairly simple NN model to do some non-linear regression for me in Keras, as a learning exercise. I have uploaded my Jupyter notebook as a gist here (it displays correctly on GitHub), and it is pretty short and to the point.

It just fits the 1D function y = (x - 5)^2 / 25.

I know that Theano and TensorFlow are essentially graph-based automatic differentiation (gradient) frameworks, and that they are mainly used to take gradients of the loss function with respect to the weights for gradient-descent optimization.

But I'm trying to figure out whether I have access to something that, given the trained model, can approximate the derivative of the output with respect to the input for me (not with respect to a weight or the loss function). So for this case, I would like y' = 2(x - 5)/25.0 to be evaluated from the differentiated network graph for a specified input value x, using the currently trained state of the network.

Do I have any options in the Keras or Theano/TF APIs to do this, or do I need to somehow do my own chain-ruling through the weights (or perhaps add my own non-trainable "identity" layers or something)? In my notebook, you can see me trying several approaches based on what I've been able to find so far, without a ton of success.

To make it concrete, I have a working Keras model with the structure:

model = Sequential()
# 1d input
model.add(Dense(64, input_dim=1, activation='relu'))
model.add(Activation("linear"))
model.add(Dense(32, activation='relu'))
model.add(Activation("linear"))
model.add(Dense(32, activation='relu'))
# 1d output
model.add(Dense(1))

model.compile(loss='mse', optimizer='adam', metrics=["accuracy"])
model.fit(x, y,
      batch_size=10,
      epochs=25,
      verbose=0,
      validation_data=(x_test, y_test))
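
(The arrays x, y, x_test and y_test are just samples of the target function; for illustration, something like the following would generate them, with the sampling range and sizes as placeholder assumptions:)

import numpy as np

# hypothetical training/validation data drawn from y = (x - 5)^2 / 25
x = np.random.uniform(0, 10, size=(1000, 1))
y = (x - 5) ** 2 / 25.0
x_test = np.random.uniform(0, 10, size=(200, 1))
y_test = (x_test - 5) ** 2 / 25.0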


I would like to evaluate the derivative of the output y with respect to the input x at, say, x = 0.5.

All my attempts to extract gradient values, based on searching past answers, have ended in syntax errors. From a high-level perspective, is this a supported Keras feature, or will any solution be backend-specific?



1 answer


As you mentioned, Theano and TF are symbolic, so getting the derivative is pretty straightforward:

import theano
import theano.tensor as T
import keras.backend as K

# symbolic gradient of the (scalar) output with respect to the input tensor
J = T.grad(model.output[0, 0], model.input)
# compile a callable that evaluates that gradient for concrete input values
jacobian = K.function([model.input, K.learning_phase()], [J])
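
If you would rather stay backend-agnostic (the question asks about this), a rough sketch of the same idea using only the Keras backend API would be the following; note that K.gradients sums the output over the batch, which is fine here because each sample's output depends only on its own input row:

import keras.backend as K

# gradient of the output with respect to the input, via the backend API;
# K.gradients returns a list containing one tensor shaped like model.input
grads = K.gradients(model.output, model.input)
jacobian = K.function([model.input, K.learning_phase()], grads)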




First you compute the symbolic gradient (T.grad) of the output with respect to the input, then you build a function that you can call to do the actual computation. Note that sometimes this is not so trivial due to shape issues, since you get one derivative for each input element.
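
For example, here is a minimal sketch of evaluating the compiled jacobian function at x = 0.5 (assuming the 1D model above; the 0 passed alongside the input is the learning-phase flag, i.e. test mode):

import numpy as np

# dy/dx at x = 0.5; the trailing 0 turns the learning phase off (test mode)
x_value = np.array([[0.5]])
dy_dx = jacobian([x_value, 0])[0]
print(dy_dx)  # for a well-fitted model, close to 2 * (0.5 - 5) / 25.0 = -0.36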







