How do I correctly get the weights of a Conv2D layer in Keras?

My Conv2D layer is defined as:

Conv2D(96, kernel_size=(5, 5),
       activation='relu',
       input_shape=(image_rows, image_cols, 1),
       kernel_initializer=initializers.glorot_normal(seed),
       bias_initializer=initializers.glorot_uniform(seed),
       padding='same',
       name='conv_1')

This is the first layer in my network.
The input size is 64 by 160, and the images have 1 channel.
I am trying to visualize the weights from this convolutional layer, but I don't know how to get them.
This is what I am doing now:

1. Call

layer.get_weights()[0]

This returns an array of shape (5, 5, 1, 96); the 1 is there because the images have a single channel.

2. Select the 5 by 5 filters with

layer.get_weights()[0][:,:,:,j][:,:,0]

Very ugly, but I'm not sure how to simplify this; any comments are greatly appreciated (a simpler equivalent is sketched just below).
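
For what it's worth, a minimal sketch of an equivalent but simpler slice, assuming the layer is fetched by its name conv_1 from a model called model (the names model and j here are placeholders):

weights = model.get_layer('conv_1').get_weights()[0]  # shape (5, 5, 1, 96)
j = 0                                                 # index of the filter to inspect
filter_j = weights[:, :, 0, j]                        # same result as weights[:, :, :, j][:, :, 0]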

I'm not sure about these 5 by 5 squares. Are they really the filters?
If not, could someone tell me how to grab the filters from the model correctly?


1 answer


I tried displaying the weights like you, showing only the first 25. I have the same question you do: are these filters or something else? It doesn't look like the filters derived from deep belief networks or stacked RBMs.

Here are the visualized weights before training: [untrained weights]

And here are the weights after training: [trained weights]

Strangely, there is no change after training! If you compare them, they are identical.

And here, for comparison, are RBM filters, layer 1 on top and layer 2 on the bottom: [DBN/RBM filters]

If I set kernel_initializer="ones", then I get filters that look nice, but the net loss never decreases, despite a lot of trial and error: [image]
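
For completeness, that variant only changes the initializer argument of the layer; a minimal sketch using the same toy layer as in the code below:

  x = Conv2D(filters=64, kernel_size=(5, 5), input_shape=(32, 32, 3),
             kernel_initializer="ones")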

Here is the code to display the 2D convolution weights/filters.

  from keras.models import Sequential
  from keras.layers import Conv2D, Activation

  ann = Sequential()
  # keep a reference to the conv layer so its weights can be read later
  x = Conv2D(filters=64, kernel_size=(5, 5), input_shape=(32, 32, 3))
  ann.add(x)
  ann.add(Activation("relu"))

...

  import matplotlib.pyplot as plt

  # conv weights have shape (5, 5, 3, 64); take input channel 0 so each
  # filter is a single 5x5 slice
  x1w = x.get_weights()[0][:, :, 0, :]
  for i in range(25):  # first 25 of the 64 filters
      plt.subplot(5, 5, i + 1)
      plt.imshow(x1w[:, :, i], interpolation="nearest", cmap="gray")
  plt.show()

  ann.fit(Xtrain, ytrain_indicator, epochs=5, batch_size=32)

  # same plot again after training, for comparison
  x1w = x.get_weights()[0][:, :, 0, :]
  for i in range(25):
      plt.subplot(5, 5, i + 1)
      plt.imshow(x1w[:, :, i], interpolation="nearest", cmap="gray")
  plt.show()

--------------------------- UPDATE ---------------------------

So I tried again with a learning rate of 0.01 instead of 1e-6, and used images normalized between 0 and 1 instead of 0 and 255, by dividing the images by 255.0. Now the convolution filters do change, and the weights of the first convolutional layer before training look like this: [untrained weights]
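
For reference, a minimal sketch of those two changes; the optimizer here is an assumption (the answer does not say which one was used), and depending on the Keras version the argument may be learning_rate instead of lr:

  from keras.optimizers import SGD  # assumed optimizer, not stated in the answer

  # normalize pixel values from [0, 255] to [0, 1]
  Xtrain = Xtrain.astype("float32") / 255.0

  # learning rate of 0.01 instead of 1e-6
  ann.compile(optimizer=SGD(lr=0.01),
              loss="categorical_crossentropy",
              metrics=["accuracy"])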

After training you will notice that the filters do change (though not by much) at a reasonable learning rate: [trained convolution filters]

Here is picture seven of the CIFAR-10 test set: [CIFAR-10 car, picture 7]

And here is the output of the first convolution layer: [convolution layer output]

And if I take the output of the last convolution layer (with no dense layers in between) from the untrained network and feed it to a classifier, it performs about the same as classifying the raw images in terms of accuracy; but if I train the convolution layers, the output of the last convolution layer increases the accuracy of the classifier (a random forest).
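
A rough sketch of that comparison; the layer name "last_conv", the feature-extractor construction, and the forest size are illustrative assumptions, not the exact setup used here:

  from keras.models import Model
  from sklearn.ensemble import RandomForestClassifier

  # model that stops at the last convolution layer ("last_conv" is a placeholder name)
  feat_model = Model(inputs=ann.input, outputs=ann.get_layer("last_conv").output)

  # flatten each image's feature maps into one vector per sample
  train_feats = feat_model.predict(Xtrain).reshape(len(Xtrain), -1)
  test_feats = feat_model.predict(Xtest).reshape(len(Xtest), -1)

  rf = RandomForestClassifier(n_estimators=100)
  rf.fit(train_feats, ytrain)         # ytrain: integer class labels
  print(rf.score(test_feats, ytest))  # compare against a forest trained on raw pixels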

So I would conclude that the convolution layers are indeed filters, as well as weights.
