Training a Keras model with a BatchNormalization layer in TensorFlow

I am using Keras to build the model, and I write the optimization code and everything else directly in TensorFlow. With fairly simple layers like Dense or Conv2D this was straightforward, but adding a BatchNormalization layer to my Keras model makes things difficult.

Since the BatchNormalization layer behaves differently during training and testing, I realized that I needed K.learning_phase(): True in my feed_dict. But the following code doesn't work very well: it runs without errors, yet the model's performance does not improve.

import keras.backend as K
...
x_train, y_train = get_data()
# Run the TensorFlow training op with the Keras learning phase set to "training"
sess.run(train_op, feed_dict={x: x_train, y: y_train, K.learning_phase(): True})
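
For context, a minimal sketch of the kind of Keras-model-plus-TensorFlow-optimizer setup described above might look like the following (the layer sizes, loss, and optimizer are illustrative assumptions, not part of the original code):

import tensorflow as tf
import keras.backend as K
from keras.layers import Dense, BatchNormalization

# Plain TensorFlow placeholders for inputs and targets (shapes are illustrative)
x = tf.placeholder(tf.float32, shape=(None, 784))
y = tf.placeholder(tf.float32, shape=(None, 10))

# Keras layers applied directly to TensorFlow tensors
h = Dense(128, activation='relu')(x)
h = BatchNormalization()(h)
logits = Dense(10)(h)

# Loss and training op written in plain TensorFlow
loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits_v2(labels=y, logits=logits))
train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)

sess = tf.Session()
K.set_session(sess)  # share the session between Keras and TensorFlow
sess.run(tf.global_variables_initializer())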


When I trained the same Keras model with the Keras fit function, it worked fine.

What should I do to train a Keras model with BatchNormalization using TensorFlow?



1 answer


It turns out I was duplicating this question, which I had not seen.



I found the answer here; it simply consists of passing a special argument when calling the BatchNormalization layer.
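
Concretely, that argument is presumably the training flag accepted when the layer is called on a tensor. A minimal sketch (this reconstructs what the linked answer suggests; the surrounding layer and tensor names are assumptions):

from keras.layers import Dense, BatchNormalization

h = Dense(128, activation='relu')(x)   # x: the input placeholder/tensor
# Pass training=1 so the layer uses batch statistics when this graph is run;
# build the inference graph with training=0 to use the moving averages instead.
h = BatchNormalization()(h, training=1)

Depending on the Keras version, you may also need to run the layer's update ops (for example the ops collected in model.updates) together with train_op so the moving mean and variance are actually updated; the linked answer does not spell this out, so treat it as a caveat rather than a guaranteed requirement.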
