Fitting a Keras model when layers are not trained gives inconsistent results

I am trying to determine the accuracy of my model without training it or updating its weights, so I set all of my layers to trainable = False.

When I run fit_generator on a generator with shuffle = False, I get consistent results every time.

When I run fit_generator on a generator with shuffle = True, the results jump around from run to run. Given that the inputs are the same and the model is not being trained, I would expect the internal state of the model not to change and the accuracy to be identical on the same dataset, regardless of the order of the batches.

However, this order dependence means that some state in the model is changing despite trainable = False. What is going on inside the model that is causing this?
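For context, here is a minimal sketch of the setup being described. The architecture, the random data, and the use of model.fit with the shuffle argument (instead of fit_generator with a shuffled generator) are my own simplifications, not the original code:

```python
import numpy as np
from tensorflow import keras

# Placeholder model and data, only to illustrate the setup above.
model = keras.Sequential([
    keras.layers.Dense(64, activation="relu", input_shape=(20,)),
    keras.layers.Dense(1, activation="sigmoid"),
])

# Freeze every layer *before* compiling so no weights are updated.
for layer in model.layers:
    layer.trainable = False

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

x = np.random.rand(1000, 20).astype("float32")
y = (np.random.rand(1000) > 0.5).astype("float32")

# With shuffle=False the reported metrics are identical on every run;
# with shuffle=True they differ slightly between runs.
model.fit(x, y, batch_size=32, epochs=1, shuffle=False)
model.fit(x, y, batch_size=32, epochs=1, shuffle=True)
```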



1 answer


This is a really interesting phenomenon. It is most likely due to the fact that most neural network packages use float32 precision, which only gives about 5-7 significant decimal digits. Floating-point addition is not associative, so when the batches are shuffled, the per-batch losses and metrics are accumulated in a different order and the totals can differ slightly from run to run. Here you can read a detailed explanation.
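To make the float32 point concrete, here is a small Keras-independent NumPy sketch: summing the same numbers with a float32 accumulator in two different orders usually produces totals that differ in the last digits, which is the same effect shuffling has on accumulated loss and accuracy values.

```python
import numpy as np

# Float32 addition is not associative, so summing the same per-batch
# values in a different order (which is effectively what shuffling does
# to the running loss/accuracy totals) can give a slightly different result.
rng = np.random.default_rng(0)
values = rng.random(100_000, dtype=np.float32)

total_in_order = np.float32(0.0)
for v in values:
    total_in_order += v

total_shuffled = np.float32(0.0)
for v in rng.permutation(values):
    total_shuffled += v

# The two totals typically differ in the last few decimal places.
print(total_in_order, total_shuffled, total_in_order == total_shuffled)
```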


