Keras Lambda Layer gives ValueError: no value supported when trying to fit model
I wrote a Lambda layer in Keras to sparsify its input (keeping only the top-k entries per row):
    import tensorflow as tf
    from keras import backend as K

    def sparsify(x, percent_zero_dims):
        # k is the number of dimensions that should remain non-zero
        k_val = int((1.0 - percent_zero_dims / 100.0) * K.int_shape(x)[1])
        values, indices = tf.nn.top_k(x, k=k_val, sorted=False)
        # We need to create full indices like [[0, 0], [0, 1], [1, 2], [1, 1]]
        my_range = tf.expand_dims(tf.range(0, K.shape(indices)[0]), 1)  # will be [[0], [1]]
        my_range_repeated = tf.tile(my_range, [1, k_val])  # will be [[0, 0], [1, 1]]
        # change shapes to [N, k, 1] and [N, k, 1], to concatenate into [N, k, 2]
        full_indices = tf.concat([tf.expand_dims(my_range_repeated, 2),
                                  tf.expand_dims(indices, 2)], 2)
        full_indices = tf.reshape(full_indices, [-1, 2])
        output = tf.sparse_to_dense(full_indices, K.shape(x), tf.reshape(values, [-1]),
                                    default_value=0., validate_indices=False)
        return output
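For comparison, the same top-k sparsification can be expressed without `tf.sparse_to_dense` by thresholding against the k-th largest value in each row. This is a sketch, not the code from the question; `sparsify_mask` is a hypothetical name, and note that ties at the threshold can keep more than k entries:

```python
import tensorflow as tf

def sparsify_mask(x, percent_zero):
    # keep the top-k entries per row, zero the rest
    k = int((1.0 - percent_zero / 100.0) * int(x.shape[1]))
    values, _ = tf.nn.top_k(x, k=k)   # values are sorted descending
    kth = values[:, -1:]              # k-th largest per row, shape (N, 1)
    return tf.where(x >= kth, x, tf.zeros_like(x))

x = tf.constant([[1., 5., 3., 2.],
                 [4., 0., 2., 1.]])
# with 50% zeros, k=2: row 0 keeps 5 and 3, row 1 keeps 4 and 2
out = sparsify_mask(x, 50.0)
```

Because the kept values pass through `tf.where` unchanged, gradients flow to them directly, which avoids the index-construction machinery entirely.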
I am calling the lambda layer in my model:
sparse = Lambda(lambda x: sparsify(x, sparse_perc))(relu)
The input shape is (None, 32) and the output shape is (None, 32). I also wrote this Lambda layer as a custom layer, which throws the same error. If I set the model's output to relu it trains fine, but when I use the sparse output the model throws
line 360, in make_tensor_proto
raise ValueError("None values not supported.")
I can predict with the model, compile it without problems, and get the expected results, but when I try to fit it, the model throws the error. I believe this is due to invalid losses derived from this output, as seen in the stack trace:
line 1014, in _make_train_function
self.total_loss)...
I tried removing any NaNs from the sparse layer output (which I know is not good practice, but I was just trying to narrow down where and how it breaks):
Lambda(lambda x: tf.where(tf.is_nan(x), tf.zeros_like(x), x))(sparse)
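Since the traceback comes from `_make_train_function` rather than from a forward pass, the failure may not be NaNs in the output but a `None` gradient produced while building the training op. One hedged way to check whether gradients exist for a given op chain (sketched here with TF 2's `GradientTape`; under TF 1.x the analogue would be `tf.gradients`):

```python
import tensorflow as tf

x = tf.Variable([[1., 5., 3., 2.]])
with tf.GradientTape() as tape:
    values, _ = tf.nn.top_k(x, k=2, sorted=False)
    loss = tf.reduce_sum(values)

# tape.gradient returns None when no gradient path exists;
# top_k's values output is differentiable, so a gradient comes back here
grad = tape.gradient(loss, x)
print(grad is None)  # False
```

Running the same check on the full layer (top_k plus the index scattering) would show whether the sparse reconstruction is the point where the gradient becomes `None`.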
This happens to me in several other custom layers in the model, but not all of them, so I am confused.
Specs:
- Python 3.5
- macOS
- Keras 2.0.4
- TensorFlow 1.1.0