Number of tensors mismatch for embeddings in the TensorBoard Keras callback
I am using the CIFAR-10 dataset, so there are 10,000 test images. I have successfully created a .tsv
file containing metadata: the test set labels (as human-readable text, not indices), one on each of 10,000 lines.
However, in TensorBoard, when I open the embeddings (projector) tab, I get this error:
The number of tensors (16128) does not match the number of rows in the metadata (10000).
But I would expect the embeddings to line up with the test set, which is 10,000 examples long, like in the .tsv file
I made!
Here is the code I am using from this project:
K.set_learning_phase(1)
# [...]
model = build_model(hype_space)
# [...]

log_path = None
if log_for_tensorboard:
    log_path = os.path.join(TENSORBOARD_DIR, model_uuid)
    if not os.path.exists(log_path):
        os.makedirs(log_path)
    print("Tensorboard log files will be saved to: {}".format(log_path))

    embeddings_metadata = {
        # Dense layers only:
        l.name: "../test_classes.tsv"
        for l in model.layers if 'dense' in l.name.lower()
    }

    tb_callback = keras.callbacks.TensorBoard(
        log_dir=log_path,
        histogram_freq=1,
        write_graph=True,
        write_images=True,
        embeddings_freq=3,
        embeddings_layer_names=list(embeddings_metadata.keys()),
        embeddings_metadata=embeddings_metadata
    )
    tb_callback.set_model(model)
    callbacks.append(tb_callback)

history = model.fit(
    [x_train],
    [y_train, y_train_c],
    batch_size=int(hype_space['batch_size']),
    epochs=EPOCHS,
    shuffle=True,
    verbose=1,
    callbacks=callbacks,
    validation_data=([x_test], [y_test, y_test_coarse])
).history
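My guess (unverified) is that with `embeddings_layer_names`, the Keras 2.x TensorBoard callback visualizes a layer's weight matrix rather than per-sample activations, so the projector sees one row per weight-matrix row, not one per test image. The sizes below are assumptions just to illustrate the mismatch:

```python
# Hypothetical dense kernel shape: (input_dim, units). The projector would
# see input_dim rows, which need not equal the number of test images.
kernel_shape = (16128, 10)   # assumed sizes, matching the error message
metadata_rows = 10000        # lines in my test_classes.tsv
tensor_rows = kernel_shape[0]
print(tensor_rows == metadata_rows)  # False: 16128 != 10000
```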
Thanks!