TensorFlow: Why does tf.one_hot give better performance on tf.uint8 dtypes?

I am rather puzzled that there is a large difference (5% in accuracy) in the performance of the same model (all other factors kept the same) when I simply change where I cast my `tf.uint8` labels relative to `tf.one_hot` — that is, depending on whether `tf.one_hot` receives the uint8 integers directly.

For example:

...

labels = tf.cast(labels, tf.int64)
labels = tf.one_hot(labels, depth=12)

compared with:

...
labels = tf.one_hot(labels, depth=12)
labels = tf.cast(labels, tf.int64)

The latter gives the better performance.

Is there a preferred dtype when using `tf.one_hot`?
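For reference, a minimal sketch of the two orderings (the example label values are my own assumption; only `depth=12` comes from the question). Note that `tf.one_hot` accepts uint8 indices directly, and its output defaults to `float32` regardless of the index dtype, so the two orderings produce tensors with the same one-hot positions but different dtypes:

```python
import tensorflow as tf

# Hypothetical labels with 12 classes, as in the question.
labels = tf.constant([0, 3, 11], dtype=tf.uint8)

# Order A: cast the indices to int64 first, then one-hot encode.
# The result is float32, tf.one_hot's default output dtype.
a = tf.one_hot(tf.cast(labels, tf.int64), depth=12)

# Order B: one-hot encode the uint8 indices directly, then cast
# the *encoded* tensor to int64.
b = tf.cast(tf.one_hot(labels, depth=12), tf.int64)

# The encoded positions are identical; only the output dtype differs.
print(a.dtype)
print(b.dtype)
tf.debugging.assert_equal(tf.cast(a, tf.int64), b)
```

One thing worth checking in a case like this: the accuracy difference may come not from `tf.one_hot` itself but from which dtype the encoded labels have when they reach the loss function downstream (`float32` in order A vs `int64` in order B).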
