The input 'y' of 'Mul' Op is of type float32 which does not match the int32 type of argument 'x'

When I run this code on Linux, it works, but it does not work on Windows. By the way, I am using Python 3.5 on Windows.

with graph.as_default():

    # Input data.
    train_inputs = tf.placeholder(tf.int32, shape=[batch_size])
    train_labels = tf.placeholder(tf.int32, shape=[batch_size, 1])
    valid_dataset = tf.constant(valid_examples, dtype=tf.int32)

    with tf.device('/cpu:0'):
        # Look up embeddings for the input words.
        embeddings = tf.Variable(
            tf.random_uniform([vocabulary_size, embedding_size], -1.0, 1.0))
        embed = tf.nn.embedding_lookup(embeddings, train_inputs)

        # Variables for the NCE loss.
        nce_weights = tf.Variable(
            tf.truncated_normal([vocabulary_size, embedding_size],
                                stddev=1.0 / math.sqrt(embedding_size)))
        nce_biases = tf.Variable(tf.zeros([vocabulary_size]))

    # Average NCE loss for the batch (this is the line that raises the error).
    loss = tf.reduce_mean(
        tf.nn.nce_loss(nce_weights, nce_biases, embed, train_labels,
                       num_sampled, vocabulary_size))

4 answers


Newer versions of TensorFlow changed the order of the tf.nn.nce_loss parameters.

Try changing the call to:

tf.nn.nce_loss(nce_weights, nce_biases, train_labels, embed, num_sampled, vocabulary_size)
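
For context, a minimal sketch of the loss line from the question, assuming TensorFlow 1.0 or later where the positional order is (weights, biases, labels, inputs, num_sampled, num_classes):

    # Sketch assuming TensorFlow >= 1.0, where labels come before inputs.
    loss = tf.reduce_mean(
        tf.nn.nce_loss(nce_weights, nce_biases, train_labels, embed,
                       num_sampled, vocabulary_size))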



I ran into this error with code very similar to yours. When I ran it on FloydHub using env = tensorflow (which stands for TensorFlow 1.1.0 + Keras 2.0.4 in Python 3), it threw the above error.

However, it worked fine after I changed the environment to tensorflow-1.0 (TensorFlow 1.0.0 + Keras 1.2.2 in Python 3).
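
If you are not sure which argument order your installation expects, a quick check (just a sketch, assuming TensorFlow is importable in your Python 3 environment) is to print the version and the signature of tf.nn.nce_loss:

    import inspect

    import tensorflow as tf

    # Installed TensorFlow version, e.g. "1.1.0" or "1.0.0".
    print(tf.__version__)
    # Current parameter order of nce_loss; in 1.x it starts with
    # (weights, biases, labels, inputs, ...).
    print(inspect.signature(tf.nn.nce_loss))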



You need to convert train_labels to float32. Your code declares train_labels as int32, while embed is float32.

This is how you convert int32 to float32:

    tf.cast(train_labels, tf.float32)

Then calculate the loss.



I faced the same problem, but with a different loss function. You are missing the parameter names; pass the parameters by keyword and the error will disappear. Check the example line of code below.

    loss = tf.reduce_mean(
        tf.nn.nce_loss(weights=nce_weights, biases=nce_biases,
                       inputs=embed, labels=train_labels,
                       num_sampled=num_sampled,
                       num_classes=vocabulary_size))







