Why does the result differ when I resize the test batch in TensorFlow?

Here is my train code:

import tensorflow as tf
import numpy as np

x = tf.placeholder(tf.float32, [None, 2, 3])
cell = tf.nn.rnn_cell.GRUCell(10)

_, state = tf.nn.dynamic_rnn(
        cell=cell,
        inputs=x,
        dtype=tf.float32)
# train
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    x_ = np.ones([2, 2, 3], np.float32)
    output = sess.run(state, feed_dict={x: x_})
    print(output)
    saver = tf.train.Saver()
    saver.save(sess, './model')


Result:

[[ 0.12851571 -0.23994535  0.23123585 -0.00047993 -0.02450397
  -0.21048039 -0.18786618  0.04458345 -0.08603278 -0.08259721]
 [ 0.12851571 -0.23994535  0.23123585 -0.00047993 -0.02450397
  -0.21048039 -0.18786618  0.04458345 -0.08603278 -0.08259721]]


Here is my test code:

x = tf.placeholder(tf.float32, [None, 2, 3])
cell = tf.nn.rnn_cell.GRUCell(10)

_, state = tf.nn.dynamic_rnn(
        cell=cell,
        inputs=x,
        dtype=tf.float32)
with tf.Session() as sess:
    x_ = np.ones([1, 2, 3], np.float32)
    saver = tf.train.Saver()
    saver.restore(sess, './model')
    output = sess.run(state, feed_dict={x: x_})
    print(output)


Then I get:

[[ 0.12851571 -0.23994535  0.2312358  -0.00047993 -0.02450397 
  -0.21048039 -0.18786621  0.04458345 -0.08603278 -0.08259721]]


As you can see, the result has changed slightly, only in the 7th or 8th decimal place. When I set the test batch size back to 2, the result is identical to the training result. What causes this? My TensorFlow version is 0.12.
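For reference, the discrepancy between the two runs is at the limit of float32 precision (machine epsilon for float32 is about 1.2e-7). A quick check, using the values printed above (copied here by hand as float64 so the last decimal digits survive):

```python
import numpy as np

# First row of the batch-of-2 training output
train_state = np.array([0.12851571, -0.23994535, 0.23123585, -0.00047993,
                        -0.02450397, -0.21048039, -0.18786618, 0.04458345,
                        -0.08603278, -0.08259721])

# The batch-of-1 test output
test_state = np.array([0.12851571, -0.23994535, 0.2312358, -0.00047993,
                       -0.02450397, -0.21048039, -0.18786621, 0.04458345,
                       -0.08603278, -0.08259721])

# Largest element-wise difference: on the order of 1e-8,
# i.e. below float32 rounding resolution for values of this magnitude
print(np.max(np.abs(train_state - test_state)))
print(np.allclose(train_state, test_state, atol=1e-6))
```

So the two states agree to float32 precision; the remaining noise is consistent with the summations inside the matrix multiplications being performed in a different order for different batch shapes.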
