TensorFlow: how to set the shape of a tensor?

So I have this code that works until I ask it to write TensorBoard summaries for me:

import numpy as np
import tensorflow as tf
import tqdm
from sklearn.model_selection import train_test_split
from tensorflow.python.framework import ops

ops.reset_default_graph()

# toy data: a noisy sine wave
x = np.linspace(0, 10, 1000, dtype='float32')
y = np.sin(x) + np.random.normal(size=len(x))

X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=.3)

x_ = tf.placeholder(name="input", shape=None, dtype=np.float32)
y_ = tf.placeholder(name="output", shape=None, dtype=np.float32)
w = tf.Variable(tf.random_normal([]), name='w')
b = tf.Variable(tf.random_normal([]), name='bias')

# simple linear model: w * x + b
model_output = tf.add(tf.multiply(x_, w), b)

loss = tf.reduce_mean(tf.pow(y_ - model_output, 2), name='loss')
train_step = tf.train.GradientDescentOptimizer(0.0025).minimize(loss)

# a scalar summary for every tensor I want to track
summary_writer = tf.summary.FileWriter('linreg')
for value in [x_, model_output, w, loss]:
    tf.summary.scalar(value.op.name, value)
summaries = tf.summary.merge_all()

n_epochs = 100
train_errors = []
test_errors = []

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in tqdm.tqdm(range(n_epochs)):
        _, train_err = sess.run([train_step, loss],
                                feed_dict={x_: X_train, y_: y_train})
        train_errors.append(train_err)
        test_errors.append(
            sess.run(loss, feed_dict={x_: X_test, y_: y_test}))

        summary_writer.add_summary(sess.run(summaries), i)


with this I get:

InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'input' with dtype float
     [[Node: input = Placeholder[dtype=DT_FLOAT, shape=[], _device="/job:localhost/replica:0/task:0/cpu:0"]()]]


So if I understood correctly, it's asking me for a feed_dict, so it should be fine to change the last line:

summary_writer.add_summary(sess.run(summaries, feed_dict={x_: X_train, y_: y_train}), i)


and now we have:

InvalidArgumentError (see above for traceback): tags and values not the same shape: [] != [700] (tag 'input_1')
     [[Node: input_1 = ScalarSummary[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"](input_1/tags, _recv_input_0)]]


So the weight apparently wants to have the same shape as x; I can do this:

w = tf.Variable(tf.random_normal([700]), name='w')


But what about X_test? It has only 300 rows:

InvalidArgumentError (see above for traceback): Incompatible shapes: [300] vs. [700]
     [[Node: Mul = Mul[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"](_recv_input_0, w/read)]]


So do I have to change the shape of w dynamically? Or keep separate w1 and w2 for training and testing? What is the TensorFlow way to do this?

=========================================================================

Shape time. After giving the variables and placeholders explicit shapes:

x_ = tf.placeholder(name="input", shape=[None, 1], dtype=np.float32)
y_ = tf.placeholder(name="output", shape=[None, 1], dtype=np.float32)
w = tf.Variable(tf.random_normal([1, 1]), name='w')
b = tf.Variable(tf.random_normal([1]), name='bias')


We can see that the fed data must match that shape too:

ValueError: Cannot feed value of shape (700,) for Tensor 'input:0', which has shape '(?, 1)'
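
A reshape to a column vector fixes this. As a quick sanity check (with the 70/30 split from above, X_train has 700 elements):

print(X_train.shape)                 # (700,)
print(X_train.reshape(-1, 1).shape)  # (700, 1)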


So the last piece of code now looks like this (with the data reshaped):

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in tqdm.tqdm(range(n_epochs)):
        _, train_err, summ = sess.run(
            [train_step, loss, summaries],
            feed_dict={x_: X_train.reshape(-1, 1),
                       y_: y_train.reshape(-1, 1)})
        summary_writer.add_summary(summ, i)

        train_errors.append(train_err)
        test_errors.append(sess.run(
            loss, feed_dict={x_: X_test.reshape(-1, 1),
                             y_: y_test.reshape(-1, 1)}))


And the current error:

InvalidArgumentError (see above for traceback): tags and values not the same shape: [] != [1,1] (tag 'w_1')
         [[Node: w_1 = ScalarSummary[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"](w_1/tags, w/read)]]


Now I don't even understand which tensor has the shape [].
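
(For the record: the [] in the message is the shape of the summary tag, which is a single string, while [1, 1] is the shape of w that was passed as the value. The static shapes can be checked directly, assuming the graph above:

print(w.get_shape())     # (1, 1) -> the "values" side of the scalar summary
print(loss.get_shape())  # ()     -> a true scalar, which is what tf.summary.scalar expects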

=========================================================================

Conclusion time.

tf.summary.scalar([value.op.name], value)


won't work, because the first argument (the tag name) of tf.summary.scalar must be a string or bytes, and it raises an error otherwise.

So the tag has shape [], which means the summarized value must be a scalar too. Let's accept that and change the code a little:

w = tf.Variable(tf.random_normal([]), name='w')
b = tf.Variable(tf.random_normal([]), name='bias')
...
for value in [w, b, loss]:
    tf.summary.scalar(value.op.name, value)


It finally works.
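
For reference, here is the whole thing with all the fixes combined (a minimal sketch of the final version: shaped placeholders, scalar w and b so tf.summary.scalar is happy, and a single sess.run per step):

import numpy as np
import tensorflow as tf
import tqdm
from sklearn.model_selection import train_test_split
from tensorflow.python.framework import ops

ops.reset_default_graph()

x = np.linspace(0, 10, 1000, dtype='float32')
y = np.sin(x) + np.random.normal(size=len(x))
X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=.3)

x_ = tf.placeholder(name="input", shape=[None, 1], dtype=np.float32)
y_ = tf.placeholder(name="output", shape=[None, 1], dtype=np.float32)
w = tf.Variable(tf.random_normal([]), name='w')      # scalar weight
b = tf.Variable(tf.random_normal([]), name='bias')   # scalar bias

model_output = tf.add(tf.multiply(x_, w), b)         # broadcasts over the batch
loss = tf.reduce_mean(tf.pow(y_ - model_output, 2), name='loss')
train_step = tf.train.GradientDescentOptimizer(0.0025).minimize(loss)

summary_writer = tf.summary.FileWriter('linreg')
for value in [w, b, loss]:                           # all genuinely scalar now
    tf.summary.scalar(value.op.name, value)
summaries = tf.summary.merge_all()

n_epochs = 100
train_errors, test_errors = [], []

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in tqdm.tqdm(range(n_epochs)):
        # training step, loss and summaries in a single run
        _, train_err, summ = sess.run(
            [train_step, loss, summaries],
            feed_dict={x_: X_train.reshape(-1, 1), y_: y_train.reshape(-1, 1)})
        summary_writer.add_summary(summ, i)
        train_errors.append(train_err)
        test_errors.append(sess.run(
            loss, feed_dict={x_: X_test.reshape(-1, 1), y_: y_test.reshape(-1, 1)}))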



1 answer


x_ is a placeholder that holds your input values. It has no fixed value in the graph; its only values are the ones you feed it. Therefore, you just need to use:

    summary_writer.add_summary(sess.run(summaries, feed_dict={x_: X_train, y_: y_train}), i)


But done that way, you compute everything twice. What you should be using is:

_, train_err, summ = sess.run([train_step, loss, summaries],
                            feed_dict={x_: X_train, y_: y_train})
summary_writer.add_summary(summ, i)


This way, your training step and the summary computations happen in a single run.

EDIT

It looks like you just have shape issues left, which only show up when the tensors are actually evaluated...



  • Your placeholder x_ must be declared with shape [None, n_features] (here n_features = 1, so you could also get away with just [None]; I don't really know exactly what None does, maybe your problems come from that, maybe not...).

  • y_ must have shape [None, n_outputs], so [None, 1] here. Probably None or [None] would also work.

  • w should have shape [n_features, n_outputs], in your case [1, 1]. You cannot shape it after the batch size; that is nonsense from a machine-learning perspective (at least if you are only trying to learn sin(x) from x, and not from the rest of the batch, which wouldn't make much sense).

  • b should have shape [n_outputs], so [1] here.

Does it work if you specify all these shapes?
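
Concretely, something like this sketch (using tf.matmul rather than the element-wise tf.multiply, so the [n_features, n_outputs] weight matrix works for any n_features):

n_features, n_outputs = 1, 1

x_ = tf.placeholder(tf.float32, shape=[None, n_features], name="input")
y_ = tf.placeholder(tf.float32, shape=[None, n_outputs], name="output")
w = tf.Variable(tf.random_normal([n_features, n_outputs]), name='w')
b = tf.Variable(tf.random_normal([n_outputs]), name='bias')

# [None, n_features] x [n_features, n_outputs] -> [None, n_outputs]
model_output = tf.matmul(x_, w) + b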

EDIT 2

This is a shape problem. The answer is given here; it seems you just need to replace

tf.summary.scalar(value.op.name, value)


with

tf.summary.scalar([value.op.name], value)

