Moving the input pipeline to a TensorFlow Estimator

I'm a noob to TF, so go easy on me.

I need to train a simple CNN on a bunch of images in a directory. After a lot of searching, I cooked up this code that builds the TF input pipeline, and with it I was able to print an array of images.

    import tensorflow as tf
    from tensorflow.python.framework import ops
    from tensorflow.python.framework import dtypes

    image_list, label_list = load_dataset()

    imagesq = ops.convert_to_tensor(image_list, dtype=dtypes.string)
    labelsq = ops.convert_to_tensor(label_list, dtype=dtypes.int32)

    # Makes an input queue
    input_q = tf.train.slice_input_producer([imagesq, labelsq],
                                            shuffle=True)

    file_content = tf.read_file(input_q[0])
    train_image = tf.image.decode_png(file_content, channels=3)
    train_label = input_q[1]

    train_image.set_shape([120,120,3])

    # collect batches of images before processing
    train_image_batch, train_label_batch = tf.train.batch(
        [train_image, train_label],
        batch_size=5
        # ,num_threads=1
    )

    with tf.Session() as sess:
        # initialize the variables
        sess.run(tf.global_variables_initializer())
        # initialize the queue threads to start to shovel data
        coord = tf.train.Coordinator()
        threads = tf.train.start_queue_runners(coord=coord)
        # print("from the train set:")
        for i in range(len(image_list)):
            print(sess.run(train_image_batch))  # each call pulls one batch of 5 images
        # sess.run(train_image)
        # sess.run(train_label)
        # classifier.fit(input_fn=lambda: (train_image, train_label),
        #                steps=100,
        #                monitors=[logging_hook])

        # stop our queue threads; the session is closed automatically
        # when the `with` block exits
        coord.request_stop()
        coord.join(threads)

But looking at the MNIST example in the TF docs, I can see that they use a cnn_model_fn together with the Estimator class.

I have defined my own cnn_model_fn and would like to combine the two. Please help me figure out how to proceed. This code doesn't work:

    classifier = learn.Estimator(model_fn=cnn_model_fn, model_dir='./test_model')
    classifier.fit(input_fn=lambda: (train_image, train_label),
                   steps=100,
                   monitors=[logging_hook])

The pipeline only seems to fill up once the session is started; otherwise it is empty, and it gives the error 'Input graph and Layer graph are not the same'.

Please help me.

1 answer


I'm new to tensorflow, so take this with a grain of salt.

AFAICT, when you call any of the tf APIs that create "tensors" or "operations", they are created in the context of a Graph.

Also, I believe that when an Estimator runs, it creates a new, empty Graph for every run. It populates that Graph by running model_fn and input_fn, which are expected to call tf APIs that add "tensors" and "operations" in the context of this new Graph.

The return values from model_fn and input_fn just provide references so the parts can be connected correctly; the Graph already contains them.

However, in this example, the input operations were created before the Estimator created its Graph, so they were added to the implicit default Graph (one is created automatically, I believe). So when the Estimator creates a new Graph and populates it with the model via model_fn, the input and the model end up on two different graphs.
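You can see this mismatch directly, because every graph-mode tensor remembers which Graph it was created in. A minimal sketch (the two explicit graphs here stand in for the implicit default graph and the Estimator's own graph; the names are made up for illustration):

```python
import tensorflow as tf

# One graph stands in for the default Graph that the input ops landed on:
g1 = tf.Graph()
with g1.as_default():
    image = tf.constant(0.0, name='image')   # stands in for the queue/input ops

# The other stands in for the new Graph the Estimator builds:
g2 = tf.Graph()
with g2.as_default():
    label = tf.constant(0, name='label')     # stands in for the model_fn ops

print(image.graph is g1)           # True
print(label.graph is g2)           # True
print(image.graph is label.graph)  # False: the "two different graphs" problem
```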

To fix this, you need to change your input_fn. Don't just wrap the (image, labels) pair in a lambda; instead, wrap the entire input-pipeline construction in a function, so that when the Estimator calls input_fn, all of the input operations and tensors get created, as a side effect of those tf API calls, in the context of the correct Graph.
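As a sketch of what that could look like with the question's code (my_input_fn is a name I've made up; load_dataset, cnn_model_fn and logging_hook are the asker's own, and this assumes the same TF 1.x / tf.contrib.learn APIs the question uses):

```python
import tensorflow as tf

def my_input_fn():
    # Everything here runs only when the Estimator calls input_fn,
    # so all of these ops are created inside the Estimator's own Graph.
    image_list, label_list = load_dataset()  # the asker's helper
    imagesq = tf.convert_to_tensor(image_list, dtype=tf.string)
    labelsq = tf.convert_to_tensor(label_list, dtype=tf.int32)

    input_q = tf.train.slice_input_producer([imagesq, labelsq], shuffle=True)
    file_content = tf.read_file(input_q[0])
    train_image = tf.image.decode_png(file_content, channels=3)
    train_image.set_shape([120, 120, 3])
    train_label = input_q[1]

    return tf.train.batch([train_image, train_label], batch_size=5)

# Then pass the function itself (not its result) to the Estimator:
#
#     classifier = learn.Estimator(model_fn=cnn_model_fn, model_dir='./test_model')
#     classifier.fit(input_fn=my_input_fn, steps=100, monitors=[logging_hook])
```

The key difference from the lambda version is that nothing is built at definition time; the Estimator invokes my_input_fn after making its own Graph the default, so the queue ops land on the same graph as the model ops.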
