Benchmark Keras Model Using TensorFlow Benchmark

I am trying to measure the inference-phase performance of my Keras model (with the TensorFlow backend). I thought the TensorFlow Benchmark tool was the right way to do this.

I was able to build and run the example on desktop with tensorflow_inception_graph.pb and everything works fine.

I can't figure out how to save the Keras model as the correct .pb file. I can get the TensorFlow graph from the Keras model like this:

import keras.backend as K
K.set_learning_phase(0)

trained_model = function_that_returns_compiled_model()
sess = K.get_session()
sess.graph # This works

# Get the input tensor name for TF Benchmark
trained_model.input
> <tf.Tensor 'input_1:0' shape=(?, 360, 480, 3) dtype=float32>

# Get the output tensor name for TF Benchmark
trained_model.output
> <tf.Tensor 'reshape_2/Reshape:0' shape=(?, 360, 480, 12) dtype=float32>


I have tried saving the model in different ways, for example:

import tensorflow as tf
from tensorflow.contrib.session_bundle import exporter

model = trained_model
export_path = "path/to/folder"  # where to save the exported graph
export_version = 1  # version number (integer)

saver = tf.train.Saver(sharded=True)
model_exporter = exporter.Exporter(saver)
signature = exporter.classification_signature(input_tensor=model.input,
                                              scores_tensor=model.output)
model_exporter.init(sess.graph.as_graph_def(), default_graph_signature=signature)
model_exporter.export(export_path, tf.constant(export_version), sess)


This creates a folder with some files, but I don't know what to do with them.

Then I run the Benchmark tool with something like this:

bazel-bin/tensorflow/tools/benchmark/benchmark_model \
  --graph=tensorflow/tools/benchmark/what_file.pb \
  --input_layer="input_1:0" \
  --input_layer_shape="1,360,480,3" \
  --input_layer_type="float" \
  --output_layer="reshape_2/Reshape:0"


But no matter what file I try to use as what_file.pb, I get

Error during inference: Invalid argument: Session was not created with a graph before Run()!

+3




2 answers


So I got this to work. You just need to convert all the variables in the TensorFlow graph to constants and then save the graph definition.

Here's a small example:



import tensorflow as tf

from keras import backend as K
from tensorflow.python.framework import graph_util

K.set_learning_phase(0)
model = function_that_returns_your_keras_model()
sess = K.get_session()

output_node_name = "my_output_node" # Name of your output node

# Replace every variable in the graph with a constant holding its current value.
# (No variable initializer is run here: the Keras session already contains the
# trained weights, and re-initializing would overwrite them.)
graph_def = sess.graph.as_graph_def()
output_graph_def = graph_util.convert_variables_to_constants(
    sess,
    graph_def,
    output_node_name.split(","))

# Write the frozen graph definition as a binary .pb file.
tf.train.write_graph(output_graph_def,
                     logdir="my_dir",
                     name="my_model.pb",
                     as_text=False)
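
If you are unsure what to use as the output node name, it can be read off the Keras model itself: a node name is the tensor name without the ":0" output index, so for the model in the question this gives "reshape_2/Reshape". For example:

# Derive the output node name from the Keras model's output tensor.
# In the question, model.output.name is "reshape_2/Reshape:0", so this
# yields "reshape_2/Reshape".
output_node_name = model.output.name.split(":")[0]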


Now just run the TensorFlow Benchmark tool with my_model.pb as the graph.
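
For example, reusing the command from the question with the frozen graph produced above (my_dir/my_model.pb here; adjust the path to wherever you wrote the file):

bazel-bin/tensorflow/tools/benchmark/benchmark_model \
  --graph=my_dir/my_model.pb \
  --input_layer="input_1:0" \
  --input_layer_shape="1,360,480,3" \
  --input_layer_type="float" \
  --output_layer="reshape_2/Reshape:0"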

+2




You are saving the parameters of the model, not the graph definition. To save the graph definition, use tf.get_default_graph().as_graph_def().SerializeToString() and then write the result to a file.
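
A minimal sketch of what that looks like (the output filename graph.pb is just an example):

import tensorflow as tf

# Serialize the default graph's GraphDef and write it to a binary .pb file.
graph_def_bytes = tf.get_default_graph().as_graph_def().SerializeToString()
with open("graph.pb", "wb") as f:
    f.write(graph_def_bytes)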



That said, I don't think the benchmark tool will work here, as it has no way to initialize the variables your model depends on.

0








