Crash on Porting Inception V3 Custom Model into TensorFlow Demo Camera for Android

My target task:

In step 1, during training, the batch size is set to 1. I also added

   images = tf.identity(images, name='Inputs_layer')

to name the input tensor of the network, as suggested in the question "No operation named [input] in Graph" (a fine-tuning / retraining error with an Inception V1 model).
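
For reference, here is a minimal sketch of how such a named input can be set up for an inference graph (assuming TF 1.x; the placeholder name and shape are illustrative, only the 'Inputs_layer' name comes from the step above):

   import tensorflow as tf

   # Illustrative inference input: a placeholder wrapped in tf.identity so the
   # node keeps a stable, known name after the graph is frozen.
   images = tf.placeholder(tf.float32, shape=[1, 299, 299, 3], name='input_placeholder')
   images = tf.identity(images, name='Inputs_layer')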

Before step 3

   >> bazel-bin/tensorflow/tools/graph_transforms/summarize_graph --in_graph=frozen_graph.pb
   No inputs spotted.
   No variables spotted.
   Found 1 possible outputs: (name=tower_0/logits/predictions, op=Softmax)
   Found 21781804 (21.78M) const parameters, 0 (0) variable parameters, and 188 control_edges
   Op types used: 777 Const, 378 Mul, 284 Add, 283 Sub, 190 Identity, 188 Sum, 96 Reshape, 94 Conv2D, 94 StopGradient, 94 SquaredDifference, 94 Square, 94 Mean, 94 Rsqrt, 94 Relu, 94 Reciprocal, 15 ConcatV2, 10 AvgPool, 4 MaxPool, 1 RealDiv, 1 RandomUniform, 1 QueueDequeueManyV2, 1 Softmax, 1 Split, 1 MatMul, 1 Floor, 1 FIFOQueueV2, 1 BiasAdd
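
Since no inputs are spotted here, the node names in the frozen graph can be listed directly to see what the first nodes are actually called (a quick sketch, assuming TF 1.x and the same frozen_graph.pb as above):

   import tensorflow as tf

   graph_def = tf.GraphDef()
   with tf.gfile.GFile('frozen_graph.pb', 'rb') as f:
       graph_def.ParseFromString(f.read())

   # Print every node name and op type; whatever feeds the first Conv2D
   # (a Placeholder, an Identity, or the queue dequeue op) is the real input.
   for node in graph_def.node:
       print(node.name, node.op)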


Step 3

bazel-bin/tensorflow/python/tools/optimize_for_inference \
   --input=tensorflow/examples/android/assets/frozen_graph.pb \
   --output=tensorflow/examples/android/assets/stripped_graph.pb \
   --input_names=inputs_layer \
   --output_names=tower_0/logits/predictions
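
For comparison, roughly the same transform can also be run from Python instead of the bazel tool (a sketch, assuming TF 1.x; the input name passed in must be a node that really exists in frozen_graph.pb):

   import tensorflow as tf
   from tensorflow.python.tools import optimize_for_inference_lib

   graph_def = tf.GraphDef()
   with tf.gfile.GFile('frozen_graph.pb', 'rb') as f:
       graph_def.ParseFromString(f.read())

   # Strip training-only nodes, keeping only what lies between the given
   # input and output nodes.
   optimized = optimize_for_inference_lib.optimize_for_inference(
       graph_def,
       ['inputs_layer'],                    # input node name(s); must match the graph exactly
       ['tower_0/logits/predictions'],      # output node name(s)
       tf.float32.as_datatype_enum)

   with tf.gfile.GFile('stripped_graph.pb', 'wb') as f:
       f.write(optimized.SerializeToString())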


After step 3

   >>> bazel-bin/tensorflow/tools/graph_transforms/summarize_graph --in_graph=stripped_graph.pb
   No inputs spotted.
   No variables spotted.
   Found 1 possible outputs: (name=tower_0/logits/predictions, op=Softmax)
   Found 21781804 (21.78M) const parameters, 0 (0) variable parameters, and 188 control_edges
   Op types used: 777 Const, 378 Mul, 284 Add, 283 Sub, 188 Sum, 96 Reshape, 94 Conv2D, 94 StopGradient, 94 SquaredDifference, 94 Square, 94 Mean, 94 Rsqrt, 94 Relu, 94 Reciprocal, 15 ConcatV2, 10 AvgPool, 4 MaxPool, 1 RealDiv, 1 RandomUniform, 1 QueueDequeueManyV2, 1 Softmax, 1 Split, 1 MatMul, 1 Floor, 1 FIFOQueueV2, 1 BiasAdd
   To use with tensorflow/tools/benchmark:benchmark_model try these arguments:
   run tensorflow/tools/benchmark:benchmark_model -- --graph=stripped_graph.pb --show_flops --logtostderr --input_layer= --input_layer_type= --input_layer_shape= --output_layer=tower_0/logits/predictions


In ClassifierActivity.java:

private static final int INPUT_SIZE = 224; // 299;
private static final int IMAGE_MEAN = 117;
private static final float IMAGE_STD = 1;
private static final String INPUT_NAME = "inputs_layer";
private static final String OUTPUT_NAME = "tower_0/logits/predictions";
private static final String MODEL_FILE = "file:///android_asset/stripped_graph.pb";
private static final String LABEL_FILE = "file:///android_asset/custom_label.txt";

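For what it's worth, INPUT_NAME and OUTPUT_NAME can be sanity-checked against stripped_graph.pb on the desktop before copying it into the assets folder (a sketch, assuming TF 1.x); a wrong name fails here with a lookup error instead of crashing on the device:

   import tensorflow as tf

   graph_def = tf.GraphDef()
   with tf.gfile.GFile('stripped_graph.pb', 'rb') as f:
       graph_def.ParseFromString(f.read())

   graph = tf.Graph()
   with graph.as_default():
       tf.import_graph_def(graph_def, name='')

   # Both lookups must succeed with the same strings used in ClassifierActivity.java.
   graph.get_operation_by_name('inputs_layer')
   graph.get_operation_by_name('tower_0/logits/predictions')
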

After following the above 4 steps, the APK crashes on the Android device with this log:

   E/AndroidRuntime( 8558): FATAL EXCEPTION: inference
   E/AndroidRuntime( 8558): Process: org.tensorflow.demo, PID: 8558
   E/AndroidRuntime( 8558): java.lang.IllegalArgumentException: No Operation named [inputs_layer] in the Graph


How can I fix this?


1 answer


When you optimized the graph for inference, it had no input node. You just gave it inputs_layer, so the optimized .pb file is not recognized correctly on Android.

Nowhere does it say that your input node is named inputs_layer. Pass in the correct input node name and it should work.
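
As a sketch of that fix (assuming TF 1.x): find the real input node name in the graph, then pass that exact string to --input_names and to INPUT_NAME in ClassifierActivity.java. In the check below, REAL_INPUT_NAME is a placeholder for whatever name the graph actually contains, and the dummy shape is illustrative:

   import numpy as np
   import tensorflow as tf

   graph_def = tf.GraphDef()
   with tf.gfile.GFile('stripped_graph.pb', 'rb') as f:
       graph_def.ParseFromString(f.read())

   with tf.Graph().as_default() as graph:
       tf.import_graph_def(graph_def, name='')

   # Feed a dummy image through the graph; if this runs, the same input name
   # will also resolve inside the Android demo.
   with tf.Session(graph=graph) as sess:
       dummy = np.zeros((1, 299, 299, 3), dtype=np.float32)
       preds = sess.run('tower_0/logits/predictions:0',
                        feed_dict={'REAL_INPUT_NAME:0': dummy})
       print(preds.shape)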
