iOS / CoreML - Input type is MultiArray when a Keras model is converted to CoreML

I am trying to train a Keras model and convert it to a CoreML model, using Keras 1.2.2 with the TensorFlow backend. This is a classification task. The CoreML input shows up as MultiArray, but I need it to be Image <BGR, 32, 32> or something like a CVPixelBuffer. I tried adding image_input_names='data' as mentioned here. My input shape is also (height, width, depth), which I believe is what is required.

Please help me fix this problem. I used the cifar10 dataset and the following code ( Link ):

from keras.datasets import cifar10
from keras.models import Model
from keras.layers import Input, Convolution2D, MaxPooling2D, Dense, Dropout, Flatten
from keras.utils import np_utils
import numpy as np
import coremltools

np.random.seed(1234)

batch_size = 32     # training examples per gradient update
num_epochs = 1      # passes over the full training set

kernel_size = 3     # convolution kernels are kernel_size x kernel_size
pool_size = 2       # max-pooling windows are pool_size x pool_size
conv_depth_1 = 32   # filters in the first two conv layers
conv_depth_2 = 64   # filters in the last two conv layers
drop_prob_1 = 0.25  # dropout after each pooling layer
drop_prob_2 = 0.5   # dropout before the output layer
hidden_size = 512   # units in the fully connected hidden layer

(X_train, y_train), (X_test, y_test) = cifar10.load_data()
num_train, height, width, depth = X_train.shape  # (50000, 32, 32, 3)
num_test = X_test.shape[0]
num_classes = 10

# Scale pixel values to the [0, 1] range
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= np.max(X_train)
X_test /= np.max(X_test)

# One-hot encode the labels
y_train = np_utils.to_categorical(y_train, num_classes)
y_test = np_utils.to_categorical(y_test, num_classes)

# Functional API model: two conv/conv/pool/dropout blocks, then a dense classifier
data = Input(shape=(height, width, depth))
conv_1 = Convolution2D(conv_depth_1, (kernel_size, kernel_size), padding='same', activation='relu')(data)
conv_2 = Convolution2D(conv_depth_1, (kernel_size, kernel_size), padding='same', activation='relu')(conv_1)
pool_1 = MaxPooling2D(pool_size=(pool_size, pool_size))(conv_2)
drop_1 = Dropout(drop_prob_1)(pool_1)

conv_3 = Convolution2D(conv_depth_2, (kernel_size, kernel_size), padding='same', activation='relu')(drop_1)
conv_4 = Convolution2D(conv_depth_2, (kernel_size, kernel_size), padding='same', activation='relu')(conv_3)
pool_2 = MaxPooling2D(pool_size=(pool_size, pool_size))(conv_4)
drop_2 = Dropout(drop_prob_1)(pool_2)

flat = Flatten()(drop_2)
hidden = Dense(hidden_size, activation='relu')(flat)
drop_3 = Dropout(drop_prob_2)(hidden)
out = Dense(num_classes, activation='softmax')(drop_3)

model = Model(inputs=data, outputs=out) 

model.compile(loss='categorical_crossentropy', 
              optimizer='adam', 
              metrics=['accuracy']) 

model.fit(X_train, y_train,                
          batch_size=batch_size, epochs=num_epochs,
          verbose=1, validation_split=0.1) 
loss, accuracy = model.evaluate(X_test, y_test, verbose=1)
print ("\nTest Loss: {loss} and Test Accuracy: {acc}\n".format(loss = loss, acc = accuracy))
coreml_model = coremltools.converters.keras.convert(model, input_names='data', image_input_names='data')
coreml_model.save('my_model.mlmodel')
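
One way to confirm what the converter actually produced is to inspect the converted model's spec. This is a minimal sketch, assuming the coreml_model object created above; it prints whether each input was exported as an image or as a multiarray:

# Sketch: list the converted model's inputs and their types.
# WhichOneof('Type') returns e.g. 'imageType' or 'multiArrayType'.
spec = coreml_model.get_spec()
for inp in spec.description.input:
    print(inp.name, inp.type.WhichOneof('Type'))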

      

2 answers


The problem was my versions of tf and protobuf. I was able to fix it by installing the versions mentioned in the coremltools documentation.
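
For anyone hitting the same mismatch, here is a quick sketch for checking which versions are actually installed, to compare against the ones listed in the coremltools documentation (the required version numbers themselves are not reproduced here):

# Print installed versions to compare against the coremltools documentation.
import tensorflow, keras, coremltools
import google.protobuf
print('tensorflow :', tensorflow.__version__)
print('keras      :', keras.__version__)
print('protobuf   :', google.protobuf.__version__)
print('coremltools:', coremltools.__version__)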





I just tested this with Keras 2 and your model's input is Image<RGB,32,32>, not MultiArray. Maybe it depends on the Keras version.

If you need BGR, add is_bgr=True to the coremltools.converters.keras.convert() call.

Here is the documentation for this converter.
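
A minimal sketch of what that call might look like, assuming the same 'data' input name used in the question and a coremltools version whose Keras converter accepts the is_bgr flag:

# Sketch: request an image input with BGR channel order.
# 'data' is the input name from the question; is_bgr is the flag mentioned above.
coreml_model = coremltools.converters.keras.convert(
    model,
    input_names='data',
    image_input_names='data',
    is_bgr=True)
coreml_model.save('my_model.mlmodel')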
