Convert UIImage to MLMultiArray for Keras model
In Python, I trained an image classification model with Keras that accepts an array input of shape [224, 224, 3] and outputs a prediction (1 or 0). When I convert the saved model and load it in Xcode, it states that the input should be in MLMultiArray format.
Is there a way to convert a UIImage to MLMultiArray format? Or is there a way to change my Keras model to accept objects of type CVPixelBuffer as input?
When you convert the Caffe model to an MLModel, you need to add this line:
image_input_names = 'data'
Take my own conversion script as an example; it should look something like this:
import coremltools
coreml_model = coremltools.converters.caffe.convert(
    ('gender_net.caffemodel', 'deploy_gender.prototxt'),
    image_input_names='data',
    class_labels='genderLabel.txt')
coreml_model.save('GenderMLModel.mlmodel')
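Since the question is about a Keras model rather than Caffe, the same idea applies there: the (now deprecated) Keras converter in coremltools also accepted an image_input_names parameter. A sketch under the assumption that you have a saved .h5 model with the 224x224x3 input from the question; the file name, input name, and class labels here are placeholders:

```python
import coremltools

# image_input_names marks the 'image' input as an image, so the generated
# .mlmodel takes a CVPixelBuffer in Xcode instead of an MLMultiArray.
coreml_model = coremltools.converters.keras.convert(
    'my_classifier.h5',           # hypothetical saved Keras model
    input_names='image',
    image_input_names='image',    # the key line
    image_scale=1.0 / 255.0,      # match the preprocessing you trained with
    class_labels=['0', '1'])
coreml_model.save('Classifier.mlmodel')
```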
Then the input of your MLModel will be CVPixelBuffer instead of MLMultiArray, and converting a UIImage to a CVPixelBuffer will be an easy task.
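For that last step, here is a minimal sketch of a UIImage-to-CVPixelBuffer conversion. The function name and the 224x224 default size are my own (matching the question's input shape), not anything generated by Core ML:

```swift
import UIKit
import CoreVideo

// Draw a UIImage into a 32BGRA CVPixelBuffer sized to the model's input.
func pixelBuffer(from image: UIImage, width: Int = 224, height: Int = 224) -> CVPixelBuffer? {
    var buffer: CVPixelBuffer?
    let attrs = [kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue!,
                 kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue!] as CFDictionary
    guard CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                              kCVPixelFormatType_32BGRA, attrs, &buffer) == kCVReturnSuccess,
          let pixelBuffer = buffer else { return nil }

    CVPixelBufferLockBaseAddress(pixelBuffer, [])
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, []) }

    guard let context = CGContext(data: CVPixelBufferGetBaseAddress(pixelBuffer),
                                  width: width, height: height,
                                  bitsPerComponent: 8,
                                  bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer),
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue |
                                              CGBitmapInfo.byteOrder32Little.rawValue),
          let cgImage = image.cgImage else { return nil }

    // Scale the image to the target size while drawing into the buffer.
    context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
    return pixelBuffer
}
```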
I haven't tried this, but here is how it's done for the FOOD101 sample:
func preprocess(image: UIImage) -> MLMultiArray? {
    let size = CGSize(width: 299, height: 299)
    // resize(to:) and pixelData() are helper extensions on UIImage, not UIKit API;
    // pixelData() is assumed to return interleaved RGBA bytes.
    // Normalize each byte from [0, 255] to [-1, 1].
    guard let pixels = image.resize(to: size).pixelData()?.map({ (Double($0) / 255.0 - 0.5) * 2 }) else {
        return nil
    }
    guard let array = try? MLMultiArray(shape: [3, 299, 299], dataType: .double) else {
        return nil
    }
    // Split the interleaved RGBA stream into planar R, G, B channels
    // (offset % 4 == 3 would be alpha, which is dropped).
    let r = pixels.enumerated().filter { $0.offset % 4 == 0 }.map { $0.element }
    let g = pixels.enumerated().filter { $0.offset % 4 == 1 }.map { $0.element }
    let b = pixels.enumerated().filter { $0.offset % 4 == 2 }.map { $0.element }
    let combination = r + g + b
    for (index, element) in combination.enumerated() {
        array[index] = NSNumber(value: element)
    }
    return array
}
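Note that resize(to:) and pixelData() in the snippet above are not UIKit methods; you would need to supply them yourself. A sketch of what such extensions might look like (names chosen to match the snippet, details are my own assumptions):

```swift
import UIKit

extension UIImage {
    /// Redraw the image at the given size (1x scale, no alpha blending of the canvas).
    func resize(to size: CGSize) -> UIImage {
        UIGraphicsBeginImageContextWithOptions(size, false, 1.0)
        defer { UIGraphicsEndImageContext() }
        draw(in: CGRect(origin: .zero, size: size))
        return UIGraphicsGetImageFromCurrentImageContext() ?? self
    }

    /// Interleaved RGBA bytes (4 per pixel), or nil if there is no backing CGImage.
    func pixelData() -> [UInt8]? {
        guard let cgImage = cgImage else { return nil }
        let width = Int(size.width), height = Int(size.height)
        var data = [UInt8](repeating: 0, count: width * height * 4)
        guard let context = CGContext(data: &data,
                                      width: width, height: height,
                                      bitsPerComponent: 8,
                                      bytesPerRow: width * 4,
                                      space: CGColorSpaceCreateDeviceRGB(),
                                      bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
        else { return nil }
        context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
        return data
    }
}
```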