Unable to replicate MatConvNet CNN architecture in Keras

I have the following convolutional neural network architecture in MatConvNet, which I am training on my own data:

function net = cnn_mnist_init(varargin)
% CNN_MNIST_LENET Initialize a CNN similar to LeNet for MNIST
opts.batchNormalization = false ;
opts.networkType = 'simplenn' ;
opts = vl_argparse(opts, varargin) ;

f= 0.0125 ;
net.layers = {} ;
net.layers{end+1} = struct('name','conv1',...
                           'type', 'conv', ...
                           'weights', {{f*randn(3,3,1,64, 'single'), zeros(1, 64, 'single')}}, ...
                           'stride', 1, ...
                           'pad', 0,...
                           'learningRate', [1 2]) ;
net.layers{end+1} = struct('name','pool1',...
                           'type', 'pool', ...
                           'method', 'max', ...
                           'pool', [3 3], ...
                           'stride', 1, ...
                           'pad', 0);
net.layers{end+1} = struct('name','conv2',...
                           'type', 'conv', ...
                           'weights', {{f*randn(5,5,64,128, 'single'),zeros(1,128,'single')}}, ...
                           'stride', 1, ...
                           'pad', 0,...
                           'learningRate', [1 2]) ;
net.layers{end+1} = struct('name','pool2',...
                           'type', 'pool', ...
                           'method', 'max', ...
                           'pool', [2 2], ...
                           'stride', 2, ...
                           'pad', 0) ;
net.layers{end+1} = struct('name','conv3',...
                           'type', 'conv', ...
                           'weights', {{f*randn(3,3,128,256, 'single'),zeros(1,256,'single')}}, ...
                           'stride', 1, ...
                           'pad', 0,...
                           'learningRate', [1 2]) ;
net.layers{end+1} = struct('name','pool3',...
                           'type', 'pool', ...
                           'method', 'max', ...
                           'pool', [3 3], ...
                           'stride', 1, ...
                           'pad', 0) ;
net.layers{end+1} = struct('name','conv4',...
                           'type', 'conv', ...
                           'weights', {{f*randn(5,5,256,512, 'single'),zeros(1,512,'single')}}, ...
                           'stride', 1, ...
                           'pad', 0,...
                           'learningRate', [1 2]) ;
net.layers{end+1} = struct('name','pool4',...
                           'type', 'pool', ...
                           'method', 'max', ...
                           'pool', [2 2], ...
                           'stride', 1, ...
                           'pad', 0) ;
net.layers{end+1} = struct('name','ip1',...
                           'type', 'conv', ...
                           'weights', {{f*randn(1,1,512,256, 'single'), zeros(1,256,'single')}}, ...
                           'stride', 1, ...
                           'pad', 0,...
                           'learningRate', [1 2]) ;
net.layers{end+1} = struct('name','relu',...
                           'type', 'relu');
net.layers{end+1} = struct('name','classifier',...
                           'type', 'conv', ...
                           'weights', {{f*randn(1,1,256,2, 'single'), zeros(1,2,'single')}}, ...
                           'stride', 1, ...
                           'pad', 0,...
                           'learningRate', [1 2]) ;
net.layers{end+1} = struct('name','loss',...
                           'type', 'softmaxloss') ;

% optionally switch to batch normalization
if opts.batchNormalization
  net = insertBnorm(net, 1) ;
  net = insertBnorm(net, 4) ;
  net = insertBnorm(net, 7) ;
  net = insertBnorm(net, 10) ;
  net = insertBnorm(net, 13) ;
end

% Meta parameters
net.meta.inputSize = [28 28 1] ;
net.meta.trainOpts.learningRate = [0.01*ones(1,10) 0.001*ones(1,10) 0.0001*ones(1,10)];
net.meta.trainOpts.numEpochs = length(net.meta.trainOpts.learningRate) ;
net.meta.trainOpts.batchSize = 256 ;
net.meta.trainOpts.momentum = 0.9 ;
net.meta.trainOpts.weightDecay = 0.0005 ;

% --------------------------------------------------------------------
function net = insertBnorm(net, l)
% --------------------------------------------------------------------
assert(isfield(net.layers{l}, 'weights'));
ndim = size(net.layers{l}.weights{1}, 4);
layer = struct('type', 'bnorm', ...
               'weights', {{ones(ndim, 1, 'single'), zeros(ndim, 1, 'single')}}, ...
               'learningRate', [1 1], ...
               'weightDecay', [0 0]) ;
net.layers{l}.biases = [] ;
net.layers = horzcat(net.layers(1:l), layer, net.layers(l+1:end)) ;

What I want to do is build the same architecture in Keras. This is what I have tried so far:

import keras
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Activation, Flatten, Dense

model = Sequential()

model.add(Conv2D(64, (3, 3), strides=1, input_shape=input_shape))
model.add(MaxPooling2D(pool_size=(3, 3), strides=1))

model.add(Conv2D(128, (5, 5), strides=1))
model.add(MaxPooling2D(pool_size=(2, 2), strides=2))

model.add(Conv2D(256, (3, 3), strides=1))
model.add(MaxPooling2D(pool_size=(3, 3), strides=1))

model.add(Conv2D(512, (5, 5), strides=1))
model.add(MaxPooling2D(pool_size=(2, 2), strides=1))

model.add(Conv2D(256, (1, 1)))
convout1 = Activation('relu')
model.add(convout1)

model.add(Flatten())
model.add(Dense(num_classes, activation='softmax'))

opt = keras.optimizers.RMSprop(lr=0.0001, decay=0.0005)
model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['binary_accuracy'])

However, when I run the MatConvNet network I get 87% accuracy, and when I run the Keras version I get 77% accuracy. If they are supposed to be the same network and the data is the same, where is the difference? What's wrong with my Keras architecture?

1 answer


In your version of MatConvNet, you are using SGD with momentum.

In Keras, you are using RMSprop.

With a different update rule, you should try different learning rates. Also, momentum is sometimes helpful when training CNNs.



Could you try SGD + momentum in Keras and let me know how it goes?
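For example, here is a minimal sketch that reuses your compile call but matches the MatConvNet trainOpts (learning rate 0.01, momentum 0.9). Your MatConvNet schedule later drops the rate to 0.001 and 0.0001; in Keras you could approximate that with a LearningRateScheduler callback.

from keras.optimizers import SGD

# Match the MatConvNet trainOpts: learningRate = 0.01, momentum = 0.9.
# Note: Keras's `decay` argument is a per-update learning-rate decay,
# not an L2 weight decay like MatConvNet's weightDecay, so it is left
# at its default here.
opt = SGD(lr=0.01, momentum=0.9)
model.compile(loss='categorical_crossentropy',
              optimizer=opt,
              metrics=['binary_accuracy'])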

Another thing that might be different is the initialization. For example, in MatConvNet you use Gaussian initialization with f = 0.0125 as the standard deviation. In Keras, I'm not sure what the default initialization is.
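To rule that out, you can set the initialization explicitly. A sketch for the first layer, assuming the standard keras.initializers API and reusing your input_shape:

from keras.initializers import RandomNormal, Zeros

# Match MatConvNet's f*randn(...) weights (f = 0.0125) and zero biases.
gaussian = RandomNormal(mean=0.0, stddev=0.0125)

model.add(Conv2D(64, (3, 3), strides=1,
                 kernel_initializer=gaussian,
                 bias_initializer=Zeros(),
                 input_shape=input_shape))
# Use the same two initializers on every Conv2D and Dense layer.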

In general, if you don't use batch normalization, the network is prone to many numerical problems. If you use batch normalization in both networks, I am sure the results will be similar. Is there a reason you don't want to use batch normalization?
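In Keras that would look something like this; a sketch of the first block, mirroring your insertBnorm helper, which inserts a bnorm layer directly after each convolution:

from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, BatchNormalization

# BatchNormalization goes directly after the convolution, before the
# pooling layer, just as insertBnorm does in the MatConvNet code.
model = Sequential()
model.add(Conv2D(64, (3, 3), strides=1, input_shape=input_shape))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(3, 3), strides=1))
# Repeat the Conv2D -> BatchNormalization pattern for conv2..conv4
# and the 1x1 "ip1" layer.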
