Variable input size for a pre-trained Keras VGG model

I want to extract features from a 368x368 image with a pre-trained VGG model. According to the documentation, VGGnet accepts 224x224 images. Is there a way to give a Keras VGG model a variable input size?

Here is my code:

import numpy as np
from keras.applications.vgg19 import VGG19
from keras.models import Model

# VGG feature extraction at the default 224x224 input size
x_train = np.random.randint(0, 255, (100, 224, 224, 3))
base_model = VGG19(weights='imagenet')
modelVGG = Model(inputs=base_model.input, outputs=base_model.get_layer('block4_conv2').output)
block4_conv2_features = modelVGG.predict(x_train)

Edited code (it works!)

import numpy as np
from keras.applications.vgg19 import VGG19
from keras.models import Model

# VGG feature extraction at 368x368: include_top=False drops the
# fully connected layers, so the convolutional base accepts other sizes
x_train = np.random.randint(0, 255, (100, 368, 368, 3))
base_model = VGG19(weights='imagenet', include_top=False)
modelVGG = Model(inputs=base_model.input, outputs=base_model.get_layer('block4_conv2').output)
block4_conv2_features = modelVGG.predict(x_train)




1 answer


The input size determines the number of neurons in the fully connected (Dense) layers, so the pre-trained Dense weights only fit 224x224 inputs. The convolutional layers, by contrast, work at any input size.

Call VGG19 with include_top=False to drop the fully connected layers, then add your own on top and train them for your task.
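A minimal sketch of that approach: keep the convolutional base, flatten its output, and attach new Dense layers. The layer width (256) and class count (10) here are illustrative placeholders, not values from the question.

import numpy as np
from keras.applications.vgg19 import VGG19
from keras.models import Model
from keras.layers import Flatten, Dense

# Convolutional base only; fixes the input at 368x368
base_model = VGG19(weights='imagenet', include_top=False, input_shape=(368, 368, 3))

# New fully connected head sized for this input (illustrative: 10 classes)
x = Flatten()(base_model.output)
x = Dense(256, activation='relu')(x)
out = Dense(10, activation='softmax')(x)
model = Model(inputs=base_model.input, outputs=out)

# The new Dense weights are randomly initialized and must be trained on your data
preds = model.predict(np.random.randint(0, 255, (2, 368, 368, 3)).astype('float32'))
print(preds.shape)  # (2, 10)

Freezing the convolutional base (layer.trainable = False on base_model's layers) is a common first step when fine-tuning, since only the new head needs to learn from scratch.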
