Implementing adversarial training in Keras

I would like to implement an adversarial network with a classifier whose output is fed to an adversary, which must guess a specific feature of the classifier's inputs (a nuisance parameter) from the classifier's output (a detailed description of such an adversarial network can be found in the article "Learning to Pivot with Adversarial Networks").

The model will then be trained as follows:

  • train the adversary on a batch (with the classifier frozen)
  • freeze the adversary
  • train the entire model on that batch while the adversary is frozen
  • unfreeze the adversary
  • repeat for the remaining batches
  • shuffle and repeat for the remaining epochs

When I train the adversary, I want the loss function to be categorical cross-entropy, and when I train the whole model, I want the loss to be the classifier's loss (binary cross-entropy) minus the adversary's loss scaled by a parameter: L_c() - b * L_a()
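To make the scheme concrete, here is a rough sketch of the alternating procedure using train_on_batch (the approach I am trying to avoid). `adversarial_fit` is a hypothetical helper name; it assumes the models are already compiled as described below.

```python
import numpy as np

def adversarial_fit(adversary, final_model, X, y_clf, y_adv,
                    batch_size=32, n_epochs=1, seed=0):
    """Alternate adversary / full-model updates per batch.

    Hypothetical helper: it assumes `adversary` was compiled with the
    classifier layers frozen and `final_model` was compiled with the
    adversary layers frozen. In Keras, trainability is captured at
    compile time, so the two compiled models can then simply be
    alternated without re-freezing or recompiling on every batch.
    """
    rng = np.random.default_rng(seed)
    for _ in range(n_epochs):
        idx = rng.permutation(len(X))  # shuffle each epoch
        for start in range(0, len(X), batch_size):
            b = idx[start:start + batch_size]
            # adversary step (classifier weights effectively frozen)
            adversary.train_on_batch(X[b], y_adv[b])
            # full-model step (adversary weights effectively frozen);
            # with loss_weights=[1.0, -b] this minimizes L_c - b * L_a
            final_model.train_on_batch(X[b], [y_clf[b], y_adv[b]])
```

This is exactly the loop I would like to replace with a single model.fit call.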


Most of the adversarial training code I've seen for Keras uses train_on_batch. However, since I already have a lot of code set up around another Sequential model that I would like to reuse, I was wondering if there is a way to implement this model and train it using model.fit in Keras.

What I was going to do was set up the model using the Keras functional API, with the classifier's inputs as the inputs and both the classifier's and the adversary's outputs as the outputs. I would also compile a separate adversary model with only the adversary output. For example:

from keras.layers import Input, Dense
from keras.models import Model

classifier_input = Input(shape=(10,))
x = Dense(50, activation='tanh')(classifier_input)
x = Dense(25, activation='tanh')(x)
x = Dense(5, activation='tanh')(x)
classifier_output = Dense(1, activation='sigmoid')(x)

x = Dense(30, activation='tanh')(classifier_output)
x = Dense(15, activation='tanh')(x)
x = Dense(5, activation='tanh')(x)
# softmax to match the categorical cross-entropy loss below
adversary_output = Dense(3, activation='softmax')(x)

adversary = Model(inputs=classifier_input, outputs=adversary_output)
adversary.compile(optimizer='Adamax', loss='categorical_crossentropy', metrics=['accuracy'])

final_model = Model(inputs=classifier_input, outputs=[classifier_output, adversary_output])
final_model.compile(optimizer='Adamax',
                    loss=['binary_crossentropy', 'categorical_crossentropy'],
                    loss_weights=[1.0, -0.1], metrics=['accuracy'])

I then want to set up a callback that trains the adversary in on_batch_begin (after freezing the classifier layers), while model.fit trains final_model (in on_batch_begin I would freeze the adversary and unfreeze the classifier layers before the final_model update).
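Roughly what I had in mind, as a sketch (`AdversaryTrainer` is a hypothetical name; the try/except fallback just keeps the snippet importable when Keras is not installed):

```python
try:
    from keras.callbacks import Callback
except ImportError:            # minimal stand-in so the sketch runs without Keras
    class Callback:
        pass

class AdversaryTrainer(Callback):
    """Train the adversary on the current batch before the main update.

    Keras only passes the integer batch *index* (plus a logs dict) to
    on_batch_begin -- not the batch data itself -- so the callback has to
    hold its own references to X / y_adv and re-slice them.
    """

    def __init__(self, adversary, X, y_adv, batch_size):
        super().__init__()
        self.adversary = adversary
        self.X = X
        self.y_adv = y_adv
        self.batch_size = batch_size

    def on_batch_begin(self, batch, logs=None):
        start = batch * self.batch_size  # `batch` is the index within the epoch
        sl = slice(start, start + self.batch_size)
        self.adversary.train_on_batch(self.X[sl], self.y_adv[sl])
```

One caveat I can already see: model.fit shuffles the data by default, so these slices would only line up with the batches fit actually uses if shuffle=False is passed.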

But I don't know whether the current batch can be passed as an argument to on_batch_begin. Should I set up my own batches inside the callback, or can the batch that model.fit is using be passed in?

Is there a better way to do adversarial preparation while still using model.fit?
