Force Keras model.fit() to use multiprocessing

I am using Keras with the Theano backend and I want to train my network on the GPU. That part works really well. But when I train with a huge amount of data, I noticed a bottleneck in the function model.fit() (I am using the functional API).

Inside model.fit(), Keras eventually starts training on the GPU. But before the GPU work begins, a lot of CPU work is done to prepare the training (I don't know exactly what fit() does before the actual training starts). The problem is that this preparation only uses one thread, so it takes quite a long time.

Is it possible to force Keras to use multiprocessing at this point?
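
For context, the only multiprocessing-related switches I know of in Keras are on the generator API (keras.utils.Sequence plus fit_generator with workers / use_multiprocessing). A minimal sketch of what I mean, assuming a recent Keras 2 (this is not my actual code):

import numpy as np
from keras.utils import Sequence

class BatchSequence(Sequence):
    # Hypothetical wrapper: each batch is assembled in a worker process.
    def __init__(self, x, y, batch_size=60):
        self.x, self.y, self.batch_size = x, y, batch_size

    def __len__(self):
        return int(np.ceil(len(self.y) / float(self.batch_size)))

    def __getitem__(self, idx):
        batch = slice(idx * self.batch_size, (idx + 1) * self.batch_size)
        # x is a list of arrays because the model has several inputs.
        return [xi[batch] for xi in self.x], self.y[batch]

model.fit_generator(
    BatchSequence(train_instances.x, train_instances.y),
    epochs=50,
    workers=4,                  # CPU processes preparing batches in parallel
    use_multiprocessing=True,   # processes instead of threads
    validation_data=BatchSequence(valid_instances.x, valid_instances.y),
)

As far as I understand, though, this only parallelizes batch preparation, not whatever fit() itself does before training starts, which is the part I would like to speed up.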

Edit: added more details about the function call.

My function call looks like this:

from os.path import join

from keras.optimizers import SGD
from keras.callbacks import EarlyStopping, ModelCheckpoint

optimizer = SGD(lr=0.00001)

# Stop when val_loss has not improved for 30 epochs; keep only the best model on disk.
early_stopping = EarlyStopping(monitor='val_loss', patience=30, verbose=1, mode='auto')
outname = join(outdir, save_base_name + ".model")
checkpoint = ModelCheckpoint(outname, monitor='val_loss', verbose=1, save_best_only=True)

model.compile(loss='hinge', optimizer=optimizer, metrics=['accuracy'])
model.fit(
    train_instances.x,
    train_instances.y,
    batch_size=60,
    epochs=50,
    verbose=1,
    callbacks=[checkpoint, early_stopping],
    validation_data=(valid_instances.x, valid_instances.y),
    shuffle=True
)


The model I'm using (you can find the implementation here: https://github.com/pexmar/DSCNN_document ) has 90 inputs (shared layers), each of size 100 x 300 (a word2vec embedding layer: 100 words, each with 300 dimensions). I feed 12500 training instances and 1000 validation instances into the network.
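
Roughly, the data I pass in has the following layout (a sketch with an illustrative vocabulary size; I assume the embedding layer inside the model maps integer word indices to the 300-dimensional word2vec vectors):

import numpy as np

num_train, num_valid, seq_len, num_inputs = 12500, 1000, 100, 90
vocab_size = 10000  # illustrative value only

# One integer word-index array of shape (num_samples, 100) per input branch;
# the embedding layer turns each index into a 300-dimensional vector.
train_x = [np.random.randint(0, vocab_size, size=(num_train, seq_len)) for _ in range(num_inputs)]
valid_x = [np.random.randint(0, vocab_size, size=(num_valid, seq_len)) for _ in range(num_inputs)]

# Labels in {-1, +1}, matching the hinge loss.
train_y = np.random.choice([-1, 1], size=(num_train, 1))
valid_y = np.random.choice([-1, 1], size=(num_valid, 1))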
