How to implement regularization in pybrain
PyBrain has no built-in L1 / L2 regularization, but overfitting can also be prevented by stopping training early.
From the trainer documentation,
trainUntilConvergence(dataset=None, maxEpochs=None, verbose=None, continueEpochs=10, validationProportion=0.25)
Tune the module on the dataset until it converges.
Return the module with the parameters that gave the minimum validation error.
If no dataset is given, the dataset passed during trainer initialization is used. validationProportion is the ratio of the dataset that is set aside as the validation set.
If maxEpochs is given, at most that many epochs are trained. Each time the validation error hits a new minimum, training continues for continueEpochs more epochs to try to find a better one.
With the default parameters you already get a 75:25 split into a training set and a validation set, and the validation set is used for EARLY STOPPING.
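The early-stopping logic behind trainUntilConvergence can be sketched in plain Python, without PyBrain itself. Here train_epoch and val_error are hypothetical callbacks standing in for a real trainer; only the stopping rule is modeled:

```python
def train_until_convergence(train_epoch, val_error,
                            max_epochs=None, continue_epochs=10):
    """Sketch of PyBrain-style early stopping (not the real implementation).

    train_epoch(): runs one training epoch (hypothetical callback).
    val_error():   returns the current validation error (hypothetical callback).
    Returns (best_epoch, best_err): the epoch with the lowest validation error.
    """
    best_err = float("inf")
    best_epoch = 0
    epoch = 0
    while max_epochs is None or epoch < max_epochs:
        train_epoch()
        err = val_error()
        epoch += 1
        if err < best_err:
            # new minimum: remember it and keep going
            best_err, best_epoch = err, epoch
        elif epoch - best_epoch >= continue_epochs:
            # no improvement for continue_epochs epochs: stop early
            break
    return best_epoch, best_err
```

For example, if the simulated validation errors are 5.0, 3.0, 2.0, 2.5, 2.6, 2.7 and continue_epochs is 3, training stops after epoch 6 and the parameters from epoch 3 (error 2.0) would be kept.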
Regularization means changing the cost function. The customization options PyBrain exposes do affect the cost function indirectly - for example, choosing whether a layer is linear or sigmoid - but the cost function itself cannot be customized directly.
However, elsewhere on StackOverflow, someone claims that L2 regularization is possible with the weightdecay parameter of BackpropTrainer. (The L2 norm sums the squares of the weights, while the L1 norm sums their absolute values.)
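What a weightdecay-style L2 penalty does to the cost can be shown in a few lines. This is a generic sketch of the math, not PyBrain's internal code; the mse value and weights below are made-up numbers:

```python
def l2_regularized_cost(mse, weights, weightdecay):
    # total cost = data error + weightdecay * sum of squared weights (L2 penalty)
    return mse + weightdecay * sum(w * w for w in weights)

def l1_regularized_cost(mse, weights, strength):
    # the L1 penalty sums absolute values of the weights instead of squares
    return mse + strength * sum(abs(w) for w in weights)
```

For instance, with mse = 1.0, weights [1.0, -2.0], and weightdecay = 0.5, the L2-regularized cost is 1.0 + 0.5 * (1 + 4) = 3.5, while the L1 version gives 1.0 + 0.5 * (1 + 2) = 2.5. Larger weights are penalized more steeply under L2, which is what pushes weight decay toward small weights.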