Avoid overflow with the softplus function in Python
Since for x > 30 we have log(1 + exp(x)) ≈ log(exp(x)) = x, a simple stable implementation is:
import numpy as np

def safe_softplus(x, limit=30):
    # For large x, log(1 + exp(x)) is numerically indistinguishable from x,
    # so return x directly and never call exp() on a large argument.
    if x > limit:
        return x
    return np.log(1.0 + np.exp(x))
In fact, |log(1 + exp(30)) - 30| < 1e-10, so this implementation has an error of less than 1e-10 and never overflows. In particular, at x = 1000 the error of this approximation is far smaller than float64 resolution, so it is impossible to even measure it on a computer.
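For array inputs, the branch above does not vectorize directly, but NumPy's standard `np.logaddexp` computes log(exp(a) + exp(b)) without overflow, and softplus(x) = log(exp(0) + exp(x)). A minimal vectorized sketch (the name `softplus_vec` is chosen here for illustration):

```python
import numpy as np

def softplus_vec(x):
    # log(1 + exp(x)) == log(exp(0) + exp(x)); np.logaddexp evaluates
    # this stably, so large x never overflows exp().
    return np.logaddexp(0.0, x)

x = np.array([-1000.0, 0.0, 30.0, 1000.0])
y = softplus_vec(x)
# At x = 1000 the float64 result is exactly 1000.0; at x = 0 it is log(2).
```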
You can try mpmath, which was written for arbitrary-precision floating-point arithmetic:
>>> import mpmath
>>> mpmath.exp(5000)
mpf('2.9676283840236669e+2171')
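A softplus built on top of mpmath might look like the sketch below (assuming mpmath is installed; `softplus_mp` is a name chosen here, not part of the library):

```python
import mpmath

def softplus_mp(x):
    # mpmath works at arbitrary precision, so exp() never overflows;
    # it just returns an mpf with a very large exponent.
    return mpmath.log(1 + mpmath.exp(x))

# At default precision, softplus_mp(5000) is indistinguishable from 5000.
```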
However, I'm not sure how well this will scale in terms of performance when used for machine learning, or whether it will even work with your machine learning system, since, as the example shows, it wraps results in its own number types. If you are using a machine learning framework, it may have a corresponding softplus built in. For example, TensorFlow has one here: https://www.tensorflow.org/api_docs/python/tf/nn/softplus