Scipy.optimize.minimize returning invalid results

In Python, I have a function error_p that calculates the mean squared error between a set of observed probabilities (or rather, normalized frequencies) and the Poisson distribution for a given mean.

from scipy.stats import poisson
import numpy as np
from scipy.optimize import minimize

def error_p(mu):
    """ 
    Returns mean squared error between observed data points and Poisson distribution 
    """
    data = np.array([17.0,32,20,19,6,5,7,5,0,1,3,1,1,0,0,0,0,1,0,0])
    data = data / sum(data)
    x = range(len(data))
    theory = [poisson.pmf(x, mu) for x in x]
    error = np.mean((data - theory)**2)
    return error


Plotting this function (error_p) over a range of mu values gives:

[plot of error_p versus mu]

Clearly the minimum is at an input (mu) value just below 2. However, when I call scipy.optimize.minimize like this:

results = minimize(error_p, 2, tol=0.00001)
results['x'], results['fun']


I get

(array([ 13.86128699]), 0.007078183160196419)


indicating a minimum at mu ≈ 13.86 with a function value of ~0.007, whereas if I run

error_p(2)


I get

0.000848142902892


Why doesn't scipy.optimize.minimize find the true minimum?

+3




2 answers


If you use the function scipy.optimize.minimize_scalar instead, you get the expected result:

from scipy.optimize import minimize_scalar

results = minimize_scalar(error_p, tol=0.00001)
print results['x'], results['fun']
>>> 1.88536329298 0.000820148069544


Why doesn't scipy.optimize.minimize work? I suspect your function error_p is malformed from a numpy perspective. Try the following:



MU = np.linspace(0,20,100)
error_p(MU)


and you will see that it fails. Your function is not set up to take an array of inputs and return an array of outputs, and I think this is what throws the minimizer off.
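To make that concrete: minimize hands its argument to error_p as a 1-element array rather than a plain float, so inside the function the list comprehension builds one column per element of mu and the subtraction broadcasts into a matrix. Here is a minimal sketch of the shape problem (the data values are just a stand-in, since only the shapes matter here):

import numpy as np
from scipy.stats import poisson

mu = np.array([2.0])                  # what minimize actually passes to error_p
x = range(20)
theory = np.array([poisson.pmf(xi, mu) for xi in x])
print(theory.shape)                   # (20, 1): one column per element of mu

data = np.ones(20) / 20.0             # stand-in for the normalized frequencies
print((data - theory).shape)          # (20, 20): broadcasting produces a matrix, so the
                                      # mean squared "error" is no longer error_p(2.0)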

+5




Change

theory = [poisson.pmf(x, mu) for x in x]

to

theory = poisson.pmf(x, mu)

and it works as expected.
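For reference, here is a sketch of the full function with that one-line change applied, together with the minimize call from the question (data and tolerance copied verbatim); the result should land close to the 1.885... that minimize_scalar reports above:

from scipy.stats import poisson
from scipy.optimize import minimize
import numpy as np

def error_p(mu):
    """Mean squared error between observed frequencies and a Poisson pmf."""
    data = np.array([17.0, 32, 20, 19, 6, 5, 7, 5, 0, 1, 3, 1, 1, 0, 0, 0, 0, 1, 0, 0])
    data = data / sum(data)
    x = np.arange(len(data))
    theory = poisson.pmf(x, mu)       # vectorized: handles the array mu that minimize passes in
    return np.mean((data - theory) ** 2)

results = minimize(error_p, 2, tol=0.00001)
print(results['x'], results['fun'])   # should be close to the minimize_scalar result above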

+5








