How to compute standard errors of a parameter obtained by minimization in Python

I estimated the parameter atheta with scipy.optimize.minimize. My procedure is equivalent to maximum likelihood estimation, and I want to compute the standard error of this parameter the way statistical software does.
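From what I understand, most statistical packages get such standard errors by inverting the Hessian of the negative log-likelihood (the observed information matrix) at the optimum. Below is a minimal sketch of what I think that would look like here, assuming f, ns and vars are the objective and data defined in the code further down; numdifftools is an extra package I would have to install (any numerical Hessian would do), and since Nelder-Mead does not return a Hessian one either switches to BFGS or differentiates after the fit:

import numpy as np
import numdifftools as nd          # assumed installed; only needed for option 2
from scipy import optimize

# Option 1: BFGS maintains an approximation of the inverse Hessian.
res = optimize.minimize(f, [-0.1], method='BFGS', args=(ns, vars))
se_bfgs = np.sqrt(np.diag(res.hess_inv))        # rough standard errors

# Option 2: numerical Hessian of the negative log-likelihood at the optimum.
hess = nd.Hessian(lambda y: f(y, ns, vars))(res.x)
cov = np.linalg.inv(hess)                       # inverse observed information
se = np.sqrt(np.diag(cov))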

I found the scikits.bootstrap package, but it seems to compute confidence intervals only for built-in statistics, not for a custom estimator like mine.
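One workaround I am considering is a hand-rolled bootstrap: resample the rows with replacement, re-run the same minimisation on each resample, and take the spread of the resulting estimates. A minimal sketch, again assuming f, ns and vars are as in the code below (the 500 replicates and the seed are arbitrary, and with 25,000 rows each Nelder-Mead refit may be slow):

import numpy as np
from scipy import optimize

def bootstrap_se(f, ns, vars, n_boot=500, seed=0):
    """Nonparametric bootstrap of atheta: resample rows, refit, return the spread."""
    rng = np.random.default_rng(seed)
    n_rows = vars.shape[0]
    estimates = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n_rows, size=n_rows)   # rows drawn with replacement
        res = optimize.minimize(f, [-0.1], method='Nelder-Mead',
                                args=(ns[idx], vars[idx]))
        estimates[b] = res.x[0]
    return estimates.std(ddof=1)                     # bootstrap standard error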

How can I go about calculating standard errors?

Here is my code:

from __future__ import division
import numpy as np
import pandas 
import scipy
from scipy import optimize

# import data 
dir = 
data = 

# define the negative log-likelihood to minimise
def f(y, ns, vars):
    atheta = y[0]
    # per-observation terms; NaN entries in vars propagate and are ignored by nansum
    tosum = 1 / (np.exp(atheta) - np.exp(-atheta * vars))
    total = np.nansum(tosum, axis=1)
    first = tosum[:, 0]               # term for the first R column
    lnp1 = np.log(first / total)      # log-probability of each observation
    return -np.sum(lnp1)              # negative log-likelihood

# minimisation of the negative log-likelihood; returns the fit result containing atheta
def main():
    print('*' * 80)
    print('new run')
    print('*' * 80)

    # data
    ns = data['n'].values.astype(int)
    vars = data.loc[:, ('R1', 'R2', 'R3', 'R4', 'R5', 'R6')].values
    ns = np.asarray(ns, dtype=int)       # np.int / np.float are deprecated aliases
    vars = np.asarray(vars, dtype=float)

    x0 = [-0.1]   
    result = scipy.optimize.minimize(f, x0, method = 'Nelder-Mead',
                                        args = (ns, vars))

    return result

if __name__ == "__main__":
    print('main result =', main())

This is what the data looks like:

R1   R2    R3   R4    R5   R6    n
1    30.3  4.1  10.2  2.5  10.8  6
0.9  10.4  4.1  6.3   3.3  NaN   5

This is just a sample; the real data has 25,000 rows and up to 24 R variables (R1 through R24).
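For completeness, the two sample rows above can be rebuilt as a DataFrame so the code runs without my real file (purely illustrative):

import numpy as np
import pandas

# illustrative reconstruction of the two sample rows shown above
data = pandas.DataFrame({
    'R1': [1.0, 0.9],
    'R2': [30.3, 10.4],
    'R3': [4.1, 4.1],
    'R4': [10.2, 6.3],
    'R5': [2.5, 3.3],
    'R6': [10.8, np.nan],
    'n':  [6, 5],
})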
