Finding roots with scipy.optimize.root

I am trying to find a root y of a function f using Python.

Here is my code:

import numpy as np
import scipy.optimize

def f(y):
    w, p1, p2, p3, p4, p5, p6, p7 = y[:8]
    t1 = w - 0.500371726*(p1**0.92894164) - (-0.998515304)*((1-p1)**1.1376649)
    t2 = w - 8.095873128*(p2**0.92894164) - (-0.998515304)*((1-p2)**1.1376649)
    t3 = w - 220.2054377*(p3**0.92894164) - (-0.998515304)*((1-p3)**1.1376649)
    t4 = w - 12.52760758*(p4**0.92894164) - (-0.998515304)*((1-p4)**1.1376649)
    t5 = w - 8.710859537*(p5**0.92894164) - (-0.998515304)*((1-p5)**1.1376649)
    t6 = w - 36.66350261*(p6**0.92894164) - (-0.998515304)*((1-p6)**1.1376649)
    t7 = w - 3.922692207*(p7**0.92894164) - (-0.998515304)*((1-p7)**1.1376649)
    t8 = p1 + p2 + p3 + p4 + p5 + p6 + p7 - 1
    return [t1, t2, t3, t4, t5, t6, t7, t8]


x0 = np.array([-0.01, 0.3, 0.1, 0.2, 0.1, 0.1, 0.1, 0.1])
sol = scipy.optimize.root(f, x0, method='lm')
print(sol)
print('solution', sol.x)
print('success', sol.success)

Python does not find the root regardless of the method I try to use in scipy.optimize.root.

However, a root does exist; I found it with Matlab's fsolve function:

[-0.0622, 0.5855, 0.087, 0.0028, 0.0568, 0.0811, 0.0188, 0.1667]

When I set x0 close to this root, the Python algorithm converges. The problem is that I don't know the root a priori, so I can't use it to choose x0. And I actually have many equations of this type to solve.

I really want to use Python. Can anyone help me get this to converge?


1 answer


OK, after some fiddling, let's focus on another aspect of good optimization / root-finding practice. In the comments above we went back and forth about which method in scipy.optimize.root() to use. An equally important ingredient for nearly bulletproof "automatic" root finding is seeding the solver with good initial guesses. Good initial guesses are often not actually close to the true answer; rather, they need to be arranged so that they naturally lead the solver in the right direction.

In your particular case, your guesses were essentially sending the algorithm off in strange directions.

My toy reconstruction of your problem:

import numpy as np
import scipy as sp
import scipy.optimize

def f(y):
    w, p1, p2, p3, p4, p5, p6, p7 = y[:8]
    def t(p, w, a):
        b = -0.998515304
        e1 = 0.92894164
        e2 = 1.1376649
        return w - a*p**e1 - b*(1-p)**e2
    t1 = t(p1, w, 1.0)
    t2 = t(p2, w, 4.0)
    t3 = t(p3, w, 16.0)
    t4 = t(p4, w, 64.0)
    t5 = t(p5, w, 256.0)
    t6 = t(p6, w, 512.0)
    t7 = t(p7, w, 1024.0)
    t8 = p1 + p2 + p3 + p4 + p5 + p6 + p7 - 1.0
    return (t1, t2, t3, t4, t5, t6, t7, t8)

guess = 0.0001
x0 = np.array([-1000.0, guess, guess, guess, guess, guess, guess, guess])
sol = sp.optimize.root(f, x0, method='lm')
print('w=-1000: ', sol.x, sol.success, sol.nfev, np.sum(f(sol.x)))




Note that I did not use your specific prefactors (I wanted to widen the range I explored), though I kept all of your exponents on the p terms.

The real secret lies in the initial guess, which I made the same (and small) for all the p terms. A guess of about 0.1 or larger bombed most of the time, because some terms want to go one way while others want to go another. Reducing it to 0.01 worked well for this problem. (Note that the w term is very forgiving: anywhere from -1000 to +1000 had no effect on the solution.) Reducing the initial guess further doesn't change anything for this particular problem, but it doesn't hurt either. I would keep it very small.
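To make that sensitivity concrete, here is a quick sketch that sweeps the uniform initial guess over a few magnitudes on the same toy system (rewritten in vectorized form; the particular guess values are just illustrative):

```python
import numpy as np
import scipy.optimize

def f(y):
    # Toy system: same exponents as the original problem,
    # prefactors spread over a wide range (powers of 4, then 512 and 1024).
    w, *p = y
    a = np.array([1.0, 4.0, 16.0, 64.0, 256.0, 512.0, 1024.0])
    b, e1, e2 = -0.998515304, 0.92894164, 1.1376649
    p = np.asarray(p)
    resid = w - a*p**e1 - b*(1-p)**e2
    # Last equation: the p terms must sum to 1.
    return np.append(resid, p.sum() - 1.0)

for guess in (0.5, 0.1, 0.01, 0.0001):
    x0 = np.array([-1000.0] + [guess]*7)
    sol = scipy.optimize.root(f, x0, method='lm')
    err = np.max(np.abs(f(sol.x)))
    print(f'guess={guess}: success={sol.success}, max|f|={err:.2e}')
```

Running the sweep shows directly how the quality of the result depends on the size of the uniform starting guess.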

Yes, you know that at least some of the p terms will end up much larger. But this way you put the solver in a position where it can move clearly and directly toward the real solution.

Good luck.
