Why doesn't scipy.optimize.root converge in Python when MATLAB's fsolve does?

I am trying to find the root y of a function f using Python.

Here is my code:

import numpy as np
import scipy.optimize

def f(y):
    # Unpack the seven unknowns: w and p1..p6
    w, p1, p2, p3, p4, p5, p6 = y[:7]
    t1 = w - 0.99006633*(p1**0.5) - (-1.010067)*((1-p1))
    t2 = w - 22.7235687*(p2**0.5) - (-1.010067)*((1-p2))
    t3 = w - 9.71323491*(p3**0.5) - (-1.010067)*((1-p3))
    t4 = w - 2.43852877*(p4**0.5) - (-1.010067)*((1-p4))
    t5 = w - 3.93640207*(p5**0.5) - (-1.010067)*((1-p5))
    t6 = w - 9.22688144*(p6**0.5) - (-1.010067)*((1-p6))
    t7 = p1 + p2 + p3 + p4 + p5 + p6 - 1  # the p's must sum to 1
    return [t1, t2, t3, t4, t5, t6, t7]


x0 = np.array([-0.01, 0.4, 0.1, 0.2, 0.1, 0.1, 0.1])  # initial guess
sol = scipy.optimize.root(f, x0)
print(sol)


Python does not find a root. However, there is one: I found it with MATLAB's fsolve function.

It is:

[0.3901, 0.6166, 0.0038, 0.0202, 0.2295, 0.1076, 0.0223]

I really want to use Python. Can anyone explain why scipy.optimize.root in Python doesn't converge, while fsolve in MATLAB does?

For reference, scipy.optimize.fsolve does not converge either.
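
A minimal sketch of how to check that, using the f and x0 defined above; with full_output=True, fsolve reports convergence through its ier flag, and the exact message depends on the SciPy version:

x, infodict, ier, mesg = scipy.optimize.fsolve(f, x0, full_output=True)
print(ier == 1)  # True only if fsolve considers itself converged
print(mesg)      # fsolve's explanation of the outcome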



1 answer


Try another method. For me, method="lm" (Levenberg-Marquardt) works very well:

import numpy as np
import scipy.optimize

def f(y):
    w,p1,p2,p3,p4,p5,p6 = y[:7]
    t1 = w - 0.99006633*(p1**0.5) - (-1.010067)*((1-p1))
    t2 = w - 22.7235687*(p2**0.5) - (-1.010067)*((1-p2))
    t3 = w - 9.71323491*(p3**0.5) - (-1.010067)*((1-p3))
    t4 = w - 2.43852877*(p4**0.5) - (-1.010067)*((1-p4))
    t5 = w - 3.93640207*(p5**0.5) - (-1.010067)*((1-p5))
    t6 = w - 9.22688144*(p6**0.5) - (-1.010067)*((1-p6))
    t7 = p1 + p2 + p3 + p4 + p5 + p6 - 1
    return [t1,t2,t3,t4,t5,t6,t7]


x0 = np.array([-0.01,0.4,0.1,0.2,0.1,0.1,0.1])
sol = scipy.optimize.root(f, x0, method='lm')

assert sol['success']
print('Solution:', sol.x)
print('Misfit:', f(sol.x))


This gives:

Solution: [ 0.39012036  0.61656436  0.00377616  0.02017937  0.22954825 
            0.10763827  0.02229359]
Misfit: [0.0, 0.0, 1.1102230246251565e-16, -1.1102230246251565e-16,   
         1.1102230246251565e-16, 0.0, -2.2204460492503131e-16]


I'm actually a little surprised that Levenberg-Marquardt is not the default (scipy.optimize.root defaults to method='hybr'). It interpolates between gradient-descent and Gauss-Newton steps, and is usually one of the first solvers worth trying.
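
If you want to see how the other solvers behave on this system, a quick sketch like the one below (using the f and x0 defined above) loops over a few of the method names that scipy.optimize.root accepts and prints whether each one converged; availability and behaviour can vary with the SciPy version:

for method in ['hybr', 'lm', 'broyden1', 'krylov', 'df-sane']:
    try:
        sol = scipy.optimize.root(f, x0, method=method)
    except Exception as exc:  # some methods may fail outright on this system
        print(method, 'raised:', exc)
        continue
    print(method, 'converged:', sol.success,
          'residual norm:', np.linalg.norm(f(sol.x)))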
