Binomial expansion with fractional powers in Python

Is there a quick way to expand and solve binomials raised to fractional powers in SciPy / NumPy?

For example, I want to solve the following equation

y * (1 + x) ^ 4.8 = x ^ 4.5

where y is known (e.g. 1.03).

This requires the binomial decomposition (1 + x) ^ 4.8.

I want to do this for millions of y values, so I need a fast way to solve this problem.

I've tried sympy's expand (and simplify), but it doesn't seem to like the fractional exponent. I'm also struggling with scipy's fsolve module.

Any pointers in the right direction would be helpful.

EDIT:

In the end, the simplest solution was to create a lookup table ( https://en.wikipedia.org/wiki/Lookup_table ) over a range of assumed x values (with the known y). This makes it possible to quickly interpolate the "true" x values.

y_true = np.linspace(7, 12, int(1e6))  # num must be an integer
x = np.linspace(10, 15, int(1e6))

a = 4.5
b = 4.8
y = x**(a + b) / (1 + x)**b

x_true = np.interp(y_true, y, x)
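One caveat with this approach (my own note, not part of the original edit): np.interp assumes its second argument is monotonically increasing, which does hold on this x range and can be verified cheaply before interpolating:

```python
import numpy as np

a, b = 4.5, 4.8
x = np.linspace(10, 15, int(1e6))
y = x**(a + b) / (1 + x)**b

# np.interp silently returns wrong values if y is not increasing,
# so check monotonicity once before building the lookup table
assert np.all(np.diff(y) > 0)
```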

2 answers


EDIT: Comparing the output to the Wolfram Alpha result for y = 1.03, it looks like fsolve won't return complex roots. fooobar.com/questions/939168/... is a similar question that might be helpful.

Rearrange the equation as y = x^4.5 / (1+x)^4.8. scipy.optimize.fsolve() requires a function as its first argument.

Either:



from scipy.optimize import fsolve

def theFunction(x, y):
    # zero when x**4.5 / (1 + x)**4.8 equals y
    return x**4.5 / (1 + x)**4.8 - y

for y in millions_of_values:
    fsolve(theFunction, x0=1.0, args=(y,))


Or using a lambda (anonymous function):

from scipy.optimize import fsolve

for y in millions_of_values:
    fsolve(lambda x: x**4.5 / (1 + x)**4.8 - y, x0=1.0)
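Equivalently, `functools.partial` can bind the known y instead of a lambda — a minimal sketch (the value y = 0.2 and the starting guess x0 = 1 are arbitrary choices of mine):

```python
from functools import partial
from scipy.optimize import fsolve

def f(x, y):
    # zero when x**4.5 / (1 + x)**4.8 equals y
    return x**4.5 / (1 + x)**4.8 - y

# solve for one example y value; fsolve returns an array of roots
root = fsolve(partial(f, y=0.2), x0=1.0)
```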



I don't think you need a binomial expansion. Horner's method for evaluating polynomials suggests that it is better to work with the factored form of a polynomial than with its expanded form.

In general, solving nonlinear equations can benefit from symbolic differentiation, which is not that hard to do by hand for your equation. Providing an analytical expression for the derivative saves the solver from having to estimate the derivatives numerically. You can write two functions, one that returns the value of the function and another that returns its derivative (i.e. the Jacobian of this simple 1-D function), as described in the docs for scipy.optimize.fsolve(). Some code that uses this approach:

import numpy as np
from scipy.optimize import fsolve

def the_function(x, y):
    return y * (1 + x)**4.8 / x**4.5 - 1

def the_derivative(x, y):
    # quotient rule for h/l with h = y*(1+x)**4.8 and l = x**4.5
    l_dh = x**4.5 * (4.8 * y * (1 + x)**3.8)
    h_dl = y * (1 + x)**4.8 * 4.5 * x**3.5
    square_of_whats_below = x**9
    return (l_dh - h_dl) / square_of_whats_below

print(fsolve(the_function, x0=1, args=(0.1,)))
print(fsolve(the_function, x0=1, args=(0.1,), fprime=the_derivative))

%timeit fsolve(the_function, x0=1, args=(0.1,))
%timeit fsolve(the_function, x0=1, args=(0.1,), fprime=the_derivative)


... gives me this result:

[ 1.79308495]



[ 1.79308495]
10000 loops, best of 3: 105 µs per loop
10000 loops, best of 3: 136 µs per loop


which shows that analytical differentiation did not produce a speedup in this particular case. My guess is that the numerical approximation of the derivative involves simpler operations such as multiplication, squaring, and/or addition, instead of functions like fractional exponentiation.
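As a sanity check on the hand-derived Jacobian above (my own addition, with an arbitrary test point), a central finite difference agrees with it:

```python
def the_function(x, y):
    return y * (1 + x)**4.8 / x**4.5 - 1

def the_derivative(x, y):
    l_dh = x**4.5 * (4.8 * y * (1 + x)**3.8)
    h_dl = y * (1 + x)**4.8 * 4.5 * x**3.5
    return (l_dh - h_dl) / x**9

# central finite difference at an arbitrary test point
x, y, h = 2.0, 0.1, 1e-6
fd = (the_function(x + h, y) - the_function(x - h, y)) / (2 * h)
print(abs(fd - the_derivative(x, y)))  # tiny
```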

You can simplify things further by taking the log of your equation and plotting it. With a little algebra, you should be able to get an explicit function for ln_y, the natural log of y. If I did the algebra correctly:

def ln_y(x):
    return 4.5 * np.log(x/(1.+x)) - 0.3 * np.log(1.+x)

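A quick numerical check (my own, not part of the original answer) that ln_y matches the log of the rearranged equation y = x^4.5 / (1+x)^4.8 directly:

```python
import numpy as np

def ln_y(x):
    return 4.5 * np.log(x / (1. + x)) - 0.3 * np.log(1. + x)

# 4.5*ln(x/(1+x)) - 0.3*ln(1+x) == 4.5*ln(x) - 4.8*ln(1+x)
x = 3.7
direct = np.log(x**4.5 / (1 + x)**4.8)
print(abs(ln_y(x) - direct))  # agrees to machine precision
```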

Here is this function plotted on lin-lin and log-log axes:

%matplotlib inline
import matplotlib.pyplot as plt

x_axis = np.linspace(1, 100, num=2000)

f, ax = plt.subplots(1, 2, figsize=(8, 4))
ln_y_axis = ln_y(x_axis)

ax[0].plot(x_axis, np.exp(ln_y_axis))  # plotting y vs. x
ax[1].plot(np.log(x_axis), ln_y_axis)  # plotting ln(y) vs. ln(x)


[Plot: y vs. x (lin-lin, left) and ln(y) vs. ln(x) (log-log, right)]



This shows that there are two values of x for every y when y is below a critical value. The maximum value of y occurs at x=15, and that value is:

np.exp(ln_y(15))
0.32556278053267873

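To confirm where the maximum sits (my own check): the derivative of ln y = 4.5·ln(x) − 4.8·ln(1+x) is 4.5/x − 4.8/(1+x), which vanishes when 4.5(1+x) = 4.8x, i.e. at x = 15. A coarse grid search agrees:

```python
import numpy as np

def ln_y(x):
    return 4.5 * np.log(x / (1. + x)) - 0.3 * np.log(1. + x)

# locate the maximum of ln_y numerically; it should sit at x = 15
xs = np.linspace(1, 100, 100000)
x_max = xs[np.argmax(ln_y(xs))]
print(x_max)               # close to 15
print(np.exp(ln_y(15.0)))  # the critical y value, about 0.3256
```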

So your example value y = 1.03 does not admit a (real) solution for x.

This behavior, which we identified from the plots, is reproduced by the scipy.optimize.fsolve() call we made earlier:

print(fsolve(the_function, x0=1, args=(0.32556278053267873,), fprime=the_derivative))
[ 14.99999914]


This shows that an initial guess of x=1, with y equal to 0.32556278053267873, gives x=15 as the solution. Trying a larger value of y:

print(fsolve(the_function, x0=15, args=(0.35,), fprime=the_derivative))


produces a runtime warning:

/Users/curt/anaconda/lib/python2.7/site-packages/IPython/kernel/__main__.py:5: RuntimeWarning: invalid value encountered in power


The reason for the warning is that the power function in numpy does not accept negative bases with fractional exponents by default. You can work around this by casting the base to a complex number, i.e. writing x**(4.5+0j) instead of x**4.5, but are you really interested in complex values of x that solve your equation?
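A minimal illustration of that numpy behavior (my own example):

```python
import numpy as np

with np.errstate(invalid='ignore'):
    real_case = np.power(-0.5, 4.5)      # negative base, fractional exponent -> nan
complex_case = np.power(-0.5 + 0j, 4.5)  # complex base gives a complex result

print(real_case)     # nan
print(complex_case)
```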

