How does scipy.integrate.ode.integrate() work?

I have obviously read the documentation, but I have not been able to find a more detailed description of what is going on under the hood. In particular, there are several behaviours that confuse me:

General setup

import numpy as np
from scipy.integrate import ode

#Constants in ODE
N = 30                                 
K = 0.5                               
w = np.random.normal(np.pi, 0.1, N)

#Integration parameters
y0 = np.linspace(0, 2*np.pi, N, endpoint=False)   
t0 = 0                                                                         

#Set up the solver
solver = ode(lambda t,y: w + K/N*np.sum( np.sin( y - y.reshape(N,1) ), axis=1))
solver.set_integrator('vode', method='bdf')
solver.set_initial_value(y0, t0)


Problem 1: solver.integrate(t0) doesn't work

Setting up an integrator and asking for the value at t0 for the first time returns a successful integration. Repeating this returns the correct number, but the method solver.successful() returns False:

solver.integrate(t0)
>>> array([ 0.        ,  0.20943951,  0.41887902, ...,  5.65486678,
            5.86430629,  6.0737458 ])

solver.successful()
>>> True

solver.integrate(t0)
>>> array([ 0.        ,  0.20943951,  0.41887902, ...,  5.65486678,
            5.86430629,  6.0737458 ])

solver.successful()
>>> False


My question is: what is going on in the method solver.integrate(t) that causes it to succeed the first time and fail afterwards, and what does a "failed" integration mean? Moreover, why does the integrator fail yet keep producing useful-looking output until I ask it whether it was successful?

Relatedly, is there a way to reset the failed integration, or do I need to re-create the solver from scratch?

Problem 2: solver.integrate(t) immediately returns an answer for almost any value of t

Even though my initial value y0 is set at t0=0, I can request the value at t=10000 and get an immediate answer. I would expect a numerical integration over such a long time span to take at least a few seconds (for comparison, in Matlab an integration requiring more than 10,000 time steps takes several minutes).

For example, re-run the setup from above and then run:

solver.integrate(10000)
>>> array([ 2153.90803383,  2153.63023706,  2153.60964064, ...,  2160.00982959,
            2159.90446056,  2159.82900895])


Is Python really this fast, or is this output complete nonsense?

+3

2 answers


Problem 0

Don't ignore error messages. Yes, ode's error messages can be cryptic at times, but you still want to avoid them.

Problem 1

As you have already integrated up to t0 with the first call of solver.integrate(t0), you are integrating for a time step of 0 with the second call. This produces the cryptic error:

 DVODE--  ISTATE (=I1) .gt. 1 but DVODE not initialized      
      In above message,  I1 =         2
/usr/lib/python3/dist-packages/scipy/integrate/_ode.py:869: UserWarning: vode: Illegal input detected. (See printed message.)
  'Unexpected istate=%s' % istate))
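As for resetting a solver after such a failure, a minimal sketch, assuming the setup from the question: calling set_initial_value again reinitialises the integrator's internal state, so (to my understanding of the scipy.integrate.ode API) you do not need to construct a new ode object.

```python
import numpy as np
from scipy.integrate import ode

# Same setup as in the question
N = 30
K = 0.5
w = np.random.normal(np.pi, 0.1, N)
y0 = np.linspace(0, 2*np.pi, N, endpoint=False)

rhs = lambda t, y: w + K/N*np.sum(np.sin(y - y.reshape(N, 1)), axis=1)
solver = ode(rhs)
solver.set_integrator('vode', method='bdf')
solver.set_initial_value(y0, 0)

solver.integrate(0)   # first call: fine
solver.integrate(0)   # zero-length step: solver.successful() becomes False

# Re-seeding the initial value resets the integrator's internal state,
# so integration can proceed without re-creating the solver.
solver.set_initial_value(y0, 0)
solver.integrate(1.0)
print(solver.successful())
```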


Problem 2.1

There is a maximum number of (internal) steps that a solver will take in one call without throwing an error. This can be set with the nsteps argument of set_integrator. If you integrate over a large time span at once, nsteps will be exceeded even if nothing went wrong, and the following error message is thrown:



/usr/lib/python3/dist-packages/scipy/integrate/_ode.py:869: UserWarning: vode: Excess work done on this call. (Perhaps wrong MF.)
  'Unexpected istate=%s' % istate))


The integrator then stops when this happens.
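A sketch of raising that limit, assuming the setup from the question (nsteps is a documented vode option; the value 10**8 here is just an illustrative large bound):

```python
import numpy as np
from scipy.integrate import ode

N = 30
K = 0.5
w = np.random.normal(np.pi, 0.1, N)
y0 = np.linspace(0, 2*np.pi, N, endpoint=False)

solver = ode(lambda t, y: w + K/N*np.sum(np.sin(y - y.reshape(N, 1)), axis=1))
# nsteps bounds the number of internal steps per integrate() call;
# the default is easily exceeded over a time span of 10000.
solver.set_integrator('vode', method='bdf', nsteps=10**8)
solver.set_initial_value(y0, 0)

y_end = solver.integrate(10000)
print(solver.successful())  # no "Excess work done" warning with the higher limit
```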

Problem 2.2

If you set nsteps=10**10, the integration runs without complaint. It is still pretty fast though (roughly 1 s on my machine). The reason for this is the following:

For a multi-dimensional system such as yours, there are two main run-time sinks when integrating:

  • Vector and matrix operations within the integrator. In scipy.ode, these are all realised with NumPy operations or ported Fortran or C code. Either way, they are performed by compiled code without Python overhead and are therefore very efficient.

  • Evaluations of the derivative ( lambda t,y: w + K/N*np.sum( np.sin( y - y.reshape(N,1) ), axis=1) in your case). You realised this with NumPy operations, which again are implemented in compiled code and are very efficient. You may improve this a little with a purely compiled function, but that will gain you at most a small factor. If you used Python lists and loops instead, it would be horribly slow.

Hence, for your problem, everything relevant is handled with compiled code under the hood, and the integration runs with an efficiency comparable to that of, e.g., a pure C program. I do not know how the two aspects above are handled in Matlab, but if either of those tasks is done with interpreted instead of compiled loops, it would explain the runtime discrepancy you are observing.
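To illustrate the point about derivative evaluations, a quick sketch (function names are mine) comparing the NumPy expression from the question with an equivalent pure-Python double loop:

```python
import numpy as np
import timeit

N = 30
K = 0.5
w = np.random.normal(np.pi, 0.1, N)
y = np.linspace(0, 2*np.pi, N, endpoint=False)

def rhs_numpy(t, y):
    # pairwise coupling evaluated as one vectorised expression
    return w + K/N*np.sum(np.sin(y - y.reshape(N, 1)), axis=1)

def rhs_loops(t, y):
    # the same derivative computed with explicit Python loops
    out = np.empty(N)
    for i in range(N):
        s = 0.0
        for j in range(N):
            s += np.sin(y[j] - y[i])
        out[i] = w[i] + K/N*s
    return out

assert np.allclose(rhs_numpy(0, y), rhs_loops(0, y))
t_np = timeit.timeit(lambda: rhs_numpy(0, y), number=200)
t_py = timeit.timeit(lambda: rhs_loops(0, y), number=200)
print(t_py / t_np)  # the loop version is typically many times slower
```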

+4


Regarding the second question: yes, the output can be nonsense. Local errors, whether from discretisation or floating-point operations, accumulate with a compounding factor that is related to the Lipschitz constant of the ODE function. To a first estimate, the Lipschitz constant here is K=0.5. The rate at which early errors are amplified, i.e. their coefficient as part of the global error, can therefore be as large as exp(0.5*10000), which is a huge number.



On the other hand, it is no surprise that the integration is fast. Most of the methods provided use step-size adaptation, and with the standard error tolerances this can result in only a few dozen internal steps. Reducing the error tolerances will increase the number of internal steps and may drastically change the numerical result.
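One practical way to probe this, sketched under the assumption of the setup from the question (rtol and atol are standard set_integrator options): integrate the same problem twice with different tolerances and compare the endpoints. A large discrepancy signals that accumulated error dominates the result.

```python
import numpy as np
from scipy.integrate import ode

N = 30
K = 0.5
rng = np.random.default_rng(0)  # seeded so both runs share the same frequencies
w = rng.normal(np.pi, 0.1, N)
y0 = np.linspace(0, 2*np.pi, N, endpoint=False)
rhs = lambda t, y: w + K/N*np.sum(np.sin(y - y.reshape(N, 1)), axis=1)

def solve(tol, t_end=100):
    # fresh solver per run; nsteps raised so long spans do not abort
    s = ode(rhs)
    s.set_integrator('vode', method='bdf', nsteps=10**8, rtol=tol, atol=tol)
    s.set_initial_value(y0, 0)
    return s.integrate(t_end)

loose = solve(1e-6)
tight = solve(1e-12)
# the size of this difference indicates how trustworthy the output is
print(np.max(np.abs(loose - tight)))
```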

+1