Calculating inverse trigonometric functions with formulas

I am trying to create my own calculator for trigonometric functions. Apart from Chebyshev polynomials and/or the CORDIC algorithm, I used Taylor series, which were accurate to several decimal places.

This is what I created to calculate simple trigonometric functions without any modules:

from __future__ import division

def sqrt(n):
  ans = n ** 0.5
  return ans

def factorial(n):
  k = 1
  for i in range(1, n+1):
    k = i * k

  return k 

def sin(d):
  pi = 3.14159265359
  x = d * pi / 180 # Converting degrees to radians (180 degrees = pi radians)
  ans = x - ( x ** 3 / factorial(3) ) + ( x ** 5 / factorial(5) ) - ( x ** 7 / factorial(7) ) + ( x ** 9 / factorial(9) )
  return ans 

def cos(d):
  pi = 3.14159265359
  x = d * pi / 180 # Converting degrees to radians
  ans = 1 - ( x ** 2 / factorial(2) ) + ( x ** 4 / factorial(4) ) - ( x ** 6 / factorial(6) ) + ( x ** 8 / factorial(8) )
  return ans 

def tan(d): 
  ans = sin(d) / cos(d) # sin/sqrt(1 - sin**2) loses the sign of cos for 90 < d < 270
  return ans 

      

Unfortunately, I couldn't find any sources that would help me translate the inverse trigonometric function formulas into Python. I also tried raising sin(x) to the power -1 ( sin(x) ** -1 ), which didn't work as expected.

What could be the best solution for this in Python (by best, I mean the simplest with the same precision as the Taylor series)? Is this possible with a power series, or do I need to use a CORDIC-based algorithm?



2 answers


The question is broad in scope, but here are some simple ideas (and code!) that can serve as a starting point for computing arctan.

First, the good old Taylor series. For simplicity, we use a fixed number of terms; in practice, you may want to determine the number of terms dynamically depending on the size of x, or introduce some sort of convergence criterion. With a fixed number of terms, we can evaluate efficiently using something similar to Horner's scheme.

def arctan_taylor(x, terms=9):
    """
    Compute arctan for small x via Taylor polynomials.

    Uses a fixed number of terms. The default of 9 should give good results for
    abs(x) < 0.1. Results will become poorer as abs(x) increases, becoming
    unusable as abs(x) approaches 1.0 (the radius of convergence of the
    series).
    """
    # Uses Horner method for evaluation.
    t = 0.0
    for n in range(2*terms-1, 0, -2):
        t = 1.0/n - x*x*t
    return x * t
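
To see what the Horner-style loop computes, here is a quick sanity check that it matches the plain term-by-term series x - x^3/3 + x^5/5 - ... (the helper name arctan_taylor_naive is mine, and arctan_taylor is restated so the snippet runs on its own):

```python
import math

def arctan_taylor(x, terms=9):
    # Horner-style evaluation, as in the answer above
    t = 0.0
    for n in range(2*terms - 1, 0, -2):
        t = 1.0/n - x*x*t
    return x * t

def arctan_taylor_naive(x, terms=9):
    # Plain term-by-term sum: x - x^3/3 + x^5/5 - ...
    return sum((-1)**k * x**(2*k + 1) / (2*k + 1) for k in range(terms))

x = 0.05
assert abs(arctan_taylor(x) - arctan_taylor_naive(x)) < 1e-15
assert abs(arctan_taylor(x) - math.atan(x)) < 1e-15
```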

      

The above code gives good results for small x (for example, less than 0.1 in absolute value), but the accuracy decreases as x increases, and for abs(x) > 1.0 the series never converges, no matter how many terms (or how much extra precision) we throw at it. Therefore, we need a better way to compute for large x. One solution is to use the argument-reduction identity arctan(x) = 2 * arctan(x / (1 + sqrt(1 + x^2))). This gives the following code, which builds on arctan_taylor to give reasonable results over a wide range of x (but beware of possible overflow in the computation of x*x).

import math

def arctan_taylor_with_reduction(x, terms=9, threshold=0.1):
    """
    Compute arctan via argument reduction and Taylor series.

    Applies reduction steps until x is below `threshold`,
    then uses Taylor series.
    """
    reductions = 0
    while abs(x) > threshold:
        x = x / (1 + math.sqrt(1 + x*x))
        reductions += 1

    return arctan_taylor(x, terms=terms) * 2**reductions
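
The reduction identity is just the tangent half-angle formula in disguise, and it holds exactly for all real x; it can be sanity-checked against math.atan:

```python
import math

# Check arctan(x) == 2*arctan(x / (1 + sqrt(1 + x^2))) for a few values
for x in [0.5, 2.314, 168.9, -7.0]:
    reduced = 2 * math.atan(x / (1 + math.sqrt(1 + x*x)))
    assert abs(reduced - math.atan(x)) < 1e-14
```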

      

Alternatively, given an existing implementation for tan, you can simply find a solution y to the equation tan(y) = x using traditional root-finding techniques. Since arctan is naturally bounded to the range (-pi/2, pi/2), bisection search works well:

def arctan_from_tan(x, tolerance=1e-15):
    """
    Compute arctan as the inverse of tan, via bisection search. This assumes
    that you already have a high quality tan function.
    """
    low, high = -0.5 * math.pi, 0.5 * math.pi
    while high - low > tolerance:
        mid = 0.5 * (low + high)
        if math.tan(mid) < x:
            low = mid
        else:
            high = mid
    return 0.5 * (low + high)

      

Finally, just for fun, here's a CORDIC-like implementation that's really more suited to low-level implementations than to Python. The idea here is that you precompute a table of arctan values for 1, 1/2, 1/4, etc., and then use them to compute general arctan values, essentially by computing successive approximations to the true angle. The remarkable part is that, after the precomputation step, computing an arctan involves only additions, subtractions, and multiplications by powers of 2. (Of course, those multiplications are no more efficient than any other multiplication at the Python level, but closer to the hardware this could potentially make a big difference.)



cordic_table_size = 60
cordic_table = [(2**-i, math.atan(2**-i))
                 for i in range(cordic_table_size)]

def arctan_cordic(y, x=1.0):
    """
    Compute arctan(y/x), assuming x positive, via CORDIC-like method.
    """
    r = 0.0
    for t, a in cordic_table:
        if y < 0:
            r, x, y = r - a, x - t*y, y + t*x
        else:
            r, x, y = r + a, x + t*y, y - t*x
    return r
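
As an aside (my addition, not part of the original answer): any of the arctan routines above also gives you the other inverse trigonometric functions via standard identities. Using math.atan here as a stand-in for whichever arctan implementation you pick, and with my own handling of the endpoints x = ±1:

```python
import math

def arcsin(x, arctan=math.atan):
    # arcsin(x) = arctan(x / sqrt(1 - x^2)) for |x| < 1
    if abs(x) == 1.0:
        return math.copysign(0.5 * math.pi, x)
    return arctan(x / math.sqrt(1.0 - x*x))

def arccos(x, arctan=math.atan):
    # arccos(x) = pi/2 - arcsin(x)
    return 0.5 * math.pi - arcsin(x, arctan)

assert abs(arcsin(0.5) - math.asin(0.5)) < 1e-14
assert abs(arccos(-0.3) - math.acos(-0.3)) < 1e-14
```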

      

Each of the above methods has its own strengths and weaknesses, and all of the above code can be improved in many ways. I recommend that you experiment and explore.

To wrap it all up, here are the results of calling the above functions on a small number of not-very-carefully-selected test values, compared with the output of the standard library's math.atan function:

test_values = [2.314, 0.0123, -0.56, 168.9]
for value in test_values:
    print("{:20.15g} {:20.15g} {:20.15g} {:20.15g}".format(
        math.atan(value),
        arctan_taylor_with_reduction(value),
        arctan_from_tan(value),
        arctan_cordic(value),
    ))

      

Output on my machine:

    1.16288340166519     1.16288340166519     1.16288340166519     1.16288340166519
     0.0122993797673      0.0122993797673   0.0122993797673002   0.0122993797672999
  -0.510488321916776   -0.510488321916776   -0.510488321916776   -0.510488321916776
    1.56487573286064     1.56487573286064     1.56487573286064     1.56487573286064

      



The easiest way to implement any inverse function is to use binary search.

  • Definition

    Let's say we have a direct function

    x = g(y)
    
          

    and we want to code its inverse:

    y = f(x) = f(g(y))
    x = <x0,x1>
    y = <y0,y1>
    
          

  • binary search on floats

    You could do this in integer math by manipulating the bits of the mantissa directly, but if you don't know the exponent of the result before the computation, you have to do the binary search on floats as well.

    The idea behind the binary search is to set the mantissa of y from y1 down to y0 bit by bit, from MSB to LSB. After each change, call the direct function g(y), and if the result crosses x, revert the last bit change.

    When using floats, you can use a variable holding the approximate value of the current mantissa bit instead of integer bit access. This also removes the unknown-exponent issue. So start with y = y0 and the first "bit" set to the MSB value, b = (y1-y0)/2. After each iteration, halve b, and do as many iterations as you want mantissa bits n. This way you obtain the result in n iterations, to within a precision of (y1-y0)/2^n.

    If your inverse function is not monotonic, split it into monotonic intervals and handle each as a separate binary search.

    Whether the function is increasing or decreasing simply determines the direction of the crossing test (use < or >).
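
The recipe above can be sketched generically in Python (my sketch, since the question is about Python; it assumes g is monotonically increasing on [y0, y1]):

```python
import math

def invert(g, x, y0, y1, n=52):
    # Binary search for y in [y0, y1] with g(y) = x, g increasing
    b = 0.5 * (y1 - y0)   # value of the current mantissa "bit"
    y = y0
    for _ in range(n):
        if g(y + b) <= x: # for a decreasing g, flip this comparison
            y += b        # keep the bit set
        b *= 0.5          # move on to the next, smaller bit
    return y

# sin is increasing on [-pi/2, pi/2], so this recovers arcsin
assert abs(invert(math.sin, 0.5, -0.5*math.pi, 0.5*math.pi) - math.asin(0.5)) < 1e-12
```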

C++ acos example

Here y = acos(x) is defined on x = <-1,+1> , y = <0,M_PI> and is decreasing, so the implementation looks like this:

double f64_acos(double x)
    {
    const int n=52;         // mantissa bits
    double y,y0,b;
    int i;
    // handle domain error
    if (x<-1.0) return 0;
    if (x>+1.0) return 0;
    // x = <-1,+1> , y = <0,M_PI> , decreasing
    for (y= 0.0,b=0.5*M_PI,i=0;i<n;i++,b*=0.5)  // y is min, b is half of max and halving each iteration
        {
        y0=y;                   // remember original y
        y+=b;                   // try set "bit"
        if (cos(y)<x) y=y0;     // if the result crossed x, revert to the original y; decreasing uses < and increasing uses >
        }
    return y;
    }
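
Since the question is about Python, here is a direct port of the same routine (my translation of the C++ above, not part of the original answer):

```python
import math

def acos_bisect(x, n=52):
    # y = acos(x): x in <-1,+1>, y in <0,pi>, cos is decreasing on this range
    if x < -1.0 or x > +1.0:
        return 0.0          # domain error, handled as in the C++ version
    y = 0.0
    b = 0.5 * math.pi       # half of the range; halved each iteration
    for _ in range(n):
        y0 = y              # remember original y
        y += b              # try to set this "bit"
        if math.cos(y) < x: # crossed x: cos is decreasing, so undo
            y = y0
        b *= 0.5
    return y

assert abs(acos_bisect(0.5) - math.acos(0.5)) < 1e-12
```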

      

I tested it like this:

double x0,x1,y;
for (x0=0.0;x0<M_PI;x0+=M_PI*0.01)  // cycle all angle range <0,M_PI>
    {
    y=cos(x0);              // direct function (from math.h)
    x1=f64_acos(y);         // my inverse function
    if (fabs(x1-x0)>1e-9)   // check result and output to log if error
     Form1->mm_log->Lines->Add(AnsiString().sprintf("acos(%8.3lf) = %8.3lf != %8.3lf",y,x0,x1));
    }

      



No difference was found, so the implementation works correctly. Of course, a brute-force binary search over a 52-bit mantissa is usually slower than a polynomial approximation ... on the other hand, the implementation is very simple ...

[Note]

If you don't want to keep track of the monotonic intervals, you can try an approximation search instead.

Since you are dealing with goniometric functions, you also need to handle the singularities to avoid NaN, division by zero, etc.

If you're curious, there are more examples of binary search usage out there (mostly on integers).
