Self-adjusting algorithm?

I am trying to implement a generic IIR filter. The core of it is this loop over all second-order sections (SOS):

    // Loop over all SOS sections:
    value_type y = x;
    BOOST_FOREACH( Sos& n, m_sos )
    {
        y = update( n, y );
    }


where `update` is:

    ...update( Sos& sos, const value_type& x ) const
    {
        const value_type xs = sos.m_s * x;

        const value_type w0 =   xs
                              - sos.m_a( 1 ) * sos.m_w( 0 )
                              - sos.m_a( 2 ) * sos.m_w( 1 );

        const value_type y =   w0
                             + sos.m_b( 1 ) * sos.m_w( 0 )
                             + sos.m_b( 2 ) * sos.m_w( 1 );

        sos.m_w( 1 ) = sos.m_w( 0 );
        sos.m_w( 0 ) = w0;

        return y;
    }


The coefficients m_a, m_b are variable and are read from a file once at runtime, so their values are unknown at compile time. Depending on the filter, some of the coefficients may be 1.0 or 0.0, in which case the corresponding multiplication could be skipped entirely. That would save a lot of time. Of course I could hand-optimize the code for one dedicated filter, but as mentioned, the implementation should stay very general. My first idea was some kind of self-modifying code... but maybe someone has a better idea or a hint. :)
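For reference, the fragments above can be assembled into a minimal self-contained sketch. The member layout, `value_type`, and the `filter` driver are assumptions (the original presumably uses Boost vectors, hence the `m_a( 1 )` call syntax); the arithmetic follows the question exactly:

```cpp
#include <vector>

using value_type = double;

// One second-order section; flat members stand in for the original's
// m_s, m_a(1), m_a(2), m_b(1), m_b(2), m_w(0), m_w(1).
struct Sos {
    value_type s;       // input scale
    value_type a1, a2;  // denominator coefficients
    value_type b1, b2;  // numerator coefficients
    value_type w0, w1;  // state (direct form II)
};

// Update one section, exactly as in the question's code.
value_type update(Sos& sos, const value_type& x)
{
    const value_type xs = sos.s * x;
    const value_type w0 = xs - sos.a1 * sos.w0 - sos.a2 * sos.w1;
    const value_type y  = w0 + sos.b1 * sos.w0 + sos.b2 * sos.w1;
    sos.w1 = sos.w0;
    sos.w0 = w0;
    return y;
}

// The cascade loop: run each sample through every section in turn
// (range-for instead of BOOST_FOREACH).
value_type filter(std::vector<Sos>& sections, value_type x)
{
    value_type y = x;
    for (Sos& n : sections)
        y = update(n, y);
    return y;
}
```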

+1




3 answers


You can create a templated version of your filtering function. I don't know how to apply this directly to your code, but consider the following:

    // Z1 means "coefficient 1 is zero"
    template <bool Z1, bool Z2, bool Z3, bool Z4, bool Z5>
    ...update( Sos& sos, const value_type& x ) const
    {
        value_type coef1 = Z1 ? 0 : sos.m_a( 1 ); // or whatever else
        value_type coef2 = Z2 ? 0 : ...;
        value_type coef3 = Z3 ? 0 : ...;
        value_type coef4 = Z4 ? 0 : ...;
        value_type coef5 = Z5 ? 0 : ...;

        ... // the rest of your code
    }


This defines 32 different functions, each optimized as much as possible (the compiler should detect the multiplication by a constant zero and eliminate it). Then you can use an array of function pointers:



    value_type (*my_table[])( Sos&, const value_type& ) = {
        &update<0, 0, 0, 0, 0>,
        &update<0, 0, 0, 0, 1>,
        ...
        &update<1, 1, 1, 1, 1>
    };


Then, at the point where performance does not matter (where the coefficients are read from the file), inspect the coefficients, pick the right function pointer from the array, and store it in the Sos object.

I'm sorry if this sounds too vague; I didn't study your code closely enough to express the idea in compilable code.
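One way to flesh this answer out (the member layout, the bit assignment of the mask, and `mask_for` are all my own assumptions, not the answerer's code): make the template parameter a single bitmask, generate all 32 specializations with an index sequence, and dispatch through the resulting table. Bit 4 handles the `m_s == 1.0` case rather than a zero coefficient:

```cpp
#include <array>
#include <cstddef>
#include <utility>

using value_type = double;

struct Sos {
    value_type s = 1, a1 = 0, a2 = 0, b1 = 0, b2 = 0; // coefficients (hypothetical flat layout)
    value_type w0 = 0, w1 = 0;                        // direct-form II state
};

// Bit i of M set means "treat that coefficient as trivial", so the
// compiler can fold the multiplication away in that specialization.
template <std::size_t M>
value_type update(Sos& sos, value_type x)
{
    const value_type a1 = (M & 1)  ? 0.0 : sos.a1;
    const value_type a2 = (M & 2)  ? 0.0 : sos.a2;
    const value_type b1 = (M & 4)  ? 0.0 : sos.b1;
    const value_type b2 = (M & 8)  ? 0.0 : sos.b2;
    const value_type xs = (M & 16) ? x   : sos.s * x; // bit 4: s == 1.0
    const value_type w0 = xs - a1 * sos.w0 - a2 * sos.w1;
    const value_type y  = w0 + b1 * sos.w0 + b2 * sos.w1;
    sos.w1 = sos.w0;
    sos.w0 = w0;
    return y;
}

using update_fn = value_type (*)(Sos&, value_type);

// Instantiate all 32 specializations without writing them out by hand.
template <std::size_t... I>
constexpr std::array<update_fn, sizeof...(I)>
make_table(std::index_sequence<I...>)
{
    return { { &update<I>... } };
}

const auto table = make_table(std::make_index_sequence<32>{});

// Computed once, right after the coefficients are read from the file.
std::size_t mask_for(const Sos& s)
{
    return (s.a1 == 0 ? 1u : 0u) | (s.a2 == 0 ? 2u : 0u)
         | (s.b1 == 0 ? 4u : 0u) | (s.b2 == 0 ? 8u : 0u)
         | (s.s  == 1 ? 16u : 0u);
}
```

The per-sample cost is then one indirect call per section, with each specialization containing only the multiplications that are actually needed.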

+3




With only 5 coefficients, you could generate all 2^5 code permutations and then select the appropriate one at runtime (e.g. `new Updater00110()` if the first, second, and fifth coefficients are 0 and the third and fourth are nonzero). This assumes the code is the same whether a coefficient is 0 or 1; if those cases differ, you are looking at 3^5 permutations, which can bloat your code considerably.

Conveniently, the unused classes/functions should not clutter the instruction cache or anything like that.



In the past, when I had to generate many code permutations like this, I used a helper program (usually written in an interpreted language such as Python) to generate the code/files for all 2^5 permutations automatically.
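The generator can just as well be a tiny C++ program. A sketch that emits one class stub per permutation, using the `Updater00110` naming scheme from this answer (the `UpdaterBase` interface and the stub body are placeholders of my own, not something from the thread):

```cpp
#include <bitset>
#include <sstream>
#include <string>

// Emits 32 class stubs, "Updater00000" through "Updater11111", one per
// zero/nonzero pattern of the five coefficients. The generated bodies
// would be filled in with the pattern-specific update() code.
std::string generate_updater_stubs()
{
    std::ostringstream out;
    for (unsigned m = 0; m < 32; ++m) {
        out << "struct Updater" << std::bitset<5>(m).to_string()
            << " : UpdaterBase {\n"
            << "    value_type update(Sos& sos, value_type x) override;\n"
            << "};\n";
    }
    return out.str();
}
```

Running this once at build time and compiling its output keeps the 32 permutations out of the hand-maintained source.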

+2




To restate the question: what are the options for "optimizing" the code at runtime depending on the coefficients used?

So basically you want code that "compiles and optimizes itself" at runtime. Technically it is possible: for example, see here. Essentially, you invoke the C/C++ compiler from your own code, load the generated code, and jump to it through a function pointer.

From a practical perspective, I would not recommend it. Compiling at runtime takes time; if it is only done once, that could still pay off. The bigger problem is that your binary picks up a lot of extra requirements (a compiler on every target machine, for one) and becomes difficult to distribute.

My recommendations would be as follows:

  • Make the filter function inline. This saves the overhead of a function call and parameter passing. With only 5 coefficients, it might be worth a try.
  • Check how often your coefficients are actually 1.0 or 0.0. Perhaps these cases are so rare that you don't need to worry about them at all.
  • Check whether your design can work with a set of values precomputed only once at initialization.
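As one concrete instance of the last point (the `Sos` layout and the `prune_trivial_sections` helper are my own illustration, not from the answer): a section whose scale is 1.0 and whose other coefficients are all 0.0 is a pure pass-through, so it can be removed once, right after the coefficients are read from the file, and the hot loop never visits it again:

```cpp
#include <algorithm>
#include <vector>

using value_type = double;

struct Sos {                       // hypothetical flat layout
    value_type s, a1, a2, b1, b2;  // coefficients
    value_type w0 = 0, w1 = 0;     // state
};

// Drop pass-through sections (s == 1, all other coefficients 0) from the
// cascade. Called once at initialization, so its cost is irrelevant.
void prune_trivial_sections(std::vector<Sos>& sections)
{
    sections.erase(
        std::remove_if(sections.begin(), sections.end(),
                       [](const Sos& n) {
                           return n.s == 1 && n.a1 == 0 && n.a2 == 0
                               && n.b1 == 0 && n.b2 == 0;
                       }),
        sections.end());
}
```

This covers only the all-trivial case, but it composes with any of the per-coefficient dispatch schemes from the other answers.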
+1








