More efficient: large array or many scalars

I work in an embedded (PIC) environment, programming in C.

I need to keep track of 1000 values for a variable (a history) and return a moving average of those values. I'm wondering whether it would be more efficient in terms of speed, ROM and RAM usage to use an array or 1000 separate 16-bit variables. Is there a definitive answer to this? Or should I just try both and see which works best?

Thanks.

EDIT: Hmm... I already ran into another problem. The compiler limits me to an array size of no more than 256 elements.

EDIT2:

To clarify... the code fires about 1000 times per second. Each time, a value for history[n] (or history_n) is calculated and stored, and each time I need to calculate the average of the 1000 most recent history values (including the current one). So: (history[999] + history[998] + ... + history[1] + history[0]) / 1000;

or something like that. Obviously, each time I need to throw out the oldest value and add the newest.

EDIT3:

I reworked the code so that the 256-element array size is no longer an issue. An array of about 100 elements now fits.

+2




8 answers


Assuming you need to keep history, and given the limit of 256 array elements, here's a way to manage it:



int history1[256];
int history2[256];
int history3[256];
int history4[256];
int *arrays[] = { history1, history2, history3, history4 };
unsigned int idx = 0;
long sum = 0;        /* 32-bit running total: summing 1000 16-bit values can overflow a 16-bit int */
unsigned int n = 0;

int updateAverage(int newValue)
{
    /* Locate the slot: the high bits pick the array, the low 8 bits the offset. */
    int *target = arrays[idx >> 8] + (idx & 0xFF);

    sum -= *target;        /* drop the value being overwritten */
    *target = newValue;
    sum += newValue;

    if (n < 1000)
        n++;
    idx = (idx + 1 < 1000) ? idx + 1 : 0;

    return (int)(sum / n);
}

      

+4




I'm not sure I fully understand your question. Are you asking about the difference between the code generated for "short history[1000]" and for "short history1, history2, ..., history1000;"?

Both should use the same amount of RAM: each entry will be stored in a file register either way (assuming you are using a 16-bit PIC). The code for calculating the average of the latter will be ugly and will most likely require considerably more ROM, since it must reference each value separately rather than just offsetting from a base address.



Edit: The reason for the 256-element limitation is the paging of file registers in the PIC. You cannot access a larger array simply by offsetting from a base address, because a page change might be required.

Do you absolutely need a moving average? Or would an overall average do? If the overall average is good enough, use Alphaneo's answer: just keep the sum and the number of values collected in two variables, and divide whenever you need the average.

+2




If you use an array, the generated code will be much smaller. I'm not certain, but I think array access will also use less memory, since you don't need to keep track of multiple variables; you have a pointer to one chunk. If stack size is an issue, an array on the heap might be the way to go, since local C variables are stored on the stack.

+1




If all you want is the average, storing the values in an array will definitely be more expensive than computing it incrementally at runtime.

Reason 1: if you compute it at runtime, you just keep adding. For example, look at the following steps:

    init-0: _tempSum_ = 0
    step-1: read the current value into _currVal_
    step-2: add the current value to _tempSum_
    step-3: check whether we have the required number of values _num_ (e.g. 1000); if not, go to step-1
    step-4: calculate _avg_ = _tempSum_ / _num_ and return it
    step-5: go to init-0

      

If you store the 1000 values in a temporary array, all the steps from init-0 to step-5 still run, except that you are also maintaining an array of 1000 values.

It can even be slower, depending on the array access time... so beware.
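As a sketch of the running-total approach in the steps above (the function and variable names are illustrative, not from the original post):

```c
#include <stdint.h>
#include <assert.h>

/* Running total and sample count. The 32-bit sum avoids overflow
   when accumulating many 16-bit samples. */
static int32_t tempSum = 0;
static uint16_t num = 0;

/* Add one sample and return the average of all samples so far. */
int16_t avg_add_sample(int16_t currVal)
{
    tempSum += currVal;
    num++;
    return (int16_t)(tempSum / num);
}
```

Note that this is the overall average of everything seen so far, not the moving average of the last 1000 samples the question asks for.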

+1




First, you can modify the linker file to create a section large enough, then place your history array in that section using pragmas.

Second, the array method is much better. For performance, you also want a 32-bit integer to hold the running total of the history array.

On each run of the history function, you subtract the oldest value from HistoryRunningTotal and add the new history value. You will also need an OldestHistoryIndex variable to keep track of where the newest value goes (overwriting the oldest entry).
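A minimal sketch of the scheme this answer describes, assuming a 100-element window (per EDIT3); HistoryRunningTotal and OldestHistoryIndex are named as in the answer, the rest is illustrative:

```c
#include <stdint.h>
#include <assert.h>

#define HISTORY_SIZE 100                     /* window size; fits the ~100-element limit */

static int16_t History[HISTORY_SIZE];        /* static storage is zero-initialized */
static int32_t HistoryRunningTotal = 0;      /* 32-bit running sum */
static uint8_t OldestHistoryIndex = 0;
static uint8_t HistoryCount = 0;             /* samples seen so far, capped at HISTORY_SIZE */

/* Store one sample and return the moving average of the window. */
int16_t history_update(int16_t newValue)
{
    /* Drop the oldest value and overwrite it with the new one. */
    HistoryRunningTotal -= History[OldestHistoryIndex];
    History[OldestHistoryIndex] = newValue;
    HistoryRunningTotal += newValue;

    OldestHistoryIndex = (uint8_t)((OldestHistoryIndex + 1) % HISTORY_SIZE);
    if (HistoryCount < HISTORY_SIZE)
        HistoryCount++;

    return (int16_t)(HistoryRunningTotal / HistoryCount);
}
```

Dividing by HistoryCount rather than HISTORY_SIZE keeps the average correct before the buffer first fills.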

+1




If you have enough memory for 1000 values, why not statically allocate the memory, make a ring buffer, and move a pointer through the values to calculate the sum (if a running average doesn't do what you need)? Just keep putting the new value over the oldest one, calculate the sum of the 1000 values, and divide by 1000. So there are actually two pointers: one for inserting the new value and one for iterating over the entire buffer.

0




Does the compiler really limit arrays to 256 elements? That's very bad; it places a burden on the programmer for something the compiler should really take care of. Are you sure there is no compiler option, e.g. a "large" memory model? If not, look into another micro and/or compiler.

Or, taking a different tack, consider using a small IIR filter rather than a large FIR filter.
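For illustration, a first-order IIR filter (an exponential moving average) needs only one state variable instead of a large sample buffer. This is a sketch, not the poster's code, and the shift amount is an assumed tuning parameter:

```c
#include <stdint.h>
#include <assert.h>

#define EMA_SHIFT 4   /* smoothing factor alpha = 1/16; assumed, tune to taste */

static int32_t ema_state = 0;   /* the average, scaled by 2^EMA_SHIFT */

/* Update the exponential moving average with one sample and return it. */
int16_t ema_update(int16_t sample)
{
    /* Fixed-point form of: average += alpha * (sample - average) */
    ema_state += (int32_t)sample - (ema_state >> EMA_SHIFT);
    return (int16_t)(ema_state >> EMA_SHIFT);
}
```

Unlike the 1000-tap FIR average, this weights recent samples more heavily, so it is an approximation of the moving average, not an exact replacement.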

Anyway, to answer your original question: an array is the logical choice for such an algorithm, for the benefit of reasonable (maintainable) code and small code size.

0




If the array is dynamically allocated, then using static variables will be faster. If the array is statically allocated, the compiler should be able to make it as fast as static variables. Try both on some trivial code and profile.

0








