Why is ArrayList's add() and add(int index, E) complexity amortized constant time? Why not O(1) for add() and O(n) for add(int index, E)?


Why not O(1) for a single add() operation, O(n) for a single add(int index, E) operation, and O(n) for adding n elements (n add operations) using either add method? Suppose we only rarely use add(int index, E) to insert anywhere other than the end of the array.

To be more precise, the complexity when the array (and the ArrayList) already contains n elements:

  • add() is O(1)?
  • add(int index, E) is O(n)?

If we perform a million insertions, the average of cases 1 and 2 cannot be O(1), right?

Why does Oracle say:

The add operation runs in amortized constant time, that is, adding n elements requires O(n) time.

I thought the complexity was O(1) for add() and O(n) for add(int index, E).

Does this mean that the "total complexity of n operations" (where each operation is O(1)) is, so to speak, n * O(1) = O(n)? What am I missing?

Maybe for Oracle "add operation" always means just add()? And add(int, E) is an "insert operation"? Then it's all perfectly clear!

I assume this comes down to the difference between asymptotic analysis and amortized analysis, but I can't quite grasp it.

Related questions:

  • What is amortized algorithm analysis?
  • Why is the time complexity of inserting into an array O(n) and not O(n + 1)?
  • Is it more appropriate to say amortized O(1) vs. O(n) for insertion into an unsorted dynamic array? (not quite clear to me)
  • Big O when adding different routines





1 answer


In Oracle's terminology (which is tacitly implied), and speaking of List:

  • "add method" (synonym - "add method") always means boolean add(E)

  • "insert method" always means boolean add(int index, E)

When Oracle writes

The add operation runs in amortized constant time, that is, adding n elements requires O(n) time.

it means:

The complexity of a single boolean add(E) operation is amortized O(1).

It cannot be plain asymptotic O(1) (always), because occasionally we have to grow the capacity of the backing array. Such a single add operation, which is really "allocate a new, larger array, copy the old array into it, then append one element at the end", has asymptotic complexity O(n): copying the array as the list's capacity grows is O(n), so growth plus append is O(n) + O(1) = O(n). Without this capacity-growing work, add would be O(1): the element is simply appended at the end of the array (the largest index). If instead we "added" (that is, inserted) somewhere other than the end, we would have to shift all the elements to the right of that position (those with larger indices), and the complexity of one such operation would be O(n).
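
To see where that O(n) comes from, here is a minimal sketch of the growth mechanics. SimpleArrayList and grow() are illustrative names, not the real java.util.ArrayList source, but OpenJDK's implementation grows its backing array in essentially this way:

```java
import java.util.Arrays;

// Simplified dynamic array: appends are O(1) until the backing array
// fills up, at which point one append pays for an O(n) copy.
class SimpleArrayList {
    private Object[] elements = new Object[10]; // initial capacity
    private int size = 0;

    public boolean add(Object e) {
        if (size == elements.length) {
            grow(); // rare: O(n) copy of all existing elements
        }
        elements[size++] = e; // common: O(1) write at the end
        return true;
    }

    private void grow() {
        // Grow by ~1.5x (as ArrayList does); the copy itself is O(n).
        int newCapacity = elements.length + (elements.length >> 1);
        elements = Arrays.copyOf(elements, newCapacity);
    }
}
```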

So, for a single add operation, the asymptotic complexity is O(1) for an add-without-growing-capacity and O(n) for an add-with-growing-capacity (which is very rare).

The amortized complexity of a single add operation is O(1). It reflects the fact that the rare O(n) "grow and append" operations are "diluted" by a much larger number of ordinary O(1) appends, so the "average" single operation is O(1).
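
A back-of-the-envelope derivation, assuming for simplicity that the capacity doubles on each growth (the real ArrayList grows by about 1.5x, which changes only the constant). Starting from capacity 1, the total cost of n appends is

$$n + (1 + 2 + 4 + \dots + n) \le n + 2n = 3n = O(n),$$

where the first term counts the appends themselves and the sum counts the element copies performed at each growth. Dividing by n gives O(1) amortized per append.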

The "asymptotic complexity" of operations n add is O (n). But here we are talking about the complexity of n operations, not the complexity of one operation. This is not the strictest way to express it ("Asymptotic Complexity"), but anyway. The amortized complexity of n operations is even less.

Finally, for void add(int index, E element), the complexity of a single operation is always O(n). If it also triggers growth, that is O(n) + O(n) [grow + insert], but 2 * O(n) is still just O(n).
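
Continuing the SimpleArrayList sketch above, a hypothetical insert method shows where that O(n) lives: the tail of the array has to be shifted right regardless of whether the capacity grows.

```java
// Insert at an arbitrary index: O(n) because of the shift.
public void add(int index, Object element) {
    if (index < 0 || index > size) {
        throw new IndexOutOfBoundsException("index: " + index + ", size: " + size);
    }
    if (size == elements.length) {
        grow(); // O(n), but O(n) + O(n) is still O(n)
    }
    // Shift elements[index .. size-1] one slot to the right: O(size - index) = O(n).
    System.arraycopy(elements, index, elements, index + 1, size - index);
    elements[index] = element;
    size++;
}
```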









