Complexity analysis in Haskell

positions2              :: (Eq a) => a -> [a] -> [Int]
positions2              = f where
    f aa aas        = filter (g aa aas) [0 .. (length aas - 1)]
    g aa aas it     = aa == aas !! it

      

This code should find the positions of the given element in the given list.

To work out its complexity, I thought about filter taking the function g and the list [0 .. length - 1]. Now I can't figure out whether the complexity of positions2 will be O(n), or whether filter introduces another loop on top of that.

Please also suggest whether there is a more compact way to write this with lower complexity.

+3




3 answers


  • length has complexity O(n).
  • [0..x] has complexity O(n).
  • (!!) has complexity O(n).

Your naive implementation is quadratic because of the nested (!!).
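To see where the linear cost of (!!) comes from, here is a simplified sketch of a list-indexing function in the same spirit (my own illustration, not the actual Prelude definition, which also rejects negative indices):

index :: [a] -> Int -> a
index (x:_)  0 = x                      -- found it: took k steps to get here
index (_:xs) k = index xs (k - 1)       -- walk past one cons cell per step
index []     _ = error "index: out of range"

Reaching index k walks past k cons cells, so each lookup costs O(k); summing that over all the indices probed by the filter gives the quadratic total.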

Lists are lazy here, so the filter and the list generator fuse into O(n) work plus a constant amount per element (thanks to laziness, list generation and filtering happen in a single pass, effectively).



That is, the generate-and-filter work is O(n + n*k) with lazy lists, versus O(2*n) with a strict list, where k is the cost of allocating a single cons cell.

However, the linear-time indexing makes it quadratic anyway. Note that with strict, packed vectors this would be O(n + n*j), i.e. linear, because the lookup cost j is constant; with a log-structured lookup it would be O(n + n*log n).
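As a sketch of that last point (my own addition, assuming the vector package is available; positionsV is a hypothetical name, not from the original answer): with an unboxed vector the lookup cost is constant, and elemIndices performs this search directly in linear time.

import qualified Data.Vector.Unboxed as U

-- Unboxed vector lookup is O(1), so collecting the matching indices is linear.
positionsV :: Int -> U.Vector Int -> [Int]
positionsV x v = U.toList (U.elemIndices x v)

main :: IO ()
main = print (positionsV 3 (U.fromList [1,2,3,4,1,3,4]))   -- [2,5]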

+9




Linear complexity

positions2 :: Eq a => a -> [a] -> [Int]
positions2 x = map fst . filter ((x==).snd) . zip [0..]

main = print $ positions2 3 [1,2,3,4,1,3,4]

      



(I suggest you read and understand exactly how it works)

(Your code takes quadratic time because (!!) itself takes linear time.)
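If it helps, here is one way to read that pipeline, rewritten as a list comprehension (my own paraphrase of the same idea; positions2' is a hypothetical name):

-- zip [0..] xs pairs every element with its index; filter keeps the pairs
-- whose element equals x; map fst keeps only the indices.
positions2' :: Eq a => a -> [a] -> [Int]
positions2' x xs = [ i | (i, a) <- zip [0 ..] xs, a == x ]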

+5




First, a few conversions:

positions2              :: (Eq a) => a -> [a] -> [Int]
positions2              = f where
    f aa aas        = filter (g aa aas) [0 .. (length aas - 1)]
    g aa aas it     = aa == aas !! it
-- ==
positions2 aa aas  = filter (g aa aas) [0 .. (length aas - 1)]
    where
    g aa aas it     = aa == aas !! it
-- ==
positions2 aa aas  = [it | it <- [0 .. (length aas - 1)], aa == (aas !! it)]
{- 
positions2 aa aas = for each it in [0 .. (length aas - 1)]
                      if (aas !! it) == aa
                        emit it
-}

      

Now you can clearly see that for each it, the list aas is traversed again from its beginning by (!!). This is the classic quadratic behavior: you perform 1 + 2 + 3 + ... + n = O(n^2) steps by the time the result of positions2 aa aas has been examined in full (the returned list is consumed all the way to its end).
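To make that sum concrete (my own numbers, not part of the original answer): for a list of 1000 elements the (!!) calls alone walk past 1 + 2 + ... + 1000 = 500500 cons cells, whereas a single pass touches only 1000.

-- Total cells walked by the quadratic version versus one linear pass,
-- for an illustrative list length of n = 1000:
stepsQuadratic, stepsLinear :: Int
stepsQuadratic = sum [1 .. 1000]   -- 500500, i.e. n*(n+1)/2
stepsLinear    = 1000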

But you are probing steadily increasing indices, so each lookup could start from the previous position in the list (not from its beginning) and advance by only one position each time (not by the full index it, as (!!) does):

positions2 aa aas = g 0 aas
   where
      g i (a:as) | a == aa = i : g (i+1) as
                 | otherwise =   g (i+1) as
      g _ [] = []

      

Here you can see how incrementing the index by one (i --> i+1) is interwoven with advancing along the list one position at a time ((a:as) --> as).

Going back to iterating over the list itself, this weaving (which is, in fact, more like zipping) is achieved with zip:

positions2 aa aas = [it | (a,it) <- zip aas [0 .. (length aas - 1)]
                        , a == aa]
{-
    = for each a in aas while incrementing it by 1 from 0 to (length aas - 1),
        if a == aa
          emit it
-}

      

This is a clear and obviously O(n) solution. And since Haskell lists are lazy and zip stops when the shorter list ends,

positions2 aa aas = [it | (a,it) <- zip aas [0 ..], a == aa]

      

(which is equivalent to the code in the other answer here).
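As a quick sanity check (my own addition, reusing the sample input from the other answer), the two zip-based versions agree:

positionsA, positionsB :: Eq a => a -> [a] -> [Int]
positionsA x      = map fst . filter ((x ==) . snd) . zip [0 ..]
positionsB aa aas = [ it | (a, it) <- zip aas [0 ..], a == aa ]

main :: IO ()
main = print ( positionsA 3 [1,2,3,4,1,3,4]
             , positionsB 3 [1,2,3,4,1,3,4] )   -- ([2,5],[2,5])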

+1








