Rewriting imperative for loops in declarative style in Scala
How can I rewrite the following loop (pattern) in Scala using higher order built-in functions or tail recursion?
This is an example of an iteration pattern in which you perform a computation (like comparison) on two elements of a list, but only if the second occurs after the first in the original input. Note that step +1 is used here, but in general it can be + n.
public List<U> mapNext(List<T> list) {
    List<U> results = new ArrayList<>();
    for (int i = 0; i < list.size() - 1; i++) {
        for (int j = i + 1; j < list.size(); j++) {
            results.add(doSomething(list.get(i), list.get(j)));
        }
    }
    return results;
}
So far I have come up with this in Scala:
def mapNext[T, U](list: List[T])(f: (T, T) => U): List[U] = {
  @scala.annotation.tailrec
  def loop(ix: List[T], jx: List[T], res: List[U]): List[U] = (ix, jx) match {
    case (_ :: _ :: is, Nil)      => loop(ix, ix.tail, res)
    case (i :: _ :: is, j :: Nil) => loop(ix.tail, Nil, f(i, j) :: res)
    case (i :: _ :: is, j :: js)  => loop(ix, js, f(i, j) :: res)
    case _                        => res
  }
  loop(list, Nil, Nil).reverse
}
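For reference, a quick self-contained check of the attempt above (the object name, integer addition as the pairwise function, and the sample values are mine, not part of the question):

```scala
// Sanity check of the tail-recursive attempt above; `_ + _` stands in
// for the pairwise function f.
object MapNextCheck {
  def mapNext[T, U](list: List[T])(f: (T, T) => U): List[U] = {
    @scala.annotation.tailrec
    def loop(ix: List[T], jx: List[T], res: List[U]): List[U] = (ix, jx) match {
      case (_ :: _ :: is, Nil)      => loop(ix, ix.tail, res)
      case (i :: _ :: is, j :: Nil) => loop(ix.tail, Nil, f(i, j) :: res)
      case (i :: _ :: is, j :: js)  => loop(ix, js, f(i, j) :: res)
      case _                        => res
    }
    loop(list, Nil, Nil).reverse
  }

  // All ordered pairs (i, j) with j after i: (1,2)(1,3)(1,4)(2,3)(2,4)(3,4)
  val sums: List[Int] = mapNext(List(1, 2, 3, 4))(_ + _) // List(3, 4, 5, 5, 6, 7)
}
```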
Edit: To all contributors, I only wish that I could accept each answer as a solution :)
After deleting my first attempt at an answer, I gave it some more thought and came up with another, at least shorter, solution.
import scala.annotation.tailrec

def mapNext[T, U](list: List[T])(f: (T, T) => U): List[U] = {
  @tailrec
  def loop(in: List[T], out: List[U]): List[U] = in match {
    case Nil          => out
    case head :: tail => loop(tail, out ::: tail.map { f(head, _) })
  }
  loop(list, Nil)
}
I would also recommend the enrich-my-library pattern for adding the mapNext function to the List API (or, with some adaptation, to any other collection).
import scala.annotation.tailrec

object collection {
  object Implicits {
    implicit class RichList[A](private val underlying: List[A]) extends AnyVal {
      def mapNext[U](f: (A, A) => U): List[U] = {
        @tailrec
        def loop(in: List[A], out: List[U]): List[U] = in match {
          case Nil          => out
          case head :: tail => loop(tail, out ::: tail.map { f(head, _) })
        }
        loop(underlying, Nil)
      }
    }
  }
}
Then you can use the function like this:
list.mapNext(doSomething)
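Put together, a minimal runnable sketch of the enrichment (the object names `Enrichment` and `Demo`, the `concat` stand-in for doSomething, and the sample values are mine):

```scala
import scala.annotation.tailrec

// Self-contained sketch of the enrich-my-library pattern from above.
object Enrichment {
  implicit class RichList[A](private val underlying: List[A]) extends AnyVal {
    def mapNext[U](f: (A, A) => U): List[U] = {
      @tailrec
      def loop(in: List[A], out: List[U]): List[U] = in match {
        case Nil          => out
        case head :: tail => loop(tail, out ::: tail.map { f(head, _) })
      }
      loop(underlying, Nil)
    }
  }
}

object Demo {
  import Enrichment._
  def concat(a: String, b: String): String = a + b
  // Pairs: (a,b)(a,c)(b,c)
  val result: List[String] = List("a", "b", "c").mapNext(concat) // List(ab, ac, bc)
}
```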
Again, there is a drawback, as concatenating lists is relatively expensive. However, the value assignments inside for comprehensions can be quite inefficient as well (this is a known Scala wart; an improved desugaring of for comprehensions has been proposed for the dotty compiler).
UPDATE
Now that I'm into this, I just can't let go :(
Regarding "Note that step +1 is used here, but in general it can be +n":
I've extended my proposal with some parameters to cover more situations:
import scala.annotation.tailrec

object collection {
  object Implicits {
    implicit class RichList[A](private val underlying: List[A]) extends AnyVal {
      def mapNext[U](f: (A, A) => U): List[U] = {
        @tailrec
        def loop(in: List[A], out: List[U]): List[U] = in match {
          case Nil          => out
          case head :: tail => loop(tail, out ::: tail.map { f(head, _) })
        }
        loop(underlying, Nil)
      }

      def mapEvery[U](step: Int)(f: A => U): List[U] = {
        @tailrec
        def loop(in: List[A], out: List[U]): List[U] = in match {
          case Nil          => out.reverse
          case head :: tail => loop(tail.drop(step), f(head) :: out)
        }
        loop(underlying, Nil)
      }

      def mapDrop[U](drop1: Int, drop2: Int, step: Int)(f: (A, A) => U): List[U] = {
        @tailrec
        def loop(in: List[A], out: List[U]): List[U] = in match {
          case Nil => out
          case head :: tail =>
            loop(tail.drop(drop1), out ::: tail.drop(drop2).mapEvery(step) { f(head, _) })
        }
        loop(underlying, Nil)
      }
    }
  }
}
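To illustrate what the parameters do, here is a self-contained demo (the relevant parts of the enrichment are repeated verbatim so the snippet compiles on its own; the `Demo` object and sample values are mine):

```scala
import scala.annotation.tailrec

object collection {
  object Implicits {
    implicit class RichList[A](private val underlying: List[A]) extends AnyVal {
      def mapEvery[U](step: Int)(f: A => U): List[U] = {
        @tailrec
        def loop(in: List[A], out: List[U]): List[U] = in match {
          case Nil          => out.reverse
          case head :: tail => loop(tail.drop(step), f(head) :: out)
        }
        loop(underlying, Nil)
      }

      def mapDrop[U](drop1: Int, drop2: Int, step: Int)(f: (A, A) => U): List[U] = {
        @tailrec
        def loop(in: List[A], out: List[U]): List[U] = in match {
          case Nil => out
          case head :: tail =>
            loop(tail.drop(drop1), out ::: tail.drop(drop2).mapEvery(step) { f(head, _) })
        }
        loop(underlying, Nil)
      }
    }
  }
}

object Demo {
  import collection.Implicits._
  // mapEvery(1): keep an element, skip one, keep the next, ...
  val every: List[Int] = List(1, 2, 3, 4, 5).mapEvery(1)(x => x) // List(1, 3, 5)
  // mapDrop(0, 0, 1): pair each element with every second later element.
  val pairs: List[Int] = List(1, 2, 3, 4, 5).mapDrop(0, 0, 1)(_ + _) // List(3, 5, 5, 7, 7, 9)
}
```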
list                 // [a, b, c, d, ...]
  .indices           // [0, 1, 2, 3, ...]
  .flatMap { i =>
    val elem = list(i) // Don't redo the access on every iteration of the map below.
    list.drop(i + 1)   // Take only the elements that come after the one we're working on.
      .map(doSomething(elem, _))
  }
// Or with a monad-comprehension
for {
index <- list.indices
thisElem = list(index)
thatElem <- list.drop(index + 1)
} yield doSomething(thisElem, thatElem)
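As a self-contained sketch of the above (the object name, the `allPairs` helper, string concatenation as doSomething, and the sample values are mine):

```scala
// Runnable sketch of the index-based List version; string concatenation
// stands in for doSomething.
object ListPairs {
  def doSomething(a: String, b: String): String = a + b

  def allPairs(list: List[String]): List[String] =
    list.indices.flatMap { i =>
      val elem = list(i) // Don't redo the access on every iteration below.
      list.drop(i + 1).map(doSomething(elem, _))
    }.toList

  // Pairs: (a,b)(a,c)(b,c)
  val result: List[String] = allPairs(List("a", "b", "c")) // List(ab, ac, bc)
}
```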
You start not with the list but with its indices. Then you use flatMap, because each index maps to a list of elements. Use drop to take only the elements that come after the one we are working on, and map over that list to actually perform the computation. Note that the time complexity here is poor: indices/length, flatMap and map are O(n) in the length of the list, while drop and apply are O(n) in their argument.
You can get better performance if you a) stop using a linked list (List is good for LIFO, sequential access, but Vector is better in general), and b) make it a little uglier:
val len = vector.length
(0 until len).flatMap { thisIdx =>
  val thisElem = vector(thisIdx)
  ((thisIdx + 1) until len).map { thatIdx =>
    doSomething(thisElem, vector(thatIdx))
  }
}
// Or
val len = vector.length
for {
thisIdx <- 0 until len
thisElem = vector(thisIdx)
thatIdx <- (thisIdx + 1) until len
thatElem = vector(thatIdx)
} yield doSomething(thisElem, thatElem)
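A runnable sketch of the Vector variant (the object name, the `allPairs` helper, integer addition as doSomething, and the sample values are mine):

```scala
// Runnable sketch of the index-based Vector version; integer addition
// stands in for doSomething.
object VectorPairs {
  def doSomething(a: Int, b: Int): Int = a + b

  def allPairs(vector: Vector[Int]): Vector[Int] = {
    val len = vector.length
    (for {
      thisIdx <- 0 until len
      thisElem = vector(thisIdx)
      thatIdx <- (thisIdx + 1) until len
    } yield doSomething(thisElem, vector(thatIdx))).toVector
  }

  // Pairs: (1,2)(1,3)(1,4)(2,3)(2,4)(3,4)
  val result: Vector[Int] = allPairs(Vector(1, 2, 3, 4)) // Vector(3, 4, 5, 5, 6, 7)
}
```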
If you really need to, you can generalize either version of this code to all IndexedSeqs using some implicit CanBuildFrom parameters, but I won't cover that here.
Here's my take. I think it's pretty readable. The intuition: for each head of the list, apply the function to the head paired with every member of the tail. Then recurse on the tail of the list.
def mapNext[U, T](list: List[U], fun: (U, U) => T): List[T] = list match {
case Nil => Nil
case (first :: Nil) => Nil
case (first :: rest) => rest.map(fun(first, _: U)) ++ mapNext(rest, fun)
}
Here's a sample run:
scala> mapNext(List(1, 2, 3, 4), (x: Int, y: Int) => x + y)
res6: List[Int] = List(3, 4, 5, 5, 6, 7)
This one is not explicitly tail-recursive, but it could easily be rewritten to be.
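One possible tail-recursive rewrite of this answer's function, using an accumulator (a sketch; the object name and the `loop` helper are mine, and the structure mirrors the accumulator approach used in an earlier answer):

```scala
import scala.annotation.tailrec

object MapNextTailrec {
  def mapNext[U, T](list: List[U], fun: (U, U) => T): List[T] = {
    @tailrec
    def loop(rest: List[U], acc: List[T]): List[T] = rest match {
      case Nil => acc
      case first :: tail =>
        // Append this head's pairs, then recurse on the tail.
        loop(tail, acc ::: tail.map(fun(first, _)))
    }
    loop(list, Nil)
  }

  val sums: List[Int] = mapNext(List(1, 2, 3, 4), (x: Int, y: Int) => x + y) // List(3, 4, 5, 5, 6, 7)
}
```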
Recursion is certainly an option, but the standard library offers some alternatives that will have the same iteration pattern.
Here's a very simple setup for demo purposes.
val lst = List("a","b","c","d")
def doSomething(a:String, b:String) = a+b
And here's one way to do what we're after.
val resA = lst.tails.toList.init.flatMap(tl=>tl.tail.map(doSomething(tl.head,_)))
// resA: List[String] = List(ab, ac, ad, bc, bd, cd)
It works, but the fact that there is a map() inside the flatMap() suggests that a for comprehension might be used to good effect.
val resB = for {
tl <- lst.tails
if tl.nonEmpty
h = tl.head
x <- tl.tail
} yield doSomething(h, x) // resB: Iterator[String] = non-empty iterator
resB.toList // List(ab, ac, ad, bc, bd, cd)
In both cases, the toList conversion is used to get back to the original collection type; it may not be necessary, depending on what further processing of the collection is required.