Why are fold and reduce considered fundamental, when surely everything is defined in terms of cons, car and cdr?

We can see that we can use reduce (foldl) as a function with which to define other higher-order functions such as map, filter and mapcat:

(defn mapl [f coll]
  (reduce (fn [r x] (conj r (f x)))
          [] coll))

(defn filterl [pred coll]
  (reduce (fn [r x] (if (pred x) (conj r x) r))
          [] coll))

(defn mapcatl [f coll]
  (reduce (fn [r x] (reduce conj r (f x)))
          [] coll))
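
A quick REPL check: these behave like their clojure.core counterparts, except that they return vectors, since the accumulator starts as []:

(mapl inc [1 2 3])              ;=> [2 3 4]
(filterl odd? [1 2 3 4 5])      ;=> [1 3 5]
(mapcatl (fn [x] [x x]) [1 2])  ;=> [1 1 2 2]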


We can also do this from the foldr point of view. Here are map and filter in terms of foldr, from Rich Hickey's Transducers talk (at 5:25):

(defn mapr [f coll]
  (foldr (fn [x r] (cons (f x) r))
         () coll))

(defn filterr [pred coll]
  (foldr (fn [x r] (if (pred x) (cons x r) r))
         () coll))
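
Using the foldr defined further down (in terms of first, rest and cons), a quick check:

(mapr inc '(1 2 3))          ;=> (2 3 4)
(filterr even? '(1 2 3 4))   ;=> (2 4)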


Now we can define map, foldl (reduce) and foldr in terms of first, rest and cons (car, cdr and cons):

;; Right fold: f combines (first xs) with the folded rest.
(defn foldr [f z xs]
  (if (empty? xs)
    z
    (f (first xs) (foldr f z (rest xs)))))

;; Left fold: accumulates from the left, like reduce.
(defn foldl [f z xs]
  (if (empty? xs)
    z
    (foldl f (f z (first xs)) (rest xs))))

(defn map [f lst]
  (if (empty? lst)
    '()
    (cons (f (first lst)) (map f (rest lst)))))
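
As a quick illustration of how these associate: foldr with cons rebuilds the list as-is, while foldl with a flipped cons reverses it:

(foldr cons '() '(1 2 3))                        ;=> (1 2 3)
(foldl (fn [acc x] (cons x acc)) '() '(1 2 3))   ;=> (3 2 1)
(map inc '(1 2 3))                               ;=> (2 3 4)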


My question is: why are fold and reduce considered fundamental, when of course everything is defined in terms of cons, car and cdr? Doesn't that seem like working at the wrong level?

+3




3 answers


I think Rich Hickey explained this in the Transducers talk.

He sees fold as a fundamental concept of transformation: it does not need to know what structure it is working with, nor how to work with that structure.



You have just defined fold in terms of itself, first and rest (car and cdr). What Rich argues is that fold is itself an abstract concept, separate from the data structure it operates on, and as long as we provide it with functions that actually work on that data structure, it will behave as expected.

So in the end, it's all about separation of concerns and reuse.
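
As a rough sketch of that separation (the names fold-with and kons below are made up for this example), fold can be written once, and only the structure-specific operations are swapped in:

;; A left fold written once, parameterised over the operations that
;; know how to take the structure apart (illustrative helper only).
(defn fold-with [done? head tail]
  (fn step [f z xs]
    (if (done? xs)
      z
      (recur f (f z (head xs)) (tail xs)))))

;; Ordinary sequences, using the usual seq operations:
(def fold-seq (fold-with empty? first rest))
(fold-seq + 0 '(1 2 3))   ;=> 6

;; A toy cons-cell representation with its own accessors:
(defn kons [x y] [x y])
(def fold-kons (fold-with nil? first second))
(fold-kons + 0 (kons 1 (kons 2 (kons 3 nil))))   ;=> 6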

+6




Folds are a unifying, principled structure for data processing, because they follow the definition of inductive data; they are not ad hoc. cons/car/cdr are the building blocks for constructing data (and code), but in an unprincipled way: we can build anything and process it ad hoc. Folds are higher level, more disciplined, more predictable, and easier to reason about.

Given the specific implementations of map, filter and mapcat for lists, we can see something common in them: a code structure that follows the data structure (a list built from cons cells, which correspond to the two arguments of the combining function). And that common thing is fold.



In that talk, Rich Hickey abstracts over the step function, but he does not abstract away from the data: any combining function used to fold over labelled trees would have to take three arguments, not two. So what his -ing functions do is still folding, conceptually; they just abstract over the specific implementation of lists, whether as cons cells, hashed arrays of arrays, or whatever.
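
A hedged sketch of that point, using a made-up [value left right] tree representation: the combining function for a labelled binary tree takes three arguments, mirroring the shape of the data, just as the list fold's two-argument function mirrors the two fields of a cons cell.

;; Fold over a labelled binary tree; f gets the node's value and the
;; folded results of the left and right subtrees.
(defn tree-fold [f z tree]
  (if (nil? tree)
    z
    (let [[value left right] tree]
      (f value (tree-fold f z left) (tree-fold f z right)))))

;; Sum of all labels:
(tree-fold (fn [v l r] (+ v l r)) 0 [1 [2 nil nil] [3 nil nil]])   ;=> 6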

0




Most Lisp operators have a counterpart in terms of which they can be defined, or vice versa. This property is called "metacircularity". There is no single set of basic operators that constitutes the minimal basis for the full programmability the language provides.

You can read more on this page: http://home.pipeline.com/~hbaker1/MetaCircular.html

0








