What are abstract patterns?

I am learning Haskell and trying to understand the Monoid typeclass. I am reading the Haskell Book at the moment, and it says the following about the pattern (monoid):

One of the finer points of the Haskell community is its tendency to recognize abstract patterns in code that have well-defined, legitimate representations in mathematics.

What does the author mean by "abstract patterns"?


1 answer


Abstract in this sense is the opposite of concrete. This is probably one of the key things to understand about Haskell.

What do I mean by that? Well, most of the values in Haskell are concrete. For example 'a' :: Char. The letter 'a' is a Char value, and that is a concrete value. Char is a concrete type. But the number 1 :: Num a => a is actually a value of any type, as long as that type has the set of functions that the Num class specifies as required. That is an abstract value! We can have abstract values, abstract types, and therefore abstract functions. When the program is compiled, the Haskell compiler will choose a specific concrete type that satisfies all of our requirements.

Haskell, at its core, is a very simple, small, but incredibly flexible language. It actually looks a lot like mathematical expressions. This makes it very powerful. Why? Because most of the things that are built-in language constructs in other languages are not built directly into Haskell, but are instead defined in terms of this simple core.

One of the main parts of that core is the function, and it turns out that most computation can be expressed in terms of functions. Since so much of Haskell is simply defined in terms of this small, simple core, we can extend it to pretty much anything we can imagine.

Typeclasses are probably the best example of this. Monoid and Num are examples of typeclasses. These are constructs that allow programmers to use an abstraction such as a function across multiple types, while only having to define it once per type. Typeclasses allow the same function names to be used across a whole range of types, as long as we can define those functions for those types. Why is this important or useful? Well, if we can recognize a pattern, for example across all numbers, and we have a mechanism for talking about all numbers in the language itself, then we can write functions that work on all numbers at once. This is an abstract pattern. You will notice that some Haskellers are very interested in the branch of mathematics called Category Theory. This branch is pretty much the mathematical study of abstract patterns. Contrast this ability to encode such things with other languages, where patterns noticed by the community are usually much less rigorous, must be written out by hand each time, and carry no respect for their mathematical nature. The beauty of following the mathematics is the enormous amount of material we get for free by making our language closer to mathematics.
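As a small sketch of "one function for all numbers at once" (my own illustrative example): a function written once against the Num abstraction works for every numeric type.

```haskell
-- 'double' is defined once, yet works for Int, Double, Rational, ...
-- because its type only demands a Num instance.
double :: Num a => a -> a
double x = x + x

main :: IO ()
main = do
  print (double (3 :: Int))
  print (double (1.5 :: Double))
```

We never wrote a doubling function for Int or for Double specifically; the abstraction covers both.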

There is a good explanation of these basics, including typeclasses, in the book I helped the author with: http://www.happylearnhaskelltutorial.com/1/output_other_things.html



Since functions are written in a very general way (Haskell places little or no limitation on our ability to express things), we can write functions whose types express things like "any type, provided it is a Monoid". These are called type constraints, as mentioned above.
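A sketch of such a constrained function (combineAll is my own name for illustration; the standard library already provides this as mconcat):

```haskell
-- Works on any type at all, provided it is a Monoid:
-- combine a whole list of values into one.
combineAll :: Monoid m => [m] -> m
combineAll = foldr (<>) mempty

main :: IO ()
main = do
  putStrLn (combineAll ["ab", "cd", "ef"])
  print (combineAll [[1, 2], [3], [4]] :: [Int])
```

The single definition handles strings, lists, and every other Monoid instance without change.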

In general, abstractions are very useful because we can, for example, write a single function that works across a whole range of types, which means we can often find functions that do exactly what we want on our own types if we just make them instances of the relevant typeclasses. An example of this is the Ord class. Making a type we define ourselves an instance of Ord gives us a whole bunch of sorting and comparison functions for free.
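To make the Ord point concrete (a hypothetical type of my own, not from the answer): deriving Ord on a fresh type immediately unlocks the standard comparison and sorting machinery.

```haskell
import Data.List (sort)

-- A type we define ourselves; deriving Ord (ordered by constructor
-- position) gives us comparisons and sorting for free.
data Size = Small | Medium | Large
  deriving (Show, Eq, Ord)

main :: IO ()
main = do
  print (sort [Large, Small, Medium])
  print (max Small Large)
```

We wrote no comparison code at all, yet sort, max, min, (<), and friends all work on Size.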

This is, in my opinion, one of the most exciting parts of Haskell, because while most other languages also allow you to be very general, they mostly place severe limits on how expressive you can be with that generality, so they are also less effective. (This is because they are less precise in what they say, because their types are less "defined".)

This is how we can reason about the "possible values" of a function, and it is not limited to Haskell. The more information we encode at the type level, the further towards the specific end of the expressiveness spectrum we move. For example, take the classic case of the function const :: a -> b -> a. This type says that a and b can be absolutely any types at all, including the same type if you like. Because the second parameter may be a different type than the first, we can deduce that the function really only has one possible implementation. It cannot return an Int unless we give it an Int as its first value, because the return type is a, not just any type, right? Therefore, we know that the only value it can return is the first value! The functionality is defined right there in the type! If that doesn't impress you, I don't know what will. :)
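The argument above can be checked directly (myConst is my own name, written out to mirror the Prelude's const):

```haskell
-- The type a -> b -> a is so general that only one total
-- implementation exists: return the first argument unchanged.
myConst :: a -> b -> a
myConst x _ = x

main :: IO ()
main = do
  print (myConst 'x' (42 :: Int))
  print (const "first" [1, 2, 3 :: Int])  -- the Prelude's const behaves the same
```

There is simply nothing else a total function of that type could do with its arguments.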

As we move into dependent types (that is, type systems where types are first-class, which also means that ordinary values can be encoded in the type system), we can get closer and closer to having the type pin down the possible implementations. The kicker, however, is that this doesn't force us to pin down the implementation if we don't want to, because we control how abstract we are while maintaining expressiveness and high fidelity. It's quite fun and surprisingly powerful.

A lot of mathematics can be expressed in the language that underlies Haskell, the lambda calculus.
