Imperative vs Functional - understanding the von Neumann bottleneck

In Coursera's course, Functional Programming in Scala, Martin Odersky talks about how imperative programming is constrained by the von Neumann bottleneck because it is heavily concerned with mutable state, and therefore with assignment and dereferencing.

The von Neumann bottleneck is the read/write latency between the processor and memory.

I am struggling to understand two things and hope someone can shed some light on them:

  • If we use immutable objects when writing a Scala program, we still have an assignment when we initialize an immutable object with data at creation time, just no later re-assignment. When we want to dereference an immutable object, there is still a chance that it will no longer be in the cache and will need to be fetched again from main memory -> latency.

    I am struggling to understand how using immutable data structures helps with the von Neumann bottleneck. Can anyone help me assess the cases where this happens?
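    For concreteness, here is a minimal Scala sketch of the situation described above (names are illustrative): a single assignment at initialization, no later re-assignment, and structural sharing when a "modified" version is built, so that cells already in cache can be reused rather than copied:

    ```scala
    // One assignment at initialization; a val cannot be re-assigned.
    val xs = List(1, 2, 3)
    // xs = List(4, 5)          // would not compile

    // "Updating" builds a new value; the tail of ys IS xs (shared, not copied),
    // so the existing cells are not touched or duplicated in memory.
    val ys = 0 :: xs

    println(ys)                 // List(0, 1, 2, 3)
    println(xs)                 // List(1, 2, 3) - unchanged
    println(ys.tail eq xs)      // true: same object, structure is shared
    ```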

  • In his lecture, Martin Odersky states the following when talking about the von Neumann bottleneck:

    Imperative programming conceptualizes programs word by word, which becomes a scalability issue because we are dealing with structured data at too low a level. Functional programming languages, in general, tend to focus on defining theories of, and techniques for working with, higher-level abstractions such as collections, polynomials, documents, etc.

    I understand that using higher-level abstractions can really help a developer reason about their programs, but how do abstractions help solve the von Neumann bottleneck?



1 answer


You should read the original paper by John Backus, "Can Programming Be Liberated from the von Neumann Style? A Functional Style and Its Algebra of Programs." It speaks of two kinds of bottleneck: one is a physical hardware constraint, and the other is a conceptual bottleneck formed by the way programmers think about languages. On the second question: because earlier languages were close to their corresponding hardware implementations, the programmer's mind was trained to simulate a sequential flow of events. Functional languages give us a new way to look at programs, one in which concurrency and whole-dataset operations come naturally.
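To make the word-at-a-time vs whole-dataset contrast concrete (a sketch of the general idea, not code from the lecture):

```scala
// Imperative, "word-at-a-time": the program prescribes each memory access
// and each step of the traversal, in a fixed sequential order.
def sumImperative(a: Array[Int]): Int = {
  var total = 0
  var i = 0
  while (i < a.length) {
    total += a(i)
    i += 1
  }
  total
}

// Functional, collection-level: states WHAT is computed; the traversal
// strategy is left to the library, which is free to reorder or parallelize.
def sumFunctional(a: Array[Int]): Int = a.sum

println(sumImperative(Array(1, 2, 3)))  // 6
println(sumFunctional(Array(1, 2, 3))) // 6
```

The imperative version bakes the sequential flow of events into the program text; the functional version leaves that decision open.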

On the first question, I would like to repeat this comment from wiki.c2.com:



"What does the choice of programming language have to do with the hardware? A functional language that is compiled to run on a von Neumann machine will still suffer from the bottleneck." Answer: ReferentialTransparency - which makes parallel computation much more convenient (and can be automated). Efficient parallelization of imperative languages is still an active research topic.

http://wiki.c2.com/?CanProgrammingBeLiberatedFromTheVonNeumannStyle
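A small Scala sketch of that point (the function name is illustrative): because a referentially transparent call can always be replaced by its value, independent calls may be evaluated in any order, including concurrently, without changing the result:

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

// Referentially transparent: square(3) can always be replaced by 9,
// so the order in which these calls run cannot affect the outcome.
def square(n: Int): Int = n * n

// Evaluate each call on the thread pool, in whatever order it schedules them.
val parts  = List(1, 2, 3, 4).map(n => Future(square(n)))
val result = Await.result(Future.sequence(parts), 5.seconds).sum

println(result) // 1 + 4 + 9 + 16 = 30
```

With mutable shared state in the picture, this reordering would not be safe in general; referential transparency is what licenses it.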
