Immutable variables do not need to be copied; that is just one (common) solution. For example, Clojure has immutable data structures that only store the diffs, which is really efficient. And Rust has the concept of memory ownership, so mutable state can only be changed in one place: no concurrency bugs and no overhead.
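To make the structural-sharing point concrete, here is a minimal sketch in Rust (only an illustration of the idea, not how Clojure actually implements it; the List type and prepend function are invented for the example): "adding" to an immutable list creates one new node and reuses all of the existing ones.

use std::rc::Rc;

// A tiny persistent list: Cons cells are never modified after creation.
#[derive(Debug)]
enum List {
    Nil,
    Cons(i32, Rc<List>),
}

// "Adding" an element builds one new node that points at the existing list,
// so the old data is shared, not copied.
fn prepend(tail: &Rc<List>, value: i32) -> Rc<List> {
    Rc::new(List::Cons(value, Rc::clone(tail)))
}

fn main() {
    let base = Rc::new(List::Cons(2, Rc::new(List::Cons(3, Rc::new(List::Nil)))));
    let with_one = prepend(&base, 1);  // conceptually [1, 2, 3]
    let with_zero = prepend(&base, 0); // conceptually [0, 2, 3], sharing base's nodes
    println!("{:?}", with_one);
    println!("{:?}", with_zero);
}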
You still need to copy. He's talking specifically of when one process needs access to data from another process. You either need to have direct access to memory, or you need to get a copy of that data. At the end of the day there is no 3rd option. There may be special options that allow you to copy the data more efficiently, but it's still a copy. Yes, there are ownership cases where a process can access shared memory but can't change it, but in nearly all cases if a process needs access to something it probably also needs to alter it at some point, which means it's going to make a temporary copy to work with. (and if a process really didn't have any need to alter the data, then the ownership feature was unneeded, in other words the use of such a feature directly implies that copies will be made for local use) These types of technologies may move the problem around, or make it easier to deal with, but in the end it still boils down to the same 2 options.
@@phiefer3 I think the speaker may have been speaking rather generally when referring to "processes." Yes, you are correct that if you were to call *fork()* on a process, the child will receive a copy of the parent's memory in order to operate on it concurrently. You are also correct that any IPC mechanism will require either copying or direct mapping of memory (often the former) for two processes to communicate. But these immutability practices apply just as well to parallel threads within a single process, and in that case, deep copies might not be necessary. The most copying you'll need to do is 8 bytes, the length of a 64-bit pointer, because in that scenario, you can pass pointers or shared references around from thread to thread without need for making copies.
@@xplinux22 see, I would consider the use of a pointer or shared reference between threads to be a mutable relationship, since they are both accessing the same memory, and by default would also both be able to make changes to it. You can use some form of pointer management to prevent one thread from making changes (thus making it into an immutable relationship), but that would again mean that in order for one thread to "use" the data from that pointer they're going to need to make a local copy to do so. That's what I meant when I said that no matter what it's always going to boil down to sharing or copying, regardless of whether you're on the scale of processes, threads, or entire systems.
@@phiefer3 Does this account for mutexes and RW locks, though? In that particular case, all the owning thread receives after locking the concurrency primitive is a mutable pointer, not a full copy of the data itself. It has the full right to dereference the pointer and mutate or reassign the data on the other end. The data may be static and not owned by any particular thread, and its initialized lifetime exists throughout the duration of the program. Perhaps we have different definitions of what "using" means?
Also there are occasions where shared data is _not_ made copies of for local use throughout the entire lifetime of the program. Instead, it is initialized once and kept constant throughout the lifetime of the threads executing on it. This precondition that immutable sharing necessitates local copying doesn't always hold true.
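For what it's worth, here is a small Rust sketch of the "share immutable data directly" side of this thread: several threads read the same buffer through a reference-counted pointer, with no locks and no per-thread copy of the payload. (Rust is used purely for illustration; Erlang itself, as noted elsewhere in the comments, copies messages between processes.)

use std::sync::Arc;
use std::thread;

fn main() {
    // One immutable buffer, shared by reference counting.
    let data: Arc<Vec<u64>> = Arc::new((0..1_000u64).collect());

    let handles: Vec<_> = (0..4)
        .map(|_| {
            // Cloning the Arc copies a pointer and bumps a counter, not the data.
            let view = Arc::clone(&data);
            thread::spawn(move || view.iter().sum::<u64>())
        })
        .collect();

    for h in handles {
        println!("sum seen by one thread: {}", h.join().unwrap());
    }
}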
This sounds confused. It seems that he's speaking about non-shared state, not about immutability. Languages like Rust show that you can have mutable non-shared values, or shared immutable values, so non-sharing is not the same thing as immutable state.
An easier way to explain immutability is that your program should basically contain no mutable variables. It is the key rule of functional programming. A class can take on an exponentially large number of different states based on the number of its properties/variables, and this can make it very hard to test and cover all the different state permutations and can lead to unexpected results. With immutability, you can make sure the system will always provide the same output for a particular input, because the system basically becomes a function, like with the equation example.
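A small Rust sketch of that "the system becomes a function" idea (the Counter type is invented for the example): the method with hidden mutable state gives different answers for the same input, while the pure function cannot.

// Hidden mutable state: same input, different output on every call.
struct Counter {
    calls: u32,
}

impl Counter {
    fn next(&mut self, x: u32) -> u32 {
        self.calls += 1;
        x + self.calls
    }
}

// Pure function: the output depends only on the input.
fn add_one(x: u32) -> u32 {
    x + 1
}

fn main() {
    let mut c = Counter { calls: 0 };
    let first = c.next(10);
    let second = c.next(10);
    println!("{first} {second}");                // 11 12
    println!("{} {}", add_one(10), add_one(10)); // 11 11
}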
I agree that everything you've said there is true. But it doesn't tell the whole story. The required complexity of a system is, to a reasonable approximation, a constant value based upon the problem being solved. When you switch to functional programming and immutable state you take complexity out of the state. But the complexity doesn't just disappear. You could frame it like a conservation law: complexity is always conserved. What you do is move the complexity from the state and move it into to the logic of the function itself. So, the relevant question is this: What is easier to understand, and detect and fix when there's a bug? Complexity of state, or complexity of logic? The answer is not obvious.
He's just saying that immutability and functional programming solve some issues with scalability and reliability, but at a price. It depends on the problem you are solving. He's not saying it's the solution for everything.
Oh, see what I'm saying? This was MUCH clearer than the original video. Start with this premise and build on that, and you'll get a great, clear thought... The x=x+1 argument sounded like an advertising slogan with no substance.
8:30 he has the definition of Amdahl's law exactly backwards -- Amdahl's law doesn't even apply to single-core computing (modulo the effect of overlapping IO blocking waits with computing). And processors didn't hit a limit in per-core clock speed in 2004 because of Amdahl's law, there were numerous other reasons to do with heat dissipation (how heat generated scales with clock speed vs. feature size), engineering limitations, and the laws of physics.
Amdahl's law as most people understand it absolutely does apply to single-core computing; its intuitive interpretation is simply that you should focus your efforts optimising the code that takes longest to execute. That said, I've no idea why Cesarini came up with this slightly odd 2004 thing. Possibly getting confused between that and Moore's law?
@@davidf2281 Amdahl's Law is not up for interpretation; it is defined by a mathematical formula involving the inherently serial and inherently parallel sections of a program. Moore's Law is more subjective, and certainly got co-opted from talking about transistor density to talking about overall increases in computing speed. Yes, he probably meant to say Moore's Law rather than Amdahl's Law, but his statements would still be largely incorrect (just less incorrect than as stated with Amdahl's Law).
@@davidf2281 Check Amdahl's original 1967 paper, 4th paragraph, which the Amdahl's Law formula was later extracted from by others. Literally the only context Amdahl applies it to is parallel computing, hence the use of _p_ and _s_ in the formula. Others later observed that this observation could be generalized to other domains of resource optimization.
@@usethefooorce ...which reinforces my original point that it's an interpretation. Amdahl didn't come up with a mathematical formula; others interpreted it as such. Amdahl came up with a useful intuitive idea for optimisation, which applies to both serial and parallel computation.
There is a difference between writing x = x^2 - 1 if x is a *variable* and if x is a *constant*. If x is a *variable*: x = x^2 - 1 means that we need to find the x value (which is the golden ratio) that satisfies the equation and assign it to x. If x is a *constant*: x = x^2 - 1 means that the equation is false and therefore invalid (x ≠ x^2 - 1). Francesco should have written x ≡ x^2 - 1 (the identity symbol), which means that every x value is a solution to the equation. Which in this case it is not, and only then could he show the equation is false.
Somewhat confusing when you talk about Moore's Law (speed doubles every 1.5 years) directly after mentioning Amdahl's Law, but don't say why it hits the limit (the sequential parts). Multicore is not a solution with respect to Amdahl's Law!
Yeah, I was super confused.... Did he mean Moore's Law when he said Amdahl's Law? Because it would mostly make sense if that's what he meant. ...and I understand brain farts happen, but if that's what happened, that's kind of a bad mixup to not receive a correction in editing.
You were very helpful for my hw assignment working on immutability. I struggled with the process of elimination to get the literal syntax. Again, thank you😁
He says that with mutability you share memory and with immutability you copy. But isn't it the opposite? When you give some arguments to a function, if those variables are immutable, it's safe to pass just a reference to the variable, but if those variables are mutable, you should give the function copies instead to be safe.
To my understanding, if you apply his perspective to your example, it is about the whole function being mutable or immutable. So when you run it with the same parameters, either copied or given by reference, a mutable function could return different results based on previous runs. An immutable function will return the same result given the same parameters. But what you are saying is also true, it's just a different issue. In concurrency you really don't want a mutable variable in parallel contexts.
@@grobacz You're talking about a "pure function" (i.e., a function that returns the same value given the same inputs); mutable/immutable refers to being able to change the value stored (or referenced) by a variable.
Most of his argument for immutability is really an argument against shared-memory concurrency (threads), and in favor of Erlang's CSP (Communicating Sequential Processes) model. Erlang happens to use immutability within its processes, but that's by no means a requirement for this model. Also, Erlang's message-passing doesn't actually take advantage of the ability to share immutable memory between threads -- it actually just copies the message to the other Erlang process, even if the other process is local, even if it's inside the same OS process.
In most programming languages, function arguments are passed by value, which basically involves copying the arguments into the function's parameters. So in this case, immutability is achieved by copying (or copying leads to immutability). But if you pass a pointer/reference to a function, then the caller and callee are basically sharing the memory, and a mutation done by the callee will be visible to the caller, and vice versa. So in this case, mutability is achieved by sharing memory (or sharing memory leads to mutability).
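A rough Rust sketch of that copy-vs-share distinction (the vector and the two functions are made up for illustration; Rust makes the copy explicit with clone, where C's pass-by-value does it implicitly for scalars):

// The callee gets its own copy: the caller's data cannot be affected.
fn takes_copy(mut v: Vec<i32>) {
    v.push(99); // mutates the local copy only
}

// The callee shares the caller's memory: the mutation is visible outside.
fn takes_reference(v: &mut Vec<i32>) {
    v.push(99);
}

fn main() {
    let a = vec![1, 2, 3];
    takes_copy(a.clone());
    println!("{:?}", a); // [1, 2, 3]

    let mut b = vec![1, 2, 3];
    takes_reference(&mut b);
    println!("{:?}", b); // [1, 2, 3, 99]
}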
What I like about immutability is that it can be defined recursively: an immutable object is an object where all class variables are final/constant and either a primitive or an immutable object.
You mean like X₂ = X₁^2-1? I don't think that's an 'explanation' so much as an alternative form of notation. The semantics of those two statements are implied to be different by commonplace experience with programming languages of various varieties.
That said, it is interesting to note that conversion to this form (from actual mutable code) is an optimization routine performed by low level language compilers like LLVM (It's called static single assignment)
Also, the statement x = x^2 - 1 doesn't defy the laws of mathematics at all; one possible solution is x = 0.5 + 0.5 * sqrt(5), which where I live you learn at secondary school. I'm guessing this was Cesarini's first and last appearance on the channel :P
Or by, you know, just not being disingenuous about what the notation indicates. It isn't "equals"; it's assignment. Wow, we used a symbol to mean something it doesn't mean in other context. How unheard of and inexcusable.
I feel like the example with the equation at the start is a bit confusing. That's not really mutability? That equals sign is more like == than = as far as computers are concerned, right? I guess it's assignment but, like, bidirectional assignment. You declare that both sides be equal. You don't update x's value with x²−1. As such, x=x²−1 makes perfect sense and has solutions under which that equation holds. It implicitly defines x as a fixed value.
The value of x does change in this example. The right side gets evaluated with whatever the current value of x is, then the value of x gets overwritten with the result of the computation. Because x changes, it demonstrates mutability.
I agree; the interpretation of the = character is an equality in one case and an assignment in the other. The programming languages I'm familiar with use different symbols for the two, for example := for assignment and = for equality, or = and ==.
Think of it like an array (I'll use JavaScript, as that's my domain): let state = []. Now if you have some data (let data = [1, 2, 3]) that has to be added to state, you could do it in two ways. By mutation: state.push(...data), which changes the existing array in place. Or immutably: state = [...state, ...data] or state = state.concat(data), which build a brand-new array and leave the old one untouched.
BASIC used the command "let" followed by the variable (x), an operator (=) and an argument. The double equals sign wasn't a valid argument in BASIC 2.0; I don't know if that ever found its way into later versions though. The "let" command was optional in practice. If you omitted it, most BASIC interpreters executed that line correctly. Of course, BASIC was a botch of a language and probably crippled many potential programmers who just couldn't adjust to properly structured programming.
Oh wow, I somehow, I don't know how, managed to miss the crucial part "until they taught you programming". Disregard my comment. Of course it is absolutely mutability in this case. I was under the impression that he was still talking about math at the time. If it were math, it would be a definition of a fixed point, rather than a mutable assignment. Or alternatively, it would have to be stated as some kind of induction or recursion scheme, something like x[n+1] = x[n]² + 1, defining an entire family of x-es indexed by some index set I with n ∈ I.
Concurrent programming with immutable state is indeed a speed (in some cases) and safety upgrade, but what Cesarini does not tell you is that it is prohibitively expensive. If performance is a strategic ability of your application, you should very rarely touch anything that has to do with immutability, and be extra careful how you manage the complexity that shared state might introduce. If performance is not strategic... you shouldn't touch immutability either, until the problems start. It is easy to branch out data into multiple copies and have parts of your system run in isolation. But it is almost impossible to merge modules together which were designed from the ground up to be completely separate. Also, x = x^2 - 1. Math is the inferior reasoning tool.
the family computer growing up was immutable. sometimes it would even turn itself on in the middle of the night and start speaking Spanish, even after having cut power to the speakers.
At least now I know that if I wanted to talk about immutability, I would start by motivating it with program analysis. Introduce SSA (Static Single Assignment) form. And show how an analyzer (like a compiler) could reason much more easily about a program with this form. And then you can motivate that it's not only easier for tools. It's also (arguably) easier for humans to reason about programs where variables are immutable. And tools can help ensure that.
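A rough sketch of what that transformation looks like, written as ordinary Rust for readability (real SSA lives in compiler IR, not source code): every update of the mutable version becomes a fresh, single-assignment name.

fn main() {
    // Mutable version: one cell, three states over time.
    let mut x = 5;
    x = x * x - 1;
    x = x + 2;
    println!("{x}"); // 26

    // SSA-style version: each value is defined exactly once and never changes,
    // so a tool (or a reader) can reason about every name in isolation.
    let x0 = 5;
    let x1 = x0 * x0 - 1;
    let x2 = x1 + 2;
    println!("{x2}"); // 26
}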
Maybe they CAN share a process even though they are galaxies apart. Everything is mutable. Quantum computing teaches us that state of qubits that can be somewhere else at the same time, meaning in future, your code will pop up on a long distant computer somewhere at the edge of universe.
I *love* playing with buffers and flows through transforming programs running on the gpu and fences and trying to like, tetris stuff so everything is SUPER BUSY and the critical path is minimised hmm
I'm still not understanding the concept of immutability. It would have been nice to see real code that shows this concept, like the other guys do in their videos.
Think of a String, for example. Say you do String str = "Hello"; and then str = str + " World";. Because a String is immutable, " World" does not get appended directly to the original str. Rather, a different location in memory is allocated and the entire combined String is written into that new location.
How does recursive summing of an array work in a functional language with immutable data structures? Do you just create a new array with each recursive call or what?
Recursive summation of a linked list doesn't require copying, since you're just traversing the nodes. For non-functional data structures, though, there are tricks to avoid copying everything.
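For the question above, here is a minimal sketch in Rust rather than a functional language (so it's only an analogy): the recursion walks a borrowed view of the array, and nothing is copied at any step.

// Each recursive call borrows a narrower slice of the same underlying array.
fn sum(xs: &[i64]) -> i64 {
    match xs {
        [] => 0,
        [head, rest @ ..] => *head + sum(rest),
    }
}

fn main() {
    let xs = [1, 2, 3, 4, 5];
    println!("{}", sum(&xs)); // 15
}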
I'm mainly working with C++ and I can't help but dislike having e.g. int be an immutable type in Python. i++ is not supported but i = i+1 is perfectly fine. The first one would change the object behind i and is not supported due to immutability; i = i+1 is okay because it creates a new int and rebinds the name i. This seems like mutability with annoying extra steps. What am I missing? Why is this better? The video did not really answer this, because if we talk about immutable state, being able to rebind the name makes the state mutable again, right?
C/C++ give you lots of freedom when it comes to this: int i => mutable, i++ works; const int i => immutable, neither i++ nor i = i+1 works anymore. And with pointers you can go even further and have pointers to immutable values, immutable pointers to mutable values, or with something like const int* const a non-reassignable pointer to an immutable integer. (Note that references in C++ work a bit differently: you cannot reseat them in the first place, so a plain reference is already like a const pointer that still lets you change the value it refers to, while a const reference is like a const pointer to an immutable value.)
stackoverflow.com/q/29967086/8076041 This SO question talks about a language where you have immutable data with reference rebinding, just like the situation with Python ints or strs. Other languages preclude assignment and only allow definitions, thus enforcing an even higher level of immutability.
Kicking off with an equivocation on assignment and equality does not inspire confidence in the content. The programming equivalent of the mathematical statement "Y = X^2 - 1" is "Y == X^2 - 1", and the result is a bool which depends on the arguments X and Y. Then, when you write "X == X^2 - 1", you're just writing "0 == X^2 - X - 1", and again, the result is a bool which depends on the argument X alone. No problems here at all. Only when you mistakenly replace equality semantics with assignment semantics do you create an issue. Why would one be motivated to change semantics like this? Perhaps to push an ideology? Can this ideology not be supported without dishonesty?
The argument was poorly-constructed, but there is a genuine case for what he's saying - mutability does make it more complex to keep track of a program's state, and immutability goes a long way toward solving this problem by constraining the semantics of your code.
@@jsbarretto Immutability of state arguably makes it easier to track state, that's true. But as I said in another comment, the complexity required to solve some problem is relatively constant. If you remove complexity from state, then you add it to logic. So it's not sufficient to look merely at the complexity of state, observe that it was reduced, and claim victory. The question which needs to be asked is this: Do you want complexity of state, or complexity of logic? What is easier for humans to understand when they come across an unfamiliar code base? What is easier to debug? What is easier to fix if a problem is found? It's absolutely true that immutable state and functional approaches reduce complexity of state. But from this alone we can't deduce that there's a benefit when we look at the whole picture. The answer to which is better is not at all obvious.
And some languages use a leftwards pointing arrow instead of the equals symbol for assignment. The nitpicking of it not agreeing with mathematics seems entirely based on the use of the same symbol. (I guess abstract algebra's use of the plus and multiplication symbols to mean almost anything would annoy him too.)
The thing is, programming doesn’t have to use assignments. It might be better, idk, but you can work with just definitions. That’s what the speaker is doing when he compares the mathematical and the imperative ‘=‘. He’s comparing definition with re-definition.
Copy-per-concurrent-process - when you send data to another process, a copy of the data is sent. This is with the exception of binaries (byte arrays which are used to hold things like Strings), which work the same way Strings work on the JVM: a substring is a reference to a part of the original string. Reference counting is applied, and when the last reference goes out of scope the resource is freed. If you really need to share a big blob of data amongst processes you can do so using binaries. In Erlang every process has its own stack and process dictionary, and there is also a global pool for storing these binaries. As garbage collection is per-stack, as opposed to global (with the exception of binaries), Erlang systems tend to have very predictable and uniform memory allocation/deallocation behavior - which is nice for telecom systems or big web servers. They tend to be pretty easy to program too, as you don't have to worry about random threads corrupting your application state.
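A loose Rust analogy for that "copy the message, share the big binary" behaviour (the channel and the worker are invented for the example; this is not how the BEAM is implemented): small values travel inside the message, while the large immutable blob travels as a reference-counted handle.

use std::sync::{mpsc, Arc};
use std::thread;

fn main() {
    // Large immutable payload: only a handle (pointer + refcount) crosses threads.
    let blob: Arc<[u8]> = Arc::from(vec![7u8; 1_000_000]);
    let (tx, rx) = mpsc::channel();

    let worker = thread::spawn(move || {
        for (id, data) in rx {
            println!("job {id}: {} bytes, first byte {}", data.len(), data[0]);
        }
    });

    for id in 0..3u32 {
        // The small tuple is the "message"; the blob itself is never duplicated.
        tx.send((id, Arc::clone(&blob))).unwrap();
    }
    drop(tx); // closing the channel ends the worker's receive loop
    worker.join().unwrap();
}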
I guess he's thinking of PCs, where it really doesn't matter. Try that stuff when writing assembly for a processor with 224 bytes of RAM, where you have to switch banks for more than 80 bytes. It won't work.
Sure, memory (RAM) is needed, but in functional programming there are such notions as laziness and streaming processing. One technique is to design the program in a way that lets you use just a portion of memory for an arbitrary data volume, if there is no need to load the "whole" state. In case of a shortage of memory on one node/host, you start to scale your program out to many nodes, and that is where Erlang/Elixir shine!
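A tiny sketch of the streaming idea (Rust iterators, purely for illustration): the pipeline below never materialises the whole sequence; each element is produced, used, and dropped one at a time, so memory stays flat regardless of input size.

fn main() {
    // A large "data source" that is never held in memory all at once.
    let total: u64 = (1..=10_000_000u64)
        .filter(|n| n % 3 == 0) // lazy: nothing is materialised yet
        .map(|n| n * n)
        .take(1_000)            // only as much work as the result actually needs
        .sum();
    println!("{total}");
}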
Well, as soon as you copy data, sure, you avoid locality problems, but you're just kicking the can down the road to staleness/cache-invalidation problems... Also, a single sentence to say "oh we'll just merge the data back together when the network comes back" is a terribly insufficient encapsulation of the incredibly fundamental engineering that split-brain reconvergence requires, starting with the very first line of code. Plus, in your trading-algorithm verbal example, you've completely ignored the gritty legal reality conjured when your fairy-tale land of "two immutable copies of the data" trading corp has only N items to buy/sell. What happens when your two split brains sell 1.5(N)? Are Accounting and Legal now just API calls to automatically reconverge?
Did you mean to say "Hoist with your own petard"? I'm a bit disappointed as I thought he would elaborate on how processes working on immutable data can rejoin their results, preferably in an ongoing continuous manner. Without this part, immutable would be just a dumb way to distribute work. Indeed, the most useful tasks that need to be distributed for computation are not independent.
So what you're saying is that my computer needs to have a supercooled quantum entanglement in order to run my one Chrome tab without crashing my system because of the need to avoid mutability? Got it.
Actually you got it backwards; immutability is only recently getting into mainstream production programming. Chrome is a massive weave of mutable data shared between many threads. Why it needs half a gig of RAM just to start up is (partly) down to the risk management associated with mutability.
@Alex Vitkov One example of when you really, really, _really_ want to avoid locks: A game's audio system. Humans are pretty tolerant of timing variation or stalls in the graphics, but we notice similar problems in the audio extremely readily. Locking the audio thread is one of the last things you want your program to do.
Bad way to introduce immutability, because in code and in math immutability is not equivalent. Also, the explanation of the concurrency models could be much better.
Learning Rust at the moment. One issue I'm seeing is it forces you to think about how things are stored in memory. This can be an advantage in some scenarios, but there are many situations where it just doesn't matter.
@Cronicas 559 immutability is a property inherent in more blockchains than Bitcoin. And proof-of-work is not necessary to create immutability. Proof-of-stake and proof-of-authority can serve equally as well to assist creating algorithmic immutability across distributed systems.
@Cronicas 559 if immutability is the only standard by which you judge a blockchain's effectiveness, then sure. Bitcoin is probably your best bet. But if, say, usability is an important factor, maybe you would go with a blockchain built with some extra neat features like Ethereum or Hyperledger for smart contracts, etc.
Yikes!!!! Where the flock did they find this guy???? I would not trust this guy to accurately explain toppings on a pizza. *Immutable* should be a 1 minute video.... including the intro and credits afterwards. Immutable = It cannot be changed. End of story. This is the last guy who should be talking about future programming languages. Wow..... has this guy ever heard of Moore’s Law????? 🤔 I subscribed to this channel less than a week ago. This video is yet another Computerphile video that makes me want to unsubscribe.
Really poor content, with a considerable number of mistakes. 1. x = x^2 - 1 "this is wrong" -> it is not wrong, it just does not mean the same thing as x = x^2 - 1 in computer science. One is an equation / equality, the other is an assignment. Nothing is "wrong" here, they just represent different things... 2. "[concurrency] it is an easier way to reason and an easier way to code a program" -> [citation needed] I did not go through the rest of the video. @computerphile, do not bring "experts" that only know their own view of the world. An expert acknowledges the different tradeoffs of the different programming paradigms and describes their use cases. Such a beautiful topic, so poorly explained.
Depends what you're doing. When you get down to it, computers have mutable memory. So low-level and performance-heavy stuff can't afford to make that assumption.
I used to like immutability. Now I still like immutability. Anyone who knows me knows I like immutability without having to ask if I've changed my mind.
Your preference for Immutability is itself immutable
Well, 3 years later, do you still like immutability?
@@aeuludaghis opinion itself is immutable, so yes.
Mutability is bad, so you're heading in the right direction. But in case you ever want to change your opinion, instead of changing it in place you can construct a new opinion that makes you like mutability.
People need to learn how to explain things. This was brutal.
People who already know about that property don't need this long-winded start, and for newbies this is not sufficient or clear at all, it's all over the place. Pick your battles. :)
omg I'm so glad I was not the only one. I appreciate the guy's efforts, but he says too many things that are pragmatically true for a simple case or his specific example, but aren't true in general or in all realistic use cases, and rather than help expand one's mind, it will result in lazy pattern-recognition. _"Don't use shared memory unless you need speed and don't care if it breaks."_ This is an opinion, and it's an opinion based on current-day thinking and tooling. Guess what this guy sells. Erlang tooling. I think that's why I liked Computerphile academics more than all the other YouTubers.
Welcome to online tutorials! They are either too simple to be usable or assume their audience already has a solid CS background...
Weird, I enjoyed it very much
@@eminence_ Some of this is really interesting and I liked it as well, so I see what you're saying, but in my opinion the opening statement was close to white noise for beginners and completely unnecessary for an advanced crowd. I would think, though, that a video titled like that is mainly for curious people who have only dabbled in these concepts. A much more graphic and patient explanation of "you share what you can and you copy what you can't" is really needed there.
I doubt this channel has the ambition to be the keeper of absolute knowledge. There's plenty of information around if somebody needs to learn more.
Why is it that functional programmers (particularly those who've been interviewed on this channel) have such a palpably hard time reducing ideas to terms approachable by beginners? I'm quite familiar with the concepts being discussed here and this still sounds like a total word salad.
A lot of the concepts were drawn straight from abstract algebra and there's just no other proper way to address them. But it's true that beginners need to put in extra effort to understand and appreciate FP.
Of all the comments on this video, I was most surprised by yours. I have heard plenty of bad explanations about functional programming, and have been trying (and not succeeding) to construct a mental model. This felt like the first explanation that I could truly parse and understand. If you don't mind me asking, what were some of the sticking points for you/how did it become a word salad?
It's really just a side effect of talking about functional programming. If you don't fully understand all the base ideas it's hard to move forward or even code effectively. Whereas with OOP the barriers to entry are fairly low. You can kinda do what you want, especially if it's a dynamic language.
I don't understand why he's calling "keeping a copy of the data for each process" immutability. Immutability is about not changing things. You can certainly do what he's talking about in the middle of the video with mutability.
not everybody's Joe Armstrong, may he rest in peace
I think the distinction made at the beginning how variables differ between mathematics and programming could be better put this way:
In math, an equation like "x = x^2 - 1" is a function that takes an input parameter, and returns a boolean value indicating whether the equation is true or false. A series of equations usually represents using math to transform one equation into another for some purpose (e.g. solving for x), but each of the equations is a standalone statement independent of time (immutable).
In programming, an expression like "x = x^2 - 1" is an instruction to calculate some value using the contents of a particular location in memory, and store it in a particular location in memory. A series of expressions usually represents steps for modifying the state of the memory for some purpose (e.g. computing something), and each of the steps depends on the state of memory left by previous steps at the time it's executed (mutable).
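That distinction can be shown directly in code; here is a small Rust sketch (illustrative only): the first reading asks whether the statement is true for a given x, the second overwrites the memory cell named x.

fn main() {
    let x = 2.0_f64;

    // Mathematical reading: "x = x^2 - 1" is a claim about x, either true or false.
    let holds = x == x * x - 1.0;
    println!("does the equation hold for x = {x}? {holds}"); // false

    // Programming reading: compute the right-hand side, then overwrite x with it.
    let mut x = x;
    x = x * x - 1.0;
    println!("x is now {x}"); // 3
}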
Scribblers yes, this is much better. I would add that this seems to come from a confusion in early languages, most notably C, where the = operator has been conflated with the : operator, which means 'define'. That's why mathematicians will write f := 5 or g : x -> x² or whatever; : is the definition operator, not =.
Furthermore, mutability is essentially an extra but hidden global variable: time.
@@IshayuG, I think assignment and definition are different. Assignments are part of a process and allow mutability, but definitions shouldn't change.
Yeah, I didn't really get the point he was trying to make. The top one was an equality statement and the other (in a programming language) was an assignment statement. If the bottom one was "==" and tested for equality it would fail.
@@michaelsteffensen6844, actually, there are two values for x for which it would evaluate as true.
@@JNCressey Yeah you're right.
The equal sign "=" in most languages is an assignment operator, and is in no way intended to be the same thing as the same symbol in mathematics. Right out of the gate the guy's comparing apples to oranges.
Yeah, that bugged me. I was learning programming fairly young and the assignment operator never once tripped me up. It feels perfectly natural to assign something based on its current value.
The idea is that there's no assignment in mathematics. Ever wondered how physicists use math to model the universe? The Universe is clearly mutable, yet none of the equations they create have mutability, and they still model mutable things. Mutability is a low-level detail of the implementation of computers that should not be exposed that way.
The other problem with mutability is that it creates temporal dependencies between who reads and who writes, but that temporal dependency is never modeled in any mutable programming language (Rust being the exception, to an extent, but still). So it's entirely up to the programmer to check everything.
It's also very hard to think about the past, present and future values of a cell at the same time. No wonder mutability is the second biggest cause of bugs, the first being null pointer errors (this is anecdotal).
Everything that can be done with mutability can be done immutably, with the only exception (for practicality) being state machines used to control hardware (they can also be made totally immutable, but that costs too much stack space).
It's only slow to do so nowadays because the hardware was optimized for mutability.
There's no inherently fast or slow computing; the Lambda Calculus is precisely equivalent to Turing Machines, and the only slow things are things not accelerated by hardware. But soon enough, with 64 cores, it will be coming.
The age of von Neumann machines is coming to an end.
Ever wonder why hardware is not asynchronous and needs a clock? Mutability in hardware.
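For the state-machine point a few lines up, here is a small Rust sketch of a controller written immutably (the Door/Event types are invented for the example): each step takes the old state and returns a new one; nothing is updated in place.

#[derive(Clone, Copy, Debug)]
enum Door { Open, Closed, Locked }

#[derive(Clone, Copy)]
enum Event { Close, Lock, Unlock, OpenUp }

// A pure transition function: old state in, new state out.
fn step(state: Door, event: Event) -> Door {
    match (state, event) {
        (Door::Open, Event::Close) => Door::Closed,
        (Door::Closed, Event::Lock) => Door::Locked,
        (Door::Locked, Event::Unlock) => Door::Closed,
        (Door::Closed, Event::OpenUp) => Door::Open,
        (s, _) => s, // any other event leaves the state as it was
    }
}

fn main() {
    let events = [Event::Close, Event::Lock, Event::Unlock, Event::OpenUp];
    // fold threads the state through the events without ever mutating a cell.
    let final_state = events.iter().fold(Door::Open, |s, &e| step(s, e));
    println!("{final_state:?}"); // Open
}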
I wish I had this guy for concurrency at my third year instead of the fungal humanoid we got twice
I am not excited for my concurrency class, it has a reputation at my school for being soulcrushingly hard
"We'll see a million cores within our lifetimes" Hmmm. I don't want to dismiss it entirely, but I don't think we will.
Moore's "Law" has slowed/ended because we're reaching the physical limits of a transistor's atoms and molecules, which is around 5-7nm. Without a revolutionary/unpredictable new process, or something to drastically reduce diminishing returns on traditional methods, we're very close to the end of transistor miniaturization. So then, since we can't make cores smaller, adding more cores necessarily means adding *physical size* to the chip, and that will be very hard to double consistently every 18 months... mainly because of materials costs, and the difficulties of cooling larger and larger surface areas. And how exactly would a CPU the size of a dinner plate work? :P
A weaker argument against a million cores is that software and operating systems already have trouble utilizing 16- or 32-core CPUs. So there'd need to be a massive, uniform paradigm shift away from traditional software development to "million-core" software development. Perhaps even new fundamentals of computer science would need to be developed. Without that, the economics of selling a million-core CPU don't make sense when 99.9% of the chip is idling. But then again, software doesn't require billion-dollar retooling of manufacturing plants, or run up against the laws of physics, so I feel like there's a higher chance that the software industry could just adapt.
So, a million-core CPU? Maybe, but with all known manufacturing processes, I see the laws of physics and thermodynamics as the biggest hurdles to shifting Moore's Law from clock speeds to cores.
He didn't say that we'd see million-core machines in the general market. He specifically said that we'd see desktops with 64 cores, and 'machines' with a million cores within our lifetime. I'm pretty sure the million-core machines he's speaking of will be custom machines, for specific use cases, in the same vein as supercomputers and such. Not things that anyone is going to have sitting at home for casual use.
@@phiefer3 If you consider the vector stream units in GPUs to be a CPU in and of themselves, then it is likely that supercomputers will exceed a million cores, or maybe already have. They're not completely independent, but they do execute code in parallel.
In the realm of research/academic supercomputing... ok, possibly! That's why I started with "I don't want to dismiss it entirely" :)
Thought experiment: say a manufacturer, as a marketing stunt, produces a single million-core chip (technically functional, but never intending to be used)... does that count? Or if the only way to run a million-core chip is in a supercooled bunker beneath a handful of universities or astrophysics labs... does that count?
I'd argue that Moore's Law is speaking to transistor counts on *generally available* CPUs, and when people say "We'll see a million-core CPU in our lifetime" they don't mean it as a technicality. They usually mean it as a commodity (even if a niche enthusiast commodity).
@@TorgieMadison If you made a processor core equivalent to a Motorola 6800 with 4,000 transistors, you could put 2.5 million of them into the Apple A12X Bionic that is used in iPads. That's just rough math, but it shows you just how many transistors you can get into things now. 10 billion transistors. Wikipedia even lists a 400,000 core experimental chip with over 1 trillion transistors on one "chip".
And that completely dismisses the current trend toward chiplet designs, currently made mainstream by AMD but also being worked on by Intel. It does not have to be a monolithic piece of silicon the size of a dinner plate; that would be neither cost effective nor practical, of course. However, stacked chips (Intel's approach) or separate chiplets in the AMD style can support more cores much more easily. And then of course, if we go to very simple cores but make them massively parallel, like GPUs are built, we have tons of cores already, so it's easy to see that number going up.
Immutability is an important aspect when you deal with concurrency and functional programming, but I didn't like the way he introduced and motivated the topic very much. The '=' sign in mathematics and in programming are two different things. And sure, I remember when I was a teenager and we had our programming courses at school, most of my classmates didn't understand "x = x + 1;" because they interpreted the '=' sign as the comparison of two values and not as the assignment of a value to a variable. For me, introducing immutability with this example makes no sense, because apart from using the same sign, they are two different concepts.
There's an alternative explanation, where = is the same as in math, but you're actually writing "x_{i+1}=x_i^2+1", and your variables are sequences and your program is a recursive definition. And the difference between functional and imperative programming is how that implicit sequence index works. In imperative programming, the index goes up globally at every assignment, while in functional programming, it is higher for code that is working on the next step.
@@iabervon Yes, that's true, but I still think this is bad motivation for immutability. I would have started with something like this:
char string[] = "Hello";
some_function_with_side_effects(string);
and show how side effects can screw you up and why in some scenarios having immutability is a win. But saying something along the lines that programming conflicts with mathematics, is simply not true. The '=' in most programming languages is not the boolean comparison operator.
@RealShaoran : the `=` in maths is also usually not a boolean comparison operator. Sometimes it's a _propositional_ comparison operator. But very often it is indeed just a “define to be equal” statement, which is what C-like languages mean with `=` too. Arguably, this should be made clear by writing `:=`, but often mathematicians don't bother.
Regardless: in maths such definitions are always “forever”; you can't just write `x := x + 1` somewhere later in your proof. If you want to re-use the name `x`, it had better be in a separate _scope_ (separate proof or subproof) so there's no confusion (and it's basically two completely unrelated variables you're dealing with, which just happen to use the same formula symbol). That's also how it works in purely functional programming languages.
@@realshaoran4514 If he used side effects mutating a shared array, it might make it more obvious that Erlang's prohibition against variable reassignment is _entirely_ because the designers want '=' to mean what it means in math, and has nothing to do with concurrency. Elixir is another language that runs on the Erlang VM, still does pattern-matching, still has an immutable heap, so it still supports all the same concurrency features... but it allows variable reassignment.
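A quick Rust analogy for that rebinding-vs-mutation distinction (Rust, not Elixir or Erlang, so only the flavour carries over): shadowing introduces a brand-new immutable value under an old name, which is quite different from updating a value in place.

fn main() {
    // Elixir-style "reassignment" is really rebinding: a new x shadows the old one.
    let x = 1;
    let x = x + 1;
    println!("{x}"); // 2

    // Erlang-style single assignment: without `mut`, a second assignment is rejected.
    let y = 1;
    // y = y + 1; // compile error: cannot assign twice to immutable variable `y`
    println!("{y}"); // 1
}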
No confusion with “x=x+1”. Programming is NOT Math!
Immutable variables are used in every language I have ever programmed in (at least 20 languages). Immutable variables can be accessed by any number of threads without locks or problems. Agreed! The problem is that immutable variables are not the best solution to most programming problems. I am currently programming a C application of over 100,000 lines. I have hundreds of pure functions and dozens of immutable data structures. I use many threads but I still need isolated code/data and internal locks to make my system work.
Programming that deals only with immutable state is like one hand clapping. Immutable variables are not Functional programming. Nobody forces you to continuously change a variable in any language and constants have been a part of programming since day one.
The system I am creating does multi thread processing automatically and has no user defined locks. The code can be written entirely as if it was single thread.
The bottom line is that immutable data is useful but it is only a part of most solutions.
"Programing is not math" that is true becouse that is how you learnt it.
Imagine we get the change to reinvent programing, there are alot of ways it could be made better, dont you agree?
I belive it is closed minded to think something is not confusing becouse you know how it works.
Alot of the modern languages enforce immutable state. Have you tried them? I'm using Elixir at work and it works just fine without mutable state.
Also constants are not the same as immutability. In my experience immutability is difficult and basically not worth the effort in languages that aren't designed for it. In languages that are designed for it state becomes easier to reason about
Murphy's law: If anything can fail, it will fail
Common comment on Murphy's law: Murphy was an optimist…
That's Finagle's Law. Murphy's Law is "If there are two or more ways to do a job and one of those results in a catastrophe, then someone will do it that way."
@@Roxor128 I thought it's just a law used to show the pseudo progress we made and cough cough will make cough
@@userou-ig1ze Nope. It's actually a principle of design. You can prevent a catastrophe by designing things so that either it doesn't matter which way you do it, or by designing things so there's only one way to do it. USB-C plugs are an example of the former. The standard 3.5mm audio plug is an example of the latter.
@@Roxor128 I prefer his more general law: if something can go wrong, it will. I see it happening in real life and I'm trying to prevent it whenever I can.
Demonstration: I read that as "Murray's law".
Immutable variables do not need to be copied; that is just one (common) solution.
For example, Clojure has immutable data structures that only store the diffs, which is really efficient.
And Rust has the concept of memory ownership, so mutable state can only be changed in one place: no concurrency bugs and no overhead.
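The same structural sharing shows up in Erlang too, for what it's worth: prepending to a list reuses the existing cells rather than copying them. A rough sketch:
L1 = [2, 3, 4],
L2 = [1 | L1],   %% [1,2,3,4]; the tail of L2 *is* L1, shared, not copied
L3 = tl(L2).     %% [2,3,4] again, still without any copying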
You still need to copy. He's talking specifically of when one process needs access to data from another process. You either need to have direct access to memory, or you need to get a copy of that data. At the end of the day there is no 3rd option. There may be special options that allow you to copy the data more efficiently, but it's still a copy.
Yes, there are ownership cases where a process can access shared memory but can't change it, but in nearly all cases if a process needs access to something it probably also needs to alter it at some point, which means it's going to make a temporary copy to work with. (and if a process really didn't have any need to alter the data, then the ownership feature was unneeded, in other words the use of such a feature directly implies that copies will be made for local use)
These types of technologies may move the problem around, or make it easier to deal with, but in the end it still boils down to the same 2 options.
@@phiefer3 I think the speaker may have been speaking rather generally when referring to "processes." Yes, you are correct that if you were to call *fork()* on a process, the child will receive a copy of the parent's memory in order to operate on it concurrently. You are also correct that any IPC mechanism will require either copying or direct mapping of memory (often the former) for two processes to communicate.
But these immutability practices apply just as well to parallel threads within a single process, and in that case, deep copies might not be necessary. The most copying you'll need to do is 8 bytes, the length of a 64-bit pointer, because in that scenario, you can pass pointers or shared references around from thread to thread without need for making copies.
@@xplinux22 see, I would consider the use of a pointer or shared reference between threads to be a mutable relationship, since they are both accessing the same memory, and by default would also both be able to make changes to it. You can use some form of pointer management to prevent one thread from making changes (thus making it into an immutable relationship), but that would again mean that in order for one thread to "use" the data from that pointer they're going to need to make a local copy to do so.
That's what I meant when I said that no matter what it's always going to boil down to sharing or copying, regardless of whether you're on the scale of processes, threads, or entire systems.
@@phiefer3 Does this account for mutexes and RW locks, though? In that particular case, all the owning thread receives after locking the concurrency primitive is a mutable pointer, not a full copy of the data itself. It has the full right to dereference the pointer and mutate or reassign the data on the other end. The data may be static and not owned by any particular thread, and its initialized lifetime exists throughout the duration of the program. Perhaps we have different definitions of what "using" means?
Also there are occasions where shared data is _not_ made copies of for local use throughout the entire lifetime of the program. Instead, it is initialized once and kept constant throughout the lifetime of the threads executing on it. This precondition that immutable sharing necessitates local copying doesn't always hold true.
Check out Erlang and Elixir. They get around the mutability problem by passing data as messages. Also, error handling is beautifully implemented.
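A minimal sketch of that message-passing style in Erlang (module and message names here are made up for illustration): each process keeps its "state" as an argument of a recursive loop, so nothing is ever updated in place.
%% counter.erl
-module(counter).
-export([start/0, loop/1]).

start() -> spawn(?MODULE, loop, [0]).

loop(Total) ->
    receive
        {add, N}    -> loop(Total + N);          %% "new state" = a new argument
        {get, From} -> From ! {total, Total},
                       loop(Total)
    end.
And then, for example in the shell:
Pid = counter:start(),
Pid ! {add, 2},
Pid ! {add, 40},
Pid ! {get, self()},
receive {total, T} -> T end.   %% T is 42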
the real question is... KEEP CALM AND WHAT?
... "Let it crash." :D
KEEP CALM AND 2.0
That Erlang philosophy is just lovely, provided you or someone else created the code that cleans up the mess we're in.
And do Erlang !
@@ximalas Joe Armstrong reference :)
His book on Erlang is awesome. Nice video!
If he understands Erlang then he's probably too clever to explain mutability to mortals ;-) /s
This sounds confused. It seems that he's speaking about non-shared state, not about immutability. Languages like Rust show that you can have mutable non-shared values, or shared immutable values, so non-sharing is not the same thing as immutable state.
Exactly.
An easier way to explain immutability is that your program should basically contain no reassignable variables. It is the key rule of functional programming. A class can take an exponentially large number of different states based on the number of its properties/variables, and this can make it very hard to test and cover all the different state permutations, which can lead to unexpected results. With immutability, you can make sure the system will always provide the same output for a particular input, because the system basically becomes a function, like with the equation example.
I agree that everything you've said there is true.
But it doesn't tell the whole story. The required complexity of a system is, to a reasonable approximation, a constant based on the problem being solved. When you switch to functional programming and immutable state you take complexity out of the state. But the complexity doesn't just disappear. You could frame it like a conservation law: complexity is always conserved. What you do is move the complexity out of the state and into the logic of the functions themselves.
So, the relevant question is this: What is easier to understand, and detect and fix when there's a bug? Complexity of state, or complexity of logic?
The answer is not obvious.
@@allmhuran I think the answer to the question is quite subjective for each person (for me it would be logic, but I know some who would answer state).
He's just saying that immutability and functional programming solve some issues with scalability and reliability, but at a price. It depends on the problem you are solving. He's not saying it's the solution for everything.
Oh, see what I'm saying? This was MUCH clearer than the original video.
Start with this premise and build on it, and you'll get a great, clear thought... The x=x+1 argument sounded like an advertisement slogan with no substance.
8:30 he has the definition of Amdahl's law exactly backwards -- Amdahl's law doesn't even apply to single-core computing (modulo the effect of overlapping IO blocking waits with computing). And processors didn't hit a limit in per-core clock speed in 2004 because of Amdahl's law, there were numerous other reasons to do with heat dissipation (how heat generated scales with clock speed vs. feature size), engineering limitations, and the laws of physics.
Amdahl's law as most people understand it absolutely does apply to single-core computing; its intuitive interpretation is simply that you should focus your efforts optimising the code that takes longest to execute. That said, I've no idea why Cesarini came up with this slightly odd 2004 thing. Possibly getting confused between that and Moore's law?
@@davidf2281 Amdahl's Law is not up for interpretation; it is defined by a mathematical formula involving the inherently serial and inherently parallel sections of a program. Moore's Law is more subjective, and certainly got co-opted from talking about transistor density to talking about overall increases in computing speed. Yes, he probably meant to say Moore's Law rather than Amdahl's Law, but his statements would still be largely incorrect (just less incorrect than as stated with Amdahl's Law).
@@usethefooorce Nothing in the mathematical definition of Amdahl's law says anything about parallelism. It's simply often useful in that context.
@@davidf2281 Check Amdahl's original 1967 paper, 4th paragraph, which the Amdahl's Law formula was later extracted from by others. Literally the only context Amdahl applies it to is parallel computing, hence the use of _p_ and _s_ in the formula. Others later observed that this observation could be generalized to other domains of resource optimization.
@@usethefooorce ...which reinforces my original point that's it's an interpretation. Amdahl didn't come up with a mathematical formula; others interpreted it as such. Amdahl came up with a useful intuitive idea for optimisation, which applies to both serial and parallel computation.
There is a difference between writing x = x^2 - 1 if x is a *variable* and if x is a *constant*.
If x is a *variable*: x = x^2 - 1 means that we need to find the x value (which is the golden ratio) that satisfies the equation and assign it to x.
If x is a *constant*: x = x^2 - 1 means that the equation is false and therefore invalid (x ≠ x^2 - 1).
Francesco should have written x ≡ x^2 - 1 (the identity symbol), which claims that every x value is a solution to the equation. Since in this case that isn't so, only then could he show the statement is false.
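(For reference, read as an ordinary equation in one unknown it's just a quadratic, and it does have solutions:
x = x^2 - 1  ⟺  x^2 - x - 1 = 0  ⟺  x = (1 ± √5)/2 ≈ 1.618… or −0.618…
and (1 + √5)/2 is exactly the golden ratio, as others have pointed out.)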
Somewhat confusing when you talk about Moore's Law (speed doubles every 1.5 years) directly after mentioning Amdahl's Law, without explaining why it hits a limit (the sequential parts). Multicore is not a solution with respect to Amdahl's Law!
Yeah, I was super confused.... Did he mean Moore's Law when he said Amdahl's Law? Because it would mostly make sense if that's what he meant. ...and I understand brain farts happen, but if that's what happened, that's kind of a bad mixup to not receive a correction in editing.
1:00
CS Professor: “It defies the laws of mathematics”
***crosses out equation***
The golden ratio: “Am I a joke to you?”
You were very helpful for my hw assignment working on immutability. I struggled with the process of elimination to get the literal syntax. Again, thank you😁
He says that with mutability you share memory and with immutability you copy.
But isn't it the opposite? When you give some arguments to a function, if those variables are immutable, it's safe to pass just a reference to the variable, but if those variables are mutable, you should give the function copies instead to be safe.
To my understanding, if you apply his perspective to your example, it is about the whole function being mutable or immutable. So when you run it with the same parameters, either copied or passed by reference, a mutable function could return a different result based on previous runs. An immutable function will return the same result given the same parameters. But what you are saying is also true, just a different issue. In concurrency you really don't want a mutable variable in parallel contexts.
This guy is either a terrible communicator or not a very trustworthy source of information on immutability in programming languages.
@@grobacz You're talking about a "pure function" (i.e., a function that returns the same value given the same inputs); mutable/immutable refers to being able to change the value stored (or referenced) by a variable.
Most of his argument for immutability is really an argument against shared-memory concurrency (threads), and in favor of Erlang's CSP (Communicating Sequential Processes) model. Erlang happens to use immutability within its processes, but that's by no means a requirement for this model.
Also, Erlang's message-passing doesn't actually take advantage of the ability to share immutable memory between threads -- it actually just copies the message to the other Erlang process, even if the other process is local, even if it's inside the same OS process.
Normally in most programming languages function arguments are passed by value, which basically involves copying the arguments to the function parameters.
So in this case, immutability is achieved by copying (or copying leads to immutability).
But if you pass a pointer/reference to a function, then the caller and callee are basically sharing the memory and mutation done by the callee will be visible to the caller, and vice versa.
So in this case, mutability is achieved by sharing memory (or sharing memory leads to mutability)
The documentary on this was "Colossus: The Forbin Project"
9:15 AMD says hello.
The guy spoke about too many things at once; most went over my head.
wait aren't you a robot? shouldn't you understand this stuff better than the common folk?
Look, you have Nottingham, and you have London. Taxes, death and network partitions. That is Immutability.
I didn't think he was all over the place. Everything he talked about is related.
That feel when x = x^2 - 1 is satisfied by one of the most famous constants in mathematics
golden ratio
What I like about immutability is that it can be defined recursively: an immutable object is an object where all class variables are final/constant and either a primitive or an immutable object.
I really don't understand the issue with x = x^2 - 1; it's easily explained by writing x[n+1] = x[n]^2 - 1, or more properly with subscripts on paper/LaTeX.
you mean like X₂ = X₁^2-1? I don't think thats an 'explanation' so much as it is an alternative form of notation. the semantics of those two statements are implied to be different by commonplace experience with programming languages of various varieties
That said, it is interesting to note that conversion to this form (from actual mutable code) is an optimization routine performed by low level language compilers like LLVM (It's called static single assignment)
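A hand-waved sketch of what that renaming looks like (not actual LLVM output); it also happens to be exactly how you end up writing it in Erlang, where re-binding isn't allowed:
%% mutable pseudocode:  x = 1;  x = x * 2;  x = x + 5;
%% after SSA-style renaming (or simply as Erlang code):
X0 = 1,
X1 = X0 * 2,
X2 = X1 + 5.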
Also the statement x = x^2 - 1 doesn't defy the laws of mathematics at all, one possible solution would be x = 0.5 + 0.5 * sqrt(5), which where I live you learn at secondary school. I'm guessing this was Cesarini's first and last appearance on the channel :P
Or by, you know, just not being disingenuous about what the notation indicates. It isn't "equals"; it's assignment. Wow, we used a symbol to mean something it doesn't mean in other context. How unheard of and inexcusable.
yeah, except he was totally not saying or talking about what you do.
I feel like the example with the equation at the start is a bit confusing. That's not really mutability? That equals sign is more like == than = as far as computers are concerned, right?
I guess it's assignment but, like, bidirectional assignment. You declare that both sides be equal. You don't update x's value with x²+1. As such, x=x²+1 makes perfect sense and has solutions under which that equation holds. It implicitly defines x as one fixed value.
The value of x does change in this example. The right side gets evaluated with whatever the current value of x is, then the value of x gets overwritten with the result of the computation. Because x changes, it demonstrates mutability.
I agree, the interpretation of the = character is in one case an equality and an assignment in the other case. The programming languages I'm familiar with use different ways for both occurrences, for example := for an assignment and = for an equality or = and ==.
Think of it like an array
(I'll use JavaScript, as that's my domain)
let state = []
Now if you have some data (let data = [1, 2, 3]) that has to be added to state, you could do it in 2 ways;
By mutation (push changes the existing array in place):
state.push(...data)
Or immutably (build a new array and rebind state):
state = [...state, ...data]
or state = state.concat(data)
BASIC used the keyword "let" followed by the variable (x), an equals sign (=), and an expression. The double equals sign wasn't valid in BASIC 2.0.
I don't know if that ever found its way into later versions, though.
The "let" command was optional in practice. If you omitted it, most BASIC interpreters executed that line correctly.
Of course, BASIC was a botch of a language and probably crippled many potential programmers who just couldn't adjust to properly structured programming.
Oh wow, I somehow, I don't know how, managed to miss the crucial part "until they taught you programming". Disregard my comment. Of course it is absolutely mutability in this case. I was under the impression that he was still talking about math at the time.
If it were math, it would be a definition of a fixed point, rather than mutable assignment. Or alternatively, it would have to be stated as some kind of induction or recursion scheme: something like x[n+1] = x[n]² + 1, defining an entire family of x-es indexed by some index set I, with n ∈ I.
Concurrent programming with immutable state is indeed a speed (in some cases) and safety upgrade but what Cesarini does not tell you is that it is prohibitively expensive.
If performance is a strategic ability of your application you should very rarely touch anything that has to do with immutability and be extra careful how you manage complexity that shared state might introduce.
If performance is not strategic... you shouldn't touch immutability either, until the problems start. It is easy to branch data out into multiple copies and have parts of your system run in isolation. But it is almost impossible to merge modules together which were designed from the ground up to be completely separate.
Also x=x^2 - 1 . Math is the inferior reasoning tool.
the family computer growing up was immutable. sometimes it would even turn itself on in the middle of the night and start speaking Spanish, even after having cut power to the speakers.
At least now I know that if I wanted to talk about immutability, I would start by motivating it with program analysis. Introduce SSA (Static Single Assignment) form. And show how an analyzer (like a compiler) could reason much more easily about a program with this form.
And then you can motivate that it's not only easier for tools. It's also (arguably) easier for humans to reason about programs where variables are immutable. And tools can help ensure that.
@Ella Blun I don't see your point.
You indeed don't explain something to someone that doesn't want to understand. This goes without saying.
Hii, is there a video about the Mutations? I couldn't find it anywhere.
Maybe they CAN share a process even though they are galaxies apart. Everything is mutable.
Quantum computing teaches us that the state of qubits can be somewhere else at the same time, meaning that in the future your code will pop up on a distant computer somewhere at the edge of the universe.
I *love* playing with buffers and flows through transforming programs running on the gpu and fences and trying to like, tetris stuff so everything is SUPER BUSY and the critical path is minimised hmm
I'm still not understanding the concept of immutability. It would have been nice to see real code that shows this concept, like the other guys do in their videos.
Think of a String, for example. Say you write String str = "Hello"; and then str = str + " World". Because a String is immutable, " World" does not get appended to the original str in place. Instead, a new location in memory is allocated, the entire new String is written there, and str is pointed at it.
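Roughly the same idea in Erlang terms, since that's the language the video is about (just a sketch):
Str1 = <<"Hello">>,
Str2 = <<Str1/binary, " World">>,   %% builds a brand-new binary
Str1.                               %% still <<"Hello">>, untouched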
How does recursive summing of an array work in a functional language with immutable data structures? Do you just create a new array with each recursive call or what?
Recursive summation of a linked list doesn't require copying, since you're just traversing the nodes.
For non-functional data structures though, there are tricks done.
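For instance, summing a list in Erlang just walks the existing cons cells; nothing gets copied (a minimal sketch):
sum([]) -> 0;
sum([Head | Tail]) -> Head + sum(Tail).

%% or tail-recursively, threading the running total through as an argument
sum2(List) -> sum2(List, 0).
sum2([], Acc) -> Acc;
sum2([Head | Tail], Acc) -> sum2(Tail, Head + Acc).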
I'm mainly working with C++ and I can't help but dislike having e.g. int be an immutable type in Python.
i++ is not supported but i = i+1 is perfectly fine.
The first one would change the object behind i and is not supported due to immutability.
i = i+1 is okay because it creates a new int and reassigns the reference i.
This seems like mutability with annoying extra steps.
What am I missing? Why is this better?
The video did not really answer this because if we talk about immutable state, being able to reassign the reference makes the state mutable again right?
C/C++ give you lots of freedom when it comes to this.
int i => mutable, i++ works.
const int i => immutable, neither i++ nor i = i+1 work anymore
And with pointers you can go even further and have pointers to immutable values, immutable pointers to mutable values or with something like const int* const you get a pointer to an immutable integer that cannot be reassigned.
(note that const references in C++ work a bit differently because you cannot reassign them in the first place so a reference already is like an immutable pointer allowing you to change what it points to, a const reference is like an immutable pointer to an immutable type)
stackoverflow.com/q/29967086/8076041
This SO question talks about a language where you have immutable data with reference rebinding, just like the situation with Python ints or strs. Other languages preclude assignment and only allow definitions, thus enforcing an even higher level of immutability.
I raise you "i+=1". To be fair, it's syntactic sugar for "i=i+1", but the "in-place mutation" is syntactically supported where it makes sense.
Kicking off with an equivocation on assignment and equality does not inspire confidence in the content.
The programming equivalent of the mathematical statement "Y = X^2 - 1" is "Y == X^2 - 1", and the result is a bool which depends on the arguments X and Y. Then, when you write "X == X^2 - 1", you're just writing "0 == X^2 - X - 1", and again, the result is a bool which depends on the argument X alone. No problems here at all.
Only when you mistakenly replace equality semantics with assignment semantics do you create an issue.
Why would one be motivated to change semantics like this? Perhaps to push an ideology? Can this ideology not be supported without dishonesty?
The argument was poorly-constructed, but there is a genuine case for what he's saying - mutability does make it more complex to keep track of a program's state, and immutability goes a long way toward solving this problem by constraining the semantics of your code.
@@jsbarretto Immutability of state arguably makes it easier to track state, that's true. But as I said in another comment, the complexity required to solve some problem is relatively constant. If you remove complexity from state, then you add it to logic. So it's not sufficient to look merely at the complexity of state, observe that it was reduced, and claim victory. The question which needs to be asked is this: Do you want complexity of state, or complexity of logic? What is easier for humans to understand when they come across an unfamiliar code base? What is easier to debug? What is easier to fix if a problem is found?
It's absolutely true that immutable state and functional approaches reduce complexity of state. But from this alone we can't deduce that there's a benefit when we look at the whole picture. The answer to which is better is not at all obvious.
And some languages use a leftwards pointing arrow instead of the equals symbol for assignment. The nitpicking of it not agreeing with mathematics seems entirely based on the use of the same symbol. (I guess abstract algebra's use of the plus and multiplication symbols to mean almost anything would annoy him too.)
He's not talking about the meaning of a symbol or equations. He's talking about immutability and using the expression X
The thing is, programming doesn’t have to use assignments. It might be better, idk, but you can work with just definitions. That’s what the speaker is doing when he compares the mathematical and the imperative ‘=‘. He’s comparing definition with re-definition.
Don't communicate by sharing memory, share memory by communicating.
are there extra bits
What about memory usage ?
I guess he _forgot_
Copy-per-concurrent-process: when you send data to another process, a copy of the data is sent. The exception is binaries (byte arrays used to hold things like strings), which work the same way strings work on the JVM: a substring is a reference to a part of the original string. Reference counting is applied, and when the last reference goes out of scope the resource is freed. If you really need to share a big blob of data amongst processes you can do so using binaries. In Erlang every process has its own stack and process dictionary, and there is also a global pool for storing these binaries. Because garbage collection is per-process (as opposed to global, with the exception of binaries), Erlang systems tend to have very predictable and uniform memory allocation/deallocation behaviour, which is nice for telecom systems or big web servers. They tend to be pretty easy to program, too, as you don't have to worry about random threads corrupting your application state.
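A rough illustration of the binary case (the names and sizes here are made up for the example, and the exact cutoff for off-heap, reference-counted binaries is an implementation detail of the BEAM as I understand it):
Big = binary:copy(<<"x">>, 1024 * 1024),   %% a 1 MB binary, stored off-heap and reference counted
Worker = spawn(fun() ->
             receive Blob -> io:format("got ~p bytes~n", [byte_size(Blob)]) end
         end),
Worker ! Big.    %% only a reference travels; the megabyte itself is not copied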
I guess he's thinking of PC where it really doesn't matter. Try that stuff when writing assembly for a processor with 224 bytes of RAM and you have to switch banks for more than 80 bytes. It won't work.
Basically it's not as big of a deal as it seems at first. There are immutable data structures designed to make the data more memory efficient
Sure, memory (RAM) is needed, but in functional programming there are such notions as laziness and streaming. One technique is to design the program so that it only needs a portion of memory for an arbitrary data volume, if there is no need to load the "whole" state at once. And if you run short of memory on one node/host, you start scaling the program out to many nodes, which is where Erlang/Elixir shine!
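Erlang doesn't have laziness built in the way Haskell does, but you can sketch the streaming idea with thunks (illustrative only; nats/1 and take/2 here are made-up helpers, not library functions):
%% an "infinite" stream of integers, represented as [Head | Thunk]
nats(N) -> [N | fun() -> nats(N + 1) end].

%% pull the first K elements, forcing the rest of the stream only as needed
take(0, _) -> [];
take(K, [Head | Thunk]) -> [Head | take(K - 1, Thunk())].

%% take(5, nats(1)) gives [1,2,3,4,5]; memory use depends on K, not on the stream.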
I'll have to re-watch this later. I never got so far in computer science.
Three hip hips for the Erlang tshirt
As an embedded SW engineer this was great to hear. We use both shared memory and message passing on our system :)
Well, as soon as you copy data, sure, you avoid locality problems, but you're just kicking the can down the road to staleness/cache-invalidation problems. Also, a single sentence saying "oh, we'll just merge the data back together when the network comes back" is a terribly insufficient summary of the fundamental engineering that split-brain reconvergence requires, starting with the very first line of code. Plus, in the trading-algorithm example, you've completely ignored the gritty legal reality conjured when your fairy-tale land of "two immutable copies of the data" trading corp has only N items to buy/sell. What happens when your two split brains sell 1.5(N)? Are Accounting and Legal now just API calls to automatically reconverge?
I always thought it was something like speaking muteness not mutations. That makes more sense
Thanks for sharing
Did you mean to say "Hoist with your own petard"?
I'm a bit disappointed, as I thought he would elaborate on how processes working on immutable data can rejoin their results, preferably in an ongoing, continuous manner. Without that part, immutability would just be a dumb way to distribute work. Indeed, the most useful tasks that need to be distributed for computation are not independent.
So what you're saying is that my computer needs to have a supercooled quantum entanglement in order to run my one Chrome tab without crashing my system because of the need to avoid mutability? Got it.
Actually you got it backwards: immutability is only recently getting into mainstream production programming. Chrome is a massive weave of mutable data shared between many threads; the reason it needs half a gig of RAM just to start up is (partly) the risk management associated with mutability.
It's Moore's law, not Amdahl's
Nothing new under the programming sun except the lexicon
Or the syntax.
What is a compute
Bleep bloop science
Cries in Java
parallelism is a subset of concurrency
The viewers should expose themselves to both Erlang and Rust.
That's just a sequence defined by recurrence
x = 1.618 OR -0.618
It's actually 2sin18° but nonetheless
I was thinking the same, but I just said φ.
My thoughts exactly, I don't think they picked the best example to showcase the logic. :p
I see that Immutability's advantage is about convenience more than about speed or performance.
It's mostly about robustness. It's extremely difficult to create a multi-threaded program where various threads all have access to mutable data.
@Alex Vitkov Sure, but locking is a safety measure because the underlying type isn't robust enough to be used in a multi-threaded environment.
@Alex Vitkov One example of when you really, really, _really_ want to avoid locks: A game's audio system. Humans are pretty tolerant of timing variation or stalls in the graphics, but we notice similar problems in the audio extremely readily. Locking the audio thread is one of the last things you want your program to do.
@Alex Vitkov I'd argue the main advantage is one of code readability and maintainability, but I suppose it depends what problem you're solving.
I wish the Sheriff of Nottingham and Robin Hood were at peace over taxation :)
Please make a video about the various threading models, e.g. async/await in JS, locks, mutexes, semaphores in other languages, etc. Thanks!
Yes, you need both in your toolset. However, immutability should be the default. Scala/Haskell etc. get this right; Java doesn't.
If you're looking for some interesting videos in this sort of area, have a search on YouTube for "functional programming".
c# dictionaries, they are talking about you
use ImmutableDictionary ?
And the Rust language (a near-C-like systems language) makes immutability the default!
A bad way to introduce immutability, because in code and in math immutability is not equivalent. The explanation of the concurrency models could also have been much better.
My Ryzen CPU will get better in the future
How long does it take for Intel to offer you a 64 core desktop CPU?
"Hoist with his own petard" and that, children, is how Rust took over, (in a perfect world. {So we will probably have golang and python3}.)
Learning Rust at the moment. One issue I'm seeing is it forces you to think about how things are stored in memory. This can be an advantage in some scenarios, but there are many situations where it just doesn't matter.
Go Language
No WhatsApp!
This was a little difficult to follow
i thought this was going to be about blockchains 🤣
Saaaaaaaaaaaame. This is still cool, but not AS cool.
@Cronicas 559 I don't understand your point?
@Cronicas 559 immutability is a property inherent in more blockchains than Bitcoin. And proof-of-work is not necessary to create immutability. Proof-of-stake and proof-of-authority can serve equally as well to assist creating algorithmic immutability across distributed systems.
@Cronicas 559 if immutability is the only standard by which you judge a blockchain's effectiveness, then sure. Bitcoin is probably your best bet. But if, say, usability is an important factor, maybe you would go with a blockchain built with some extra neat features like Ethereum or Hyperledger for smart contracts, etc.
Ugh, this was hard to watch. Yes, programming with immutable state is easier, but it's not the only way to do it.
9:38 all programming languages with mutable-state by default are doomed.
Yikes!!!! Where the flock did they find this guy???? I would not trust this guy to accurately explain toppings on a pizza.
*Immutable* should be a 1 minute video.... including the intro and credits afterwards. Immutable = It cannot be changed. End of story.
This is the last guy who should be talking about future programming languages.
Wow..... has this guy ever heard of Moore’s Law????? 🤔
I subscribed to this channel less than a week ago. This video is yet another Computerphile video that makes me want to unsubscribe.
Really poor content, with a considerable number of mistakes.
1. x = x^2 - 1 "this is wrong" -> it is not wrong, it just does not represent the same thing as x = x^2 - 1 in computer science. One is an equation/equality, the other is an assignment. Nothing is "wrong" here; they just represent different things...
2. "[concurrency] it is an easier way to reason and an easier way to code a program" -> [citation needed]
I did not go through the remainder of the video.
@computerphile, do not bring on "experts" that only know their own view of the world. An expert acknowledges the tradeoffs of the different programming paradigms and describes their use cases.
Such a beautiful topic so poorly explained.
Awful explanation. totally incoherent.
state shouldn’t even exist; only IO is allowed
Depends what you're doing. When you get down to it, computers have mutable memory. So low-level and performance-heavy stuff can't afford to make that assumption.
4th
null
Smooches
commercial too long to care to watch
1st?
Captions don't seem to be embedded in this recording. Does that seem to be the case for anyone else?