Haskell is so cool. In GHCi you can write > let loop = loop and it accepts it fine. In fact, it knows that loop can be of any type: > :t loop loop :: t So loop can even be of a function type: > :t loop True loop True :: t > :t loop pi 4 True loop pi 4 True :: t Because Haskell is lazy, some expressions can terminate, even if they have loop in them > head [4, loop True] 4 But of course, if it ever needs to call loop then it gets stuck.
Recursion isn't easy to grasp just by watching a video. If you press pause and think about it and/or look up other videos about recursion, and then come back to this video with a basic understanding of recursion, you will get it.
Sandro Zimmermann not even just recursion though- you'd need to be comfy with both recursion and the lambda calculus for this to make sense on the first go.
Pure functional programming is unintuitive by its nature. First uni class I ever failed was Haskell and I still harbour a (partially) irrational hatred for it. Mostly though it's a pointless academic exercise that needlessly complicates things when applied to the real world.
TheSpacecraftX Pure functional programming enforced everywhere at a language level, perhaps. But a lot of the functional programming style is about detangling your program - splitting it into smaller independent bits which can be tested and later combined with high-level tools. Most of the rest of FP is about optimizing your program to make the most use of that. Immutability, for example, prevents multiple parts of your code from being able to step over each other's toes - it forces them to work independently. I'm a big fan of Clojure, which makes use of the FP style to provide a much safer environment for multithreading, because you have a limited set of mutable types which could potentially be synchronization problems.
Is it possible to record your audio in a way where the pen sound is quieter? That felt tip pen on paper goes right through me worse than nails on a blackboard, it's horrible. Or some kinda pen that doesn't make that sound?
It's not added in post; the video and audio are edited separately, so you don't always hear exactly what you see, and you can't really remove the pen sound without messing up the voice.
Great video! Just a question... when the author writes down the symbol "=", first he uses it as a term-rewriting rule and later to give an alias to a complex lambda expression. The former case seems to legitimate the use of recursion, the latter does not. Isn't this an ambiguous use of the = symbol? Does my rigorous worry make sense? Wouldn't it be preferable to use -> and the equivalence symbol ≈ instead, to clarify? Somewhat pedantically, = should be reserved for denotational equations. Anyway, this is a wonderful video... straight and short.
PS: Can someone confirm how the Y combinator gets past step 3 here? I.e., does the input function get passed in as f again somehow, or is f assigned the original input on every pass? As I now understand it, the β-reduction in step 1 substitutes SomeFunction for *both* occurrences of f at once, so no f survives past step 2:
Step 1: (λf. (λx. f (x x))(λx. f (x x))) SomeFunction
Step 2: (λx. SomeFunction (x x))(λx. SomeFunction (x x))
Step 3: SomeFunction ((λx. SomeFunction (x x))(λx. SomeFunction (x x)))
Step 4: SomeFunction (SomeFunction ((λx. SomeFunction (x x))(λx. SomeFunction (x x))))
And so on.
i think the arguments he chose as the results for TRUE and FALSE are intuitive. it's similar to the if statement in imperative languages. in an imperative language you could write: if(1==1) {return 2;} else {return 4;}... in his examples you could use TRUE and FALSE as an if statement: (1==1) 2 4. the order of the possible results is the same, so it's easy to remember
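A sketch of the same thing in Haskell, since the video's TRUE/FALSE are just two-argument functions (the lowercase names are mine, to avoid clashing with Haskell's real booleans):
true, false :: a -> a -> a
true  x _ = x
false _ y = y
-- an "if" is then just applying the boolean to the two branches:
-- (if 1 == 1 then true else false) 2 4   evaluates to 2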
In lazily evaluated untyped languages, this works. But in strict, statically typed (eagerly evaluated) languages like ML and F# this won't work: the compiler will complain that the resulting type would be infinite. And even if you work around the type, the plain Y combinator loops forever under eager evaluation; you need the eta-expanded variant (the Z combinator) instead.
Somebody needs to explain the explanatory comments to me. And then I also need an explanation of the resulting explanation. Ad infinitum. This is the essence of lambda calculus.
I'm wondering, why is it always factorial? Can't you pick the sum of a list? Search in a sorted array? This single choice costs us thousands of people who come away thinking CS is useless garbage every year.
wow, peculiar accent, e.g. whenever he says 'idea' it sounds like he's actually adding a halftoned 'r' at the end of the word, like it sounds 'idear' (8:45), and the same with some other words too
I'm just a humble engineer attempting to transition to computer science. In common with all videos referring to lambda calculus, I still don't get it. I feel like the Red Dwarf Cat: "I understood everything up to simply..."
Well, if fac is: fac = rec [some], then according to the definition of rec we get: fac = rec [some] = [some] (rec [some]). But rec [some] is fac due to the definition of fac. So fac = [some] fac, where in that case [some] = λf.λn. (n==1) 1 (n*(f (n-1)))
Just as soon as I start thinking that I understand it, I realize I don't understand anything at all. It's like that moment from FRIENDS when Joey was trying to speak French. What is this sorcery?
Please stop with the marker-on-paper sounds - it's screeching and makes me turn down the volume, and then I can't hear the speaker. Is there a way to selectively edit it out?
So much hate against functional programming. This video isn't even about functional programming, it's about a theoretical construct used to formalize a notion in a mathematical framework. Will we see everyone else hate on imperative programming next time there's a video about the recursion theorem and Turing Machines?
(\x.x x)(\x.x x) just reduces to itself, so consider instead the term \x.x x (\y.y y) - note the body of a lambda extends as far right as possible, so this parses as \x.(x x (\y.y y)), and renaming the inner variable to y makes clear it's a different binder entirely. This is a function which takes a function, calls it x, applies it to itself, then applies the result to \y.y y. Here's an example with id. Applying (\x.x x (\y.y y)) to \z.z, replacing x with \z.z: (\z.z)(\z.z)(\y.y y). Then applying id to itself (identity of identity is identity!): (\z.z)(\y.y y). Then, because id simply returns its argument: \y.y y. This is our final result because it contains no more redexes (REDucible EXpressions).
Not necessarily, e.g. y (λx.5) = 5 - in Haskell, y (const 5) = 5 with y f = f (y f). For all the interesting cases such as factorial, you need more parameters, though. Clarification: Technically, there are no functions in Haskell with more than one parameter due to currying… by more than one parameter I mean a type of a -> b -> c and so on.
it would be even more efficient if it had no recursion or looping but instead just a precomputed lookup table for results, and just used the input to index into the table. something like this:
int FactorialO1(int x)
{
    /* every n! that fits in a 32-bit int (13! would overflow) */
    const int f[] = { 1, 1, 2, 6, 24, 120, 720, 5040, 40320,
                      362880, 3628800, 39916800, 479001600 };
    return f[x];
}
both the recursive and lookup-table implementations waste some memory: the recursive one grows its stack and the lookup one has its table. this uses minimal memory:
int Factorial(int x)
{
    int f = 1;
    while (x) f *= x--;
    return f;
}
We could _try_ solving the factorial case if he at least told us how we are supposed to encode integers at all. How do I check for 1? How do I subtract? The only thing we know about lambda calculus here is how to do boolean logic.
The best we can do now is something like this, assuming ISONE, ONE, MUL and SUB can be defined _somehow_: FAC = REC (λf.λn. (ISONE n) ONE (MUL n (f (SUB n ONE))))
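They can be, for the record: the standard Church encodings fill every gap assumed above. A sketch in the same notation (PRED is the usual pairs-of-counters trick; note ISONE also returns TRUE for 0, which is harmless here since FAC is only ever called with n >= 1):
ONE = λf.λx. f x
SUCC = λn.λf.λx. f (n f x)
MUL = λm.λn.λf. m (n f)
TRUE = λx.λy.x
FALSE = λx.λy.y
ISZERO = λn. n (λz.FALSE) TRUE
PRED = λn.λf.λx. n (λg.λh. h (g f)) (λu.x) (λu.u)
SUB = λm.λn. n PRED m
ISONE = λn. ISZERO (PRED n)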
I like the lecture but the noise of the marker on the paper is very disturbing to me and makes it hard to listen to the video... Please use a pen for the videos
Btw, I heard that the Y combinator is useful under a call-by-name reduction strategy, but useless under a call-by-value one... In the latter case Y g can diverge forever, regardless of g
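Right - the usual call-by-value fix is the eta-expanded variant, often called the Z combinator; the extra λv delays the self-application until the result is actually applied to an argument:
Z = λf. (λx. f (λv. x x v)) (λx. f (λv. x x v))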
'Defining factorial using Lambda Calculus has been left as an exercise for the viewer.'
Yep, he's a computer scientist.
Hehe... I have heard a similar joke about Rudin, whose real analysis textbook makes similar statements all over the place! :)
One of the most annoying phrases to encounter when I studied it 30 years ago: The solution is left as an exercise for the reader.
Why's that annoying? I have a degree in computer science. That isn't a hard question.
@@cloerenjackson3699 He did not mean factorial specifically.
It's the oldest trick in the book. Which book? Euclid's Elements. He started this.
Computerphile owns all of the world's supply of line-feed dot matrix print paper.
My father has some in his office as well
@@stv3qbhxjnmmqbw835 I remember seeing some of that in my dad’s office 20 years ago!
Thought the same
"loop = loop"
*That's enough for this sheet of paper, let's grab another one*
He was already at the bottom ⊥
he didn't define the equals sign. syntax error.
I can't unsee it now
Dude must own stock in a paper company!
me
Because this video basically said "go to Wikipedia", I'll try to explain what the Y combinator is.
YC is a way to name your function in a language that does not allow named references at all. It proves that function names, classes, etc. are just syntactic sugar. Let's look at the same example:
function factorialFn(number){
return number == 1 ? 1: number * factorialFn(number-1)
}
The only thing that generates a pointer in lambda calculus is a function parameter. So let's rewrite everything to anonymous functions.
// This is the mind-bending part: a function takes in the factorial function and returns a factorial function that takes a number
(factorialFn)=>
(number) => number == 1 ? 1: number * factorialFn(number-1)
Let's call this our Prepared function - we know it would work, but we don't know how to run it yet.
So how do we pass the internal function into itself? Looks impossible, right? We could just pass garbage (an empty function) to get the internal function, and use this to generate the actual function.
((factorialFn)=>
(number) => number == 1 ? 1: number * factorialFn(number-1))
(((factorialFn)=>
(number) => number == 1 ? 1: number * factorialFn(number-1))
(()=> console.log('garbage')))
Trying number 2 would return a correct answer, but number 3 would call the garbage function.
You could just calculate how many times you will need to recurse and nest that many times, so that garbage function would never get called? :P Kinda defeats the purpose. What if we made a function that would nest functions for us?
Let's look at the same example as in the video of the simplest recursion:
(fn) => fn(fn)
If we give this function to itself, it will repeat to no end:
((x) => x(x))((y) => y(y))
same fn, just different variable names for readability. The "y" function becomes the variable "x" after the first application, and the x function repeats to infinity. This is the essence of YC. So how do we make this call our function?
Well, lambda calculus allows only one way, the same as before: as a function parameter:
(yourPreparedFn) =>
(x => x(x))(y => yourPreparedFn(number => y(y)(number)))
// The number function exists because I wanted this to be valid JS, and JS is not lazy - its arguments are immediately evaluated; otherwise the second part could have been just y => yourPreparedFn(y(y))
If you try to run the function in your head, you will see that the "y" variable is the "y" function itself. The first part (x => x(x)) makes that happen, as it calls the function and passes the function to itself. And now you are thinking with portals.
The second part becomes easier to understand. Because the y function has a reference to itself, it has the ability to loop by simply doing y(y). And it can share this ability with your Prepared function. When your Prepared function calls factorialFn with the next number, it's not actually calling factorialFn - it calls the y function, which generates the next factorialFn and then calls it with the number you passed.
This is it, that's the Y combinator:
Y = f => (x => x(x))(x => f(y => x(x)(y)))
You can run this JS and try to debug or break it for better understanding:
((yourPreparedFn) =>
(x => x(x))(y => yourPreparedFn(number => y(y)(number))))
((factorialFn)=>
(number) => number == 1 ? 1: number * factorialFn(number-1))
(5)
Don't worry if it takes a long time to get it - took me more than a day the first time and I had better resources than a youtube comment :)
🙏TY
YOU SIR, you are awesome! Thanks!
This was really helpful. Thanks.
May the lord of our simulation bless you
Man, when I got to the valid js part - taking my hat off! I think you should do these videos instead of Paul Graham.
The company would only be a real y combinator if they helped out in setting up companies that help out in setting up companies that help out in setting up companies...
which is not too far from the truth. companies grow to become large, and spark new projects, investments, companies.
@@NitinPatelIndia loops are scams confirmed
@@adrycough investment certainly is a scam
So a pyramid scheme?
Teachers like this professor here are, in fact, Y combinators themselves. Thank you.
Y Combinator is a y combinator. 🤯 I have been programming since 1989, and I literally just learned this today. I feel so ashamed, yet amazed.
So the first question should be "loop = rec(λx.x)". If we look at it, we get
rec(λx.x) = (λx.x)(rec(λx.x)) = rec(λx.x), which gives us our infinite recursion as desired. Not sure how to do the second one.
I guess the second one is: fac = rec (\f.\n. n * f (n-1))
But that doesn't terminate, right? If I give you 5, your program would give me 5 * 4 * 3 * 2 * 1 * 0 * -1 * ...
You can see the solution on the wiki page; you need to include an if-case for the 0! = 1 base case.
Oh right, I forgot. I need to rethink; the mixed Haskell-like and lambda-calculus notations have confused me.
@@r4masami speaking of non-terminating but incorrect: would this one be non-terminating but correct? fac = rec(λf.λn. max(n,1) * f(n-1)), though maybe there isn't a meaningful notion of correctness for something that doesn't return
Problem is that this series skips soooo much of the basics of functional programming that it's completely impossible to follow why stuff is done this way.
Recursion is useful because it is usually simpler and more elegant to represent a problem. Also, since you have less explicit state that usually means there is less that can go wrong, so to speak.
@@Wasabiofip Until you run into something akin to a stack overflow... which is why iteration generally trumps recursion in the real world.
@@xarcaz unless your language of choice implements tail calls, in which case the stack won't become an issue.
@@xarcaz you've missed the point. basically,
functional programming isn't the most efficient way to solve a problem, and if translated raw to imperative programming it is likely to cause something like a stack overflow.
instead, it's a "better" (or so I and many other functional programming enthusiasts would argue) way of reasoning about problems.
then compilers come in and make it just as efficient (or nearly so) as imperative programming. This is why compilers for functional languages are often much more complex and perform significantly more optimisations: functional programming just isn't the model in which computers work, and they have to translate to that model efficiently to make well-running programs
@@xarcaz Your iterative implementations are just going to use a stack anyway (or an algorithm that trades space complexity for time complexity).
Having random-access state is more important for better time complexity, but that's not inherent to imperative programming.
This is certainly one of the coolest idears in computer science.
y-combinator: here are 10 minutes of introduction. To actually see more than the plain formula itself, pls visit the wikipedia page. wtf :(
Understand y-combinator = rec ( watch 10 minutes of introduction )
yea, this was disappointing. The video should have been twice as long, or cut out the introduction to recursion. Because I mean... why is it there? People can just search for a video on recursion... probably even on this channel, if they don't know what it is.
Since recursion is the purpose of the Y combinator it is natural to explain it.
I searched for a video of someone searching for a video on recursion.
You saw what it is. Understanding it isn't really something that can be conveyed. When it clicks though, it's very cool. Personally I found the way conditionals were done with similar recursion was the 'aha' moment for me rather than looping. How numbers are defined is pretty cool too (although will seem obvious to you if you've seen a set theoretic construction of the natural numbers).
Dear Computerphile. The topics you cover are interesting, but you sometimes, such as in this video, skip the important parts. While it is nice to know the history of an idea and the people behind it, it is really a lot better when you explain the idea thoroughly, and when you first establish the grounds to complete the exercises the lecturer gives us. Like how the Y combinator works: please use it with an example, define how to use parentheses, and if the TRUE and FALSE statements are themselves functions, please define why and how we can distinguish between functions and variables, etc.
> first establish the grounds to complete the exercises the lecturer gives us.
YES. The first and biggest takeaway was "How can I possibly solve these exercises when I do not remember how the notation (and semantics) work from prior knowledge, never mind the fact they were not actually taught in the video!"
(I thought I was watching an informational video, not an unreasonably demanding homework tape.)
Yeah, I honestly don't know why I bother with computerphile videos. From what I've seen, the pacing is just terrible. They start off so basic that it's arguably a waste of our time, and then they skip so far ahead that it's impossible to understand without prior knowledge or outside research. They know it's inadequate but don't care, simply telling us to go look it up. Guy, if you don't want to explain it, then don't have a youtube channel dedicated to explaining it. Oh well, off to Wikipedia I go...
♬ When a grid's misaligned; with another behind; that's a moiré ♬
♬ When the spacing is tight; And the difference is slight; That's a moiré ♬
Most random comment i ever saw
♬ When a random comment is sung by a nice gent, that's a moiré! ♬
♬ If you see an eel with a jaw pharyngeal, that's a Moray ♬
@@acwaller1000 ♬ When you go for a dive, and an eel bites your eye, that's a moray ♬
Can you make a video about assembly, compiler optimization, and/or compiler construction, please?
They did one on bootstrapping.
I'd like videos about cpu architecture/instruction set design.
I'm working on a simple compiler for my CPU, I could help out a bit if you need. Hit me up on discord Naruto#6555
+Ernestas Viskontas, I am working on a game (engine) which should support embedded code in world-files, which should be secure but fast. Maybe you have an idea how to make a portable, secure JIT engine? The code/world is saved in a binary format which is serialized by a script, and the code just consists of a list of (store parameters to variables; load new parameters from variables (one can be the next instruction, to return after a function call); set function to the value of a variable).
@@0EEVV0 How's it going? I am thinking of going for something like that soon too
For the exercise on the fac function, I'm not sure, but is it possible to write:
fac = rec (λf.λn. n*(f (n-1) - 1/n) +1)
So that the n*(-1/n)+1 cancels out when n is greater than 0, but as soon as n hits 0 we compute 0*(stuff), and no matter what the stuff is we get 0, then add 1, so 0! = 1
I'm assuming we're computing in a lazy way and when we see 0*... we immediately return 0
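Checking the algebra: for n > 0, n*(f(n-1) - 1/n) + 1 = n*f(n-1) - 1 + 1 = n*f(n-1), which is exactly the recursive case; for n = 0 it gives 0*(stuff) + 1 = 1, the base case. So the trick does work, provided the evaluator really short-circuits 0*(stuff) without evaluating the stuff.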
I think a possible definition of the factorial using 'rec' can be:
rec (\f n -> if n == 1 then 1 else n * f (n-1)), where 'f' is going to be rec applied to this whole thing, i.e. fac itself, after the first step.
So the first call should be: rec (\f n -> if n == 1 then 1 else n * f (n-1)) n, where n is the number whose factorial you want to calculate.
This might be the most important computer science video on YT
I have almost 10 years of experience in more classical programming languages like C, Java, Javascript and Python, yet I haven't got the slightest idea of what the answer to these 2 exercises is.
The way I see it: The Y combinator is a function that takes a function _f_ and returns a new function _g_ that behaves exactly like _f_ except that it's implicitly passed _g_ as its first argument.
This is the comment that finally let me wrap my head around it :D
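A sketch of exactly that behaviour in Haskell (here y ties the knot with Haskell's own recursion, like the video's rec, since the self-applicative Y doesn't type-check directly; the names y, fac and self are mine):
y :: (a -> a) -> a
y f = f (y f)

-- self is the implicitly passed g from the comment above
fac :: Integer -> Integer
fac = y (\self n -> if n == 0 then 1 else n * self (n - 1))
-- fac 5 == 120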
What puzzled me is that the `f(f(f(…)))` expanding never actually ends!
Which to me meant that the argument of `f` will never be calculated, thus we won't ever be able to call the `f`.
But that's only true if we eagerly evaluate the arguments. But we can postpone this evaluation until it's needed.
E.g. in the factorial calculator we only need `f` when we call it in the "else" case:
var fac_calc = f => n =>
n == 0
? 1
: n * f(n -1) // < only here we need the actual function of the `f`
which means that the actual unwrapping of `f` to the next step of recursion in `f(…)` can happen on demand.
So `f(f(…))` turns into `f( lazy f ( lazy f(…) ) )`.
For laziness we can declare functions with no arguments, that upon call will do some calculations:
() => calculate_the_answer(42)
Thus we redefine our `rec` and `fac_calc` to use lazy evaluation:
var rec = f => f(() => rec( f ) )
Then we can define our factorial calculator as:
var fac_calc = f => n =>
n == 0
? 1
: n * f()(n -1) // unwrap lazy `f` and then call it
And then derive factorial fn from rec + fac_calc:
var fac = rec(fac_calc);
And finally call it:
fac(3); // = 6
Hope it helps someone else.
Also see CheezyRusher's comment with great explanation of Y Combinator.
P.S.: this is how we can define lazy `rec` via a Y combinator:
var rec = f => (x => f(() => x(x))) (x => f(() => x(x)))
i like the structure of Hutton's explanation, especially how he first introduces general recursion with rec f. but he could have brought some extra attention to the fact that the result of rec f is a function. i think that this makes it extra difficult for people to understand (lots of people confused in the comment section). working out the examples would have helped. on the other hand, people might not really understand unless they come to the solution themselves. difficult dilemma
The result of everything in untyped lambda calculus is a function, but it definitely does bear repeating...
Love computerphile videos, but this..? Thanks for being like almost *every* maths teacher i've ever had:
Teacher: 1+1=2
Me: ok
Teacher: therefore, *obviously* integrating the sum of two cosines with their parameters being the derivatives of sqrt(2)/π *clearly* has to be..
Me: .. i wonder what's for lunch today..
exactly how i felt in my functional programming class
Very good video! Now can we have a video explaining monads?
Andrew Ray A monad is just a monoid in the category of endofunctors, what's the problem?
Oh, that's easy. Monads are burritos.
A monad is just the lower triangular half of a dyadic, duh.
Obj[A].flatMap[B](f: (A) -> Obj[B]): Obj[B]
A monad is an object that implements a flatMap function (also often called chain or bind). The bind function takes a function that takes the wrapped type [A] and produces a monad wrapping type [B]. The bind function then returns a monad[B] built by applying the passed function to every value wrapped by monad[A] and flattening the results.
It sounds confusing af until you get your head around it. In truth it's nearly identical to a functor (all monads are in fact functors, because map can be implemented using flatMap).
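A sketch of the same shape in Haskell, where flatMap is spelled >>= and safeDiv is a made-up helper:
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

-- (>>=) :: Maybe a -> (a -> Maybe b) -> Maybe b, i.e. flatMap for Maybe
halves :: Maybe Int
halves = safeDiv 100 5 >>= \q -> safeDiv q 2   -- Just 10; any Nothing short-circuits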
A video on monads is in preparation...
Oh, I'm reading Prof Hutton's and Erik Meijer's paper on parser combinators from 1996. Cool to see him on computerphile.
This video defines y combinator in terms of itself. Watch again and again to understand.
I love how he said "... it will loop around to 2, 1 and so on, until it eventually gets around to 1 ...". Not sure whether it was intentional but that made me really happy :D
quick refresher lol last video was 7 months ago! come on computerphile!
The founder of "Y Combinator" company is now the famous Sam Altman, the founder of Open AI
quote: "And that's basically what their company is doing..."
should have been followed by:
"They're a company that helps start companies which helps start companies"
Just to be a real recursive company. Otherwise that company doesn't even deserve the name Y-Combinator.
Really great explanation at 6:40.
So the loop one is easy. loop = rec(λx.x).
The factorial one, though - how do you stop? It seems to me that the definition of rec will loop infinitely no matter what you put as the argument to rec, but the factorial function obviously has to stop.
Yeah, I also read the wiki definition, but what they do is wrap rec somewhere else instead of around the whole thing. Not sure if the code in the video is intended or not.
let's define the function if b x y = b x y (so "if" just applies the boolean to the two branches)
assume you have a function λx. (x == 0) which returns a TRUE or FALSE just like shown in the video
now you can use the function if to stop the recursion
The way to think about it is that fac = rec([something]) = [something](rec([something])) = [something](fac). So when we make [something] be λf.λn.[body], the value of f in [body] will be fac. That is, you're still trying to write the body of your function, and we've already passed you the function that you're trying to write, in case you'd like to call it sometimes.
Then we need to use the fact about the lambda calculus that (λx.λy.x)(1)([anything]) = 1, even if [anything] expands forever.
One thing you must understand is that in the untyped lambda calculus, everything will be encoded as functions, including numbers and conditions.
fac = rec(λf.λn. if n = 0 then 1 else n * f (n-1))
let body = λn. if n = 0 then 1 else n * f (n-1)
Example, enough to show recursion, remember that 1! = 1 * 0! and that λ-calculus is left associative:
fac 1
= rec( λf.body) 1
= (λf.body) (rec (λf.body)) 1
= (λn. if n = 0 then 1 else n * rec( λf.body) (n-1)) 1
= if 1 = 0 then 1 else 1 * rec(λf.body) (1-1)
= 1 * (rec(λf.body) 0)
= 1 * ((λf.body) (rec (λf.body)) 0)
= 1 * (λn. if n = 0 then 1 else n * rec( λf.body) (n-1)) 0
= 1 * (if 0 = 0 then 1 else 0 * rec( λf.body) (0-1))
= 1 * 1
FP isn't hard; the problem is the industry standards that dictate production, so new programmers make the choice that's going to put them in the market faster. The most interesting, cool and exciting things happening in computer science today come from FP concepts. FP handles parallelism and concurrency in an elegant, fun and manageable way, and everybody knows these are the challenges contemporary programmers have to deal with. The fact that most major languages are adopting FP in their architecture is at least an indication of the importance of the concept, so why so much revulsion?
I think programmers should always be open-minded and ready to learn new stuff; only benefits can come from this attitude. A programmer who closes himself in conservatism can only condemn himself to closure, and that is not what we want, is it?
I know that a programmer who worries only about his deadline is just living his life; all of us have bills to pay. But you can't go around despising other ways of thinking just because they won't fit in your deadline. The world is not just about that.
ps: Thanks for the videos, i learned new stuff! I know this is not very complete, but you could consider a complete series on this topic, that would be great! I just finished the Nand to Tetris self-paced course and was imagining a computer architecture built entirely of lambdas, that would be cool.
That function is all fun and games till someone puts -1 in it
LOL or the square root of -1 😳 !
By the way, his factorial function is wrong for a valid input: 0! = 1 by definition.
@@TruthNerds He's only doing natural numbers.
The Y-combinator is the primal recipe for stackoverflow
You made a video about the Y-combinator without explaining the Y-Combinator at all. Okay.
Niall Newman Welcome to functional Programming
I'm glad it's not just me!
The Y-combinator idea is so abstract that it needs to be encapsulated in order to be understood. As Janox said, welcome to functional programming
@@mauriciocortazar9604 It's not really that abstract at all. They just didn't explain it.
Make some videos on category theory
"This one (loop = rec(?)) is quite easy..." Um... maybe I'm just a silly imperative programmer, but I can't figure that out >_
"\" is the "lambda": rec(\x.x) = (\x.x) (rec (\x.x)). Now you subsitute the (rec (\x.x)) into (\x.x) and you get: rec (\x.x), wich is the starting point. So the end equation is: rec(\x.x) = rec(\x.x), wich is exactly the definition of loop.
And by the way, \x.x is the identity function: it takes the input and returns it without modifying it; that's what he meant by "the simplest function"
We are gonna have a really bad time here while studying this...
Status update: tomorrow this will probably become the first subject I've completed :D
Sometimes these professors get caught up in their own intellect and forget that not everyone thinks as abstractly as they do and gets it the first time
It helps looking ahead to the Y combinator. Imagine what you need to feed it, in order to get back to the original definition of loop.
In essence, we need to pick f such that f(x x) = x x. And indeed, the simplest function, \x.x, does exactly that. When plugged into Y we get
Y(\x.x) = (\f.(\x.f(x x))(\x.f(x x)))(\x.x)
= (\x.(\x.x)(x x))(\x.(\x.x)(x x))
= (\x. x x)(\x. x x)
(The number of x's makes it a bit confusing, but we could rename them to \y.y to help out with that)
Wait... did he give us homework?
Just "listening to and agreeing with " is the most ineffective way of learning ever. Solving little excercises along the way makes learning much more effective.
@@Bratjuuc and I think the Computerphile editor must have cut multiple parts of this video. I can't believe there was no other introduction, no other assumptions or definitions; it's silently assumed that people know lambda calculus when they open a "popular educational video" from Computerphile to watch about the lambda calculus and the Y combinator...
@@Bratjuuc but yup, homework is great, isn't it? :)
One thing that can be confusing about this video is that he uses a language with lazy evaluation. So I translated the code to JavaScript and used a function to simulate lazy evaluation. i.e. () => rec(f) instead of rec(f)
let rec = f => f(() => rec(f));
let loop = rec(f => loopBody => {
loopBody();
f()(loopBody);
});
let fac = rec(f => x => {
if(x === 0) return 1;
else return x * f()(x-1);
});
let add = rec(f => x => y => {
if(x === 0) return y;
else return f()(x-1)(y+1);
});
loop(() => {
console.log(fac(5));
console.log(add(20)(22));
});
Here is my answer in Haskell:
true = \a b -> a
equals a b = if a == b then true else false
fix f = f (fix f)
fac = fix (\f n -> (n `equals` 0) 1 $ n * f (n-1))
It doesn't go on forever because when n is zero the result is 1, which doesn't use f. This is like writing \f -> 1, and so Haskell doesn't bother with evaluating anything after that.
Sorry, add:
false = \a b -> b
*now it works
I love these videos!!!!!
Thanks this code helped me understand it a bit better. Here's my implementation:
fix f = f (fix f)
fac = fix (\f n -> if n==0 then 1 else n * f (n-1))
Sample execution: (I'm writing this out to understand it better myself)
fac 3
= fix (\f n -> if n==0 then 1 else n * f (n-1)) 3
= (\f n -> if n==0 then 1 else n * f (n-1)) (fix (\f n -> if n==0 then 1 else n * f (n-1))) 3
= if 3==0 then 1 else 3 *(fix (\f n -> if n==0 then 1 else n * f (n-1))) (3-1)
= 3 * (fix (\f n -> if n==0 then 1 else n * f (n-1))) 2
= 3 * (\f n -> if n==0 then 1 else n * f (n-1)) (fix (\f n -> if n==0 then 1 else n * f (n-1))) 2
= 3 * (if 2==0 then 1 else 2 * (fix (\f n -> if n==0 then 1 else n * f (n-1))) (2-1))
= 3 * (2 * (fix (\f n -> if n==0 then 1 else n * f (n-1))) 1)
= 3 * (2 * (\f n -> if n==0 then 1 else n * f (n-1)) (fix (\f n -> if n==0 then 1 else n * f (n-1))) 1)
= 3 * (2 * (if 1==0 then 1 else 1 * ((fix (\f n -> if n==0 then 1 else n * f (n-1))) (1-1))))
= 3 * (2 * (1 * ((fix (\f n -> if n==0 then 1 else n * f (n-1))) 0)))
= 3 * (2 * (1 * ((\f n -> if n==0 then 1 else n * f (n-1)) (fix (\f n -> if n==0 then 1 else n * f (n-1))) 0)))
= 3 * (2 * (1 * (if 0==0 then 1 else 0 * ((fix (\f n -> if n==0 then 1 else n * f (n-1))) (0-1)))))
= 3 * 2 * 1 * 1
= 6
Well, that was longer than I expected.
This part looks strange to me:
> if a == b then true else false
Isn't that just an overly verbose way of simply saying a == b? I.e., could you simplify the definition of equals to just "equals a b = a == b", or does that make a difference in Haskell?
in Haskell, the boolean values are True and False. In this case, equals returns the two functions, `true` and `false`(which in Haskell are `const` and `flip const`).
so when you do (equals n 0) 1 (n * f (n-1)) what you do is:
check if n equals 0
if n equals zero, return the first parameter and ignore the second
else, return the second parameter and ignore the first one.
This is really just the typed version of the lambda calculus version(which is untyped, so it doesn't have boolean values. It doesn't have numbers either, and yes, there's a way to create numbers using functions, but let's not talk about that now).
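You can watch the laziness do the work in GHCi - because `const` ignores its second argument, the untaken branch is never evaluated:
ghci> const 1 undefined
1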
Pika ^_^ I see; great explanation; thanks!
Very clear...
i wish my teacher watched your videos...
That's a hard way to do factorial!
I always do it the easy way: n! = int_0^inf (t^n * e^-t) dt
Anaetherus
Omg, why does that work? Maths is so maths.
+Anaetherus
First I wanted to ask what you were even doing, but I solved that part, and Google even gave me everything I needed to find out it's basically the gamma function. Now, I still don't really understand why the gamma function works the way it does.
What do I need to search for to find an explanation of this?
It's called the gamma function (wikipedia.org/wiki/Gamma_function)
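For anyone still wondering why it matches the factorial: integration by parts gives the recurrence Γ(n+1) = ∫₀^∞ t^n e^(−t) dt = n ∫₀^∞ t^(n−1) e^(−t) dt = n·Γ(n), and Γ(1) = ∫₀^∞ e^(−t) dt = 1, so Γ(n+1) = n! for whole numbers by induction. The integral also makes sense for non-integer n, which is why it's the standard way to extend the factorial.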
Factorial is always the example used for recursion... but that form of factorial is certainly one of the worst possible ways to write it. I once came across a page that was like "58 better ways to calculate the factorial". But none are as useful for explaining simple recursion to people who haven't seen it before. Just don't actually DO it that way.
Grandpa = father’s father
Recursion is everywhere in the universe
What is the Y Combinator?
Computerphile: Yes
Nice, I don't even know why I clicked on the video and now I have to know math I do not need
This is the first time I’ve ever heard the phrase “last day” in the sense of “previously.” Which branch of English says that?
1:30 The terminal state of the recursive function should be n==0, because (0!) == 1. And remember to test for negative numbers (and non-integers if necessary), so that you don't blow your stack...
wlan2
Just write in the API documentation that one should never do those things, and avoid useless conditional statements cluttering your code and ruining your scalability.
But I love blowing stacks :(
Bryan Keller Or you could encode it in your type system so the compiler won't let you give it a negative number.
You would check for an input of 0 at the call site, not inside the algorithm itself. Imagine you had to calculate the factorial of some huge number like 100000: do you want to perform an additional unnecessary check per iteration, or would you rather check once at the place of input? 0 should never be passed to the function.
0! == 1 by definition, so the function needs to be able to return the correct value. As for negative values, never rely on "the other guy" to do the boundary-checking for your code.
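A minimal sketch of doing both in Haskell - validate once at the boundary, keep the recursion itself clean; facSafe is just an illustrative name:
-- reject bad input once, at the boundary, then recurse safely
facSafe :: Integer -> Maybe Integer
facSafe n
  | n < 0     = Nothing   -- a negative input would never reach the base case
  | otherwise = Just (go n)
  where
    go 0 = 1              -- 0! == 1 by definition
    go k = k * go (k - 1)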
Funny video. At 9:24 he says to put in a simple function so as not to overcomplicate the recursive function, but all this lambda calculus already seems like a massive overcomplication to me. Might just be me, but I'm sitting here behind my PC thinking, "yeah, that's what recursion is, I do this all the time in my work", and it's presented like it's quantum physics.
I do have a question though. How does a functional language deal with memory overhead when stepping through a recursive function? Every function call stores some memory that's not released until the function completes. So a recursive function that isn't meant to end after a few calls, for example a game loop, will eventually crash the program.
Tail recursion
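To expand on that: a call in tail position is the very last thing a function does, so the compiler can reuse the current stack frame and the recursion runs in constant stack space. A minimal Haskell sketch with an accumulator:
fac' :: Integer -> Integer
fac' n = go n 1
  where
    -- go's recursive call is the entire result, i.e. a tail call,
    -- so it compiles to a loop; ($!) keeps lazy thunks from piling up
    go 0 acc = acc
    go k acc = go (k - 1) $! k * acc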
It's not overcomplication, it's just very basic. It's like the Turing machine: you don't have to care about it to use your computer, but it's a language for expressing computational concepts in the most basic way, so you can reason about them. In that way it's actually like quantum mechanics: you have to formalize it in order to be able to falsify it.
@@joesdrummer2842 I think it's that many programmers understand this at such an elemental level that this exercise feels like trying to diagram the meaning of every word in a sentence.
It's so unnecessary that it's baffling
You didn't explain the Y combinator at all. You could've cut this video down to five seconds and just said "Look it up on Wikipedia".
Very disappointing.
Fun fact: 1:46 is valid Haskell code
This was intentional.
recursion is the idea of having recursion
Recursive definition
Angel Carvajal love it
Haskell is so cool. In GHCi you can write
> let loop = loop
and it accepts it fine. In fact, it knows that loop can be of any type:
> :t loop
loop :: t
So loop can even be of a function type:
> :t loop True
loop True :: t
> :t loop pi 4 True
loop pi 4 True :: t
Because Haskell is lazy, some expressions can terminate even if they have loop in them:
> head [4, loop True]
4
But of course, if it ever needs to call loop then it gets stuck.
@Evi1M4chine Hm, that sounds a little non-pure, although I guess that's why a monad is controlling it. Sounds interesting, will have to check it out.
great video; more on functional programming please
Y combinator? More like “Why is my mind melting? I’ll have to think about this later…” 🤯
This video shows me how much I DON'T understand lambda calculus.
More lambda calculus videos
this video turns my brain into a rec.
This is the first Computerphile video that I didn't understand... is that a bad sign?
No, y combinators are weird
Recursion isn't easy to grasp just by watching a video. If you press pause and think about it, and/or look up other videos about recursion, and then come back to this video with a basic understanding of recursion, you will get it.
Sandro Zimmermann not even just recursion though - you'd need to be comfy with both recursion and the lambda calculus for this to make sense on the first go.
Pure functional programming is unintuitive by its nature. The first uni class I ever failed was Haskell, and I still harbour a (partially) irrational hatred for it. Mostly though it's a pointless academic exercise that needlessly complicates things when applied to the real world.
TheSpacecraftX Pure functional programming enforced everywhere at a language level, perhaps. But a lot of the functional programming style is about detangling your program - splitting it into smaller independent bits which can be tested and later combined with high-level tools. Most of the rest of FP is about optimizing your program to make the most use of that. Immutability, for example, prevents multiple parts of your code from stepping on each other's toes - it forces them to work independently. I'm a big fan of Clojure, which uses the FP style to provide a much safer environment for multithreading, because you have a limited set of mutable types which could potentially be synchronization problems.
Is it possible to record your audio in a way where the pen sound is quieter? That felt-tip pen on paper goes right through me, worse than nails on a blackboard; it's horrible. Or could you use some kind of pen that doesn't make that sound?
I didn't notice it so much in this one, but I've had that experience in other Computerphile videos. I think the sound is added in post.
I like that sound
It's not added in post; the video and audio are edited separately, so you don't always hear exactly what you see, and you can't really remove the pen sound without messing up the voice.
honestly I agree. It may be a picky distinction but that noise is horrible.
yep, I stopped the video early for the same reason.
Great video! Just a question... when the author writes down the symbol "=",
first he uses it as a term rewriting rule and later to give an alias to a complex lambda expression. The former case seems to legitimate the use of recursion, the latter does not.
Isn't this using the = symbol ambiguously? Does my worry about rigour make sense?
Wouldn't it be preferable to use -> and the equivalence symbol ≈ instead, to clarify?
Somewhat pedantically, = should be used for denotational equations.
Anyway, this is a wonderful video... straight and short.
1:35 shouldn't it be "if n=0 then 1 else n*fac(n-1)"?
Thank you very much, professor. Recursion is a great help!!
Can you do a video on how calculators calculate things like sine functions, and square roots?
"the way factorio works"... is that you build a factori on a alien planet using robots..
hmm .. wrong. ?
Love that game ;)
I wrote Agui - the gui library used in Factorio!
Wow, that's awesome! My respect, dude.
I can confirm that he did in fact write that. Well done.
Nice. Is it the library's fault we can't really mod any stock Factorio UI, or the game's devs for not passing it through to the Lua scripting stuff?
Whoever animated 7:25 really missed the point of the "plugging it in".
It seems fine to me. What's the issue?
What do you mean? You expand xx out to (\x.xx)(\x.xx). There's not much more to it.
@@skepticmoderate5790 Isn't that exactly what happens in the animation? What am I missing?
PS: Can anyone confirm how the Y combinator gets past step 3 here? I.e., does the input function get passed in as f again somehow, or is the f on the left side of the function always automatically assigned the original input value on every pass?:
Step 1:
(λf. (λx. f (x x))(λx. f (x x)))SomeFunction
Step 2:
(λx. SomeFunction (x x))(λx. f (x x))
Step 3:
SomeFunction(λx. f (x x))(λx. f (x x))
Step 4 (want to make sure this "f is automatically assigned SomeFunction as its value on every pass" step is correct):
SomeFunction(SomeFunction(λx. f (x x))(λx. f (x x)))
If f is automatically passed the first input function on every iteration, then maybe this is a better way to show it?:
Step 1:
(λf. (λx. f (x x))(λx. f (x x)))SomeFunction
Step 2:
(λx. SomeFunction (x x))(λx. f (x x))
Step 3:
SomeFunction(λx. SomeFunction(x x))(λx. f (x x))
And so on
SomeFunction(SomeFunction(λx. SomeFunction(x x))(λx. f (x x)))
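In case it helps anyone else: β-reduction substitutes the argument for f everywhere in the body at once, so both copies of the inner lambda receive SomeFunction in step 2 and no unbound f is ever left over:
Step 1: (λf. (λx. f (x x)) (λx. f (x x))) SomeFunction
Step 2: (λx. SomeFunction (x x)) (λx. SomeFunction (x x))
Step 3: SomeFunction ((λx. SomeFunction (x x)) (λx. SomeFunction (x x)))
Step 4: SomeFunction (SomeFunction ((λx. SomeFunction (x x)) (λx. SomeFunction (x x))))
and so on - f is gone after step 1; nothing gets reassigned on later passes.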
Please make a playlist of lambda calculus videos.
How do I propagate the last case = 1 up the recursion chain? Here is where I am stuck. Passing the 1 we receive at last up the chain: \f.\n. n * f (n-1) (1)
Ok, so all of this makes sense, but why would it be written the way it is at 4:06? This seems really ambiguous.
I think the arguments he chose as the results for TRUE and FALSE are intuitive. It's similar to the if statement in imperative languages.
In an imperative language you could write:
if (1 == 1) { return 2; } else { return 4; }
In his examples you could use TRUE and FALSE as an if statement:
(1=1) 2 4
The order of the possible results is the same, so it's easy to remember.
Isn't the definition of `rec` at the end not a combinator, because it has the free/unbound variable x?
In lazily evaluated untyped languages, this works. But in strict, statically typed (eager) languages like ML and F#, this won't work: the compiler will complain that the resulting type would be infinite. And if you work around that, your loop would be infinite.
I can't stand the sound of marker on paper, it drives me crazy. But still I watch and enjoy this through the pain :)
Somebody needs to explain the explanatory comments to me. and then I also need an explanation of the resulting explanation. Ad infinitum. This is the essence of lambda calculus.
I'm wondering, why is it always factorial? Couldn't you pick the sum of a list? Search in a sorted array? This single choice convinces thousands of people every year that CS is useless garbage.
Wow, peculiar accent. E.g., whenever he says "idea" it sounds like he's adding a half-toned "r" at the end of the word, so it sounds like "idear" (8:45), and the same with some other words too.
And that's why I DO love FP. So simple and beautiful.
I'm trying to use F# as much as possible (yeah, it's not "pure FP", but it gets the job done)
I'm just a humble engineer attempting to transition to computer science. In common with all videos referring to lambda calculus, I still don't get it. I feel like the Red Dwarf Cat: "I understood everything up to simply..."
Why is everything on Computerphile left-audio only, while everything else is fine?
In Python: (lambda x: x(x))(lambda x:x(x)) - although this bottoms out against Python's recursion limit instantly.
So hold on, WHY did you make this video? Please change the description to "Intro to Lambda and Y Combinator PART 1".
Upload the others, because I am lost.
Well, if fac is:
fac = rec [some],
then according to the definition of rec we get:
fac = rec [some] = [some] (rec [some]).
But rec [some] is fac, by the definition of fac.
So fac = [some] fac,
i.e. fac is a fixed point of [some],
where
[some] = λf.λn. (n==1) 1 (n * f (n-1))
Now make infix operators illegal too... aren't they just functions to be defined as lambdas as well?
They need to make a video about combinator calculus
Anyone else think that was Kryten in the thumbnail?
Just as soon as I start thinking that I understand it, I realize I don't understand anything at all. It's like that moment from FRIENDS when Joey was trying to speak French. What is this sorcery?
Please stop with the marker-on-paper sounds - the screeching makes me turn down the volume, and then I can't hear the speaker. Is there a way to selectively edit it out?
So much hate for functional programming. This video isn't even about functional programming; it's about a theoretical construct used to formalize a notion in a mathematical framework.
Will we see everyone hate on imperative programming next time there's a video about the recursion theorem and Turing machines?
How do you get the recursion to stop?
If you delete all the parentheses in the Y combinator, what does the resulting thing do?
(\x.xx)(\x.xx) would become \x.xx\x.xx. We should probably rename the variables within the last lambda, since they refer to a different variable entirely. So then you get \x.xx\y.yy. This is a function which takes a function, binds it as 'x', then replaces all free occurrences of x in the body with that function. Thus you end up with the function being applied to itself, the result of which is applied to the function \y.yy. Don't really know how useful that is. Here's an example with id.
Applying id to the function
(\x.xx\y.yy)\z.z
Replacing x with \z.z:
(\z.z)(\z.z)(\y.yy)
Then applying id to itself (identity of identity is identity!).
(\z.z)(\y.yy)
Then, because id simply returns its argument:
\y.yy
This is our final result because it contains no more redexes (REDucible EXpressions).
How often do you need recursion? Not often.
Does anyone else feel unsettled that he takes an entire sheet of paper and writes only one thing right in the middle?
Yes, I hope he at least recycles
The function you apply to Y should have 2 parameters, correct?
Not necessarily, e.g. y (λx.5) = 5 - in Haskell, y (const 5) = 5 with y f = f (y f). For all the interesting cases such as factorial, you need more parameters, though.
Clarification: Technically, there are no functions in Haskell with more than one parameter due to currying… by two parameters I mean a type of a -> b -> c and so on.
The recursive factorial function is slightly more efficient if you return n when n < 3, since 2 factorial is also just 2.
It would be even more efficient if it had no recursion or looping, but instead just a precomputed lookup table of results, and used the input to index into the table.
Something like this:
int FactorialO1(int x)
{
    /* 0! through 12!, the largest factorial that fits in a 32-bit int */
    const int f[] = { 1, 1, 2, 6, 24, 120, 720, 5040, 40320,
                      362880, 3628800, 39916800, 479001600 };
    return f[x]; /* caller must guarantee 0 <= x <= 12 */
}
That defeats the purpose of programming entirely. And it won't work past the last one in the array.
Sakari N
No, it would be quicker to "compute" (do the lookups), but it would be terrible memory-wise.
Actually, you return 1 when the input is 0 in the recursive definition; otherwise 0! is not defined.
Both the recursive and lookup-table implementations waste some memory: the recursive one grows its stack, and the lookup one carries a table. This uses minimal memory:
int Factorial(int x)
{
    int f = 1;
    while (x)
        f *= x--; /* multiply down from x to 1 */
    return f;
}
Simply awesome!
We could _try_ solving the factorial case if he at least told us how we're supposed to encode integers at all. How do I check for 1? How do I subtract? The only thing we know about lambda calculus is how to do boolean logic.
The best we can do now is something like this, assuming ISONE, ONE, MUL and SUB can be defined _somehow_: FAC = REC (λf.λn. (ISONE n) ONE (MUL n (f (SUB n ONE))))
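For the curious, the standard Church encodings provide all of these; a numeral n just means "apply f n times":
ONE = λf.λx. f x
MUL = λm.λn.λf. m (n f)
ISZERO = λn. n (λx. FALSE) TRUE
PRED = λn.λf.λx. n (λg.λh. h (g f)) (λu. x) (λu. u)
SUB = λm.λn. n PRED m
ISONE = λn. ISZERO (PRED n)
with TRUE = λa.λb. a and FALSE = λa.λb. b as in the video. PRED is the famously tricky one; the rest fall out of the "apply f n times" idea.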
Okay, eventually I'll get this, a couple of decades from now. No problem. :D
Wouldn't a monitor with an editor be a better tool to show off the examples?
I like the lecture, but the noise of the marker on the paper is very distracting and makes it hard to listen to the video... Please use a pen for these videos.
Btw, I heard that the Y combinator is useful under a call-by-name reduction strategy, but useless under a call-by-value one...
In the latter case Y g can diverge forever, regardless of g.
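Right - the usual workaround for call-by-value is the Z combinator, which η-expands the self-application so it only unfolds when actually applied to an argument:
Z = λf. (λx. f (λv. x x v)) (λx. f (λv. x x v))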
Will learning this make me popular? *Gets out notepad to work out the lambda calculus of the previous statement.* Computer says no.