Looking for books & other references mentioned in this video? Check out the video description for all the links! Want early access to videos & exclusive perks? Join our channel membership today: ruclips.net/channel/UCs_tLP3AiwYKwdUHpltJPuAjoin Question for you: What’s your biggest takeaway from this video? Let us know in the comments! ⬇
When I heard about Clojure for the very first time my reaction was exactly like that “ how can you build anything if you can’t change your data!?!”. I love how this speech explains that. Great one!
I can't get over the irony of having a talk about functional programming in a conference called "goto", the epitome of procedural thinking :D (great talk though!)
Your statement considered harmful. Procedural programming is quite the opposite of "goto". Procedural programming is further development of structured programming. And the whole reason why structured programming was invented is to get rid of "goto".
Fun fact: some functional languages compile tail recursion to goto jumps to avoid stack overflow. And to be honest, everything compiles to some form of jump somewhere down the line since everything eventually has to run on hardware.
@@ianzen pretty much - hardware inherently involves unsafe things, mutations, jumps. higher level programming is about creating abstractions over top of it, restricting us in a way that makes structuring applications simpler to reason about and protect ourselves
Hands down the best introduction to functional programming on RUclips. A key problem with adoption is that many of the prominent gurus are math-oriented computer scientists who don't understand how to communicate well with pragmatic working programmers. Olsen gets over the key ideas here without muddying the waters with monads or functors or currying or any of the esoteric terms that are so off-putting to the newbie. He presents FP as something accessible that solves real-world problems. But he doesn't over-promise. To hear some of the enthusiasts, FP is some kind of magic fairy dust that makes all the messiness of real development go away. Outstanding!
A note on Principia Mathmatica: In modern mathematical language, proving 1+1=2 is WAY more compact. This is because modern mathematics is much DRYer than Russel and Whitehead. In R&W, they duplicate the 'code' defining set operations (union, intersection, etc.) as the code defining relations (all the same formulas with a friggin dot over all the symbols. Just twenty papges of repetition.) This is because the idea that a relation is a set of tuples came AFTER Principia. A note on side-effects: Haskell does this best. Haskell does side-effects by constructing a data structure representing side-effectful computation. Since functions are data, this data structure consists of nested functions taking values and producing side-effect representations (which recursively contain functions taking values returning side-effect representation and so on). As the runtime evaluates this side-effect representation, it progressively calls these nested functions. It's all-together very elegant, built on only one primitive: compose a side-effect with a side-effect-producing function into a new side-effect, without ever actually making anything happen. A note on the pie chart: 96% functions means 96% code that CANNOT sneak up on you. 96% guaranteed thread-safe code. 96% trivially unit-testable code.
Thank you, this is interesting. Could you please explain how the idea of relations as sets of tuples makes 1+1=2 proof easier? And why was it so hard before?
Another super important thing about Principia is that Whitehead and Russell worked within the logicism program. You can very easily take natural numbers as primitive, like Brouwer does, and then 1 + 1 = 2 is pretty much true by definition
«This is because the idea that a relation is a set of tuples came AFTER Principia.» -- That a relations CAN be represented as tuples, and ONLY when suitable (actual value, among others) ! Smarty functionals, in whatever language, would better prove the superiority by means of concrete implementations, instead of "talking the functional superiority".
Thus far, this is absolutely the best video on the subject. BTW, if you are short on time, start at 11:17, but I would just listen to the preamble anyway to give you sense (FOR ONCE!) that no, there's no magic here and you don't have to forget everything. Such a joy to watch this. Just the array to tree concept by itself is an eye-opener.
muddi900 And a beautiful one at that. It sort of flips programming upside down. Instead of looking at the code and imagining the data being transformed at each stage, you type the code into a box, and the code disappears, and all you see is the raw data in its new, transformed state.
@@dupersuper1000 There is Jupyter, Lighttable, Spark for functional languages like Clojure, Haskell, Python, APL, etc... ruclips.net/video/nYBW4ExtNvo/видео.html
What Excel is missing is the ability to name the functions you create and then reuse them (as far as I know). That feature would make VisualBasic mostly obsolete.
Im so glad youtube algorithm re-upped this. This is one of the best fundamental talks for people who already have a programming or mathematics background or academic experience and want to make sure their next steps are chosen carefully in the world of functional programming. Considering what we learned from Russel and Godel and Cantor, this is pretty close to a fundamental talk about having a sound basis.
@@frantisek_heca You were not the only one. I have been programming for years and still have no clue how functional programming works in reality. Pep talk rather than a true presentation of value.
I learned two things from this talk: function and atom. Function responsible for what change to make. Atom responsible for making change according to the function. Atom takes care of the complexity of multiple threads, ideally implemented by the language. Developer writes simple functions with no concerns of multi thread. If this is the essence of the FP, this is the first time I get the idea of FP. Either way, I found this talk amusing.
As someone struggling to really understand functional programming, this video was quite helpful. He touches on something that still doesn't quite make sense to me. Our job is entirely side effects. We get paid to write code that does stuff, and all of that stuff involves interactions with the outside world. When I tried to learn clojure, as one example, I spent some time on it and still had no clue how to actually do anything useful. Now I'm back at it, although this time trying to learn Erlang and I have a similar problem. I learn some basics, but immutability and the lack of side effects makes it difficult for me to figure out how to get any real work done. It's truly a different way of thinking about solving problems and I haven't really got a handle on it yet.
The phrase “functional programming doesn’t have side-effects” is not exactly true. For example, those of us who work in the Haskell family of languages have just figured out how to encode side-effects in such a way that we can keep some of the composability and reasoning benefits of non-side-effecting functions in our side-effecting functions. We’ve also done it in such a way that side-effecting code cannot pollute non-side-effecting code. FP languages in general are just more careful with how they deal with side effects. This is chiefly because of the observation that side-effecting code is more challenging to compose than non-side-effecting code. And in some cases side effects make certain ways of composing things literally impossible. So any practical FP language has side-effects. They are just more careful and restrictive about side-effecting code in an effort to achieve the ability to better predict what code will do-including the side-effecting parts.
The way I understand it is the side effects are unavoidable. The point of functional programming is not to entirely eliminate them, but rather to make sure the only side effects in a program happen at very well known and well defined areas.
Fabulous! Clear, concise. Thank you for making the presentation. It helped me get a handle on what FP means and how it translates into the real world. Cheers!
Thanks for this video. It's the best I've found so far on the fundamental concepts of FP. But after watching it, I realize there is nothing new under the sun, and it sounds a whole lot like what I learned decades ago from one of my old college profs... (1) Do as much as possible from a function of small size, and (2) Don't depend on global variables.
Excellent point about 'refactoring' our ideas, instead of forgetting. Applicable to other areas of endeavor. Learn new concepts/conceptual frameworks and then, depending on your assessment, refactor what you already 'know', and see if it works.
Refactoring is nothing else than comment writing. If you refactor correctly, your code does exactly the same thing. For most developers the refactoring operation is the equivalent of the crew shuffling chairs on the sinking Titanic.
I think the moral of this story is, any paradigm taken too far leads right back to whatever you were trying to avoid in the first place, but with extra steps.
In Programming JavaScript Applications by Eric Elliot he brings up a a story from the MIT Lightweight Languages discussion list, by Anton van Straaten: The venerable master Qc Na was walking with his student, Anton. Hoping to prompt the master into a discussion, Anton said “Master, I have heard that objects are a very good thing-is this true?” Qc Na looked pityingly at his student and replied, “Foolish pupil- objects are merely a poor man’s closures.” Chastised, Anton took his leave from his master and returned to his cell, intent on study‐ ing closures. He carefully read the entire “Lambda: The Ultimate...” series of papers and its cousins, and implemented a small Scheme interpreter with a closure-based object system. He learned much, and looked forward to informing his master of his progress. On his next walk with Qc Na, Anton attempted to impress his master by saying “Master, I have diligently studied the matter, and now understand that objects are truly a poor man’s closures.” Qc Na responded by hitting Anton with his stick, saying “When will you learn? Closures are a poor man’s object.” At that moment, Anton became enlightened.
A very neat explanation of functional programming. Some points helped me a lot 1. how to avoid the side effects? Ans: We cannot. You simply have to confine it and different languages handle it in different ways. Some examples are atoms, Agents/Actor model. Redux (a library to manage the states, mostly used with react) has similarly way of working, where you trigger the events and it will go through the pure functions (the state will be passed as param in each of the functions) and the response of these functions will mutate the state internally by library. 2. As functions do not have side effects, it helps a lot in handling threads. what is missed in this talk is to highlight why people often talk about curry functions while explaining functional programming. Why it is so needed in functional programming?
It's not needed, it's very convenient, and comes easily with proper function types. ML family languages autocurry functions, but you can curry functions yourself in any language with higher order functions.
currying only allows you to use partial application in a way that is a little terser than lambdas. for example instead of: increment x = add 1 x you can write: increment = add 1 or instead of: users |> filter (\user -> isOlderThan 12 user) you can do: users |> filter (isOlderThan 12) this is particularly convenient in function composition when you combine 3+ functions as partially applied functions in the middle of function composition are a lot more terse then lambdas getYoungSurnames = toUpper . map surname . filter ((< 12) . age) Vs getYoungSurnames users = (toUpper . map (\u -> surname u) . filter (\u -> age u < 12)) users
Since the 1980s I have been amazed at how programmers keep over complicating programming. He gave only scant evidence that this allows a programmer to solve bigger problems in the real world. My decades of coding experience have taught me that focusing on real world challenges is what pays dividends. Programmers often get caught on a treadmill of learning new techniques, yet they don't really get anywhere. Choosing the right problems to solve with your coding skills is far, far more important than your programming style or language. Your shiny new coding style will only impress other coders. Solving the right problem with less sexy skills will make you rich, however.
if I ve understood your comment correctly all you are saying is that being better at your specialisation is not important. To give an example all you are saying is the same as being the owner of a delivery service and here you are claiming that being a more effecient delivery service is not important since being the (or being better at) connecting service of seller to buyer can make you more money, aka being amazon or ebay instead of just a delivery service. Going back at the programming and solving problems, there is a valid point that solving the right problems might earn you more money, but that's not where people specialise, the specialise in programming and are satisfied with being paid by people who specialise in figuring the right problems that need to be solved. Some do both and some specialise and that's part of how we function.
Of course it is important to get better at what you do. But don’t assume your best course in life is to be only a programmer and to engage in greater specialization. You are ultimately a problem solver. Programming is your primary tool, but if that is your only tool then you put yourself at a great disadvantage. Understanding the problem domain is hugely important, even if you are not an expert in the exact problem being solved. For example, I would be reluctant to hire a programmer for a new space probe if the person had no knowledge or interest in space. I’ve worked with people like that before and they are ineffective problem solvers. Their programming solutions might strictly meet requirements, but they often don’t solve the underlying problems well.
"Solving the right problem with less sexy skills will make you rich" I pity whoever has to maintain your code from catching fire and making someone not rich
DuckieMcduck In my case my business owns and maintains all the code we have written for decades. For you to assume that “solving the right problems” means “making shitty code” is very immature and baseless. You are the prime example of the coder I would never hire. Realize that you are paid to solve problems efficiently. Beautiful code is a means to that end. But if all you see is code then you are too blind to be of much good.
@@Drone256 the possibility of fancy code being someone's passion hasn't crossed your mind, I love programming because of programming. It's not to make myself money or to be of good use honestly.
the beauty of this speech is (I fully agree with it) that functionality on software is frequently only a side effect for programmer;). For a geek the functional beauty and the right efficient algorithm is all but this cannot be eaten on breakfast and you can't pay your bills as well - so customer/client/buyer interests are also worth to focus on. Cheers! Excellent video!
I use a mix of functional and OO. I think it's a mistake to follow a specific set of rules for programming (OO vs FP). Just write what makes sense, causes less problems and is cleaner code to maintain. For every style of programming, you'll still end up with problems, albeit different ones depending on the style.
If you're working on existing code, do whatever is needed/customary in that codebase is the way to go. But if you're starting on a new codebase, you better stick to one particular style, and also you should choose what language you use carefully. Many modern practices that are extremely prevalent provides the same set of benefits as OO (modularity vs encapsulation, etc.), so it's much more about the style of code you're accustomed to. Designing your code around OO, especially with multi-threading, may cause a new set of problems since they aren't designed to be deterministic. FP brings its own sets of benefits, especially in provability/testing, and also the ability to refactor code without needing to worry about correctness too much. However, you instantly lose these benefits if you mix OO with FP (introducing side effects into a pure FP environment). Hence, if correctness and testing are important for a project, it's much better to do pure FP than mix OO with FP. The drawback of doing pure FP is that not many people are able to do pure FP, but that's not the fault of FP, but rather we are in a phase of transition and the educational infrastructure isn't here yet. FP is so fundamentally different from state-based imperative programming, which all non-FP styles assume means this transition will be difficult as we need to learn to think about computation and programming in a completely different light. It's kind of a dilemma. If you mix FP with non-FP, you'll end up not enjoying some of the key benefits of FP, and if you try pure FP, the learning curve is steep for programmers who are already accustomed to thinking under an imperative framework, and this gives it a reputation of being extremely difficult, so new-comers tend to not learn pure FP.
This is a great conference for someone who wants to say they learned about functional programming but don't actually want to learn; maybe someone who's being paid by the hour to learn about functional programming and nobody is going to check in on how much they learned at the end
This talk was super awesome. I was a java geek, but now I love functional programming (I do elixir) For me functional programming means... Pure functions with no side effects Immutable Data structures Eazy multiple threading (bye-bye mutexes) Thinking in terms of transformations Testable small units functional_programing(your_ideas) = awesome_products
I like the idea of pure functions and higher order functions, but I'm not yet sold on the whole immutable COW data structures or the bridge to the messy mutable world/atoms. Is that really a better approach than just minimizing the state in your program, making anything that can be immutable immutable, and using pure functions whenever a function can be pure?
Immutable data structures are a bit of a hard sell until you've worked with heavyweight business code that utilizes them. Safe concurrency primitives on the other hand, not so much. Certain approaches like software transactional memory eliminates the need for managing locks altogether.
Great talk! Although it didn’t sell me on the concept of functional programming. First rule: Everything is immutable. Second rule: because the first rule doesn’t really work, here are ways around the first rule.
It does work. It's about creating abstraction of immutability. If something behaves like it's immutable, then the fact that under the hood it's mutable is just an implementation details. Interface is immutable.
@@virzenvirzen1490 Also Prolog works. You define your Programm as a bunch of facts in predacte logic and then you formulate problems and prolog queries the solutions based on facts and logic predicates. It's even more abstract than Functional programming, you don't control how the problems are solved , you just define the facts and predicates and how these are unifyable. Yet you can still mutate state of the Programm by changing deleting or adding facts and predicates...every prolog programm is basically a self mutating database! Mathematicians use Logic! Programmers should too! Logic and unifications are way more powerfull and closer to real world problems than the "Linear Algebra 1" model of bijective, surjective and injective functions.
I don't get it either. I mean, I don't get the advantage at all. And this is not the first talk on FP I watched. I change states all the time, it's what's required. All I see is that being made more convoluted? Maybe I'm just too dumb for FP. I ain't no mathematician.
"its just an unfortunate word they picked that means something completely different" @22:32 you mean the fundamental problem with accessibility in computer science? Obfuscation behind unnecessary and misleading jargon?
This content is really nice, I was wondering at the beginning on how annoying the implementation sounds but to end of the video it all falls together really nicely.
Great talk. I love seeing people explain the simplicity of functional programming. It's a beautiful thing. It took me so long to understand functional programming well enough to be productive. The shift in thinking from OOP to functional is difficult to make. Once you get it , it's beautiful but the path can be painful, at least it was for this old chunk of coal.
Kind of reminds me of making the switch from linear execution (think 8-bit BASIC programs, or COBOL for example) to OOP. The conceptual leap was bl**dy hard work, but the reward was substantial. I have yet to make the conceptual leap to FP, and I may never make it fully. I just can't see how FP will bring anything useful to the sort of work I'm involved with daily given that it's such a perfect fit with basic OOP (don't even need inheritance or polymorphism - although some wise-arse has managed to shoehorn in both). On the other hand, the mathematical models we use to transform our input figures into outputs would almost certainly be significantly improved if re-written in an FP paradigm.
When I went to college there was no undergrad computer science department. All the undergrad computer science courses were in the math department and if you wanted to concentrate in computer science, you graduated with a math degree. As such you had to take all of the required courses for a math degree in order to graduate.
Another functional programming talk that's thick in theory, but thin in practice. Functional programming has its place, but it isn't the silver bullet its proponents claim. Functional programming, for instance, is great for mathematical or processing data, but falls on its face for a GUI, where the state is constantly flux. Many object-oriented languages today already have facilities for functional style programming such as C# Linq.
This was very informative and after starting with Elixir a couple weeks ago, it all makes sense now. I think I've wasted my 5 years with JavaScript doing a lot of imperative coding while functional programming languages like Elixir exist
The problem is not the language but how you use it. Switching to a different language doesn't make you a better programmer if you don't understand what is actually causing the problems in programming.
I'm semi-tempted to see if I can rewrite the Arduino C++ code I wrote on Sunday in an FP style. Was a UI to adjust half a dozen parameters on a running process. Current code uses a lot of inheritance and is ?overengineered.
I just wanted to write a comment this morning and this one really shows my style. So let's start with how when I first started writing comments I was told to forget everything I know about writing comments. But most people here know alot about them so I thought I just keep this default. I like this video and how he talks. Another thing one needs is there more than one type of RUclips comments.. That said one can think of it like any other ordinary comments that has a design known well to subscribers as well as it is user friendly
My problem with this is that real code is largely fixing data and error processing. Maybe if you are coding some back end thing that that reads perfect data from a queue and writes back to a queue you don't need to worry, but I have to check every value and either reject it or fix it.
I still dont understand why is there lots of buzz around FP! the core concepts that this gentle man has been using, are really normal abcs of any programming languages. in C++/C#, that I have extensively worked, you can have mutable or immutable data structures. This really boils down to the programmer him/herself knowledge or desire really!
FP is really about adding additional restrictions to the programmer to protect them from themselves. You can write functional code in C++/C#, but a dedicated FP language prevents you from straying outside of FP.
@@michaelcarter577 Nail on the head! I thought I was going mad when I heard the explanation but it does seem to be a means of stopping you from hurting yourself. Unless you are mad too but that is fine because it means I am not alone :)
At 36:19 - Are those wire-rim glasses? Looks like my phone's voicemail logo reminiscent of a cassette tape. Wait..handcuffs?? Oh - now it's a bicycle?!?
The most interesting part was the bridge. Because I mostly do the rest with my own Python code. But I always wondered how functional languages dealt with the "messy mutable outside world". Great explanation.
I just try to segregate mutability as much as possible. The moment the data comes in, it becomes immutable, and the only time it changes is as close to the "o" in "io" as possible. I think people get a little dogmatic about it though. Being real strict about rules like that is mostly a practice in making use of fewer tools to keep things organized and simple, and as a benefit, keeping yourself sharp and able to think things through with fewer options. It's like yeah, a mechanic can make your car work with his expensive education and thousands of dollars worth of tools, but a redneck can do it with half a shoelace. Finding the balance between the two is key.
Functional programming (or what it is being sold as because surely this is were programming all started) seems to merely be a wrapper around semantics. Some people understand the concept/design in object based semantics, others understand the concept/design in mathematical semantics. Maybe I missed something.
If I summarized the point of functional programming, it comes down to functions not creating side-effects. But not in the way you might think. It isn't that the functions can't "do anything", it's that they will reliably and always do the SAME thing given the input. So you pass in the number 12, it will return 24 and that's it. It won't return 24 but also sometimes 30. Or return 24 but sometimes as an array and sometimes as a string. The point is that you write a function that works one way, always returning the same output given the input. The input itself needs to be the proper format too, which is very useful for testing and during development. If your function receives an integer, then it would cause an error to pass in an array, or a string, or an object. And this can be easily tested and even caught in the linting phase. So again, reliably return the same output for the same input, and no "side effects" beyond that. Nothing random, nothing out of scope, nothing changes based on some extra logic (on Tuesdays, only when it's raining, etc). Of course I may have my definition completely wrong, but people seem to be confused, "how do we program without creating effects?" That's not the point, the point is your functions return the same output for the same input every time. Immutable data structures is kind of a 2nd issue for me. Understanding how functions should be written is what I focus on first. And really, functions can be written this way even in procedural and OO.
If a functional programing function checks the time, then it's not a pure function. Checking the time implies time is important which implies there's a situation where tuesday produces a different result to wednesday.
I really appreciate how he didn't sell it as a cure-all for all modern programming woes. So often I see a new paradigm pop up that sells itself as the perfect way to code every application ever, and it turns out to just have different hurdles than the ones it compares itself to. From that alone, I'll probably give functional programming a try some time.
Modern programming woes have nothing to do with applications. They all stem from server-centric architecture and that comes from the need of the application provider to steal your data. What can be done in a hundred lines of code on the device takes a million lines of code if it has to be executed reliably on a server. That is the only problem that modern software developers are facing.
Seems people are fighting about what is the best programming method. But in the end, it is really depends on your purpose and idea for an application. I code using both OO and FP, I switch if what is better and more efficient for what needed/wanted. I have even mixed both with efficient enough flows. Seems the most die hard people who likes a specific thing keeps push what they want to what other people want. (I am not saying the presenter is like this, I just heard someone from a Clojure talk just ranted why OO is better)
Came here to learn something about functional programming, out of curiosity. Gave up after 10 minutes, around the time he starts showing documentation from OOP languages and essentially says “Gosh, look at that snippet, doesn’t it sound hard?”. It’s getting frustrating that every video I watch on FP begins by trying to sell me on the idea that I should give up OOP because it’s inferior.
That isn't at all what he is saying. He is saying there are aspects of OOP that can make code convoluted and difficult to maintain and that functional programing has solutions to some of these issues. You don't have to choose between OOP and functional programming but rather choose where you want to use aspects of each. You take skills and practices from both OOP and FP to make your code as optimal as it can be.
Exactly my thoughts too! None of these so called gurus ever shows a real world business case solutions using their genius approach. They only work in simple academic scenarios, but fail dramatically for everyday work.
So maybe I'm just misunderstanding, but what's the difference between those bridges (ie: the Atoms) and just regular old programming. While I see the Atom has benefits in multithreaded applications, other than that, what's the difference between altering regular variables in regular programming and altering "atoms"? I'm sure I'm missing something because it just sounds like introducing the same problems it claims to solve? Like the "outside world" code that actually performs tasks and produces the work that is the entire point of our programs, that can't be written functionally? If not, are we really gaining anything? Or is it just a way to segment your code so you can maybe minimize mutability a bit.
To me the Atoms solution to multi-threaded concurrent access that is presented here is a very bad design choice. First, the function g is executed twice, which means wasted processor execution time. Second and most importantly, if function f is faster than function g and is called frequently you will have a starvation issue where the thread using f runs smoothly constantly updating the Atom while the thread using g is constantly rejected and locked in a loop retrying g. I can't see why they don't use a reliable queuing mechanism that can ensure no processor time is wasted in avoidable retries and no thread is locked in an infinite retry loop. This is multi-threading basics. The "messy outside world" needs results and it needs them in a finite predictable time. Timelessness is the main issue I have with functional programming: most often, the design completely ignores the timing aspects of the program. I see functional programming as an overkill and excessively restrictive solution to the problem of correctly identifying, implementing, and documenting the code sections that need thread safety, and/or immutability. Those issues have efficient solutions in procedural programming. Not knowing or using them properly is not an excuse for trying to completely remove mutability from the programming world. Programming is the art of creating efficient tools to solve real world problems. Programs are not autonomous flawless deities that live on their own in a timeless paradise and sometimes allow mere mortals to get a glimpse at side effects of their perfect beauty.
@@christianbarnay2499 Clojure provides for whatever paradigm works best for you. Atoms are synchronous and Agents are asynchronous. If you want a coordinated state, use agents.
Great talk. My favourite quote: "I don't have to look through 25 pages of code trying to figure out if someone STUCK something in the middle of x" 19:00 :D
An array typically has accessing its elemnts in O(1). The array built from a tree would be O(logn). That's kind of a big issue for something that as commonly used as an array.
An array accesses to an element by addition to base index and dereferencing the location to a register (about 2-3 instructions). Let's say you have an array containing 512 GBs of int values (which is a lot actually). When you have a tree you do the above process repeatedly (we can also assume that the tree structure will perform more dereferencing and arithmetic for node access, so let's say 8 instructions). 512GB = 512 * 1024 * 1024 * 1024 bytes -> /4 (size of int) -> log32 -> round up -> 8. thus at most 64 instructions. Considering there are pipelining mechanisms implemented in modern CPUs it actually takes less than that. Now think of how many cycles it is necessary to load such array into the memory, then to write it to a disk. In most optimization requiring tasks that uses arrays (games, image processing, deep learning, ) any required fetch or update is assigned to the GPU so this case that we are currently inspecting is a bit far case. Now consider this, do you prioritize more understandable, error-prone, maintainable code or optimizations that are premature %95 of the time anyway?
indexing into an array is a O(1) op only in theory. In actual execution, indexing into an array is not even the same over time, even for the same location.
Full circle. When I started programming, all we had was functional programming. I do 100% of my stuff with it. It's kind of like keeping your orange polyester turtleneck shirts in the closet....one day, you'll be at the edge of fashion again.
@@ccgarciab I think, maybe, the definition of "functional programming" is a little less intuitive to some of us that started programming in or before the mid 80s.
36:00 'It solves the threads-issue' Well, only if the state you wish to modify, fits in a single atom....... In which case, most other tools can be made to work just as well. Still: The fact I could put on this talk, while doing something else and pick out this issue, this clearly: The praise this presentation gets, is without a doubt deserved. The 'you could write stateless, in languages not built around it': still true. The 'we don't deadlock, as we don't do collision-avoidance. We do collision-detection and rerun (the latter is only possible because we know we're completely stateless)': it is a choice, if you build around it probably more often than not a reasonable one. (keep your heavy functions away from your collision-prone ones) The 'and if you make this choice, we'll do the handling for you': nice, nothing to add really (others 'can be made' to work just as well, but this work still needs to be done)
Simply making a copy of the variable transforms the problem from "what is the current value? " to "which copy should I be using and am I aware of it ?"
No. To use his example, say you want to change the letter D to the letter E in an array of alphabet letters for whatever purpose. The system will then produce a copy of the whole array with that change included as a result which you then pass to whatever function needs the modified data. Then the copies are discarded, the array is reset to its original state and you will have to request the change to be made again if you want to change D to E. I think the idea is to get rid of situations where programmers use an array of values, then change the values in their code and then later on run some code on the array expecting a result based on the original value of the array but getting the result based on the modified array. Because you can't change the array via code you won't ever have to comb through lines of code to find out where the values got changed. Is it worth switching to functional programming for? I would say mostly in cases where you just cannot afford to ever have ambiguity or errors in the data you manipulate. Like A.I systems for example. If you're just making a computer game for kids then you probably don't need to bother.
@@HurricaneSA it's like riding a bike, with enforced use of safety wheels, then? In many cases I don't care about my initial array, and over writing it as I go is just more efficient. If for some reason I need to preserve it, I can do that too and create a copy instead. I really don't get the upside at all. I've watched multiple talks. I don't understand. All I take away from FP is that it deliberately removes half your tool box and puts on a safety helmet for you. But now I get to hit nails with a screwdriver, and god forbid I ever need glue, because there's none there. Is there anything beyond the enforced self-protection I'd like to use from FP? I literally don't understand why I'd ever use it. Not trying to be condescending at all here. Maybe I'm just too dumb. But I really don't see the upside. I have access to immutables outside of FP, without jumping through hoops to use a variable.
@@theral056 Yeah, I agree that it seems rather redundant. However, I suspect the benefit in this case would be more applicable to large projects where you have multiple programmers accessing the same data via different modules simultaneously. I confess though that I'm just a C# programmer so I tend to agree that it seems rather convoluted.
Pass your function parameters as const. Congratulations, you have now invented functional programming.
4 года назад+23
My favourite thing about functional programming are higher level functions. Classic example is sorting, you don't have to program sort for each data structure, you just need a function to compare the data and pass it to the sort function. It is not only sort, but every other operation with data.
Great introduction to FP. Thank you very much. Can't help but being a bit cynical about the whole enterprise though. FP is designed to avoid specific programmers pitfalls (that plagued OO for instance) but generates its own. I'm sure I can find a youtube video about a new programming paradigm designed to correct the mistakes/limits of FP. I think it is an illusion to believe we can solve the challenge of communicating solutions to problems (of increasing complexity) to each other by creating a new way to write them up in code.
Ok I think I got the idea but one question remains: why would you do that? Is there only the advantage of having code that's easier to understand? Or are there cases in which functional programming would offer a better solution than traditional programming?
My only problem (not with functional) but with how Java handles functional, is the syntax needed for VarArgs, they should have used something different than commas, by doing it, one could place VarArgs at the beginning of a method parameter, making it not only extremely easy to code, bu extremely easy to read and understand the data flow.
The Win32 Api is C. However, it Does Use an Object Oriented Technique for Reducing code Size and Reusing code. It Does this by Using a Struct and Passing the Struct around many Functions. The Same function is Used for Common things in the Struct and additional functions are Used for Differences. You Can DO OO Object Orientation in C. With One Caveat. When You Do it in C, you Don't Have Encapsulation, Inheritance and Polymorphism. Somebody elses's Functions can Mess Up Your Shared Struct. When Done in C++ with Objects, Inheritance and Polymorphism. Somebody else's Code, Doesn't Mess Up Yours. However, this Comes at a Price. in C++, the Program has to Jump about to get things Done. This is Unlike C. In Conclusion: C++ is for Large Parts of Code that is Run infrequently.
After watching 40 minutes video I haven't learned how to write "Hello World" in FP. How do I start? Can I refactor my code to FP? Should I? Is FP something I can introduce step by step in my work? Can I write FP code within OO framework?
Some content may be hard to get at first (imo List Comprehensions were pretty hard), so just skip it, study a bit more of the following content and come back later
So I am clearly misunderstanding something here but how does this kind of node-based data-structure work in the new world where cache coherency is more important that almost anything else in performant systems? How on earth can an implementation of an array based on nodes and links be even vaguely compared to a contiguous array of data in terms of its on-cache performance? Or is this more abstract and performance considered less of an issue?
What if the regular functions f(a,b,c) were pure, but object methods O.m(a,b,c) didn't have to adhere to these rules? This obviously wouldn't be a purely functional language, but it would be a primarily functional language with OOP support, the exact opposite of most languages today - primarily OOP languages with functional support.
29:20 isn't that called a MONAD A monad is a monoid in the category of endofunctors what's the problem - a random developer I don't remember his name ;)
Agree or disagree, it doesn't matter because I am not here to argue. My observation is, programmers trying to make "more clever" programming methods in order to solve problems that were created by programmers. That is to say, the human element of being lazy and the attitude of "it works." There is NOTHING that anyone can in OO that cannot be done in Basic, that cannot be done in FP. Keep in mind I am speaking of functional basics here. I have personally gone through many programming languages and I am hard pressed to find actual differences. Isn't immutability a constant? A class a function? This list could go on for a while I suppose.... Feel free to quibble over semantics if you must. Wrap a new shiny bow on your new language give it some similar but different name and POOF, we now have the next best thing. I've had the opportunity in my life to write BIOS code (that was fun: ASM and C -General Software, Phoenix, and AMI, and learning a TON about hardware), web pages from HTML, PHP, CSS, VS, JS, the usual suspects anyway, a few basic Windows and Linux drivers, and many custom applications using both Java and VS mostly for data management or job tracking of some sort. I've never once landed on a single programming language and said, "Yes, this is the one that all others need to be modeled after." I have noticed that all of them provide you with the tools to tie your legs and arms into knots and create some of the ugliest code possible. Comments are your friends. Extra notes at the start of a code block will save you so much time. I would say to anyone to not get hung up on any language's ability to do X better because Y class/function that's built in. I cannot count the number of times that I've written a simple function in an OO language because it far superior to relying on inheritance and the trappings of mutability. Sure one method of solving a problem might be quicker to write but quicker to maintain is a different story. Bah what do I know anyway, I'm just some random putz on the internet....
@Will Cockram I'm not saying don't comment. I'm saying that if you have to explain why the code is written in a certain way then it's not good code, and by extension this makes the comment "helpful" (even mandatory). It excuses the poor code. In a worst case scenario, the comment will actually be wrong and will mislead the maintainer since it has no bearing on the execution. So if you skip constructs and features under the guise that you can use your comments to simply excuse your way of doing things, think again.
It seems that people with broad experience such as yourself, share the same opinion. That it's more important to just become good at using the tools available rather than harp on about which tool is best. A rather unpopular opinion, it seems, judging from this comment secion. But then again, that's why good programmers are very valuable.
Alan Turing proved this. Any algorithm can be computed using a very basic set of operations (read, write, forward 1, backward 1) and a set of rules. Knowing this reduces all languages to how easily they implement libraries of functions. I could (well, maybe once upon a time, 8^) ) write the character C to the screen using an assembly language to move byte values around different registers. Or I could just type print('C'). For me, as a developer, knowing a language is about knowing the syntax and the libraries it supports. I should say, I'm not a heads down developer. I'm a data engineer and hobbyist. For me it's all about solving a problem in the easiest and fasted way I can. I mostly use SQL and Python. And I'm a formatting Nazi. I use long, descriptive names and LOTS of whitespace. This is where Python gives me a little grief. I like to capitalize keywords so the functions stand out.
I still don't understand it; if all the language consists of is functions, then what does a function do? does it send a function to a different function, where the function it acts on is again a function? What is the use of such a program? How can I do input/output handling when I only have functions?
Well your input is passed as (surprise, surprise!) input to a function that transforms it into (surprise, surprise!) an output. This function might obviously call other functions underneath, but it's that simple.
Data Processing - calculations are important BUT most software is data processing software. With data processing, it is data structures that are hard work. Watch Grace Hopper, she is fun and explains the issue.
Looking for books & other references mentioned in this video?
Check out the video description for all the links!
Want early access to videos & exclusive perks?
Join our channel membership today: ruclips.net/channel/UCs_tLP3AiwYKwdUHpltJPuAjoin
Question for you: What’s your biggest takeaway from this video? Let us know in the comments! ⬇
I tend to use my own paradigm - a blend of ObjectOriented and functional, which others seem to refer to as 'Objectional' programming.
lol under rated comment
I have adopted something pretty common called disfunctional code
@@cyboticIndustries FPOOP programmers rise up
When I heard about Clojure for the very first time my reaction was exactly like that “ how can you build anything if you can’t change your data!?!”. I love how this speech explains that. Great one!
I can't get over the irony of having a talk about functional programming in a conference called "goto", the epitome of procedural thinking :D (great talk though!)
It's like having a religious meeting called 'Satan 2018'.
Your statement considered harmful.
Procedural programming is quite the opposite of "goto". Procedural programming is further development of structured programming. And the whole reason why structured programming was invented is to get rid of "goto".
@@antonkapelyushnik47 My statement is a joke, apologies if that wasn't clear
Fun fact: some functional languages compile tail recursion to goto jumps to avoid stack overflow. And to be honest, everything compiles to some form of jump somewhere down the line since everything eventually has to run on hardware.
@@ianzen pretty much - hardware inherently involves unsafe things, mutations, jumps. higher level programming is about creating abstractions over top of it, restricting us in a way that makes structuring applications simpler to reason about and protect ourselves
Hands down the best introduction to functional programming on RUclips.
A key problem with adoption is that many of the prominent gurus are math-oriented computer scientists who don't understand how to communicate well with pragmatic working programmers.
Olsen gets over the key ideas here without muddying the waters with monads or functors or currying or any of the esoteric terms that are so off-putting to the newbie. He presents FP as something accessible that solves real-world problems.
But he doesn't over-promise. To hear some of the enthusiasts, FP is some kind of magic fairy dust that makes all the messiness of real development go away.
Outstanding!
A math oriented computer scientist will NEVER push functional programming on you. It contradicts everything those guys know about algorithms. ;-)
A note on Principia Mathmatica:
In modern mathematical language, proving 1+1=2 is WAY more compact. This is because modern mathematics is much DRYer than Russel and Whitehead. In R&W, they duplicate the 'code' defining set operations (union, intersection, etc.) as the code defining relations (all the same formulas with a friggin dot over all the symbols. Just twenty papges of repetition.) This is because the idea that a relation is a set of tuples came AFTER Principia.
A note on side-effects:
Haskell does this best. Haskell does side-effects by constructing a data structure representing side-effectful computation. Since functions are data, this data structure consists of nested functions taking values and producing side-effect representations (which recursively contain functions taking values returning side-effect representation and so on). As the runtime evaluates this side-effect representation, it progressively calls these nested functions. It's all-together very elegant, built on only one primitive: compose a side-effect with a side-effect-producing function into a new side-effect, without ever actually making anything happen.
A note on the pie chart:
96% functions means 96% code that CANNOT sneak up on you. 96% guaranteed thread-safe code. 96% trivially unit-testable code.
best comment
Thank you, this is interesting. Could you please explain how the idea of relations as sets of tuples makes 1+1=2 proof easier? And why was it so hard before?
Excellent explanation of Haskell effects without the 'M-word'. Congratulations!
Another super important thing about Principia is that Whitehead and Russell worked within the logicism program. You can very easily take natural numbers as primitive, like Brouwer does, and then 1 + 1 = 2 is pretty much true by definition
«This is because the idea that a relation is a set of tuples came AFTER Principia.»
--
That a relations CAN be represented as tuples, and ONLY when suitable (actual value, among others) !
Smarty functionals, in whatever language, would better prove the superiority by means of concrete implementations, instead of "talking the functional superiority".
he is soo genuinely happy explaining wht he knows..its inspiring
Thus far, this is absolutely the best video on the subject.
BTW, if you are short on time, start at 11:17, but I would just listen to the preamble anyway to give you sense (FOR ONCE!) that no, there's no magic here and you don't have to forget everything.
Such a joy to watch this. Just the array to tree concept by itself is an eye-opener.
"Immutability of inputs"
"Immutability of outputs"
Excel is a functional programming language.
Correct
muddi900 And a beautiful one at that. It sort of flips programming upside down. Instead of looking at the code and imagining the data being transformed at each stage, you type the code into a box, and the code disappears, and all you see is the raw data in its new, transformed state.
@@dupersuper1000 There is Jupyter, Lighttable, Spark for functional languages like Clojure, Haskell, Python, APL, etc...
ruclips.net/video/nYBW4ExtNvo/видео.html
What Excel is missing is the ability to name the functions you create and then reuse them (as far as I know). That feature would make VisualBasic mostly obsolete.
Im working on exactly this...getting excel type fp integrated into 5th gl server sides...its Realy exiting ☺️
Im so glad youtube algorithm re-upped this. This is one of the best fundamental talks for people who already have a programming or mathematics background or academic experience and want to make sure their next steps are chosen carefully in the world of functional programming. Considering what we learned from Russel and Godel and Cantor, this is pretty close to a fundamental talk about having a sound basis.
One of the most sensible and clear explanation about Functional Programming.
And still, as a beginner, I don't take anything new from this video, that I could use..
František Heča same, but I hope I can use this video as a stepping stone into the world of FP
btw. Closures are bad/lazy programming.
@@alejandrotorres-py4wz wow so smart
@@frantisek_heca You were not the only one. I have been programming for years and still have no clue how functional programming works in reality. Pep talk rather than a true presentation of value.
I learned two things from this talk: function and atom. Function responsible for what change to make. Atom responsible for making change according to the function. Atom takes care of the complexity of multiple threads, ideally implemented by the language. Developer writes simple functions with no concerns of multi thread. If this is the essence of the FP, this is the first time I get the idea of FP. Either way, I found this talk amusing.
As someone struggling to really understand functional programming, this video was quite helpful. He touches on something that still doesn't quite make sense to me. Our job is entirely side effects. We get paid to write code that does stuff, and all of that stuff involves interactions with the outside world. When I tried to learn clojure, as one example, I spent some time on it and still had no clue how to actually do anything useful. Now I'm back at it, although this time trying to learn Erlang and I have a similar problem. I learn some basics, but immutability and the lack of side effects makes it difficult for me to figure out how to get any real work done. It's truly a different way of thinking about solving problems and I haven't really got a handle on it yet.
Yeah, it really is a different way of looking at the problem which isn't intuitive if you've learned the standard imperative or OOP style.
The phrase “functional programming doesn’t have side-effects” is not exactly true. For example, those of us who work in the Haskell family of languages have just figured out how to encode side-effects in such a way that we can keep some of the composability and reasoning benefits of non-side-effecting functions in our side-effecting functions. We’ve also done it in such a way that side-effecting code cannot pollute non-side-effecting code. FP languages in general are just more careful with how they deal with side effects. This is chiefly because of the observation that side-effecting code is more challenging to compose than non-side-effecting code. And in some cases side effects make certain ways of composing things literally impossible. So any practical FP language has side-effects. They are just more careful and restrictive about side-effecting code in an effort to achieve the ability to better predict what code will do-including the side-effecting parts.
Store passing style
When given a data structure to your method, don't change the structure. Create a new one having the results of the method.
The way I understand it is the side effects are unavoidable. The point of functional programming is not to entirely eliminate them, but rather to make sure the only side effects in a program happen at very well known and well defined areas.
This video helped me understand what the heck functional programming actually is. The mathematics comparison is super memorable too. Amazing talk!
Wow, his passionate presentation style is a brease.
From a curious laymen: thank you. Truly. This is 40 minutes of clarity and honesty.
@Vendicar Kahn And by the looks of it, you spent another 40 minutes replying to all positive comments on this video. You seem like a joyful fella.
Fabulous! Clear, concise. Thank you for making the presentation. It helped me get a handle on what FP means and how it translates into the real world. Cheers!
Thanks for this video. It's the best I've found so far on the fundamental concepts of FP. But after watching it, I realize there is nothing new under the sun, and it sounds a whole lot like what I learned decades ago from one of my old college profs... (1) Do as much as possible from a function of small size, and (2) Don't depend on global variables.
Lisp is one of the oldest languages still in use. Developed around 1960. Very good functional features.
Functional programming IS decades old.
From the bottom of my heart
, I'd like to say "Thank you
nice profile picture
Excellent point about 'refactoring' our ideas, instead of forgetting. Applicable to other areas of endeavor. Learn new concepts/conceptual frameworks and then, depending on your assessment, refactor what you already 'know', and see if it works.
Refactoring is nothing else than comment writing. If you refactor correctly, your code does exactly the same thing. For most developers the refactoring operation is the equivalent of the crew shuffling chairs on the sinking Titanic.
I think the moral of this story is, any paradigm taken too far leads right back to whatever you were trying to avoid in the first place, but with extra steps.
In Programming JavaScript Applications by Eric Elliot he brings up a a story from the MIT Lightweight Languages discussion list, by Anton van Straaten:
The venerable master Qc Na was walking with his student, Anton. Hoping to prompt the master into a discussion, Anton said “Master, I have heard that objects are a very good thing-is this true?” Qc Na looked pityingly at his student and replied, “Foolish pupil- objects are merely a poor man’s closures.”
Chastised, Anton took his leave from his master and returned to his cell, intent on study‐ ing closures. He carefully read the entire “Lambda: The Ultimate...” series of papers and its cousins, and implemented a small Scheme interpreter with a closure-based object system. He learned much, and looked forward to informing his master of his progress.
On his next walk with Qc Na, Anton attempted to impress his master by saying “Master, I have diligently studied the matter, and now understand that objects are truly a poor man’s closures.” Qc Na responded by hitting Anton with his stick, saying “When will you learn? Closures are a poor man’s object.” At that moment, Anton became enlightened.
what? that doesn’t follow at all
@@AndreiGeorgescu-j9p No, I did.
A very neat explanation of functional programming. Some points helped me a lot
1. how to avoid the side effects? Ans: We cannot. You simply have to confine it and different languages handle it in different ways. Some examples are atoms, Agents/Actor model. Redux (a library to manage the states, mostly used with react) has similarly way of working, where you trigger the events and it will go through the pure functions (the state will be passed as param in each of the functions) and the response of these functions will mutate the state internally by library.
2. As functions do not have side effects, it helps a lot in handling threads.
what is missed in this talk is to highlight why people often talk about curry functions while explaining functional programming. Why it is so needed in functional programming?
It's not needed, it's very convenient, and comes easily with proper function types. ML family languages autocurry functions, but you can curry functions yourself in any language with higher order functions.
currying only allows you to use partial application in a way that is a little terser than lambdas. for example instead of:
increment x = add 1 x
you can write:
increment = add 1
or instead of:
users |> filter (\user -> isOlderThan 12 user)
you can do:
users |> filter (isOlderThan 12)
this is particularly convenient in function composition when you combine 3+ functions as partially applied functions in the middle of function composition are a lot more terse then lambdas
getYoungSurnames = toUpper . map surname . filter ((< 12) . age)
Vs
getYoungSurnames users = (toUpper . map (\u -> surname u) . filter (\u -> age u < 12)) users
This is an extraordinary talk! Thank you!
Since the 1980s I have been amazed at how programmers keep over complicating programming. He gave only scant evidence that this allows a programmer to solve bigger problems in the real world. My decades of coding experience have taught me that focusing on real world challenges is what pays dividends. Programmers often get caught on a treadmill of learning new techniques, yet they don't really get anywhere. Choosing the right problems to solve with your coding skills is far, far more important than your programming style or language. Your shiny new coding style will only impress other coders. Solving the right problem with less sexy skills will make you rich, however.
if I ve understood your comment correctly all you are saying is that being better at your specialisation is not important. To give an example all you are saying is the same as being the owner of a delivery service and here you are claiming that being a more effecient delivery service is not important since being the (or being better at) connecting service of seller to buyer can make you more money, aka being amazon or ebay instead of just a delivery service.
Going back at the programming and solving problems, there is a valid point that solving the right problems might earn you more money, but that's not where people specialise, the specialise in programming and are satisfied with being paid by people who specialise in figuring the right problems that need to be solved.
Some do both and some specialise and that's part of how we function.
Of course it is important to get better at what you do. But don’t assume your best course in life is to be only a programmer and to engage in greater specialization. You are ultimately a problem solver. Programming is your primary tool, but if that is your only tool then you put yourself at a great disadvantage. Understanding the problem domain is hugely important, even if you are not an expert in the exact problem being solved. For example, I would be reluctant to hire a programmer for a new space probe if the person had no knowledge or interest in space. I’ve worked with people like that before and they are ineffective problem solvers. Their programming solutions might strictly meet requirements, but they often don’t solve the underlying problems well.
"Solving the right problem with less sexy skills will make you rich" I pity whoever has to maintain your code from catching fire and making someone not rich
DuckieMcduck In my case my business owns and maintains all the code we have written for decades. For you to assume that “solving the right problems” means “making shitty code” is very immature and baseless. You are the prime example of the coder I would never hire. Realize that you are paid to solve problems efficiently. Beautiful code is a means to that end. But if all you see is code then you are too blind to be of much good.
@@Drone256 the possibility of fancy code being someone's passion hasn't crossed your mind, I love programming because of programming. It's not to make myself money or to be of good use honestly.
Mentions "Principia Mathematica": Hey maybe i should read this and get my math game going
Shows pages: I want to die right now... or go work on a farm
principia isnt useful math anyway
and it's outdated anyway, so unless you are a historian don't read it
the beauty of this speech is (I fully agree with it) that functionality on software is frequently only a side effect for programmer;). For a geek the functional beauty and the right efficient algorithm is all but this cannot be eaten on breakfast and you can't pay your bills as well - so customer/client/buyer interests are also worth to focus on. Cheers! Excellent video!
I use a mix of functional and OO. I think it's a mistake to follow a specific set of rules for programming (OO vs FP). Just write what makes sense, causes less problems and is cleaner code to maintain. For every style of programming, you'll still end up with problems, albeit different ones depending on the style.
that's why as a C# dev I'm learning F#. Seamless interaction between them are a blessing :)
@@Qrzychu92 The thing I love most about F# is that the best bits make it into C# soon afterwards.
@@Qrzychu92 I'm learning G#. Because, you know, one better... :D
If you're working on existing code, do whatever is needed/customary in that codebase is the way to go. But if you're starting on a new codebase, you better stick to one particular style, and also you should choose what language you use carefully. Many modern practices that are extremely prevalent provides the same set of benefits as OO (modularity vs encapsulation, etc.), so it's much more about the style of code you're accustomed to. Designing your code around OO, especially with multi-threading, may cause a new set of problems since they aren't designed to be deterministic. FP brings its own sets of benefits, especially in provability/testing, and also the ability to refactor code without needing to worry about correctness too much. However, you instantly lose these benefits if you mix OO with FP (introducing side effects into a pure FP environment). Hence, if correctness and testing are important for a project, it's much better to do pure FP than mix OO with FP.
The drawback of doing pure FP is that not many people are able to do pure FP, but that's not the fault of FP, but rather we are in a phase of transition and the educational infrastructure isn't here yet. FP is so fundamentally different from state-based imperative programming, which all non-FP styles assume means this transition will be difficult as we need to learn to think about computation and programming in a completely different light. It's kind of a dilemma. If you mix FP with non-FP, you'll end up not enjoying some of the key benefits of FP, and if you try pure FP, the learning curve is steep for programmers who are already accustomed to thinking under an imperative framework, and this gives it a reputation of being extremely difficult, so new-comers tend to not learn pure FP.
This is a great conference for someone who wants to say they learned about functional programming but don't actually want to learn; maybe someone who's being paid by the hour to learn about functional programming and nobody is going to check in on how much they learned at the end
Now this is what i call an explanation!, thank you for aiding me in my fp journey
An absolutely amazing talk, thanks to Russ!
This talk was super awesome.
I was a java geek, but now I love functional programming (I do elixir)
For me functional programming means...
Pure functions with no side effects
Immutable Data structures
Eazy multiple threading (bye-bye mutexes)
Thinking in terms of transformations
Testable small units
functional_programing(your_ideas) = awesome_products
Excellent lecture. Easy language accessible even to juniors! Bravo!
I like the idea of pure functions and higher order functions, but I'm not yet sold on the whole immutable COW data structures or the bridge to the messy mutable world/atoms. Is that really a better approach than just minimizing the state in your program, making anything that can be immutable immutable, and using pure functions whenever a function can be pure?
Immutable data structures are a bit of a hard sell until you've worked with heavyweight business code that utilizes them. Safe concurrency primitives on the other hand, not so much. Certain approaches like software transactional memory eliminates the need for managing locks altogether.
Great talk! Although it didn’t sell me on the concept of functional programming.
First rule: Everything is immutable.
Second rule: because the first rule doesn’t really work, here are ways around the first rule.
It does work. It's about creating abstraction of immutability. If something behaves like it's immutable, then the fact that under the hood it's mutable is just an implementation details. Interface is immutable.
@@virzenvirzen1490 Also Prolog works. You define your Programm as a bunch of facts in predacte logic and then you formulate problems and prolog queries the solutions based on facts and logic predicates.
It's even more abstract than Functional programming, you don't control how the problems are solved , you just define the facts and predicates and how these are unifyable.
Yet you can still mutate state of the Programm by changing deleting or adding facts and predicates...every prolog programm is basically a self mutating database!
Mathematicians use Logic! Programmers should too!
Logic and unifications are way more powerfull and closer to real world problems than the "Linear Algebra 1" model of bijective, surjective and injective functions.
I don't get it either. I mean, I don't get the advantage at all. And this is not the first talk on FP I watched. I change states all the time, it's what's required. All I see is that being made more convoluted? Maybe I'm just too dumb for FP. I ain't no mathematician.
@@theral056 I guess it's ok. As in fp every problem is of the size of just one function.
But it's just bloody inefficient.
Him: I don't like the "pure function" name bcs it means other functions are bad
Also him: the nasty outside stateful world
You have 100% missed the point, I'm afraid. The whole point of f
I would call them "deterministic functions". If you use the same inputs, you'll always have the same output.
He said messy not nasty which is emtirely different.
"its just an unfortunate word they picked that means something completely different" @22:32 you mean the fundamental problem with accessibility in computer science? Obfuscation behind unnecessary and misleading jargon?
This content is really nice, I was wondering at the beginning on how annoying the implementation sounds but to end of the video it all falls together really nicely.
Great talk. I love seeing people explain the simplicity of functional programming. It's a beautiful thing. It took me so long to understand functional programming well enough to be productive. The shift in thinking from OOP to functional is difficult to make. Once you get it , it's beautiful but the path can be painful, at least it was for this old chunk of coal.
Kind of reminds me of making the switch from linear execution (think 8-bit BASIC programs, or COBOL for example) to OOP. The conceptual leap was bl**dy hard work, but the reward was substantial.
I have yet to make the conceptual leap to FP, and I may never make it fully. I just can't see how FP will bring anything useful to the sort of work I'm involved with daily given that it's such a perfect fit with basic OOP (don't even need inheritance or polymorphism - although some wise-arse has managed to shoehorn in both). On the other hand, the mathematical models we use to transform our input figures into outputs would almost certainly be significantly improved if re-written in an FP paradigm.
When I went to college there was no undergrad computer science department. All the undergrad computer science courses were in the math department and if you wanted to concentrate in computer science, you graduated with a math degree. As such you had to take all of the required courses for a math degree in order to graduate.
Another functional programming talk that's thick in theory, but thin in practice. Functional programming has its place, but it isn't the silver bullet its proponents claim. Functional programming, for instance, is great for mathematical or processing data, but falls on its face for a GUI, where the state is constantly flux. Many object-oriented languages today already have facilities for functional style programming such as C# Linq.
@@bobweiram6321 for GUIs OOP is a good fit, but e.g. for data pipelines functional is the way to go.
This was very informative and after starting with Elixir a couple weeks ago, it all makes sense now. I think I've wasted my 5 years with JavaScript doing a lot of imperative coding while functional programming languages like Elixir exist
The problem is not the language but how you use it. Switching to a different language doesn't make you a better programmer if you don't understand what is actually causing the problems in programming.
Answered a lot of questions, thanks for putting this out there!
This is an awesome talk ! You explained functional programming in a great way !
one of the best explanation of Functional programming
I'm semi-tempted to see if I can rewrite the Arduino C++ code I wrote on Sunday in an FP style. Was a UI to adjust half a dozen parameters on a running process. Current code uses a lot of inheritance and is ?overengineered.
I just wanted to write a comment this morning and this one really shows my style. So let's start with how when I first started writing comments I was told to forget everything I know about writing comments. But most people here know alot about them so I thought I just keep this default.
I like this video and how he talks. Another thing one needs is there more than one type of RUclips comments.. That said one can think of it like any other ordinary comments that has a design known well to subscribers as well as it is user friendly
Category theory is actually really interesting. We study it at our university, it gives you another taste of what's haskell about.
Great Lecture.
Functional programming is growing and this was a great introduction as to why.
Thanks.
Luv and Peace.
Best functional programming explanation
One of best talk I have ever heard on Functional Programming...
My problem with this is that real code is largely fixing data and error processing. Maybe if you are coding some back end thing that that reads perfect data from a queue and writes back to a queue you don't need to worry, but I have to check every value and either reject it or fix it.
The talk does not begin till 7:00
Awesome talk, thank you a lot!
I still dont understand why is there lots of buzz around FP! the core concepts that this gentle man has been using, are really normal abcs of any programming languages. in C++/C#, that I have extensively worked, you can have mutable or immutable data structures.
This really boils down to the programmer him/herself knowledge or desire really!
FP is really about adding additional restrictions to the programmer to protect them from themselves. You can write functional code in C++/C#, but a dedicated FP language prevents you from straying outside of FP.
@@michaelcarter577 Nail on the head! I thought I was going mad when I heard the explanation but it does seem to be a means of stopping you from hurting yourself. Unless you are mad too but that is fine because it means I am not alone :)
At 36:19 - Are those wire-rim glasses? Looks like my phone's voicemail logo reminiscent of a cassette tape. Wait..handcuffs?? Oh - now it's a bicycle?!?
The most interesting part was the bridge. Because I mostly do the rest with my own Python code. But I always wondered how functional languages dealt with the "messy mutable outside world". Great explanation.
They don't. That's why they are so useless.
I just try to segregate mutability as much as possible. The moment the data comes in, it becomes immutable, and the only time it changes is as close to the "o" in "io" as possible.
I think people get a little dogmatic about it though. Being real strict about rules like that is mostly a practice in making use of fewer tools to keep things organized and simple, and as a benefit, keeping yourself sharp and able to think things through with fewer options. It's like yeah, a mechanic can make your car work with his expensive education and thousands of dollars worth of tools, but a redneck can do it with half a shoelace. Finding the balance between the two is key.
@@grawss A nice comment that made me hopeful that some cognitive recover could happen in Our Brave World Of Parrots.
Functional programming (or what it is being sold as because surely this is were programming all started) seems to merely be a wrapper around semantics. Some people understand the concept/design in object based semantics, others understand the concept/design in mathematical semantics. Maybe I missed something.
Well, of course. That is what he explains with his "forget everything" piece.
If I summarized the point of functional programming, it comes down to functions not creating side-effects. But not in the way you might think. It isn't that the functions can't "do anything", it's that they will reliably and always do the SAME thing given the input. So you pass in the number 12, it will return 24 and that's it. It won't return 24 but also sometimes 30. Or return 24 but sometimes as an array and sometimes as a string.
The point is that you write a function that works one way, always returning the same output given the input. The input itself needs to be the proper format too, which is very useful for testing and during development. If your function receives an integer, then it would cause an error to pass in an array, or a string, or an object. And this can be easily tested and even caught in the linting phase.
So again, reliably return the same output for the same input, and no "side effects" beyond that. Nothing random, nothing out of scope, nothing changes based on some extra logic (on Tuesdays, only when it's raining, etc).
Of course I may have my definition completely wrong, but people seem to be confused, "how do we program without creating effects?" That's not the point, the point is your functions return the same output for the same input every time. Immutable data structures is kind of a 2nd issue for me. Understanding how functions should be written is what I focus on first. And really, functions can be written this way even in procedural and OO.
37:35 -> that happens when there's no state. Run function "look at clock to read time left" !
This basically "tell don't ask".
If a functional programing function checks the time, then it's not a pure function. Checking the time implies time is important which implies there's a situation where tuesday produces a different result to wednesday.
Experts explain hot, complex, wiered things in easy and clean manner , thanks for the great teck talk
I'm at 12:30 and I have a prediction. Functional prog is 60% regular programming, 39% data oriented programming and 1% object oriented.
the intro to the functional programing really amazed me
nice profile picture
I really appreciate how he didn't sell it as a cure-all for all modern programming woes. So often I see a new paradigm pop up that sells itself as the perfect way to code every application ever, and it turns out to just have different hurdles than the ones it compares itself to. From that alone, I'll probably give functional programming a try some time.
Modern programming woes have nothing to do with applications. They all stem from server-centric architecture and that comes from the need of the application provider to steal your data. What can be done in a hundred lines of code on the device takes a million lines of code if it has to be executed reliably on a server. That is the only problem that modern software developers are facing.
Seems people are fighting about what is the best programming method. But in the end, it is really depends on your purpose and idea for an application.
I code using both OO and FP, I switch if what is better and more efficient for what needed/wanted. I have even mixed both with efficient enough flows.
Seems the most die hard people who likes a specific thing keeps push what they want to what other people want. (I am not saying the presenter is like this, I just heard someone from a Clojure talk just ranted why OO is better)
Thanks, that was a good refresh.
Came here to learn something about functional programming, out of curiosity. Gave up after 10 minutes, around the time he starts showing documentation from OOP languages and essentially says “Gosh, look at that snippet, doesn’t it sound hard?”. It’s getting frustrating that every video I watch on FP begins by trying to sell me on the idea that I should give up OOP because it’s inferior.
That isn't at all what he is saying. He is saying there are aspects of OOP that can make code convoluted and difficult to maintain and that functional programing has solutions to some of these issues. You don't have to choose between OOP and functional programming but rather choose where you want to use aspects of each. You take skills and practices from both OOP and FP to make your code as optimal as it can be.
Exactly my thoughts too! None of these so called gurus ever shows a real world business case solutions using their genius approach. They only work in simple academic scenarios, but fail dramatically for everyday work.
Got really inspired.. Indeed a gem of a presentation! thanks!
So maybe I'm just misunderstanding, but what's the difference between those bridges (ie: the Atoms) and just regular old programming. While I see the Atom has benefits in multithreaded applications, other than that, what's the difference between altering regular variables in regular programming and altering "atoms"? I'm sure I'm missing something because it just sounds like introducing the same problems it claims to solve?
Like the "outside world" code that actually performs tasks and produces the work that is the entire point of our programs, that can't be written functionally? If not, are we really gaining anything? Or is it just a way to segment your code so you can maybe minimize mutability a bit.
To me the Atoms solution to multi-threaded concurrent access that is presented here is a very bad design choice. First, the function g is executed twice, which means wasted processor execution time. Second and most importantly, if function f is faster than function g and is called frequently you will have a starvation issue where the thread using f runs smoothly constantly updating the Atom while the thread using g is constantly rejected and locked in a loop retrying g. I can't see why they don't use a reliable queuing mechanism that can ensure no processor time is wasted in avoidable retries and no thread is locked in an infinite retry loop. This is multi-threading basics.
The "messy outside world" needs results and it needs them in a finite predictable time. Timelessness is the main issue I have with functional programming: most often, the design completely ignores the timing aspects of the program.
I see functional programming as an overkill and excessively restrictive solution to the problem of correctly identifying, implementing, and documenting the code sections that need thread safety, and/or immutability. Those issues have efficient solutions in procedural programming. Not knowing or using them properly is not an excuse for trying to completely remove mutability from the programming world.
Programming is the art of creating efficient tools to solve real world problems. Programs are not autonomous flawless deities that live on their own in a timeless paradise and sometimes allow mere mortals to get a glimpse at side effects of their perfect beauty.
@@christianbarnay2499 Clojure provides for whatever paradigm works best for you. Atoms are synchronous and Agents are asynchronous. If you want a coordinated state, use agents.
Great talk. My favourite quote: "I don't have to look through 25 pages of code trying to figure out if someone STUCK something in the middle of x" 19:00 :D
Thank you! Clojure superb!! ♥
Never has Bertrand Russell, Lord Russell been referred to as one of „two British guys“.
An array typically has accessing its elemnts in O(1). The array built from a tree would be O(logn). That's kind of a big issue for something that as commonly used as an array.
An array accesses to an element by addition to base index and dereferencing the location to a register (about 2-3 instructions). Let's say you have an array containing 512 GBs of int values (which is a lot actually). When you have a tree you do the above process repeatedly (we can also assume that the tree structure will perform more dereferencing and arithmetic for node access, so let's say 8 instructions). 512GB = 512 * 1024 * 1024 * 1024 bytes -> /4 (size of int) -> log32 -> round up -> 8. thus at most 64 instructions. Considering there are pipelining mechanisms implemented in modern CPUs it actually takes less than that. Now think of how many cycles it is necessary to load such array into the memory, then to write it to a disk. In most optimization requiring tasks that uses arrays (games, image processing, deep learning, ) any required fetch or update is assigned to the GPU so this case that we are currently inspecting is a bit far case. Now consider this, do you prioritize more understandable, error-prone, maintainable code or optimizations that are premature %95 of the time anyway?
indexing into an array is a O(1) op only in theory. In actual execution, indexing into an array is not even the same over time, even for the same location.
Full circle. When I started programming, all we had was functional programming. I do 100% of my stuff with it. It's kind of like keeping your orange polyester turtleneck shirts in the closet....one day, you'll be at the edge of fashion again.
You mean procedural programming?
@@ccgarciab I think, maybe, the definition of "functional programming" is a little less intuitive to some of us that started programming in or before the mid 80s.
This is such a great talk. Thank you.
36:00
'It solves the threads-issue'
Well, only if the state you wish to modify, fits in a single atom.......
In which case, most other tools can be made to work just as well.
Still:
The fact I could put on this talk, while doing something else and pick out this issue, this clearly: The praise this presentation gets, is without a doubt deserved.
The 'you could write stateless, in languages not built around it': still true.
The 'we don't deadlock, as we don't do collision-avoidance. We do collision-detection and rerun (the latter is only possible because we know we're completely stateless)': it is a choice, if you build around it probably more often than not a reasonable one. (keep your heavy functions away from your collision-prone ones)
The 'and if you make this choice, we'll do the handling for you': nice, nothing to add really (others 'can be made' to work just as well, but this work still needs to be done)
Simply making a copy of the variable transforms the problem from "what is the current value? " to "which copy should I be using and am I aware of it ?"
No. To use his example, say you want to change the letter D to the letter E in an array of alphabet letters for whatever purpose. The system will then produce a copy of the whole array with that change included as a result which you then pass to whatever function needs the modified data. Then the copies are discarded, the array is reset to its original state and you will have to request the change to be made again if you want to change D to E.
I think the idea is to get rid of situations where programmers use an array of values, then change the values in their code and then later on run some code on the array expecting a result based on the original value of the array but getting the result based on the modified array. Because you can't change the array via code you won't ever have to comb through lines of code to find out where the values got changed.
Is it worth switching to functional programming for? I would say mostly in cases where you just cannot afford to ever have ambiguity or errors in the data you manipulate. Like A.I systems for example. If you're just making a computer game for kids then you probably don't need to bother.
@@HurricaneSA it's like riding a bike, with enforced use of safety wheels, then? In many cases I don't care about my initial array, and over writing it as I go is just more efficient. If for some reason I need to preserve it, I can do that too and create a copy instead. I really don't get the upside at all. I've watched multiple talks. I don't understand. All I take away from FP is that it deliberately removes half your tool box and puts on a safety helmet for you. But now I get to hit nails with a screwdriver, and god forbid I ever need glue, because there's none there. Is there anything beyond the enforced self-protection I'd like to use from FP? I literally don't understand why I'd ever use it. Not trying to be condescending at all here. Maybe I'm just too dumb. But I really don't see the upside. I have access to immutables outside of FP, without jumping through hoops to use a variable.
@@theral056 Yeah, I agree that it seems rather redundant. However, I suspect the benefit in this case would be more applicable to large projects where you have multiple programmers accessing the same data via different modules simultaneously. I confess though that I'm just a C# programmer so I tend to agree that it seems rather convoluted.
Pass your function parameters as const. Congratulations, you have now invented functional programming.
My favourite thing about functional programming are higher level functions. Classic example is sorting, you don't have to program sort for each data structure, you just need a function to compare the data and pass it to the sort function. It is not only sort, but every other operation with data.
LOL! You can do the same thing in both procedural and object oriented programming. It's just an abstract data type.
That's what you can do even in Fortran :)
That exists in object oriented languages as well. Java has Collections.sort() for example which works on any data type with a .compareTo() method.
Great introduction to FP. Thank you very much.
Can't help but being a bit cynical about the whole enterprise though. FP is designed to avoid specific programmers pitfalls (that plagued OO for instance) but generates its own.
I'm sure I can find a youtube video about a new programming paradigm designed to correct the mistakes/limits of FP.
I think it is an illusion to believe we can solve the challenge of communicating solutions to problems (of increasing complexity) to each other by creating a new way to write them up in code.
That is exactly right.
Ok I think I got the idea but one question remains: why would you do that? Is there only the advantage of having code that's easier to understand? Or are there cases in which functional programming would offer a better solution than traditional programming?
No, there are no such cases.
Superlatives deployed. Gratitude expressed. Curious aspiration ignited.
Nice video. Confirmation as to why I never have really leaned toward FP languages.
fp takeaway: try to minimalize making side effects
Great lecture
21:00 I totally don't like the idea of immutability. I have major slow down with string concatenations
My only problem (not with functional) but with how Java handles functional, is the syntax needed for VarArgs, they should have used something different than commas, by doing it, one could place VarArgs at the beginning of a method parameter, making it not only extremely easy to code, bu extremely easy to read and understand the data flow.
Great explanation!
"Functional programming == Object immutable programming. End of the talk. Thank very much, see you tomorrow".
Fantastic talk! Well done Russ. :-)
The Win32 Api is C. However, it Does Use an Object Oriented Technique for Reducing code Size and Reusing code.
It Does this by Using a Struct and Passing the Struct around many Functions.
The Same function is Used for Common things in the Struct and additional functions are Used for Differences.
You Can DO OO Object Orientation in C. With One Caveat. When You Do it in C, you Don't Have Encapsulation, Inheritance and Polymorphism. Somebody elses's Functions can Mess Up Your Shared Struct.
When Done in C++ with Objects, Inheritance and Polymorphism. Somebody else's Code, Doesn't Mess Up Yours.
However, this Comes at a Price. in C++, the Program has to Jump about to get things Done. This is Unlike C.
In Conclusion: C++ is for Large Parts of Code that is Run infrequently.
Now I feel hyped about it.
After watching 40 minutes video I haven't learned how to write "Hello World" in FP. How do I start? Can I refactor my code to FP? Should I? Is FP something I can introduce step by step in my work? Can I write FP code within OO framework?
Some content may be hard to get at first (imo List Comprehensions were pretty hard), so just skip it, study a bit more of the following content and come back later
That is a beautiful presentation .. and literally too .. free of clutter .. just like Functional Programming ;)
So I am clearly misunderstanding something here but how does this kind of node-based data-structure work in the new world where cache coherency is more important that almost anything else in performant systems?
How on earth can an implementation of an array based on nodes and links be even vaguely compared to a contiguous array of data in terms of its on-cache performance?
Or is this more abstract and performance considered less of an issue?
What if the regular functions f(a,b,c) were pure, but object methods O.m(a,b,c) didn't have to adhere to these rules? This obviously wouldn't be a purely functional language, but it would be a primarily functional language with OOP support, the exact opposite of most languages today - primarily OOP languages with functional support.
29:20
isn't that called a MONAD
A monad is a monoid in the category of endofunctors what's the problem
- a random developer I don't remember his name ;)
Agree or disagree, it doesn't matter because I am not here to argue. My observation is, programmers trying to make "more clever" programming methods in order to solve problems that were created by programmers. That is to say, the human element of being lazy and the attitude of "it works." There is NOTHING that anyone can in OO that cannot be done in Basic, that cannot be done in FP. Keep in mind I am speaking of functional basics here. I have personally gone through many programming languages and I am hard pressed to find actual differences. Isn't immutability a constant? A class a function? This list could go on for a while I suppose.... Feel free to quibble over semantics if you must.
Wrap a new shiny bow on your new language give it some similar but different name and POOF, we now have the next best thing. I've had the opportunity in my life to write BIOS code (that was fun: ASM and C -General Software, Phoenix, and AMI, and learning a TON about hardware), web pages from HTML, PHP, CSS, VS, JS, the usual suspects anyway, a few basic Windows and Linux drivers, and many custom applications using both Java and VS mostly for data management or job tracking of some sort. I've never once landed on a single programming language and said, "Yes, this is the one that all others need to be modeled after."
I have noticed that all of them provide you with the tools to tie your legs and arms into knots and create some of the ugliest code possible. Comments are your friends. Extra notes at the start of a code block will save you so much time. I would say to anyone to not get hung up on any language's ability to do X better because Y class/function that's built in. I cannot count the number of times that I've written a simple function in an OO language because it far superior to relying on inheritance and the trappings of mutability. Sure one method of solving a problem might be quicker to write but quicker to maintain is a different story. Bah what do I know anyway, I'm just some random putz on the internet....
comments in code are at best excuses and at worst straight up lies. good code speaks for itself.
@Will Cockram I'm not saying don't comment. I'm saying that if you have to explain why the code is written in a certain way then it's not good code, and by extension this makes the comment "helpful" (even mandatory). It excuses the poor code. In a worst case scenario, the comment will actually be wrong and will mislead the maintainer since it has no bearing on the execution. So if you skip constructs and features under the guise that you can use your comments to simply excuse your way of doing things, think again.
It seems that people with broad experience such as yourself, share the same opinion. That it's more important to just become good at using the tools available rather than harp on about which tool is best. A rather unpopular opinion, it seems, judging from this comment secion. But then again, that's why good programmers are very valuable.
Pretty much my thoughts. Languages are merely an abstraction from the run time. Use the language you feel best describes the work you are doing.
Alan Turing proved this. Any algorithm can be computed using a very basic set of operations (read, write, forward 1, backward 1) and a set of rules. Knowing this reduces all languages to how easily they implement libraries of functions. I could (well, maybe once upon a time, 8^) ) write the character C to the screen using an assembly language to move byte values around different registers. Or I could just type print('C'). For me, as a developer, knowing a language is about knowing the syntax and the libraries it supports. I should say, I'm not a heads down developer. I'm a data engineer and hobbyist. For me it's all about solving a problem in the easiest and fasted way I can. I mostly use SQL and Python. And I'm a formatting Nazi. I use long, descriptive names and LOTS of whitespace. This is where Python gives me a little grief. I like to capitalize keywords so the functions stand out.
Proving 1+1=2 on page 379 is a bit like drawing a triangle in Vulkan (or so I've heard)
I still don't understand it; if all the language consists of is functions, then what does a function do? does it send a function to a different function, where the function it acts on is again a function?
What is the use of such a program? How can I do input/output handling when I only have functions?
Well your input is passed as (surprise, surprise!) input to a function that transforms it into (surprise, surprise!) an output. This function might obviously call other functions underneath, but it's that simple.
Data Processing - calculations are important BUT most software is data processing software. With data processing, it is data structures that are hard work. Watch Grace Hopper, she is fun and explains the issue.
To me the perfect definition of side effect so far. Thanks for the talk.