Are YOU a Great Programmer? or a Regular Programmer with GREAT Habits. Download my FREE guide on how to adopt 30 of my favourite Programming Habits. Get yours HERE ➡ www.subscribepage.com/great-programmer
Automatic parallelization and performance really aren't the main reasons to choose a functional programming language. Algebraic data types for domain-driven design, immutability by default, and concepts like referential transparency produce code whose program state is much easier to reason about merely by reading the code, rather than by assessing all the mutable fields and properties across multiple objects that talk to each other, usually by having to step through the operations in a debugger.
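To make that concrete, here is a minimal sketch of the kind of algebraic-data-type domain modelling the comment describes, in Haskell; all type and function names are illustrative, not from any particular codebase:

```haskell
-- Each alternative carries exactly the data that exists in that case.
data Currency = EUR | USD deriving (Show, Eq)

data Payment
  = CardPayment  String Int Currency  -- masked card number, amount, currency
  | BankTransfer String Int Currency  -- IBAN, amount, currency
  deriving (Show)

-- A pure function over the type: every state it can handle is visible
-- right here, so reading the code is reading the program state.
describe :: Payment -> String
describe (CardPayment n a c)  = "card " ++ n ++ ", " ++ show a ++ " " ++ show c
describe (BankTransfer i a c) = "transfer to " ++ i ++ ", " ++ show a ++ " " ++ show c
```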
@@J-Kimble But no one argues that oversized object orientation is good. And many argue that a purely functional approach is better. No, it isn't. The code is often bloated and hard to debug. Imperative code with functional elements is much more readable. Not pure functional.
@@piotrc966 A gazillion function calls is just a straw man. You should be doing function composition at progressively increasing levels of semantic meaning, so the higher or lower levels can be disregarded depending on the problem to be fixed. The same principle applies to imperative code: it would be poor design for the same imperative function to be applying business rules and doing bit twiddling.
Not just has had. There are two directions of language development: towards Lambda Calculus, Category Theory and other math, or towards ad hoc, stillborn, internally controversial ideas (COBOL, OOP, etc.). And this concerns not just languages, but application architecture as well.
Yeah. I learned functional through Java libraries, not a functional language. Did a bit more in .Net. I’ve got 1 year in production Clojure but that’s it for me. I still think in functional terms daily in Java, Kotlin, Typescript. In Clojure we still used protocols quite a bit and the impure bits of the codebase looked fairly OO.
Perhaps thinking in something closer to a natural language (conditionals, repetition and commands) is more accessible to most than mathematical thinking...and not many people have been exposed to algebraic structures, functors and monads. @@InconspicuousChap
Not to forget that SML (a predecessor of Haskell) is the reason we have type inference, and it is still the only language proven to run correctly if it compiles 😅
It's like Cartesian vs Polar coordinates. Some problems are better solved or expressed in one vs the other. Religious fanaticism is baked into some people's DNA.
Yeah, this is a sensible way to look at it. Sometimes state is a thing that makes sense to use a class for... sometimes it's better to pass state explicitly in a function. It really depends on the function you are writing, and what the state represents.
The underlying components in a CPU have state and side effects ... So... It's like you could do stuff in polar coordinates, but you just have a compiler that always translates this into Cartesian for you...
No, this comparison is wrong. Both Cartesian and Polar coordinates are part of Classical Mathematics, and neither is Constructive. It's like Set Theory versus Type Theory. This is a proper comparison.
Hi Dave, thank you for the interesting video. It is difficult to come up with hard data on performance, but maybe my experience is still worth mentioning:

In 2012, we wrote a data processing program in Java, using self-implemented purely functional streams (not the Java stream lib!) and data structures. There were only two single-threaded streams, one in the middle for sorting and then the final stream for collecting the data. All other stream transformations in between could be parallelized horizontally and vertically. While our program ran 3x faster on a quad-core compared to a single core, I agree that if the tight loops had been written in a heavily tuned imperative/destructive style, the program might have been about 20x faster, limited only by I/O. The functional style allowed us, however, to be lazy with optimization and focus more on testability and expressiveness. So my other observations might be of interest:

1. Conversions from iterators to immutable, lazy streams and vice versa enabled us to travel between both worlds, with the former being a bit tricky to implement.
2. Immutability made it easy to transparently compress the data outside the working window, which actually increased(!) the overall speed.
3. Writing tests worked like a charm, and even a small testing DSL could be written in only 2 days.
4. Lazy streams and structural sharing led to memory leaks at tail calls that were difficult to track down, because Eclipse and the JDK produced different bytecode.
5. In an imperative language you can implement purely functional data structures that are impossible to write in most (all?) purely functional languages, namely those involving identity checking via deduplication for fast diffing.
6. One must be very careful that non-functional side effects don't leak; for instance, identity hashes destroy referential transparency and therefore the repeatability of tests.

In the end, it was a fun experience, and the new program was still 20x faster than the old, DB-centric C++ solution. LOC was reduced from approx. 300,000 to 45,000. The old program did not have any tests, while the 45,000 lines of the new program even included ca. 8,000 lines for the new data structures and 10,000 lines for tests, plus 4,000 lines for spherical geometry. The old system only processed text data; processing geometric data was one reason for the rewrite, the other being the painfully slow development cycle. As far as I know, the program still runs flawlessly, processing several 100,000 fairly complex data records per day within a few minutes. Of course, the numbers are from memory and not exact, but I'm pretty sure that they faithfully describe the overall picture.

EDIT: I corrected the numbers according to my project notes; they are still approximate, but now less than 20% off. The 20x speedup for a highly tuned imperative version is guesswork, while the speedup of the new program compared to the DB-centric C++ version has been measured and includes the final writing to the DB. Since lazy, purely functional streams allow transparent reading of infinite(!) data sequences, the program also has an online mode (without DB access) which was even faster and allowed gathering several data records over the course of several years to be processed in one single end-to-end unit test. We also implemented automated generation of such test cases from our database; new test cases were then automatically appended to our test files in the form of our own special DSL (think of something like Cucumber).
I mention this because this development was completely unforeseen in the beginning, and it was only possible because of the transparency of infinite input, and because going purely functional made us factor out all side effects (like system time and DB access) right from the start. The whole thing was a 9-month project with 3 developers and one domain expert. Also worth mentioning regarding "performance": due to faster feedback (we used the full test pyramid), the time for adding new features decreased dramatically, for instance from one week to several hours.
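The "factor out side effects (like system time, DB access) right from the start" point is the part that generalises best. A minimal Haskell sketch of the idea, with hypothetical function names (not from the project above), using the time package:

```haskell
import Data.Time.Clock (UTCTime, getCurrentTime)

-- Impure version: hides a dependency on the system clock, so the same
-- test can pass today and fail tomorrow.
isExpiredIO :: UTCTime -> IO Bool
isExpiredIO deadline = do
  now <- getCurrentTime
  pure (now > deadline)

-- Factored-out version: the clock reading is pushed to the caller, so
-- the function is repeatable and trivially testable with fixed inputs.
isExpired :: UTCTime -> UTCTime -> Bool
isExpired now deadline = now > deadline
```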
Keep a large and varied toolbox. "Functional programming is the ultimate tool" is dumb. "OOP is the ultimate tool" is dumb. Etc. Learn everything, use the tool that best solves the problem.
Yes. Doing a 50% / 50% mix is probably also dumb. Seems like a great way to increase your bus factor. We (programmers) like to think of our code output as deterministic. That we solved the problem the "one right way". Read any dogma about any pattern / language / framework. Decisions are hard and sadly... documented numbers generally don't inform most software decisions.
@@amigalemming It does make sense, because OOP does not usually care about algebraic typing, so you can't get those benefits there. OCaml is functional but has typed objects; they are more limited than e.g. Ruby objects, but they're still algebraic. In practice there is seldom a reason to use objects in OCaml, and mostly they're not used there.
I agree with the point about parallelization, but FP's main selling point was never parallelization. That argument was popularized by Silicon Valley engineers who were looking for ways to "scale" things and saw an opportunity in FP and its higher-order functions to easily parallelize work. The main selling point of FP was controlling side effects and being able to reason about them systemically. Unfortunately many only focus on the part where there are no side effects (immutability) and not on the effect controlling. This seems to be changing these days, as React has something like an effect system inspired by recent work on effect systems. There are also new languages where an effect system is built in and incorporated into the types, so that side effects are really part of the interfaces and can be checked mechanically.
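For a concrete taste of effects being "part of the interfaces": in Haskell, the closest widely used example, side effects show up in the type, so the compiler checks them mechanically. A tiny illustrative sketch:

```haskell
import Data.Char (toUpper)

-- The effect is part of the interface: IO in the type tells the
-- compiler (and the reader) that greet touches the outside world.
greet :: String -> IO ()
greet name = putStrLn ("Hello, " ++ name)

-- No IO in the type, so no I/O can happen inside; the compiler
-- rejects a putStrLn here as a type error.
shout :: String -> String
shout = map toUpper
```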
Erlang and Elixir are so pragmatic, based on scalable ideas, that I think it's even good that they remain in an alleged "niche", so that they remain a "secret weapon" for effective engineers. My favorite talk on how to make your program actually scale with the number of cores is this: ruclips.net/video/bo5WL5IQAd0/видео.htmlsi=Osm2yF8gJppcGAoh

Another way of seeing all the languages versus the ones running on the BEAM (VM) is that enabling the modeling of lightweight, state-isolated but communicating, safely interruptible, concurrent and potentially distributed units of computation is part of the execution environment, supported by the syntax and the included batteries (OTP). No magic parallelization involved. It's deliberate and engineering-friendly. There's likely simply no other modern ecosystem with such a "Solid Ground" at the moment: ruclips.net/video/5SbWapbXhKo/видео.htmlsi=5a9suZ6eBNT1muu5 ruclips.net/video/JvBT4XBdoUE/видео.htmlsi=-Kc7BKSbRdkYUWRM&t=2243

Many languages differ in syntax, but they have execution environments / runtimes / VMs that differ in semantics and other properties. This is the actual gold mine. Syntax gets boring, but starting concurrent and well-progressing processes on a single-threaded device in the same way as one would on a distributed supercomputer - that is exciting. www.atomvm.net/
Functional programming doesn’t just potentially make your code run faster, it also forces you to structure your code in terms of inputs and outputs, making it easier to test and reuse. This can of course be done in OOP languages as well, but then it’s much easier to make a mess out of globals and side effects.
@ If you write pure functions it is much easier and safer to run them over a data collection using many threads or even different machines. Think "map-reduce", where each worker gets a small piece of the total work and the results are later collected in a reduce step. This CAN of course be done in any language, but then you have to worry about thread safety in a way that you don't need to if all your data is immutable.
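A minimal sketch of that map-reduce shape over immutable data, in Haskell using the parallel package; the per-item work function here is a stand-in:

```haskell
import Control.Parallel.Strategies (parMap, rdeepseq)

-- Stand-in for some expensive, pure per-item work.
score :: Int -> Int
score n = n * n

-- Map step in parallel, reduce step with an ordinary fold. No locks
-- needed, because score is pure and the input list is immutable.
-- (Build with: ghc -threaded, run with: +RTS -N)
totalScore :: [Int] -> Int
totalScore xs = sum (parMap rdeepseq score xs)
```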
@@krumbergify So the main benefit of functional programming is immutability, which can really be achieved in any language; it is just that the pure functional languages enforce it at the compiler level.
@ Yes. Guarantees matter. You can certainly write thread safe code in C++, but Rust makes it much easier since the compiler enforces that there are no data races in your Rust code (as long as you don’t write unsafe blocks).
Smalltalk is really fun to use. It's a living system full of objects that you can modify (mutate) and play with and see what breaks. Educational systems, such as Etoys and Scratch, were built on top of it. If you're daring enough, you can even modify its compiler while using it. And if you make a mistake, you have save points (images) to save the day. Functional languages, on the other hand, aim at writing repeatable and testable algorithms by limiting side effects. Their focus on immutable data helps avoid mistakes that stem from complex graphs of mutable objects and unconstrained concurrency. They both have their uses: one is explorative and playful, the other rigorous and predictable.
*Way* too much emphasis on parallelization. It's like dunking on the OO 'reuse' argument that just doesn't work beyond the most basic stuff. That doesn't mean OO is useless, just that that aspect was oversold.
OO reuse never worked in practice until NuGet happened. NuGet delivered pretty soundly on that promise. How many projects have you worked on lately without using a NuGet package or two? Doesn't that seem like reuse?
Not to mention the argument is actually wrong. Algebraic effects can now be type-checked, and they allow us to write procedural code that then gets type-checked for all side effects, so we can have our cake (writing procedural code) and eat it too (type-check all side effects as if we wrote functional code). Therefore all procedural code can now be parallelized.
@@adambickford8720 "Answer Refinement Modification: Refinement Type System for Algebraic Effects and Handlers", Fuga Kawamata, Hiroshi Unno, Taro Sekiyama, Tachio Terauchi.
Modern languages have just taken the best parts of several paradigms, including FP. Every class with behaviour is really a singleton that's just a module, and every pure method is really a function; the syntax is just closer to natural language, so it's easier for most people to read, and less strict about mutable state in local variables, as long as that mutable state is confined to the function.
That's not how this works; it's either FP or it's not. By this logic, Java 1 was FP because it had a garbage collector, and every language with a REPL is FP.
You talk all about "performance", but in functional languages like F# or Haskell there is a saying: if it compiles, it works. And that is very often true.
No, it can't be true: I can always write syntactically correct code that does the wrong thing. If I can't, then the language is broken. It may certainly mitigate some kinds of mistakes, even common mistakes, but the compiler can't decide that the code is doing the right things.
@ContinuousDelivery Your comment here tells me you don't understand the statement. I'd strongly suggest you work in a language like F# for a while, and I think you'd understand it. I've certainly found it to be a truism.
@ContinuousDelivery F# quickly became my favorite language because THAT IS ACTUALLY TRUE. I don't know other functional languages (although F# is a hybrid, functional-first one), so I can't say anything about them, but in F# there's a saying: "make invalid states unrepresentable", and that's quite easy using algebraic types. So, yeah... if it compiles, it works.
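The slogan is easiest to see with a small example. The thread is about F#, but the idea is identical in any language with algebraic data types; a sketch in Haskell, with illustrative names:

```haskell
-- Instead of one record with nullable fields plus a status flag, each
-- state carries exactly the data that exists in that state.
data Connection
  = Disconnected
  | Connecting String      -- host being dialled
  | Connected  String Int  -- host and session id

-- "Connected but no session id" simply cannot be constructed, so the
-- compiler rules that bug out before the program runs.
describeConn :: Connection -> String
describeConn Disconnected    = "offline"
describeConn (Connecting h)  = "dialling " ++ h
describeConn (Connected h s) = h ++ " (session " ++ show s ++ ")"
```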
FP is not a class of programming languages. As the name suggests, it is a programming paradigm. So your analysis of FP's popularity based on the popularity of individual languages is fundamentally flawed. Yes, purely FP languages like Haskell and Erlang are not much higher on the list than before; however, almost all the major languages on that list have moved towards FP. From Java 8 onwards, the vast majority of new Java language features were adding support for FP. JavaScript has also been moving completely towards FP, especially in its libraries. Ever heard of a little library called React? The reactive design at its core comes completely from FP. FP is indeed eating the world, but it is mostly doing so by changing the existing programming-language giants, not by breaking through with solely FP-based languages.
I'll be hitting that button. You'll not get continuous feedback from me, I'll just let you know that it's not necessarily because I disagree with you. Just that this hasn't been worth my 20 minutes, and I'm not a fan of clickbait titles in general.
Well said. I was confused by the first three-quarters of the video where it addressed FP as a language rather than the paradigm. It's kind of a metaphor for what's happening at the implementation level a lot of time, so the irony gave me a chuckle :D
By my measure, functional programming had a resurgence about a decade ago. Mainstream languages subsumed some of those features, and static typing took charge then with things like Typescript, Rust, and typing for Python. I think automatic parallelization isn't a big deal in major programs. But one place I really like it is with scripting. I experimented with writing my utility scripts in a variety of languages, and one thing I really liked about Elixir was how trivial it was to turn most of my serial scripts into parallel ones. All it usually involved was changing a couple lines of code. Experimentation and measuring was trivial. Lots of functional languages can also do OOP. Like Clojure with their multimethods, which mimics a less powerful version of Common Lisp's OOP. If you've never tried out OOP in Common Lisp, I recommend it. It made it a real joy for me.
In my (admittedly not very wide) experience, the only place I saw anything like "automatic parallelization" actually work is with OpenMP in Fortran-based weather models, i.e. feeding the outermost loop in deeply nested do-loops to different threads with a compiler directive. I still have yet to see major software get (re)written in the pure functional languages that were supposed to make this Just Work™ (and without annotations, mind!) like they were telling us in college a quarter century ago. It's still too weird/hard/whatever a way to program most of the time it seems. I always did like the design philosophy of Erlang and OTP though, and it's a shame I never spent much time with them or Elixir.
I don't think performance is the main advantage, but correctness. Performance comes as a side-effect, when you are free to write highly concurrent code that doesn't trip itself.
@amigalemming To some extent, it is. I have never heard of an FP enthusiast who chose FP for performance first; it's correctness that drives us. But the inability to do concurrency correctly can have huge performance impacts. Python's GIL is a great example, and they haven't finished removing it… That is, we are used to building upon correctness to achieve performance.
@@PierreThierryKPH Concurrency (threads, Software Transactional Memory) is not about performance but about user experience. E.g. the GUI shall not be blocked while the application performs some computation. The approach for performance is parallelism.
I used Scala about a decade ago and got it into production systems. Not pure FP, but it does nudge you to think about state, idempotency, etc. Now, even when I code in Python and JavaScript, I definitely like that there are FP-influenced constructs, and I think about the lessons I learned back then.
@@a_rugau From what I have seen, Rust is functional-inspired but isn't functional in the traditional sense. Thanks to its borrow checker you can write low-level code and know it will work with certain guarantees, like functional code; however, the mechanism it uses to give these safety guarantees isn't immutability but the borrow checker. Its syntax is inspired by the ML functional family, especially OCaml.
Well, certainly the default stance of variables to being immutable goes some way to justifying that claim, but I think I'd think of it as a hybrid, rather than a pure functional language, like most modern languages.
Thanks, good commentary! Absolutely agree: apply the approaches and paradigms that best express the problem and the solution in the most understandable, readable - and, yes, elegant! - way. Then think about performance & optimize, including parallelization. Weaned on structured programming, thence to OO and finally FP, I've found they all bring concepts that can help enable beautiful code that solves the problems. I do find myself using FP more and more for more concise code, but wherever there is persistence and mutable state, those are objects. (These days the team I'm on slings Python, but I have a crush on Julia...)
Always a thumbs up for any emphasis on Amdahl's law, the most important law in high-performance compute. Most often, for large problems that require massive parallelism, one is more concerned about memory cache, memory paging and data distribution. For some problems it is cheaper to repeat calculations than to distribute them; in others, the very largest memory space possible is the way to go, for the exact opposite reason. OO or functional is pretty far down the list, and likely immaterial, especially once the compiler has done its thing. If the kernel matters that much then assembly may be required, but that still has to do better than the compiler on its own.
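For reference, the law itself is tiny. A sketch as a Haskell function, with one worked number:

```haskell
-- Amdahl's law: speedup on n cores when only a fraction p of the
-- work can be parallelized.
amdahl :: Double -> Double -> Double
amdahl p n = 1 / ((1 - p) + p / n)

-- Even with 95% of the work parallelizable, 128 cores give only about
--   amdahl 0.95 128 ~ 17.4x
-- and no core count can ever beat 1 / (1 - 0.95) = 20x.
```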
As long as the BEAM is still used, functional will never die. The people who actually write code that needs to scale (and vercel doesn't count, sorry) use it for a reason.
Thank you. I happened upon this video and watched to the end, rolling back a few times to listen again. Very well written and thought through. I learned a thing or two in a surprisingly short amount of time. Great work. I am going to check out your other videos. Intelligent. Thank you. [PS: Subscribed, of course]
I worked for a company that used Ruby and Elixir, and spent some time learning both (previously I used JS and Java). I loved both languages (OOP and FP respectively), but could not find another Elixir job after that contract ended. I stick with Ruby for personal projects. IMHO, the culprit is the dynamics of the job market: a competent Java programmer can learn Elixir in a reasonable amount of time, but recruiters will pass on those candidates, then complain that it is impossible to staff, and suggest moving to another technology. It's a vicious circle.
FP languages didn't catch on much because the paradigm is very hard and very different. True FP means you're doing math and recursion all the time. While software engineers/developers are good at math in general, they aren't that good. I mean, with a lot of training you get there, but who wants to do that when you can simply instruct the machine what to do? Companies also don't want to spend money to retrain their entire staff. I did an FP module (Scala) in college; easy isn't a word I would use to describe it.
Nailed it. True, there's a lot more to be said both for and against, but I'm glad we're actually talking about this. FP is great, but it's not a silver bullet and there are valuable things you give up when you use it.
I've been loving FP in R. R is higher on the list than any of the other languages you highlighted, and while it's not pure FP, much of its design lends itself to FP (like functions passing lazily by value by default). My last project I've been slowly converting so that it's almost entirely done in maps, reduces, and filters, with pipes everywhere and nary a for loop to be seen.
In my limited experience, writing parallel programs off the main thread meant that I could write really unoptimised code in the parallel processes and it didn't have any impact on the program as a whole. I'm sure you wouldn't be able to measure any performance boost on a benchmark, but the perceived performance was great. And unoptimised code takes a lot less time to write. I want to make clear that I agree with everything you said in the video. Your mileage may vary.
A great presentation - kind of Emperor's new clothes. Parallel programming is the game where one thread or process does the work while the others are slowing that thread/process down. What AI may change is the following: AI may indeed optimally program a well-defined task and even re-program existing code.
I've been doing FP for over 25 years and was always very skeptical of the possibility of automatic parallelization because, as you pointed out, you may end up with a parallel but slower program. However, there are much easier benefits for concurrency: immutable data structures allow safely unrolling the effects of concurrent threads. Together with explicit side effects, that is why you can have easy and efficient Software Transactional Memory in Haskell but not in mainstream languages.
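A minimal sketch of Haskell's STM, as shipped in the stm package; the account transfer is the standard illustration, and the names are illustrative:

```haskell
import Control.Concurrent.STM

-- Move n between two accounts atomically: the transaction commits as a
-- whole or retries, with no locks and no torn updates. This works
-- because the STM type only admits effects that can be rolled back.
transfer :: TVar Int -> TVar Int -> Int -> STM ()
transfer from to n = do
  balance <- readTVar from
  check (balance >= n)        -- block and retry until the funds are there
  writeTVar from (balance - n)
  modifyTVar' to (+ n)

main :: IO ()
main = do
  a <- newTVarIO 100
  b <- newTVarIO 0
  atomically (transfer a b 40)
  print =<< readTVarIO b      -- 40
```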
A programming paradigm is like a lens. Each lens brings certain elements into focus and others out of focus. What is best depends on the problem at hand. (I have heard this described previously as: "The Problem of Composition.") There are inherent tradeoffs with each choice. Our best option then is to have several options available and the freedom to switch seamlessly between them. We can then make mixed-paradigm code that is constantly moving toward an optimal tradeoff for each unit of code.
I was a devout Haskell adherent for my first few years of college. It has been useful to see another perspective on many things, but the only concept I've really kept is referential transparency. I try to write procedures that mutate nothing. But on the inside, they are very imperative and go step-by-step, I keep them intentionally simplistic and avoid anything fancy. I use map occasionally, but even the next simplest thing, reduce/fold, is already useless - it never, ever, ever fails to be way more confusing than an equivalent for loop. First-class functions can be very handy, but only now and then. I also sometimes wish that whatever language I'm using made it as easy to declare new types as Haskell, but not often. All in all, FP has a few good ideas, but the core of my coding style remains imperative. I kinda get the impression that this is how the mainstream has experienced FP too: influential, but not likely to take over any time soon.
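For what it's worth, here is the exact trade-off that comment describes, side by side in Haskell; whether the fold reads better than the spelled-out loop is precisely the judgement call being made (sumSquares is a toy example):

```haskell
import Data.List (foldl')

-- The fold version: compact once the idiom is familiar.
sumSquares :: [Int] -> Int
sumSquares = foldl' (\acc x -> acc + x * x) 0

-- The explicit "loop" written as recursion: the same steps spelled
-- out, which some readers find easier to follow.
sumSquaresLoop :: [Int] -> Int
sumSquaresLoop = go 0
  where
    go acc []       = acc
    go acc (x : xs) = go (acc + x * x) xs
```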
2:47 Fortran is a modern programming language that is now seeing a renaissance, with an active community and a new set of tools being developed. :) It has actually been in the TIOBE top 10 for the past couple of months!
Alan Kay once said that with Java, they were trying to drag programmers halfway to Lisp. They failed, but twenty years later Rust came along and succeeded. Rust captured some of the major value propositions of functional programming - memory safety and type safety specifically, and compiler-based guardrails and higher assurance more generally. I think that's why the hype has died down, Rust just addressed enough security/correctness/reliability needs that functional languages were no longer necessary. Maybe Typescript too in the Javascript world.
Guy Steele (of Scheme and Common Lisp fame) tried (and he tried _really hard_) to make a language that would actually live up to the promise of effortless parallelism. It was called "Fortress", and in many ways it was heavily influenced by functional ideas, but built from the ground up with this vision of "abstracting away" the computational substrate as a core design idea. Long story short, it didn't work. They came across some very difficult problems along the way and the project, while promising, was abandoned. This idea that FP will just automatically and effortlessly let you threadpool everything is a pipe dream.
Functional programming, like OOP, became a commodity in nearly all modern languages. I often apply FP and functional concepts in C++ or Go, especially in multi-threaded applications. If it cannot be done purely functional, I step back to single threaded solutions, because it's often an indicator that the algorithm is not suitable for parallel processing (think twice before using mutexes or semaphores). But the most simple solution is usually just an imperative procedure. OOP shines when used like Alan Kay suggested - by radical decoupling and only using messages for communication. Use the paradigms where you gain the biggest benefit from them, always take the middle path and never trust in silver bullets!
I use functional styles quite a lot. The basic concept (no side effects) is worth understanding. As to looking at charts of programming language use, there is just so much complexity around why a particular language is used that it is really hard to draw conclusions from the data.
1. Rust would also help with the adoption of FP principles :) 2. Ada is NOT dying! New versions are still being pushed forward, and modern tooling like Alire and ada-language-server is being built.
I think one argument for the FP style, if we talk about a language like Rust, is that it gives you a way to build higher-level abstractions, and we can also get rid of null by using the Option monad. And if the compiler manages to compile the code down to an imperative style, then it should be without cost.
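A minimal sketch of the Option/Maybe idea in Haskell (the comment is about Rust's Option, but the mechanism is the same; the names here are illustrative):

```haskell
import Text.Read (readMaybe)

-- Absence is a value the type system tracks: no nulls, and every
-- caller is forced to handle the Nothing case.
parseAge :: String -> Maybe Int
parseAge = readMaybe

-- Monadic chaining: if any step is Nothing, the whole computation is
-- Nothing, with no hand-written null checks.
nextBirthday :: String -> Maybe Int
nextBirthday input = do
  age <- parseAge input
  pure (age + 1)
```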
@@sarabwt The way Kotlin handles null is a functional programming element. All major languages have integrated elements from functional languages; that is the real success of functional programming.
@ The way you call a function on a type that may be null, and it returns null if the underlying object was null instead of raising an exception, is a functional programming pattern.
Hey Dave, functional programming is a high-value but extremely niche discipline that is in use within Amazon, Google and Meta. I'm not talking about functional aspects of common languages (like lambdas in Python, etc.), but Common Lisp, OCaml and Haskell are used within automated reasoning departments.
Functional programming becomes more necessary as so-called AI is used to assist programmers. The rules of functional programming allow a coding model to make segments of code that can be plugged into an overall code base, without it having to learn the thousands of lines of code that will work with it.
It's clear by now that hybrid languages embracing the positive aspect of functional programming are gaining ground. Instead of being ideologically driven, just take what works and make the best out of it.
Horses for courses is right. Or perhaps horses for environments? I once worked on Java applets in the browser, and I can say that OOP is not a good fit for that environment. It cured me of my Java snobbery when I saw how easy it was to handle events and async code using first-class functions that even early Javascript supported, compared to the laborious boilerplate that Java 5/6 required.* I recall an earlier video on this channel about event-driven programming, and my feeling is that this is a paradigm that may well suit FP more. At least, if GUI coding is any indication. I have no data to support that claim, though. :( * It has been a while since I have worked in the Java space. I would expect that the lambdas have improved things now.
A practical example for use of these concepts in realtime applications (for AI and solving certain graphics-related work, typically): with Java's Lombok library, I think it's quite straightforward to combine OOP and functional programming concepts where each might be best-suited. A straightforward rule of thumb: parallelized operations may read, but not write, to shared memory spaces until the thread terminates. Writing exception handlers with Lombok that allow one to easily track down bugs within parallel code is nicely straightforward, too. The idea of "automatic parallelization" is, I feel, a bit of a chimera: parallel is not an instant win for performance. It's relative to the complexity of the problem to be solved, vs. the inherent costs to make code reliably parallel. Good parallel code needs to go do something really expensive, but doesn't absolutely require time-accuracy; it may or may not return before the main thread is done, and one should program in a way that handles that elegantly. It's never cut and dried. Use what actually works better. Lastly, if there's anything where I thoroughly disagree with Dijkstra, it's the use of globals, or things that act like them. By all means, use them. Just be organized about it, and document them well. They aren't inherently evil, lol.
My advice: program at least 3 years in a FP language and then make a 2nd video. Hearsay is not good enough. The main problem is mainstream, critical mass or what is a safe choice career-wise for both developers and their managers. Started a new FP project 5 years ago - ask me how many times I was (jokingly) asked if it was a good choice, if hiring is OK etc...
Just 3 years for an experienced programmer to see the benefits of FP? That's like nothing... surely I will take you up on that. Who's a good FP guru? What is a good FP cult?
I studied FP mainly to learn about it. Then I applied some of its concepts to the procedural programming that I do for my day job. I also use Excel HEAVILY, and that is functional programming. I think the real issue with FP is it makes certain every-day tasks HARDER than other languages do. You know, the ones like file access and database access that are the heart of most work we do.
We're doing both OO and functional-style programming in Python. In the case of needing to model objects that provide functionality to other users in modules, we use the OO features where those fit best with tackling that problem. In other pieces of code we are manipulating lists and collections of data that lend themselves really well to set theory and functional programming, to solve the problem in an elegant way. It's horses for courses: you tighten a nut with a spanner; you put in a nail with a hammer.
Every paradigm has its place. I use OO where encapsulated state is important (in-memory caching comes to mind, for example) and FP for modeling my data types and implementing anything with inputs and outputs (often algorithms). Interesting for me was the realisation that, in the end, web requests and their responses are nothing other than input->output functions. So, for me, the functional approach fits extremely well for that kind of application.
I really enjoy learning bits of functional programming. It's very helpful to writing better OOP that doesn't keep so much state around and it really trains you to think of flow differently. It's a nice place to visit, but wouldn't want to live there.
As a functional programmer I get really frustrated when I see giant abstraction spaghetti monsters that barely solve the problem, from programmers that value form over function, where the "form" in most cases is impossible to maintain until you learn all of the idiosyncrasies of that particular programmer. Functional programming is essentially the opposite: function over the traditional "form". You write half as much code, it's maintainable, everything is in the same spot, and it usually runs faster as well. Ironically, functional programming is even more modular, because you don't have to scale an inheritance hierarchy or go digging around in some moron's interpretation of SOLID principles just to add or remove something. Honestly, functional programming is just way better for small to medium-sized software solutions.
@@richardgomes5420 They also ignore Scala is not a language that is taken seriously by anyone respectable. It is a toy experiment by Martin Odersky. He wasted some 20 pages to prove through equations his "implicit" system while all of lambda calculus takes 3 lines to define. It's not a serious language.
But how small is small software? And what kind of software are we talking about? And again, if a paradigm is only, or mostly, good in certain circumstances, then why promote it as "the best"? Sounds like different problems should be solved with different approaches.
The problem I have with comments like this is that none of the problems you describe are caused by object oriented design. They are caused by bad design. Those problems are also not inherently solved by functional design. My theory is that many FP zealots got frustrated with dealing with people's poorly designed code so they switched to FP in an effort to learn better design. They started caring about good design and with practice got better at it. Then they incorrectly attribute their improved design to the paradigm rather than their own experience.
As the BEAM shows, the actor model really is a developer's superpower... with its idea of isolation of processes, the ability to restart just one pid without affecting the system, the fault tolerance it offers, and Elixir Phoenix LiveView as an alternative to JS, it really is a massive jump over OO. Unlike almost any other language, Erlang and Elixir actually do make concurrency easy... and as for blindingly obvious, I remember many stories of 50-100 servers being reduced to 3 when switching to Elixir. There is also more stuff available in the BEAM: you can remove Redis, dump Docker clustering, and more. The complexity is also massively reduced when you count all the other services needed to make most OO programs work compared to Elixir; you don't have a million other services, you can remove the endless JS churn, and suddenly you have 5 devs being more productive than 50. Even if you say Elixir has only 1/30 of the devs of JS... that is actually a vast number! It shows that in just over a decade it's got a real foothold. Every man and his dog is a JS dev these days; there are so many React bootcamps it's absurd.
I find FP can be easier to understand and to write unit tests for, and so it is easier to debug or update in the future. So for me it is about the development process more than performance. I wasn't aware of Amdahl's law... that was useful, as your videos usually are 😊
I like the functional idea and I always rejoice when I can make methods static and when I can remove loops by using things like LINQ. That said, purely functional styles tend to end up clunky and have difficulties interacting with the rest of the world. For me I strive to make my code more functional wherever possible but it isn't an end goal in itself. I like representing my systems with objects and I try to make all my transformations of data between objects as functional as possible.
The main advantage of using a "purely" functional or OO programming language, or any language that enforces paradigm X, is that you are "forced" to develop a new type of thinking. This is practically impossible in an environment that allows multiple paradigms, which is why C++ programmers back in the day used C++ as C with a sprinkle of OO ideas, or why we use functional ideas only here and there. I think the great value of strictly X-type languages is that they force you to think that way and develop skills that you will or will not use, depending on your problem. You do not learn Haskell to make your next big project in Haskell, but to train your brain to think in that way as well.
Yes, it's just you ;-) I'm only getting started with Elixir, and I love it. Functional programming is far from dead. The problem, of course, is that procedural languages are much more ingrained in companies. It's a kind of vicious cycle, because programmers learn the languages that companies use so they can make money, and companies use the languages that they can find people for. That being said, a lot of popular features of functional programming languages are making it into the procedural languages (lambdas, pattern matching, etc...). There's another sense in which they are useful: just by learning them, you broaden your way of thinking, and that helps you write better software even in procedural languages.
How many people think the Unix shell is not a functional programming language? They don't use the pipe operator. It's not the be-all and end-all of FP, because f(g(x),h(y)) is hard to code (without side effects), but f(g(x)) is easy: g < x | f
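The pipe-to-composition correspondence in one line of Haskell: a pipeline like sort | uniq | wc -l becomes ordinary function composition (countUniques is an illustrative name):

```haskell
import Data.List (nub, sort)

-- Like the shell pipeline, this reads as data flowing through a chain
-- of small, side-effect-free transformations.
countUniques :: String -> Int
countUniques = length . nub . sort . words
```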
I've thought for a while that the good way to have data science sorts do stuff with traditional software devs is to have the former work in a purely functional way (or almost so). That allows (for example) the latter to do much of the application security (e.g., related to file and network I/O). I have yet to succeed in using this division of labour, however. A sticky point was that DS wants to do "visualizations" and that often entails web stuff (e.g., ShinyApp) hence a security boundary, etc.
I went through a phase of loving FP and in the process took it too far. A lot of the code was satisfying to write but sometimes difficult to read once written. I also began to fear the runtime overhead that it creates. Nowadays I'm much more pragmatic: I still use FP, but not exclusively; rather, I like the declarative approach when it can improve maintainability. For example, I discovered that reduce can be used in more situations than may at first seem to be the case. When I see a reduce in code, I know exactly what it's trying to achieve, whereas solving the same problem without a reduce often makes it harder to deduce the intentions of the code, because of the lack of an instantly recognisable pattern, and thus increases cognitive load. But to me practicality is king, so use FP when appropriate, not because you want to go all in on an ideology.
Loved the talk on software. From a hardware point of view, they should have fewer processor cores and higher core speeds. That day may be coming with optical cores as hardware. Neural networks will change everything: they will change how code is processed, stored and executed. Neural nets will be programmed for functions in a language, optimized by I/O speed. We can dedicate neural nets to their function. We could also do this in current hardware if they wanted to. Government projects no longer want C++ as the source code because of hackers; they want foolproof code that cannot be hacked, and Ada was supposed to do this for them. It's quite amazing how a 486 CPU hooked up to an SSD can run code almost as fast as current processors in 16-bit, from a user's point of view. Oh, you're missing the colors, yes!
Sir, would you please provide your view on the Rust language? I feel the design paradigm is very different; not much of it is clear to me. But the way you explain is very good. I am looking forward to seeing a Rust-related talk from you. Thank you.
There's something fascinating about how we fail to grasp formal languages even though we created them, like programming languages. I think it has to do with fundamental problems in formal reasoning that the sciences can't recognize, or are shoving aside as something that's not worth thinking about.
I wouldn't go that far, but I do think it is fascinating that we serialize by speaking, then try to develop artificial "speech-like" systems (written language) that we have trouble re-parallelizing.
Maybe the hype has died down, though the patterns feel much more prevalent throughout code bases. Best case as any hype passes is to have the elements that organically make sense be absorbed by the relevant parties/sectors. Personally am still reading books on functional cpp, and patterns learned from an old ocaml course still heavily define my style in all languages. So it hasn't gone anywhere for me.
I see that when programming there's no single answer or paradigm that provides the best solution. In many cases the solution involves many different steps: sensor measurements, narrow-band data transfers, data sanity checks, data aggregation, data storage, data processing and data analysis. In some of the stages the order of data is important, while in others synchronization of data streams is important. Some languages are better than others, and sometimes just a certain feature of one language is enough to make a certain task easier. Absolute immutability is something that really messes with your mind if you come from a world where you usually don't use it. In C# and Java you can decide which variables shall be immutable in order to safeguard the behavior of the program. The program will run fine without the declarations, so it's not always used, and unless you have strict ideas about how the best code shall look, it doesn't really matter.
I'm a hardware engineer of many years. I studied multi-processing back in the early '80s and still remain unimpressed. The processor manufacturers, I feel, have stuck more cores into a piece of silicon because they could; it has never been clear to me that there is a performance increase, and the programming complexity and synchronisation issues remain the bottleneck. Yes, I can see how FP could address these issues, but it hasn't happened. Amdahl's law rules 🙂 Better to put additional special functions in there to use up that silicon.
I remember the times when we had parallel I/O ports on your PC and for HDDs (parallel SCSI). But now we use SATA, USB 3.0 and PCIe x16 together. It's all about the use case.
The value of FP is not only parallel performance compared to a single core, but how the software is composed and its cognitive impact. In my opinion, OOP shines in dependency management and encapsulation, but it failed by selling inheritance as a core feature when it isn't. FP basics help maintain explicit state, avoid globals, and decouple software from time. I like both styles.
I think you will find that top modern processors can run rather faster than 3 GHz. There is also the issue that modern processors are superscalar and can exploit parallelism inherent both in a given ISA and in the stream of instructions the CPU is required to execute. Thus single-thread performance still continues to improve, albeit at a lower rate than in the past. It is not just a story of more cores, but of the hardware designers exploiting parallelism at a much lower level. There will be a theoretical limit to progress made this way, but it is real and cannot be forgotten.
Try again. Benchmarking is hard. You'll often look at small parts instead of the whole. Plenty of companies are doing "functional first" programming successfully.
I feel the need to nitpick about the clockspeed claim of 3 GHz, true enough that the previous trend has broken, but a lot of CPUs these days are capable of ~5 GHz sustained if adequately cooled. Even if we ignore "gaming PCs" and look at the pinnacle of reliability, IBM mainframes, their newest CPU is the Telum II which runs at a fixed 5.5 GHz. So I think 5 GHz would be a better choice as the current practical limit of clock speed.
I like that Dave wears his opinions on his sleeves. I listen to him even if I disagree with him on many points. He seems to have ignored what Michael Feathers said on his own channel.
Is there any data available to explore an hypothesis about defect rates? Although pure FP implementations cannot have any side effect defects, is there another class of defect that becomes more frequent? Perhaps the constraints of pure lambdas makes some code harder to read?
Type-level wankery is all well and good when you’re in your 20s and have time and money on your hands, but the cost-benefit tradeoff of using complex languages for real world systems, where every different developer and team uses a different subset of the language, doesn’t work. Happy to be convinced otherwise if “lean Scala” ever takes off.
Amazing take 👏 👌 The only thing you are not taking into consideration is that today the "mainstream" is shaped by "influencers", influencers who are pushing different methodologies, technologies and concepts based on their individual misconceptions and beliefs.
The main Haskell compiler has some very subtle performance-gotchas that trip up even experts, one being related to how it does beta-reductions of let-bindings IIRC, and the side effect of heat is not captured in the I/O monad. Probably Rust is a better fit in most cases where performance is equally important to correctness. But sometimes OCaml or Lua is best, and why not keep an eye on Inko and Zig. Having a large toolbox with not just hammers is a win.
Immutability is the big strength of functional programming, but also one of its weaknesses. State ultimately needs to change; it is not just about putting in all the parameters and getting the result 42 out. Code can be heavily parallelized because of immutability, but it can easily eat up a lot of memory in the process. In most systems, there is some underlying persistent data that has to be changed, by either a service or a person at a computer or web interface.
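A minimal sketch of how "state change" looks with persistent data structures, using Haskell's Data.Map from the containers package; the stock counter is illustrative:

```haskell
import qualified Data.Map.Strict as Map

-- "Changing" immutable state yields a new version; the old one stays
-- valid, and the two share most of their structure, which is what
-- keeps the memory cost below full copying.
main :: IO ()
main = do
  let before = Map.fromList [("stock", 10 :: Int)]
      after  = Map.adjust (subtract 1) "stock" before
  print (Map.lookup "stock" before)  -- Just 10: the old view survives
  print (Map.lookup "stock" after)   -- Just 9
```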
Look into algebraic effects. You can build mutable variables out of immutable constructs. The point is that they can then be type-checked by an algebraic type system, unlike plain mutable variables. Mutability is not needed.
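The comment is about algebraic effects; Haskell's State monad is an older, simpler construction in the same spirit and makes the point concrete. A minimal sketch using the mtl package:

```haskell
import Control.Monad.State (State, evalState, get, put)

-- A mutable-looking counter built from pure parts: State just threads
-- an immutable Int through the computation, and the "effect" is
-- visible and checkable in the type.
tick :: State Int Int
tick = do
  n <- get
  put (n + 1)
  pure n

threeTicks :: (Int, Int, Int)
threeTicks = evalState ((,,) <$> tick <*> tick <*> tick) 0  -- (0,1,2)
```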
@@oysteinsoreide4323 " @ you have type check in many mutable languages. And in C++ and C# it's very easy to ensure immutability if you need it. " Those are not type checks technically, they are tag checks. Type checking proper only happens in algebraic type systems. C# and Java have tag systems, not type systems.
If you try OOP in Rust, you will have a very hard time. If you try FP in Rust, you can make your program outcompete implementations in other languages in terms of benchmarks (sometimes even C++).
I also think presenting FP as a huge performance boost is wrong. FP can only outshine OO performance-wise in machine learning, which relies heavily on parallelism. It's also impossible to build everything in a purely FP style. At the beginning I thought I hated OO, but I just had a problem with inheritance and classes. When I got rid of those and used the OO ideology designed by Alan Kay, everything became great. So I prioritize FP first, then procedural and OO when needed.
@@rotgertesla Your business processes also happen in an order. You will not ship a product before you get the money. You will not push the button on your coffee machine before you make sure there is already water and there are coffee beans in the machine. Ordering is one of the most important things in FP: when you feed the output of one function into the input of the next, you have to make sure the output of the previous one matches the input of the next. After you make this work, you rewrite it with errors/exceptions and missing values in mind (None; null/nothing in other languages).
What happened to functional programming (especially Haskell) is that it took a big hit from the high interest rate environment of 2022-2024 since it was primarily used by small to medium size start ups, for which the macroeconomic environment was brutal.
The readability, understand-ability, and maintainability by developers has to trump functional (or functional style) code most of the time, for businesses at least. I've been a dev for 34 years and hate picking up someone else's code that is unfathomable, taking far too long to figure out what it is doing.
Twenty years ago, I led a team that did blackbox reverse engineering to replace 500,000 lines of compiled-language code (about 150 kloc of C, and the rest Tandem Application Language) running on mainframes with 15,000 lines of Python running on Linux servers... and it ran faster, on hardware costing 10x less. That didn't lead me to conclude that Python is faster than C. There is no substitute for understanding the problem. I find it really unfortunate that people call the ideal case for parallelism "embarrassingly parallel"... weirdly, many people get confused by this term, thinking it is somehow a bad thing. It should really be called "perfectly parallel"... it represents the possibility of infinite speedup. That's the gold standard. As you rightly indicated with Amdahl's law, FP automates the drudge work of writing parallel code, but cannot get around Amdahl's limits. Writing an algorithm with no parallelism, but that you can run infinite copies of, if you can formulate the problem that way, is much better.
The core of the problem is, as a community, we're obsessed with The Technical Solution to basically every problem, while the reality is that the biggest piece of what makes software difficult to work with is the human side. Functional programming, OO, type safety, etc etc; all of it tries to technically solve the fact that people write bad code. None of them work though, because the problem is between the ears and on the delivery schedule, not in the silicon.
Hey mate, functional programming is top-notch. Honestly, those unfamiliar with the bridge between DOP (Data-Oriented Programming) and functional programming should get acquainted. The design decisions involved in delivering great software need further exploration for people to understand the implications, which are indeed very positive. Sincerely, this is not a subjective opinion but an objective fact, as I have read a book on functional OOP and seen the power of this design pattern.
I think it kind of correlates with the death of enthusiasm for Rust on the backend. Something something clean code + messy internet + messy business requirements = disaster.
Many parallelization claims for automatic parallelization totally ignore caches in modern CPUs. All the cache-compatible synchronization primitives are simply too slow to parallelize fragments where the synchronization takes more time than the actual execution of the fragment. And the faster the CPUs get, the more expensive the minimal synchronization gets when the cost is expressed in potential instructions that can be executed by the current thread. The higher the IPC the more expensive the synchronization tied to base clock of the CPU gets. And the synchronization must be tied to base clock to be synchronous over all cores. As a result, automatic parallelization nowadays focuses more onto SIMD than whole threads because SIMD doesn't require similar level of synchronization because it all happens in one L1 cache on the local core.
The ReactJS library has moved from an OOP to a functional programming style. This has made the code easier to read and understand, leading to better maintainability. OOP, with its state mutability and mixing of data with business logic, is so clunky to work with.
I've been experimenting with functional programming. It is good, but I have been using a hybrid approach: sometimes a direct function call on an object is more performant, and sometimes a decoupled task or service is more appropriate. I probably don't fully grasp what is and isn't functional programming, but I have enjoyed using messages to keep everything nice, tidy and synchronized.
@@jonathanjacobson7012 [...] May or may not be a good method of abstractly representing the problem at hand. But sure, it is individuals' problems if recursion doesn't represent the problems they want to solve.
Evidently, the answer for any tool is that it is the best when it solves the problem better than any other. The problem cannot be more than a statement of a few lines (I need to put this painting on my bedroom wall), not a consolidation of all the problems a company is facing. The problem, like in math, must be defined in a certain space, or constraint set (time, budget, resources available, scalability, short/mid/long-term objectives, …)
@@robertlenders8755 DDD mentioned
Not always. A gazillion function calls are often difficult to read and debug.
@@piotrc966 A gazillion of everything is hard to read. Think of the 234 layers of interfaces in any oop lang.
FP has had huge influence on other languages, with features like lambdas and immutability and declarative coding.
An enormous amount of functional programming is being done with C++ nowadays, and more all the time.
@@Laggie74 I don't like what you're saying! I'm gonna go become a mips programmer!!!
@@kayakMike1000 FP is not about the absence of state and side effects; it is about having fine control over them.
Keep a large and varied toolbox. "Functional programming is the ultimate tool" is dumb. "OOP is the ultimate tool" is dumb. Etc. Learn everything, use the tool that best solves the problem.
Yes. Doing a 50% / 50% mix is probably also dumb. Seems like a great way to shrink your bus factor.
We (programmers) like to think of our code output as deterministic. That we solved the problem the "one right way". Read any dogma about any pattern / language / framework.
Decisions are hard and sadly... documented numbers generally don't inform most software decisions.
"OOP or functional programming" is a question that does not make sense, because there are object oriented functional programming languages.
@@rranft It's not just about "solving the problem"; it's about refactoring code, maintaining code, etc.
So if I'm a saboteur or a marketing greedy guy and I make a paradigm that is crap just to sell you books then I will fool you? Good to know.
@@amigalemming It does make sense, because OOP does not usually care about algebraic typing, so you can't get those benefits there. OCaml is functional but has typed objects; they are more limited than, e.g., Ruby objects, but they're still algebraic. In practice there is seldom a reason to use objects in OCaml, and mostly they're not used there.
C# (my primary language) has so much functional in it that the line is quite blurred.
MS Excel is the most used functional programming platform on the planet.
Probably bash too.
@@complexity5545 bash is not functional, it has mutable variables
But without VBA as a fallback option it probably wouldn't be.
I agree with the point about parallelization, but FP's main selling point was never parallelization. That argument was popularized by Silicon Valley engineers who were looking for ways to "scale" things and saw an opportunity in FP and its higher-order functions to easily parallelize work.
The main selling point of FP was controlling side effects and being able to reason about them systemically. Unfortunately, many only focus on the part where there are no side effects (immutability) and not on the effect controlling. This seems to be changing these days, as React has an effect system that is inspired by recent work in effect systems. Also, there are new languages where this effect system is built in and incorporated into the types, so that side effects are really part of the interfaces and can be checked mechanically.
Erlang and Elixir are so pragmatic, and based on such scalable ideas, that I think it's even good that they remain in an alleged "niche", so that they stay a "secret weapon" for effective engineers.
My favorite talk on how to make your program actually scale with the number of cores is this: ruclips.net/video/bo5WL5IQAd0/видео.htmlsi=Osm2yF8gJppcGAoh
Another way of seeing all the languages versus the ones running on the BEAM (VM): the ability to model lightweight, state-isolated but communicating, safely interruptible, concurrent and potentially distributed units of computation is part of the execution environment, supported by the syntax and the included batteries (OTP). No magic parallelization involved. It's deliberate and engineering-friendly. There's likely simply no other modern ecosystem with such a "Solid Ground" at the moment: ruclips.net/video/5SbWapbXhKo/видео.htmlsi=5a9suZ6eBNT1muu5
ruclips.net/video/JvBT4XBdoUE/видео.htmlsi=-Kc7BKSbRdkYUWRM&t=2243
Many languages differ in syntax, but they have execution environments / run-times / VMs that differ in semantics and other properties. This is the actual gold mine. Syntax gets boring but starting concurrent and well-progressing processes on a single-threaded device in the same way as one would on a distributed supercomputer - that is exciting. www.atomvm.net/
Functional programming doesn’t just potentially make your code run faster, it also forces you to structure your code in terms of inputs and outputs, making it easier to test and reuse. This can of course be done in OOP languages as well, but then it’s much easier to make a mess out of globals and side effects.
How exactly does functional programming make your code faster?
Functional programming is math. OOP is anti-math, promoting poor code structure and awful data models.
@ If you write pure functions, it is much easier and safer to run those over a data collection using many threads or even different machines. Think "map-reduce", where each worker gets a small piece of the total work and the results are later collected in a reduce step.
This CAN of course be done in any language, but then you have to worry about thread safety in a way that you don't need to if all your data is immutable.
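To make the map-reduce point concrete, a minimal Haskell sketch (the score function and input list are made-up stand-ins; compile with -threaded and run with +RTS -N to actually get parallelism):

    import Control.Parallel.Strategies (parMap, rdeepseq)

    -- A pure scoring function: no shared mutable state, so evaluating
    -- it on many inputs at once needs no locks.
    score :: Int -> Int
    score x = x * x + 1

    main :: IO ()
    main = print (sum (parMap rdeepseq score [1 .. 100000]))
    -- parMap is the parallel "map" step; the final sum is the
    -- "reduce" step that collects the results.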
@@krumbergify So the main benefit of functional programming is immutability, which can really be achieved in any language; it is just the pure functional languages that enforce it at the compiler level.
@ Yes. Guarantees matter. You can certainly write thread safe code in C++, but Rust makes it much easier since the compiler enforces that there are no data races in your Rust code (as long as you don’t write unsafe blocks).
Smalltalk is really fun to use. It's a living system full of objects that you can modify (mutate) and play with and see what breaks. Educational systems, such as Etoys and Scratch, were built on top of it. If you're daring enough, you can even modify its compiler while using it. And if you make a mistake, you have save points (images) to save the day.
Functional languages, on the other hand, aim at writing repeatable and testable algorithms by limiting side effects. Their focus on immutable data helps avoid mistakes that stem from complex graphs of mutable objects and unconstrained concurrency.
They both have their uses. One is explorative and playful; the other is rigorous and predictable.
*Way* too much emphasis on parallelization. It's like dunking on the OO 'reuse' argument that just doesn't work beyond the most basic stuff. That doesn't mean OO is useless, just that that aspect was oversold.
OO reuse never worked in practice until NuGet happened. NuGet delivered pretty soundly on that promise. How many projects have you worked on lately without using a NuGet package or two? Doesn't that seem like re-use?
@@timlong7289 wtf is nuget
Not to mention the argument is actually wrong. Algebraic Effects can now be type-checked, and they allow us to write procedural code that then gets type-checked for all side effects, so we can have our cake (writing procedural code) and eat it too (type-check all side effects as if we wrote functional code). Therefore all procedural code can now be parallelized.
@@jboss1073 I'd like to see a link
@@adambickford8720 Answer Refinement Modification: Refinement Type System for Algebraic Effects and Handlers
Fuga Kawamata, Hiroshi Unno, Taro Sekiyama, Tachio Terauchi
Modern languages have just taken the best parts of several paradigms, including FP.
When every class with behaviour is really a singleton, that's just a module; and when every method is pure, that's really a function. The syntax is just closer to natural language, so it's easier for most people to read, and less strict about mutable state in local variables, as long as that mutable state is confined to the function.
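That last point, mutation that cannot leak out of a function, is something a compiler can even enforce; a minimal Haskell sketch using the ST monad (sumSquares is a made-up example):

    import Control.Monad.ST (runST)
    import Data.STRef (newSTRef, modifySTRef', readSTRef)

    -- Callers see a pure function from list to Int. Inside, it loops
    -- over a mutable accumulator, but that state cannot escape runST.
    sumSquares :: [Int] -> Int
    sumSquares xs = runST $ do
      acc <- newSTRef 0
      mapM_ (\x -> modifySTRef' acc (+ x * x)) xs
      readSTRef acc

    main :: IO ()
    main = print (sumSquares [1, 2, 3])  -- 14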
That's not how this works; it's either FP or it's not. By this logic Java 1 was FP because it had a garbage collector, and every language with a REPL is FP.
You talk all about "performance", but in functional languages like F# or Haskell there is a saying: if it compiles, it works. And that is very often true.
No, it can't be true; I can always write syntactically correct code that does the wrong thing. If I can't, then the language is broken. It may certainly mitigate some kinds of mistakes, even common mistakes, but the compiler can't decide that the code is doing the right things.
@ContinuousDelivery Your comment here tells me you don't understand the statement. I'd strongly suggest you work in a language like F# for a while, and I think you'd understand it. I've certainly found it to be a truism.
@ContinuousDelivery
F# quickly became my favorite language because THAT IS ACTUALLY TRUE.
I don't know other functional languages (although F# is a hybrid, functional-first one), so I can't say anything about them, but in F# there's a saying: "make invalid states unrepresentable"; and that's quite easy using algebraic types.
So, yeah... If it compiles, it works.
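For anyone who hasn't seen "make invalid states unrepresentable" in practice, a tiny sketch of the idea, shown here in Haskell syntax rather than F# (the Email type is an invented example):

    -- There is simply no constructor for "verified without a date",
    -- so that invalid state cannot exist at runtime.
    data Email
      = Unverified String
      | Verified String Day   -- an address plus its verification date

    newtype Day = Day String  -- stand-in for a real date type

    send :: Email -> Maybe String
    send (Verified addr _) = Just ("sending to " ++ addr)
    send (Unverified _)    = Nothing

    main :: IO ()
    main = print (send (Unverified "a@b"))  -- Nothing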
FP is not a class of programming languages. As the name suggests, it is a programming paradigm. So your analysis of FP's popularity based on the popularity of individual languages is fundamentally flawed. Yes, purely FP languages like Haskell and Erlang are not much higher on the list than before; however, almost all the major languages on that list have moved towards FP. From Java 8 onwards, the vast majority of new Java language features were adding support for FP. JavaScript has also been moving towards FP, especially in its libraries. Ever heard of a little library called React? The reactive design at its core comes completely from FP. FP is indeed eating the world, but it is mostly doing so by changing the existing programming-language giants, not by breaking through with solely FP-based languages.
I guess you didn't watch to the end 😉
I hope watching to the end will discourage me from hitting that "Don't recommend channel" button.
I'll be hitting that button. You'll not get continuous feedback from me, I'll just let you know that it's not necessarily because I disagree with you. Just that this hasn't been worth my 20 minutes, and I'm not a fan of clickbait titles in general.
Well said. I was confused by the first three-quarters of the video where it addressed FP as a language rather than the paradigm. It's kind of a metaphor for what's happening at the implementation level a lot of time, so the irony gave me a chuckle :D
@@ContinuousDelivery No, if you start the video with a stupid argument, we won't watch it to the end.
By my measure, functional programming had a resurgence about a decade ago. Mainstream languages subsumed some of those features, and static typing took charge then with things like Typescript, Rust, and typing for Python.
I think automatic parallelization isn't a big deal in major programs. But one place I really like it is with scripting. I experimented with writing my utility scripts in a variety of languages, and one thing I really liked about Elixir was how trivial it was to turn most of my serial scripts into parallel ones. All it usually involved was changing a couple lines of code. Experimentation and measuring was trivial.
Lots of functional languages can also do OOP. Like Clojure with its multimethods, which mimic a less powerful version of Common Lisp's OOP.
If you've never tried out OOP in Common Lisp, I recommend it. It made it a real joy for me.
In my (admittedly not very wide) experience, the only place I saw anything like "automatic parallelization" actually work is with OpenMP in Fortran-based weather models, i.e. feeding the outermost loop in deeply nested do-loops to different threads with a compiler directive. I still have yet to see major software get (re)written in the pure functional languages that were supposed to make this Just Work™ (and without annotations, mind!) like they were telling us in college a quarter century ago. It's still too weird/hard/whatever a way to program most of the time it seems.
I always did like the design philosophy of Erlang and OTP though, and it's a shame I never spent much time with them or Elixir.
Scripting is probably the worst use for it
I don't think performance is the main advantage, but correctness. Performance comes as a side-effect, when you are free to write highly concurrent code that doesn't trip itself.
So performance as a selling point for functional programming is a strawman.
@amigalemming To some extent, it is. I have never heard of an FP enthusiast that chose FP for performance first.
It's correctness that drives us.
But the inability to do concurrency correctly can have huge performance impacts. Python's GIL is a great example, and they haven't finished removing it…
That is, we are used to build upon correctness to achieve performance.
@@PierreThierryKPH Concurrency (threads, Software Transactional Memory) is not about performance but about user experience. E.g. the GUI shall not be blocked while the application performs some computation. The approach for performance is parallelism.
He can't argue against correctness.
@@jboss1073 some people will definitely argue that correctness isn't necessary or that some approach to correctness is overkill.
Kotlin made it much easier to mix functional and OO programming on the Java virtual machine.
Compared to modern Java I find the differences to be small.
I used Scala about a decade ago and got it into production systems. Not pure FP, however it does nudge you to think about state, idempotency, etc. Now even when I code in Python and JavaScript, I definitely like that there are FP-influenced constructs, and I think about the lessons I learned back then.
Great content sir! Thank you.
May I ask, would Rust be considered functional? I've dipped a toe in it, and it seems like it wants to be seen that way.
@@a_rugau From what I have seen, Rust is functional-inspired but isn't functional in the traditional sense. Thanks to its borrow checker you can write low-level code and know it will work with certain guarantees, like functional code; however, the mechanism it uses to give these safety guarantees isn't immutability but the borrow checker.
Its syntax is inspired by the ML functional family, esp. OCaml.
Well, certainly the default stance of variables being immutable goes some way to justifying that claim, but I think I'd think of it as a hybrid, rather than a pure functional language, like most modern languages.
@@ContinuousDelivery Its type-checker is algebraic. That sort of forces functional programming even if impure.
Thanks, good commentary! Absolutely agree: apply the approaches and paradigms that best express the problem and the solution in the most understandable, readable - and, yes, elegant! - way. Then think about performance & optimize, including parallelization. Weaned on structured programming, thence to OO and finally FP, I've found they all bring concepts that can help enable beautiful code that solves the problems. I do find myself using FP more and more for more concise code, but wherever there is persistence and mutable state, those are objects. (These days the team I'm on slings Python, but I have a crush on Julia...)
Always a thumbs up for any emphasis on Amdahl's law, the most important law in high-performance compute. Most often, for large problems that require massive parallelism, one is more concerned about memory cache, memory paging and data distribution. For some problems it is cheaper to repeat calculations than to distribute them; in others, the very largest memory space possible is the way to go, for the exact opposite reason. OO or functional is pretty far down the list, and likely immaterial, especially once the compiler has done its thing. If the kernel matters that much then assembly may be required, but that still has to do better than the compiler on its own.
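For reference, since several comments lean on it, Amdahl's law in its standard form: with a parallelizable fraction p of the work and n cores, the speedup is

    S(n) = \frac{1}{(1 - p) + p/n}

so even a program that is 90% parallelizable (p = 0.9) tops out at a 10x speedup however many cores are added, because the serial 10% dominates.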
As long as the BEAM is still used, functional will never die. The people who actually write code that needs to scale (and vercel doesn't count, sorry) use it for a reason.
@@C4CH3S I guess I am a Next.js soy dev then 🤣
I can't wait to test out Gleam on the BEAM; I hear it is OP.
Vercel typescript soydev andys would never understand being an elixir gigachad
Let it crash!
Thank you. I happened upon this video, and watched to the end, rolling back a few times to listen again.
Very well written and thought through. I learned a thing or two in a surprisingly short amount of time.
Great work. I am going to check out your other videos.
Intelligent. Thank you.
[ps: Subscribed of course]
I worked for a company that used Ruby and Elixir, and spent some time learning both (previously I used JS and Java). I loved both languages (OOP and FP respectively), but could not find another Elixir job after that contract ended. I stick with Ruby for personal projects. IMHO, I think the culprit is the dynamics of the job market. A competent Java programmer can learn Elixir in a reasonable amount of time, but recruiters will pass on those candidates, then complain that it is impossible to staff, and suggest moving to another technology. It's a vicious circle.
FP languages didn't catch on much because the paradigm is very hard and very different: true FP means you are doing math and recursion all the time. While software engineers/developers are good at math in general, they aren't that good. I mean, with a lot of training you get there, but who wants to do that when you can simply instruct the machine what to do?
Companies also don't want to spend money to retrain their entire staff. I did an FP module (Scala) in college; easy isn't a word I would use to describe it.
Nailed it. True, there's a lot more to be said both for and against, but I'm glad we're actually talking about this. FP is great, but it's not a silver bullet and there are valuable things you give up when you use it.
I've been loving FP in R. R is higher on the list than any of the other languages you highlighted, and while it's not pure FP, much of its design lends itself to FP (like functions passing lazily by value by default).
My last project I've been slowly converting so that it's almost entirely done in maps, reduces, and filters, with pipes everywhere and nary a for loop to be seen.
Me too. I love R just because of its functional-paradigm feel. I can program faster in R than in Python because of it.
In my limited experience, writing parallel programs off the main thread meant that I could write really unoptimised code in the parallel processes and it didn't have any impact on the program as a whole. I'm sure you wouldn't be able to measure any performance boost on a benchmark, but the perceived performance was great. And unoptimised code takes a lot less time to write. I want to make clear that I agree with everything you said in the video. Your mileage may vary.
A great presentation, kind of the Emperor's new clothes. Parallel programming is the game where one thread or process does the work while the others are slowing that thread/process down.
What AI may change is the following: AI may indeed optimally program a well-defined task and even re-program existing code.
I've been doing FP for over 25 years and was always very skeptical of the possibility of automatic parallelization because, as you pointed out, you may end up with a parallel but slower program. However, there is a much easier benefit for concurrency: immutable data structures allow safely unrolling the effects of concurrent threads. Together with explicit side effects, this is why you can have easy and efficient Software Transactional Memory in Haskell but not in mainstream languages.
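For readers who haven't met STM, a minimal sketch using GHC's stm package (the account-transfer example is the usual textbook illustration, not anything from the video):

    import Control.Concurrent.STM

    -- Composable atomic transfer: if another thread changes either
    -- balance mid-transaction, the runtime rolls back and retries.
    transfer :: TVar Int -> TVar Int -> Int -> STM ()
    transfer from to amount = do
      balance <- readTVar from
      check (balance >= amount)      -- retry until funds are available
      writeTVar from (balance - amount)
      modifyTVar' to (+ amount)

    main :: IO ()
    main = do
      a <- newTVarIO 100
      b <- newTVarIO 0
      atomically (transfer a b 40)
      print =<< readTVarIO a         -- 60
      print =<< readTVarIO b         -- 40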
A programming paradigm is like a lens. Each lens brings certain elements into focus and others out of focus. What is best depends on the problem at hand. (I have heard this described previously as: "The Problem of Composition.")
There are inherent tradeoffs with each choice. Our best option then is to have several options available and the freedom to switch seamlessly between them. We can then make mixed-paradigm code that is constantly moving toward an optimal tradeoff for each unit of code.
I was a devout Haskell adherent for my first few years of college. It has been useful to see another perspective on many things, but the only concept I've really kept is referential transparency.
I try to write procedures that mutate nothing. But on the inside, they are very imperative and go step by step; I keep them intentionally simplistic and avoid anything fancy. I use map occasionally, but even the next simplest thing, reduce/fold, is already useless to me: it never, ever, ever fails to be way more confusing than an equivalent for loop.
First-class functions can be very handy, but only now and then. I also sometimes wish that whatever language I'm using made it as easy to declare new types as Haskell, but not often.
All in all, FP has a few good ideas, but the core of my coding style remains imperative. I kinda get the impression that this is how the mainstream has experienced FP too: influential, but not likely to take over any time soon.
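On the reduce/fold point above, the correspondence with a loop is mechanical, for anyone weighing the two; a minimal sketch (sum chosen as the simplest possible example):

    import Data.List (foldl')

    total :: [Int] -> Int
    total = foldl' (\acc x -> acc + x) 0
    -- The fold performs exactly what the imperative loop does:
    --   acc = 0
    --   for x in xs: acc = acc + x
    --   return acc

    main :: IO ()
    main = print (total [1 .. 10])  -- 55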
2:47 Fortran is a modern programming language that is now seeing its renaissance, with an active community and a new set of tools being developed. :) It has actually been in the top 10 of the TIOBE index for the past couple of months!
I liked Fortran when I had to use it for the first time in 2015.
Guy Steele once said that with Java, they were trying to drag programmers halfway to Lisp. They failed, but twenty years later Rust came along and succeeded. Rust captured some of the major value propositions of functional programming - memory safety and type safety specifically, and compiler-based guardrails and higher assurance more generally. I think that's why the hype has died down; Rust just addressed enough security/correctness/reliability needs that functional languages were no longer necessary. Maybe Typescript too in the Javascript world.
Bingo. Vote parent up.
Best comment. Spot on.
@@richardgomes5420 We agree that this is the best comment here. Rust is the best FP language so far.
If only Rust came with capability objects for IO, I could even consider it a purely functional language
@@mskiptr capabilities are not FP. You can use something akin to Algebraic Effects in Rust to get pure.
18:40 An excellent demonstration of the danger of premature optimisation!
Guy Steele (of Scheme and Common Lisp fame) tried (and he tried _really hard)_ to make a language that would actually live up to the promise of effortless parallelism. It was called "Fortress", and in many ways it was heavily influenced by functional ideas, but built from the ground up with this vision of "abstracting away" the computational substrate as a core design idea.
Long story short, it didn't work. They came across some very difficult problems along the way and the project, while promising, was abandoned.
This idea that FP will just automatically and effortlessly let you threadpool everything is a pipe dream.
Functional programming, like OOP, became a commodity in nearly all modern languages.
I often apply FP and functional concepts in C++ or Go, especially in multi-threaded applications. If it cannot be done purely functional, I step back to single threaded solutions, because it's often an indicator that the algorithm is not suitable for parallel processing (think twice before using mutexes or semaphores).
But the most simple solution is usually just an imperative procedure.
OOP shines when used like Alan Kay suggested - by radical decoupling and only using messages for communication.
Use the paradigms where you gain the biggest benefit from them, always take the middle path and never trust in silver bullets!
❤ Thank you so much for this video, it was totally necessary.
I use functional styles quite a lot. The basic concept (no side effects) is worth understanding.
As to looking at charts of programming language use, there is just so much complexity around why a particular language is used that it is really hard to draw conclusions from the data.
1. Rust also helps with the adoption of FP principles :)
2. Ada is NOT dying! New versions are still being pushed forward, and modern tooling like alire and ada-language-server is being built.
I think one argument for FP style, if we talk about a language like Rust, is that it gives you a way to build higher-level abstractions, and also we can get rid of null by using the Option monad. And if the compiler manages to turn the code into imperative style at compilation, then it should be without cost.
Null safety has nothing to do with FP. Kotlin had null safety way before rust was even a thing.
@@sarabwt I was thinking more about using an Option monad to force you to handle "null" checks at compile time.
@@sarabwt The way Kotlin handles null is a functional-programming element. All major languages have integrated elements from functional languages; that is the real success of functional programming.
@@janlanik2660 No it isn't. Again, null safety has nothing to do with FP.
@ The way you call a function on a type that may be null, and it returns null instead of raising an exception in case the underlying object was null: that's a functional programming pattern.
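That pattern, chaining calls that short-circuit on a missing value, is exactly what an option type gives you; a minimal Haskell sketch with invented User/Address types:

    data User    = User    { address :: Maybe Address }
    data Address = Address { town    :: Maybe String }

    -- Kotlin's  user?.address?.town : each step yields Nothing
    -- instead of raising an exception.
    townOf :: Maybe User -> Maybe String
    townOf u = u >>= address >>= town

    main :: IO ()
    main = print (townOf (Just (User Nothing)))  -- Nothing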
Hey Dave, functional programming is a high-value but extremely niche discipline that is in use within Amazon, Google and Meta. I'm not talking about functional aspects of common languages (like lambdas in Python, etc.). But Common Lisp, OCaml and Haskell are used within automated reasoning departments.
Functional programming becomes more necessary as so-called AI is used to assist programmers.
The rules of functional programming allow a coding model to make segments of code that can be plugged into an overall code base, without it having to learn the thousands of lines of code that will work with it.
It's clear by now that hybrid languages embracing the positive aspects of functional programming are gaining ground. Instead of being ideologically driven, just take what works and make the best out of it.
Horses for courses is right. Or perhaps horses for environments?
I once worked on Java applets in the browser, and I can say that OOP is not a good fit for that environment. It cured me of my Java snobbery when I saw how easy it was to handle events and async code using first-class functions that even early Javascript supported, compared to the laborious boilerplate that Java 5/6 required.*
I recall an earlier video on this channel about event-driven programming, and my feeling is that this is a paradigm that may well suit FP more. At least, if GUI coding is any indication. I have no data to support that claim, though. :(
* It has been a while since I have worked in the Java space. I would expect that the lambdas have improved things now.
A practical example for use of these concepts in realtime applications (for AI and solving certain graphics-related work, typically): with Java's Lombok library, I think it's quite straightforward to combine OOP and functional programming concepts where each might be best-suited. A straightforward rule of thumb: parallelized operations may read, but not write, to shared memory spaces until the thread terminates. Writing exception handlers with Lombok that allow one to easily track down bugs within parallel code is nicely straightforward, too.
The idea of "automatic parallelization" is, I feel, a bit of a chimera: parallel is not an instant win for performance. It's relative to the complexity of the problem to be solved, vs. the inherent costs to make code reliably parallel. Good parallel code needs to go do something really expensive, but doesn't absolutely require time-accuracy; it may or may not return before the main thread is done, and one should program in a way that handles that elegantly. It's never cut and dried. Use what actually works better.
Lastly, if there's anything where I thoroughly disagree with Dijkstra, it's the use of globals, or things that act like them. By all means, use them. Just be organized about it, and document them well. They aren't inherently evil, lol.
My advice: program at least 3 years in a FP language and then make a 2nd video. Hearsay is not good enough.
The main problem is mainstream, critical mass or what is a safe choice career-wise for both developers and their managers.
Started a new FP project 5 years ago - ask me how many times I was (jokingly) asked if it was a good choice, if hiring is OK etc...
Just 3 years for an experienced programmer to see the benefits of FP? That's like nothing... surely I will take you up on that. Who's a good FP guru? What is a good FP cult?
I studied FP mainly to learn about it. Then I applied some of its concepts to the procedural programming that I do for my day job. I also use Excel HEAVILY, and that is functional programming. I think the real issue with FP is it makes certain every-day tasks HARDER than other languages do. You know, the ones like file access and database access that are the heart of most work we do.
We're doing both OO and functional-style programming in Python. When we need to model objects that provide functionality to other users in modules, we use the OO features where those fit best. In other pieces of code we are manipulating lists and collections of data that lend themselves really well to set theory and functional programming, to solve the problem in an elegant way. It's horses for courses: you tighten a nut with a spanner; you put in a nail with a hammer.
Every paradigm has its place. I use OO where encapsulated state is of importance (in-memory caching comes to mind, for example) and FP for modeling my data types and implementing something with inputs and outputs (often algorithms).
Interesting for me was the realisation that, in the end, web requests and their responses are nothing other than input->output functions. So, for me, the functional approach fits extremely well for that kind of application.
I really enjoy learning bits of functional programming. It's very helpful to writing better OOP that doesn't keep so much state around and it really trains you to think of flow differently. It's a nice place to visit, but wouldn't want to live there.
As a functional programmer I get really frustrated when I see giant abstraction spaghetti monsters that barely solve the problem, from programmers that value form over function - where the "form" in most cases is impossible to maintain until you learn all of the idiosyncrasies of that particular programmer.
Functional programming is essentially the opposite: function over the traditional "form". You write half as much code, it's maintainable, everything is in the same spot, and it usually runs faster as well. Ironically, functional programming is even more modular, because you don't have to scale an inheritance hierarchy or go digging around in some moron's interpretation of SOLID principles just to add or remove something.
Honestly, functional programming is just way better for small to medium sized software solutions.
Those zealots generally ignore that the Scala runtime library has 500+ while loops.
@@richardgomes5420 They also ignore Scala is not a language that is taken seriously by anyone respectable. It is a toy experiment by Martin Odersky. He wasted some 20 pages to prove through equations his "implicit" system while all of lambda calculus takes 3 lines to define. It's not a serious language.
But how small is small software? And what kind of software are we talking about? And again, if a paradigm is only, or mostly, good in certain circumstances, then why promote it as "the best"?
Sounds like different problems should be solved with different approaches.
The problem I have with comments like this is that none of the problems you describe are caused by object oriented design. They are caused by bad design. Those problems are also not inherently solved by functional design.
My theory is that many FP zealots got frustrated with dealing with people's poorly designed code so they switched to FP in an effort to learn better design. They started caring about good design and with practice got better at it. Then they incorrectly attribute their improved design to the paradigm rather than their own experience.
@@jboss1073 What are the attributes which define a "serious programming language"?
As the BEAM shows, the actor model really is a developer's superpower... with its idea of isolation of processes, the ability to restart just one pid without affecting the system, the fault tolerance it offers, and Elixir's Phoenix LiveView as an alternative to JS, it really is a massive jump over OO. Unlike almost any other language, Erlang and Elixir actually do make concurrency easy... and as for blindingly obvious, I remember many stories where 50-100 servers were reduced to 3 when switching to Elixir. There is also more stuff available in the BEAM: you can remove Redis, dump Docker clustering, and more. The complexity is also massively reduced when you count all the other services needed to make most OO programs work; compared to that, with Elixir you don't have a million other services, you can remove the endless JS churn, and you suddenly have 5 devs being more productive than 50.
Even if you say Elixir ONLY has 1/30 of the devs of JS... that is actually a vast number! It shows that in just over a decade it's got a real foothold. Every man and his dog is a JS dev these days; there are so many React bootcamps it's absurd.
I find FP can be easier to understand and to write unit tests for, and so it is easier to debug or update in the future.
So for me it is about the development process more than performance.
I wasn't aware of Amdahl's law... That was useful, as your videos usually are 😊
I like the functional idea and I always rejoice when I can make methods static and when I can remove loops by using things like LINQ. That said, purely functional styles tend to end up clunky and have difficulties interacting with the rest of the world. For me I strive to make my code more functional wherever possible but it isn't an end goal in itself.
I like representing my systems with objects and I try to make all my transformations of data between objects as functional as possible.
The main advantage of using "purely" functional or OO programming languages, or any language that enforces paradigm X, is that you are "forced" to develop a new type of thinking.
This is practically impossible in an environment that allows multiple paradigms. This is why C++ programmers back in the day used C++ as C with a sprinkle of OO ideas, or why we use functional ideas only here and there.
I think the great value of strictly X-type languages is that they force you to think that way and develop some skills that you will or will not use, depending on your problem. You do not learn Haskell to make your next big project in Haskell, but to train your brain to think in that way as well.
Yes, it's just you ;-)
I'm only getting started with Elixir, and I love it. Functional programming is far from dead.
The problem is of course that procedural languages are much more ingrained into companies. It's a kind of vicious cycle because programmers learn the languages that companies use so they can make money, and companies use the languages that they can find people for.
That being said, a lot of popular features of functional programming languages are making it into the procedural languages (lambdas, pattern matching, etc.).
There's another sense in which they are useful: just by learning them, you broaden your way of thinking, and that helps you write better software even in procedural languages.
The highest quality content (and comments) on all of RUclips 💯
How many people think the Unix shell is not a Functional Programming language? They don't use the pipe operator.
It's not the end-all and be-all of FP, because f(g(x), h(y)) is hard to code (without side effects), but f(g(x)) is easy: g < x | f
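The pipe/composition correspondence, sketched in Haskell (the two filters are arbitrary stand-ins):

    import Data.Char (toUpper)

    -- The shell pipeline  g < x | f  is composition read right to left:
    process :: String -> String
    process = f . g
      where
        g = map toUpper                -- first filter
        f = unlines . take 3 . lines   -- second filter

    main :: IO ()
    main = interact process  -- behaves like a Unix filter on stdin/stdout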
I've thought for a while that a good way to have data-science sorts work with traditional software devs is to have the former work in a purely functional way (or almost so). That allows (for example) the latter to handle much of the application security (e.g., related to file and network I/O). I have yet to succeed in using this division of labour, however. A sticky point was that DS wants to do "visualizations", and that often entails web stuff (e.g., ShinyApp), hence a security boundary, etc.
I went through a phase of loving FP and in the process took it too far. A lot of code was satisfying to write but sometimes difficult to read after it was written. I also began to fear the runtime overhead that it creates. Nowadays I'm much more pragmatic: I still use FP, but not exclusively; rather, I like the declarative approach when it can improve maintainability. For example, I discovered that reduce can be used in more situations than at first may seem to be the case. When I see a reduce in code I know exactly what it's trying to achieve. Solving the same problem without a reduce often means it can be harder to deduce the intentions of the code, because of the lack of an instantly recognisable pattern, and thus increased cognitive load. But to me practicality is king, so use FP when appropriate, not because you want to go all in on an ideology.
Loved the talk on software. From a hardware point of view, they should have fewer processor cores and higher core speeds. That day may be coming with optical cores as hardware. Neural networks will change everything: they will change how code is processed, stored and executed. Neural nets will be programmed for functions in a language, optimized by I/O speed. We can dedicate neural nets to their function. We could also do this in current hardware if they wanted to. Government projects no longer want C++ as the source code because of hackers. They want foolproof code that cannot be hacked; Ada was supposed to do this for them. It's quite amazing how a 486 CPU hooked up to an SSD can run code almost as fast as current processors in 16-bit, from a user's point of view. Oh, you're missing the colors, yes!
Thanks, Dave. I think I can put off learning Clojure a bit longer now.
Sir, would you please provide your view on the Rust language?
I feel the design paradigm is very different. It's not very clear to me. But the way you explain things is very good. I am looking forward to a Rust-related talk from you.
Thank you
There's something fascinating about how we fail at grasping formal languages, even the ones we created, like PLs. I think it has to do with fundamental problems in formal reasoning that the sciences can't recognize or are shoving aside as something that's not worth thinking about.
I wouldn't go that far, but I do think it is fascinating that we serialize by speaking, then try to develop artificial "speech-like" systems (written language) that we have trouble re-parallelizing.
We don't fail grasping formal languages. OOP languages are ill-defined. FP languages are not.
We're here, we've just integrated these days. Quietly getting on with work :D
Maybe the hype has died down, though the patterns feel much more prevalent throughout code bases. Best case as any hype passes is to have the elements that organically make sense be absorbed by the relevant parties/sectors.
Personally, I'm still reading books on functional C++, and patterns learned from an old OCaml course still heavily define my style in all languages. So it hasn't gone anywhere for me.
I see that when programming there's no single answer or paradigm that tells you how to provide the best solution.
In many cases you have a situation where the solution involves many different steps, sensor measurements, narrow band data transfers, data sanity checks, data aggregation, data storage, data processing and data analysis. In some of the stages the order of data is important, while in others synchronization of data streams are important. Some languages are better than other and sometimes just a certain feature of one language is enough to make it easier to perform a certain task.
Absolute immutability is something that really messes with your mind if you come from a world where you usually don't use it. In C# and Java you can decide which variables shall be immutable in order to safeguard the behavior of the program. The program will run fine without the declarations, so it's not always used, and unless you have strict ideas about how the best code shall look, it doesn't really matter.
I'm a hardware engineer of many years. I studied multi-processing back in the early '80s and still remain unimpressed. The processor manufacturers, I feel, have stuck more cores into a piece of silicon because they could; it never has been clear to me that there is a performance increase, and the programming complexity and synchronisation issues remain the bottleneck. Yes, I can see how FP could address these issues, but it hasn't happened. Amdahl's law rules 🙂 Better to put additional special functions in there to use up that silicon.
There is a nice conference talk by Richard Feldman about why FP has not become the norm.
I remember the times when we had parallel I/O ports on your PC and for HDDs (parallel SCSI). But now we use SATA, USB 3.0 and PCIe x16 together. It's all about the use case.
The value of FP is not only the parallel performance compared to a single core, but how you compose the software and its cognitive impact. In my opinion, OOP shines in dependency management and encapsulation, but it failed by selling inheritance as a core feature when it isn't one. FP basics help maintain explicit state, avoid globals, and decouple software from time. I like both styles.
I'd love to see people who actually program in both styles make a video about this
I remember programming in lisp back in the university days.
What a parenthesis orgy!!
Thx for some nice context. I have found it's difficult to get some people to understand the phenomenon of diminishing returns when adding CPU cores.
I think you will find that top modern processors can run rather faster than 3 GHz. There is also the issue that modern processors are superscalar and can exploit parallelism inherent in both a given ISA and in the stream of instructions the CPU is required to execute. Thus single-thread performance still continues to improve, albeit at a lower rate than in the past. It is not just a story of more cores, but of the hardware designers exploiting parallelism at a much lower level. There will be a theoretical limit to progress made this way, but it is real and cannot be forgotten.
I abandoned the idea of FP after I benchmarked it years ago. Having said that, OOP has its overheads, so I use it only when it makes sense.
Try again. Benchmarking is hard. You'll often look at small parts instead of the whole.
Plenty of companies are doing "functional first" programming successfully.
Benchmark Rust. It is FP. It has an algebraic type-system.
I'm sure your CRUD app's login button really needs that extra picosecond.
I feel the need to nitpick about the clock-speed claim of 3 GHz. True enough that the previous trend has broken, but a lot of CPUs these days are capable of ~5 GHz sustained if adequately cooled. Even if we ignore "gaming PCs" and look at the pinnacle of reliability, IBM mainframes, their newest CPU is the Telum II, which runs at a fixed 5.5 GHz. So I think 5 GHz would be a better choice as the current practical limit of clock speed.
I like that Dave wears his opinions on his sleeves. I listen to him even if I disagree with him on many points. He seems to have ignored what Michael Feathers said on his own channel.
Is there any data available to explore a hypothesis about defect rates? Although pure FP implementations cannot have any side-effect defects, is there another class of defect that becomes more frequent? Perhaps the constraints of pure lambdas make some code harder to read?
Type-level wankery is all well and good when you’re in your 20s and have time and money on your hands, but the cost-benefit tradeoff of using complex languages for real world systems, where every different developer and team uses a different subset of the language, doesn’t work. Happy to be convinced otherwise if “lean Scala” ever takes off.
"type-level wankery" ... have a like sir.
There are purely functional languages with dead-simple type systems out there.
Amazing take 👏 👌 The only thing you are not taking into consideration is that today the "mainstream" is influenced by "influencers", who are pushing different methodologies, technologies and concepts based on their individual misconceptions and beliefs.
The main Haskell compiler has some very subtle performance-gotchas that trip up even experts, one being related to how it does beta-reductions of let-bindings IIRC, and the side effect of heat is not captured in the I/O monad. Probably Rust is a better fit in most cases where performance is equally important to correctness. But sometimes OCaml or Lua is best, and why not keep an eye on Inko and Zig. Having a large toolbox with not just hammers is a win.
Rust is miles ahead of all those competitors at this point.
Immutability is the big strength of functional programming, but also one of its weaknesses. State ultimately needs to change. It is not just about putting in all the parameters and getting the result 42 out. It can be heavily parallelized because of the immutability, but it easily eats up a lot of memory in the process. In most systems, there is some underlying persistent data that has to be changed, by either a service or a person at a computer or web interface.
Look into Algebraic Effects. You can build mutable variables out of immutable constructs. The point being it can then be type-checked by an algebraic type-system unlike plain mutable variables. Mutability is not needed.
@ can you give a couple of lines of C# code so I understand what you mean?
@ You have type checks in many mutable languages. And in C++ and C# it's very easy to ensure immutability if you need it.
@@oysteinsoreide4323 Look for implementing State in Algebraic Effects using C#. It's an advanced topic currently but it's simple once you get it.
@@oysteinsoreide4323 " @ you have type check in many mutable languages. And in C++ and C# it's very easy to ensure immutability if you need it. "
Those are not type checks technically, they are tag checks. Type checking proper only happens in algebraic type systems. C# and Java have tag systems, not type systems.
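Since the thread asks what "mutable variables out of immutable constructs" might look like, here is the classic state-passing encoding that effect systems build on, sketched in Haskell rather than the C# asked about, and much simplified (a plain encoding, not a full algebraic-effects system):

    -- "Mutable" state as pure functions: the old state goes in, a
    -- value and the new state come out, all visible to the type checker.
    type Counter a = Int -> (a, Int)

    tick :: Counter Int
    tick n = (n, n + 1)

    tickTwice :: Counter (Int, Int)
    tickTwice s0 =
      let (a, s1) = tick s0
          (b, s2) = tick s1
      in ((a, b), s2)

    main :: IO ()
    main = print (tickTwice 0)  -- ((0,1),2)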
If you try OOP in Rust, you will have a very hard time. If you try FP in Rust, you can make your program outcompete implementations in other languages in terms of benchmarks (even C++ sometimes).
Functional programming was never alive to begin with. It's one of those "better but nobody wants it" things. Depressing.
I also think presenting FP as a huge performance boost is wrong. FP can only outshine OO performance-wise in machine learning, which relies heavily on parallelism. It's also impossible to build everything in a purely FP style.
At the beginning I thought I hated OO, but I just had a problem with inheritance and classes. When I got rid of those and used the OO ideology designed by Alan Kay, everything became great.
So I prioritize FP first, then procedural, and OO when needed.
I don't like inheritance. Or the fact that most languages don't let you name your constructors.
I am an F# developer and I use it for domain modelling and readability
Yeah, FP languages are great for domain modeling! I like using Scala for that.
Do you still have to code your F# functions in the right order or else the program won't compile?
@@rotgertesla Your business processes are also in an order. You will not ship a product before you get the money. You will not push the button on your coffee machine before you make sure that there is already some water and some coffee beans in the machine. The thing with the order is one of the most important things in FP. When you feed the output of your function to the input of your next function, you have to make sure that the output of the previous one matches the input of the next one. After you make this work, you then write it with errors/exceptions and missing values in mind (None; null/nothing in other languages).
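That wiring, each output feeding the next input with failures carried in the types, looks roughly like this in Haskell (Order, checkPayment and ship are invented stand-ins for the business steps):

    data Order = Order { paid :: Bool }

    checkPayment :: Order -> Either String Order
    checkPayment o | paid o    = Right o
                   | otherwise = Left "not paid yet"

    ship :: Order -> Either String String
    ship _ = Right "shipped"

    handleOrder :: Order -> Either String String
    handleOrder o = checkPayment o >>= ship
    -- A Left at any step short-circuits the rest of the chain.

    main :: IO ()
    main = print (handleOrder (Order False))  -- Left "not paid yet"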
What happened to functional programming (especially Haskell) is that it took a big hit from the high interest rate environment of 2022-2024 since it was primarily used by small to medium size start ups, for which the macroeconomic environment was brutal.
Readability, understandability, and maintainability by developers have to trump functional (or functional-style) code most of the time, for businesses at least. I've been a dev for 34 years and hate picking up someone else's code that is unfathomable, taking far too long to figure out what it is doing.
Twenty years ago, I led a team that did blackbox reverse engineering to replace 500,000 lines of compiled-language code (about 150 kloc of C, and the rest Tandem Application Language) running on mainframes with 15,000 lines of Python running on Linux servers... and it ran faster, on hardware costing 10x less. That didn't lead me to conclude that Python is faster than C. There is no substitute for understanding the problem.
I find it really unfortunate that people call the ideal case for parallelism "Embarrassingly Parallel"... weirdly, many people get confused by this term, thinking it is somehow a bad thing. It should really be called "Perfectly Parallel": it represents the possibility of infinite speed-up. That's the gold standard. As you rightly indicated with Amdahl's law, FP automates the drudge work of writing parallel code but cannot get around Amdahl's limits. Writing an algorithm with no parallelism, but that you can run infinite copies of, is much better, if you can formulate the problem that way.
The core of the problem is, as a community, we're obsessed with The Technical Solution to basically every problem, while the reality is that the biggest piece of what makes software difficult to work with is the human side. Functional programming, OO, type safety, etc etc; all of it tries to technically solve the fact that people write bad code. None of them work though, because the problem is between the ears and on the delivery schedule, not in the silicon.
Hey mate, functional programming is top-notch. Honestly, those unfamiliar with the bridge between DOP (Data-Oriented Programming) and Functional Programming should get acquainted. The design decisions involved in mitigating problems and delivering great software need further exploration, so that people understand the implications, which are indeed very positive.
Sincerely, this is not a subjective opinion but an objective fact, as I have read about functional OOP and seen the power of this design approach.
I think it kind of correlates with the death of enthusiasm for Rust on the backend. Something something clean code + messy internet + messy business requirements = disaster.
Many claims for automatic parallelization totally ignore the caches in modern CPUs. All the cache-coherent synchronization primitives are simply too slow to parallelize fragments where the synchronization takes more time than the actual execution of the fragment.
And the faster CPUs get, the more expensive minimal synchronization gets when the cost is expressed in potential instructions that could have been executed by the current thread. The higher the IPC, the more expensive synchronization tied to the base clock of the CPU gets. And the synchronization must be tied to the base clock to be synchronous across all cores.
As a result, automatic parallelization nowadays focuses more on SIMD than on whole threads, because SIMD doesn't require a similar level of synchronization: it all happens in one L1 cache on the local core.
The ReactJS library has moved from an OOP to a functional programming style. This has made the code easier to read and understand, leading to better maintainability.
OOP, with its state mutability and mixing of data with business logic, is so clunky to work with.
I've been experimenting with functional programming. It is good, but I have been using a hybrid approach: sometimes a direct function call on an object is more performant, and sometimes a decoupled task or service is more appropriate. I probably don't fully grasp what is and isn't functional programming, but I have enjoyed using messages to keep everything nice, tidy and synchronized.
FP is also about recursion, which may (or may not) improve your understanding of how to solve a problem.
@@jonathanjacobson7012
[...] May or may not be a good method of abstractly representing the problem at hand.
But sure, it is individuals' problems if recursion doesn't represent the problems they want to solve.
Evidently, any tool is the best when it solves the problem better than any other. The problem cannot be more than a statement of a few lines (I need to put this painting on my bedroom wall), not a consolidation of all the problems a company is facing. The problem, like in math, must be defined in a certain space, or constraint set (time, budget, available resources, scalability, short/mid/long-term objectives, …).