What I like is the modern trend of programming languages supporting both imperative and functional styles at the same time! That way you have all the conciseness of functional code available for many parts of your program, but you're also not limited by it (think of the example of just printing hello world being crazily difficult). So I am talking about languages that are actually (extremely) usable in practice. A beautiful example is Rust. 🦀 It does this job as well as possible in my opinion. I am also a developer in C#, which has had functional elements for many years now, known as LINQ and lambda expressions. It does the job as well, and what's special about it is how they made the association between functional programming and SQL-style coding. That makes sense, since SQL is basically functional programming in multiple respects as well, at least when it comes to SELECT queries.
@@arjix8738 i learned javascript from w3schools. in their node.js tutorial: var http = require('http'); i think the only place they use const is in the page where they give examples of const
@@official-obama var is a thing of the past, there is literally no reason to still use var, use let/const instead. let is for making a variable that can be re-assigned, const is for making a variable that can't be re-assigned
I tried to learn Haskell a while ago to explore what functional programming is all about, since Haskell is, as I have heard, THE definition of functional programming. However, when I tried to run my code it said it couldn't find the code file even though I was in the correct directory and specified the correct file name, so I gave up. Maybe I'll give it one more try and see if it works better this time. It's hard to get anywhere with a programming language if the compiler can't even find your project files.
I wrote this in response to one of the comments. But since I find it so neat, here it is as a top-level comment too:
There's this very similar language called Idris. Here's a simple session in its interpreter:
Welcome to Idris 2. Enjoy yourself!
Main> :let main : IO ()
Main> :let main = putStrLn "Hello World!"
Main>
Main> main
MkIO (prim__putStr "Hello World!
")
Main>
Main> :exec main
Hello World!
Main>
See? IO is just a data structure! It represents what the runtime should do when executed. But if you ask the interpreter to show it, it will only evaluate the IO expression itself. No hacks. It really is simple!
The beauty of (pure) functional programming is that conceptually _it's just expressions all the way down._ You can put them inside lists, you can return them from functions and so on. Everything composes - including IO!
Haskell is the language used in the mandatory first-semester CS class at my uni. How tough it was for people to get a grasp of the language was a pretty good indicator of who would drop out over the next few months/years.
i just finished my 3rd semester of computer science and in the 1st semester we learned C and Haskell; guess which one i failed the exam on and still haven't passed yet
It's mostly just cool for learning concepts; here's a page with some decently elegant code: wiki.haskell.org/Blow_your_mind You can use functional programming in every programming language that has functions. You can use Haskell basically like you'd use Python, e.g. scripts for math and data; people also use it a lot for parsing and web servers.
The initial qs example only works if there are no duplicates in the list. Otherwise the pivot will appear both as [p] and in greater. Unless I’m missing something?
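A minimal sketch of the distinction (assuming the video's qs was the usual textbook list-comprehension version; these definitions are illustrative, not the video's exact code): duplicates survive because everything not strictly smaller than the pivot, equal elements included, goes into the right-hand recursion, while a variant that filters with a strict > really would drop them.

qs :: Ord a => [a] -> [a]
qs []       = []
qs (p : xs) = qs [x | x <- xs, x < p] ++ [p] ++ qs [x | x <- xs, x >= p]

-- A version filtering with > would indeed lose duplicates of the pivot:
qsLossy :: Ord a => [a] -> [a]
qsLossy []       = []
qsLossy (p : xs) = qsLossy [x | x <- xs, x < p] ++ [p] ++ qsLossy [x | x <- xs, x > p]

main :: IO ()
main = do
  print (qs      [3, 1, 3, 2])  -- [1,2,3,3]
  print (qsLossy [3, 1, 3, 2])  -- [1,2,3]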
The IO layer looks like Python; for all practical purposes, this is just Python with a worse library ecosystem, but:
-Significantly better performance
-Idiomatic strong, static, but type-inferred typing (the type checker can guess the types you mean, but idiomatically we either write the types out first as a type annotation, sort of like a comment, or use the IDE to generate the type annotation for us)
-Strong preference and support for functional programming as a way to manipulate data, resulting in more expressive, more bug-resistant, and more maintainable code.
A further feature that Haskellers enjoy, but others might not, is that we are only allowed to manipulate IO within the IO layer of code, or within something that wraps the IO layer. This is considered good software design; we are pure by default and have an "imperative shell, functional core" model wherein data transformation code lives away from the imperative code that calls it, leading to greater reusability and comprehensibility.
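A minimal sketch of that "imperative shell, functional core" shape (the function names here are made up for illustration): all of the data transformation is a pure function, and only a thin outer layer touches the outside world.

import Data.Char (toUpper)
import Data.List (sort)

-- Functional core: pure, easy to test, no IO anywhere in sight.
normalise :: [String] -> [String]
normalise = sort . map (map toUpper)

-- Imperative shell: the only part allowed to talk to the outside world.
main :: IO ()
main = do
  contents <- getContents
  mapM_ putStrLn (normalise (lines contents))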
Well, IMO the simplest way to introduce what purely functional programming is about is this: there's no execution and there's no _time._ Your whole program is just a bunch of expressions, built out of a bunch of smaller expressions. They can be evaluated to get a result. IO can be achieved by writing an expression that's almost like a little imperative program.
More precisely: you don't perform IO, you build IO actions by combining (hello, monads) smaller ones, so that in the end your entire program is one huge, complex IO action executed by the runtime. The combining operation builds dependencies between IO actions, which is what orders them (there is no strictly defined order of pure evaluation).
@@user-tk2jy8xr8b Yes, it's the runtime system that can execute IO. It really is kinda external to the language itself. It's just provided by the compiler (or the interpreter).
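The same "IO actions are just values" idea can be shown in Haskell directly (a small illustrative sketch, not from the video): the list below is an ordinary data structure, and nothing is printed until the runtime executes main.

-- Building this list performs no IO at all; the actions are just values.
greetings :: [IO ()]
greetings = [putStrLn "hello", putStrLn "bonjour", putStrLn "hallo"]

-- We can pass the actions around, reorder them, drop some...
-- only the composite action bound to `main` ever gets executed.
main :: IO ()
main = sequence_ (reverse greetings)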
I would rather go into a mine, dig out iron, and build cheap circuits over generations to compensate for my python runtime, than learn this mental illness of a programming language.
I think this is where the true difference between "Comsci" and "Software Engineering / udemy" lies. Many argue that a comsci degree is useless because graduates end up doing the same jobs as software engineers or coders. But a lot also end up doing jobs where these kinds of studies are done. We learn all these functional thingies and monads and lambda calculus and currying and Church encodings and weird stuff. I recommend SICP by MIT (if you're comfortable with Lisp) or the rewrite by NUS (if you'd rather deal with pseudo-JavaScript syntax).
I have an interest in binary numbers. I started with designing an algorithm that converts a decimal representation of a number to a binary representation. For integers it's easy: just repeatedly divide by 2 and take the remainder, either 1 or 0. But for fractions it's much more complicated. I wrote a program in Perl to implement this and another in Python, both imperative languages. They both run successfully. When I tried doing the same in Haskell, I ran into problems. I still haven't finished writing the program.
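For what it's worth, here is one hedged sketch of that algorithm in Haskell (my own illustrative code, not the commenter's program): the integer part comes from repeated division by 2, the fractional part from repeated multiplication by 2, cut off at a chosen precision because many fractions never terminate in binary.

intBits :: Integer -> [Int]
intBits 0 = [0]
intBits n = reverse (go n)
  where
    go 0 = []
    go k = fromIntegral (k `mod` 2) : go (k `div` 2)

-- Multiply the fraction by 2; the integer part of the result is the next bit.
fracBits :: Int -> Double -> [Int]
fracBits prec x = take prec (go x)
  where
    go f
      | f == 0    = []
      | otherwise = let d = f * 2
                        b = floor d
                    in  b : go (d - fromIntegral b)

main :: IO ()
main = do
  print (intBits 13)        -- [1,1,0,1]
  print (fracBits 8 0.625)  -- [1,0,1]  (stops early once the fraction hits 0)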
just some feedback: you speak too fast and half of the words get lost in the process. it's very difficult to understand. i had to go back a few times and mostly guess the words i couldn't hear properly. i think the overall quality would double if you worked on the speaking a bit.
Haskell will finally meet its own milestone when someone in their community is capable of explaining what monads are to someone without category theory knowledge. So far, they've been incapable of doing that for over 20 years and counting. Why is that important? If you don't know what monads are and how to use them, you cannot write any program capable of things like handling user input, a feat which is a pedestrian task in most of the other languages out there.
It's more of a taboo. Haskell has this feature called typeclasses, which allow overloading of values and functions based on the required type. It's basically equivalent to an OOP interface, except somewhat more powerful in that the look-up happens at the call site under type inference. Haskell also has a convention of having these typeclasses map to mathematical concepts; Monoid, for instance, maps to the mathematical notion of a monoid (an operation is a monoid on a set if it is associative, that is to say, 1+(2+3) = (1+2)+3, and an identity element exists for the operation, (1+0) = 1 = (0+1)). The idea is that these interfaces are bound by contracts, which happen to be the mathematical laws associated with the mathematical concepts. That allows portability; i.e., if you know the mathematical laws, and can reason with the mathematical laws, the code becomes transparent. Remember, in the case of Monoid, it's just as simple as knowing that mempty (the identity element) = 0 for addition and that Add 1 <> (Add 2 <> Add 3) = (Add 1 <> Add 2) <> Add 3, where <> is the associative operation defined in Monoid's superclass "Semigroup". Likewise, we can do Mul 1 <> (Mul 2 <> Mul 3) = (Mul 1 <> Mul 2) <> Mul 3, with mempty = 1. For lists, <> = list append; for arrays, <> = array append, with the empty list and empty array being mempty.
The Monad typeclass is based on the category theory concept of a monad in the context of types, specifically functorial types (think, say, Optional or Result in C++/Rust syntax, with an fmap method that obeys the functor laws "fmap (a . b) = fmap a . fmap b; fmap id = id", which mean that an inner loop of an fmap with two chained functions is the same as two outer loops applying the separated functions, and that fmapping id, which just returns the value, is equivalent to just using the id function). This allows compiler optimizations to kick in, because any chained fmap can be converted into an fmap of chained underlying functions, meaning a tight inner loop. Monad adds upon the basic functor capability of fmap by introducing two new methods, pure and join. Let's address join first. In Haskell, an Optional of an Optional is a valid type, i.e., Maybe (Maybe a) in Haskell type syntax. What join does in this context is merge the type layers together, i.e., Maybe (Maybe a) is now Maybe a; if there's a null on the second layer, everything gets replaced by the null. For Vectors (Haskell's convenient array type), Vector (Vector Int) is now Vector Int, i.e., a 2-dimensional array is now a one-dimensional array. Pure goes the other way, because it injects a value into a monadic type; i.e., pure 3 into List is now [3], etc... The special properties are that the wrapper pure creates is "neutral" relative to join; i.e., if I fmap pure onto a monadic value, then join it, it's the same as if I never applied it in the first place. If I pure a monadic value, then join it, it's also the same as if I never applied it in the first place. Moreover, if I have a triply layered monad of the same type, then joining the outer two layers first and then the inner layer gives the same result as joining the inner two layers first and then the outer layer. Or in other words, "a monad is just a monoid in the category of endofunctors" (in this context, Haskell functors are endofunctors).
***
Okay, so this seems awfully abstract and useless; i.e., monad is just a fancy name for flatten or flatmap; we can use flatmap to read the inner value, create a new monadic layer, then compress the monadic layers together as a way of updating them.
Why do we care about monads? Well, when monads were first introduced, they were conceived of as a way to inject imperative computation and sequencing into lambda calculus. It turns out that by using some special syntax, Haskell could conceive of its IO facilities (the IO a type) as a monad. The end result is that you have monadic IO, as well as special syntax to support monadic IO, ending up with something that resembles Python, employing the monadic methods underneath. The funny thing is, people quickly figured out that a ton of other types were also monadic; that is to say, you could manipulate values of these types via the same monadic interface as IO. For instance, we could use the same "do notation" to cover lists, arrays, IO actions (statements), Maybe (Optionals; if you get a null, you short-circuit), as well as build custom types utilizing the monadic syntax, such as the Blaze HTML library for writing HTML. We also found out that we could use the Monad typeclass to handle and control side effects; i.e., we could build a monad transformer stack where each monadic type allows a specific effect.
***
In practical use, it comes down to 3 purposes:
-Providing a common, succinct, and convenient interface for various monadic types (I don't need special syntax, like in JavaScript or Rust, for different monadic types)
-Providing a basis for type-based constraint of side effects (code is only allowed to throw certain side effects)
-Providing a basis for custom procedural languages in Haskell (I can implement a monadic eDSL to make certain coding activities succinct and typesafe)
main :: IO ()
main = do
  putStrLn "What is your name?"
  username <- getLine
  putStrLn ("Hello, " ++ username ++ "!")

is equivalent to

main = putStrLn "What is your name?" >>= (\_ -> getLine >>= (\username -> putStrLn ("Hello, " ++ username ++ "!")))

>>= is flatMap, putStrLn has a value of IO (), \_ and \username are part of lambda syntax, \_ means to discard the underlying value (because we don't care about () / unit / void), getLine is an IO action whose underlying value is a string, in this case the user input, and the last putStrLn refers to the underlying value bound as "username". The flattening is necessary because the instructions live in the IO type and can't be directly accessed, and the flattening sequences the instructions together.
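To ground the Monoid part of the explanation above with real library names: the standard library spells those hypothetical Add and Mul wrappers Sum and Product (in Data.Monoid), and <> and mempty behave exactly as described. A quick sketch:

import Data.Monoid (Sum (..), Product (..))

main :: IO ()
main = do
  -- <> is the associative operation from Semigroup; mempty is the identity.
  print (getSum (Sum 1 <> Sum 2 <> Sum 3))      -- 6,  mempty = Sum 0
  print (getProduct (Product 2 <> Product 3))   -- 6,  mempty = Product 1
  print ([1, 2] <> [3 :: Int])                  -- [1,2,3],  mempty = []
  print (mconcat ["foo", "bar", "baz"])         -- "foobarbaz",  mempty = ""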
Of course you can do I/O in Haskell without understanding monads. But if you do, then you will eventually ask "What does all my I/O code have in common?" And the answer is... It's a bit like this: "Before you have heard about Zen, you see sticks and stones as merely sticks and stones. When you hear about Zen, you will no longer see them merely as sticks and stones. When you grasp Zen, you start to see sticks and stones as they truly are: as sticks and stones."
When I first discovered Haskell and read Learn You a Haskell, I was really blown away. Everything seemed so nice and efficient. Then I discovered the ugly reality. You need countless weird language extensions to actually get a useful language. Half of the ecosystem is packages so esoteric that you'd better have a PhD in type theory. The other half is deprecated.
Eh, if it’s not for you that’s fine. It’s got warts, but I’ve ended up using Haskell personally and professionally more and more for over 10 years now, and on balance, it lets me write very reliable code with reasonable performance, and has given me _immensely_ valuable skills applicable to programming in any other language. “Extensions” are a bit of a misnomer imo. They just declare which language features you’re using, if they were added after the Haskell2010 standard. That’s _extremely_ helpful in unfamiliar code, because you have a searchable keyword that points you toward documentation for any feature you don’t recognise. Still, if you prefer, you can now ignore most of them and treat GHC2021 as the latest standard. Package coverage is solid in some areas and practically nonexistent in others, I won’t argue with that. But for a minority PL, not tied to another ecosystem like Java or .NET, it’s remarkably good.
@@EvincarOfAutumn I believe you. I do think it is a beautiful language. But for now JavaScript does the trick, so why would I take the risk of spending years to master it and maybe come to the conclusion that for the things I do, JavaScript is more than sufficient? It would be interesting to know which kinds of projects benefit from Haskell in such a way that the steep learning curve is worth it.
The world of Rust is exactly the same. Even basic functionality like a simple random number generator is hidden behind a package you have to download off the internet, because of course a systems programming language must have a package and project manager which doesn't work unless you have internet access.
@@BrunodeSouzaLino Having to use packages isn't the problem. After spending roughly similar amounts of time learning Haskell and Rust, I could look at usage examples for most Rust libraries and pretty quickly figure out what was going on. In Haskell, on the other hand, that would require reading up on random sets of language extensions and doing a deep dive into some theoretical math concepts. The latter part isn't entirely unappealing to me, but it's not a good choice when I just want to get something done.
It's primarily a research language whose ideas tend to get adopted into mainstream languages 10 to 20 years later (list comprehensions, typeclasses/concepts/traits, monads in their modern sense, etc. were all born or matured in Haskell). Its restricted paradigm `forces' you to write programs in a certain way, which can help you find nice ideas you wouldn't necessarily think of otherwise. When it comes to actually writing programs with it, I'd not recommend it for applications where performance is critical or ones that require a lot of transput (I/O), but any program that mostly maps from an input to a deterministic output without side effects and that doesn't need to be blazingly fast is a good fit. For less opinionated and more generally applicable languages that still work a lot like this, I'd say try OCaml, Scala or Koka.
@@yjlom I'm not familiar with Koka, but Haskell beats OCaml and Scala when it comes to terseness and expressivity. Scala has the advantage of being on the JVM, while OCaml has a much more pragmatic culture than Haskell. Haskell itself is fairly pragmatic, being filled with overrides so that you can do what you want, and it does get employed by firms that are attracted to its mixture of correctness, speed (roughly around Java's speed, some specialized code can get within 70% of C), and expressivity.
It’s so weird sharing a name with a programming language lol
Have you been asked by people whether you know Haskell or not?
@@rcht958 yeah I have actually! One of my friends was learning it and asked me if I knew any; I'd heard of it but never looked into how to program with it. More frequently I get asked if I was named after the England rugby player James Haskell, or if I like rugby, but neither is true.
Dyou know haskel
The language was named after a person (Haskell Curry) after all
@@ЧингизНабиев-э2г according to a web search, the Pascal programming language was also named after a person (Blaise Pascal) ...
1:30 ayo change ur fire alarm battery
It took me a year to really get started with writing actual programs in Haskell, but once you do, every other programming language feels lacking and incomplete. It's really mind-blowing!
The languages which truly challenge what programming knowledge you already have, like perhaps Rust, Haskell, and Agda, are the ones worth learning once you've got the standard ones like C++ and Python. I would go so far as to say that writing functional and memory-safe code, in a language which makes you do so, will just make you better at it in any language, period.
I would love to press the like btn for this. But I wasn't able to achieve that goal 😢
@@chud-dot-us-dot-gov I've always joked that the best thing I learned to improve my Python was Erlang
Once you've written River Raid, please share it with me, I'd like to test it
That's entirely my experience with Lisp as well. Unfortunately having got really into Lisp first I just can't quite grok Haskell - there's so much syntax going on in Haskell, which my time with Lisps has made me find really uncomfortable. Clojure is very neat though, even if CL is my happy place, so I'm sure I'd like Haskell if it just had more parentheses...
In my first semester of studying CS we had a very unlikeable professor who made us learn Haskell. I basically had no experience in any programming language except a little bit of Python we learned in a precourse to that Haskell one. When I learned Python it was really easy to wrap my mind around, but then getting hit with Haskell made me believe that programming is the hardest thing in the universe and all programmers are geniuses. 3 years later I still have no clue how Haskell works, but I don't think programmers are geniuses anymore, except the ones who are using Haskell.
That's because most programmers work on management systems. So, the hardest structures they encounter are tuples!
Things become real when you do tree-to-tree transformations and they have to be speedy and not kill the system. Suddenly, you need to know much more than the language and libraries! You need to consider writing your own versions of data structures and, of course, know the readily available ones inside out. But it's a thing very few programmers can do and, luckily, very few programmers like that are needed. So, you mostly don't see them!
Consider something that 99% don't: how your code hits the L1 and L2 caches! That requires rewriting array accesses in a whole different way that is not intuitive, and maybe some unrolling (see the sketch after this comment). Or what about writing code in a way that, if you receive some specific data, you can stop all processing except the processing of that specific data, to give it priority and prompt treatment? What about dealing with FPGAs, where "slow" means "took some picoseconds"?
Just to say that you're probably seeing the easiest part of programming.
And besides that, there is the big elephant in the room: "What is really the problem to be solved?" That question, very few standard developers can even answer... because they don't have to!
When I was interviewing web developers, I always asked them to write down a sample HTTP query as plain text. Most developers couldn't, because they used a framework which abstracted that away. Then I asked how they would use HTTP Trailers and they didn't know they even existed! It was part of those uncomfortable questions I used to see how they would behave when they don't know something!
And fear not, I have been on the other side too. In one interview, I was asked to write some PHP code to process CSV files and store them in a database. Instead of going straight to it, I diagrammed a solution as a generic ETL where their flow was just a particular implementation. Then I wrote a few mock classes and called them to see if it was in line with the idea they had and if I needed to change something. Their reply was that the interview was complete and that I could start asap! I was the first candidate to have taken the time to understand the requirements, model them in an extensible way, and then ask them to review the analysis rather than a finished product.
Your average developer will just dive in and correct his course as he discovers new things, and will end up with a mostly subpar, frozen solution. But again, good enough and disposable!
That's also why most programmers are not geniuses, no need at that level! GPT or any AGI will easily replace those, though!
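A small sketch of the cache-locality point from the comment above (illustrative only; it assumes the widely used vector package): both functions compute the same sum over a row-major matrix, but the second one strides through memory and tends to be much slower on large inputs.

import qualified Data.Vector.Unboxed as VU

-- A 2-D matrix stored row-major in one flat unboxed vector.
rowMajorSum, colMajorSum :: Int -> Int -> VU.Vector Double -> Double
rowMajorSum rows cols m =
  sum [ m VU.! (r * cols + c) | r <- [0 .. rows - 1], c <- [0 .. cols - 1] ]
-- Same arithmetic, but the inner index jumps by `cols` elements each step,
-- touching a new cache line on almost every access.
colMajorSum rows cols m =
  sum [ m VU.! (r * cols + c) | c <- [0 .. cols - 1], r <- [0 .. rows - 1] ]

main :: IO ()
main = do
  let n = 1000
      m = VU.generate (n * n) fromIntegral
  print (rowMajorSum n n m)
  print (colMajorSum n n m)   -- same value, worse access pattern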
@@programaths A very fine analysis indeed... I learnt something today, thanks!
i didn't expect to randomly see someone from the uni in a city _that doesn't exist_ lmao
@@llunii damn what a coincidence and even funnier that you know just by what I wrote xD
I had to make a lexer and parser in Haskell for some basic mathematical functions (with variables), which in the end solved the equation when possible. It was very hard, but in the end also very satisfying. What made it even better were the very elegant solutions with pattern matching, list comprehensions, etc.
3:26 don't forget to replace the batteries in your smoke detectors!
People usually think that FP is something new compared to the imperative programming paradigm, but Church discovered the lambda calculus (which is effectively a programming language by itself) in 1936, the same year Turing invented his machine
And Lisp is the second ever programming language after Fortran (not counting things like assembly etc.).
Well functional programming with side-effects, i.e. using monads or effect handlers, is a relatively new thing
I really really like your graph animations.
The fun part about Haskell is that, due to its rigor, there are no pieces of the language that are just slapped on to add a feature. At first I thought "oh, those -> type arrows are just a nice way to tell the compiler things. Odd syntax." Nope, those things are the function type constructor (->) with the kind Type -> Type -> Type.
The sheer dedication to keeping the entire language fully in the abstract realm while still compiling to safe and efficient code is remarkable. Given how Haskell does its very best to stop you from describing exactly how to do things, its ability to compete with the other compiled languages is astonishing.
Most "weird" things from haskell are just basic things from math. `A -> B` is how you'd represent a map/function in set theory. It confuses me that people have CS degrees and aren't already familiar with it
records
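To illustrate the A -> B point above: Haskell type signatures read almost exactly like the f : A -> B notation from set theory, and since (->) associates to the right, "multi-argument" functions are just nested arrows. A tiny sketch (the function names are made-up examples):

-- The "f : A -> B" of set theory.
square :: Integer -> Integer
square n = n * n

-- Integer -> Integer -> Integer is really Integer -> (Integer -> Integer):
-- a function returning a function (currying).
addThenSquare :: Integer -> Integer -> Integer
addThenSquare x y = square (x + y)

main :: IO ()
main = print (map (addThenSquare 1) [1, 2, 3])   -- [4,9,16]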
I had a very hard time learning Haskell until I got into Scala first. It made the transition from imperative to declarative much easier as you get to keep some OO training wheels.
Declative approach with its pointless declaration doesn't work! Just look at the UN 🤪
I second this approach as well! I think if you just started programming fitting your mind to the functional approach is easier than if you did imperative/OO first then functional. For the latter ppl (which is the majority I guess), Scala is the best way into functional programming!
I want to move on to F# once I know enough Python. It seems to be "functional enough" to make me work under the functional paradigm as much as possible while removing the dreaded parts (mostly the cumbersome IO cheats, courtesy of F#'s functional impurity).
I don't know the difference between imperative and declarative anymore.
Imperative = you tell the computer what to do on a memory-near level.
Declarative = you tell the computer what to do only at a higher abstraction level over the hardware. Pro: the computer can figure out WHEN to do it best. Con: you don't know what the computer actually does at any given point in time.
Other approach:
Imperative = the what, when and how.
Declarative = the what and how.
Yes?
I found the opposite. When I was using Scala I kept falling back to OO, whereas Haskell forced the functional way, so I found it a lot easier to learn because of that.
Change your fire alarm battery
Haskell was very difficult to learn, but it gave me a new perspective on writing high-level code like nothing I've done before.
Making the switch back to OOP when working on a large, poorly maintained C++11 code base was probably just as hard when tasked to make it "run faster and multi-threaded".
Some of the issues include:
Everything touches everything.
Everything is in a class and variables might as well be considered global.
And a laundry list of bad coding practices.
Rewrite it in Rust /s
@@mskiptr You say it as a joke, but the thing is, I'm gonna be working on a medical robot soon, but I'll be working on the UI and everything is/will be in C++.
I'd ask for permission to add Rust as a dependency, but Rust UI support isn't quite there yet.
@@playerguy2 Cool! I don't know that much about Rust UI toolkits, but out of curiosity: are you targeting some external technology (like QT, GTK, …) and need bindings or are native Rust libraries (like iced or relm) an option too?
@@playerguy2 btw, your comment seems to be gone
@@mskiptr Switching the programming language doesn't help with bad coding practices.
This is amazing!! Thank you
Wow this looks like my kind of channel! I’m glad I just discovered this. Thanks Tony.
Cant wait for the next videos; subbed
Can't wait for the next videos, a fascinating topic. You've earned a subscription 🙏
It's interesting to note that the Turing machine was invented to reason about computability and not as a model for computation. Unfortunately the von Neumann architecture took it for the latter and now it seems we're stuck with it :(
If you couldn't get a computer until they made one with infinite memory we'd still be waiting.
Someone might have already mentioned it in the comments, but there was a bit of a problem with your example of declarative vs imperative. Both loop types you showed were declarative. Both are what's known as a "for each" loop, just using different syntax. Examples of imperative loops (sometimes also called "raw loops") are the classic for(int i = a; i < b; i++) loops.
Fascinating video!
Also, remember to replace your fire alarm. The beeps mean that it is no longer reliable.
I can't wait to see more in this series, thanks for laying it out so well.
wow! this is so cool!! nice video
I never got to look into Haskell, and now I'm more intimidated but also way more interested haha
Very nice video
learning lambda calculus and implementing some of the crucial FP functions (filter, map, fold, etc) helped me a ton in learning haskell
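In that spirit, here is one way those classics come out when you write them yourself (primed names so they don't clash with the Prelude; an illustrative sketch, not the only way):

map' :: (a -> b) -> [a] -> [b]
map' _ []       = []
map' f (x : xs) = f x : map' f xs

filter' :: (a -> Bool) -> [a] -> [a]
filter' _ [] = []
filter' p (x : xs)
  | p x       = x : filter' p xs
  | otherwise = filter' p xs

foldr' :: (a -> b -> b) -> b -> [a] -> b
foldr' _ z []       = z
foldr' f z (x : xs) = f x (foldr' f z xs)

main :: IO ()
main = do
  print (map' (* 2) [1, 2, 3])        -- [2,4,6]
  print (filter' even [1 .. 10])      -- [2,4,6,8,10]
  print (foldr' (+) 0 [1, 2, 3, 4])   -- 10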
There's a much less vague way of explaining the difference. Imperative is all about steps in a process whereas declarative is all about defining invariants. So for example, in an imperative language `a = b + c` just says, "at this particular moment in time, a is equal to b plus c"; it doesn't say anything more than that. In a declarative language, it means that "a is and will always be equal to b plus c". In Haskell, since b and c can't change, the invariant will always hold, but even in Excel (another declarative language) where (cell) b and c *can* change, it is still the case that a will always equal b plus c because the runtime itself will update a every time b or c change.
I wasn't too sure how to explain this; people have described it basically as "time doesn't exist." I tried to make Haskell sound more useful to someone who doesn't know it, and I felt that just saying variables can't change isn't very impressive.
I think what I said about defining order also means essentially the same idea.
@@TonyZhang01 Your answer was specific to Haskell and since that is what you are explaining, it's good. I just wanted to point out that "declarative" doesn't have to mean immutable. That's just Haskell's way of making itself declarative. Either way, it's much better than the old "what" vs "how" dichotomy.
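A tiny Haskell illustration of the "invariant" reading of = discussed in this thread (the names are arbitrary): the definitions can appear in any order, because a isn't assigned at some moment in time, it is defined to be b + c.

-- a *is* b + c, always; not "a becomes b + c now".
a :: Int
a = b + c

b, c :: Int
b = 10
c = 32

main :: IO ()
main = print a   -- 42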
The algorithm has blessed this channel.
Just saw that you have something like 50 videos uploaded; if they are all half the quality of this one then this is a tremendous find!
You gained a new subscriber : )
haskell is so cool. I had to learn it for one class and I don't have a reason to use it anymore but I really wish I did, even writing simple stuff was awesome
Haskell is so interesting. I loved my Function Programming classes in uni.
Yeah it's really interesting and fun because you have to "think around the corner"
Cool vid! I dipped my toes into a tiny bit of Haskell for a school project a few days ago. Also, change your smoke alarm battery.
What's with the alarm
@@youarethecssformyhtml it's a high pitched chirping sound that happens occasionally in the background of the audio.
0:50 _code for fibonacci_
5:29 _"variables are just [labels] for an expression"_
5:45 _"the equal sign means what it means in maths"_
i dont really understand if ppl find haskell difficult bcz it _is_ difficult, or just due to the reason they learnt other languages first....
tbf the code for fib at 0:50 has horrible complexity (too slow even for testing) unless you do an optimizing compilation
a more workable implementation of fib would look like this, with fib' not exported:
> fib' 0 = (1, 0)
> fib' x = let r = fib' $ pred $ x in
> (fst r + snd r, fst r)
> fib = snd . fib'
still shorter and more readable than the C version, but not by so much, and if we want strict evaluation and tail calls then it gets even weirder
@@yjlom there is a much prettier impl: fib = 1 : 1 : zipWith (+) fib (tail fib)
9 out of 10 cases it's because they learned Java or C# first. Some don't even understand plain old procedures.
@@yjlom Would the optimizing compilation here be to use automatic memoisation?
It's definitely about background. Most people struggle when they learn to program for the first time because they're thinking about solving problems in a completely new way. After the first time, learning new languages is (usually) not that hard because they operate with mostly the same principles.
Learning functional programming is more like learning to code again for the first time. You're learning to solve problems in a totally new way. Neither functional programming generally, nor Haskell specifically, is inherently harder than the alternatives. It's just about what your expectations are for difficulty. When you learn a new language, you expect to learn a new syntax which allows you to express the same ideas. Learning Haskell is not that. It involves learning a new way to solve problems and a syntax for expressing it, just like learning to code for the first time.
"what the equals sign means in math" - oh sweet summer child... the '=' sign in math is probably the most overloaded, pernicious, imprecise concept that rots the whole discipline of math at it's core.
Wow. Award for deepest statement on the internet of the day. Seriously. Love it.
I started Haskell a year ago and now I don't like other programming languages. Composition is really powerful and although nobody knows what a monad is, you don't need to know this to write code with (>>=).
>λ=
monads are like burritos
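A small example of the composition and (>>=) point a few comments up (the helper names are made up): ordinary functions chain with (.), and steps that might fail chain with (>>=), no monad theory required.

import Text.Read (readMaybe)

-- Plain function composition.
shout :: String -> String
shout = (++ "!") . reverse

-- A step that can fail.
halveEven :: Int -> Maybe Int
halveEven n = if even n then Just (n `div` 2) else Nothing

parseAndHalveTwice :: String -> Maybe Int
parseAndHalveTwice s = readMaybe s >>= halveEven >>= halveEven

main :: IO ()
main = do
  putStrLn (shout "olleh")            -- "hello!"
  print (parseAndHalveTwice "12")     -- Just 3
  print (parseAndHalveTwice "7")      -- Nothing (7 isn't even)
  print (parseAndHalveTwice "oops")   -- Nothing (not a number)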
I would love for this to be slowed down, stepping through the Haskell way of processing the examples. I know there are a ton of tutorials out there, but there are also a ton of historical videos on Haskell. A combination of overview and basic syntax in steps would be awesome.
I agree that Haskell feels more like math than a programming language. As a software developer, I decided I could give it a try a year ago and ended up diving into lambda calculus, category theory and general algebra.
5:00 I don't know if the "returing a value" was made on purpose for the joke's sake, but if it was, it's very funny
and also thank you for the explanation. I've already touched a bit of functional programming with Racket and Scala, and I like it because I find it very elegant. I think I'll try to learn more
Learning Haskell was pretty straightforward for me at first until I got to monads. The books I had on Haskell didn’t explain what a monad is or how it could be used. They said the concept was derived from a branch of mathematics called Category Theory. I figured it wouldn’t help to dive into a refined branch of higher mathematics which mightn’t make things clearer.
Looking on the web I can see I am not alone in being puzzled about monads.
a monad is basically a sensible definition of flatmap.
a good example of a type with a monad is an Option (or a Maybe or an Optional, whatever you want to call it). you can think of this as being a box which either contains something or is empty. it's useful for expressing the results of a computation that may fail, like division (anything divided by zero is an error).
now let's say you want to do something with the contents of the box, if they exist. like, you want to add 2 to the result of dividing x by y. for this, an option has a "map" function and it returns a box with either nothing in it (in case y was 0) or x/y + 2 in it.
however, what happens if you want to perform a function on the contents of the box that returns another box itself? like finding the square root of the contents. if the contents of the box is negative, this will also error. so if you just use map, the return type would be a box containing either nothing or another box which itself either contains a value or nothing. What flatmap (the monad) does is unpack the box, so the box returned by this chain is either empty or contains sqrt(x/y + 2).
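Here is that box example written out in Haskell (safeDiv and safeSqrt are hypothetical helpers for illustration): fmap stays inside one box, while (>>=), the "flatmap", keeps the boxes from nesting.

safeDiv :: Double -> Double -> Maybe Double
safeDiv _ 0 = Nothing
safeDiv x y = Just (x / y)

safeSqrt :: Double -> Maybe Double
safeSqrt x
  | x < 0     = Nothing
  | otherwise = Just (sqrt x)

-- sqrt (x/y + 2), if every step succeeds.
calc :: Double -> Double -> Maybe Double
calc x y = fmap (+ 2) (safeDiv x y) >>= safeSqrt

main :: IO ()
main = do
  print (calc 8 2)      -- Just 2.449... (sqrt 6)
  print (calc 8 0)      -- Nothing: division by zero
  print (calc (-18) 2)  -- Nothing: negative square root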
In my experience, exploring all of the standard use-cases (option, list, either, state, writer, etc.) at a deep level helps you build toward a deep "a-ha!" moment, where you'll be able to start discovering (and leveraging) the pattern in your own projects.
Also, identifying functors in the wild can be a really useful step in building up an intuition for working with monads.
For anyone struggling with monads in Haskell, I highly suggest investing some time in learning Category Theory. It helped me a lot to finally see the forest for the trees.
You have a great video style, I really like it 💜💜
Great vid! Tsoding has some great videos on Haskell too
Dude, this channel is ridiculously small compared to the quality of the video and the clearness of the explanation. Really cool, keep it up
Also, would you discourage a beginner from learning this instead of another language? The way you explained it makes it seem more intuitive for me, and different from the couple of languages I've learnt anything about (it's just for personal interest as a hobby, so I don't need to become an expert or have actual programming skills to program shit in my day-to-day life or work)
For me at least, I learned the other languages first, and learning this was painful, because I didn't get what the point of it was; but it's supposed to be more mathematical and less weird than other languages.
Yeah, learning it is cool, but if you want to actually do things, you can probably just use Python. Haskell doesn't have many libraries for most stuff (the libraries it does have are really good tho).
One day I looked into writing an implementation of SHA256 in pure Haskell. I gave up, and found someone else had done it. It was way longer, less comprehensible, and probably much harder to maintain than the C version. Not to mention slower.
and you can't use Haskell for MCUs. Might just go with Rust and use functional programming there.
I don't think you have to worry too much - you're not going to find any useful code in industry written in Haskell. I think it's mostly for hippies to drop a few tabs of acid and have deep discussions about the philosophy of CS. All "coders" do these days is glue bits of python together they scraped off the web, that uses the most heavyweight libraries as possible to do the simplest things. The more libraries you include, the more you're "winning".
@@gorak9000 Doesn't sound like you've worked a single day as a developer in any professional setting.
@@ruukinen Heh, yes, as I sit here with source open for 3 different applications adding new features into all of them from user's requests...
@@gorak9000 "All "coders" do these days is glue bits of python together they scraped off the web, that uses the most heavyweight libraries as possible to do the simplest things. The more libraries you include, the more you're "winning"."
Like I said. Not professionally. You can get away with that with your own toy projects.
bro get new batteries for your fire alarm
Fun Freudian slip around 5:00, "re-Turing"
According to this video it seems like a language like Haskell may even be easier for new programmers to learn (this is just speculation, I know nothing about Haskell; it's only a thought upon watching this video). When I learned programming in high school, many students were confused by the difference between math expressions and the programming language that we were using (Pascal).
Haskell seems closer to math than the languages that we were using.
The elevator pitch for Haskell is less about the language and more about what it does for you as a developer. If you've never been forced to explore pure functional programming before and you've only used procedural or object-oriented languages with no strong focus on functional, it's likely that Haskell will break your brain in the same way Javascript or Python might have when you first learned programming.
Once you wrap your brain around how to write programs in Haskell, and you recognize the benefits and explore them, you find yourself writing code differently and viewing it differently in other languages.
Procedural programming models the Turing machine. Treat each statement in procedural code as a node in the automaton, connected to the next line, and control flow statements like if/else and loops as branches based on the value read from the tape. The tape, of course, is your memory in the heap and stack.
Functional programming, on the other hand, models lambda calculus. Everything is an expression, and the only expressions, just like in lambda calculus, are function application and normalized values that cannot be simplified further. Function calls are thus like rewrite rules, or beta reduction. By definition, functions have to be pure, and since there's no notion of a tape or memory, there are no side effects. We perform side effects in Haskell ad hoc by wrapping them in a context of IO.
So really, procedural vs functional is statements vs expressions; Turing machines vs lambda calculus (a rewriting system). There's a tiny example of the "everything is an expression" idea at the end of this comment.
OOP, however, is an indication that the Turing model is a poor fit for human programmers. We need composability, modularity and code reuse, and the Turing model does not easily allow that. The lambda calculus model does! Moreover, lambda calculus is a form of deductive system, so there's an easy way to make computation correspond to semantic meaning. This leads to type theory, where we can make guarantees about our computation since we add semantic meaning to it.
The only times the procedural paradigm has given ergonomics to the programmer are when concepts from the functional paradigm were brought over: expressions, types, composition, to name a few. If we were to strictly comply with the Turing machine model, we would be writing only assembly.
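A minimal sketch of that "everything is an expression" point in Haskell (classify is just a name I made up):
classify :: Int -> String
classify n = if n < 0 then "negative" else if n == 0 then "zero" else "positive"
-- if/then/else is itself an expression with a value, not a control-flow statement,
-- so it can sit anywhere a value can, e.g. map classify [-3, 0, 7]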
Looking forward to the next video. Subbed
good video, I wanna get back into SICP and Scheme before diving into Haskell
hi gingy
@@cno9984 Sup
Very nice, looking forward to the other videos!
Also, I scrolled through your feed. You deserve more subscribers! Nice Job.
I love this recursive fibonacci definition: fib = [0, 1] ++ zipWith (+) fib (drop 1 fib)
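It's lazy and self-referential, so you can take any finite prefix. A quick check in GHCi (after defining fib as above):
take 10 fib   -- [0,1,1,2,3,5,8,13,21,34]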
High quality video, thanks.
I declare and am imperative about the fact that this is a good video.
What I like is the modern development of programming languages to support both imperative and functional programming styles at the same time!
In this way you have all the conciseness of functional code available for many parts of your program, but you're also not limited by it (think about the example of just printing hello world being crazily difficult).
So I am talking about languages that are actually (extremely) usable in practice. A beautiful example is Rust. 🦀 It does this job as perfectly as possible in my opinion.
I am also a developer in C#, which has had functional elements for many years now, known as LINQ and lambda expressions. It does the job as well, and what's special about it is how they made the association between functional programming and SQL-style coding.
Makes sense, since SQL is basically functional programming in multiple aspects as well, at least when it comes to SELECT queries.
And as for Javascript, it turns out it's just masochism.
var print = console.log;
function main() {
print("What's wrong with Javascript?");
return 0;
}
main();
@@official-obama I was just tongue-in-cheek wrt. JavaScript.
True masochists stick to some C++ implementation of the 1986 vintage, of course. ;^)
@@official-obama WHY VAR? USE CONST LIKE A NORMAL PERSON
@@arjix8738 i learned javascript from w3schools.
in their node.js tutorial:
var http = require('http');
i think the only place they use const is in the page where they give examples of const
@@official-obama var is a thing of the past, there is literally no reason to still use var, use let/const instead.
let is for making a variable that can be re-assigned, const is for making a variable that can't be re-assigned
Idk why, but I am stuck with C pretty much... I can't just learn another language, I tried several times, but I just give up.
WE GOT ANOTHER ONE TO FALL FOR IT BOYS
Returing instead of returning at 5:03
Nice vid btw
I tried to learn Haskell a while ago to explore what functional programming is all about, as Haskell is, from what I've heard, THE definition of functional programming. However, when I tried to run my code it said it couldn't find the source file, even though I was in the correct directory and specified the correct file name, so I gave up. Maybe I'll give it one more try then. See if it works better this time. Hard to get anywhere with a programming language if the compiler can't even find your project files.
Make sure file names are an absolute path,
or a relative path beginning with ./
I wrote this in response to one of the comments. But since I find it so neat, here it is as a top-level comment too:
There's this very similar language called Idris. Here's a simple session in its interpreter:
Welcome to Idris 2. Enjoy yourself!
Main> :let main : IO ()
Main> :let main = putStrLn "Hello World!"
Main>
Main>
Main> main
MkIO (prim__putStr "Hello World!
")
Main>
Main>
Main> :exec main
Hello World!
Main>
See? IO is just a data structure! It represents what the runtime should do when executed. But if you ask the interpreter to show it, it will only evaluate the IO expression itself. No hacks. It really is simple!
The beauty of (pure) functional programming is that conceptually _it's just expressions all the way down._ You can put them inside lists, you can return them from functions and so on. Everything composes - including IO!
Haskell is the language used in the mandatory first-semester CS class at my uni. It was a pretty good indicator of who would drop out over the next few months/years, based on how tough it was for them to get a grasp of the language
Is this ANU by any chance?
@@mursyidelric4734 Nope
Awesome!
I just finished my 3rd semester of computer science, and in the 1st semester we learned C and Haskell. Guess which one I failed the exam on and still haven't done yet
Really cool video
a monad is a monoid in the category of endofunctors
This is a really good video
Have you tried Prolog?
Why am I getting more and more FP and math themed recommends? Can't I watch videos without being reminded to work on our CHL isomorphism project?!
0:42 what's the name of the music? 🥹
It's just something I found in the youtube sound libraries, "Talkies" by huma huma
@@TonyZhang01 thank you ☺️🙏
What cool things can you do with Haskell?
Applications? Visuals?
It’s mostly just cool for learning concepts, here’s a page with some decently elegant code: wiki.haskell.org/Blow_your_mind
You can use functional programming in every programming language that has functions. You can use Haskell basically how you'd use Python, like scripts for math and data; people also use it a lot for parsing and web servers.
I kinda like Haskell tho, I'm currently learning how to do tail recursion now
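Something like this, if I've understood it right (sumAcc is just a name I made up):
sumAcc :: [Int] -> Int
sumAcc = go 0
  where
    go acc []       = acc
    go acc (x : xs) = go (acc + x) xs   -- the recursive call is the entire result: tail position
-- (in practice you'd also want the accumulator strict, e.g. via foldl', because of laziness)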
You know that recursion is an emergency technique, right? It's not a general problem solving strategy. ;-)
@@lepidoptera9337 yeah yeah, just finished learning functional programming :)
The initial qs example only works if there are no duplicates in the list. Otherwise the pivot will appear both as [p] and in greater. Unless I’m missing something?
should be fine: the p from [p] is not in xs
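for reference, the usual definition looks something like this (assuming the video's version keeps equal elements on the >= side):
qs :: Ord a => [a] -> [a]
qs []       = []
qs (p : xs) = qs [x | x <- xs, x < p] ++ [p] ++ qs [x | x <- xs, x >= p]
-- duplicates of p land in the >= list and get sorted there; only the single p taken
-- off the front shows up as [p], so nothing is dropped or doubled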
What would be a good usecase for this? Sounds like if you need files or input / output you're screwed.
Nah, that isn't accurate. Basic I/O in Haskell looks like it does in an imperative language:
main :: IO ()
main = do
  putStrLn "What is your name?"
  name <- getLine
  putStrLn ("Hello, " ++ name ++ "!")
The IO layer looks like Python; for all practical purposes, this is just Python with a worse library ecosystem, but:
-Significantly better performance
-Idiomatic strong, static, but type-inferred typing (the type checker can guess the types you mean, but idiomatically we either write the types out first as a type annotation, sort of like a comment, or use the IDE to generate the type annotation for us)
-Strong preference and support for functional programming as a way to manipulate data; resulting in more expressive, more bug-resistant, and more maintainable code.
A further feature that Haskellers enjoy, but others might not, is that we are only allowed to manipulate IO within the IO layer of code, or within something that wraps the IO layer. This is considered good software design; we are pure by default and have an "imperative shell, functional core" model wherein data transformation code lives away from the imperative code that calls it, leading to greater reusability and comprehensibility.
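A tiny sketch of that shape (shout is a made-up name, not anything standard):
import Data.Char (toUpper)

shout :: String -> String            -- pure functional core: plain data in, plain data out
shout name = "HELLO, " ++ map toUpper name ++ "!"

main :: IO ()                        -- thin imperative shell that calls into it
main = do
  name <- getLine
  putStrLn (shout name)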
Well, IMO the simplest way to introduce what purely functional programming is about is this:
There's no execution and there's no _time._ Your whole program is just a bunch of expressions, built out of a bunch of smaller expressions. They can be evaluated to get a result.
IO can be achieved by writing an expression that's almost like a little imperative program.
More precisely: you don't perform IO, but you build IO actions by combining (hello, monads) smaller ones, so that in the end your entire program is one huge, complex IO action executed by the runtime. The combining operation builds dependencies between IO actions, thereby ordering them (there is no strictly defined order of pure evaluation)
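For instance (greetings is just an illustrative name):
greetings :: [IO ()]                 -- an ordinary list of IO actions; nothing runs yet
greetings = [putStrLn "hi", putStrLn "hello", putStrLn "hey"]

main :: IO ()
main = sequence_ greetings           -- combined into one big action for the runtime to execute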
@@user-tk2jy8xr8b Yes, it's the runtime system that can execute IO.
It really is kinda external to the language itself. It's just provided by the compiler (or the interpreter).
Why do I keep hearing a high pitched beep randomly throughout the video?
haha, you are really something!
Haskell was the first language I learned (cs 101, ut austin, 1996, lol! ...shout out to dr. ham richards!)
Hey man! you should change ur smoke alarm battery. Thank you!
I’m actually surprised that so many people heard it, but yeah I will be changing it
@@TonyZhang01 it's okay bro. there are rap songs with millions of views where there are smoke detector beeps in.
thats crazy
Ok now try alien language,
Lisp
LOL. Every programming language is math and nothing but math. In Haskell it is just more visible.
Monoids, son.
Pure math ? If you listen carefully you can hear APL laughing in the background !
As a non-native speaker, I struggled a bit with understanding his accent. Where does he come from?
It's just math?
...
Always has been
I would rather go into a mine, dig out iron, and build cheap circuits over generations to compensate for my python runtime, than learn this mental illness of a programming language.
I think this is where the true difference between "comsci" and "software engineering / Udemy" lies.
Many argue that a comsci degree is useless because graduates end up doing the same jobs as software engineers or coders. But a lot also end up doing jobs where these kinds of studies are done. We learn all these functional thingies and monads and lambda calculus and currying and Church functions and weird stuff
I recommend SICP by MIT (if you're comfortable with Lisp) or the rewrite by NUS (if you'd rather deal with pseudo-JavaScript syntax)
functor fetish
interesting
I have an interest in binary numbers. I started by designing an algorithm that converts a decimal representation of a number to a binary representation. For integers it's easy, just continually dividing by 2 and taking the remainder, either 1 or 0. But for fractions it's much more complicated. I wrote a program in Perl to implement this and another in Python, both imperative languages. They both run successfully. When I tried doing the same in Haskell, I ran into problems. I still haven't finished writing the program.
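For the integer half, a sketch in Haskell might look like this, assuming non-negative input (toBinary is a made-up name):
toBinary :: Integer -> String
toBinary 0 = "0"
toBinary n = go n ""
  where
    go 0 acc = acc
    go k acc = go (k `div` 2) (show (k `mod` 2) ++ acc)   -- each remainder becomes the next digit
-- the fractional part goes the other way: repeatedly multiply by 2 and peel off the integer digit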
Interesting
I barely understand what you say, man, not because of the terms but because of your pronunciation.
I dont get it
Are you really a high school student?
Yeah
Why are you asking? Lol
It ain't that rare
Just some feedback: you speak too fast and half of the words are lost in the process. It's very difficult to understand. I had to go back a few times and mostly guess the words I couldn't hear properly.
I think the overall quality could double if you work on the speaking a bit.
Lisp - not the programming language
Haskell will finally meet its milestone when someone in its community is capable of explaining, to someone without category theory knowledge, what monads are. So far, they've been incapable of doing that for over 20 years and counting. Why is that important? If you don't know what monads are and how to use them, you cannot write any program capable of things like handling user input, a feat which is a pedestrian task in most other languages out there.
It's more a taboo.
Haskell has this feature called typeclasses, which allow overloading of values and functions based on the required type. It's basically equivalent to an OOP interface, except somewhat more powerful in that the look-up is driven by type inference at the call site.
Haskell also has a convention of having these typeclasses map to mathematical concepts; Monoid, for instance, maps to the mathematical notion of a monoid (an operation forms a monoid on a set if it is associative, that is to say, 1+(2+3) = (1+2)+3, and an identity element exists for the operation, (1+0) = 1 = (0+1)).
The idea is that these interfaces are bound by contracts, which happen to be the mathematical laws associated with the mathematical concepts. That allows portability; i.e., if you know the mathematical laws, and can reason with the mathematical laws, the code becomes transparent. Remember, in the case of Monoid, it's just as simple as knowing that mempty (the identity element) = 0 for addition and that Add 1 <> (Add 2 <> Add 3) = (Add 1 <> Add 2) <> Add 3, where <> is the associative operation defined in Monoid's superclass "Semigroup". Likewise, we can do Mul 1 <> (Mul 2 <> Mul 3) = (Mul 1 <> Mul 2) <> Mul 3, with mempty = 1. For lists, <> = list append; for arrays, <> = array append, with the empty list and empty array being mempty.
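In code, with the standard library's Sum and Product wrappers playing the role of Add and Mul, a rough sketch of those laws (monoidChecks is a throwaway name; every entry evaluates to True):
import Data.Monoid (Sum(..), Product(..))

monoidChecks :: [Bool]
monoidChecks =
  [ Sum 1 <> (Sum 2 <> Sum 3) == ((Sum 1 <> Sum 2) <> Sum 3 :: Sum Int)   -- associativity
  , Sum 5 <> mempty == (Sum 5 :: Sum Int)                                 -- identity is 0 for addition
  , Product 2 <> mempty == (Product 2 :: Product Int)                     -- identity is 1 for multiplication
  , [1, 2] <> [3] == ([1, 2, 3] :: [Int])                                 -- for lists, <> is append and mempty is []
  ]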
The Monad typeclass is based on the category theory concept of a monad in the context of types, specifically functorial types (think, say, Optional or Result in C++/Rust syntax, with an fmap method that obeys the functor laws of "fmap (a . b) = fmap a . fmap b; fmap id = id", which means that "if I have an inner loop of an fmap with two chained functions, it's the same as having two loops on the outside applying the separated functions, and if I fmap id, which just returns the value, it's equivalent to just using the id function"). This allows compiler optimizations to kick in, because any chained fmap can be converted into an fmap of chained underlying functions, meaning a tight inner loop.
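Concretely, on Maybe (law1 and law2 are throwaway names; both evaluate to True):
law1 :: Bool
law1 = fmap ((+ 1) . (* 2)) (Just 10) == (fmap (+ 1) . fmap (* 2)) (Just (10 :: Int))

law2 :: Bool
law2 = fmap id (Just 'x') == id (Just 'x')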
Monad adds upon the basic functor capability of fmap by introducing two new methods, pure and join. Let's address join first. In Haskell, an Optional of an Optional is a valid type, i.e., Maybe (Maybe a) in Haskell type syntax.
What join does in this context is merge the type layers together, i.e., Maybe (Maybe a) is now Maybe a; if there's a null on the second layer, everything gets replaced by the null. For Vectors (Haskell's convenient array type), Vector (Vector Int) is now Vector Int, i.e., a 2-dimensional array is now a one-dimensional array.
Pure goes the other way, because it injects a value into a monadic type; i.e, pure 3 into List is now [3], etc...
The special property is that the wrapper pure creates is "neutral" relative to join; i.e., if I fmap pure into a monadic value, then join it, it's the same as if I had never applied it in the first place. If I pure a monadic value, then join it, it's also the same as if I had never applied it in the first place.
Moreover, if I have a triply layered monad of the same type, if I join the outer two layers first, then the inner layer, the result will be the same as if I joined the inner two layers, then the outer layer.
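Concretely, with Maybe and lists (join lives in Control.Monad; the names here are throwaway):
import Control.Monad (join)

flatMaybe :: Maybe Int
flatMaybe = join (Just (Just 3))     -- Just 3; join (Just Nothing) would be Nothing

flatList :: [Int]
flatList = join [[1, 2], [3]]        -- [1,2,3]: for lists, join is concat

injected :: [Int]
injected = pure 3                    -- [3]: pure wraps a value in a minimal layer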
Or in other words, "a monad is just a monoid in the category of endofunctors" (in this context, Haskell functors are endofunctors).
***
Okay, so this seems awfully abstract and useless; i.e, monad is just a fancy name for flatten or flatmap; we can use flatmap to read the inner value, create a new monadic layer, then compress the monadic layers together as a way of updating them. Why do we care about monads? Well, when Monads were first introduced, they were conceived of as a way to inject imperative computation and sequencing into lambda calculus. It turns out that by using some special syntax, Haskell could conceive of its IO facilities (the IO a type) as a monad. The end result is that you have monadic IO, as well as special syntax to support monadic IO, ending up with something that resembles Python, employing the monadic methods underneath.
The funny thing is, people quickly figured out that a ton of other types were also monadic; that is to say, you could manipulate values of these types via the same monadic IO interface, for instance, we could use the same "do notation" to cover lists, arrays, IO actions (statements), Maybe (Optionals, if you get a null, you short-circuit), as well as build custom types utilizing the monadic syntax, such as the Blaze HTML library for writing HTML.
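A small sketch of the same do notation driving two different monads (pairs, halve and quarter are made-up names):
pairs :: [(Int, Char)]
pairs = do          -- list monad: the block means "all combinations"
  n <- [1, 2]
  c <- "ab"
  pure (n, c)       -- [(1,'a'),(1,'b'),(2,'a'),(2,'b')]

halve :: Int -> Maybe Int
halve n = if even n then Just (n `div` 2) else Nothing

quarter :: Int -> Maybe Int
quarter n = do      -- Maybe monad: any Nothing short-circuits the block
  h <- halve n
  halve h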
We also found out that we could use the monad typeclass to handle and control side effects; i.e, we could build a monad transformer stack where each monadic type allows a specific effect.
***
In practical use, it comes down to 3 purposes:
-Providing a common, succinct, and convenient interface for various monadic types (I don't need special syntax, like in Javascript or Rust, for different monadic types)
-Providing a basis for type-based constraint of side effects (code is only allowed to throw certain side effects)
-Providing a basis for custom procedural languages in Haskell (I can implement a monadic eDSL to make certain coding activities succinct and typesafe)
main :: IO ()
main = do
  putStrLn "What is your name?"
  username <- getLine
  putStrLn ("Hello, " ++ username ++ "!")
-- which desugars to:
-- main = putStrLn "What is your name?" >>= (\_ -> getLine >>= (\username -> putStrLn ("Hello, " ++ username ++ "!")))
>>= is flatMap, putStrLn "What is your name?" has the type IO (), \_ and \username are lambda syntax, \_ means to discard the underlying value (because we don't care about () / unit / void), getLine is an IO action whose underlying value is a string, in this case the user input, and the final putStrLn uses the underlying value bound as "username".
The flattening is necessary because the instructions live in the IO type, and can't be directly accessed, and the flattening sequences together the instructions.
Of course you can do I/O in Haskell without understanding monads. But if you do, then you will eventually ask "What does all my I/O code have in common?" And the answer is...
It's a bit like this: "Before you have heard about Zen, you see sticks and stones as merely sticks and stones. When you hear about Zen, you will no longer see them merely as sticks and stones. When you grasp Zen, you start to see sticks and stones as they truly are: as sticks and stones."
@@mattinykanen4780 You could literally pass that analogy as your attempt at explaining what monads are and people still wouldn't get it.
When I first discovered Haskell and read Learn You a Haskell I was really blown away. Everything seemed so nice and efficient.
Then I discovered the ugly reality. You need to use countless weird language extensions to actually get a useful language. Half of the ecosystem is packages so esoteric that you'd better have a PhD in type theory. The other half is deprecated.
I’ve heard exactly the same.
Eh, if it’s not for you that’s fine. It’s got warts, but I’ve ended up using Haskell personally and professionally more and more for over 10 years now, and on balance, it lets me write very reliable code with reasonable performance, and has given me _immensely_ valuable skills applicable to programming in any other language.
“Extensions” are a bit of a misnomer imo. They just declare which language features you’re using, if they were added after the Haskell2010 standard. That’s _extremely_ helpful in unfamiliar code, because you have a searchable keyword that points you toward documentation for any feature you don’t recognise. Still, if you prefer, you can now ignore most of them and treat GHC2021 as the latest standard. Package coverage is solid in some areas and practically nonexistent in others, I won’t argue with that. But for a minority PL, not tied to another ecosystem like Java or .NET, it’s remarkably good.
@@EvincarOfAutumn I believe you. I do think it is a beautiful language. But for now JavaScript does the trick, so why would I take the risk of spending years mastering it, only to maybe come to the conclusion that for the things I do, JavaScript is more than sufficient? It would be interesting to know which kinds of projects benefit from Haskell in such a way that the steep learning curve is worth it.
The world of Rust is exactly the same. Even basic functionality like a simple random number generator is hidden behind a package you have to download off the internet, because of course a systems programming language must have a package and project manager which doesn't work unless you have internet access.
@@BrunodeSouzaLino Having to use packages isn't the problem.
After spending roughly similar amounts of time learning Haskell and Rust, I could look at usage examples for most Rust libraries and pretty quickly figure out what was going on. In Haskell, on the other hand, that would require reading up on random sets of language extensions and doing a deep dive into some theoretical math concepts. The latter part isn't entirely unappealing to me, but it's not a good choice when I just want to get something done.
This channel deserves more views and subscribers, here's mine :)
I have a speech impediment, what should I do? Ah! I'll create a YouTube channel. Fascinating thought process.
Bruh he’s just a ch*nk
Nah
I dare you to learn Prolog.
I always thought functional programming was just a meme.
What’s the practical purpose of Haskell? Is it only for mathematicians?
It's primarily a research language whose ideas tend to get adopted into mainstream languages 10 to 20 years later (list comprehensions, typeclasses/concepts/traits, monads in their modern sense etc. all were born or matured in Haskell)
Its restricted paradigm `forces' you to write programs in a certain way, which can help find nice ideas you wouldn't necessarily think of otherwise
When it comes to actually writing programs with it, I'd not recommend it for applications where performance is critical or ones that require a lot of transput (I/O), but any program that mostly maps from an input to a deterministic output without side effects and that doesn't need to be blazingly fast is a good fit
For less opinionated and more generally applicable languages that still work a lot like this I'd say try OCaml, Scala or Koka
@@yjlom I'm not familiar with Koka, but Haskell beats OCaml and Scala when it comes to terseness and expressivity. Scala has the advantage of being on the JVM, while OCaml has a much more pragmatic culture than Haskell.
Haskell itself is fairly pragmatic, being filled with overrides so that you can do what you want, and it does get employed by firms that are attracted to its mixture of correctness, speed (roughly around Java's speed, some specialized code can get within 70% of C), and expressivity.
@@AndreiGeorgescu-j9p
You are aware that do notation basically comes down to bastardized Python, right?
No, not exactly math.
Haskell is for nerds