Honestly, from the video you made me prefer Haskell over APL. It really doesn't matter that the code is shorter if you need someone specialized in the language to make sense of it, while with Haskell, once you explained the S combinator, that was all I needed to get a rough idea of what the code was doing.
Many would take that further and justify Scala over Haskell, then Java over Scala, then C over Java, then assembly over C. The question is whether the value you get is worth it in exchange for initial understanding barrier. Personally I'm intrigued enough to try APL and BQN out.
APL isn't meant to be read; it's a write-it-once and then recreate it if you need to change it later kind of thing. This is at least what everyone I've been learning from has said. This might seem stupid, but the language is meant to be easily written out so you can focus on the thing that needs to be done instead of how to do it.
Great talk. For reference, this is how you could solve the problem in the Wolfram Language: (nums - FoldList[Min, nums]) // DeleteCases[0] // Prepend[-1] // Max
Nice one! You can significantly cut down on character count in WL for this problem by using ReplaceRepeated to get rid of those pesky zeros: a - FoldList[Min, a] //. 0 -> -1 // Max. I also took a stab at a short MATLAB version with a different approach: -min(min(tril(a-a'))). This is pretty terse but I can't find a clever way to make it return -1 instead of 0 for decreasing lists (yet!)
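For what it's worth, the tril trick above can be mirrored in plain Python (a hypothetical sketch, not from the video; max_diff_matrix is just a name chosen here):

```python
def max_diff_matrix(a):
    # Pure-Python mirror of the MATLAB idea -min(min(tril(a - a'))):
    # consider every pair with the "buy" index j at or before the "sell"
    # index i, and negate the minimum of a[j] - a[i], which equals the
    # largest a[i] - a[j]. Like the MATLAB version, this is O(n^2) and
    # returns 0 (not -1) for non-increasing input, since the diagonal
    # pairs (j == i) contribute zeros.
    return -min(a[j] - a[i] for i in range(len(a)) for j in range(i + 1))

print(max_diff_matrix([7, 1, 5, 4]))  # 4
print(max_diff_matrix([9, 4, 3, 2]))  # 0
```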
Me: oh C++, that's pretty verbose but I can somewhat understand it *changes to Python* Ah! I really get the idea now, yeah this is fairly intuitive *switches to Scala and Haskell* uhhh huh... *digression* I'm trippin balls dude *switches to APL* WHAT THE F-
I have an APL book called "APL An Interactive Approach" and you can tell it's old because it asks you to plug your computer into a phone line. Such a weird and obscure language but it was pretty fun to mess around with.
Thank you so much. This is by far the most accessible intro to combinators and combinatorial logic I've ever seen. I do admit to never making a true effort to seek out understanding of combinators beyond a very quick skim of Wikipedia, but that doesn't change the fact that what little effort I have put in was never rewarded with conceptual clarity. For example, all previous videos I've seen making the claim that they are an easy intro to the topic... well, they failed me. I guess what I'm saying is thank you! So concise and clear and completely useful!
APL is also functional programming; the main difference is that the array is the main data structure, while in Lisp the singly linked list is the main data structure.
When introducing the Python solution @ 17:12, he replaces clear, easy-to-read, semantically meaningful variable names with short abbreviations while stating "...change the variable names to make them more concise which, in my opinion, makes them a lot more readable" - then you realize, oh yeah, his fav language is APL! ;-) haha Also nice job on the combinators intro!
Yeah, this is a style that I strongly disagree with. Also, the Python algorithm is a linear scan, while the functional version seems like it might end up making 2 passes over the data, unless I misunderstand how Haskell and friends can optimize code. I would have assumed the Haskell version was just the imperative version translated to recursion.
My least favorite part of programming is figuring out what everyone's abbreviations mean. They are never as obvious to the reader as they are to the writer.
@@austininflorida Also when they abbreviate words by a single letter to make it just that bit shorter. It's like... why? "low" becomes "lo"... just write "low"!
I'm pretty sure APL is the code Neo sees as the Matrix when he gets revived… the moral of the film is we should all take a page from Neo's book and learn APL 😤
Is it just me, or is the Scala form the most readable? The briefer the function gets, the harder it is for me to read. Even when I know haskell and not Scala.
I don't know why we are caring so much about how concise the function can get by adding more symbols. Like he mentions how you might not want to spell flip and use parentheses yet the APL solution has symbols that I have no idea how to type. Typing flip() is easier and it's plain English so it's easier for me to read.
Haskell was the most readable for me. Scala looked bloated. People that know math tend to go towards concise solutions and languages because symbols make it shorter to write and read. Once you know what the symbol means it's tedious to grok more and more expanded versions of it. That said I'd rather read the Scala than the Python.
Personally, I'd take the C++ (or a Rust) implementation. It's more obvious what's going on and all of the intermediate values are easily available for evaluation or logging or whatever during debugging. And it's likely better performing, particularly once you get beyond trivial examples. And of course, you can just put the whole thing in a helper if you need the algorithm more than once, and you never even see the guts unless you need to (at which point it's a lot easier to see the guts.) I'd also argue that extremely minimal syntax isn't really an important goal of software. Creating products for people to use is the real purpose of software. One problem in software (as in many pursuits) is that too many people become more about the pursuit itself than about the purpose of the pursuit. Nothing wrong with that in and of itself, but it can become an issue. I think C++ is suffering from this currently. It's too driven by people who are into languages and not enough people who just want to create products for people to use.
@@lorenzrosenthal119 Whether a piece of code has side effects isn't a function of the language it's written in. You can of course write plenty of C++ code that is without side effects, in those areas where it's a benefit, but not have to jump through hoops to deal with mutable shared state, which is something that you often really need. Sometimes everyone needs to agree on the state of the state right now.
I agree, when programming we are programming for other people, and that usually means that elegant code is probably not what you want to write. Take the haskell S-combinator, it'd probably be just as readable, if not more, to just define an argument and pass that to the function.
@@deanroddey2881 it sort of is based on language though. Sure, I can write C++ with no side effects, but there aren't many language-level constructs to guarantee it. Since the language is mutable/effectful by default, the burden is put on developers to opt in, which means I don't really trust any C++ code to do what it says haha. When working in a large code base I have to hope they aren't doing anything funky, and I have to read the implementation of code more than I would like to. When a language supports it at a more basic level, I can KNOW when something is a pure function, which saves me a lot of effort/time debugging and digging.
@@alpalz I would never argue that C++ is SAFE of course. I've spent my life writing a huge amount of C++, so I know that isn't the case. It CAN be safe but it is of course on the writer to do so, and most writers are going to take the easier route in a lot of cases. My own C++ code base, which is 1M+ lines, is as good as it gets, because it was written in as close to optimal conditions as possible. Hardly any serious code base even comes close to enjoying those kinds of circumstances, and that's were C++ gets particularly dangerous. In my case, it's all one writer, and everything is uber-consistent, and nothing was ever hacked into place. In that kind of situation, C++ is awfully nice. And it helps that I don't use the STL, or any other third party code, I have built my own entire world, so it's more like working in C# or Java in a lot of ways, in the sense of being built as a completely unified system from the ground up with a comprehensive runtime library that is all designed to work together.
This is probably one of the, if not the best, computer science video I have ever watched. This helped me expand my perspective on problem solving as a new programmer. Thank you!
Great video! Really interesting. A couple of comments: 1. It seems really redundant to use -1 to signify no solution, since, if you read the problem carefully, nums[i] < nums[j]. This means that nums[j] - nums[i] > 0 for any candidate solution. So, if the max of all candidates is 0, there is no solution. This would simplify the Scala especially. 2. I understand that Python was just used as an example language for imperative programming, but it could come off as comparing Python's and Haskell/Scala's abilities, when Python is capable of solving this in a very functional manner as well. In fact, the implementation can be quite elegant: max(map(sub, nums, accumulate(nums, min))) Though Python does not explicitly have scanLeft or zipWith, Python's map auto-zips multiple iterables, and the standard itertools library has accumulate, which is essentially the same as scanLeft. Conveniently, if you don't provide an initial value to accumulate, the first value is just the first item of the iterable (which seems similar to Haskell's scanl1). Python's operator library provides the sub function. One other nice thing about this is that it works entirely with iterators, so there are no intermediate lists. If you want -1 instead of 0, I think the most elegant way is just to use an 'or' operator, since 0 is falsey: max(map(sub, nums, accumulate(nums, min))) or -1
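For reference, the snippet from the comment above runs as-is once given its imports (the function name here is just a placeholder):

```python
from itertools import accumulate
from operator import sub

def maximum_difference(nums):
    # accumulate(nums, min) is the running minimum (Python's scanl1 min);
    # map auto-zips the two iterables and subtracts pairwise;
    # `or -1` maps a result of 0 (no valid pair) to -1, since 0 is falsey
    return max(map(sub, nums, accumulate(nums, min))) or -1

print(maximum_difference([7, 1, 5, 4]))  # 4
print(maximum_difference([9, 4, 3, 2]))  # -1
```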
@@a2sbestos768 Also, I don't think variable names like "lo" and "e" should be used in Python, as explicit is better than implicit. Also, I'm a huge fan of type annotations, so I'd prefer def maximum_range(nums: List[int]) -> int:
@@geeshta It is the trade-off between code readability and code "elegance". Some people like it when there is as little as possible boilerplate and redundant code. Long variable names are therefore a no-no if a shorter one conveys its purpose well enough. Lo for "the lowest number so far" is good enough in this case. I do think that this does increase the size of your function documentation though, because you need to define more specifically what a variable means. Same goes for the "elegant" uses of the combinators to combine multiple statements into one line to not repeat the use of a variable for instance. Sure it saves characters, but you now need to spend more characters in the documentation to help future code refactorers decipher what the piece of code does.
So the S combinator duplicates the argument of the function that follows it and inserts it in place of itself? i.e. foo bar applied to List gives us foo List (bar List), where foo takes two arguments and bar one. I had never heard of combinators, but this seems quite useful in a language like Haskell, where there are no explicit references to a named argument anywhere in the function definition.
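That reading can be sketched in Python (hypothetical helper names; Haskell's S is curried, while this version takes an uncurried binary function):

```python
def s(f, g):
    # S combinator: apply f to the argument and to g of that same
    # argument, i.e. S f g x = f x (g x)
    return lambda x: f(x, g(x))

# "foo List (bar List)" without naming the list twice:
# here foo is subtract-from-max and bar is min
spread = s(lambda xs, m: max(xs) - m, min)
print(spread([7, 1, 5, 4]))  # 6 (= 7 - 1)
```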
In my opinion, in imperative languages, branching into loops and conditional blocks really benefits from braces. I agree they add noise, but I feel like it helps sort out what the blocks of code are doing. Which is a main reason I find Python infuriating: it's still imperative, but relies on a lot of indentation to generate structure, which can get annoying to read in large parsing sections of code. For context, Haskell is my favorite language, with C# being my professional language.
IMO dev teams are diverse and code ideally should be readable for everyone. I personally liked the Scala and Haskell solutions (although I need to learn combinators to fully understand them, so I'll watch your videos about it). What's nice about this F# solution is that it's clear, concise, and can be run step by step interactively in the REPL to see the transformations:

let maximumDifference nums =
    nums
    |> Seq.scan min System.Int32.MaxValue
    |> Seq.tail
    |> Seq.zip nums
    |> Seq.map (fun (x, lo) -> x - lo)
    |> Seq.filter ((<) 0)
    |> Seq.fold max -1
Man I really really loved this video. You convinced me to look into combinatory logic. I'm gonna check out To Mock a Mockingbird. Thanks for the great video!
APL (A Programming Language) has been the source of jokes in the programming community for nearly 50 years! I couldn’t readily find good references beyond en.wikipedia.org/wiki/Write-only_language?wprov=sfti1 , but there used to be fun competitions on who could write the shortest and most indecipherable code in APL. More practically, APL started out at a time when programming was still regarded by some as an extension of pure mathematics, so a highly symbolic syntax was not totally out of left field. Memory at the time was also at a premium, likely costing over a dollar a bit, so being able to represent core functionality in a glyph represented by just a few bits had practical benefits.
Iverson's decision to keep APL functionality to single glyphs was far less a product of chance or circumstance than of intention. While it may have helped save on memory incidentally, programming was by no means considered "an extension of pure mathematics" in the 60s; these were the days of FORTRAN and COBOL, when programming was still in its infancy but was largely verbose and strictly procedural, and was beginning to be more and more seriously used for day to day business rather than simply mathematics. Even at the time, APL's syntax was very much out of left field to most programmers. It was no mistake or happenstance that he chose notation so dense. Rather, he considered the dense notation of APL to be a feature and selling point in itself, as once you get past the learning curve of understanding the notation, its density shifts into clarity and ease of expression, similar to how exceptionally complex concepts can be expressed in very small amounts of mathematical notation. I highly recommend reading through his paper "Notation as a Tool of Thought" for more insight on why he chose to lean in the direction of the more "mathematical" appearance of APL.
@@foxoninetails_ That sounds like a reasonable explanation. I would guess that, based on the fact that he wrote that paper, it is the correct one. I may try to track the paper down now. One of the first books I owned was APL/360: An Interactive Approach. In 1972, in high school, the only terminal I had access to was based on the IBM Selectric typewriter. That was a very successful typewriter for IBM, and it had swappable character balls. At the time, I did know Iverson worked for IBM, and I had assumed the "weird character" approach was because at IBM Research they might have been encouraged to help sell Selectric-based technology. I don't think the language became as mainstream as IBM might have hoped. However, there are a lot of people my age, those who leaned towards math anyway, who learned that language right after FORTRAN and were quite taken by it. I think APL taught me more about how to think as a programmer than anything I learned in my high school and college careers. I have run into many people since in the tech world who also fell in love with APL. And I would say, to some extent, it taught me how to think about matrix math in general.
On one hand I'm awed at the deep mathematics on display; on the other hand, I never want to use it myself. All of these highly formal functional languages are so ridiculously complex and unmaintainable, I can't imagine anyone using them in a serious setting.
I figured the same. Coming from a Python background, I can't live without intuitive program design that kinda makes comments unnecessary. In the grand scheme of things, all that's important to me is: *can I read it off the bat, come back to it later, and remember what this function does and what created that callback? *is it efficient? Ironically, having said this, you'd probably laugh at me since I mentioned efficient and Python in the same comment, but I digress; something like Python with a little more control over variable instantiation would be nice.
You may well find Haskell complex, but it's certainly not _unmaintainable_ - in fact, IMO the biggest advantage of Haskell is that its strong type system makes it almost impossible to break code that's worked before, without the compiler detecting the problem. Thus one can confidently change APIs without the risk that some dependency then ends up with bugs in production. And ultra-compact, point-free code is actually rather frowned upon in the Haskell community. This video strongly overstates the similarities in practice to APL.
it's worth noting that the problem being solved has a real-world interpretation: If the numbers are a time series of prices (like daily prices for a stock), this gives the maximum profit you could make if you bought and sold at the optimal times (assuming you can't short-sell). So if you are evaluating some trading algorithm this gives the upper bound on performance. This is also solved in one of the "Q for mortals" videos.
How do you even produce those characters on your keyboard? Is typing a function call really so taxing that reducing it to a single character is the difference maker? I don't understand how this kind of language can be easier to read/write or reason about.
I think you missed the point. The guy is talking about why he thinks a certain language is amazing, in his opinion. He thinks that way because he loves the math behind combinatory logic. I agree with you that it might not be practical, but the guy thinks it's cool, likes the concept, and loves that an existing language has all this combinatory logic in it. That's why he made this video. At least that's how I understood the video anyway ¯\_(ツ)_/¯
@@0dWHOHWb0 Back in the days when APL was cool, you had special keyboards and unique encodings (well before Unicode). So you probably just use a custom keyboard mapping. I don’t remember the count, but there aren’t that many glyphs in APL. Conceptually, it’s most like the broad use of symbols in mathematics in general; you’re writing a mathematical expression, not a story.
it's not that difficult to have certain key presses produce other characters. vim, for example, makes setting up this type of scheme pretty trivial, but it can be done at a system level very easily on linux and windows. also, J, made in the 90s by the creator of APL, uses ascii characters, so fear not. (:
Combinatory logic gameplay: Combinatory logic lore: "Somebody I Used To Know remix played over this accelerated video" Thanks for this video, quite interesting introduction to combinatory logic and array programming.
Your strategy is overly complex, no?

maxDiff :: [Int] -> Int
maxDiff = snd . foldl' go (maxBound, -1)
  where go (lo, diff) x = (min lo x, max diff (x - lo))
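A Python analogue of that single-pass fold, as a sketch (the x <= lo guard additionally avoids reporting 0 for duplicates, since the problem requires nums[i] < nums[j]):

```python
from functools import reduce

def max_diff(nums):
    # carry (running minimum, best difference) through one left fold
    step = lambda acc, x: (
        min(acc[0], x),
        acc[1] if x <= acc[0] else max(acc[1], x - acc[0]),
    )
    lo, best = reduce(step, nums, (float("inf"), -1))
    return best

print(max_diff([7, 1, 5, 4]))  # 4
print(max_diff([9, 4, 3, 2]))  # -1
```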
I'm a Haskell beginner, this was my solution:

maxDiff [] = -1
maxDiff (a:as) = mD as a

mD [] _ = -1
mD (a:as) m = if a > m then max (a - m) (mD as m) else mD as a
> At 18:17 "In the future, I will give a full either 40-60 or 90 min talk all about Combinatory logic and Combinators" Please tell me we live in that future and that video exists somewhere and I just didn't find it 🙏
@@code_report Thank you for the link! This was a great video! I would love your opinion as a C++ developer: it's easy to admit that array programming is extremely elegant and terse, but I wonder if the elegance comes at the cost of performance? I haven't caught up with all the ArrayCast podcast episodes, so apologies if it has been addressed there, but at this point I'm still unsure whether array programming mixes well or poorly with performance; the "naive" examples I came across seem to favor terseness over efficiency, but I don't know if that's a general tendency or not.
Thank you, this is the first time I have a good idea of what the S Combinator is all about... Sharing the same argument again, without being forced to mention it twice... Neat...
Imperative programming is like a recipe: you follow the steps and out comes the result at the end. Functional programming is like an assembly line for your data, where each step computes, transforms, or combines the data to get to the end product. I feel that the recipes of imperative programming are easiest to read and follow, because you can easily digest them step by step and see the intermediate results. If you do simple things like map, filter, and reduce, and an occasional zip, functional programming can still be very readable for most programmers, but it becomes way more complex once you start using the combinators. And I don't mean that those can't be understood. I mean being able to see at a glance what the code does without too much effort, which helps maintainability.
Well, as with everything, I think one becomes good at a thing with practice. After some time you'll probably be able to quickly think, read, and type with combinators, but the problems arise as soon as you need to show the code to someone. There's a pretty high chance it will take quiiite some time for them to understand. If it were taught in schools/unis, then there would probably be no problem.
@@utof the beauty of industry: you will need to show it to someone as you cannot maintain code forever, and your boss will not pay for that person to sit around learning a new way to think.
I hardly understand 5% of what's going on here but I feel amazed and quite smart after watching these vids. LMFAO. Seriously the world of computing is mind blowing. The so called "developers" like me don't even know 1% on what goes behind the scenes when we call an in-built function or use any tooling like Git or SDKs etc. Thanks mate for keep making these vids. Hope you're keeping backups we don't know when WW3 ends everything.
Possibly, but not type consistent. sys.maxsize is like Haskell's maxBound :: Int, except Python's int has no max value (it's arbitrary precision, like Integer). It's not great as Python code goes anyhow; I might e.g. write max(max(nums[i:]) - min(nums[:i]) for i in range(1, len(nums))), which may not be as algorithmically efficient but avoids naming new magic values and probably runs faster.
Probably easier to get into at a younger age because much of maths is still weird glyphs at that stage, so it wouldn't seem so out of left field as it does for adults.
The S combinator explained at 17:32 reminds me of functions as monads: you can chain a binary function x onto a unary function y, so in Haskell it would be something like y >>= x. I believe the same is achieved by treating functions as applicatives, with x <*> y (the solution in this video).
7:40 100% disagree. Even though I knew what the variables were originally, I got immediately confused by these short variable names. It’s maybe to do with parsing mathematical formulas, which I am super bad at, but I feel like 1-3 word variables are much more readable than 1-3 letter variables. Except for i in a loop, but even there, index, position or location may be more appropriate depending on what you’re actually iterating over.
@@31redorange08 I… don’t know. Maybe? I guess, yeah, when you think about it like that, it makes sense, but I can also see how people do find beauty in pure languages like this. I just thought that it wasn’t for me?
About the example problem: I believe it might be faster to store two variables, the highest number and the highest difference, and instead of comparing the difference between the min and the current number to the current biggest difference on every single step, we just compare the current number to the highest number. Each time we find a new min, we compare the highest difference to the difference between the current max and min, store the larger of those as the largest difference, then set the min and max to the new min we found. So instead of doing a subtraction for every element in the array, we only do a subtraction each time we find a new minimum. I have no idea how fast variables can be accessed and stored, so I don't know whether the increased number of variables increases compute time more than the reduction from fewer subtractions saves.
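A sketch of the approach described above (hypothetical code, not from the video): track the current minimum and the largest value seen since it, and only subtract when a new minimum appears, plus once at the end.

```python
def max_diff_lazy_sub(nums):
    lo = hi = nums[0]   # current minimum and the max seen since it
    best = -1
    for x in nums[1:]:
        if x < lo:
            # the window ending here is finalized with one subtraction
            if hi > lo:
                best = max(best, hi - lo)
            lo = hi = x
        elif x > hi:
            hi = x
    if hi > lo:  # finalize the last window
        best = max(best, hi - lo)
    return best

print(max_diff_lazy_sub([7, 1, 5, 4]))  # 4
print(max_diff_lazy_sub([9, 4, 3, 2]))  # -1
```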
Before the functional language section I tried to solve the problem with Haskell myself. What surprised me is that my solution was completely different from yours. I only used recursion and ifs, no fancy functions like scan, map, filter, reduce and combinators, but my code is still O(n). It made me wonder which is faster for larger input arrays, I might check it out later.
5:00 Doesn't it make more sense to assume max_dif to be n[1]-n[0] if positive, otherwise -1, such that we increase the difference as we find a larger one, instead of assuming it to be MAX_INT?
So what you're saying is that some languages have more useful (for this problem) standard library functions than others. So the comparison is between standard libraries, not languages. Still a valid discussion, but you're mixing syntax vs. library.
The standard library is an essential part of many programming languages. Consider, for instance, the syntax of Forth: Words are ended by whitespace. How much semantics do you get out of that? In Haskell, booleans are fundamental for several syntactic features, such as conditionals and boolean guards. Although the type is distributed with the compiler, it's part of the library, not special syntax.
@@0LoneTech for sure. That's my point, is that the comparison _should be_ inclusive of language features too, imo. It seems somewhat unfair to the languages that have advantageous tools as a part of the language if the comparison is only between their libraries, as is done in the video.
It's often very fast to get the gist of an expression, but a little slower for me to confirm the exact workings of the chain. In comparison, C syntax is very noisy with redundant terminators, separators, names etc. Haskell is a fairly sweet spot, but forking of data often involves some awkward back and forth parsing.
Clojure solution:

(defn maximum-difference [nums]
  (->> nums
       (reductions min)
       (map - nums)
       (filter pos-int?)
       (cons -1)
       (apply max)))

I like it better than the Scala solution, and Clojure's map fn forgoes the need to zip, which is always nice. But point-free solutions like these tend to fall apart when you don't have a consistent placement for arguments (notice my use of cons instead of conj). That can happen in Haskell and APL too, if it weren't for combinators to save the day. Clojure just solves this with more macros, which has its own tradeoffs.
Nah, Ken just wanted a version of maths that made sense: like, why does summing everything have its own symbol but subtracting everything doesn't? Why do min and max have no symbols even though we use them often? Why do we need to remember BEDMAS/PEMDAS, and what if an expression has a Sigma, when do I do that? So Ken made something more consistent, with more symbols and fewer rules, and it was found that it could be made into a programming language easily enough.
What are the pros and cons of writing the algorithm in one paradigm vs the other? For example, the pros and cons of the Haskell implementation vs the Python one? For context, Python is my main language but I'm kind of interested in learning Haskell.
What's your opinion on J programming Language vs APL? honestly I've been giving both a bit of review, and seem to like J language more. What would you say - which one is "more powerful", if you could/can compare them?
Honestly, I am far from understanding it, but congratulations on the great presentation and thank you for taking the time to explain it. There's a lot I still have to learn to fully appreciate it.
If BQN also had all the type theory stuff Haskell does, I'd be sold. Without it, although the glyph notation is cool, it's just a limited version of Haskell not worth learning (imo) - just learn Haskell at that point. Or I guess I can ask: Other than the fact that it's more succinct on account of the glyphs, what does BQN offer that Haskell doesn't?
It's not just the glyphs, but also the syntax and chosen data types and polymorphisms. For instance, CBQN will happily build byte arrays rather than Double arrays to improve memory access patterns, simply because the data fits. When they advertise context free syntax (really, only local context), it's to assist both human and automatic readers. It really isn't a version of Haskell. If anything, it's a version of APL, and the array oriented programming is a useful paradigm to learn, which you could translate to some degree into Haskell Massiv. It does bug me that people keep comparing Haskell lists to arrays. There are arrays even in standard Haskell (Data.Array), though not in base.
I'm not 100% sure about Scala - I thiiiink that the '.' is like a method call on whatever is returned by the bit before, perhaps a bit like: someNumber.square.add2 - in other words, '.' ≈ "and then"

'.' in Haskell follows the math notation, such that:

add2 :: Num a => a -> a
add2 = (+2)

square :: Num a => a -> a
square x = x * x

(add2 . square) 10 == add2 (square 10)
(add2 . square) 10 == 102
(square . add2) 10 == 144

which is the same order as math notation: (f . g) x == f(g(x))
Also, you probably should do for (const auto& elem : nums), even though the copy here doesn't make a difference... Also, how's performance with those "min scans", differences and other transformations? You could implement those with e.g. std::transform but I figure it'd be slower compared to the explicit loop to do all that.
That is probably what Klingons use to program. Working in the embedded world (C, Assembler) with some tool programming in Python, C# or Java and some "field trips" into web backends in PHP, it really makes me wonder why someone would need or even enjoy that (no offense) and where that is used outside of academic contexts.
It is not returning "auto"; that is not a function, it is actually a lambda. The lambda is defined as "name(...inputs...) -> return_type { body }". Thus, "maximum_difference" is a variable of lambda type, and to avoid writing the whole type, as in any other C++ variable declaration, he used the "auto" keyword.
@@josee.ramirez4305 ?? A lambda must have a capture list, so it would be [] () -> int { return 10; }. Besides, this function returns the result of std::max, so it returns int.
It's too esoteric for me to be honest haha, but it seems really cool to know how to program in a language like this. And I have 2 questions about array programming (or APL): 1 - To code in APL do you need a specific IDE? Because I don't even know if it's possible to type some of the characters that I saw. 2 - And what kind of problems is this type of language targeted at? Like, in which scenario would I choose array programming over a normal approach (like C++)?
1) Dyalog APL has a nice built-in REPL with the full character set, and they also have the RIDE IDE, which is nice too. 2) In the past APL has been used for teaching maths and for computational physics. There isn't a hell of a lot of material for the second part, but I can share a 70pg report from the 80s that does some serious physics calculations (Newtonian gravity, Einsteinian gravity, Schrödinger equation, Feynman path integrals) that I've found. If you've got a problem that can be modelled using matrices (linear algebra), then APL and J are perfect for it. Also, J seems much more equipped and powerful for anything related to mathematics and data. I hope modern APLs will catch up.
This is definitely really cool. But for me there is always a question: how do you get things done in such languages? What's the purpose? How difficult would it be to write, for example, an autodiff library in APL? Or train an ML model (or GPT-3, for example)? Process tabular data from a CSV? A scientific algorithm that should be as efficient as possible?
Processing and extracting data from CSVs is just about the perfect use for functional and array languages. They excel in things where the input is simple to quantify and the result can be expressed as a series of transformations on that input, so reducing a set of rows and columns into some meaningful value you want to extract from it would look much simpler than an imperative solution.
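A small Python sketch of that style (the CSV content and names here are made up): the whole extraction is one chain of transformations over the rows.

```python
import csv
import io
from itertools import accumulate
from operator import sub

# hypothetical CSV of daily prices
raw = "day,price\n1,7\n2,1\n3,5\n4,4\n"

# rows -> prices -> running minimum -> pairwise gains -> best gain
prices = [int(row["price"]) for row in csv.DictReader(io.StringIO(raw))]
best_gain = max(map(sub, prices, accumulate(prices, min))) or -1
print(best_gain)  # 4
```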
Smullyan and Priest write some of the most accessible logic books. I don't study much in this area of logic, but I'd definitely pick through the Smullyan text if I wanted a quick comprehensible dive. Good recommendation.
11:00 Scala code, Python equivalent: max(-1,*(n-m for m,n in zip(list(reduce(lambda x,y: x+[min(y,x[-1])], nums, [sys.maxsize]))[1:], nums) if n-m)) 19:30 Haskell code, Python equivalent (no INT_MAX, yet Python doesn't have an S combinator, so this is the best I can do): max(-1,*(n-m for m,n in zip(reduce(lambda x,y: x+[min(y,x[-1])], nums[1:], nums[0:1]), nums) if n-m)) Update: Just saw atrus' comment. Here is the Pythonic way: max(map(sub, nums, accumulate(nums, min))) or -1
Smullyan's bird book will arrive most likely on 2021-12-18. I'd ordered it well before I saw this video. I'm happy to see that you recommend it too :D!
Hi, personally many of these languages are not my cup of tea. APL can't even be typed on a normal keyboard. But I wanted to see what I could do in C# here. I was able to go both the imperative way and the functional way, which is called LINQ. I had to write the ScanLeft part myself, and my yield return version turned out the nicest, because that way you still benefit from LINQ's lazy evaluation. The rest was quite easy: for Map you use Select and for Filter you use Where. The nicest thing is that Zip is available as well, and it accepts an extra input array directly, so I basically get something similar to that S combinator solution from Haskell, even though the code looks more like the Scala code in the video.
Using more complicated key mappings. Classic APL keyboards have extra glyphs printed on them, entered with shift-like modifier modes using keys such as Control. There are also workarounds like backtick prefixing (BQN uses \ prefixing).
This talk demonstrates why functional programming is less widely adopted: the imperative approach is, more or less, “do what the problem description says to do”, the functional approaches require reinterpreting the problem in terms of the concepts that are well supported by the language.
Except it's not true. In fact, the imperative approach is the opposite of the declarative approach, so it is always about "how to do what the problem description says", not "do what the problem description says"; otherwise it's not an imperative approach anymore.

The reason it _looks like_ the imperative approach does whatever the problem description tells you to do is that resources like LeetCode, HackerRank etc. mostly provide problems designed _specifically_ for an imperative approach. They fit the imperative approach well, and programmers who use the imperative approach encounter them often. However, I'd argue there are tons of tasks that look clean and understandable with a functional approach and ugly with an imperative one.

But that's not the most important thing. The most important thing is that in the real world no one will ever give you a ticket that says "I want to find the max difference between two elements in the array". It's just a low-level implementation detail that would never exist as a ticket in the functional world in the first place. So it doesn't matter which approach you are using: you will always choose the proper low-level algorithm that looks great and understandable in the project's paradigm, and it will most likely look horrible in any other paradigm.
As someone who knows a little bit of Haskell, I can assure you there is a solution in Haskell that looks way more like the imperative solution. Here he abused some of the more confusing aspects of the language.
What are your thoughts on the C++ standard trying to become more expressive, like the inclusion of ranges-v3? I've also seen a proposal for C++23 for a new composition operator, for regular (imperative) functions as well as range, view and lambda functions. I believe the proposed operator was |>, as in x |> f(y), where x is passed to f as an argument along with y. I was curious about your thoughts. Awesome video though.
I like your run-through. Very good example and good work. Personally I like languages which are multi-paradigm. In our team we use D, and I try to get my colleagues to use other parts of the D language, especially the OOP, or OOOP (Over Object Oriented Programming).
Dude I'm high as f watching random cs videos and I see we returned to hieroglyphs wtffffffff
I love this so much
bro 💀
420 likes .... coincidence?
my exact reaction when I saw APL for the first time 😭
I'm high right now
APL is what coding looks like to non-coders
Looks like emojilang & mindf$ck had a baby & they named it APL. 🤯
Most coding languages look like gibberish to non-coders, APL looks like gibberish to everybody. Equality.
@@NStripleseven lol
I like that analogy!
This comment is epic, it's so accurate!
So are you saying APL programmers are above coders? lol
what programming looks like to non programmers
APL looks like minecraft enchanting table language
I've been a .NET developer for 6+ years and I didn't understand anything, not a jot.
@@saidkorseir192 Then just start from Python first ;)
Doesn't APL look very similar to just Greek?
Honestly, your video made me prefer Haskell over APL. It really doesn't matter that the code is shorter if you need someone specialized in the language to make sense of it, while with Haskell, once you explained the S operator, that was all I needed to get a rough idea of what the code was doing.
I totally agree. Haskell nails both readability and simplicity
Many would take that further and justify Scala over Haskell, then Java over Scala, then C over Java, then assembly over C. The question is whether the value you get is worth it in exchange for initial understanding barrier. Personally I'm intrigued enough to try APL and BQN out.
@@batlin How on Earth did you get to assembly being more readable than C or Java?
@@casperes0912 I didn't. Please read the thread again.
APL isn't meant to be read; it's a write-it-once, then recreate-it-if-you-need-to-change-it-later kind of thing. At least that's what everyone I've been learning from has said. This might seem stupid, but the language is meant to be easy to write out, so you can focus on the thing that needs to be done instead of how to do it.
Great talk. For reference, this is how you could solve the problem in the Wolfram Language:
(nums - FoldList[Min, nums]) // DeleteCases[0] // Prepend[-1] // Max
Readable AND short ? Looks like a great language to me !
my normie ass automatically assumed everything after // to be comments, spent a solid 10 secs thinking about where the rest of the logic was
Nice one! You can significantly cut down on character count in WL for this problem by using ReplaceRepeated to get rid of those pesky zeros:
a - FoldList[Min, a] //. 0 -> -1 // Max
I also took a stab at a short MATLAB version with a different approach:
-min(min(tril(a-a')))
This is pretty terse but I can't find a clever way to make it return -1 instead of 0 for decreasing lists (yet!)
Me: oh c++ thats pretty verbose but I can somewhat understand it
*changes to python*
Ah! I really get the idea now, yeah this is fairly intuitive
*switches to Scala and Haskell*
uhhh huh...
*digression*
im trippin balls dude
*switches to APL*
WHAT THE F-
"WHAT THE F-" Fω (System F-omega) is a programming language too.
After APL and Haskell have blown your mind, you are ready for ......
.....
Forth
@@AndreiGeorgescu-j9p dawg I wrote this 2 years ago. Recently dabbled with C / Raylib and can confirm what you say is true
I have an APL book called "APL An Interactive Approach" and you can tell it's old because it asks you to plug your computer into a phone line.
Such a weird and obscure language but it was pretty fun to mess around with.
APL feels like Chinese in that there is a single symbol per concept. The learning curve is steep but it's beautifully rewarding when you get there.
Thank you so much.
This is by far the most accessible intro to combinators and combinatorial logic I've ever seen.
I do admit to never making a true effort to seek out understanding of combinators beyond a very quick skim of Wikipedia, but that doesn't change the fact that what little effort I have put in was never rewarded with conceptual clarity.
For example, all previous videos I've seen making the claim that they are an easy intro to the topic... well, they failed me.
I guess what I'm saying is thank you!
So concise and clear and completely useful!
I never even knew these existed. I barely understand basic FP tools like map/zip...
APL is also functional programming; the main difference is that the array is the main data structure, while in Lisp the singly linked list is the main data structure.
When introducing the Python solution @ 17:12, he replaces clear, easy-to-read, semantically meaningful variable names with short abbreviations while stating "...change the variable names to make them more concise which, in my opinion, makes them a lot more readable" - realizing, oh yeah, his fav language is APL! ;-) haha. Also nice job on the combinators intro!
It's 7:12
Yeah I was like, this has to be sarcasm, making fun of unreadable code and mystery names. Later I realized he's simply nuts.
Yeah, this is a style that I strongly disagree with. Also, the Python algorithm is a linear scan, while the functional version seems like it might end up making 2 passes over the data, unless I misunderstand how Haskell and friends can optimize code. I would have assumed the Haskell version was just the imperative version translated to recursion.
My least favorite part of programming is figuring out what everyone's abbreviations mean. They are never as obvious to the reader as they are to the writer.
@@austininflorida Also when they abbreviate words by a single letter to make it just that bit shorter. It's like... why? "low" becomes "lo"... just write "low"!
I'm pretty sure APL is the code Neo sees as the Matrix when he gets revived… the moral of the film is we should all take a page from Neo's book and learn APL 😤
Is it just me, or is the Scala form the most readable? The briefer the function gets, the harder it is for me to read. Even when I know haskell and not Scala.
Yeah, after thinking about it, I'd use the haskell Let expression to name the lists, just like how you explained the algorithm at the start.
I don't know why we are caring so much about how concise the function can get by adding more symbols.
Like he mentions how you might not want to spell flip and use parentheses yet the APL solution has symbols that I have no idea how to type. Typing flip() is easier and it's plain English so it's easier for me to read.
Yes very. Like when I read "tail" the first thought is "oh, this is going to remove the first element!"
Haskell was the most readable for me. Scala looked bloated. People that know math tend to go towards concise solutions and languages because symbols make it shorter to write and read. Once you know what the symbol means it's tedious to grok more and more expanded versions of it. That said I'd rather read the Scala than the Python.
It's not; F# and OCaml are better. Actually, APL is more readable once you learn the symbols.
I think APL deserves a prize for being more esoteric than assembly.
Assembly isn't esoteric at all.
Assembly is at least remotely useful when it comes to retro games
@Charles Thompson, and APL is OP at array manipulating. APL is not esoteric at all, it serves a cool purpose, not like "Brainfsck".
I had a headache
Why do you consider asm esoteric?
Personally, I'd take the C++ (or a Rust) implementation. It's more obvious what's going on and all of the intermediate values are easily available for evaluation or logging or whatever during debugging. And it's likely better performing, particularly once you get beyond trivial examples. And of course, you can just put the whole thing in a helper if you need the algorithm more than once, and you never even see the guts unless you need to (at which point it's a lot easier to see the guts.)
I'd also argue that extremely minimal syntax isn't really an important goal of software. Creating products for people to use is the real purpose of software. One problem in software (as in many pursuits) is that too many people become more about the pursuit itself than about the purpose of the pursuit. Nothing wrong with that in and of itself, but it can become an issue. I think C++ is suffering from this currently. It's too driven by people who are into languages and not enough people who just want to create products for people to use.
it's all fun and games until you need to reason about side effects and concurrency...
@@lorenzrosenthal119 Whether a piece of code has side effects isn't a function of the language it's written in. You can of course write plenty of C++ code that is without side effects, in those areas where it's a benefit, but not have to jump through hoops to deal with mutable shared state, which is something that you often really need. Sometimes everyone needs to agree on the state of the state right now.
I agree; when programming we are programming for other people, and that usually means elegant code is probably not what you want to write. Take the Haskell S combinator: it'd probably be just as readable, if not more so, to just define an argument and pass that to the function.
@@deanroddey2881 it sort of is based on language though. Sure I can write c++ with no side effects but there's not many language level constructs to guarantee it. As a mutable / effectful language by default, the burden is put onto the developers to opt into it, which means i don't really trust any c++ code to do what it says haha. When working in a large code base I have to hope they aren't doing anything funky and have to read the implementation of code more than I would like to. When a language supports it from a more basic level I can KNOW when something is a pure function which saves me a lot of effort / time debugging and digging.
@@alpalz I would never argue that C++ is SAFE of course. I've spent my life writing a huge amount of C++, so I know that isn't the case. It CAN be safe but it is of course on the writer to do so, and most writers are going to take the easier route in a lot of cases.
My own C++ code base, which is 1M+ lines, is as good as it gets, because it was written in as close to optimal conditions as possible. Hardly any serious code base even comes close to enjoying those kinds of circumstances, and that's where C++ gets particularly dangerous. In my case, it's all one writer, everything is uber-consistent, and nothing was ever hacked into place.
In that kind of situation, C++ is awfully nice. And it helps that I don't use the STL, or any other third party code, I have built my own entire world, so it's more like working in C# or Java in a lot of ways, in the sense of being built as a completely unified system from the ground up with a comprehensive runtime library that is all designed to work together.
This is probably one of the, if not the best, computer science video I have ever watched. This helped me expand my perspective on problem solving as a new programmer. Thank you!
Great video! Really interesting. A couple of comments:
1. It seems really redundant to use -1 to signify no solution, since, if you read the problem carefully, nums[i] < nums[j]. This means that nums[j] - nums[i] > 0, if a candidate solution. So, if max of all candidates is 0, there is no solution. This would simplify the Scala especially.
2. I understand that Python was just used as an example language for imperative programming, but it could come off that you were comparing Python and Haskell/Scala's abilities, when Python is capable of solving this in a very functional manner as well. In fact, the implementation can be quite elegant:
max(map(sub, nums, accumulate(nums, min)))
Though Python does not explicitly have scanLeft or zipWith, Python's map auto-zips multiple iterables, and the standard itertools library has accumulate, which is essentially the same as scanLeft. Conveniently, if you don't provide an initial value to accumulate, the starting value is just the first item of the iterable (similar to Haskell's scanl1). Python's operator library provides the sub function. One other nice thing is that it works entirely with iterators, so there are no intermediate results.
If you want to use -1's instead of 0's, I think the most elegant way would just be to use an 'or' operator, since 0 is falsey:
max(map(sub, nums, accumulate(nums, min))) or -1
Thanks - just wanted to say that functional solutions can be written in Python!
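For anyone who wants to try that one-liner, here is a runnable version, just adding the two imports it needs:

```python
from itertools import accumulate
from operator import sub

def maximum_difference(nums):
    # accumulate(nums, min) plays the role of scanLeft: running minimums.
    # map auto-zips its iterable arguments, so each element is paired
    # with the minimum seen up to (and including) itself.
    return max(map(sub, nums, accumulate(nums, min))) or -1

print(maximum_difference([7, 1, 5, 4]))   # 4
print(maximum_difference([9, 4, 3, 2]))   # -1
```

The `or -1` works because a list with no increasing pair yields a maximum of 0, which is falsey.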
@@a2sbestos768 Also I don't think variable names as "lo" and "e" should be used in Python, as explicit is better. Also I'm a huge fan of type annotations so I'd prefer
def maximum_range(nums: List[int]) -> int:
@@geeshta It is the trade-off between code readability and code "elegance". Some people like it when there is as little as possible boilerplate and redundant code. Long variable names are therefore a no-no if a shorter one conveys its purpose well enough. Lo for "the lowest number so far" is good enough in this case. I do think that this does increase the size of your function documentation though, because you need to define more specifically what a variable means.
Same goes for the "elegant" uses of the combinators to combine multiple statements into one line to not repeat the use of a variable for instance. Sure it saves characters, but you now need to spend more characters in the documentation to help future code refactorers decipher what the piece of code does.
this is *infinitely* better code than every example in the video. self-describing and simple
You swapped i and j in your first point, just fyi
My moment has finally come! _I used APL before you were a glint in your father's eye_ . 👍. Great video. Enjoy the journey it's everything.
So the S combinator duplicates the argument of the function that follows it and inserts it in the place of itself?
i.e. foo bar applied to List gives us foo List (bar List), where foo takes two arguments and bar one.
I have never heard of combinators, but this seems quite useful in a language like Haskell, where there are no explicit references to a named argument anywhere in the function definition.
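That reading can be sketched in a few lines of Python (curried functions, as in combinatory logic; foo and bar are just placeholder names):

```python
# S f g x = f x (g x): the argument x is duplicated, g transforms one
# copy, and f receives both.
S = lambda f: lambda g: lambda x: f(x)(g(x))

# foo takes two (curried) arguments, bar takes one.
foo = lambda xs: lambda ys: list(zip(xs, ys))
bar = lambda xs: sorted(xs)

print(S(foo)(bar)([3, 1, 2]))   # [(3, 1), (1, 2), (2, 3)]
```

So `S(foo)(bar)(lst)` is exactly `foo(lst)(bar(lst))`, matching the "foo List (bar List)" description above.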
Just the first video I watched on this channel, and looking at other videos... this is probably the best channel I found over the last ~10 years
In my opinion, in imperative languages, branching into loops and conditional blocks really benefit from braces.
I agree they add noise, but I feel like it helps sort out what the blocks of code are doing.
Which is a main reason I find python infuriating, it's still imperative, but relies on a lot of tabbing to generate structure, which can get annoying to read in large parsing sections of code.
For context Haskell is my favorite language, with C# being my professional language
IMO dev teams are diverse and code ideally should be readable for everyone.
I personally liked Scala and Haskell solutions (although I need to learn combinators to fully understand it, so I'll watch your videos about it).
What's nice about this F# solution is that it's clear, concise, and can be run step by step interactively in REPL to see the transformations:
let maximumDifference nums =
nums
|> Seq.scan min System.Int32.MaxValue
|> Seq.tail
|> Seq.zip nums
|> Seq.map ((<||) (-))
|> Seq.filter ((<>) 0)
|> Seq.fold max -1
Man I really really loved this video. You convinced me to look into combinatory logic. I'm gonna check out To Mock a Mockingbird.
Thanks for the great video!
Great talk as always.
The slides were much smoother this time around, loved it.
APL (A Programming Language) has been the source of jokes in the programming community for nearly 50 years! I couldn’t readily find good references beyond en.wikipedia.org/wiki/Write-only_language?wprov=sfti1 , but there used to be fun competitions on who could write the shortest and most indecipherable code in APL.
More practically, APL started out at a time when programming was still regarded by some as an extension of pure mathematics, so a highly symbolic syntax was not totally out of left field. Memory at the time was also at a premium, likely costing over a dollar a bit, so being able to represent core functionality in a glyph represented by just a few bits had practical benefits.
I thought it was, "Array Programming language".
Iverson's decision to keep APL functionality to single glyphs was far less a product of chance or circumstance than of intention. While it may have helped save on memory incidentally, programming was by no means considered "an extension of pure mathematics" in the 60s; these were the days of FORTRAN and COBOL, when programming was still in its infancy but was largely verbose and strictly procedural, and was beginning to be more and more seriously used for day to day business rather than simply mathematics. Even at the time, APL's syntax was very much out of left field to most programmers.
It was no mistake or happenstance that he chose notation so dense. Rather, he considered the dense notation of APL to be a feature and selling point in itself, as once you get past the learning curve of understanding the notation, its density shifts into clarity and ease of expression, similar to how exceptionally complex concepts can be expressed in very small amounts of mathematical notation. I highly recommend reading through his paper "Notation as a Tool of Thought" for more insight on why he chose to lean in the direction of the more "mathematical" appearance of APL.
@@foxoninetails_ That sounds like a reasonable explanation. I would guess that, based on the fact that he wrote that paper, it is the correct one. I may try to track the paper down now.

One of the first books I owned was APL/360: An Interactive Approach. In 1972, in high school, the only terminal I had access to was based on the IBM Selectric typewriter. That was a very successful typewriter for IBM, and it had swappable character balls. At the time, I did know Iverson worked for IBM, and I had assumed the "weird character" approach was because at IBM Research they might have been encouraged to help sell Selectric-based technology. I don't think the language became as mainstream as IBM might have hoped.

However, there are a lot of people my age, those who leaned towards math anyway, who learned that language right after FORTRAN and were quite taken by it. I think APL taught me more about how to think as a programmer than anything I learned in my high school and college careers. I have run into many people since in the tech world who also fell in love with APL. And I would say, to some extent, it taught me how to think about matrix math in general.
Wow, just amazing!
You are just the programming channel I was looking for. Thanks for your content, keep it up!
Cheers!
On one hand I'm awed at the deep mathematics on display, on the other hand I want to never use it myself.
All of these highly formal functional languages are so ridiculously complex and unmaintainable, I can't imagine anyone using them in a serious setting.
I figured the same. Coming from a Python background, I can't live without intuitive program design that kinda makes comments unnecessary.
In the grand scheme of things, all that's important to me is:
* Can I read it off the bat, come back to it later, and remember what this function does and what created that callback?
* Is it efficient?
Ironically, having said this, you'd probably laugh at me since I mentioned efficiency and Python in the same comment, but I digress; something like Python with a little more control over variable instantiation would be nice.
You may well find Haskell complex, but it's certainly not _unmaintainable_ - in fact, IMO the biggest advantage of Haskell is that its strong type system makes it almost impossible to break code that's worked before, without the compiler detecting the problem. Thus one can confidently change APIs without the risk that some dependency then ends up with bugs in production.
And ultra-compact, point-free code is actually rather frowned upon in the Haskell community. This video strongly overstates the similarities in practice to APL.
The parentheses aren't even needed in the APL solution, making a cool looking function with the max reduce at one side and the min scan at the other
Sigh :facepalm: I definitely should have noticed that
it's worth noting that the problem being solved has a real-world interpretation: If the numbers are a time series of prices (like daily prices for a stock), this gives the maximum profit you could make if you bought and sold at the optimal times (assuming you can't short-sell). So if you are evaluating some trading algorithm this gives the upper bound on performance.
This is also solved in one of the "Q for mortals" videos.
I was shocked when I first saw the APL version of the Game of Life: a game that took me ~100 lines to code in Python, written in a single line.
How do you even produce those characters on your keyboard? Is typing a function call really so taxing that reducing it to a single character is the difference maker? I don't understand how this kind of language can be easier to read/write or reason about.
I think you missed the point. The guy is talking about why he thinks a certain language is amazing in his opinion. He thinks that way because he loves the math and all behind combinatory logic.
I agree with you that it might not be practical, but the guy thinks it's cool and he likes the concept and loves that an existing langauge has all this combinatory logic in it. That's why he made this video.
At least that's how I understood the video anyway ¯\_(ツ)_/¯
@@ravimaithreyregulagedda2158 It's possible, yeah. We're left to guess at the motivations of the author since they weren't clearly stated.
@@0dWHOHWb0 Back in the days when APL was cool, you had special keyboards and unique encodings (well before Unicode). So you probably just use a custom keyboard mapping. I don’t remember the count, but there aren’t that many glyphs in APL. Conceptually, it’s most like the broad use of symbols in mathematics in general; you’re writing a mathematical expression, not a story.
Hipster lang; they don't use this for anything.
It's not that difficult to have certain key presses produce other characters. Vim, for example, makes setting up this kind of scheme pretty trivial, but it can also be done at a system level very easily on Linux and Windows.
Also, J, made in the '90s by the creator of APL, uses ASCII characters, so fear not. (:
Combinatory logic gameplay:
Combinatory logic lore: "Somebody I Used To Know remix played over this accelerated video"
Thanks for this video, quite interesting introduction to combinatory logic and array programming.
this sounds interesting, tell me more...
This is going to be a good one, I can feel it
Was it?
@@echoptic775 I think they are still recovering from the digression
How does changing variable names from clear words to random meaningless syllables make the code more readable?
It doesnt
I wonder if Clojure's macros can be used to introduce combinators 🤔
Your strategy is overly complex, no?
maxDiff :: [Int] -> Int
maxDiff = snd . foldl' go (maxBound, -1) -- foldl' comes from Data.List
  where go (lo, diff) x = (min lo x, max diff (x - lo))
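For readers who don't know Haskell, here is a rough Python equivalent of that single-pass fold (with an explicit guard added so that equal elements don't count as a positive difference):

```python
from functools import reduce
import sys

def max_diff_fold(nums):
    # Carry a pair (lowest value so far, best difference so far).
    def go(acc, x):
        lo, diff = acc
        return (min(lo, x), max(diff, x - lo) if x > lo else diff)
    return reduce(go, nums, (sys.maxsize, -1))[1]

print(max_diff_fold([7, 1, 5, 4]))   # 4
print(max_diff_fold([9, 4, 3, 2]))   # -1
```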
I'm a Haskell beginner, this was my solution:
maxDiff [] = -1
maxDiff (a:as) = mD as a
mD [] _ = -1
mD (a:as) m = if a > m
then max (a-m) (mD as m)
else mD as a
I love your podcast! I didn’t realise this was your channel.
Audio is distorted again. Reduce input volume by 10-15% to alleviate the issue.
Yea, I thought I had done the adjustment prior to recording but apparently something got reset.
> At 18:17 "In the future, I will give a full either 40-60 or 90 min talk all about Combinatory logic and Combinators"
Please tell me we live in that future and that video exists somewhere and I just didn't find it 🙏
ruclips.net/video/JELcdZLre3s/видео.htmlsi=_Twr1l55167AE-Wt
@@code_report Thank you for the link! This was a great video!
I would love your opinion as a C++ developer: it's easy to admit that array programming is extremely elegant and terse, but I wonder if the elegance comes at the cost of performance? I haven't caught up with all the ArrayCast podcast episodes, so apologies if it has been addressed there, but at this point I'm still unsure whether array programming mixes well or poorly with performance; the "naive" examples I came across seem to favor terseness over efficiency, but I don't know if that's a general tendency.
Thank you. I finally get what combinators are _for_ not having seen it explained in a way I have understood as an engineer until now.
Thank you, this is the first time I have a good idea of what the S Combinator is all about...
Sharing the same argument again, without being forced to mention it twice... Neat...
Imperative programming is like a recipe. You follow the steps and out comes the result at the end.
Functional programming is like an assembly line for your data, where each step computes, transforms, or combines the data to get to the end product.
I feel that the recipes of imperative programming are easiest to read and follow, because you can easily digest it step by step and see the intermediate results.
If you do simple things like map, filter and reduce and an occasional zip, functional programming can still be very readable for most programmers, but it becomes way more complex once you start using the combinators. And I don't mean that those can't be understood. I mean if you can see at a glance what it does without too much effort, which helps maintainability.
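To make the contrast concrete, here is the same computation in Python twice: once as a step-by-step pipeline of map/filter-style operations, and once as a condensed, combinator-flavored one-liner:

```python
from itertools import accumulate
from operator import sub

nums = [7, 1, 5, 4]

# Pipeline style: every intermediate result has a name and can be inspected.
running_mins = list(accumulate(nums, min))           # [7, 1, 1, 1]
diffs = [n - m for n, m in zip(nums, running_mins)]  # [0, 0, 4, 3]
increases = [d for d in diffs if d]                  # [4, 3]
answer = max(increases, default=-1)                  # 4

# Condensed style: same result, but nothing to inspect mid-way.
answer2 = max(map(sub, nums, accumulate(nums, min))) or -1
```

Both give 4 here; which one counts as "readable" is exactly the judgment call described above.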
Well, as with everything, I think one becomes good at a thing with practice. After some time you'll probably be able to quickly think, read and type with combinators, but the problems arise as soon as you need to show the code to someone; there's a pretty high chance it will take them quite some time to understand. If it were taught in schools/unis, then there would probably be no problem.
@@utof the beauty of industry: you will need to show it to someone as you cannot maintain code forever, and your boss will not pay for that person to sit around learning a new way to think.
I hardly understand 5% of what's going on here, but I feel amazed and quite smart after watching these vids. LMFAO. Seriously, the world of computing is mind-blowing. So-called "developers" like me don't even know 1% of what goes on behind the scenes when we call a built-in function or use tooling like Git or SDKs. Thanks mate for making these vids. Hope you're keeping backups; we don't know when WW3 ends everything.
7:55 - for the python example, isn't it better to use math.inf instead of sys.maxsize?
Possibly, but it's not type-consistent. sys.maxsize is like Haskell's maxBound::Int, except Python's int has no max value (it's arbitrary precision, like Integer). It's not great as Python code goes anyhow; I might e.g. write max(max(nums[i:])-min(nums[:i]) for i in range(1,len(nums))), which may not be as algorithmically efficient but avoids naming new magic values and probably runs faster.
Oooooh a very interesting comparison! Love your channel
APL was the very first programming language that I learned at the age of 8, taught to me by deaf friends. The next was BASIC. Imagine that!
Galaxy brain
Probably easier to get into at a younger age because much of maths is still weird glyphs at that stage, so it wouldn't seem so out of left field as it does for adults.
Would you do a Julia overview? It looks like a jack of all trades and master of a few.
So, how to open a file using APL?
I don't get why people think the weird Unicode symbols are a boon.
The S combinator explained at 17:32 reminds me of functions as monads: you can chain a binary function x onto a unary function y, so in Haskell it would be something like y >>= x. I believe the same is achieved by treating functions as applicatives, with x <*> y (the solution in this video).
You probably already treat functions as monads. That's what Reader is. Reader a b is pretty much the same as a -> b, under the hood
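For readers without Haskell, the S combinator is easy to define directly in Python. This is a sketch (helper names are mine) that pairs each element with its running minimum and subtracts, roughly the shape of the applicative solution being discussed:

```python
from itertools import accumulate

# curried S combinator: S f g x = f x (g x)
S = lambda f: lambda g: lambda x: f(x)(g(x))

# subtract one list from another elementwise
zip_sub = lambda xs: lambda ys: [a - b for a, b in zip(xs, ys)]
# prefix minimums, like Haskell's scanl1 min
running_min = lambda xs: list(accumulate(xs, min))

# S applies zip_sub to the input and to its transformed copy
diffs = S(zip_sub)(running_min)([7, 1, 5, 3, 6, 4])
```

Here `diffs` holds each element minus the minimum seen so far, ready for the max/filter step.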
7:40 100% disagree. Even though I knew what the variables were originally, I got immediately confused by these short variable names. It’s maybe to do with parsing mathematical formulas, which I am super bad at, but I feel like 1-3 word variables are much more readable than 1-3 letter variables. Except for i in a loop, but even there, index, position or location may be more appropriate depending on what you’re actually iterating over.
I have a lot of experience parsing math formulas and I still think the short variable names are worse!
Wasn't this whole video a joke?
@@31redorange08 I… don’t know. Maybe? I guess, yeah, when you think about it like that, it makes sense, but I can also see how people do find beauty in pure languages like this. I just thought that it wasn’t for me?
About the example problem:
I believe it might be faster to store two variables of the highest number and highest difference, and instead of comparing the difference between the min and the current number to the current biggest difference on every single step, we just compare the current number to the highest number.
Each time we find a new min, we compare the highest difference to the difference between the current max and min, and store the larger of those as the largest difference, then set the min and max to the new min we found.
So instead of doing a subtraction for every element in the array, we only do a subtraction each time we find a new minimum.
I have no idea how fast variables can be accessed and stored, so I don't know whether the increased number of variables increases compute time more than the reduction from fewer subtractions saves.
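A rough Python sketch of the idea described above (my own naming; the final flush after the loop handles the last open window, and subtraction only happens on a new minimum or at the end):

```python
def maximum_difference(nums):
    best = 0
    cur_min = cur_max = nums[0]
    for n in nums[1:]:
        if n > cur_max:
            cur_max = n          # no subtraction here
        elif n < cur_min:
            # flush the pending window before restarting at the new min
            best = max(best, cur_max - cur_min)
            cur_min = cur_max = n
    best = max(best, cur_max - cur_min)  # final flush
    return best if best > 0 else -1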
Before the functional language section I tried to solve the problem with Haskell myself. What surprised me is that my solution was completely different from yours.
I only used recursion and ifs, no fancy functions like scan, map, filter, reduce and combinators, but my code is still O(n).
It made me wonder which is faster for larger input arrays, I might check it out later.
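For comparison, here is what a recursion-and-ifs-only version might look like, sketched in Python rather than the commenter's actual Haskell (note Python lacks tail-call optimization, so this would hit the recursion limit on large inputs):

```python
def maximum_difference(nums):
    # recursion and ifs only, no scan/map/filter/reduce; still O(n)
    def go(i, cur_min, best):
        if i == len(nums):
            return best
        n = nums[i]
        best = max(best, n - cur_min) if n > cur_min else best
        return go(i + 1, min(cur_min, n), best)
    return go(1, nums[0], -1)
```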
Scan was new to me, but map and folds are a very standard part of the Haskell toolset
For a quick look at J I recommend "A Brief J Reference". In J, j is the symbol for complex numbers. Imaginary numbers are super fun in geometry.
"The expressivity of the single glyphs"? oh yea, totally sold me...I'm switching to APL, where do I buy a keyboard with all these expressive symbols?
5:00 Doesn't it make more sense to initialize max_dif to n[1]-n[0] if positive, otherwise -1, such that we increase the difference as we find a larger one, instead of assuming it to be MAX_INT?
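A sketch of that suggestion in Python: seed the answer from the first pair instead of using a sentinel like MAX_INT (this assumes at least two elements; names are mine):

```python
def maximum_difference(nums):
    # initialize from the data itself, no sentinel constants
    best = nums[1] - nums[0] if nums[1] > nums[0] else -1
    cur_min = min(nums[0], nums[1])
    for n in nums[2:]:
        if n > cur_min:
            best = max(best, n - cur_min)
        cur_min = min(cur_min, n)
    return best
```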
So what you're saying is that some languages have more useful (for this problem) standard library functions than others. So the comparison is between standard libraries, not languages.
Still a valid discussion, but you're mixing syntax vs library
While that is true for Haskell vs Scala, it is not true for Haskell vs APL, as both languages have very distinct ways of laying out the code.
The standard library is an essential part of many programming languages.
Consider, for instance, the syntax of Forth: Words are ended by whitespace. How much semantics do you get out of that?
In Haskell, booleans are fundamental for several syntactic features, such as conditionals and boolean guards. Although the type is distributed with the compiler, it's part of the library, not special syntax.
@@0LoneTech for sure. That's my point: the comparison _should be_ inclusive of language features too, imo. It seems somewhat unfair to the languages that have advantageous tools as part of the language if the comparison is only between their libraries, as is done in the video.
I can't wait to see this showdown. 😁
Thanks in advance for making this!
Could you please comment on how you do those smooth text/code transitions? Thanks in advance.
What people like this guy often forget, is that code is read a lot more than it's written
I assume to people who live in APL, it's very readable. Those ten people can probably read this immediately.
From my experience it's not true.
It's often very fast to get the gist of an expression, but a little slower for me to confirm the exact workings of the chain. In comparison, C syntax is very noisy with redundant terminators, separators, names etc. Haskell is a fairly sweet spot, but forking of data often involves some awkward back and forth parsing.
Do you say that when you see Chinese and tell them they need to rewrite their language so people who don't speak Chinese can read it?
clojure solution:
(defn maximum-difference [nums]
(->> nums
(reductions min)
(map - nums)
(filter pos-int?)
(cons -1)
(apply max)))
I like it better than the Scala solution, and Clojure's map fn forgoes the need to zip, which is always nice. But point-free solutions like these tend to fall apart when you don't have consistent argument placement (notice my use of cons instead of conj). That can happen in Haskell and APL too, if it weren't for combinators to save the day. Clojure just solves this with more macros, which has its own tradeoffs.
I love APL . It helped me so much with Linear Algebra and Calculus.
I'm pretty sure APL was developed by our alien overlords.
Nah, Ken Iverson just wanted a version of maths that made sense: like, why does summing everything have its own symbol but subtracting everything doesn't? Why do min and max have no symbol even though we use them often? Why do we need to remember BEDMAS/PEMDAS, and what about when an expression has a Sigma, when do I do that? So Ken made something more consistent, with more symbols and fewer rules, and it was found that it could be made into a programming language easily enough.
What are the pros and cons of writing the algorithm in one paradigm vs the other? For example, the pros and cons of the Haskell implementation vs the Python one? For context, Python is my main language but I'm kind of interested in learning Haskell.
This is the first time I came to know of combinators. Interesting.
So APL is what people refer to as a write-only language
I think the very idea that only collections (aka arrays) of things should exist is brilliant; kudos to APL for introducing that.
Anyone ever open a door out of curiosity and instantly regret it? Time for another mug of coffee...
20:35 Twin Algorithms secret reference (and 21:58)
could have sniped his conference talk while he waited to give it in person
What's your opinion on J programming Language vs APL? honestly I've been giving both a bit of review, and seem to like J language more. What would you say - which one is "more powerful", if you could/can compare them?
Are functional and array orthogonal paradigms? Could you theoretically have a programming language that is both functional and array based?
Honestly I am far from understanding it, but congratulations on the great presentation and thank you for taking the time to explain it. There's a lot I still have to learn to fully appreciate it.
I would love to know how you manage your time. I'd love to start doing more outside of work, but you're like 10X and I'm struggling
Waiting⏳also your sicp course is insane
If BQN also had all the type theory stuff Haskell does, I'd be sold. Without it, although the glyph notation is cool, it's just a limited version of Haskell not worth learning (imo) - just learn Haskell at that point.
Or I guess I can ask: Other than the fact that it's more succinct on account of the glyphs, what does BQN offer that Haskell doesn't?
It's not just the glyphs, but also the syntax and chosen data types and polymorphisms. For instance, CBQN will happily build byte arrays rather than Double arrays to improve memory access patterns, simply because the data fits. When they advertise context free syntax (really, only local context), it's to assist both human and automatic readers.
It really isn't a version of Haskell. If anything, it's a version of APL, and the array oriented programming is a useful paradigm to learn, which you could translate to some degree into Haskell Massiv.
It does bug me that people keep comparing Haskell lists to arrays. There are arrays even in standard Haskell (Data.Array), though not in base.
Why is the order reversed in Scala and Haskell? A question from someone with zero knowledge of Haskell.
I'm not 100% sure about scala-I thiiiink that the '.' is like a method call on whatever is returned by the bit before. perhaps a bit like:
someNumber.square.add2
in other words, '.' ≈ "and then"
'.' in Haskell follows the math notation, such that:
add2 :: Num a => a -> a
add2 = (+2)
square :: Num a => a -> a
square x = x * x
(add2 . square) 10 == add2 (square 10)
(add2 . square) 10 == 102
(square . add2) 10 == 144
which is the same order as math notation:
(f . g) x == f(g(x))
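The same ordering can be demonstrated in Python for readers more at home there (compose is a hand-rolled helper; Python has no built-in composition operator):

```python
def compose(f, g):
    # like Haskell's (.): right-to-left, mathematical f(g(x))
    return lambda x: f(g(x))

add2 = lambda x: x + 2
square = lambda x: x * x

assert compose(add2, square)(10) == 102  # add2(square(10))
assert compose(square, add2)(10) == 144  # square(add2(10))
```

Scala-style method chaining with '.' reads left-to-right instead, which is why the two look reversed.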
Why auto maximum_difference(std::vector nums) -> int, instead of int maximum_difference(std::vector nums)? Why by value instead of by const-ref?
Also, you probably should do for (const auto& elem : nums), even though the copy here doesn't make a difference... Also, how's performance with those "min scans", differences and other transformations? You could implement those with e.g. std::transform but I figure it'd be slower compared to the explicit loop to do all that.
I would like to learn more about this combinatory logic stuff.
23:26 use of uncurry (zipWith (-)) . (id &&& scanl1 min) might match APL better, I think.
That is probably what Klingons use to program.
Working in the embedded world (C, Assembler) with some tool programming in Python, C# or Java and some "field trips" into web backends in PHP, it really makes me wonder why someone would need or even enjoy that (no offense) and where that is used outside of academic contexts.
Is Conor Hoekstra the child of Edward Haskell and Edsger Dijkstra?
7:08 What is the point of returning auto in that function?
It is not returning "auto"; that is not a function, it's actually a lambda. The lambda is defined as name = (...inputs...) -> return_type { body }. Thus, "maximum_difference" is a variable holding a lambda, and to avoid writing out the whole type, as with any other C++ variable declaration, he used the "auto" keyword.
@@josee.ramirez4305
?? A lambda must have a capture list, so it would be [] () -> int { return 10; }.
Besides, this function returns the result of std::max, so it returns int.
MATLAB:
res = max(nums(2:end)-nums(1:end-1))
If all(res
It's too esoteric to me to be honest haha, but it seems really cool to know how to program in a language like this.
And I have 2 questions about Array Programming (or APL):
1 - To code in APL do you need a specific IDE? Because I don't even know if it's possible to type some of the characters that I saw.
2 - And what kind of problems is this type of language targeted at? Like, in which scenario would I choose array programming instead of a normal approach (like C++)?
1) You can do it with plugins + dyalog in vscode.
2) I have zero idea... why you'd ever use it
1) Dyalog APL has a nice built-in REPL with all the character set, and also they have the RIDE IDE which is nice too.
2) In the past APL has been used for teaching maths, and for computational physics. There isn't a hell of a lot of material for the second part, but I can share a 70pg report from the 80s that does some serious physics calculations (Newtonian gravity, Einsteinian gravity, Schrödinger equation, Feynman path integrals) I've found. If you've got a problem that can be modelled using matrices (linear algebra), then APL and J are perfect for it.
Also, J seems much more equipped and powerful for anything related to mathematics and data. I hope modern APLs will catch up.
This is definitely really cool. But for me there is always a question: how do you get things done in such languages? What's the purpose? How difficult would it be to write, for example, an autodiff library in APL? Or train an ML model (GPT-3, say)? Process tabular data from a CSV? Implement a scientific algorithm that should be as efficient as possible?
Processing and extracting data from CSVs is just about the perfect use for functional and array languages. They excel at things where the input is simple to quantify and the result can be expressed as a series of transformations on that input, so reducing a set of rows and columns into some meaningful value you want to extract would look much simpler than an imperative solution.
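For instance, here's a small Python sketch of that shape: the CSV data and column names are made up, but the pipeline is the video's max-difference computation run over a parsed column:

```python
import csv
import io
from itertools import accumulate

# made-up CSV data; the point is the pipeline shape, not the values
raw = "day,price\nmon,7\ntue,1\nwed,5\nthu,6\n"
prices = [int(row["price"]) for row in csv.DictReader(io.StringIO(raw))]

# parse -> running minimum -> elementwise difference -> reduce with max
best = max(p - lo for p, lo in zip(prices, accumulate(prices, min)))
```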
Smullyan and Priest write some of the most accessible logic books.
I don't study much in this area of logic, but I'd definitely pick through the Smullyan text if I wanted a quick comprehensible dive. Good recommendation.
11:00 Scala code, python equivalent:
max(-1,*(n-m for m,n in zip(list(reduce(lambda x,y: x+[min(y,x[-1])], nums, [sys.maxsize]))[1:], nums) if n-m))
19:30 Haskell code, Python equivalent (no INT_MAX, but Python doesn't have an S combinator, so this is the best I can do):
max(-1,*(n-m for m,n in zip(reduce(lambda x,y: x+[min(y,x[-1])], nums[1:], nums[0:1]), nums) if n-m))
Update: Just saw atrus' comment. here is pythonic way:
max(map(sub, nums, accumulate(nums, min))) or -1
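For completeness, that one-liner depends on two standard-library imports; a runnable version might look like:

```python
from itertools import accumulate
from operator import sub

def maximum_difference(nums):
    # nums[i] minus the running minimum; the max is 0 for a
    # non-increasing list, so `or -1` supplies the sentinel
    return max(map(sub, nums, accumulate(nums, min))) or -1
```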
Such a great presentation!
Smullyan's bird book will arrive most likely on 2021-12-18. I'd ordered it well before I saw this video. I'm happy to see that you recommend it too :D!
Hi, personally many of these languages are not my cup of tea. APL cannot even be typed on a normal keyboard. But I liked seeing what I could do in C# here.
I am able to go the imperative way as well as the functional way, which is called LINQ.
I had to write the ScanLeft part myself, and my yield-return version turned out to be the nicest, because that way you still benefit from LINQ's lazy execution efficiency.
But the rest was quite easy. For Map you use Select and for Filter you use Where.
The nicest thing is that Zip is available as well, and it accepts an extra input array directly, so I basically get something similar to that S combinator solution from Haskell, even though the code looks more like the Scala code in the video.
Your videos are cool! Can I ask what you use to create these amazing slides and animations?
I have a tutorial here: ruclips.net/video/Vh3y1ela-_s/видео.html
How to even write APL on a keyboard?
Using more complicated key mappings. Classic APL keyboards have more glyphs printed on them, and they're entered with shift modes like Control. There are also workarounds like backtick prefixing (BQN uses \ prefixing).
Soooo.... The matrix is written in APL...
This talk demonstrates why functional programming is less widely adopted: the imperative approach is, more or less, “do what the problem description says to do”, the functional approaches require reinterpreting the problem in terms of the concepts that are well supported by the language.
Except it's not true. In fact, the imperative approach is the opposite of the declarative approach, so it is always about "how to do what the problem description says" and not "do what the problem description says". Otherwise it's not an imperative approach anymore. The reason why it _looks like_ the imperative approach does whatever the problem description tells you to do is that resources like LeetCode, HackerRank etc. most of the time provide problems designed _specifically_ for an imperative approach. They fit the imperative approach well, and programmers who use the imperative approach face them often. However, I'd argue there are tons of tasks that look clean and understandable with a functional approach and ugly with an imperative one. But that's not the most important thing. The most important thing is that in the real world no one will ever give you a ticket that says "I want to find the max difference between two elements in an array". It's just a low-level implementation detail which will never exist in the functional world in the first place. So it does not matter which approach you are using: you will always choose the low-level algorithm that looks clear and understandable in the project's paradigm, and it will most likely look horrible in any other paradigm.
As someone who knows a little bit of Haskell, I can assure you there is a solution in Haskell that looks way more like the imperative solution.
Here he abused some of the more confusing aspects of the language.
Elegance !== Legibility
How do you even type APL on a keyboard????
What do you think about GNU APL for a beginner?
Isn't Lisp an array programming lang?
Not in my books. Lisp is almost its own paradigm, but languages like Clojure and Racket definitely belong in the functional paradigm IMO.
What are your thoughts on the C++ standard trying to become more expressive, like the inclusion of ranges-v3? I've also seen a proposal for a new C++23 composition operator for regular (imperative) functions as well as ranges, views and lambdas. I believe the proposed operator was |>, as in x |> f(y), where f is a function and x is passed into f as an argument alongside y. I was curious about your thoughts. Awesome video though.
I like your run-through. Very good example and good work. Personally I like languages which are multi-paradigm. In our team we use D, and I try to get my colleagues to use other parts of the D language, especially the OOP, or OOOP (Over Object Oriented Programming).