fantastic talk!
Thanks! I don't know if you caught the reference to your primitives-as-algorithms concept that you introduced in the past, but it helped anchor the organization of the primitives quite nicely for some of the more complex ones.
Very interesting talk. I've never had it laid out so clearly before that to use array languages effectively you need to find the right data representation for mapping your domain to the existing primitives.
I enjoyed this talk. I've always admired APL for its concision, which is not an end in itself but valuable for what it facilitates: drawing connections between different domains. Iverson's principles were a great reference and hit the nail on the head.
At the end of the day, programming languages are for human beings; translating them into machine code is the business of the compiler. This simple fact has (imo) misled many into thinking that a good programming language has to be simplistic and undemanding, and oriented toward a programmer with scarce knowledge of mathematics. Is there a place for that sort of language? Absolutely. And it may be true that a highly optimized compiler could overcome any performance difference. But I think, ultimately, the producer is as important as the product. Grasping these transformations and operations with our own minds is worthwhile for our own sake. It feeds our creativity and helps us understand, as far as we can, what we are creating. And besides: we live in an era in which the machine "intelligence" we are capable of producing is already effectively a black box to our own understanding. Facility with languages like APL could be our best chance of tackling that obscurity, at least from the purely technical side (there is still the problem of proprietary source code).
Thank you for the talk! I'm always interested to learn more about reading array langs and being able to understand them at a glance. This talk reiterated how you get used to it and learn to zoom out semantically.
What is the name of the talk referenced at the end, when the speaker was asked about debugging APL? I'd like to learn more about how you solve issues with your code. A debugger is nice, but is there some sort of linting? Can you do Clojure-like REPL things? It almost looks like you could, but I'd like to hear more.
The referenced talk is at ruclips.net/video/uInwQEMYAP8/видео.html. The Dyalog APL debugger allows stepwise debugging, live editing of source code, and REPL-like interaction with the session at any point in execution, as well as a few more things like watches, breakpoints, and so forth. It will also support token-by-token debugging in the future, so that you are not restricted to debugging a line at a time.
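To make the breakpoint part concrete: stops can even be set programmatically with ⎕STOP. A minimal Dyalog session sketch, assuming a defined function named Foo (the name is purely for illustration):

      2 ⎕STOP 'Foo'      ⍝ pause execution when line 2 of Foo is reached
      ⎕STOP 'Foo'        ⍝ query which lines of Foo currently stop
2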
@@arcfide Thank you! That sounds really nice because you can inspect everything you need.
This is my first time hearing about APL, and a lot of the points resonate when I'm writing SIMD-purposed stuff. It has me extremely interested to write whole programs this way.
But how does the performance of APL compare to decently optimized C? I'm planning on building an insanely performant terminal emulator, but does APL have non-negligible amounts of overhead compared to "data-oriented" C? I see that things are ref-counted, which would usually incur a decent overhead. I'm assuming that most operations can be optimized by the compiler to use some portable SIMD? It would be super interesting to compare one of those SIMD parsers (e.g. JSON, URL, etc.) to an APL version. Either way I'm interested in APL; I always love venturing into new styles to see what I can learn. I can definitely see how the style/mindset that APL reinforces can be of great benefit. Great talk.
Work on comparing SIMD parsers and the like is ongoing. How APL performs against other languages depends a lot on what you mean by "decently optimized" in those other languages. I don't think APL presently has a strong performance advantage over other languages purely when it comes to handling serially dependent, event-driven loops. However, APL often enables a more maintainable, easier-to-program architecture around data-parallel computations, which in turn can mean that you can more ergonomically create performance-oriented code for a large number of problem domains. You can do the same thing in C, but it's often much harder to maintain and keep correct over time.
Commercial APL interpreters gain a significant amount of their performance from the ability to perform the majority of their computation inside the primitive functions, which can be highly tuned and special-cased at runtime for all manner of data shapes and ranges. This allows a single primitive to internally encode many different high-performance algorithms for different types of data. In a C program where you don't use the same sort of abstractions, you'd need to maintain many potential algorithms to retain the same sort of general performance. Where APL can help even in crafting C programs is by serving as a specification language that is well suited to making good abstraction points for a C implementation.
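As a small illustration of that dispatch (a Dyalog session sketch; which algorithm actually runs is an interpreter implementation detail, not something the language guarantees):

      v ← ?1000⍴100      ⍝ 1000 random integers in the range 1..100
      v[⍋v]              ⍝ sorted ascending with the single grade primitive ⍋;
                          ⍝ for a small-range integer array the interpreter is
                          ⍝ free to pick, say, a counting sort here, and other
                          ⍝ algorithms for other types and shapes

The source stays the same one-liner either way; the specialization happens behind the primitive.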
Compiled APL can reduce the interpreter overheads involved with the arrays (reducing ref-counting, for instance, or reducing allocation costs), but even the Co-dfns compiler is much less mature than the commercial interpreters currently available. So in practice, unless you are using emerging technology like Co-dfns, you will be on one of the major commercial implementations and need to be aware of the array/interpreter overheads.
Doesn't Octave work exactly the same way as this, basically like Lisp but with more prefab symbols? Anybody know?
Octave is more akin to MATLAB.
At the CPU opcode level, does any of this matter?
Yes, due to memory and cache locality and coherence.
And Marshall Lochbaum (an implementer of APL and now BQN) has talked at length about implementing these idioms and atoms using fast SIMD and vectorized operations, leading to nanosecond-scale performance on the CPU even for large datasets.
He talks about how these kinds of performance comparisons are useless, and how, if you do want to go faster, you just hand-roll architecture-specific and kernel-specific assembly.
Yes, because APL maps better to how CPUs actually execute code.
I don't trust people that write code that looks like something you find in an Area 51 spacecraft 😜
So the technique to remove explicit loops is to use implicit loops, with hard-to-read code?
genius
Tell me you don't understand array langs without telling me you don't understand array langs
@@marcosgrzesiak4518 The same thing is possible in any language with any kind of polymorphism/generics.
I agree that the compiler can optimize better here, but it's not some special technique; it's just what the APL language is.
So the technique to remove explicit loops is to use implicit loops in a specific language.
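For concreteness, this is the kind of implicit loop being argued about (a Dyalog session, as a sketch):

      v ← 1 2 3 4 5
      v × v              ⍝ elementwise product; no written loop
1 4 9 16 25
      (0=2|v)/v          ⍝ keep the even elements; a boolean mask drives it
2 4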
the use of the power operator (⍣) at 35:07 is an explicit while loop, though a very concise one
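For reference, this is roughly how ⍣ gets used (a Dyalog session sketch; ⍣n repeats a fixed number of times, while ⍣= repeats to a fixed point, the closest analogue of a while loop):

      2∘×⍣10⊢1           ⍝ apply "double" exactly 10 times
1024
      1+∘÷⍣=1            ⍝ repeat x←1+÷x until the value stops changing
1.618033989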
@@meme_gawd so what
you tell me 🤷 you're the one who wrote explicit loops
cringe talk
Why do you say this? I liked the talk, although I think the speaker could have used more layman's terms.
@@JayDee-b5u I liked the talk too, and I liked the enthusiasm about shifting one’s perspective to play to the strengths of the underlying hardware.
My main gripe was what you said.
When asked (twice) about the “shape” operator, I wish he had said something like “this is a 3 by 3” or “this is a 5 by 3 by 3”.
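(That really is all it would take; in a Dyalog session, the shape primitive ⍴ answers exactly that:)

      ⍴ 3 3⍴⍳9           ⍝ a 3 by 3 matrix
3 3
      ⍴ 5 3 3⍴⍳45        ⍝ a 5 by 3 by 3 array
5 3 3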
@@JayDee-b5u And really, that’s my only criticism.
When someone says or does something that bothers me, I take a lot of time to think about why it did.
I agree that a language should change the way you think about something, and I really appreciate that the speaker brought APL to the forefront of my mind.