Great video and congratulations on not getting lost doing all those re-writings by hand! Anyway I think the video title undersells what you're actually achieving here: this is not just making a particular Haskell program faster, this is potentially making any program that uses this DSL faster by ensuring that the optimizer can eliminate layers of abstraction.
Sorry to see Richard go, but delighted to see a video from Alexis! I've been a big fan of yours, so to speak, for some years now. "Parse, don't validate" has stuck with me for clearly articulating something quite fundamental and important.
Re: ideas, you briefly mentioned arrows in the video, and I recall you were at one point working on a GHC proposal adjacent to them - perhaps a short introduction to/tutorial on arrows would make for a good topic? I'm sure I'm not alone in thinking that they look interesting, but I've never been able to really get my head around them.
I've watched the Haskell Deconstructed series, and there was a part about Yampa, which utilizes arrows. I think that's a good place to start getting familiar with arrows. But be warned: I'm not an expert, and I haven't used arrows myself. Still, you might be interested in this material.
link:
ruclips.net/video/-IpE0CyHK7Q/видео.html
The simplicity in the end of using the rewrite rules was very nice. I felt a bit lost for the majority of the video, though, because I didn't know why we were manually inlining all this stuff. Maybe you explained it early and I missed it, but only near the end did I realize, "Okay, we're simulating the inlining steps that GHC takes, so that we can diagnose where it gets stuck, and hopefully then we can give it some more hints to get it over the hurdle(s)." I guess it wasn't obvious to me that "Inline everything we see" is a good approximation of GHC's process. So I would have appreciated more early emphasis on the goals or rationale of the steps we were taking.
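To make the "simulating GHC's inlining" idea concrete, here is a toy illustration (not the video's actual code; all names are made up) with each by-hand step recorded as a comment:

```haskell
module InlineByHand where

-- Hypothetical definitions, purely to illustrate "inline everything".
compose :: (b -> c) -> (a -> b) -> a -> c
compose f g = \x -> f (g x)

addTwo :: Int -> Int
addTwo = compose (+ 1) (+ 1)

-- Simulating the simplifier by hand:
--   inline compose:  addTwo = \x -> (+ 1) ((+ 1) x)
--   beta-reduce:     addTwo = \x -> (x + 1) + 1
-- GHC's built-in constant-folding rules may then simplify this
-- further, to \x -> x + 2.

main :: IO ()
main = print (addTwo 40)
```

The manual transcript in the video is doing the same thing at a much larger scale, which is why it is a reasonable way to find the step where GHC gets stuck.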
Very cool. Perhaps a video about your native delimited continuations patch, runtime details, or garbage collection?
You made me like Haskell even more. +1 subscriber
will get it. Just don't get burnt out. Whenever you need a break, take one.
Nice video! Maybe go into coercions and the Wanted constraints system.
Thanks @lexi_lambda for this video. It was an amazing watch. Of course, I need to watch it again, and maybe start with some simple examples myself.
It's almost like it would be nice to have ghc to have a debug feature to perform a full inline operation that can be used for tasks like this. Seems like quite a bit of work for something mechanical.
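There is no single "fully inline this for me" button, but GHC's documented dump flags get close to the debug view being wished for here. A minimal sketch (the module contents are just a placeholder):

```haskell
-- Compiling this module with -O2 makes GHC print the final simplified
-- Core (-ddump-simpl) and announce each rewrite rule that fired
-- (-ddump-rule-firings); -dsuppress-all trims the output to be readable.
{-# OPTIONS_GHC -ddump-simpl -ddump-rule-firings -dsuppress-all #-}
module Inspect where

double :: Int -> Int
double x = x + x
```

That shows you the *end result* of all the inlining at once, though, rather than the step-by-step trace the video reconstructs by hand.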
Excellent video, thanks! Have you tried using HLS to automate inlining the definitions?
Using the retrie plugin you could get quite close, but it operates at the module level. We would want it to work at the expression level.
Do you mean tactic programming via HLS? Or maybe typed or regular Template Haskell could be used.
But then the question becomes: do we analyze every module like this to optimize?
What a great walkthrough, I learned a lot! Thank you!
One of the parts I didn't get: how did you know to stop inlining when you saw nested cut/paste calls? How could you tell GHC wouldn't be able to simplify those?
That was awesome, and please, please do more. Learned a lot, thank you!
Great video! I learned a lot from this.
Would love to see a video on coercions :).
Would using the GHC plugin nat-normalize help to get the rules firing here?
Explained in great detail! Thank you so much!!
Alexis, thank you for a very instructive hands-on video.
In the first part of the video, before rewrite rules, you substituted expressions with their evaluated values.
Would it be helpful to have rewrite rules at the value level to facilitate such substitutions, in addition to the rewrite rules at the type level you used in the second part of the video?
Or an optimizer that would decide that an expression can be evaluated at compile time?
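For what it's worth, GHC's ordinary RULES pragmas do operate at the value level, and GHC already constant-folds much literal arithmetic at compile time on its own. A hypothetical sketch (all names made up) of a value-level rule doing exactly this kind of substitution:

```haskell
module ValueRules where

-- A general function, plus value-level rules that substitute a cheaper
-- expression whenever an argument is statically known, much like the
-- by-hand substitutions in the first part of the video.
power :: Int -> Int -> Int
power 0 _ = 1
power n x = x * power (n - 1) x
-- NOINLINE keeps power from being inlined before the rules can match.
{-# NOINLINE power #-}

{-# RULES
"power/0" forall x. power 0 x = 1
"power/2" forall x. power 2 x = x * x
  #-}

main :: IO ()
main = print (power 2 7)
```

Under -O, a call like `power 2 7` can be rewritten to `7 * 7` before any code runs; with optimization off, the rules simply never fire and the recursive definition is used.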
Thanks for the very nice video! One thing I am wondering: do these rewrite rules also deal with the case where the length is something like `(0 + 0) + (0 + 0)`? It seems none of the rules would apply immediately, which is kind of a shame.
You could probably add a rule that simplifies toward right associativity: rewrite (x + y) + z to x + (y + z). Applying that to your example, you’d get 0 + (0 + (0 + 0)) and then the other rules would apply.
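If the additions in question were ordinary term-level values, the suggested rule could be written directly as a RULES pragma. This is purely an illustrative sketch; if the `+` in the video is type-level, RULES pragmas don't apply and a typechecker plugin would be needed instead:

```haskell
module Reassoc where

-- Illustrative only (not from the video): rewrite (x + y) + z to
-- x + (y + z), so that rules matching the head of the expression,
-- such as "0 + y = y", get a chance to fire afterwards.
-- Caveat: rules whose left-hand side mentions a class method like (+)
-- are fragile, because GHC may inline the method before the rule matches.
{-# RULES
"plus/assoc" forall x y z. (x + y) + z = x + (y + (z :: Int))
"plus/zeroL" forall y. 0 + y = (y :: Int)
  #-}
```

Repeatedly applying "plus/assoc" to `(0 + 0) + (0 + 0)` yields `0 + (0 + (0 + 0))`, after which "plus/zeroL" can eliminate the zeros one by one.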
Good tutorial. One question about screen recording: how do you do it? lol
That differential equation arrow library is amazing. Where can I get it?!
Nice video. Which IDE is this?
If we need to read and cause changes to "core" code, why do we bother with the Haskell front end language in the first place?
Well, most of the time we do not need to read or optimise the core. And in this case, the optimisations apply to the library and any program using it.
@@garethrowlands can we predict when, how often, and how difficult it will be to optimise via core?
@@tricky778 well those are good questions. Most of the time, programs go fast enough without reading down into core. Your mileage may vary!
What's the name of the colour scheme?
Thank you SO much for this video :) it really helped me out!
How does one download or use the script from your description?
A better video title: @lexi_lambda: How to make a Haskell program 5x faster with 16 lines of comments
Great work Alexis, I'd love to see you do more of these videos - if you could do some on the work you're doing to improve GHC's performance, I think a lot of people would find that really interesting too.
Those are not comments; it's a compiler pragma.
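For readers who haven't seen the syntax: a RULES pragma sits inside `{-# ... #-}` braces, which look like block comments but are read by GHC. The classic example from the GHC users guide:

```haskell
module MapFusion where

-- The braces resemble a comment, but GHC reads this pragma and will
-- rewrite any occurrence of `map f (map g xs)` into a single traversal
-- during optimization.
{-# RULES
"map/map" forall f g xs. map f (map g xs) = map (f . g) xs
  #-}
```

So "16 lines of comments" is a fair joke: syntactically they live in comment-like braces, but semantically they direct the optimizer.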
I'm learning this, guitar, and 3D animation at the same time. What am I doing to myself?
Can bugs happen in rewrite rules, or are they statically checked for correctness somehow?
Bugs can indeed happen. Quoting from the GHC users guide (which is a great resource by the way): "GHC makes absolutely no attempt to verify that the LHS and RHS of a rule have the same meaning. That is undecidable in general, and infeasible in most interesting cases. The responsibility is entirely the programmer’s!"
Ouch, wow, quite a risky move from a software engineering perspective...
@@hellwolf4liberty well, you could always write tests for it, like any code. There are lots of things the compiler can't check, in general, so you have to test. If the lack of compiler proofs were really *that* bad, no Ruby program would ever work. You should see the tests I'm writing for a small TypeScript app; it would be great if the compiler could prove everything, but I still have to write tests.
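One concrete way to do that, sketched with hypothetical names: GHC will never verify a rule like `{-# RULES "sum" sumSpec = sumFast #-}`, but the rule's soundness claim is just that both sides agree as functions, which is an ordinary testable property:

```haskell
module RuleCheck where

-- Two functions a hypothetical rule claims are interchangeable:
-- a simple specification and a faster accumulator version.
sumSpec :: [Int] -> Int
sumSpec = foldr (+) 0

sumFast :: [Int] -> Int
sumFast = go 0
  where
    go acc []       = acc
    go acc (x : xs) = go (acc + x) xs

-- Check agreement on a handful of inputs; a property-testing library
-- like QuickCheck would generate these automatically.
main :: IO ()
main = print (all (\xs -> sumSpec xs == sumFast xs) inputs)
  where
    inputs = [[], [1], [1 .. 100], [-5 .. 5]]
```

This doesn't prove the rule correct (strictness and laziness differences can still bite), but it catches the everyday mistakes.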
A guide on how to write 16 lines within one hour. It gets pretty tough when it dives into the compiler.
But never mind. I'm not sure the video was helpful at my level of understanding, but such peeks into the optimization machinery certainly demystify some misconceptions.
Thanks for the video!
43:20 Wouldn't that transformation be forbidden for the optimizer to do, since it turns something explicitly marked as strict into lazy (and even completely eliminated)?
I was thinking that too. Since the rewrite rules do fire, that seems to imply that somehow it does still reduce in a similar manner. I wonder what's going on here.
All this inlining and simplification by hand is really tedious. We should definitely have a haskell-language-server plugin for this. There actually was something called HaRe (Haskell Refactoring) a while ago...
Another thing is that she is second-guessing the compiler. We don't actually know if all these inlinings and simplifications actually fire...
So difficult, I don't understand it.
norman is that you
yup
Teaching is your talent!
equalize it
Be consistent, make mistakes, experiment.