Rust Functions Are Weird (But Be Glad)

  • Published: 22 Dec 2024

Comments • 379

  • @_noisecode
    @_noisecode  Год назад +163

    The noinline thing in the C++ code* has caused some confusion, so let me clarify: I used noinline to simulate an example of a higher order function that is not inlined, to show how function pointer types hurt codegen (and lambdas help it) in template instantiations that aren't fully inlined. Template != "definitely inlined"; not being inlined is common for e.g. recursive higher order functions, or doozies like std::sort. While answering comments about this I came up with a clearer example, that doesn't use noinline, where in a reasonable implementation of a commonplace algorithm (in-order binary tree traversal), using a function by name generates worse code than using a lambda:
    godbolt.org/z/eqz8EbWM6
    The lambda line generates zero code whatsoever (since due to the lambda type it instantiates walk() in a way that is statically known to dispatch to f(), which does nothing, so the whole thing optimizes away), whereas the function pointer line generates an out-of-line instantiation of walk() containing indirect calls.
    Here's the Rust version, on the other hand: rust.godbolt.org/z/8xf87G5WT It's a bummer the compiler isn't able to optimize out the control flow completely (note GCC has the same issue with the C++ version), but the important thing is that _inside_ the monomorphizations of calculate, there are no indirect calls to the passed-in function, since its type statically resolves to code that does nothing, for both the named function and closure case. The two monomorphizations produce the same code and then they get deduplicated in the final output as I showed at the end of the video.
    I think this example is pretty compelling and if I could replace the one I used in the video with this instead, I definitely might. Even though Rust's codegen isn't a dramatic "home run" over C++'s since rustc doesn't optimize out the pointless recursion, it still supports my argument that Rust's type system lends itself to good codegen in non-inlined higher order functions, whereas C++'s type system works against it.
    *I also disabled inlining of calculate() in the Rust code, for the record.
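    For anyone who'd rather read code than click through, here is a minimal Rust sketch of the traversal example described above (names and structure are mine, not the exact godbolt code): `walk` is generic over its visitor, so both a named function and a closure monomorphize to static calls.
    ```rust
    struct Node {
        value: i32,
        left: Option<Box<Node>>,
        right: Option<Box<Node>>,
    }

    impl Node {
        // In-order traversal, generic over the visitor type.
        fn walk(&self, visit: &impl Fn(i32)) {
            if let Some(l) = &self.left {
                l.walk(visit);
            }
            visit(self.value);
            if let Some(r) = &self.right {
                r.walk(visit);
            }
        }
    }

    fn f(_x: i32) {} // does nothing, like the visitor in the example above

    fn main() {
        let tree = Node { value: 1, left: None, right: None };
        tree.walk(&f);        // fn item: its own zero-sized type, static dispatch
        tree.walk(&|x| f(x)); // closure: also its own type, same static dispatch
    }
    ```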

    • @tk36_real
      @tk36_real Год назад +1

      hey, this sounds stupid, but did you delete my reply?

    • @_noisecode
      @_noisecode  Год назад +8

      Definitely not--I didn't even see one come through. Apparently YouTube automatically/inexplicably deletes replies sometimes on a whim: www.reddit.com/r/youtube/comments/11wgrlc/youtube_keeps_deleting_my_comments_for_no_reason/
      Sorry about that. Maybe try leaving it as a top-level comment?

    • @_noisecode
      @_noisecode  Год назад +9

      Oh... did it have a godbolt link in it? Reading some more, I guess YouTube might be prone to auto-delete comments with external links (my comments with links seem to all work but I haven't seen anyone else post a comment with links, and another person saying their comment got deleted was supposed to be posting me a link). If you post it again (and really sorry for the hassle and trouble) maybe we can try the thing where you post the link so it doesn't look like a link (i.e. 'godbolt dot org slash 12345').

    • @0xCAFEF00D
      @0xCAFEF00D Год назад +3

      Can you produce a Rust version of the example you came up with?
      I don't think it's fair to disable the C++ channel of communicating caller context and then state that the Rust way of doing it is better because you did that.
      Also video corrections are usually pinned. I think this comment is important to understand the video. I got this comment on the second page.

    • @tk36_real
      @tk36_real Год назад

      @@_noisecode hi, yes exactly - it had a godbolt link. I'll retry later and thanks for checking :)

  • @NoBoilerplate
    @NoBoilerplate Год назад +528

    Another great video, thank you Logan!

    • @CielMC
      @CielMC Год назад +17

      Hi Tris

    • @NoBoilerplate
      @NoBoilerplate Год назад +15

      @@CielMC 😁

    • @astroorbis
      @astroorbis Год назад +5

      hey there! love your vids man, especially the HAM radio one.. getting my license soon :)

    • @NoBoilerplate
      @NoBoilerplate Год назад +6

      @@astroorbis You're too kind! The Rust community are so friendly, I wouldn't have got going without them! ❤

    • @astroorbis
      @astroorbis Год назад +3

      ​@@NoBoilerplate Rustaceans rise up! For some reason, Obsidian and Rust feel very close although they don't have any actual relation.. might just be your videos :shrug:

  • @HMPerson2
    @HMPerson2 Год назад +51

    regarding rustc merging functions: function merging is done by LLVM and clang can do it for C++ too, it's just off by default. you can manually turn it on with `-Xclang -fmerge-functions`

  • @OrbitalCookie
    @OrbitalCookie Год назад +126

    Rust: same function copy pasted? Different type.
    Javascript: null is an object? Sure.

    • @SuperQuwertz
      @SuperQuwertz 10 месяцев назад +3

      Null is not an object. The typeof thingy is a bug ^^

    • @Leonhart_93
      @Leonhart_93 8 месяцев назад +2

      @SuperQuwertz It's not a bug, it works exactly as intended. "typeof" is meant to always classify things as some data type; it always returns something. So if null is not an object, then what is it? Because it certainly is not "undefined". The only remaining type left for it is "object".

    • @SuperQuwertz
      @SuperQuwertz 8 месяцев назад

      No, just null @@Leonhart_93

    • @lobovutare
      @lobovutare 7 месяцев назад +4

      @@Leonhart_93 It's not a bug, sure. It's a quirk. What Javascript is known for.

    • @Leonhart_93
      @Leonhart_93 7 месяцев назад

      @@lobovutare So you mean an intentional convention.

  • @N....
    @N.... Год назад +114

    In C++ you can pass functions as non-type template parameters, and then the compiler can optimize it properly. You can think of this as passing the function by value, just as a template parameter instead of a function parameter.

    • @_noisecode
      @_noisecode  Год назад +52

      A few people have said this, to which my reply has been that you could, but then you're swimming upstream from convention. Idiomatic higher-order functions in C++ (e.g. the whole STL algorithms library) are written to take functions as regular parameters, not NTTPs. So this isn't really a generalizable piece of advice for how to avoid this pitfall; it only works if the function you are calling has this interface, which the vast majority (in my experience) do not.

    • @_noisecode
      @_noisecode  Год назад +57

      Besides... I have to say that "just write your higher order functions using NTTPs to get codegen" _kind of_ just supports my argument that in C++ you often have to do the non-obvious thing in order to get good codegen. It's the same as saying "wrap everything in lambdas"; it's something extra you have to do to get the compiler to generate the code you want, since the default behavior for passing named functions by value is to give you indirect calls. In the video I never said you _couldn't_ get the good codegen, in fact I literally demonstrated that you can with lambdas, I just argued that you have to jump through hoops to get it. This feels _exactly_ like jumping through a hoop to get it.

    • @Max-ui5gc
      @Max-ui5gc Год назад +21

      Also, if you pass the function as a function pointer in Rust (not a generic implementing an Fn trait), you get the same dynamic call as in C++. That's also a case where the easier solution (no generics) is less performant.
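      A small sketch of that difference (function names and signatures are approximated from the video, not copied exactly):
      ```rust
      // The generic version monomorphizes per function, so the call to `bar` is
      // static; the fn-pointer version is one shared body that calls indirectly.
      fn calculate_generic<F>(bar: F) -> i32
      where
          F: Fn(i32) -> i32,
      {
          bar(1) + bar(2)
      }

      fn calculate_ptr(bar: fn(i32) -> i32) -> i32 {
          bar(1) + bar(2)
      }

      fn f(x: i32) -> i32 {
          x + x
      }

      fn main() {
          // `f` has its own zero-sized type, so this call can resolve statically.
          let a = calculate_generic(f);
          // Here `f` is coerced to a plain `fn(i32) -> i32` pointer: one shared
          // instantiation, indirect call (unless the optimizer sees through it).
          let b = calculate_ptr(f);
          assert_eq!(a, b);
      }
      ```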

    • @N....
      @N.... Год назад +6

      @@_noisecode It's easy to make a template class with a call operator that calls a template parameter though, and the syntax would be less of a chore than a lambda

    • @_noisecode
      @_noisecode  Год назад +13

      @@Max-ui5gc I'm comparing/contrasting how the type systems have different consequences for generic instantiations in the two languages. Of course if you explicitly say you want indirect calls by taking a function pointer parameter, you will get them.

  • @blt_r
    @blt_r Год назад +169

    Also if you write:
    pub fn f_u32(num: u32) -> u32 { num + num }
    pub fn f_i32(num: i32) -> i32 { num + num }
    they will also be optimized to be the same function, because of the clever design of two's complement

    • @_noisecode
      @_noisecode  Год назад +44

      Super cool! I hadn't tried this myself. Of course [for completeness] this relies not only on the functions boiling down to the same machine instructions, but also `i32` and `u32` having the same ABI. Putting them in a struct you can end up with different functions with identical machine code, since the functions have different ABIs. rust.godbolt.org/z/Mxnvs5qae
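      Roughly what that struct variant looks like (illustrative only; the godbolt link above is the real example):
      ```rust
      pub struct Signed(pub i32);
      pub struct Unsigned(pub u32);

      // The bodies compile to the same instructions, but the two wrapper types
      // can give the functions different ABIs, so they may remain separate symbols.
      pub fn f_signed(num: Signed) -> Signed {
          Signed(num.0 + num.0)
      }

      pub fn f_unsigned(num: Unsigned) -> Unsigned {
          Unsigned(num.0 + num.0)
      }
      ```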

    • @Originalimoc
      @Originalimoc Год назад +5

      But in debug mode, won't it panic on overflow?

    • @blt_r
      @blt_r Год назад +22

      @@Originalimoc In debug mode, two functions will never be optimized into one. In debug mode there are pretty much no codegen optimizations at all, the compiler just generates whatever and gives it to you.
      But in release mode, if you use num.checked_add(num), then yes, this optimization indeed won't be possible because checking for overflow is different for signed and unsigned integers.
      EDIT: you also can use `-C overflow-checks=yes` to check for integer overflow even in release mode.
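      For example (my own illustration), the checked versions can no longer be merged because the overflow condition differs:
      ```rust
      pub fn f_u32_checked(num: u32) -> Option<u32> {
          num.checked_add(num) // fails only past u32::MAX
      }

      pub fn f_i32_checked(num: i32) -> Option<i32> {
          num.checked_add(num) // fails past i32::MAX and below i32::MIN
      }
      ```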

    • @博麗霊夢-p4z
      @博麗霊夢-p4z Год назад +1

      I think it's because these two functions generate exactly the same assembly code, and the types are erased at runtime, so Rust doesn't care which one fits the type

    • @angeldude101
      @angeldude101 Год назад +3

      "Clever design"? You mean the "elegant mathematics of modular arithmetic"?
      0xFFFF_FFFF + 1 = 0, therefore 0 - 1 = 0xFFFF_FFFF = -1. (Why people don't make the next step to 0xAAAA_AAAB * 3 = 1, therefore 1 / 3 = 0xAAAA_AAAB = ⅓, even though it's backed by the exact same mathematics, is beyond me.) Also, 0x8000_0000 + 0x8000_0000 = 0, therefore 0x8000_0000 = 0 - 0x8000_0000 = -0x8000_0000. It's just like 0 in that it's its own negative! In fact, it's the _antipode_ of 0 on the number wheel.
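      Those identities are easy to check in Rust with wrapping (mod 2^32) arithmetic, for example:
      ```rust
      fn main() {
          assert_eq!(0xFFFF_FFFFu32.wrapping_add(1), 0);           // -1 + 1 == 0
          assert_eq!(0u32.wrapping_sub(1), 0xFFFF_FFFF);           // 0 - 1 == "-1"
          assert_eq!(0xAAAA_AAABu32.wrapping_mul(3), 1);           // "1/3" * 3 == 1
          assert_eq!(0x8000_0000u32.wrapping_add(0x8000_0000), 0); // its own negative
      }
      ```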

  • @henrycgs
    @henrycgs Год назад +8

    ooooh. wow. I would have never thought of that. in other words: rust's functions allow for static dispatch. this enables the compiler to do much smarter optimizations since it knows exactly what function is being passed as an argument. brilliant!

  • @blouse_man
    @blouse_man Год назад +9

    How you have such a deep understanding of these topics is just amazing; I hope to be able to take simple topics to such great depths too

    • @nottellinganyoneanything
      @nottellinganyoneanything Год назад

      have you tried his C++ Godbolt but with a different compiler? Try it with clang.

  • @yoshiyahoo1552
    @yoshiyahoo1552 Год назад +24

    I'm not sure if this is within the scope of this channel, but making a video on the difference between hardware threads and software threads and how rust affects and is affected by that difference would be really interesting!

  • @BlackM3sh
    @BlackM3sh Год назад +6

    6:05 I think the best way to think about closures is as tuples, or perhaps a tuple + fn-pointer, with the tuple being your captured variables. When I run into annoying problems with the borrow-checker, etc., it helps me to think of closures as tuples instead.
    A closure like `|x: i32| x + y` would be the tuple (i32,) since it captures `y` which is an i32. A closure which captures nothing is the empty tuple () which is zero sized, just like a normal function, so it can be coerced to one. Any other tuple is non-zero sized so needs additional stack space in addition to any potential function pointer making them incompatible.

    • @_noisecode
      @_noisecode  Год назад +7

      I think this is a great mental model for closures. Similar to mine but not identical; in my mental model (strongly influenced by my mental model of C++ lambdas), closures are a struct containing each capture (if any), and that struct has a single method that contains the code inside the closure. That method is what's invoked by the call operator.
      Note for the record that in terms of layout, the representation of closures in memory is completely unspecified by Rust.
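      In code, that mental model looks roughly like this (a hand-written, purely conceptual sketch; the real generated struct and impls are compiler internals):
      ```rust
      // A rough desugaring of `let local = |x: i32| x + y;` under the
      // "closure is a struct of its captures" mental model.
      struct LocalClosure<'a> {
          y: &'a i32, // captured by reference (no `move`)
      }

      impl<'a> LocalClosure<'a> {
          fn call(&self, x: i32) -> i32 {
              x + *self.y
          }
      }

      fn main() {
          let y = 10;

          // What you actually write:
          let local = |x: i32| x + y;
          // What it behaves like conceptually:
          let desugared = LocalClosure { y: &y };

          assert_eq!(local(5), desugared.call(5));
      }
      ```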

  • @SolraBizna
    @SolraBizna Год назад +7

    I already knew Rust's particular "weird" function type approach-but I only already knew it because of those excellent, educational error messages! I'm sharing this with all my Rust students, because it explains it all much better than I have been.

  • @Turalcar
    @Turalcar Год назад +4

    6:25 This might be nitpicky: y is immutable, but `local` captures a reference to y, not a value, unless you use "move". This isn't crucial since it's likely to be inlined anyway, but it's something to keep in mind.

    • @_noisecode
      @_noisecode  Год назад +1

      Fair nitpick! You are correct.

  • @protodot2051
    @protodot2051 11 месяцев назад +28

    So THAT's why lambdas can only be assigned to auto-typed variables. The reference is so jargon-heavy, I just couldn't figure out what a "unique unnamed non-union non-aggregate non-structural class type" even is.

    • @Delfigamer1
      @Delfigamer1 5 месяцев назад +12

      _Unique_ -- means that no two instances are the same. Each time you write a lambda, the compiler creates for it a fresh new typename. Even if you literally copy-paste the same lambda twice, they will still have two distinct types.
      Compare with e.g. "Result", or in C++ "std::variant" -- when you write this type in multiple different places, they all end up referring to the same type.
      _Unnamed_ -- means that you cannot refer to this type in source code directly, unlike the "i32" and "Box" mentioned above, or "fn"-types constructed in the video.
      _Non-union non-aggregate_ -- means that this type is not a union and not an aggregate. :) In C++, a "union" is, well, the type that you get with the *union* keyword. An "aggregate" is a common word for *class* and *struct* -- they are practically the same thing in C++, the only difference being that *class* implicitly starts with "private:" visibility, while *struct* starts with "public:". As for explicit access specifiers, fields, methods, virtuals, nested types etc -- they are allowed in both *class* and *struct*, and behave in exactly the same way -- so, when talking about these types, they're often referred to with the common word "aggregate".
      In Rust, there is no direct counterpart for C++ *union*, but you can see Rust *enum* as a close enough approximation. The aggregate type, however, is present directly, though there is only one -- it is *struct*.
      _Non-structural_ -- that means that the object of a lambda does not split into constituent parts. This is in contrast to aggregates, where you can access their individual fields and base sub-objects. As far as the user's code is concerned -- a lambda object is a monolithic black box of bytes.
      _Class type_ -- in this context, it just means the opposite of "primitive". Primitive types have a special place in C++, these are types like int, float, raw pointers, SIMD vectors -- they don't have constructors or destructors, are standard-layout, and have some other specific properties. A lambda type, on the other hand, has a member operator(), and is allowed to have a non-trivial destructor (to properly dispose of its captures).

    • @protodot2051
      @protodot2051 5 месяцев назад +6

      @@Delfigamer1 Yeah see, the problem is that you needed that entire wall of text just to explain that one sentence. But thanks for missing the point.

    • @mort44444
      @mort44444 4 месяца назад +1

      good explanation! thanks

    • @anonion6821
      @anonion6821 5 дней назад

      @@protodot2051 hey i mean, now you know the jargon

  • @mattshnoop
    @mattshnoop Год назад +41

    This guy has very quickly become the subscription whose next video I look forward to the most

  • @aleksanderkrauze9304
    @aleksanderkrauze9304 Год назад +61

    Great video! In this little series you started recently, you talk about things that I already (mostly) know. However, you show much deeper implications than I would have found out myself, which makes me rethink those subjects in a different light. For example, in the case of your previous video, I knew that returning &Option<T> is a bad idea and that the more idiomatic way would be to return Option<&T> instead. But I did not know all of the reasons *why* that is the case. That is all to say that I love your videos and hope to learn much more from you. Cheers!

  • @christianjensen7699
    @christianjensen7699 Год назад +2

    It is nice to watch a Rust video from someone that knows C++ and does good comparisons, instead of a JavaScript developer that misspeaks every time they talk about C or C++. It seems like everyone on the internet comes from front end web development.

  • @AzuxDario
    @AzuxDario Год назад +3

    You can get 2 lines of assembly, with moving 20 to eax, in C++ with a normal function if both functions (f and the template) are defined as constexpr.

  • @7h3d4sH
    @7h3d4sH Год назад +2

    You are an excellent teacher of programming concepts. The style of your teaching in this video worked very well for me personally. Very high quality content - keep it up. And thank you!!!

  • @tenthlegionstudios1343
    @tenthlegionstudios1343 Год назад +14

    I still think higher kinded types would be useful in rust if it could still be performant and safe. Having types based on type signatures. There is a world where having the same signature be the same "type" is very useful. Like defining a functor, monad etc... I know rust is moving towards Generic Associated Types, which helps solve some of these issues. But, in my mind this function type you are mentioning is just a function ptr more or less. Still an interesting video, thanks!

    • @AndreiGeorgescu-j9p
      @AndreiGeorgescu-j9p 10 месяцев назад +1

      Yea rust will always be a low level, non-expressive systems language unless they add HKT and ideally dependent types too. Atm it's just a better C, which isn't much of an accomplishment

    • @Justin-wj4yc
      @Justin-wj4yc 4 месяца назад

      @@AndreiGeorgescu-j9p jc cringe

  • @mikkelens
    @mikkelens Год назад +29

    These rust videos are wonderful. I've barely written any c++ code at all, but it's wonderful to look at how these immensely different approaches to similar-ish languages pan out at compile time.

  • @IllidanS4
    @IllidanS4 6 месяцев назад +1

    I don't really think that the performance issue in C++ has to do with the fact that functions are "structurally typed". Sure, having a unique type for each function would simplify the optimization, but I'd fully expect the compiler here to acknowledge that you are passing the same function each time. It does so for normal values, so why not functions?
    Regardless, it is relatively uncomplicated to fix this without changing too much of your code, here's just quick code that I wrote that should help:
    template <auto Func>
    constexpr const inline auto monomorphic = [](auto&&... args)
    {
        return Func(std::forward<decltype(args)>(args)...);
    };
    Using monomorphic<f> instead of f gives you the benefit of the lambda type, while not sacrificing much readability or convention.

  • @mzg147
    @mzg147 Год назад +11

    I still don't get why it is necessary for functions, but not for e.g. integers. Wouldn't it be so great and efficient if the numeral "2" had the type 2? And there was a trait i32 that 2 would implement and then every computation on literals would be done using the type system? And arguments from the outer world would be "dyn i32"?
    I feel just like this using the functions in Rust. I think I get that the advantages of treating functions like that are far greater than the advantages of the "i32 as trait" approach, and this is the main argument for it. But it makes the whole thing unnatural for me; I would prefer consistency in the type system. The optimizations could be done behind the scenes, not at the expense of design.

    • @_noisecode
      @_noisecode  Год назад +7

      Thanks for this comment, I think this is a super interesting point. I honestly think there's a decent argument to be made that the literal "2" should indeed be of type `2`, and only coerced into i32 when it's "type erased" in some way. Some type systems in some languages do exactly that (TypeScript for example). In that mental model, `2` is to `i32` as `{f}` is to `fn(_) -> _`.
      From what I can tell, the main downside of extending this to all Rust literals is just the absurd amount of generic monomorphizations that would result. It would mean the compiler would need to statically stamp out a completely different implementation of something like `foo(_: impl std::i32)` for each of `foo(0)`, `foo(1)`, `foo(2)`, etc. Seems sorta like that's simply a bridge too far, although I don't hate the purity, consistency, and elegance of the idea.

    • @mzg147
      @mzg147 Год назад +4

      @@_noisecode Rust doesn't have type coercion in itself, only the traits have it, right? In TypeScript, every type is like a set (of objects of that type) so it makes sense, and sets can be made subsets of each other, but in Rust every object is of exactly one type, making the whole thing asymmetrical...

    • @_noisecode
      @_noisecode  Год назад +4

      Sure, that's a fair distinction to draw between TypeScript and Rust's type systems. For the Rust case, I was thinking that a type like `2` would be a ZST that uniquely identifies the value of 2 irrespective of type, but it coerces into a concrete integer type when you nudge it.
      let x: i32 = 2; // the value of type 2 is coerced/erased to i32
      let x: u8 = 2; // coerced/erased to u8
      This would be analogous to how fn item names are coerced to fn pointer types if you give them a little nudge, as I show in the video:
      let x: fn(_) -> _ = f; // the value of type {f} is coerced to fn pointer

  • @yaksher
    @yaksher Год назад +6

    It's worth noting that if you want function pointer behavior instead in Rust, you can just specify the argument as a function pointer and not as implementing a trait. Better yet, the compiler is still smart enough to see through it at compile time when it can, even if the generated assembly makes indirect calls.
    I.e., your exact example with calculate's signature replaced with fn calculate(bar: fn(i32) -> i32) -> i32 still has main just return 20, but the code generated seems to be literally identical to the C++ example's code. This is probably usually a bad thing though. Still interesting to think about the trade off with function pointer types vs function trait types. I would probably just shorthand the signature you had as fn calculate(bar: impl Fn(i32) -> i32) -> i32 most of the time so it's shorter to write.

    • @_noisecode
      @_noisecode  Год назад +2

      Totally agree about doing `impl Fn` for calculate() in real code--I wrote it "longhand" with the `where` clause (and the C++ version with the `requires` clause) for the video so you can visually match up the different key elements of the two versions better (and so the animation between the two looks cooler ;) ).
      And yep, if you intentionally opt in to the indirection (by concretely taking a function pointer type as a parameter) you get identical codegen to the "bad" C++ version. I prefer this story over C++'s... you get the nice clean optimized version unless you specifically say you want the indirect version (which you might, for e.g. reducing monomorphizations if a very large number of them is causing problems).
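      For reference, the two spellings are the same generic parameter underneath; a rough side-by-side (my naming):
      ```rust
      fn calculate_longhand<F>(bar: F) -> i32
      where
          F: Fn(i32) -> i32,
      {
          bar(1) + bar(2)
      }

      // Equivalent for callers, apart from not being able to name F with turbofish.
      fn calculate_shorthand(bar: impl Fn(i32) -> i32) -> i32 {
          bar(1) + bar(2)
      }
      ```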

  • @virkony
    @virkony Год назад +1

    14:28 Hmm... I thought that CPUs do have indirect branch prediction and that GCC does have -findirect-inlining, which should work for templates and code within the same module.

    • @_noisecode
      @_noisecode  Год назад

      -findirect-inlining is an optimization that tries to _fix_ the issue I present in the video: C++'s type system causing indirect calls in higher order functions when you use a plain function name. Needing to turn on a special optimizer switch is kind of the same as needing to wrap the function in a lambda; it's an extra step you need to take to get good codegen because C++'s language semantics give you indirect calls in HOF by default, whereas Rust's give you static dispatch by default and you opt in to dynamism, rather than having to opt in to good codegen like in C++.

  • @Alkis05
    @Alkis05 Год назад

    6:20 Well, it could support currying. When passing y to that closure it could wrap it inside a function and compose it with the closure. Or something along those lines

  • @adamszalkowski8226
    @adamszalkowski8226 9 месяцев назад

    Thank you for the hint. Now that you mention it, function item types are like unit structs that implement the Fn traits. That's how they're zero-sized.
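    A quick way to see the zero-sized part (an illustrative snippet, not from the video):
    ```rust
    fn f(x: i32) -> i32 {
        x + x
    }

    fn main() {
        let item = f;                // the unique, zero-sized fn item type of `f`
        let ptr: fn(i32) -> i32 = f; // an ordinary fn pointer

        assert_eq!(std::mem::size_of_val(&item), 0);
        assert!(std::mem::size_of_val(&ptr) > 0); // pointer-sized on typical targets
        assert_eq!(item(2), ptr(2));
    }
    ```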

  • @leddoo
    @leddoo Год назад +9

    that's really interesting!
    though the "functional programmer" in me absolutely hates it :D
    the compiler could technically do that in the background, without exposing it in the type system - simply by monomorphizing based on the identity of the function arguments to the call. (this wouldn't work for functions assigned to locals, but some rather simple analysis during type inference could identify those cases, where the variable can only refer to exactly one function.)
    i was also quite surprised by how monomorphizations are supposedly skipped, if the functions are identical modulo optimization. this would inherently sequentialize code generation and optimization (to some degree). the observed effect, that `f` and `f_prime` used the same machine code could also be explained by function deduplication done by llvm.
    however this would be after monomorphization and would therefore not reduce compile times. you could perhaps test this by generating a huge number of functions and making calls to a generic function for each of those functions, then compare the compile times for identical/different functions.

    • @_noisecode
      @_noisecode  Год назад +5

      That's an interesting idea! I wonder indeed how plausible it would be for the compiler to do this monomorphization trick behind the scenes while telling the white lie that all functions do have the same type. Good food for thought!
      On the other hand, then we're back in "functions are callable because they just... are" territory. A point I make in the video is (paraphrased) that calling a function is an _operation_ you perform on a function value, so the correct abstraction for it is a trait, just like any other operation you do to a generic value (e.g. std::ops::Add). I also like that the type system protects you from unnecessary dynamism; you only get a function pointer if you opt in to one (although to be fair, it is easy to accidentally opt into one, since fn items coerce to fn pointers so easily).
      And you're right, I glossed over the point that the function deduplication stuff doesn't help with the compile time issue, just the final binary size. It does indeed have interesting consequences re: sequentializing codegen. Thanks for the comment. :)

    • @ХузинТимур
      @ХузинТимур Год назад

      ​@@_noisecode >telling the white lie that all functions do have the same type
      I think, such thing contradicts the idea of Rust being explicit and low-level.

  • @shane2799
    @shane2799 2 месяца назад

    18:49 it seems like the compiler should have been able to optimize `example::f` out of the generated assembly, since `example::f` is never used. Or did the compiler have to keep it, because `example::f` is `pub`?

  • @officiallyaninja
    @officiallyaninja Год назад +2

    16:30
    Isn't this also an issue with Rust? You advocated for this exact kind of thing in the Arc video, writing uglier code for performance, so is it really an issue? I don't really think the lambda is a huge deal anyway, and having function types be pointers does kinda make them more ergonomic to work with in situations where you don't care about performance

    • @_noisecode
      @_noisecode  Год назад +4

      I guess I wasn't really saying that Rust has no footguns at all--my point was that C++ is an absolute minefield of them. The C++ community tends to admit quite plainly that almost every corner of the language has poorly chosen defaults (the design of lambda expressions being one place where that's much less true). In Rust, when you write the pretty version, you're much more likely to find that the language design is working alongside you to make that version the most efficient version, too.
      Now you're right that I advised you to write it the ugly way in the Arc video--point taken. :) That video is ultimately still celebrating a fantastic Rust design choice though--the fact that wide pointers + slices let us effortlessly repurpose Arc for dynamically-sized reference-counted strings--something that would take considerably more error-prone work with e.g. std::shared_ptr from C++--so why not just do it?
      Also I don't _super_ agree that C++'s defaulting to function pointers is more ergonomic. Rust fn items implicitly convert to fn pointers at the slightest provocation, so realistically if you do need to do function pointer stuff you won't run into any trouble. But when you don't, you don't pay for it (to borrow a phrase from a C++ fundamental design principle that it violates here).
      I really appreciate comments like this and the discussion that arises from them. Thanks for watching. :)

  • @MrEtilen
    @MrEtilen Год назад +1

    Great description. Thank you for taking the effort to produce such quality material!

  • @sp00kthebourgeois
    @sp00kthebourgeois 6 месяцев назад

    One of the biggest learning curves I've experienced, going from languages like Python or JS to Rust, is forcing myself to trust the compiler. I don't have to do anything convoluted; I just write code and the compiler does its job. This video is a good example of that

  • @isaaccloos1084
    @isaaccloos1084 Год назад

    Your last two videos were so good I was eyebrow-raising when I saw a new notification from your channel. Please keep up this great work 👍🏻

  • @topcivilian
    @topcivilian Год назад

    Google algorithm brought me here and...
    Immediately 'liked' and 'subscribed'
    Thanks for the content, my friend.

  • @ronminnich
    @ronminnich 3 месяца назад

    I've really struggled with Rust, these videos are helping me a lot

  • @rsnively
    @rsnively Год назад +23

    I would love an episode about Result types. I know there's a lot of cool stuff you can do with the ? operator and good functions for operating on them. I just struggle with all of the extra overhead of creating my own error types to return, combining different error types, and so on. Nine times out of ten I end up just reaching for Option because it mostly does what I want, even though I know it's not the right tool for the job.

    • @bobbybobsen123
      @bobbybobsen123 Год назад +1

      Using the anyhow crate simplifies this a lot, removing the need for specifying specific Error types

    • @yaksher
      @yaksher Год назад +1

      I feel like you could probably get away with `impl std::error::Error` or `Box<dyn std::error::Error>` as the error type most of the time.
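      A minimal sketch of that approach (the function is hypothetical, just to show the shape): `?` converts any concrete error into the boxed trait object for you.
      ```rust
      use std::error::Error;

      fn parse_and_double(s: &str) -> Result<i32, Box<dyn Error>> {
          let n: i32 = s.trim().parse()?; // ParseIntError -> Box<dyn Error> via From
          Ok(n * 2)
      }

      fn main() {
          assert_eq!(parse_and_double("21").unwrap(), 42);
          assert!(parse_and_double("not a number").is_err());
      }
      ```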

    • @jrmoulton
      @jrmoulton Год назад

      @rsnively take a look at the error-stack crate. error-stack adds a few things that remove boilerplate and make errors nice to look at but what I like about it is that it actually forces you to learn Result the hard way. Using anyhow and similar crates is really nice but you can get away without ever actually understanding the result type. When I was learning I read everything I could find and watched every video but I was still confused. I would think I understood it but then I would have a problem that made me understand that I fundamentally misunderstood. If this guy made a video it would probably be good enough but until he does learning Result the hard way is probably your best option (no pun intended)

    • @omg33ky
      @omg33ky Год назад

      Anyhow is a great crate for applications but for libraries I would recommend thiserror since it makes it easy to create actually different errors for the outside, not just return a single error type like you would if you used anyhow

    • @SimonClarkstone
      @SimonClarkstone Год назад +3

      And now, you got your wish. An episode about Result types was released about a day ago. Possibly due to your comment.

  • @CaptainOachkatzl
    @CaptainOachkatzl Год назад +6

    great video, makes me super excited for whatever you have planned for Fn, FnMut and FnOnce!

  • @lioncaptive
    @lioncaptive Год назад

    Logan narrates and explains the functionality so well.

  • @licks4vr
    @licks4vr 5 месяцев назад

    Analyzing the compiled code is such a great way to learn about this

  • @Voltra_
    @Voltra_ Год назад +2

    14:20 C++ compilers actually do just that. Same with C++20 coroutines despite being even more complicated and indirect.

  • @asdfasdf9477
    @asdfasdf9477 Год назад +1

    Good stuff, please keep making these.

  • @palapapa0201
    @palapapa0201 Год назад

    16:00 Why can't the compiler just optimize `calculate(f)` into a lambda so you don't have to do this manually?

    • @_noisecode
      @_noisecode  Год назад

      Compilers do have a lot of optimizations (e.g. inlining and cloning) designed to get this performance back. And they are often successful. The compiler can't just rewrite your code to use a lambda though, since that would be an observable semantic change to the code you wrote. For example, you could tell whether the optimization took place by printing out the type of F inside calculate(). As an 'observable optimization', it would need to be specially blessed by the C++ standard (a la copy/move elision) in order to be legal for compilers to do.

  • @johnjones8330
    @johnjones8330 Год назад

    Thank you for the great explanation, particularly with the comparison between passing a function and a lambda to calculate in C++, I found this really helpful.

  • @dahveed241
    @dahveed241 Год назад

    Dude your visual representation of how everything works is fucking amazing. Keep up the great work!

  • @codeofdestiny6820
    @codeofdestiny6820 Год назад

    Great video! Thanks for explaining this amazing topic! The only thing I would like to say is that it would have been better if you'd cleared up the whole inlining thing from the comments in the video itself. Keep going with the great content!

  • @MuscleTeamOfficial
    @MuscleTeamOfficial 4 месяца назад

    u keep putting these out imma keep watchin

  • @christianburke4220
    @christianburke4220 Год назад

    A high-quality channel with only 6.33k subs?
    nice

  • @ricardolima5206
    @ricardolima5206 5 месяцев назад

    Very nice video. Even more impressive that I actually understood what was happening.

  • @draakisback
    @draakisback Год назад

    Great video man, glad this showed up in my recommended feed even though I really don't need to learn rust since I've been using it in production for like 6 years now.
    I had to learn this the hard way when I first learned rust. I come from a functional programming background and when I started writing rust I wrote it like a functional language. I'm sure you can figure out some of the horrible side effects that came with me writing rust that way. I remember being extremely baffled by the fact that we had three different function types via the trait system and then becoming even more baffled when I realized that we actually had infinite function types since every function is its own type. Now of course having used it for 6 years now, I will say that it is a really cool system in practice. I do still have my gripes with the trait system, for example the fact that generics are effectively just trait arguments is still a bit annoying. As somebody with a category theory and lambda calculus background, it's a bit counterintuitive to have generics that don't really work like true generics and functions that don't really work like functions but this really only matters for maybe 1% of use cases in my experience.
    Anyhow, keep up the good work mate.

    • @jboss1073
      @jboss1073 Год назад

      " I do still have my gripes with the trait system, for example the fact that generics are effectively just trait arguments is still a bit annoying. As somebody with a category theory and lambda calculus background, it's a bit counterintuitive to have generics that don't really work like true generics and functions that don't really work like functions but this really only matters for maybe 1% of use cases in my experience. "
      Could you clarify, out of curiosity? How does Haskell get generics right and Rust gets them wrong by making them just trait arguments? How does it abide more to Category Theory than Rust? Thank you.

  • @dragonmax2000
    @dragonmax2000 Год назад

    This is dope. Totally awesome. Please keep on making these. So fundamental. :)

  • @nathanoy_
    @nathanoy_ Год назад

    gotta say: I love your content. Please keep it up!

  • @azaleacolburn
    @azaleacolburn Год назад

    Amazing video, great work! I’m sharing this with all my co-workers.

  • @asdfghyter
    @asdfghyter Год назад +1

    so, if i understood this correctly, if you want to avoid the monomorphizations in rust, you can explicitly cast your function to an fn pointer and so avoid the code duplication in the compiled binary at the cost of getting an indirect call?

    • @_noisecode
      @_noisecode  Год назад +1

      Yep! Although in my experimentation, even if you do this rustc will still sorta sidestep you sometimes and generate a fully monomorphized function even when you handwrote the indirection. I haven’t looked into why and it could be just because somehow it was smarter than whatever my experiment code was, and in “real” code it would keep the indirection in there.

  • @laundmo
    @laundmo Год назад

    2:09 i would really like to know which language has "signature = type id"

  • @flatmapper
    @flatmapper Год назад +4

    Do you use Manim for visualization?

    • @_noisecode
      @_noisecode  Год назад +4

      Yep! I always give credit in the description. :)

    • @flatmapper
      @flatmapper Год назад

      @@_noisecode amazing)

  • @pierrebertin4364
    @pierrebertin4364 Год назад

    Amazing videos man.. just discovered rust and your channel, loving it. Which tool are you using to build your videos? Super smooth and enjoyable.

    • @_noisecode
      @_noisecode  Год назад

      Thank you so much! I use the Manim library for the animations, check the description for more info.

  • @minirop
    @minirop 11 месяцев назад +3

    Your last point is very dubious (not to say misinformed). C++ does function merging (or "identical COMDAT folding") but it does so at link time, not compile time.
    The reason is that with Rust, when you compile your file, it's considered a complete crate (AFAICT), while C++ considers it just an object file and doesn't do anything that could break the public ABI (since it can't know how the code will be used, unlike the linker).

  • @Dash359
    @Dash359 Год назад

    Oh, man. Your videos are so good. How about a series dedicated to desugaring all of Rust's features: moving, borrowing, smart pointers, etc.? The performance implications of using those, and how the compiler optimizes them.

  • @maanaskarwa7934
    @maanaskarwa7934 10 месяцев назад

    Why is the function call f still there in the rust assembly code? I saw it has been completely removed from the C++ version after using lambdas. Is there a way to make this happen in rust as well?

  • @didles123
    @didles123 Год назад

    So one can regard functions as basically just unit structs (hence they all have their own unique type), which implement the Fn* traits based on their signature. This makes it easier to understand closures as structs whose fields are their captured variables.

  • @damonpalovaara4211
    @damonpalovaara4211 Год назад

    4:26 The first line saying "different fn items have unique types" is saying exactly what you want to add to the message.

  • @julionegrimirandola3375
    @julionegrimirandola3375 Год назад +2

    Great video!! It's pretty cool to see this kind of in-depth content about how types work, and thanks for the Compiler Explorer tip!!

  • @vanilla4064
    @vanilla4064 Год назад

    This is an awesome video! Loved learning more about rust's language design.

  • @MaxHaydenChiz
    @MaxHaydenChiz Год назад +5

    Interesting video, but I'd have appreciated it if you'd linked to your code so that we could see it on godbolt without having to retype it ourselves. I wasn't able to replicate your C++ results with my initial example and having to retype what you wrote exactly and then carefully look at what settings you were using in order to make sure that I was right about why we got different results was tedious.
    This is a compiler problem, not a language one. gcc does the correct optimization once you give the compiler permission to do full monomorphization by making both `f` and `calculate` be `constexpr`. Output is the same as the rust code, even on lower optimization levels. Works across multiple architectures too.
    I'm not sure why clang specifically has trouble with this code and can only optimize the lambda version. But this is a compiler issue and not a language one. (It could also be user error. I'm not that familiar with the details of clang code gen.)
    I'm also not sure why making `f` have static storage class would be expected to do anything here (and it doesn't change the generated code). I'm not even sure that your explanation of `static` is even correct in this context and it sounds like you mean to be talking about `constexpr`. (Though maybe in editing for brevity, some nuance involving `static` got lost?)

    • @_noisecode
      @_noisecode  Год назад

      Very valid feedback! I'll put some links to the godbolt examples in the description.
      Even considering your points, I do still argue this is a language problem. Not all code can be constexpr, and besides, constexpr is a language construct, not a compiler switch; my point stands that it's a language problem that the 'default' semantics don't let the compiler optimize your code completely (unlike the default semantics in Rust) and you have to manually opt in to semantics that do (like constexpr or lambdas).
      My point about f being static was that, normally, you expect a tiny static function to probably be able to disappear/be optimized out completely (note that the out-of-line definition of f DOES get optimized out completely when we wrap it in a lambda). But due to needing its address when it's used as a function pointer, it can't be optimized out in that version of the code. I just rewatched that section and I don't *believe* I misspoke in any way about the semantics of `static` functions.
      Thanks for watching and for the discussion. :)

    • @_noisecode
      @_noisecode  Год назад

      Godbolt links in description! Thanks again for asking, should've put them there in the first place.

    • @MaxHaydenChiz
      @MaxHaydenChiz Год назад +1

      @@_noisecode Thanks for updating it so quickly!
      FWIW, I played with this some more and it seems like clang and gcc treat "noinline" differently. Clang takes you quite literally and just refuses to do the optimization since it has the effect of inlining it and then eliminating the function as dead code. (What clang does is probably the thing that people generally intend when they use that attribute in real code.)
      If you get rid of that attribute, the optimization works as it should even with minimal optimizations enabled. Both clang and gcc will be able to treat the functions as constexpr without having to force it by declaring it specifically.
      So, it's not so much opt-in as opt-back-in (after opting-out). There are probably cases where constexpr is actually needed to trigger the optimization, but since clang and rust will use the same llvm analysis for that, I don't think you'll actually see a difference in practice.
      If you could construct one, I'd be interested in seeing it, but so far, I haven't managed it. I suspect that Rust would benefit if the compiler needed to do an alias analysis on the arguments of a higher order function that took multiple functions as arguments. But haven't been able to construct a working example.
      And since I can't actually think of a distinguishing case, I'm not confident that I can say that rust being nominally opt-out is "better" than c++ being nominally opt-in. That would depend on what opting out with the rust code looked like and how often you ended up with build time problems or other issues with degenerate monomorphization compared to the times where the annotation or code change in C++ was complex or non-obvious. We are already dealing with a corner case of a corner case at this point. So it's hard to say.
      As for `static`, what you said is correct. Putting it on that function tells the compiler that it won't be referenced from any other file. This shouldn't impact the optimizations being done, only whether the compiler *also* emits the code for the function itself or if it has permission to eliminate it as dead code if it is inlined everywhere in the current file. And this is what actually happens on both gcc and clang. Using `static` there isn't something you'd normally see in the wild, so it stood out as odd. But having rewatched, I don't think there's an error in what you said.

    • @_noisecode
      @_noisecode  Год назад

      This example optimizes down to nothing if you take away noinline, yeah, which is of course why I added it: I was trying to simulate a case where you have a higher order function that doesn't get inlined. One such function that's extremely common is std::sort (which typically doesn't get inlined). You don't have to manually opt out like I did to find common real world examples of compilers choosing not to inline function template instantiations, where then the types they are instantiated with matter for your codegen. (An out-of-line std::sort instantiated with a custom predicate that's a lambda will have that lambda inlined into its definition, whereas an out-of-line std::sort instantiated with a function pointer will have to make an indirect call for every element comparison).
      (Huge amount of codegen here but you can see the two instantiations of __introsort [line 56 and then 1437] are quite different--the function pointer one has lots of `call`s that the lambda one does not: godbolt.org/z/1Er3nf6bT Code for `greater` is also generated even though it's marked static since its address is needed by the function pointer instantiation.)
      I think constexpr is sort of unrelated to the matter at hand by the way. It may affect codegen in the way you're seeing because constexpr implies `inline`, but aside from that, a function being marked constexpr shouldn't affect how it's optimized when it's not being used as a constant expression (thus making its constant evaluation optional), except compilers _may_(?) be slightly more inclined to constant evaluate it (which isn't really an optimization and more of a language semantic thing that happens in the compiler frontend before anything gets to e.g. LLVM).
      `static` is very common on functions in .cpp files--so I'm not sure what you mean that it's not something you'd see in the wild. Often in C++ they're put in anonymous namespaces instead, but `static` is another very typical way to do it, and some coding guidelines (e.g. LLVM's in fact) actually require `static` instead of anonymous namespaces for internal helper functions. You're right that `static` won't affect how the function is inlined etc., just whether its definition gets eliminated as dead code. Apologies if I suggested otherwise.

    • @_noisecode
      @_noisecode  Год назад

      Here's a simpler and more compelling example:
      godbolt.org/z/7rd79T1oT
      Node::walk() traverses a binary tree, visiting each node with a predicate. There are no `noinline` attributes in sight. The call using the lambda (so the one where walk() is instantiated with a predicate that can be resolved statically based on type information) is optimized out completely and is nowhere in the generated code. The call using the function's name directly generates a bunch of code featuring indirect calls.

  • @Ciubix8513
    @Ciubix8513 Год назад

    Fascinating, I love learning stuff like this, keep it up!

  • @rsnively
    @rsnively Год назад +2

    Another banger

  • @retropaganda8442
    @retropaganda8442 Год назад

    I know I'm just adding more complexity to the matter, but my first approach would have been to write the C++ version as taking a function, not a typename, since this is what you want:
    template <auto F> int calculate() { return F(1) + F(2); }

  • @electra_
    @electra_ Год назад

    I'm guessing that if you wanted specifically to avoid monomorphization and do dynamic dispatch, you would simply pass a Box<dyn Fn()> instead of a generically defined function?

    • @_noisecode
      @_noisecode  Год назад

      Yep! Though I'd probably just pass a `&dyn Fn()` or a `fn()` if you just need to call the function (and not hang onto it) -- no need to pay for Boxing it up in that case.

  • @PierreThierryKPH
    @PierreThierryKPH 10 месяцев назад

    But what does this typing mean when you write a function like map() that takes an arbitrary function with some specific signature?

  • @TimDrogin
    @TimDrogin Год назад +1

    So, in Rust a function's type is essentially the identity of that particular function, while in C++ it is just a signature? That's some clever stuff going on there

  • @TCoPhysics
    @TCoPhysics Год назад

    Brilliant explanation. Thanks for sharing!

  • @alonbenjo7382
    @alonbenjo7382 Год назад

    I'm not a rust programmer so it seems weird: if f_prime is both public and has no label in the asm, and I call it from another file, how will the compiler know its name is f now?

  • @maninalift
    @maninalift Год назад

    Thank you. Your analysis is very clear and useful.
    Do you know how haskell typeclasses work? I'd be very interested if you did a comparison of those to Rust traits, in terms of how dynamic dispatch and monomorphisation work. Obviously Haskell doesn't tend to give guarantees about the binary representation of things in the same way as rust and C++.

  • @danser_theplayer01
    @danser_theplayer01 3 месяца назад

    "function f is type function f and function g is type function g; also I'm going to tell you exactly why I'm not going to compile this code"
    That's so cool, usually javascript call stack trace yells at me in V8 minified garble, not even human javascript.

  • @NickDrian
    @NickDrian Год назад

    Amazing description, keep it up!

  • @Pupperoni938
    @Pupperoni938 Год назад

    Forgive me, I'm from C++, but what about things like dependency injection, where it's desirable to have indirect calling? Is the solution there still to use function pointers?

    • @_noisecode
      @_noisecode  Год назад

      If you do want indirect calls, it's idiomatic in Rust to use something like `dyn Fn(i32) -> i32`, which is essentially a type-erased dealio that calls the function through a vptr. Basically it's just a more generalized function pointer that can also call closures with state. If you want to own the callable, a la std::function in C++, you typically use `Box i32>`. If you just need to call it but not own it (a la something like llvm::function_ref in C++) then you use it by reference, e.g. `&dyn Fn(i32) -> i32`.
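      Something like this, for instance (names are made up, just to show the shapes):
      ```rust
      // A borrowed `&dyn Fn` when you only need to call it, and a `Box<dyn Fn>`
      // when you need to own it (e.g. to return or store it).
      fn call_borrowed(f: &dyn Fn(i32) -> i32) -> i32 {
          f(10) // indirect call through a vtable
      }

      fn make_adder(y: i32) -> Box<dyn Fn(i32) -> i32> {
          Box::new(move |x| x + y) // owns the captured `y`
      }

      fn main() {
          let add_five = make_adder(5);
          assert_eq!(call_borrowed(&*add_five), 15);
          assert_eq!(call_borrowed(&|x| x * 2), 20);
      }
      ```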

  • @nirmalyasengupta6883
    @nirmalyasengupta6883 11 месяцев назад +1

    Excellent piece of work! The premise is posited; the observations are presented; experiments are made; the conclusions are drawn! I am not really a beginner with Rust, but this video has been a real addition to my knowledge of how the Rust compiler works! Thank you, @Logan.

  • @PaulSebastianM
    @PaulSebastianM Год назад

    Dude! Beautiful video! 🎉

  • @skirsdeda
    @skirsdeda Год назад +13

    Very good explanations of practical implications in this video, just as in the "&Option<T> vs Option<&T>" one. No other youtuber gets even close in that regard :) Keep it up!

  • @darshanv1748
    @darshanv1748 Год назад

    I don't understand the monomorphization argument. Regardless of whether functions are their own types or their signatures are the types (as in Rust and C++ respectively), code would be generated for the functions, right?
    So I don't get how the code gets bloated by monomorphization

    • @_noisecode
      @_noisecode  Год назад +1

      The difference is for higher order functions; in Rust, calculate() is monomorphized once for _every function_ you pass into it, whereas in C++ it gets monomorphized once for every function _signature_ you use it with. So many C++ functions can share the same generated calculate() function (but that’s because it contains indirect calls so there’s more overhead). In Rust, conceptually there is bloat because you get a new calculate() monomorphization for every single usage, but, the resulting code is likely more optimized.

    • @darshanv1748
      @darshanv1748 Год назад

      @@_noisecode oh k now i get it
      Thank you

  • @jkjoker777
    @jkjoker777 Год назад

    amazing, looking forward to the next one

  • @micycle8778
    @micycle8778 Год назад

    what happens with dyn Fn? do those get optimized or no

  • @pablorepetto7804
    @pablorepetto7804 Год назад

    This was very interesting, but I ended up watching it backwards. First I jumped to the practical application, then I went back to watch the rest. Seeing the ugly "wrap it in a lambda" idiom made the rest make sense. Thank you for the table of contents!

  • @DavidBeaumont
    @DavidBeaumont Год назад +2

    Nice explanation. However in real code you almost always want to use function composition because you have different combinations of functions at runtime. Thus I strongly suspect it's not actually that common (in either C++ or Rust) to be passing functions in this static fashion, except maybe for generating constants. It would have been nice to see what Rust did in the dynamic case.

    • @ХузинТимур
      @ХузинТимур Год назад

      I often use `x.map(T::from)`. In fact, there is a clippy lint for that called `redundant_closure`.

    • @_noisecode
      @_noisecode  Год назад +1

      Exactly. In Rust I try to use named functions like T::from wherever I can, because it's more elegant (it's kind of 'point-free style'). In C++ this could be detrimental to codegen, whereas in Rust it's not.

    • @DavidBeaumont
      @DavidBeaumont Год назад

      @@ХузинТимур That's a good point about map(). Applying a function over a collection is a case where you'll combine named functions explicitly. Maybe it's not as unlikely as I thought.

  • @theCapypara
    @theCapypara Год назад +4

    I love these videos so much, I'm a huge geek for programming language design, and Rust is just a so beautifully constructed language.

  • @liamkearn
    @liamkearn Год назад

    18:05 Compiler pass ordering issue, IMO; f should be gone due to DCE rather than aliased. God, pass ordering is arcane.

    • @liamkearn
      @liamkearn Год назад

      Great video though :)

  • @tomvdsom
    @tomvdsom Год назад +1

    Loved it 👍

  • @РайанКупер-э4о
    @РайанКупер-э4о 3 месяца назад +1

    Me, from a math background: why can't we have both? Like, 1 is a natural number. But it is also an integer, and rational, and real, complex, a quaternion, and it's a vector, a multivector, and even a one-dimensional tensor. It is all of that at the same time. So why can't we have the same with types in code? Like, if you create an integer, it should go in everywhere a real number can. And the same here: every function is obviously an element of a set that contains only that function. But it is also an element of a set that contains every function with this signature. In math any object can be an element of an infinite number of sets. Why don't we have an infinite number of types? Just compute them the lazy way, like functional languages do with everything.

  • @samlovescoding
    @samlovescoding 6 месяцев назад

    Where can I buy your Rust course?

  • @vokungahrotlaas
    @vokungahrotlaas Год назад +2

If you really need to generate the function per object and not per type, just ask the compiler for it:
```cpp
#include <ranges>
#include <numeric>
#include <concepts>
template <auto f>
[[gnu::noinline]]
int calc()
    requires std::invocable<decltype(f), int>
{
    auto r = std::views::iota(0, 5)
           | std::views::transform(f)
           | std::views::common;
    return std::reduce(r.begin(), r.end());
}
static int f(int x) { return x + x; }
int main() { return calc<f>(); }
```
Still an interesting design choice on Rust's end though, but the example is just not great imo.

    • @_noisecode
      @_noisecode  Год назад +2

You could change calculate's API so you have to pass the function pointer as an NTTP in this case, yeah, but I don't see this as a generalizable solution at all. For example, none of the STL algorithms are designed to work this way. Idiomatic higher-order functions in C++ accept their functions as values, which is the pattern that has the shortcomings I showed. There are ways to get the good codegen, yes--NTTPs are one, lambdas are another as I showed--but resorting to NTTPs is actually just support for my argument that in C++ you have to do the less-obvious thing to get the good codegen, because the defaults are working against you.
      It's fair to critique this example though; check out the pinned comment for a better one.

  • @eddieantonio
    @eddieantonio Год назад

    Really enjoying these deep dives into topics I would otherwise not think twice about!

  • @Vagelis_Prokopiou
    @Vagelis_Prokopiou Год назад

    Excellent video man. Thank you.

  • @Dash359
    @Dash359 Год назад

And wow, Compiler Explorer is incredible!

  • @natashavartanian
    @natashavartanian Год назад +94

    I want Logan Smith to be president

    • @azaleacolburn
      @azaleacolburn Год назад +2

      Bro wtf. We need sociology majors in office, not tech bros.

    • @natashavartanian
      @natashavartanian Год назад +29

      @@azaleacolburn Sorry. It’s too late. My comment is a binding contract and he will be sworn in later today. You’ll have to be faster next time

    • @azaleacolburn
      @azaleacolburn Год назад +11

      @@natashavartanianCurse you Natasha the Platypus!!!

    • @gzoechi
      @gzoechi 11 месяцев назад +1

      When should he find time to create videos when he's presidenting?

    • @Primeagen
      @Primeagen 11 месяцев назад +2

      I want him to be prime minister

  • @youarethecssformyhtml
    @youarethecssformyhtml Год назад +1

    That's why they say Rust can be faster than C++. This channel is pure gold :o

    • @nottellinganyoneanything
      @nottellinganyoneanything Год назад +1

Try the same code with Clang - it will be optimized. This IMHO is just an oversight in GCC.
Both rustc and Clang use LLVM, so there is almost no real reason why Rust would be better optimized than C++/Clang or vice versa.

  • @Alkis05
    @Alkis05 Год назад

But f and g don't have the same signature, since they have different names. The name of the function is part of the signature.
That is why in C++ you have the override keyword, which forces you to implement a method with the exact same signature when inheriting.

  • @ethanevans8909
    @ethanevans8909 Год назад +1

Would you be interested in sharing your Manim source? I'm interested in learning Manim to also make programming videos.

    • @_noisecode
      @_noisecode  Год назад +1

      I would love to publicly publish some or all of the source code at some point, but I'm hesitant to do it yet since frankly I'm still learning Manim myself and would like to let my own skills mature a bit more before I put my scrappy hacky little scripts out there as a resource for people. :) But I would definitely love to do my best to answer any questions you have to help you get started making stuff, feel free to shoot me an email if you'd like!

  • @spookyconnolly6072
    @spookyconnolly6072 9 месяцев назад

Making these types similar to gensyms is probably for the same reason Scheme macros are hygienic (no namespace splatting) -- but here it's for typing functions and making sure you don't end up with weird behaviour as a result of them being first-class values.

  • @leokiller123able
    @leokiller123able 10 месяцев назад

So, in our C++ code we could replace all our function declarations with global lambdas; for example, int f(int x) { return x + x; } would become auto f = [](int x) { return x + x; }, and it would achieve the same thing as Rust. But the advantage of C function pointer types is that you can store functions that take any pointer type as an argument by converting them to a function that takes void *; for example, we could store both int foo(int * param); and int foo(char * param); in an int (*)(void *) function pointer.

    • @_noisecode
      @_noisecode  10 месяцев назад

      Converting void(int*) to void(void*) is not supported implicitly in C++, you'd have to cast; and even then, it's not safe. Even if it weren't undefined behavior to call a function through a pointer with the wrong signature (it is), calling a void(int*) through a void(void*) is totally unsound, since the void(int*) is specifically expecting a pointer to int, but the caller would just be able to pass in any pointer, whether it points at an int or something else entirely. (The jargon is that function parameter types are not covariant--they're contravariant. It would actually "work" to assign a void(void*) to a void(int*), but not the other way around.)

    • @leokiller123able
      @leokiller123able 10 месяцев назад

@@_noisecode Every pointer has the same size, so I don't see how it wouldn't work; you can do this:
void foo(void *ptr) {
int *asInt = ptr;
}
So technically it's the same as doing
void foo(int * ptr) {}

    • @_noisecode
      @_noisecode  10 месяцев назад +1

      It doesn't work for a few reasons. For one thing your code doesn't compile in C++; it works in C, but in C++ it requires a cast from void* to int*. Secondly, neither C nor C++ technically _guarantees_ that all pointers have the same size/ABI (although it's true that they will be on most if not all real platforms). Thirdly, it is simply undefined behavior to call a function through a function pointer of a different type. This means the compiler might optimize using "wrong" assumptions about your code if you break this rule. Even if it seems like it "should" work, it's illegal according to the language spec and your compiler is not required to do as you say if you attempt to tell it to do this.
      But most importantly, it's just semantically broken to do this type of cast for function parameter types. Think about this code:
      ```
void foo(int* i) { printf("%d\n", *i); }
auto ptr = reinterpret_cast<void(*)(void*)>(foo);
      // now we have a function pointer taking a void*, so we should just be able to pass any pointer to it, right?
      NotAnInteger x;
      ptr(&x); // oops -- foo() is now going to try to format a NotAnInteger as an int.
      ```
      By casting from void(*)(int*) to void(*)(void*), you are inappropriately _widening_ the set of types that are allowed to be passed to `foo`, and when a NotAnInteger* thus makes it into `foo`, all hell will break loose.
      This is because function parameter types are not covariant. In other words, the type of f(Subtype) is NOT itself a subtype of g(Supertype). On the contrary, the type of g(Supertype) is a subtype of f(Subtype), making function parameter types _contravariant_.
      C++ legality issues aside, there are other ways in which changing the function pointer type _could_ kinda make sense (covariant return types, or converting a void(*)(void*) to void(*)(int*) since function parameter types are contravariant), but I'm harping on this one because it's very wrong and it's a very common mistake.
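For a Rust-flavored illustration of the same variance rule (my sketch, using lifetimes, since that's where Rust has subtyping): a function accepting the more general argument can stand in where one accepting the more specific argument is expected, but never the other way around.
```rust
fn print_it(s: &str) { println!("{}", s); }

fn main() {
    // `print_it` accepts a &str with *any* lifetime; as a pointer its type is
    // `for<'a> fn(&'a str)`.
    let general: fn(&str) = print_it;
    // OK: the more general function can be used where only a fn(&'static str)
    // is required -- parameters are contravariant.
    let specific: fn(&'static str) = general;
    specific("hello");
    // The reverse direction would be rejected:
    // let back: fn(&str) = specific; // error: a fn(&'static str) is not general enough
}
```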

    • @leokiller123able
      @leokiller123able 10 месяцев назад

@@_noisecode Thank you for your detailed answer. The only point where I disagree is pointer sizes: I think it is safe to assume that every pointer has the same size; it would be a nightmare otherwise, and the void * type would have no purpose. Even standard library functions like memcpy assume that every pointer has the same size. But function pointers are another thing and you're right about them; I made a mistake in my previous answer, thank you for pointing it out. So if you want your function to take a pointer of any type, you should take a void * as a parameter instead of casting the function itself.

    • @_noisecode
      @_noisecode  10 месяцев назад

      Thanks for the discussion! For completeness, I do still maintain that a compiler could give int* and void* different sizes and everything would be fine. memcpy doesn't make any assumptions about the size of anything besides void*, and if you imagined that (for example) int* was 64 bits and void* was 128 bits, then the compiler could simply widen int*s into void*s when a conversion takes place, just like an int gets widened to a long long when a conversion takes place.
      But in reality, yeah, the address size will almost certainly be the same for all (data) pointer types in a given architecture.

  • @Amir_404
    @Amir_404 Год назад +6

This is a textbook example of a strawman argument. You jumped through a lot of hoops to force the compiler to generate that bad code, while also doing a lot of nonsense to bloat the C++ code. The non-malicious C++ code that does this is as follows:
#include <ranges>
__attribute__((noinline))
static auto calculate(auto f) {
    int sum = 0;
    for (int i : std::views::iota(1, 5)) {
        sum += f(i);
    }
    return sum;
}
inline int f(int x) {
    return x + x;
}
int main() {
    return calculate(f);
}
You actually have to know a lot about C++ to write the code in a way that it doesn't compile down. You used __attribute__((noinline)), which doesn't have an equivalent in Rust (#[inline(never)] doesn't actually work here, it's complicated). You made calculate public so the compiler couldn't assume anything about its input, since your program could also be started from calculate. You used template instead of auto to make your code look scarier, and your template doesn't even work as a template because you hard-coded what could be passed in. You used such a convoluted method of iterating that I am kind of impressed. You made calculate a lambda, which is just nonsense.
The reason wrapping your function in a lambda changed your code so much is that you created a copy of calculate that isn't linkable, so the compiler could safely check what f was. It is making the function static/inline that gives the performance boost. The only performance boost that lambdas offer is that they make it near impossible to mess up the build process.

    • @_noisecode
      @_noisecode  Год назад +4

It's pretty easy to construct an example that proves it's the type system (not any linkage shenanigans) that lets the C++ code compile down to nothing with a lambda. godbolt.org/z/8jsPTs4Yc Everything in this example is fully public and externally visible, but because the code inside the function object F is known and can be called statically, it can be fully inlined inside the instantiation of calculate. This is the same thing that happens with a lambda: the code in the function call operator is known statically, so it can be inlined based on knowledge of the lambda type alone. The linkage stuff you're talking about facilitates function cloning, which is a separate optimization that actually has to swim upstream from the C++ type system to get the good codegen.
      I reject the claim that I made the C++ code any more complicated than it needs to be. I wrote idiomatic C++20 Ranges code that you'd see on any slide in any fancy Jason Turner talk. I wrote the C++ function declaration (as well as the Rust function signature) 'longhand' so the viewer could see the different parts that match between the two languages. I didn't intentionally make the C++ code scary--I like C++ and I think its version of calculate() as shown in the video is perfectly pretty.
      There are a few other things you say in your comment where I don't quite understand what you are accusing me of--but yes, the function template calculate is 'public' (I guess?) but also it's implicitly inline, since it's a function template that's implicitly instantiated. I don't understand what you mean that I "hardcoded what could be passed in." Also you say that Rust's inline(never) doesn't work here, but it does: rust.godbolt.org/z/5sWs98oK4 It prevents calculate from being inlined _inside its caller_, which is what I needed in order to demonstrate my point about the codegen of calculate itself.
      I see that your version of calculate() does indeed compile better than the ranges+std::reduce version, even without static+inline: godbolt.org/z/fbGr7c7hv . I'm honestly surprised they yield different codegen, but you can see from the symbols in the generated code that it's because Clang 'clones' your version of the function, which as you probably know and I already mentioned, is an optimization Clang and GCC both do to combat the _exact_ performance pitfall I'm referring to in the video--a function that needed indirect calls because of the type system being copied and then optimized into a special-cased version for a specific call site. This, again, supports my argument that the C++ type system works against efficient codegen for higher order functions (and we need special compiler optimizations to fix it it), where Rust's works toward it.

    • @_noisecode
      @_noisecode  Год назад +1

      Small correction: I did accidentally leave a superfluous `views::common` in the C++ code in the video where it's not strictly necessary. It was needed in an older version of calculate() and I forgot to remove it when I golfed the function a little.
      `views::common` is a no-op when it's not necessary though, so it shouldn't affect the final product other than some line noise overhead. I do admit it's a minor mistake in the video though.

  • @evlogiy
    @evlogiy Год назад

    Great video! Thanks! ❤