Programming Discussion: Speed, w/ special guest Sean Barrett

  • Published: 18 Nov 2024

Comments • 153

  • @constfun
    @constfun 4 years ago +27

    “Your house and you can’t put up a poster” is a great analogy for auto-formatters; however, perhaps counterintuitively, I find it liberating. There is extra brainpower suddenly available in your head when you no longer think about formatting as you write code. Often I’ll let the code flow onto the page as it falls and save to format and review; it keeps me in the zone. Minute differences between my personal style and the formatter’s hardly matter to me. I’ve literally taken a step out of my “inner loop”, if you will, and I like that.

    • @dandymcgee
      @dandymcgee 2 years ago +6

      I think this argument only applies to people who never formatted their code in the first place. I like my code to be vertically aligned for readability and auto-formatters always fuck that up. E.g. i.imgur.com/aleRXZt.png

    • @lucemiserlohn
      @lucemiserlohn 2 years ago +1

      I personally use a code formatting style that might seem very strange to others, but it works for me. Having a piece of software mess that up would completely ruin the readability of the program for me. No thanks, I do not want that.

    • @0ia
      @0ia 1 year ago

      @@dandymcgee Not even! I was so freed when I *first* stopped using auto-formatters and started using spaces instead of tabs. Full control over my formatting freed so much time otherwise spent adapting to and reading others’ methods of formatting. I didn’t need to have an established formatting habit to appreciate disabling auto-formatters.

  • @panstromek
    @panstromek 4 years ago +59

    This was a very interesting discussion overall. There is one bit Sean said that I can't find now, about how storing the AST (I think) in a Data-Oriented way is hard, so he stores it as a tree with pointers. I would say that's still a pretty "Data-Oriented" decision and perfectly fine.
    I feel like DOD became kind of synonymous with using SOA for everything, but I think Mike's point was more general. It was more about "structuring your program around what needs to happen with the data, and not the other way around (structuring your data based on the structure of the program)."
    If you need to process your data in a tree fashion, I think it's perfectly fine to store it as a tree. A tree is a nonlinear structure, so no matter how you do it, you will still face a lot of non-locality anyway.
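
    A minimal sketch of the two AST layouts being contrasted in this thread: a pointer-based tree versus a flat, index-based array that keeps the same tree shape but lives in one allocation. The node kinds and fields are illustrative, not taken from the video.

    struct AstNodePtr {                  // "tree with pointers": each node allocated separately
        int         kind;                // e.g. literal, add, multiply, ...
        AstNodePtr* left;
        AstNodePtr* right;
        int         value;               // only used when kind == literal
    };

    struct AstNodeFlat {                 // flat layout: children referenced by index
        int kind;
        int left;                        // index into the nodes array, -1 if absent
        int right;
        int value;
    };
    // std::vector<AstNodeFlat> nodes;   // the whole tree in one contiguous allocation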

    • @RobertHildebrandt
      @RobertHildebrandt 4 years ago +8

      Watching it right now:
      28:54 Sean starts talking about his rules of thumb, Data-Oriented Design being one of them
      34:52 They talk about trees and SoA

    • @panstromek
      @panstromek 4 years ago +1

      @@RobertHildebrandt that's it, thanks ;)

    • @RobertHildebrandt
      @RobertHildebrandt 4 years ago

      @@panstromek You're welcome :)

    • @Ehal256
      @Ehal256 4 years ago +1

      Check out co-dfns for a language with a data-oriented compiler.

    • @charlesalexanderable
      @charlesalexanderable 4 years ago +1

      These guys converted Chromium's document object model (the DOM, which is a heterogeneous tree like they are discussing) to a data-oriented layout and got a huge speedup: ruclips.net/video/yy8jQgmhbAU/видео.html

  • @alexnoman1498
    @alexnoman1498 4 years ago +10

    These are absolutely amazing, I love this format! You're practically guaranteed to learn something, it's great.

  • @monkyyy0
    @monkyyy0 4 years ago +101

    this podcast is taking a while to find a name

  • @Spongman
    @Spongman 4 years ago +15

    Runtime relocation has been a thing in Windows since the beginning, way before ASLR, before virtual memory, even. Two DLLs can have the same base address and they still need to be able to be loaded into memory at the same time. The alternative would be to have a global registry of all the load addresses of every DLL ever shipped, and have the EXEs expect them to always load at that address.

  • @sebastianmestre8971
    @sebastianmestre8971 3 years ago +3

    I have recently gotten a lot of mileage out of storing AST nodes as a discriminated union, laid out in a flat array, in post order (which also happens to be the order in which nodes come out of a recursive descent parser).
    Due to the post order, a plain loop is already a bottom-up traversal; no recursion needed. In turn, a lot of the typical processing reduces to linear scans, with some extra reads over parts of the array that have already been processed. It goes FAST.
    This was for a toy compiler of a toy language that didn't do anything too complicated, so I do question the feasibility for a large-scale, industrial-strength compiler.
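
    A minimal sketch of the scheme described above, assuming a post-order node array where children always precede their parent; the node kinds are hypothetical and a real compiler would have many more.

    #include <cassert>
    #include <vector>

    enum Kind { LITERAL, ADD, MUL };
    struct Node { Kind kind; int value; };               // value only used for LITERAL

    int evaluate(const std::vector<Node>& post_order) {
        std::vector<int> stack;                          // results of already-visited children
        for (const Node& n : post_order) {
            if (n.kind == LITERAL) { stack.push_back(n.value); continue; }
            int b = stack.back(); stack.pop_back();      // right child, already evaluated
            int a = stack.back(); stack.pop_back();      // left child, already evaluated
            stack.push_back(n.kind == ADD ? a + b : a * b);
        }
        assert(stack.size() == 1);                       // exactly the root remains
        return stack.back();
    }
    // (2 + 3) * 4 in post order: {LITERAL,2}, {LITERAL,3}, {ADD,0}, {LITERAL,4}, {MUL,0}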

  •  4 years ago +3

    1:16:13 Procrastination is a good thing!
    I also like to advocate that!
    Thank you, Jon, for sharing my point of view :)

  • @SimGunther
    @SimGunther 4 years ago +12

    You know what's worse than premature optimization? Premature SIMD use. Sometimes it's a significant enough boost in performance to justify its use, but as always, YMMV depending on cache locality and memory bandwidth.
    Come to think of it, premature optimization is often confused with mistaken optimization for the wrong set of high-level design decisions.

  • @DanTrue
    @DanTrue 4 years ago +39

    This is a very interesting talk. I love hearing Jonathan talk about how he works and optimises.
    I do find the attitude that all other branches of programming are bad a bit tiresome, combined with the fact that he has mostly worked on games and therefore has a specific perspective. Yes, a lot of web development keeps making a big fuss out of the simplest things, and a lot of the npm ecosystem is pure insanity to try to manage - but just because we don't have the same problems to fix as Jonathan doesn't make us bad programmers.
    I mostly do backend server work, either monolithic or microservices. I *could* spend time optimising my code to the same degree as Jonathan does, lay out my content in memory well and ensure the cache is hit most of the time. I love that stuff, it's fun to do once in a while :)
    But usually right after that code block, I'm going to either make an HTTP request (e.g. to the database) or at least respond to the user over an HTTP channel - which will take far longer than any amount of cache misses (~1 ms if the two services are in the same datacenter; tens to hundreds of ms if communicating with an end-user somewhere globally). So it makes much more sense to me to care about speed and stability at the inter-service layer, databases etc. to limit HTTP latency (which can make a drastic difference in the perceived speed of my application) than to optimise the code in the individual services.
    I don't think this makes me a bad programmer. It just means I have completely different problems to solve than Jonathan does. But I'm sure Jonathan would also respect this perspective. The talk has a specific focus on compilers and single-machine applications like games, after all.
    Anyway, keep up the nice work and keep posting :)

    • @joaoedu1917
      @joaoedu1917 4 years ago +15

      As a fellow web-backend programmer, I feel you. But I think what he means is that things should be completely different when it comes to what the web (and other software) should be like, not just better in this or that respect. Rewritten from scratch, with a new approach, etc. Which is totally unrealistic, but not totally unhelpful.
      I myself take a lot of Jon and Casey's perspective to my workplace (and, as a tech lead, try to be a good influence on my peers). Not that I throw away our high-level code or try to write a reverse proxy or a database from scratch.
      But I just don't buy virtually any web novelty that reaches me, and their reinforcement freed me from a lot of stuff I always thought was BS, like TDD, OOP or design patterns. I've become more pessimistic, critical and down to earth. So it's kind of a "the emperor has no clothes" feeling, but I try to turn it into something good. I identify what we're stuck with and try to make stuff as simple as possible, while keeping it clean, rational, understandable etc., focusing on the code, the APIs, the data structures, the path the data traverses and so on (in other words, good code instead of buzzwords).

    • @lucemiserlohn
      @lucemiserlohn 2 years ago +3

      That does not mean web developers are bad developers. It means the technology is ridiculously bad. And I agree with him on that. So much so that I will not ever touch code that uses web tech, ever. No HTML, no CSS, no JavaScript, no WebServices. One exception is JSON, which is kind of more useful than XML, but then again, it is a niche.
      You do what you do, just don't expect me to like what you're working with.

    • @Vitorruy1
      @Vitorruy1 1 year ago +1

      When we can't choose the stack our hands are somewhat tied, but there are still large gains to be made on the code side; it's not all network latency.
      In PHP they got a huge performance gain by removing unnecessary allocations, and another huge gain by pre-compiling framework files that never change. On the framework side there are a lot of low-hanging-fruit gains from avoiding unnecessary file reads for components and so on.
      But the fact that such simple optimisations are often missed in those big popular frameworks really shows how little the average open-source web dev knows/cares about performance.

  • @fourscoreand9884
    @fourscoreand9884 4 years ago +3

    This is awesome stuff, thanks, Jon. Brilliant.

  • @VexillariusMusicEDM
    @VexillariusMusicEDM 4 years ago +2

    I'm very much enjoying these discussions, thank you!

  • @sndrb1336
    @sndrb1336 4 years ago +29

    Jon has a quarantine-resistant hairdo. Might consider this move too.

    • @cogigo
      @cogigo 4 years ago +4

      Pulled the trigger on my hair 4 days ago. It was a bit of a shock at first seeing myself without any hair on my head, but it actually looks pretty good. And I'm actually more productive, because my hair isn't a mess and I don't feel like I look sloppy all the time.

    • @nexovec
      @nexovec 3 years ago +4

      @@cogigo It's more aerodynamic, so you're faster now

  • @MurtagBY
    @MurtagBY 4 years ago +2

    It's cool that you put the guest in the header

  • @judison0
    @judison0 3 years ago

    @Jonathan Blow
    How do I take advantage of multiple cores in my compiler (a strategy):
    My compiler works in steps (parse, check, generate, LLVM, link), one step at a time.
    There is an executor with N threads, one per core.
    First it collects all files to be compiled into a list. (single-threaded)
    Then it parses each file (N threads, producer/consumer logic)
    Then it checks each module (a module can contain multiple source files) (N threads)
    Then it generates each module (N threads)
    Then it calls LLVM for each generated module (N threads) (could be in the same step as generate)
    Then link (single-threaded)
    It does not give the best case, as it waits for all jobs in one step to end before starting the next, but it's way better than nothing.
    Some steps may run per-module jobs or use some other division strategy...
    I don't know the details of Jai, but my language requires the AST (of all modules) before checking can start...
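
    A minimal sketch of that stage-by-stage scheme, assuming a shared work index per stage; the Job type and helper names are made up for illustration and are not from any real compiler.

    #include <atomic>
    #include <functional>
    #include <thread>
    #include <vector>

    using Job = std::function<void()>;

    // Run every job of one stage across n_threads workers, then wait for all of them.
    // The join is the implicit barrier between parse, check, generate, LLVM and link.
    void run_stage(const std::vector<Job>& jobs, unsigned n_threads) {
        std::atomic<size_t> next{0};                         // shared work index
        std::vector<std::thread> workers;
        for (unsigned t = 0; t < n_threads; ++t)
            workers.emplace_back([&] {
                for (size_t i; (i = next.fetch_add(1)) < jobs.size(); )
                    jobs[i]();                               // each thread pulls the next job
            });
        for (std::thread& w : workers) w.join();             // stage ends when all jobs finish
    }
    // Usage: run_stage(parse_jobs, n); run_stage(check_jobs, n); run_stage(codegen_jobs, n); ...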

  • @MonsSeptime
    @MonsSeptime 1 year ago

    Such a talk is like a glass of water in the middle of the desert!

  • @leonardocaetano6307
    @leonardocaetano6307 4 years ago +4

    Note to myself: 4:24:39 a programming project to work on

  • @sebastianmestre8971
    @sebastianmestre8971 4 years ago

    It may or may not be more expensive to do it this way, but type checking can be flattened by using a unification algorithm instead of propagating up the tree.
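
    A heavily simplified sketch of what unification means here, assuming a union-find forest over type variables with optional concrete types; real inference (Hindley-Milner and friends) also handles type constructors, occurs checks, and so on.

    #include <vector>

    enum Concrete { UNKNOWN = -1, T_INT, T_BOOL };

    struct TypeTable {
        std::vector<int>      parent;   // union-find forest over type variables
        std::vector<Concrete> solved;   // concrete type of each root, if known

        int fresh() { parent.push_back((int)parent.size()); solved.push_back(UNKNOWN); return parent.back(); }
        int find(int t) { return parent[t] == t ? t : parent[t] = find(parent[t]); }

        bool unify(int a, int b) {                   // merge two type classes
            a = find(a); b = find(b);
            if (a == b) return true;
            if (solved[a] != UNKNOWN && solved[b] != UNKNOWN && solved[a] != solved[b])
                return false;                        // e.g. int vs bool: a type error
            if (solved[b] == UNKNOWN) solved[b] = solved[a];
            parent[a] = b;                           // b becomes the representative
            return true;
        }
        bool constrain(int a, Concrete c) {          // pin a variable to a known type
            a = find(a);
            if (solved[a] != UNKNOWN) return solved[a] == c;
            solved[a] = c;
            return true;
        }
    };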

  • @nerdError0XF
    @nerdError0XF 4 years ago +1

    It's such a pity that there are no auto-subtitles. For non-native speakers it's much easier to watch with them.

  • @h3rteby
    @h3rteby 4 years ago +6

    Autoformat on save is great. Not just for enforcing consistent style on a team, but to be able to write without having to fiddle with indentation and stuff at all. You can just write garbage looking stuff and have it "snap into place".
    Formatters usually have configurable preferences too.
    Btw I would rather have a formatter that only supported Jon Blow Style, and adapt to that, than have no formatter at all.

    •  4 years ago +3

      I would like to present a counterpoint.
      There is a method `Map.of(key1, value1, key2, value2, ...)` in Java since Java 9.
      An autoformatter would fill the line and break it just before it would exceed the limit. Or put each argument on a separate line.
      I'd like to format it to keep two arguments per line.
      Autoformat is great to be used manually, but not to be forced on save/commit IMHO.

  • @joshuasamuels2143
    @joshuasamuels2143 4 years ago +3

    Any time I write:
    (ThisLongWindedObjectName == x or ThisLongWindedObjectName == y)
    I really wish I could write something like:
    ThisLongWindedObjectName =| [x,y]
    which would return true if the variable is equal to any of the things. Note I am not comparing the object to the boolean returned by x or y; I am comparing the thing to a list of other things. Is there any common syntax that handles this, or are there any languages that have this shorthand??
    Would you be able to add something like this to JAI? Seems like it would be a trivial layer of syntax that would make these situations much more concise.

    • @mariamosman3305
      @mariamosman3305 4 years ago +1

      SAME

    • @joshuasamuels2143
      @joshuasamuels2143 4 years ago +1

      After I wrote this I realized that I had the answer in my repertoire but wasn't conscious of it.
      Using an array literal:
      if [x,y].includes(longWindedObjectName) { doStuff() }
      Or
      switch (longWindedObjectName) {
      case x: case y: doStuff()
      break;
      }
      Not sure which is more efficient.

    • @brothir
      @brothir 4 years ago +1

      ​@@joshuasamuels2143 Just use a macro/function. cmp2(Base, A, B) Base == A || Base == B, for instance.
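
      A sketch of the macro/function idea in C++; both names here are made up for illustration.

      #include <initializer_list>

      #define CMP2(base, a, b) ((base) == (a) || (base) == (b))

      // Or, without a macro: is_any_of(thisLongWindedObjectName, {x, y, z})
      template <typename T>
      bool is_any_of(const T& value, std::initializer_list<T> candidates) {
          for (const T& c : candidates)
              if (value == c) return true;
          return false;
      }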

    • @MurtagBY
      @MurtagBY 3 years ago

      @@brothir Python is your guy: val in [x,y]

    • @brothir
      @brothir 3 years ago +1

      @@MurtagBY Python is most certainly not my guy, and I would never select a prog lang based on something this trivial.

  • @3laserbeam3
    @3laserbeam3 4 years ago +9

    Are you gonna make these every weekend? They are really cool!!

  • @JackMott
    @JackMott 4 years ago +2

    of course Sean has some oddball resolution. haha. Love you Sean.

  • @SiisKolkytEuroo
    @SiisKolkytEuroo 4 years ago +1

    Web developer here, trying to grasp the world of real software engineering. At 28:55 they explain that one of the key areas of performance optimization is "mispredicted branches". My question is, how do you even find out about a mispredicted branch, and how do you measure its impact or test if the problem has been fixed?

    • @SiisKolkytEuroo
      @SiisKolkytEuroo 4 years ago

      It's almost as if you'd have to know the CPU inside out, below the machine instruction level, to know how to make your program efficient

    • @wolfschaf
      @wolfschaf 4 years ago +7

      Not really, there are certain things you can do to minimize branch misprediction. The branch predictor is just something that will "remember" which path a branch took and hopefully won't miss when it passes this branch again. And maybe it will even figure out some patterns inside your program that let it predict branches more reliably.
      So what can you do?
      Take over the work that is hard for the branch predictor to do. If you know that something is almost always false or almost always true, then just leave it to the predictor. BUT if you have a 50/50 chance or something like that, the predictor will not be able to help you and you will get a lot of mispredicted branches.
      So you just have to locate these kinds of branches and split them into two separate code paths. Sometimes this is easy to do, sometimes it's hard.
      A good general piece of advice is to stay away from OOP. There you operate on an object in an isolated manner, so only one object at a time. If you have multiple objects then they are likely to be of different types (inheritance). That's not good for the branch predictor, and not good for speed in general. There is a lot of last-minute decision making happening in methods, and you should get rid of it.
      See the talks of Mike Acton. He explains exactly why this is bad and what to do about it. The keyword here is Data-Oriented Design.
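
      A toy illustration of the "split an unpredictable branch into two code paths" advice above; Item, cheap and expensive are stand-ins, and whether the split actually wins has to be measured.

      #include <vector>

      struct Item { bool is_special; int value; };
      static int cheap(int v)     { return v + 1; }      // stand-in work
      static int expensive(int v) { return v * v + 7; }  // stand-in work

      // Unpredictable: a roughly 50/50 branch inside the hot loop.
      int process_mixed(const std::vector<Item>& items) {
          int total = 0;
          for (const Item& it : items)
              total += it.is_special ? expensive(it.value) : cheap(it.value);
          return total;
      }

      // Split: partition once (this pass still branches, but does almost no work per miss),
      // then run two homogeneous, predictable loops.
      int process_split(const std::vector<Item>& items) {
          std::vector<int> special, normal;
          for (const Item& it : items)
              (it.is_special ? special : normal).push_back(it.value);
          int total = 0;
          for (int v : special) total += expensive(v);
          for (int v : normal)  total += cheap(v);
          return total;
      }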

    • @paulcosta8297
      @paulcosta8297 1 year ago +1

      @@wolfschaf Awesome comment

  • @u9vata
    @u9vata 1 year ago

    Regarding VTune or whatever: I am not familiar with it, but I am familiar with the "perf" tool on Linux, and when you cannot "sample" because sampling gives a false result, you can sometimes still use CPU hardware counters to get a hint of what is going on.
    Also, if you suspect a function / data structure is slow, the good thing is to "unit test" it: basically make only that part run, with some preparation. This lets you run it a gazillion times without refactoring the whole app - of course it only works if you have a good suspicion already.
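
    A minimal sketch of that "run it a gazillion times" harness; the function under test is a placeholder, and a real harness would also stop the optimizer from deleting the work and would report more than a simple average.

    #include <chrono>
    #include <cstdio>

    static int function_under_test(int x) { return x * 2 + 1; }   // placeholder

    int main() {
        const long iterations = 100000000;
        volatile long sink = 0;                          // keep the result observable
        auto t0 = std::chrono::steady_clock::now();
        for (long i = 0; i < iterations; ++i)
            sink = sink + function_under_test((int)i);
        auto t1 = std::chrono::steady_clock::now();
        double ns = std::chrono::duration<double, std::nano>(t1 - t0).count();
        std::printf("%.2f ns per call (sink=%ld)\n", ns / iterations, (long)sink);
    }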

  • @peter-mangelsdorf
    @peter-mangelsdorf 4 years ago +4

    omg! Sean Barrett and Jon Blow?!?

  • @ratgr
    @ratgr 4 years ago +2

    OK, back at this video again, let's see if I can complete it. I have to say Sean is wrong about the impact of misprediction when using the switch statement, both in measured time and conceptually; the only advantage he could get is instruction locality, but not that much.
    Explanation:
    The switch conceptually always fails prediction once per token.
    The loops conceptually fail once per token per type of token.

  • @Teabone3
    @Teabone3 4 years ago +7

    Premature optimization is a habit I have as well

  • @SimGunther
    @SimGunther 4 years ago +7

    1:34:00 IIRC ZigLang will do "generics"/"templating" with compile-time computation of the variable type, with something like
    fn foo(comptime T: type, a: T) T
    which defines a function for any type T and a variable a, returning a value of type T. That seems much better than templating AFAIK, but I just wanna know your thoughts on that templating alternative.

    • @jonaskoelker
      @jonaskoelker 4 years ago +3

      Maybe I'm just slow, but how is it different (other than syntactically) from C++?
      You're using the definite singular, "the type"; would 'foo' only work for a single type, inferred at compile time? In that case, with C++ templates you use a single definition of vector to make both vectors of ints and vectors of strings; that seems better (some of the time at least).
      If the ZigLang construct does the same as "template <typename T> T foo(T a) { ... }", then how is it better if it's not different?
      What am I missing?

    • @SimGunther
      @SimGunther 4 years ago

      @@jonaskoelker The context in the David Stone CppCon 19 talk might fill you in on the compile-time stuff that's missing in C++, which likely won't pass the committee because of fears of breaking old code

    • @jack-d2e6i
      @jack-d2e6i 2 years ago

      @@jonaskoelker One difference is that in Zig the type is just a variable known at compile time, not a special "type" construct.
      It's a generalisation of how most languages do generics, and it allows you to do metaprogramming by building types at compile time with normal functions.

    • @jonaskoelker
      @jonaskoelker 2 years ago

      @@jack-d2e6i > One difference is that in zig the type is just a variable known at compile time
      So are the type arguments to C++ templates, yes?
      > not a special “type” construct.
      What difference does that make?
      Is the type inferred? Are there ways of combining types that are more convenient than in C++? More flexible? Is the syntax more convenient because it requires less typing?
      What can you do in Zig that's either impossible, more difficult, less convenient or less flexible in C++? Could you point to a concrete example?

    • @jack-d2e6i
      @jack-d2e6i 2 years ago

      @@jonaskoelker I've tried to reply twice. Idk if YouTube is swallowing my comments.

  •  4 years ago +3

    Dear Jon, please keep making these videos.
    I don't understand shit of what you two are talking about, but I use it as meditation material.
    When I am by myself, I keep thinking about random stuff.
    But when I listen to this, my brain is fully occupied trying to catch something understandable and the rest of my subconscious just relaxes.
    Maybe something like the koans you mention in The Witness :)
    Thank you for your work!

    • @meraindia5367
      @meraindia5367 2 years ago +1

      Although I understand almost all of it, as I'm obsessed with speed,
      I sleep easily when I listen to these :-)
      With all due respect

  • @Spongman
    @Spongman 4 years ago

    The MSVC precompiled headers are most useful for programs where every .c/.cpp file is #including the same set of system headers. However, even if you're not doing that, it's still significantly advantageous to build/use precompiled headers for basically all but the most trivial programs. Even if you just put your system includes in there and leave your program's headers out, you'll still see a speedup - as long as you keep your other cl flags consistent between source files, the compiler will load a single instance of the pch binary in memory and compile all your source files against that single blob, in parallel.
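
    A minimal sketch of the conventional MSVC layout being described, assuming the standard /Yc (create) and /Yu (use) flags; the file names are just the usual convention, not required.

    // pch.h - heavy, rarely-changing system includes only; the program's own headers stay out.
    //   #include <windows.h>
    //   #include <vector>
    //   #include <string>
    //
    // pch.cpp - contains nothing but  #include "pch.h"  and is compiled once
    //           with /Yc"pch.h" to produce the .pch blob.
    //
    // every other .cpp - starts with  #include "pch.h"  (it must be the first include)
    //           and is compiled with /Yu"pch.h", so the compiler maps in the single
    //           prebuilt blob instead of reparsing the system headers each time.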

  • @gregg4
    @gregg4 4 years ago +10

    2:53:50 Sean makes the point that tabs mean fewer characters and thus a greater chance of a cache hit when fetching the next token. Then Jonathan replies: "every reasonable programmer uses spaces". Why? Was he being sarcastic? I always thought spaces were crazy.

    • @mav45678
      @mav45678 4 years ago +5

      Spaces preserve how code looks across editors with different tab lengths.

    • @gregg4
      @gregg4 4 years ago +6

      @@mav45678 Sure, that's why tabs are better. I can set the indentation size to what I want and other programmers can set it to what they want while all using the same code.

    • @mav45678
      @mav45678 4 years ago +5

      @@gregg4 The code is often formatted to look pretty with only one specific length of tab though.

  • @arsenbabaev1022
    @arsenbabaev1022 1 day ago

    I did try a state-machine-based lexer and it ran 2x slower than a normal lexer with a bunch of switches and ifs. The entire state machine loop was 32 lines of assembly, but it was still way slower. I think Sean's state-transition design is slower than the regular style; a really fast lexer might need to use 16-byte SIMD to beat the regular-style lexer.
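
    A toy version of the table-driven state-machine core being compared here, with only two states and two character classes; a real lexer has many more of each, plus token emission.

    #include <cctype>
    #include <cstddef>
    #include <cstdint>

    enum State  : uint8_t { S_DEFAULT, S_IDENT, STATE_COUNT };
    enum CClass : uint8_t { C_LETTER, C_OTHER, CLASS_COUNT };

    static uint8_t char_class[256];
    static const uint8_t transition[STATE_COUNT][CLASS_COUNT] = {
        /* S_DEFAULT */ { S_IDENT, S_DEFAULT },
        /* S_IDENT   */ { S_IDENT, S_DEFAULT },
    };

    void init_char_classes() {
        for (int c = 0; c < 256; ++c)
            char_class[c] = (std::isalpha(c) || c == '_') ? C_LETTER : C_OTHER;
    }

    // The entire hot loop: one table lookup per byte, no switches or ifs on the token kind.
    size_t count_boundaries(const uint8_t* src, size_t len) {
        size_t boundaries = 0;
        uint8_t state = S_DEFAULT;
        for (size_t i = 0; i < len; ++i) {
            uint8_t next = transition[state][char_class[src[i]]];
            boundaries += (next != state);          // a state change marks a token edge
            state = next;
        }
        return boundaries;
    }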

  • @Spongman
    @Spongman 4 years ago +3

    How would you do C++ linking without mangling and without breaking the ABI?

    • @KeldonA
      @KeldonA 4 years ago +1

      The problem with name mangling is that it's not standardized. Python handles name mangling without any issue because it's standardized to the point that I can manually link to a name-mangled private property.

    • @jblow888
      @jblow888  4 years ago +7

      Have the link format use more data than just the procedure's string name to identify the procedure.
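
      One way to read that suggestion, as a purely illustrative sketch (this is not how any shipping linker works): identify a procedure by structured fields rather than by parsing an encoded string. All names here are hypothetical.

      #include <cstdint>

      struct LinkedSymbol {
          const char* name;            // plain name, e.g. "lerp", no mangling
          const char* module;          // namespace / module it belongs to
          uint64_t    signature_hash;  // hash of the parameter and return types
      };
      // The linker would resolve overloads by matching (name, module, signature_hash)
      // instead of decoding a compiler-specific mangled string.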

    • @Spongman
      @Spongman 4 years ago +2

      @@jblow888 That would have broken the ABI and would have broken all existing tools. Not a great selling point in a market where, at the time (1992), Microsoft was at best #3 in the DOS/Win16 compiler market behind Borland and Watcom. Regardless, there would be no semantic difference between what you propose and what exists today.

    • @Spongman
      @Spongman 4 years ago +1

      @@KeldonA No, a standard from the beginning would have been nice. Cfront was the de facto standard for a while, but I believe its original mangling scheme wasn't compatible with some platforms and didn't cover new language features. Still, on Windows at least, MSVC, ICC & Clang use the same scheme.

    • @0x1337feed
      @0x1337feed 1 year ago

      @@Spongman ratiod

  • @smajlovicmuamer1944
    @smajlovicmuamer1944 4 years ago +7

    I always wondered, Jon.
    What are your views and feelings on love-life stuff?

    • @bartholomewjoyce
      @bartholomewjoyce 4 years ago +2

      Hahaha

    • @SimGunther
      @SimGunther 4 years ago +4

      He's got a working RTOS running in his body. It's probably one of the last things on his mind XD

  • @Hector-bj3ls
    @Hector-bj3ls 4 years ago +2

    Code formatters are awesome as long as they're configurable. I'm lazy and I like the computer to format my code for me as long as it does it the way I want.

    • @delian66
      @delian66 4 years ago +1

      Code formatters are awesome *exactly when they are NOT configurable*. I am lazy too, and I do NOT want to:
      a) lose time tweaking/configuring them.
      b) deal with source formatted by differently configured formatters (which is inevitable when more than one person works on a project, if the code formatters are configurable).
      See the success of Go's unconfigurable fmt tool for more details.

    • @Hector-bj3ls
      @Hector-bj3ls 4 years ago +2

      @@delian66 For a project, you would create a config file that is checked into the repository. Then everyone working on that project uses the same formatting.
      They should be configurable because not everyone agrees on a style and like Jon said, he doesn't want to force his style onto other people. I agree unless they're working on a project where a common style should be adopted.

    • @Hector-bj3ls
      @Hector-bj3ls 4 years ago

      @championchap I agree, it makes sense for it to be an external tool. It's nice when a language comes with one by default because it's less work for me, but I'm not opposed to it being a third-party thing.
      So, Gofmt, Rustfmt and Zigfmt are all packaged tools, whereas something like Prettier for JS is external. I use all of them when programming in those languages.

    • @delian66
      @delian66 4 years ago +1

      @@Hector-bj3ls The language creator(s) are in the best place to enforce a consistent code formatting style for the whole ecosystem.
      If they do not make that decision, then the decision *will* be made by *many others*, much less sensible and thoughtful people.
      If you impose a style only for a given project, that does not solve the problem of each project having its own style, and you as a developer having to adapt to each new style every time you open a new project or switch jobs.

    • @Hector-bj3ls
      @Hector-bj3ls 4 years ago +2

      @@delian66 I don't think each project having its own style is a problem, though. I know there are some benefits to having the style the same across the board, but I'd rather promote the happiness of individuals, since that's what makes them productive.
      Also, unless the formatter enforces a casing style, projects end up looking different anyway. In JS you have linters for that, and in Rust the compiler actually does it.
      Personally, as long as a project is consistent, I don't mind following whatever the style is.

  • @jack-d2e6i
    @jack-d2e6i 4 years ago

    Even when coding by myself I like auto formatters.
    I can write sloppy code then save it and it looks good.

  • @SimGunther
    @SimGunther 4 years ago

    Any examples of a project similar to the one Jon's talking about at 4:03:28?

  • @charlesalexanderable
    @charlesalexanderable 4 years ago +2

    35:37 On heterogeneous trees, there is a talk where someone converted Chromium's DOM into a structure of arrays and got a huge speedup: ruclips.net/video/yy8jQgmhbAU/видео.html

    • @JackMott
      @JackMott 4 years ago

      Wouldn't there be some HTML where that idea breaks down? Like, you end up with a really sparse array?

  • @Wand2Fishes
    @Wand2Fishes 4 years ago

    While the C++ lerp looks ugly, I don't understand how the long_lerp function is slower than the short_lerp function, if long_lerp is just the same as short_lerp with a bunch of edge cases coming after the most common case, which when triggered would just return something and skip the edge cases.

    • @loli42
      @loli42 3 years ago

      Mr. 1-year-old comment: templated bullshit that generates 8 terabytes of x86 instructions inline (bc templates are bullshit) whenever you call it is not good for the instruction cache, plus there is the initial branch, plus there is whatever the fuck initial computation is even being done that it's branching on. You don't know why it's slow because you think a computer is just an instruction pointer that increments after every instruction.

  • @TransformationalGaming
    @TransformationalGaming 4 years ago +2

    You guys should just become web programmers... So much easier than writing compilers... lolol!

  • @alexchichigin
    @alexchichigin 4 years ago

    I'm sure you're aware there are a few more registers on a core than are directly addressable: en.wikipedia.org/wiki/Register_renaming :)

  • @jouniosmala9921
    @jouniosmala9921 4 years ago +15

    Wtf, your compiler is supposed to be fast, and your expressions take so much memory. Almost two cachelines per operator, and you still get your performance numbers. I have definitely wasted my time working on clever data structures very early on to make my compiler fast, since changing those later would feel like an impossible task. In the end, using clever data structures made development harder and slower, so I started looking at other implementation languages to speed up development. There is a fine line between being clever enough to create superior results and being so clever that later in the project you just get stuck, because you need to operate in weird ways on weird things that nobody else has done the same way before.

    • @iestynne
      @iestynne 4 years ago +6

      for something like a compiler I would definitely optimize for code simplicity (comprehensibility, debuggability, robustness, productivity...)
      and only optimize for speed when doing so does not conflict with that primary goal (probably once the design is feature complete and locked down)
      as Jon says, this depends on having a good intuition for how to avoid laying down performance landmines for your future self!

    • @jouniosmala9921
      @jouniosmala9921 4 years ago

      Doing what I was trying to do later would require a complete rewrite from scratch to aim for a 10x performance improvement compared to doing it the traditional way with pointers.

    • @SimGunther
      @SimGunther 4 years ago

      @@jouniosmala9921 That's why it's always a good idea to deduce the magic trick by reconstructing it backwards so you'll know the high level decisions to make to get a minimally viable and fast product ;)

  • @owensoft
    @owensoft 4 years ago +4

    I love how he avoids "web stuff".

    • @dravorek
      @dravorek 4 years ago +12

      That tech stack is so deep. I bet hundreds of people drown in it every day.

    • @DonaldDuvall
      @DonaldDuvall 4 years ago +2

      I agree it is terrible (the web tech stack), but he can't disagree that it gets the job done, and it allows him to have an audience. So maybe he is less distant from the tragedy of the web than he believes.

    • @owensoft
      @owensoft 4 years ago

      @@dravorek drooowwwnnn!

  • @radioleta
    @radioleta 4 years ago +1

    What's that ui64 suffix in 0xfffffffffui64? I've never seen it before in C. 2:42:29

    • @SeanTBarrett
      @SeanTBarrett 4 years ago +5

      It's an old MSVC convention for unsigned 64-bit literals, before they added support for ull.

    • @radioleta
      @radioleta 4 years ago

      @@SeanTBarrett I see, doesn't it harm portability? Cheers

  • @hanes2
    @hanes2 4 years ago +5

    360p... hello darkness

  • @zelexi
    @zelexi 4 years ago

    Something I noticed: struct inheritance is public by default. You shouldn't need to write "struct foo : public bar", since public is assumed. Simply "struct foo : bar" is good enough.

    • @etodemerzel2627
      @etodemerzel2627 3 years ago

      It's better to be explicit about things like that. That way you just don't need to keep a lot of small details in your head.

    • @____uncompetative
      @____uncompetative 2 years ago

      I'm usually already in the bar acting like a foo

  • @SimGunther
    @SimGunther 4 years ago

    1:49:00 Will the Mill Computing processors be the savior of computers that we hope for, or will Spectre invulnerability not be worth it for 99.99% of the industry, forever dooming itself to permanent mediocrity and permanent vulnerability to Spectre?

  • @indycinema
    @indycinema 4 years ago +5

    Please release the compiler.

  • @EkajArmstro
    @EkajArmstro 4 years ago

    Woah I want to know the story behind 4:14:30 lol

  • @faust1652
    @faust1652 4 years ago

    very cool!

  • @MurtagBY
    @MurtagBY 4 years ago +2

    1:33:12 (for me only, sry everyone)

  • @charliemalmqvist1152
    @charliemalmqvist1152 4 years ago +1

    Are there plans to release a compiler for Jai any time soon?

  • @fahdv2597
    @fahdv2597 4 years ago

    What IDE does he use?

    • @SimGunther
      @SimGunther 4 years ago +1

      Emacs for text edits and Visual Studio for debugging

  • @darthtesla301
    @darthtesla301 4 years ago +2

    Damn 2:16:25 hours is a lot

    • @alexnoman1498
      @alexnoman1498 4 years ago +1

      Wait, what? That's exactly half of what it shows me as the length...?

    • @verageren
      @verageren 4 years ago +4

      @@alexnoman1498 Probably one of those double-speeders

    • @alexnoman1498
      @alexnoman1498 4 years ago +2

      Oh, right. Double is a bit too much for me, plus I would have to heavily concentrate then which is harder for this style of rambling conversation.

  • @SiisKolkytEuroo
    @SiisKolkytEuroo 4 years ago +1

    Please make a formatting program that makes everybody's code look like your code.
    Then we'll get someone to hack GitHub and make everyone's code look like your code

  • @krux02
    @krux02 4 years ago

    gofmt makes the arguments about style go away within that language. No more tabs-vs-spaces discussions.

    • @iestynne
      @iestynne 4 years ago +2

      looks like it's also designed to drive large scale automated code transformations, generating minimal diffs... I didn't realize that, that's cool

  • @yixe2253
    @yixe2253 4 years ago +1

    What is the programming language he's working on?

    • @obelisk9251
      @obelisk9251 4 years ago +2

      Jon? He's working on his new language JAI

    • @yixe2253
      @yixe2253 4 years ago

      @@obelisk9251 Still, I thought he'd released that

  • @Spongman
    @Spongman 4 years ago +2

    Vtable calls are almost never mispredicted.

    • @incodegames
      @incodegames 4 years ago +4

      If you're calling a simple function one level of inheritance deep maybe. But as soon as you've bought into the bad kind of OOP practices, you're going to be nesting layers of hierarchy on complex functions very quickly, and you're definitely going to run into misprediction.

    • @Spongman
      @Spongman 4 years ago +1

      @@incodegames I'm not talking about 'bad kind of OOP practices' - bad coding is language agnostic. I'm just saying that making a vtable call can potentially be faster than switching on a tag, especially with x86/64 __fastcall. However, if you've trained yourself an irrational gag reflex to `o->m()`, then you're going to have to implement it yourself. Honestly, I find all this micro-optimization hilarious. As soon as the program has to hit the disk, all of this instantly drops below the noise level.

    • @incodegames
      @incodegames 4 years ago +4

      ​@@Spongman I mean yeah, bad coding is bad coding, but the mindset that "x problem almost never happens" is why it takes four hours to compile Tensorflow. You're never going to be faster than an inlined function that fits in the cache line with no mispredictions, so that's what we should shoot for if we can. The whole argument is just that we should program around the structure of data on the platform, and shape our mental models based on the real problem. I don't see any other way to scale appropriately.

    • @Spongman
      @Spongman 4 years ago +1

      @@incodegames "You're never going to be faster than an inlined function that fits in the cache line with no mispredictions" - that's not even relevant to the case I'm talking about.
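
      For reference, the two dispatch styles being debated here, side by side in toy form (illustrative only; which one predicts better depends on how the objects are arranged in memory, and grouping by kind helps both).

      #include <cstdint>

      // Virtual dispatch: an indirect call through the vtable.
      struct Shape  { virtual ~Shape() = default; virtual float area() const = 0; };
      struct Circle : Shape { float r = 1; float area() const override { return 3.14159f * r * r; } };
      struct Square : Shape { float s = 1; float area() const override { return s * s; } };

      // Tag switch: a discriminated union and a branch on the tag.
      struct ShapeT {
          enum Kind : uint8_t { CIRCLE, SQUARE } kind;
          float dim;
      };
      inline float area(const ShapeT& sh) {
          switch (sh.kind) {
              case ShapeT::CIRCLE: return 3.14159f * sh.dim * sh.dim;
              case ShapeT::SQUARE: return sh.dim * sh.dim;
          }
          return 0.0f;
      }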

  • @didey4897
    @didey4897 4 years ago

    Me first

    • @zeikjt
      @zeikjt 4 years ago +3

      Enjoy your 360p :)

    • @obelisk9251
      @obelisk9251 4 years ago +2

      @@zeikjt The 360p is such a tease :(

    • @wessmall7957
      @wessmall7957 4 years ago +1

      360p gang

  • @DeathKillBot
    @DeathKillBot 4 years ago +4

    Premature Optimization: The Movie

    • @yksnimus
      @yksnimus 4 years ago +13

      In my humble opinion, "prEmatuerE oPtimIZatiOn iS the ROot of EviL" being posted as a mature sentence in every programming circle has done more evil than good. People don't care about optimization even when they should, or only care when it's too late.