Reducing C++ Compilation Times Through Good Design - Andrew Pearcy - ACCU 2024

  • Published: 4 Nov 2024

Comments • 17

  • @AndrePoffo · 27 days ago

    The separation of protocol and implementation is really helpful. It makes testing much easier, too.
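
    A minimal sketch of that idea with hypothetical names (not taken from the talk): the "protocol" is a pure virtual interface, so tests can plug in a fake without pulling in the real implementation or its headers.

    // clock.h -- the protocol: callers and tests depend only on this
    #include <string>

    class Clock {
    public:
        virtual ~Clock() = default;
        virtual std::string now() const = 0;
    };

    // Production code implements Clock elsewhere; a test supplies a fake.
    class FakeClock : public Clock {
    public:
        std::string now() const override { return "2024-11-04T10:00:00Z"; }
    };

    // Code under test is written against the protocol, never the implementation.
    std::string greeting(const Clock& clock) {
        return "Hello, it is " + clock.now();
    }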

  • @tomkirbygreen · 2 months ago +1

    Excellent talk. Pretty much essential material for software at scale.

  • @hbobenicio · 2 months ago

    Very good talk, thank you!

  • @llothar68 · 1 month ago +2

    Restrict your use of header-only libraries and templates at API boundaries, use PIMPL and forward declarations, and you are fine.
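
    A minimal PIMPL sketch with a hypothetical Widget class (not from the talk): the public header needs only a forward declaration, so the heavy dependencies stay inside one translation unit.

    // widget.h -- public header: no heavy includes, just a forward declaration
    #pragma once
    #include <memory>

    class Widget {
    public:
        Widget();
        ~Widget();   // defined in widget.cpp, where Impl is a complete type

        int size() const;

    private:
        class Impl;                   // forward declaration only
        std::unique_ptr<Impl> impl_;  // callers never see the implementation
    };

    // widget.cpp -- the only file that pays for the heavy headers
    #include "widget.h"
    #include <vector>                 // heavy dependency hidden from widget.h

    class Widget::Impl {
    public:
        std::vector<int> data{1, 2, 3};
    };

    Widget::Widget() : impl_(std::make_unique<Impl>()) {}
    Widget::~Widget() = default;

    int Widget::size() const { return static_cast<int>(impl_->data.size()); }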

  • @bfitzger2 · 2 months ago

    On the "macros are evil" bit where "#define WIDGET 7" messed up other code: we sometimes used wrapping headers that #undef'd macros we didn't want to leak out, or did the #undef in the source file. This originally came up when making some code cross-platform, where Windows or Apple headers liked to define very common names for their constants; platform-specific code would use the raw header, but public code in our project used the wrapping headers. I think Unreal does this as well, and I wouldn't be surprised to see it in older cross-platform Unix projects.
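
    A minimal sketch of such a wrapping header, assuming a Windows build; the macro names here are just the usual offenders, not the ones from the talk.

    // win_sdk.h -- public code includes this instead of the raw SDK header
    #pragma once
    #include <windows.h>   // defines min, max, GetMessage, ... as macros

    // Keep the most troublesome macros from leaking into the rest of the project.
    #ifdef min
    #undef min
    #endif
    #ifdef max
    #undef max
    #endif
    #ifdef GetMessage
    #undef GetMessage
    #endif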

  • @tlacmen · 2 months ago +1

    Will lld or mold improve build times with whole-program optimization enabled?

    • @paulluckner411 · 2 months ago +1

      mold is supposedly faster in any standard use case. It is optimized for modern hardware by parallelizing as much as possible.

  • @TalJerome · 2 months ago +1

    Does anyone understand what he meant by "more granular" regarding the protobuf issue? (23:45)

    • @ContortionistIX · 2 months ago

      instead of including the whole schema, only include the schema for specific endpoints

    • @paulluckner411 · 2 months ago +1

      I guess he just means to include the smallest headers possible, i.e. only the generated headers you actually need rather than one catch-all include.
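
      A small sketch of that idea, with hypothetical file and type names (protoc emits one foo.pb.h per foo.proto): forward-declare the message in your own header and include only the specific generated header in the .cpp.

      // handler.h -- no protobuf include needed; generated messages are ordinary classes
      #pragma once
      namespace api { class GetUserRequest; }

      void handle(const api::GetUserRequest& request);

      // handler.cpp -- include only the schema this endpoint uses,
      // not an umbrella header that drags in every generated message
      #include "handler.h"
      #include "api/get_user.pb.h"

      void handle(const api::GetUserRequest& request) {
          // ... read fields from request here ...
      }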

  • @GeorgeTsiros · 2 months ago +1

    Anyone remember Turbo Pascal?
    Remember how fast it was? Remember the _hardware_ that it ran on?
    Yeah. We've got a _lot_ of catching up to do.
    There is zero reason the executable can't be ready some milliseconds after a character has changed in the code.
    That THE ENTIRE SOURCE is reprocessed, as if it had never been seen before, every time a build is started, is comical.

    • @depralexcrimson · 2 months ago +1

      thank the software industry for that one... instead of hiring passionate people, they hire vloggers who do anything but code for 90% of their work day.

    • @maxrinehart4177 · 2 months ago

      @depralexcrimson The tech industry really screwed itself hard.

    • @allNicksAlreadyTaken · 1 month ago +1

      There are actually a thousand reasons; you obviously just don't understand them. If you are so smart, go out and fix it.

    • @27182818284590452354 · 1 month ago

      Turbo Pascal had modules 40 years ago.
      C++ compilers still can't implement them properly.
      It's just mind-boggling.
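
      For reference, a minimal C++20 modules sketch; toolchain and build-system support is still uneven, which is largely the complaint here.

      // math.cppm -- module interface unit, compiled once and reused by importers
      export module math;

      export int add(int a, int b) {
          return a + b;
      }

      // main.cpp -- an import instead of a textual #include
      import math;

      int main() {
          return add(2, 3);
      }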

    • @realhet · 1 month ago

      One valid reason is optimization.
      I remember TP, and I remember Delphi on Win32 up until about 2012. It was still lightning fast, but generated slightly slower code than LLVM.
      Later I got to work with C++ and got totally sick of the compile times: a 50 kloc project, and 45 seconds to launch a debugger on unoptimized code. I don't even do big projects, just my one-man project... With optimization it was more like 2-3 minutes. That is fast compared to this presentation, but coming from Borland Pascal and Delphi it is slow as hell. By the time the program starts running, I have already forgotten why I started it.