The Dirty Truth About Clean Code

  • Published: 26 Sep 2024
  • In this video, we will delve into the dirty truth about clean code. We will explore its significance and how it affects performance while reacting to "Clean" Code, Horrible Performance from Molly Rocket. If you'd like to watch their entire video, then check the additional resources section.
    Clean code is a programming approach that focuses on writing easily readable, maintainable, and efficient code. It emphasizes the use of meaningful names, consistent style, and modular organization to enhance the overall understanding of the code. By following clean code principles, developers can improve collaboration, reduce bugs, and simplify future modifications. Clean code practices contribute to better performance and stability in software projects. Ultimately, clean code is essential for creating high-quality software and fostering a healthy development environment.
    Performance-oriented programming is an approach that prioritizes the optimization of code for speed and resource usage. By employing efficient algorithms, data structures, and hardware-specific optimizations, developers can create software that runs faster and utilizes resources more effectively. This programming paradigm is particularly valuable for mission-critical applications, real-time systems, and high-performance computing scenarios.
    🚶‍♂️ FOLLOW ME 🚶‍♂️
    Join Our Discord - / discord
    Twitter - / codyengeltweets
    TikTok - / codyengeltalks
    Medium - / membership
    Subscribe To My Newsletter - www.engel.dev/
    💡Get 20% Off Of Brilliant - brilliant.sjv....
    🎥 My YouTube Gear - kit.co/CodyEng...
    🖥 My Desk Gear - kit.co/CodyEng...
    🎵 My Background Audio - www.epidemicso...
    *The above links are affiliate links.
    📚 RESOURCES 📚
    "Clean" Code, Horrible Performance - • "Clean" Code, Horrible...
    #softwareengineer #cleancode #programming

Comments • 769

  • @CodyEngelCodes
    @CodyEngelCodes  1 year ago +30

    Hey everyone, "Clean" Code, Horrible Performance is a 20+ minute video, so as you can imagine this reaction video was heavily edited. Here is a link to the original video -- ruclips.net/video/tD5NrevFtbU/видео.html

    • @jboss1073
      @jboss1073 1 year ago

      "if there's no clean abstraction don't try to force one" - the clean abstraction is ABSTRACTION itself and it always exists, hence why disagreeing with DRY makes no sense. Have you studied lambda calculus?

    • @lenyo38
      @lenyo38 1 year ago +1

      That thumbnail though... Disrespectful.

    • @seriouscat2231
      @seriouscat2231 4 months ago

      @@lenyo38, I'd think Casey has enough sense of humor to get it without any problem. I think his main point is performance, but I think the biggest problem with clean code is that sometimes the interface is as complicated as the actual implementation, so there are no cognitive savings in putting some things behind an interface. But I'm afraid that this is just a reaction video and that the main takeaway from this will be "well, I disagree" and "well, it depends".

  • @rafaelbordoni516
    @rafaelbordoni516 1 year ago +375

    For context, this is Casey Muratori; he is a game dev. Performance in games is not "if it takes less than a second to load the data, it's fine"; it's more "the faster the code, the prettier the game and the better it plays." Customers also complain far more when a game performs a little worse than the industry standard; everyone can see when a game looks better, plays better, and performs better than yours. In this context, using all of the hardware with top efficiency is paramount. I think the message isn't to write the most unmaintainable code to get the best performance; it's just to be aware of how your code affects performance. Also, breaking clean code rules doesn't necessarily make your code unmaintainable: what is considered clean is extremely opinionated, and writing performant code doesn't need to produce a tangled mess.

    • @CodyEngelCodes
      @CodyEngelCodes  1 year ago +28

      Yeah, his advice makes more sense for game development. Although I will say Satisfactory performed like garbage (and likely still does for larger factories), but it's a fun game and they have been optimizing it along the way.
      And I agree that not following clean code doesn't mean your code is unreadable, but in his own example it did make it more difficult to understand.

    • @Ignas_
      @Ignas_ 1 year ago +49

      @@CodyEngelCodes I agree that it makes it more difficult to understand. However, what you see is exactly how it works, once you DO understand it, you know EXACTLY how it works. Whereas with clean code, in larger codebases you may have to piece together code from several different functions in various classes in several different files to understand what's going on.
      It's like the difference between a programming article and a research paper about an algorithm. The research paper is harder to understand, but it carries a lot more information. And I think as professionals we should strive to understand more complex things, rather than futilely trying to dumb down genuinely complex things.
      A good programmer shouldn't be afraid of diving into complex code.
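      The example in Casey's original video is essentially this comparison: a shape class hierarchy with virtual area methods versus one flat function over a tagged struct. A minimal C sketch of the flat style (the struct layout, names, and constants here are illustrative, not his exact code):

```c
#include <stddef.h>

typedef enum { SQUARE, RECTANGLE, TRIANGLE, CIRCLE } shape_type;

typedef struct {
    shape_type type;
    float width, height;   /* height is ignored for squares and circles */
} shape;

/* All area logic in one flat function: more to read in one place, but
 * nothing is hidden behind virtual dispatch spread across files. */
float shape_area(const shape *s) {
    switch (s->type) {
    case SQUARE:    return s->width * s->width;
    case RECTANGLE: return s->width * s->height;
    case TRIANGLE:  return 0.5f * s->width * s->height;
    case CIRCLE:    return 3.14159265f * s->width * s->width;
    }
    return 0.0f;
}

/* The hot loop is a tight sum with no indirect calls, which a compiler
 * can inline, unroll, and vectorize far more easily than vtable calls. */
float total_area(const shape *shapes, size_t count) {
    float total = 0.0f;
    for (size_t i = 0; i < count; i++)
        total += shape_area(&shapes[i]);
    return total;
}
```

      Whether the switch or the hierarchy reads better is exactly the opinionated part being debated in this thread; the performance difference between the two is the part Casey measures.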

    • @duncanshania
      @duncanshania 1 year ago +17

      @@Ignas_ I don't think this is far off; the big thing I've noticed is that forcing OO into everything makes the vast majority of things more complicated than they are. Using an Actor or CS pattern does well for quick game prototypes, but it's still unmaintainable the vast majority of the time.

    • @duncanshania
      @duncanshania 1 year ago +2

      @@CodyEngelCodes I haven't looked at their code yet, but if it's anything like 7 Days to Die it will likely be unfixable in any meaningful way, assuming they leaned too hard into the typical CS pattern that indie Unity devs like using. A game like that would probably read better in a more old-school C style, since there isn't a lot of variety in game types or dynamic behavior arising from the data itself; the dynamic behavior comes from player ingenuity. For 7D2D and Minecraft they should have straight up used ECS, for both maintainability and runtime. All of these games could easily run 100x better and be easier to develop if indies weren't generally so resistant to self-improvement (i.e., trying things besides whatever OO-style pattern they started with). Lacking skill isn't the issue; it's the lash-back they usually dish out when anyone actually points out the issues, since they basically live in a feedback-loop echo chamber. I went through the OO phase; it is fairly addicting until you take a couple of days to read or listen to a different perspective.

    • @ChronicTHX
      @ChronicTHX 1 year ago +7

      @@Ignas_ Some people just like having 15 classes and the same number of beans (I'm targeting you, Java sheep) with overly descriptive naming, and think their code is clean just because of that. The same people are forced to create a class just to have a main. I think it is some form of addiction at this point. Same goes for IoC, when most popular enterprise frameworks use reflection.

  • @nberedim
    @nberedim 1 year ago +68

    No one remembers clean code. Everyone remembers Quake's fast inverse square root.

    • @lagz89
      @lagz89 5 months ago

      gold
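      The routine being referenced is Quake III Arena's Q_rsqrt, which approximates 1/sqrt(x) with an integer bit trick plus one Newton-Raphson step. A sketch of it in C (using memcpy for the float/int reinterpretation instead of the original's pointer cast, which is undefined behavior under modern C aliasing rules; compilers reduce the memcpy to a register move):

```c
#include <stdint.h>
#include <string.h>

float q_rsqrt(float number) {
    float x2 = number * 0.5f;
    float y  = number;
    uint32_t i;

    memcpy(&i, &y, sizeof i);        /* reinterpret the float's bits as an int */
    i = 0x5f3759df - (i >> 1);       /* the famous magic constant: initial guess */
    memcpy(&y, &i, sizeof y);
    y = y * (1.5f - x2 * y * y);     /* one Newton-Raphson refinement step */
    return y;                        /* max relative error is roughly 0.2% */
}
```

      On today's hardware a dedicated rsqrt instruction is usually faster, but the trick remains the canonical example of performance-first code that is terse, famous, and anything but "clean" by Bob Martin's standards.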

  • @Scribblersys
    @Scribblersys 1 year ago +260

    5:05 "Apple hires really smart hardware engineers, and they just keep making things faster and faster, which means we can keep making things more convoluted and slower."
    This is exactly the kind of attitude that Casey is trying to push against. Why do you think it's a good thing that as hardware gets faster and more optimized, software can get slower and more bloated such that it basically runs just as fast (if not slower) as it did 5/10/20 years ago, basically squandering the work that hardware designers put into making things as powerful as they are now?

    • @LuaanTi
      @LuaanTi 1 year ago +8

      Because it's worth it? It's all about how efficiency gets multiplied. I'm all for pushing software to perform better. But it's clear enough that it has costs. And good management is all about balancing costs and benefits, not about pushing your subjective sense of value or aesthetics on other people :D
      There's a reason the "ideal" software he's pushing for is so rare (and always has been, even back in the 8-bit days). Yes, a lot of poorly performing software could be much improved with relatively little cost, and a good chunk of that even without compromising readability or maintainability. Plenty of programmers aren't good enough to grab that relatively low-hanging fruit, or just don't care. But what he's talking about is quite different - it just doesn't work for anything big. It's the Unix mirage all over again (nothing new under the Sun, as usual :D ). It seems to be working great with a few small pieces, but it just doesn't scale. Sometimes that's good enough; but we can all see what customers actually prefer - sometimes it's the small and efficient... but the vast majority of time, they want something that does everything they want at the tip of their finger.
      Of course, even pretty basic UI is one of those really convoluted problems. That's why there were always so many convoluted libraries to choose from, trying to abstract it away - the complexity is there in the problem. It's not a coincidence people like him absolutely hate doing any UI work. Heck, most programmers probably hate UI work :D

    • @JimAnkrom
      @JimAnkrom 1 year ago +8

      Yeah, if you believe this bit that "clean code means your software will run 50x slower than my ultra-optimized code," you're being lied to. Unmaintainable spaghetti cannot easily be parallelized. It cannot easily be extended to meet new requirements. Users waste hundreds of milliseconds just reacting to the application notifying them that their task is complete. You 10x'ing my 10ms code by making it impossible to read is not making anything tangibly better, and frankly you'll likely make my clean (but optimized at a high level) code slower, since making all the subtasks faster means nothing when those fast threads are waiting on the database to confirm the save succeeded before telling the user.
      But no, this is 2023, so let's talk in 2023 terms: it cannot easily be tested, and in a world starting to think about AI writing code, testing is of primary importance. Full stop. Can't unit test your super-fast spaghetti? Your fast code at runtime is outperformed on time-to-market. Cry all you want about how their code looks slower on paper, but AI-assisted developers with code that can be tested (and therefore quickly refactored to meet requirements) will outperform you where it counts: getting functionality out the door.

    • @MrHaggyy
      @MrHaggyy 1 year ago +12

      @JimAnkrom It still sucks given all the functionality that gets shipped on my device, like the 100th shitty browser built to collect data on the user so the app developer can serve his real customers, the advertisers.
      Also, fast code is usually not that hard to read; complicated code is.
      But yes, some clean code guidelines do help compilers and interpreters generate good code.
      And in the end it really depends on the industry. If you do desktop or web apps, nobody expects performant code, but everyone wants to adapt fast, so slow and flexible code it is. If you do OS, engine, or library work, your code had better be fast or be the only one available; flexibility has constraints and performance does matter. And if you do safety-critical work like transportation or infrastructure, people will die if you can't hit performance criteria 100%, and your requirements never really change anyway. In that industry you really work with your compiler to get the best instruction sequence possible.

    • @LuaanTi
      @LuaanTi 1 year ago +1

      @@MrHaggyy Definitely. Every experience I've ever had reinforces that any business funded by advertising is bad for you. When was even the last time some advertising tried to show why their product is better/something you want? It's all about exposure and manipulation.
      And it gives all the wrong incentives to the host of the advertising. Weird how, when newspapers were funded by tobacco advertising, they never really talked about tobacco being bad for people, right? Heck, they usually sold newspapers in tobacco stores :D And funded by junk food, and the car industry, and oil and coal... no big deal. Oh, and of course, sports hooliganism is just "harmless fun" and "celebration," nothing to do with the hundreds of millions pouring in from advertising revenues related to "sport" (while the exact same outcomes from people, say, demonstrating for human rights are vilified and "regrettable"... sports matches must be a lot better for humankind than human rights, eh? :D).
      There was always that argument that advertising allows the new, small guy get visibility for their product. Which is kind of true, but... when was the last time you saw an ad like that? :D

    • @catcatcatcatcatcatcatcatcatca
      @catcatcatcatcatcatcatcatcatca 1 year ago +9

      It’s also pretty bold to blame customers for their never-ending lust for consumerism and new gadgets, while the companies whose applications people rely on write slower code and use more system resources year after year.
      It’s an environmental issue as well, in my view.

  • @ohwow2074
    @ohwow2074 1 year ago +61

    The thing I find most frustrating in clean codebases is the level of abstraction. I personally sometimes find it disgusting. Like you mentioned with DRY: why would someone invent abstractions for small pieces of code that are already readable and maintainable? Yet some people do this.

    • @Gol_D_Roger_The_Pirate_King
      @Gol_D_Roger_The_Pirate_King 1 year ago +2

      Try using that piece of code multiple times and later try to update all the repeated copies. I am sure you will forget one implementation that is buried deep in your codebase. That one piece of code you forget to update will now introduce a bug.
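      A tiny C sketch of the failure mode being described; the discount rule and function names are hypothetical, made up purely for illustration:

```c
/* Before DRY: the 10% discount rule is pasted into two places. Change
 * the rate in one copy and forget the other, and the two totals silently
 * disagree -- the "buried copy you forgot to update" bug. */
float checkout_total_copied(float subtotal) { return subtotal * 0.90f; }
float invoice_total_copied(float subtotal)  { return subtotal * 0.90f; }

/* After DRY: the rule lives in exactly one function, so there is exactly
 * one place to update when the rate changes. */
static float apply_discount(float subtotal) { return subtotal * 0.90f; }

float checkout_total(float subtotal) { return apply_discount(subtotal); }
float invoice_total(float subtotal)  { return apply_discount(subtotal); }
```

      The counterpoint raised later in this thread still applies: extract the helper once the duplication is real, not in anticipation of it.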

    • @Muskar2
      @Muskar2 1 year ago

      @@Gol_D_Roger_The_Pirate_King That's how I learned about this too. A common example is changing how logging works globally. But I think it rests on a fundamentally wrong assumption rooted in OOP land.
      Instead, if you build your application with the parts least likely to change at the bottom (e.g., platform-specific code -> general application functionality such as the state machine or lifecycle -> specific modules that fit specific scenarios) and bundle all the relevant code and data close to where they're used, then you rarely run into these problems, except the times when the compiler can literally point you to all the places you haven't finished refactoring. And I've found the cost of infrequent refactoring is far less than the added complexity of infinitely flexible modules, where you quickly lose sight of the context that may be causing the bugs you're investigating.
      In OOP land the layers are usually data structures at the bottom, presentation at the top, and business logic in the middle (or circular, like MVC), but I think those ideas are part of the dogma today, not rooted in anything quantifiable or measured, just a mental world model.

    • @GeorgeTsiros
      @GeorgeTsiros 6 months ago +1

      It's... a very difficult problem to solve. There are some quite convoluted problems out there, and abstracting stuff is _a very useful tool_ for managing them. _Obviously_ it can be overdone and forced into places where it doesn't make sense. _My_ general advice is: "what is the absolute simplest method that satisfies the _current_ requirements?" I put "my" in italics to show that it is an approach I use and that I do not, at the moment, have hard arguments for (or against). If the simplest method that satisfies the requirements uses some abstraction... sure, why not?

    • @OnlyForF1
      @OnlyForF1 4 months ago +2

      @@Gol_D_Roger_The_Pirate_King Fix it when you later start using the code multiple times. The problem isn't abstraction, it's abstracting outrageously early based on assumptions that are probably wrong.

  • @skytech2501
    @skytech2501 1 year ago +212

    Software is definitely slow nowadays, we are riding on the hardware improvements.

    • @doktoracula7017
      @doktoracula7017 1 year ago +65

      Riding? We are negating them. Some of the software I use nowadays is slower than similar software I used 15 years ago, on a computer that was already old back then. At least most of it is faster.

    • @duncanshania
      @duncanshania 1 year ago +8

      When you realize that, with fewer developers, people could make PS1 games that compete with modern phone games in a lot of ways, written in C with 2MB of system memory and a single-core CPU thousands of times less powerful. To be fair, phone games these days lean heavily into being skinner-box scams, so I'm sure quality and competency don't matter as much when money still crutches it. But we aren't supposed to let healthcare drastically price-gouge with reduced quality, and we shouldn't let that remain an excuse not to improve ourselves.

    • @jesusmora9379
      @jesusmora9379 1 year ago +8

      @@duncanshania Phones don't have a single architecture; they usually rely on OpenGL ES, and each phone supports a different number of implementations. They also have varying processing power and memory, and can run applications in the background.
      The PS1 was one architecture, and games were limited by it. While some games made use of smart solutions to make better games, like Crash Bandicoot, most PS1 games were bad, and many were just reskins of other games with similar mechanics, mostly fighting games and JRPGs.
      Also, they were AAA and AA games, made by expert programmers and entire teams of artists. Today anyone can make a mobile game in Unity.
      Your example is as bad as Muratori's.

    • @duncanshania
      @duncanshania 1 year ago

      @@jesusmora9379 That's not entirely wrong, though the headcount for the work was still drastically smaller, despite a platform like that being nearly impossible for most folks to wrangle these days despite how simple it actually is. Companies back then were generally headed, or at least heavily influenced, by competent senior software devs. They were lean; they didn't have to cater to people unwilling to grow individually. The entire culture back then was different. And you can still make teams almost that productive today if you rip them away from their OOP roots. What was AAA back then isn't even 1/10th the size of modern companies, which weren't dipping in quality, hiring more psychologists and artists to help trap people in skinner boxes, and making the same stupid decisions just because they could get away with it. Now, phone hardware runs an OS, and yes that eats runtime (funny, I usually complain to OSDevs that game consoles would be nice if isolated in unikernels again), but its processing potential is still drastically greater. When people can't survive without OOP, and GC/RAII as an extension of not being able to let that go, they can't be as productive. It's just pointless nonsense people stick with to feel more secure.

    • @what42pizza
      @what42pizza 1 year ago +4

      @@jesusmora9379
      > phones don't have a single architecture
      Casey himself has said many times that this isn't a good excuse for slow code, just watch his RefTerm series
      > they also have a varying processing power and memory and can run applications in the background
      So? Even with differing specs and other apps running, each app still has thousands of times more resources than PS1 games
      The rest of your comment is mostly irrelevant since we should be talking about the average programs, not the ones made by experts or the ones made quickly and terribly. Although, I do have one more thing to add:
      > also they were AAA and AA games, made by expert programmers and entire teams of artists.
      So are today's games? And we have much bigger teams of creators and programmers that have more expertise than anyone could have ever had in the 90s. How in the world is that an excuse for the slowness of today's code?

  • @VladyslavPavliuk
    @VladyslavPavliuk 1 year ago +44

    7:20 - that reaction is probably the root cause of why the default Windows picture viewer takes 1-2 seconds to load on a $4k PC.

    • @CodyEngelCodes
      @CodyEngelCodes  1 year ago +4

      Gotta spend $5k if you want to load the picture viewer faster.

    • @martinsulak6366
      @martinsulak6366 1 year ago

      I do think those WinRT apps can be vastly improved. I suspect the operating system itself, not the apps, is responsible for the performance issues.

    • @duncanshania
      @duncanshania 1 year ago +1

      @@martinsulak6366 From the bit of experience I have in OSDev: yes, the OSes made these days are just... terrible, flooded with dogma and neglect. And while this might not make sense to others, my gripe is that drivers being so complicated contributes to the situation, considering that most OSDevs expect you to make another Linux-backed OS or you're committing a sin. Total lack of creativity.

  • @Pspet
    @Pspet 1 year ago +108

    Your mentality is literally the reason why my smart tv takes 2.5 seconds to open a static settings menu. (Hint: it should take 0 seconds. Exactly 0)

    • @igornowicki29
      @igornowicki29 9 months ago +7

      I bet it's not a "clean code" issue but a really heavy bloat of spaghetti code, where someone in the past tried to be 'smart' a few iterations ago and now it's unreadable. Clean code, or rather modularity, allows you to do easy refactors that let you optimize speed-critical components.

    • @youtubeenjoyer1743
      @youtubeenjoyer1743 6 months ago +7

      @@igornowicki29 Modularity has a cost. Even if software is easier to refactor module-by-module, a collection of modules is inevitably imposing a specific structure. If the structure is bad (this is very common, unfortunately), you will have to refactor everything anyway.

    • @d_lollol524
      @d_lollol524 5 months ago

      Companies want to sell you their next-gen TV sets every year. It is time to sell the old TV and buy a new and "slower" one.

    • @mvargasmoran
      @mvargasmoran 4 months ago +1

      I’ve worked on TV apps for LG TVs; those apps are crap: slow, web-based, bloated, spaghetti “clean code” with a bunch of abstractions and layers of indirection. Don’t try to defend crappy software.

    • @Whythis2628
      @Whythis2628 4 months ago +1

      What should you trade off? Do you want your next software update next month, or do you worry about 2.5 seconds? Hint: clean code does not necessarily mean slow code. We have come far enough not to worry about micro-optimization; in software we worry more about faster delivery. If Netflix can't deliver play or pause functionality, we are moving to something else.

  • @paulooliveiracastro
    @paulooliveiracastro 1 year ago +14

    I can guarantee that I'm upset with performance. I literally pay hundreds of dollars for a new phone whenever it "gets slow." And I know my phone is not slow; it's the software that got slower over time.
    I also believe that doing things more efficiently doesn't take that much more time. We just got lazy, and we also do things that actually harm our productivity for the sake of clean code.

    • @vast634
      @vast634 1 year ago

      A colleague made a data-processing tool (XML in, differently formatted XML out). The whole thing, with all the added libraries and dependencies, was 10 megabytes in size. I wrote the same functionality with basic common Java classes; the final package was a 50-kilobyte jar.
      He basically just slapped in all the typical external libraries he is used to working with, most of it bloated, unnecessary, and potentially a security risk.

  • @donjindra
    @donjindra 1 year ago +96

    Object oriented programming introduces as many problems as it allegedly solves. IMO, the best policy is to keep your code simple, easily understood and well documented. I also favor short functions that perform one task. But there are definitely _some_ cases where good programming practices take a back seat to performance. I'm one of those who wants to know as much about the internals as practical, especially when performance is a concern. It has helped me understand whole systems much better.

    • @cccc2740
      @cccc2740 1 year ago +4

      What problems does OOP create? Writing small functions and classes, using DRY, following SOLID, and so on depends on the programmer. How does OOP stop a programmer from following these principles? I am working in a Scala codebase and it's in a pathetic state: none of those nice principles are followed; long chains of HOFs, huge methods, huge classes, etc. Scala is a functional language, supposedly solving many of these issues, isn't it?

    • @duncanshania
      @duncanshania 1 year ago +6

      @@cccc2740 It doesn't depend on the programmer; it depends on the problem being solved. I've seen a few small cases where OOP does fairly well, but they're an extreme minority. OOP typically over-abstracts a problem so that it's actually harder to reason about, and it seems to encourage coupling and discourage modularity. Shocking response, right? You'd have to play around a fair bit on the right examples to really see it for yourself. The point of orientations and patterns is that they are situational trade-offs. The notion of a silver bullet needs to die sooner rather than later.

    • @cccc2740
      @cccc2740 1 year ago

      @@duncanshania Can you give me an example here?

    • @tainicon4639
      @tainicon4639 1 year ago +2

      @@cccc2740 Data science code is made more difficult to read and implement by OOP principles. Performance is also lower (classes are not how a computer works), so all the OO code needs to be lowered back to procedural code when compiled, which adds extra junk (this can be mitigated by modern compilers / AI analysis of compiled code).
      It's not a huge issue in non-performance-intensive places. But scientific computing uses a lot of low-level procedural code (C and Fortran) for a reason: it works better and is easier to implement (again, for highly demanding code, not simple Python scripts for basic projects).
      None of this applies to web development, which (in my observation) is where all of these discussions are focused. The vast majority of SEs are not implementing code with performance requirements like that.

    • @cccc2740
      @cccc2740 1 year ago +2

      @@tainicon4639 I would like to agree with you here, but when I look around I see that Python, one of the slowest programming languages out there, is today one of the most commonly used languages in scientific computing, isn't it? ML, image processing, and all that stuff is done in Python today, so this performance argument doesn't seem in line with what we see in the real world...

  • @zacharychristy8928
    @zacharychristy8928 1 year ago +19

    The thing I hate about all of these discussions is that it seems like everybody is only ever fighting straw men. The clean code people will say they hate working on incomprehensible spaghetti codebases that look like they were written by a random symbol generator, and the low level people will say they hate inheritance cathedrals split across infinite files with superfluous OOP enterprise fluff.
    The issue is, neither side intentionally writes code that way. Low level people have strategies for making things readable. OOP people have ways of making things simple. Every time somebody comes at my camp telling me how much they hate "X" practice, my response is always the same. Nobody who knows what they're doing is going to do "X".
    First and foremost, your coding practice should be decided by the problem domain. It is NOT valuable to gripe about every wasted CPU cycle and the end of modern software expertise when you are in an application domain where performance improvements offer no significant value. I've seen someone spend a year improving the performance of a text-based game that already ran well above 90fps. A year was wasted on 'improvements' that no user will ever notice, let alone appreciate.
    That's kind of my biggest criticism of Casey. Someone prototyping an application in Python, or making a simple frontend, or using a pre-built game engine is NOT always going to solve that problem more effectively by stripping away abstraction in the name of 'simplicity' which is arguably just as nebulously defined as 'clean'. I think he does understand this concept, it just always seems like there's this unspoken assumption in what he's saying that EVERY problem domain is best handled by a dependency-free C library and a command-line compiler.

  • @dagoberttrump9290
    @dagoberttrump9290 1 year ago +26

    Gotta side with Casey on this one. I'm doing embedded systems for a living and gamedev as a side project. For me a big part of the fun lies in squeezing as much efficiency out of the algorithms as possible, and I always find it funny that people think you'll end up with a mess after doing that. The most optimal code for me is procedural code you can read like a book. And as it happens, CPUs like it that way too, so why bother with pointless abstractions? The only thing I care about is how to get from A to B in the shortest amount of time, and OOP often gets in the way. I do like OOP for the concept of RAII and stateful objects, though; in all other cases I just ignore it.

    • @BosonCollider
      @BosonCollider 1 year ago +1

      Well, OOP has a few neat ideas, but I do feel that it used the wrong abstraction for common tasks and got people used to base class + inheritance abstractions which have a runtime cost, instead of (monomorphized) generics + function overloading/trait interfaces which are zero cost abstractions

    • @absalomdraconis
      @absalomdraconis 1 year ago

      ​@@BosonCollider : Ironically, neither generics nor inheritance precisely belong in objects. They're really part of interfacing, which ideally should be a partially separate thing, whereas objects themselves should just be about producing discrete & coherent things, with all the polymorphism & similar available through some sort of casting operators or something (preferably either inlined, or even entirely executed at compile time). Unfortunately, everyone got obsessed with classes, so what should have simplified things instead was used to make them more complicated.

    • @aoeu256
      @aoeu256 1 year ago

      Note that OOP as in Java can increase your line count, but OOP was originally about reducing lines of code and making everything easier, as seen in Smalltalk and Ruby.

  • @ToBeardOrNotToBeard
    @ToBeardOrNotToBeard 1 year ago +54

    Put simply, "Clean Code" as put forth by Bob Martin is a religion. It is not engineering. Statements are made with absolutely no evidence to back them up, and the result is the mess that software is in today, where everything not only runs thousands of times slower than it should (note: should, not could), but also has more bugs than ever before. All Casey is trying to do is draw attention to the notion that when it comes to the raw performance of the code, which is something that actually *can* be objectively measured, Clean Code introduces insane penalties. If you want to justify those penalties, you'd better be damn sure that you're *actually* getting something out of them, and you can't really argue "I read this in a book with zero evidence" and expect to be taken seriously by somebody who is actually engaged in engineering.

    • @etodemerzel2627
      @etodemerzel2627 1 year ago +4

      Very well said.

    • @jasonwhite5578
      @jasonwhite5578 1 year ago +3

      Yeah, so true.

    • @CodyEngelCodes
      @CodyEngelCodes  1 year ago +9

      I don't take issue with him showing that clean code is less performant, I actually really like that part of the video. The only part that I really take issue with is towards the end when he makes very absolute statements about design patterns.
      Someone who is a Clean Code absolutist will write terrible code. Someone who is only focused on application performance will also write terrible code. Ultimately it's about trade-offs and depending on what you are working on those trade-offs will be different.

    • @markhathaway9456
      @markhathaway9456 1 year ago

      Q: Do things really run slower or is it that there is much more software being run (under a single name or otherwise)?

    • @trevoro.9731
      @trevoro.9731 1 year ago

      Any religion of such kind should be paid to follow. If it is not paid for being followed, that is a scam, not a religion.

  • @dire_prism
    @dire_prism 1 year ago +36

    Everyone is worried about too many lines of code in a method. Everyone is worried about too many methods in a class. Not very often do I see people worried about polluting the namespace with too many small classes or having too many layers of indirection.
    But in reality this is just as bad, perhaps even worse.

    • @WralthChardiceVideo
      @WralthChardiceVideo 1 year ago +2

      yeah, ravioli code is (not "can be", it just *is*) as bad as spaghetti code

    • @dire_prism
      @dire_prism 1 year ago +4

      @@WralthChardiceVideo At least spaghetti you can follow... raviolis have dependencies on other raviolis, defined in yet other raviolis

    • @Tasarran
      @Tasarran 1 year ago

      Yes, you have to strike a balance; if you split your functions STRICTLY to only do one thing, you end up with a thousand one-to-five-line functions.

    • @youtubeenjoyer1743
      @youtubeenjoyer1743 6 months ago

      Although ravioli and spaghetti are very different, i will prefer spaghetti every time. It's much easier to apply and thicken your custom secret webscale sauce to high quality spaghetti, than it is to open, refill, and reseal individual ravioli.

    • @_programming_
      @_programming_ 1 month ago

      It is not "do only one thing", it is "only one reason to change". That should be the rule for scoping a function, a class, a module, everything. If the HTTP layer has to change (say, moving a query parameter or a body field into a header), our domain logic has nothing to do with that; there is no reason for the domain to change.

  • @nickbarton3191
    @nickbarton3191 1 year ago +30

    Thanks for making this video. It was bound to be controversial as everyone has different experiences.
    I've been programming 40 years, mostly embedded, huge aerospace/defense projects.
    I appreciate the likes of Bob Martin pushing back on really unreadable code produced in the 80s & 90s.
    Being a really young industry with young devs, we tend to oscillate between fads, clean-code being one of them. But we still need to make our code readable and if SOLID etc helps then great.
    I've seen so much embedded code prematurely optimised at the expense of readability that I go with readable first, measure performance and optimise after. Usually, poor performance is due to a few bottlenecks rather than something endemic.
    Small iterations and continuous deployment finds us out fast.

    • @CodyEngelCodes
      @CodyEngelCodes  1 year ago +5

      Thank you so much for your input, it's always great to hear perspectives from folks with more experience than myself and from other areas. I definitely agree that going with readable first tends to work out well, or at least I've never thought to myself "dang it, if only I optimized this code ahead of time!"

    • @Muskar2
      @Muskar2 1 year ago +2

      But there's a big difference between optimized code (e.g. SIMD and branchless) and non-pessimized code. The latter is Casey's made-up term for code that doesn't stand in the computer's way, which is all he advocates for. There's no optimization, there's no sacrificing readability or maintainability - arguably the contrary. It's putting relevant code and data next to where it's used, rather than scattered in tiny modules, and not pretending that the modules in your bike-renting website need to be so flexible that they can one day be reused in your nuclear power plant application. Because complexity isn't free. Not to the maintainers nor to the computer.

    • @nickbarton3191
      @nickbarton3191 1 year ago +2

      @@Muskar2 Next to premature optimisation stands premature foresight. The point is that code should be flexible enough to be refactored to adapt to new requirements. I've never seen this idea of "open for extension, closed for modification" work in practice.
      But if you've got methods with 100s of lines, they're almost certain to be inflexible. Small code can appear scattered, but adopting the single responsibility principle is really the only way to keep it clean. Arbitrary refactoring is what fragments code. Grouping related stuff in namespaces and libraries usually helps navigation. Yes, there is a cost of abstraction, but it rarely breaks the bank unless it's something specialised. Optimise what's necessary, but only after measurement.

    • @Muskar2
      @Muskar2 1 year ago +2

      ​@@nickbarton3191 The closest I've seen to the Open-Closed Principle working in practice is replacing an old functionality module with a new one - but that rarely happened in my career. Usually we were switching out the entire presentation framework and doing massive refactors to improve bottlenecks.
      I think you misunderstand what DOD looks like. You mostly have 100s of lines of code in the parts that are least likely to change (unless your entire application's purpose changes), and then in the upper layers are your flexible code where stuff that actually solves the same problem lives together. It's not grouping by code similarity, it's grouping by real world usage. So if you need to change something it's easy to see what it's currently doing and modify it.
      In the rarer case that you need to change some wrong assumptions you made in your foundation, you have the compiler to tell you all the places where you broke something. It's not as scary as it sounds.
      SRP is _not_ the only way - in fact it increases the code base by at least 10x, more like 20x I'd say - not to mention the scatter. It's an appealing idea, but I think it's a Sirenic one. What made me fall for it was the idea that everything can be reusable or swappable if the modules are generalized enough, which means it'll save time in the future. But it rarely ever does, and tiny generalized modules just aren't good enough for anything but smaller scale stuff (but larger scale are sometimes too scared to get rid of it until far beyond its use) - and it lures you into depending on a web of external frameworks to manage the complexity of the rest of your code.
      Instead you can have things like IMGUI where one line of code represents a GUI element and there's no pipeline or GUI state management, just functionality that lives and dies with that element.
      I'm not sure what you mean by "arbitrary refactoring". Refactoring has two useful purposes in my experience: improving messy code (ideally deleting a lot too) and fixing wrong/outdated assumptions. That's not arbitrary. That's based on reality.
      Cost of abstractions do break the bank in larger scales. It's why it took a team of developers roughly half a year to change the ordering of episodes of Black Mirror - and it's why so much software today operates in seconds or worse, rather than nano- or microseconds. Including our IDEs, compilers, OS - and especially web frameworks or "containers".
      Measuring before optimizing is certainly key, that much we agree on. But non-pessimization is not optimizing. It's just letting the CPU get close to its general-purpose capacity, instead of idling waiting for cold memory every few instructions, because it's 70% generic polymorphism.

    • @nickbarton3191
      @nickbarton3191 1 year ago +1

      @@Muskar2 To pick up on a couple of points for clarification.
      SRP is not necessarily about reuse; that's rarely the case. It's just to keep a component simple and about one concept and purpose.
      By arbitrary refactoring I mean to split a big method into smaller functions without respecting SRP. Yes the functions may be small but they're still mixing up concepts. If OOP, I look for subclasses or composite patterns to reduce the complexity.
      Hard to explain without examples but I hope you catch my meaning.

  • @bjorntrollowsky4279
    @bjorntrollowsky4279 1 year ago +38

    What is almost never mentioned is that it's not only the performance of this "clean" code advice that is a problem. There is also the big problem of abstractions that lead to a lot of accidental complexity; it's hard to read code with a lot of implicit behaviour. The first idea, polymorphism being better than switch statements, is already a good example. When you read code where everything is a goddamn interface and you never know exactly which class is the implementation just by reading the code in one place, but have to read and understand it in a dozen places, then the chance of you making a mistake is much higher, the time to understand what's going on is much higher, and so the "cost of ownership" will be much higher. Of course the idea looks great in a prepared 100-line example, but then you get to work with a codebase of a million lines of code written by dozens of engineers over a decade 😅
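For contrast, the switch-based style this thread keeps referring to can be sketched like this (a rough rendition of the shape example discussed in Casey's video; names and field conventions are illustrative). The whole set of cases is visible in one place, with no hidden implementing classes:

```cpp
#include <cassert>

// One enum, one plain struct, one switch: everything about "what shapes
// exist" and "how area is computed" sits in a single screenful of code.
enum ShapeType { kSquare, kRectangle, kTriangle, kCircle };

struct ShapeData {
    ShapeType type;
    double width;   // radius for circles
    double height;  // unused for squares and circles
};

constexpr double kPi = 3.14159265358979323846;

double area(const ShapeData& s) {
    switch (s.type) {
        case kSquare:    return s.width * s.width;
        case kRectangle: return s.width * s.height;
        case kTriangle:  return 0.5 * s.width * s.height;
        case kCircle:    return kPi * s.width * s.width;
    }
    return 0.0;
}
```

The trade-off both sides argue about: adding a new shape means touching this switch, but reading or profiling it requires visiting exactly one function.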

    • @sUmEgIaMbRuS
      @sUmEgIaMbRuS 1 year ago +5

      To be fair, a well-defined interface doesn't require you to understand the implementing class at all; that's the whole point. But people don't like putting in the work to actually maintain a well-architected, clean codebase, just like they don't like putting in the work for performance optimizations.

    • @newretrotech
      @newretrotech 1 year ago +4

      @@sUmEgIaMbRuS This is exactly what I find funny about people complaining about clean code. They think that just using interfaces, inheritance or whatever is "clean code". They completely disregard the "architecture" part of it. It's not just about creating interfaces; it's about making the interface enough for you to understand the high-level logic without having to jump to the concrete implementation.
      But hey, I guess that's too much to ask. It would surely be better to have a 300-line function with a switch statement, filled with flags to determine where to go next.

    • @jboss1073
      @jboss1073 1 year ago

      @@newretrotech Architecture is bullshit. Effect-Handler oriented programming has proven that.

    • @marcossidoruk8033
      @marcossidoruk8033 1 year ago +4

      ​@@sUmEgIaMbRuS Yes it does if you are part of the maintenance team, which was OP's point. If you just had to use the code you would just read the documentation and that's it; if you are reading through the code and trying to understand the implementation, you are doing it because you have to work with the codebase and you absolutely need to understand the implementation. One interface somewhere may be fine, yes, but when everything is an interface because interfaces are "cleaner" than if/else statements, the code just becomes conceptually unbearable, because you always need to think in terms of abstraction layers and implementation at the same time.

    • @marcossidoruk8033
      @marcossidoruk8033 1 year ago +4

      ​@@newretrotech And what I find funny about people who defend clean code is that they keep pretending a developer cares about abstraction at all when working with a codebase.
      The user of a library cares about abstraction; when it comes to a codebase that you maintain yourself, the ONLY thing you care about is the implementation. Why on earth people think adding layers of abstraction that obscure the actual implementation will somehow help you come up with a better implementation is just beyond me; that is literally contradictory.

  • @KennethBoneth
    @KennethBoneth 1 year ago +31

    To be clear, when he says “it’s one of their examples” he’s saying that because it’s pulled from the “Clean Code” book
    Also he asserts you cannot use AVX/SIMD in clean code because it requires knowledge of the internals and just generally an entirely different code architecture that clean coders will never arrive at.
    The fact that most clean coders have little or no experience implementing SIMD is weak proof that clean code isn't compatible with SIMD. Not saying there aren't clean coders who would make exceptions; there certainly are.
    In my humble experience, clean code bases I’ve worked in are not more maintainable than code bases that simply prioritize modularity and clear/sequential logical flow (as much as possible)

    • @CodyEngelCodes
      @CodyEngelCodes  1 year ago +2

      Yes, I understand where the example came from. And sure, clean code bases aren't going to be more maintainable than other maintainable code bases. Following clean code gives you a blueprint that can be followed though.

    • @ChronicTHX
      @ChronicTHX 1 year ago +3

      @@CodyEngelCodes Clean code gives rules that make programmers dumb when it comes to the architecture actually needed. In other words, it leads to an overkill codebase.

    • @duncanshania
      @duncanshania 1 year ago +1

      @@ChronicTHX This makes me think what we need more than anything is people willing to practice architecture design. There's nothing I've found yet that gives solid explanations as to why one pattern sucks compared to another on a particular problem. Everything's still touch and feel. Most of us who explored areas other than OOP/functional tend to think OOP is just a waste of time, but still... why? I've seen OOP typically cause pointless abstractions and coupling and discourage modularity, but when you ask why, you'll get a lot of different takes on it, none of which sounds super solid as of yet. The closest I've heard is that it better suits codebases with a defined feature set, but even that doesn't feel right. UI seems to fit a type-based tree decently enough, but *why*, I don't know; even then, UI still doesn't feel like a truly solved problem.

    • @seriouscat2231
      @seriouscat2231 4 months ago

      @duncanshania, the funny thing is that at the lowest level this is about abstractions versus metaphysics, in the sense that people wish they could think of abstractions as if they were metaphysics, as if there were essential abstractions, whereas in reality all abstractions are accidental. That does not mean you don't make them on purpose; it means that any abstraction you make, you might as well not. If abstractions were essential, then there would be a rule, discovered rather than designed or invented, that would let you know when to abstract and how.

  • @BrunoidGames
    @BrunoidGames 1 year ago +5

    Programmers vs Developers perspective.
    I work with z/OS assembly. We don't have a bit to waste; some routines are called more than 1 million times per minute, and computer cycles demand energy.
    Everything has its cost and the cost of today's extremely slow software is unimaginable.
    That's why I love retro game dev, I can play with z80, 6502, 68k, etc without a big company behind.

    • @markhathaway9456
      @markhathaway9456 1 year ago

      Aside from object references and caching problems, what else makes today's software "extremely slow"?

    • @BrunoidGames
      @BrunoidGames 1 year ago +2

      @@markhathaway9456 A few decades ago only a small bunch of nerds could program or own a computer; nowadays anyone can make software without going deep into the machine or the OS. We have a ton of abstraction layers, high-level languages getting closer to natural language, abundant computational power, and a "write code for human beings instead of machines" culture.
      Not necessarily a bad thing, it's another paradigm and programming is very popular now.
      I'm right now programming an MSX, z80 + 64 kB ram. That's fast software!

  • @blenderpanzi
    @blenderpanzi 1 year ago +64

    About short functions: I never did game dev, but they say the main render loop can be a very long function. I think it was John Carmack who said that he thinks it's better to keep it all in one long function (in that case) instead of pulling out lots of tiny functions. Reason: you want to read that function top to bottom to understand how your game updates state; you don't want to jump around to many other functions and get confused. And when separating out tiny functions, someone on your team will always try to call one of them where it shouldn't be called (they are supposed to only ever be called at that one place). Instead, he says it's better to divide that long function into sections with local scopes { } that have a comment on top describing what each part is doing. You can then collapse these blocks in your IDE, leaving only the comments, and you still get that nice overview of what the function does. That kind of overview is usually the argument for small functions.
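A minimal sketch of that scoped-section style (GameState and its fields are made up for illustration, not from any real engine): the frame reads top to bottom, and each block's locals cannot leak into the next.

```cpp
// Carmack-style single update function: one long body divided into
// brace-scoped sections, each with a comment header. Locals stay local
// to their block, and the whole frame reads top to bottom.
struct GameState {
    double player_x;
    double velocity_x;
    int frame;
};

void update_frame(GameState& s, double dt) {
    { // --- input ---
        double input_accel = 1.0;  // pretend we polled a controller here
        s.velocity_x += input_accel * dt;
    }
    { // --- physics ---
        double drag = 0.1;         // drag only exists inside this block
        s.velocity_x *= (1.0 - drag * dt);
        s.player_x += s.velocity_x * dt;
    }
    { // --- bookkeeping ---
        s.frame += 1;
    }
}
```

In an IDE each `{ ... }` block collapses to its comment line, giving the same outline a list of tiny functions would, without the call-site indirection.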

    • @duncanshania
      @duncanshania 1 year ago +3

      Sometimes a function calls a lot of functions itself and that's partially why it won't be small. There might be other reasons from time to time. If we are going to make a hard rule about it we should really get more in depth as to what exceptions there could be and why. The actual render loop generally is something that calls a bunch of stuff and sets up a lot of state, not that you wouldn't write utility functions it calls but still. Would be happy to give an example if asked.

    • @andriysimonov457
      @andriysimonov457 1 year ago

      I'm not in game dev and wonder if they write tests for their long functions. Writing and supporting good tests for such functions is nearly impossible.
      Clarification: I mean testing the long functions in general, not the rendering function specifically. So, my question is more in context of the item #3 of the "Clean" Code, Horrible Performance video.

    • @blenderpanzi
      @blenderpanzi 1 year ago +15

      ​@@andriysimonov457 How would you write tests for the main render function? Render out frames and compare them to saved screenshots? From what I hear, in game dev they don't write unit tests for their game and render logic; they do manual testing for those. They do unit testing if they have something that can basically be a library. But I dunno, I'm not doing game dev. XD

    • @absalomdraconis
      @absalomdraconis 1 year ago +6

      ​@@andriysimonov457 : As albatross said, trying to do unit testing for that kind of function is usually a fool's errand, not because of the function's complexity, but instead because the function is in such a complex problem space that it's not worth the bother of automating the tests. It's the sort of thing that only gets unit testing if you're trying to do physics calculations on massive data sets, essentially because at that point you aren't targeting a use-case that makes manual testing easy.

    • @Tasarran
      @Tasarran 1 year ago +5

      ​@bloody_albatross I'm primarily a game/graphics dev, and you're right; there's no way to write tests for whether the cloud renders the way you hoped it would, or whether this UI looks right when you change resolution

  • @wildfiler0
    @wildfiler0 1 year ago +3

    The funny thing is, almost every time I refactor code to make it cleaner, it runs faster, because it becomes easier to see possible performance problems.
    About game dev: once I saw the source code for a web-based adventure-battle game; there was a main function 3k lines long. And the performance was terrible, like 10-15 seconds per request. But people play the game anyway... And the new developer on the project said that fixing bugs was a fun adventure which usually led to adding a couple of new bugs.

    • @CodyEngelCodes
      @CodyEngelCodes  1 year ago +1

      That's an interesting take on clean code. It makes sense that understanding how code works is the first step to making it more performant.

  • @NoX-512
    @NoX-512 1 year ago +3

    OOP makes your code more complicated to read and maintain. There is no good reason to create small functions just for the sake of creating small functions that are called once. It’s easier to maintain your code if you don’t. The paradigm that each function should fit on your screen is more a rule of thumb. Most of the rest of “clean code” doctrines are recommendations that have been around forever.
    I think you misunderstand Casey. He is not advocating that we should super optimize everything and only write obscure code. He is advocating a code style that is easier to write and maintain while giving better performance at the same time. It’s a win-win.
    “Clean Code” is far from “clean” and OOP is one of the biggest mistakes in software development.

  • @DMitsukirules
    @DMitsukirules 1 year ago +6

    I feel like your running commentary in this video is a perfect illustration of why so much software is so painfully slow and unusable.

    • @CodyEngelCodes
      @CodyEngelCodes  1 year ago +2

      Can you provide examples of software you use in your daily life that is slow to the point that it results in pain and anguish?

    • @DMitsukirules
      @DMitsukirules 1 year ago +5

      @@CodyEngelCodes Photoshop and my web browser. Windows. Specifically systems like Windows defender. Something called Baloo. LLVM. Using all of this software is my livelihood. They are the things I interface with to make money to survive, and they cause me massive amounts of pain and anguish. Especially LLVM.

  • @MadaraUchihaSecondRikudo
    @MadaraUchihaSecondRikudo 1 year ago +11

    Regarding AVX at around 10:40: the reason you would not be using AVX or SIMD when writing clean code is that you need to group (or at least be able to group) things by *operation* rather than by *data*. AVX is essentially "do 8 or 16 add operations in one instruction"; if each add is a virtual call with a bunch of overhead, you would never be able to pre-determine which adds you want and then group them together to be performed at the same time.
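A sketch of what "grouping by operation" buys you (function and names are illustrative): all the adds live in one tight loop over plain arrays, which a compiler can unroll and auto-vectorize into AVX instructions. If each element were reached through a virtual call, no such batching would be possible.

```cpp
#include <cstddef>

// All the adds grouped into one operation over flat arrays. Compiled with
// optimizations (e.g. -O2 -mavx2), this loop typically becomes packed SIMD
// adds, 8 floats per AVX instruction.
void add_arrays(const float* a, const float* b, float* out, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
        out[i] = a[i] + b[i];
}
```

The same work expressed as `for each object: obj->add(...)` hides the loop body behind an indirect call per element, which defeats both auto-vectorization and hand-written intrinsics.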

    • @aoeu256
      @aoeu256 1 year ago

      Note that when you use matrix operations in NumPy it will use the vector instructions, and numba.jit(nopython=True) will disable the dynamic Python machinery

  • @tibimunteanu
    @tibimunteanu 1 year ago +7

    10:36 I don't think that you got what Casey was saying here. It's not that "if you use SIMD, it's bye bye clean code" or "SIMD = ugly code". It's that if you start with "clean code", the data is so poorly laid out in memory (because it's encapsulated in containers and hidden behind getters and virtual tables) that it doesn't give you the opportunity to make it SIMD. You would have to rewrite it some other way, and you would more likely end up with something far from "clean code" and more like table driven as in his example.
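The table-driven layout this comment mentions looks roughly like this (a sketch in the spirit of the video's listing, written from memory; names and field conventions are illustrative): the per-type behaviour collapses into a coefficient table, so the function is branch-free and the flat data is easy to process in batches.

```cpp
#include <cassert>

// Flat data plus a lookup table: no vtable, no switch, no getters.
enum FlatShapeType { kSquare, kRectangle, kTriangle, kCircle, kShapeCount };

struct FlatShape {
    FlatShapeType type;
    double width;   // radius for circles
    double height;  // equals width for squares and circles by convention
};

constexpr double kPi = 3.14159265358979323846;

// Per-type area coefficient, indexed by FlatShapeType.
constexpr double kCoeff[kShapeCount] = {1.0, 1.0, 0.5, kPi};

double area(const FlatShape& s) {
    return kCoeff[s.type] * s.width * s.height;
}
```

This is the layout you could never refactor your way into starting from encapsulated objects, because it depends on seeing all the shape types' data side by side.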

    • @duncanshania
      @duncanshania 1 year ago

      Mostly yes, because it drastically encourages lots of nested allocation of small objects being pointed to. And people are generally grouping data not based on actual processing relevance but based on what they think are real-world abstractions, though I'd argue that's not really the case.

  • @MrAbrazildo
    @MrAbrazildo 1 year ago +7

    4:00, it's faster to put those values in an array, or even a single variable. And it's less lines of code too. Thus, to me it's even cleaner. 5:54, this is what I'm talking about. I would make it even 1 line. It's cleaner by size, meaning you have fewer instructions to read.
    4:52, he has a point, a very strong one. We can't work backwards, unless there's a clear issue.
    10:15, those faster maths take advantage of arrays/list/containers, all kinds of contiguous memory. So it can't be achieved through independent objects on the heap. I use OO, but I always put objects in containers.
    12:10, sometimes it needs to be made that way. But in C/C++, at least macros can deal with such cases, nicely and elegantly.
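The "objects in containers" point can be sketched like this (Particle and step_all are illustrative): objects stored by value in a contiguous vector are walked linearly, so the prefetcher keeps the cache warm, whereas a vector of pointers to individually heap-allocated objects touches scattered cache lines.

```cpp
#include <vector>

struct Particle {
    double x;
    double vx;
};

// Iterating a std::vector<Particle> walks memory linearly: several
// Particles fit in one cache line, and the next line is prefetched.
// A std::vector<Particle*> into the heap would chase a pointer per element.
double step_all(std::vector<Particle>& ps, double dt) {
    double sum = 0.0;
    for (Particle& p : ps) {
        p.x += p.vx * dt;
        sum += p.x;
    }
    return sum;
}
```

Nothing about this is incompatible with OO style; the only rule is that the objects live by value in contiguous storage.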

  • @klikkolee
    @klikkolee 1 year ago +5

    I saw the video you're reacting to. I'd say it is illustrating important concepts in a bad-faith way that distracts from the knowledge. Ultimately, there is often going to be a tradeoff between maintainability and performance. When computers get faster, you can write more of the same difficult code to get more done, or you can write easier code that does the same amount -- or something on the spectrum in between. It is frequently just fine to be far on the "easy" side of the spectrum. The presenter is probably so accustomed to needing to be far on the other end of the performance-maintainability spectrum that the existence of that spectrum is offensive to them. It's a shame.
    Something the video also illustrates: guidelines like Clean Code should be treated as guidelines, not straitjackets. Treating Clean Code or other code approaches as rigid rules often leads to code that is poor in both performance and readability. But the guidelines do illustrate useful patterns and concepts for maintainable code. Making good use of Clean Code and other guidelines requires critical thinking. And so does deciding when to start and stop optimizing.

  • @tibimunteanu
    @tibimunteanu 1 year ago +25

    15:40 Just saying that "clean code" = "faster delivery time" does not make it true. I worked at many companies which required "clean code" from their programmers... and most of the code was just awful. Adding really simple features on top of the "easy to maintain clean code" was really hard and frustrating; they had to be squeezed into overly complex systems, took way more time to get done, and there were always large lists of bugs to deal with. Sometimes projects just have to be rewritten because they become so convoluted that there is no way to add some specific new feature. I found myself mostly trying to navigate and understand complex layers of abstraction that had nothing to do with the problem at hand. This is never the case when you write the simplest code and avoid those clean code doctrines. I think you should try that coding style before commenting on whether you think it's good or bad.

    • @duncanshania
      @duncanshania 1 year ago +1

      We learn this through trial and error, though, and there are no universal rules as of yet, so usually I teach my pupils to just try as much variety as they can and not be afraid to mess it up, but learn from it. The reasons for using one pattern over another aren't clearly proven or defined, and we are in this toxic situation of everyone doing things one or two ways only and not really exploring, if anything putting down others for trying something different. That's the frustrating part. I do game engines and game systems engineering, but I'd be curious to know what you do, roughly; I'm wondering if most other fields write lots of small programs and don't need to maintain them. I keep hearing that as a deflective excuse, but when even Uncle Bob says that software needs to be clean to help maintainability, I have to question his experience if all I ever see him talk about is either functional programming or OOP. And it's not like I'm a ton better, but I've at least tried a couple of other styles besides those, and read a fair bit, in the context of solving various game logic problems. And all these years of experience just don't line up with the doctrine at all. Also, for clarity: even in my field of choice there are still a lot of hardcore OO enthusiasts, despite this field seemingly being less dogma'd into it than most.

    • @tibimunteanu
      @tibimunteanu 1 year ago +4

      @@duncanshania I do game engine programming and gameplay programming. I've also been working in webdev for almost 15 years. I always try to avoid the "clean code" propaganda even when I do webdev. It is harder to do, since I don't work alone and almost everyone tends to over-architect and over-complicate things, but in that context I at least try not to trash the memory and not give the CPU unnecessary processing to do. And it works out. You don't have to monitor cache misses in every context or do SIMD by hand, but just remembering that the computer is not some abstract thing does pay off in terms of performance, maintainability and readability (you always have less code to read, it only deals with the problem at hand, and you don't have to understand layers of abstraction and how they come together). Another important point is that the mindset of keeping code as short as possible and as focused on the problem as possible is what you iterate upon: you solve the problem with what you think the shortest path is, and then you come up with even shorter paths after you gain some more insight and have more context and awareness of the whole set of actions and edge cases that describe your problem. Sometimes you learn tricks to make some paths run faster by minimizing the CPU workload or laying related data close together, and sometimes you are able to apply these techniques to other problems. I don't really see this kind of thing happening in over-architected OOP projects, because the focus tends to be pointed at something completely different. The energy goes into ways of adding abstractions on top of abstractions, and it's a slippery slope. The folks who do this swear by it and say that it makes the code more readable and maintainable somehow... but I feel like they just want to believe that it's true, and since they don't have any other style to compare to, they take it for granted.
      How can more files containing more lines of code consisting of more instructions, all tangled in subjective ideas of abstractions, be more maintainable?

    • @duncanshania
      @duncanshania 1 year ago

      @@tibimunteanu Honestly, that's exactly how I feel, but immensely better articulated than I've managed. I tend to find more peace of mind talking to AAA devs than indies, since even if they aren't of the same mindset, so far they generally seem way more interested in improvement, and it's at least more possible to have an honest debate. But it could just be this tiny corner I managed to find. I've never gotten into the AAA side of the industry and massively struggle with ADHD, but when I see projects like Overwatch take the bold approach they have, it's vastly heartwarming. I'll continue to be inspired and work on my humble open-source sandbox project, reaching for 10x+ better runtimes and ease of development than Minecraft and 7 Days to Die. It's honestly my dream to see the industry learn and evolve and share really useful techniques that could take us so much further. It's about time we do more than just visual polish on roughly the same count of NPCs in mostly static worlds without much of a persistent simulation. And it's nice to see the Matrix Awakens demo and how inspiring it looked to folks, even if they did slightly cheat and it's geared toward the UE community of all things. Maybe there is hope.

    • @electricz3045
      @electricz3045 1 year ago

      It isn't even a true comparison you've written. You end up overwriting the variable string instead of comparing the type. You can either compare the type (then "clean code" and "faster delivery time" would be true) or compare the values.

    • @duncanshania
      @duncanshania 1 year ago

      @@electricz3045 I'm a bit lost on context for this one; strings typically aren't subtyped. And if you need a fast general string compare you'd use something in spirit a bit close to UE's FName class, though I'd recommend redoing it yourself since their version sucks. It doesn't do well with threading, but the trick is that you use some static map to store the actual strings and internally use a mere 32-bit or 16-bit ID per Name instance, so that compares are instant, which is situationally useful; looking up the string itself is also not too expensive. A good usual first step to faster string compares is checking lengths, whether it's via std::string or your own custom string abstraction. Also, frankly, if you're worrying about runtime it makes a lot of sense to get used to making your own custom data containers for stuff like this; it's very much not hard. The main thing that'll get in the way might be the language itself, if it's not as data-flexible as C++.
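A minimal, deliberately not-thread-safe sketch of that interned-name idea (inspired by the comment's description of UE's FName, not its actual API; the class and its members are made up): each distinct string is stored once in a table, a Name is just a 32-bit index into it, and equality is therefore a single integer compare.

```cpp
#include <cstdint>
#include <string>
#include <unordered_map>
#include <vector>

// Interned string: constructing a Name looks the string up (or adds it)
// once; afterwards, comparing two Names is an integer compare.
class Name {
public:
    explicit Name(const std::string& s) {
        auto it = index().find(s);
        if (it != index().end()) {
            id_ = it->second;
            return;
        }
        id_ = static_cast<std::uint32_t>(table().size());
        table().push_back(s);
        index().emplace(s, id_);
    }

    bool operator==(const Name& other) const { return id_ == other.id_; }

    // Recovering the text is a single array index.
    const std::string& str() const { return table()[id_]; }

private:
    // Static storage shared by all Names (not thread-safe, as noted above).
    static std::vector<std::string>& table() {
        static std::vector<std::string> t;
        return t;
    }
    static std::unordered_map<std::string, std::uint32_t>& index() {
        static std::unordered_map<std::string, std::uint32_t> m;
        return m;
    }

    std::uint32_t id_;
};
```

The length-check shortcut the comment mentions is complementary: it speeds up one-off compares, while interning pays off when the same names are compared many times.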

  • @torshepherd9334
    @torshepherd9334 Год назад +19

    I think his issue is less with clean code and more with specifically
    1. Abstractions which destroy the compiler's ability to unroll loops/ auto vectorize/ etc.
    2. Using inheritance as the go-to for dynamic polymorphism. Modern programming languages have sum types which involve significantly fewer indirections
    If we keep doing these things, not only will our code run slower, but we won't ever get to take advantage of newer hardware and compiler optimizations

    • @CodyEngelCodes
      @CodyEngelCodes  Год назад +3

      It does seem like he takes issue with clean code based on the video. I definitely agree with most of his points, though I don't think speaking in absolutes is productive.

    • @torshepherd9334
      @torshepherd9334 Год назад +2

      @Cody Engel I think that's because it's so nebulously defined. As you said yourself, the switch statement doesn't look that unclean - it's just so subjective, whereas saying "use inheritance over explicitly-enumerated sum types" is a rather concrete thing which adds indirection

    • @llothar68
      @llothar68 Год назад +1

      Compiler optimisations are almost worthless unless you do very specific things. After link-time optimisation (inlining functions across compile modules - something that was implemented for more than 30 years with larger source code files), nothing was worth even mentioning. 10% in 10 years is how compiler optimizations help us.

    • @duncanshania
      @duncanshania Год назад +1

      @@llothar68 To take a butchered quote from someone else: there's only a small percentage of the program that a compiler can actually solve, because the problem at hand is often not clearly defined in code properly. Even with micro-opts this can be the case. But the main thing is... data layouts. Using the ideal data layout for the problem at hand is rather important and something that's usually vastly neglected. Things like batching, existence-based processing, not creating overly fat data structures, not pointlessly allocating a bunch of tiny things; it all adds up, and compilers literally can't optimize this kind of thing. It's more a conceptual thing.

    • @torshepherd9334
      @torshepherd9334 Год назад

      @Lothar Scholz the main way in which compiler optimizations help you is in loop unrolling and loop vectorization - these can only happen reliably if your data is in struct-of-arrays form, but clean code definitely nudges people towards AoS, which kills that

  • @jamesclark2663
    @jamesclark2663 Год назад +2

    So two things I find odd about the examples he gave. 1) Who cares how much faster it is to find the area of your shape? If you are simulating the internal nuclear interactions of the sun at the per-particle level, then sure, do whatever you need to improve the performance of that hot path. But who the hell cares if the 'Start Simulation' button used a virtual method call to get into that simulation? 2) As a point of principle, I wish people describing OOP or just clean code in general would stop using 'Shapes', 'Vehicles', 'Animals', and 'Coffee' as the basic examples of how to write clean code. They are often simplified to the point of being absolutely meaningless in practical terms, and it seems too many people latch on to them as concrete examples (no pun intended) of how to *actually* conceptualize the model data in their programs, rather than as examples of how the syntax mechanics work. And in this case Molly Rocket has latched on quite literally to them as de-facto ways to write all code, rather than understanding the types of problems they can solve in some situations when used... well... situationally.

  • @ddaypunk
    @ddaypunk Год назад +13

    RE: DRY - I've learned AHA recently and WET. So Avoid Hasty Abstractions, which I think is what you are getting at. Sometimes it is really hard or even a waste of time/brain power to abstract something. So Write Everything Twice and wait for a better abstraction. I've also learned to not apply DRY to tests in most cases. It is better to have the full context within a test most of the time even at the expense of repetition.

    • @CodyEngelCodes
      @CodyEngelCodes  Год назад +3

      Yeah AHA is probably my favorite one since it's the vaguest, yet still gives you all of the guidance that DRY and WET provide. Tests can be really quite tricky too, sometimes it's easy to not repeat yourself with the setup but other times it just isn't possible and you're stuck with quite a bit of copy and paste. The plus side though is if the test becomes outdated, you just delete it and don't need to worry about impacting other tests.

    • @llothar68
      @llothar68 Год назад

      Abstraction leads to logical spaghetti code and often makes things more complicated. It's also a waste if you only use it in two places; i don't think about it until it hits three repetitions.

    • @herrpez
      @herrpez Год назад +1

      On my last fresh project, an internal tool, I just repeated code over and over until it became abundantly clear that I had an area where I could make an easy and useful abstraction. Was any of it "Clean Code"? No, but it was clean code. People are too quick to jump on buzzword bandwagons for no good reason.

    • @ddaypunk
      @ddaypunk Год назад +1

      @@herrpez I mean I’d hardly call it a buzzword. The book is 15 years old and the guy that wrote it has roughly 45 years experience in the field. It’s still a great read if you read it less like law and more like general interest. “Internet of things” now there is a buzzword! 😂

    • @dtkedtyjrtyj
      @dtkedtyjrtyj Год назад +1

      The biggest problem with DRY is that people misunderstand the acronym too easily, and it is easy to misunderstand.
      It's not about not writing the same code twice. That _looks_ like repeated code, but isn't necessarily the kind of repetition meant.
      It's more about what should happen to the code if one thing changes. For instance, a stock calculation in e-commerce: you don't want to duplicate that code, because if it changes, it should change everywhere.
      Unfortunately it is easy to over apply: "Oh, I'm testing two booleans here, and two here: let's abstract that."
      So DRY _needs_ to be moderated by AHA and WET, but they don't replace the need for DRY.
      Like so many other things; you have to learn the theory and reasons behind it, only then can you use the shorthand to understand it. Too often people just read the short description, invent some meaning that isn't there, apply their misunderstanding, and conclude it is bad advice. Other examples: TDD, BDD, Agile, SCRUM, and the original video.

  • @RAndrewNeal
    @RAndrewNeal Год назад +5

    The performance budget stacks up. Platform agnostic developers love Electron, while the rest of us hate it because it's slow and bloated. It's a shame that programmers use hardware improvements as an excuse to be lazy with optimization.
    Even if your program doesn't need to be optimized to be performant by itself, it will never run by itself. The kernel schedules CPU time for each program; and the heavier yours is, the more it will affect the system as a whole.
    And for programmers who are close to the hardware (embedded system developers, kernel developers, driver developers, etc. ad nauseam), clean code principles will never fly. Whether it be slow clock speeds in embedded systems or real-time requirements of drivers, their software has to be as performant and lightweight as they can make it. Every program should be as lightweight as is reasonable. Will I want to use the photo viewer that takes a couple hundred milliseconds to load each picture, or the one that only takes a couple? Because of lightweight software, my 15-or-so year old computer can be up and running on a cold boot in 10 seconds, ready to launch anything I want immediately (credit to SSD speeds as well). My Ubuntu installation, by contrast, took more than a minute to boot up, let alone the atrocity that is Windows. This computer runs bleeding edge versions of everything; nothing is kept back to achieve that performance. Compulsory "I use Arch, btw".

    • @duncanshania
      @duncanshania Год назад +1

      You'd be surprised how many engine developers still swear by OOP, with only a minority of us actually constantly getting annoyed with it. Not counting general game systems. Heck, i'd argue OS devs and emulator devs aren't doing anywhere near as well as they could be, both in making more maintainable code and in making it better at runtime and generally more flexible.
      See, people like me, or in the extreme case Mike Acton, who i think is leading Unity DOTS, are actually the minority, but gradually gaining momentum in challenging how things are commonly done. And most all the good talent is hiding in AAA, not indie, which is honestly kinda sad. Heck, for example, even Box2D, which is developed by a Blizzard employee, has a terrible runtime, mostly because it's too OO and exposes the physics bodies directly to the end user the typical way; it's not sufficiently data oriented. PhysX has similar issues. UE is swimming in absolutely terrible runtimes because its overall design is centered on the Actor pattern, which just doesn't scale. Visually it can do decent, but otherwise it can't. And as far as i've seen they don't even have an easy way for end users to display meshes outright without the AActor abstraction getting in the way, causing tons of indirection, tons of memory consumption, and tons of minor things to deallocate when you destroy them. It's just all around gross. But i will suck the Unigine d*ck since they do better about designing a sane API, despite them still using virtual methods; at least it's less strict on usage and easily lets you build your own abstractions.

    • @RAndrewNeal
      @RAndrewNeal Год назад

      @@duncanshania I'm surprised to hear that UE has that big a problem, given that it's probably the most popular game engine. But I'm not a game developer, so I have little experience with game engines.

    • @duncanshania
      @duncanshania Год назад +1

      @@RAndrewNeal I don't mean this in a snarky way, but sadly even in this case popular doesn't mean ideal or good. The one and only thing I can appreciate is that they have the only somewhat public-ish next-gen rendering engine (Nanite). There's been a bunch of other rendering engines that looked potentially awesome, but as far as i know none of them are publicly accessible. It and Unity both have somewhat terrible APIs, but that's somewhat the tradeoff for trying to cater to folks within the OO domains, and in the case of UE it tries really hard to make it easier for artists and level editors to contribute to prototyping code via blueprints. This comes at the cost of not just a forced pattern which doesn't scale well; generally, building projects with the engine is a nightmare. Unigine manages to be a somewhat complete game engine as well but isn't plagued by these issues. And really, if you get someone like me to slap you something custom together, assuming that the level editor is figured out or is not necessary, the dev experience when it comes to making actual games of scale will be way better for any project lasting more than 6 months. Programming orientations and patterns actually are this big a deal; i try to make my own engines more C-style-ish such that the end user isn't forced to tolerate bloat, unlike how UE forces AActor on everything to do literally anything. They'll be free to use whatever game programming pattern they choose. Also, while i put in a lot of effort doing a lot of things custom, any average programmer could technically slap a custom engine together using third-party libs, say for example this mix: Bullet3D, Ogre3D, Entt, and some networking lib. Roughly within a day or less. Engine dev exists, though, because the game project in question might need a little help in some way; maybe it needs more runtime in some particular aspect, or maybe it needs more flexibility these libs don't provide.
      OOP-style engines i find are rather hard to keep lean, modular and portable. But it's totally possible; the effect on the end user, though, is that it's generally more frustrating, the reason being that going overboard in that direction often encourages pointless coupling. Also, i'm somewhat a fanboy of C++ templates and having the freedom to allocate data EXACTLY how i want; for example, my 2D physics engine manages to live in the .bss segment of the executable since there's not a single malloc/new call anywhere in it. It's specialized to be AABB with a spatial hash table that totals somewhere around a GB or less, i can't remember. The memory never grows. After multithreading it, it could handle games with hundreds of thousands of moving physics bodies without breaking a sweat. Making things like this, where you can make some assumptions about the project's limitations, makes such a big difference. std containers are too general-purpose most times, and it hardly takes any time to make custom variants; aside from that, most of us have made enough custom alternatives that we already have a large collection to choose from and nothing to really worry about. Folks like me already have the experience necessary to take projects to insane heights, and other users won't have to tolerate much headache at all utilizing us, assuming they are open to trying something different. For example, competitive games, RTS games, and sandbox games could benefit from taking some techniques that Overwatch uses, even though it's mostly the ECS backend which matters. Maybe a week or less of practice and the game programmers will figure out the rules and ways of doing things just fine. It's the job of the lead game systems engineer to decide on the game programming patterns to be used, not the decision of some arbitrary engine constraints. If there are such constraints getting in the way, they should be thrown away and replaced.

    • @RAndrewNeal
      @RAndrewNeal Год назад

      @@duncanshania Sounds like you've been concocting an engine of your own. I'm a plain C guy myself, but my area is in electronics, making embedded programming the most useful to me. Though I do write some for Linux, but I never get into any big projects, as it's easy to get engrossed in programming when I need to keep my focus on hardware development (circuit design/electronics engineering). When dealing with microcontrollers, I not only have to keep things lean for the

  • @JohnAlamina
    @JohnAlamina Год назад +2

    I am not sure if you are trying to deconstruct what this guy was saying. From the word go, it is clear he was just focused on performance, which is a trade-off against readability. I don't believe the "CLEAN coders" are advocating performance over readability but the inverse. So his argument is valid if you are focused on performance (for the most part), and not valid if you are focused on readability and maintainability. It is hard to have both, but if anyone knows how this can happen, I would be glad to know about it.

  • @ohwow2074
    @ohwow2074 Год назад +4

    Bro he's right but only partially. He doesn't mention the biggest crimes of the developers. One of them being things like Electron. Like that thing is so slow. VS Code is super laggy compared to native software like other IDEs. We're not only writing slow code, we're also using some horrible technologies to make our work easier.

  • @murilorodelli
    @murilorodelli Год назад +23

    Clean code is a set of rules to allow people that don't know how to code, or don't know the codebase, to add (poorly designed) features at the expense of performance

    • @CodyEngelCodes
      @CodyEngelCodes  Год назад +4

      You can certainly write some clean code that's more dirty than clean.

  • @UNgineering
    @UNgineering 8 месяцев назад +1

    I can sum up the entire "clean code horrible performance": "if you write your entire code in assembly and embed all of the required data, it will be 100x more performant".
    Thank you for pointing out the optimizations! Any good compiler and assembler will optimize your "clean code" to be almost as performant as his "manual" one, except it'll be 100x more maintainable.
    It's definitely a good idea to use the clean code principles as principles, and not as commandments.

    • @CodyEngelCodes
      @CodyEngelCodes  8 месяцев назад +1

      Yep, I certainly agree that we should be optimizing our code where possible but it shouldn't be at the cost of readability and maintainability (unless highest performance possible is the top priority).

    • @zhulikkulik
      @zhulikkulik 8 месяцев назад +2

      That's exactly the opposite of what Casey said.
      Hardware is so much more performant that if you just put a bit more thought into your program and make it more specific and optimal (not optimized, just optimal) - you won't even need to optimize anything, it will already be 100x faster.
      Of course you could maybe get it even 100x faster than that, but you don't need to because at this point your program is already fast enough.
      Days of handwriting asm code are long gone.

  • @asddsa10001
    @asddsa10001 Год назад +6

    In my opinion, just write code that is easy to read, maintainable, and easy for other people to work with. I usually have a container for my data, plain old data like a struct in C, and a bunch of functions operating on it. I've found it a very reasonable way of writing software.

  • @OriginalKKB
    @OriginalKKB Год назад +4

    Backend dev for a large e-commerce company here. The most important thing in my field is maintainability. Performance and memory footprint are important because they save money on the server side; however, if the super fast, sleek performance wonder is so hard to understand that someone not familiar with a project (turnover rates are high and codebases have continuously evolved over years) needs 2 weeks just to understand how it works, that costs money too, especially if there is a problem that needs to be solved asap. It's always a tradeoff.
    I think it is good to have guidelines everyone agrees on but if there is good reason to not follow them it should be possible.

  • @leonardotemperanza5824
    @leonardotemperanza5824 3 месяца назад +1

    10:11
    Bro just read the first 4 lines of the Wikipedia page and immediately assumed you can easily SIMD-ize OOP code. Doing SIMD usually implies working with chunks of data at a time, so that you can perform 4 or 8 additions in one cycle. If you're doing one add per method call in your encapsulated object to adhere to clean code principles, you're throwing performance out the window.
    13:21
    It doesn't work like that. Most codebases that completely ignore performance in the design stage are beyond repair and require a full rewrite to be optimized (e.g. the LLVM codebase). It's not as easy as "ok, let's just find the hotspots in this horrible codebase and nullify their execution time"

  • @DevMeloy
    @DevMeloy Год назад +1

    I work on a large legacy application that has not instituted the clean code methodology, and it has been a nightmare to work in. Although I don't subscribe to every tenet of Dr. Martin, I absolutely see the benefit of DRY and single use (although this can be subjective). Even if there is a slight performance hit, being readable and more maintainable far outweighs the milliseconds my customers might see.

    • @CodyEngelCodes
      @CodyEngelCodes  Год назад

      Exactly, performance isn't everything. If you can't add new functionality in a reasonable amount of time then it doesn't really matter, your product will quickly become obsolete.

  • @Pengochan
    @Pengochan Год назад +2

    This is completely missing the point of the original video, which is about how "clean code" rules may affect performance.
    Neither "I don't use that rule", nor "I just trust that hardware will be fast enough" (yours or that of your customers?) is an argument in that discussion (the latter just documents a horrible attitude), and arguments about code maintainability only come into play when considering performance vs. maintainability (which *is* an important aspect).
    And of course he uses the example to make his point.
    Ridiculing the example itself *also* is beside the point, because it is *an example.*
    The thing is, that there are a lot of applications now, where performance is mostly irrelevant. If the weather app on your phone takes one microsecond or 100 to send out that request for an update to some server which then takes half a second to send an answer is completely irrelevant, but for some game engine or a number crunching simulation that performance impact makes a huge difference, enabling it to handle e.g. more details or larger simulations (maybe for the weather forecast), or just consider to cut the costs of rented/bought hardware for some server farm by a factor of 10.
    About applying AVX to clean code ... try to thread that through polymorphism, and see how "clean" that code looks afterwards.
    The one valid point in *this* video is, that maintainability *is* important, and that the performance benefit affects different software products (even different parts of one software) differently.
    A lot of what was said in the very end should've gone into this video earlier, because about 90% of this video sounds like a rant about the other one before it comes round to admit there are some valid points. I btw. totally agree, that these absolute statements putting performance above anything else for any software are a bit misleading.

  • @core36
    @core36 Год назад +9

    Life is about balance. Programming is about balance. You can combine both approaches. Something getting called all the time in a game loop? Use fast code. Need to call a function once on initialization? Make it clean because that couple of milliseconds don’t affect much.

  • @bp-ob8ic
    @bp-ob8ic Год назад +7

    In my experience, layout is more important than performance.
    I worked at a big-box retailer where floor associates (FAs) carried an Android-based device intended to locate products. If a shelf was empty, the FA could determine whether there were more on hand, and where they were located in top-stock.
    Scanning the barcode on the empty shelf location gave the FA a lot of detail about the product, but it took three click events to get to the location of the product.
    If the shelf was empty, it took two more click events to see where it was in top-stock (not that it would actually be there, but that is not a software issue).
    Bottom line: Response time of your code is not the biggest problem users have with your code.

    • @CodyEngelCodes
      @CodyEngelCodes  Год назад +3

      That's a great point, your code can perform well but it doesn't matter if the feature doesn't solve the problem for the user.

    • @jasonwhite5578
      @jasonwhite5578 Год назад +2

      @@CodyEngelCodes Good lord, it doesn't justify ignoring the performance of your application though. Performance is the type of thing, where if your user thinks your application is slow, he/she won't report it, they'll just move on to the faster one.

    • @zacharychristy8928
      @zacharychristy8928 Год назад

      @@jasonwhite5578 Nobody said that though. Clean code doesn't even argue that you outright ignore performance. All anyone is saying is that a religious adherence to performance is just as pointless and stupid as a religious adherence to "cleanliness". Solve the problem, don't scream about performance.

  • @robertmuller1523
    @robertmuller1523 Год назад +5

    For clean code dogmatists, maintainability is not measured by simplicity and comprehensibility, but rather by unit test coverage. Accordingly, the clean code rules are not actually aimed at simplicity and comprehensibility, but rather at maximum decoupling which is considered a prerequisite for maximum unit test coverage.

    • @duncanshania
      @duncanshania Год назад +4

      Extremely little of my work uses OOP yet it's vastly more modular and decoupled for it. If anything a lot of the time OOP forces coupling.

    • @markhathaway9456
      @markhathaway9456 Год назад +1

      @@duncanshania Just curious, how does OOP force coupling?

    • @absalomdraconis
      @absalomdraconis Год назад +3

      ​@@markhathaway9456 : A lot of OO usage loses track of what an object is (a complete, self-contained implementation of _something),_ and instead focusses on the toolkit (polymorphism, data hiding, the various patterns), resulting in the OO toolkit getting used to create spaghetti code that gets scattered across multiple files, some of which aren't obviously relevant because they aren't _directly_ used in the code that you're trying to understand. So, the OO isn't really forcing the coupling, but it's making it easier to increase the complexity _of_ the coupling.

  • @contentnation
    @contentnation Год назад +3

    Just a few cents from my experience in web/embedded/extreme performance development. One example of why performance matters that I regularly use to explain why to optimize: assume your web server takes 1ms more to answer a request - "not much" and barely worth optimizing... But that means you need 1 more CPU core per 1000 requests/s. Actually more like 2, because of redundancy and peaks. Now in my business my alarms go off when we have fewer than 1500 requests per second; in peaks they go over 100,000 requests/s, and the record was something like 150k/s. Now if we "waste" 1ms, we need 150-250 CPU cores ADDITIONALLY, just for having non-optimized code. So every microsecond counts (in my line of work). Another example is real-time systems, like embedded stuff in a car. Assuming you go down the German Autobahn at 200km/h (2187 football fields per hour in imperial units), that is ~55m/s (~66 fridge widths per second), or a car length every 1/10th of a second. Having high latency in automatic emergency systems can cause violent deformations of previously rigid objects.

  • @batchrocketproject4720
    @batchrocketproject4720 Год назад +4

    If you're a subordinate in a team, it doesn't matter what approach you take - an alternative will be prompted by someone who feels their job is to 'improve' things. You might even end up back where you started three weeks later after countless rewrites.

  • @SufianBabri
    @SufianBabri Год назад +4

    Clean code is indeed about making code easier to maintain. In performance-critical programs such as games and embedded systems, we can let go of certain rules.
    What we can't let go of is to name things properly (i.e. not naming a function "nuke" or "holyHandGrenade" when all the function does is clear the data). How many people hide behind the "performance benefits" and then write sh*tty code that needs no obfuscation tool at all!
    So don't be lazy and learn about clean code. Yes, like Cody says, choose which bits are applicable to your work/project. There's no silver bullet.

  • @jonasmehr7622
    @jonasmehr7622 Год назад +2

    I think his performance comparisons are misleading at best. Almost always, the performance bottlenecks are I/O operations and things like allocating memory. His code example is small enough that all of the variables can sit neatly in L1 cache, so he isn't impacted by that at all. That would be different for almost any real system. In the wild, the performance overhead from polymorphism etc. will likely have a much lower impact. If you have to wait for an HTTP request to return, you're not gonna care about a few additional CPU cycles.
    Of course there are situations where performance is critical. But in most situations, fast enough is fast enough and other criteria like maintainability, testability, correctness, time and cost to ship a feature etc. are equally or more important than performance.
    A good software engineer will know what hardware they are targeting, how often that code will run and how time critical it is and make tradeoffs accordingly.

  • @blenderpanzi
    @blenderpanzi Год назад +8

    I remember a time when Microsoft Word started in about a second or two and every part of its UI was snappy. On a FAT32 HDD. And it didn't really lack any features that a somewhat normal user needs. That was Office 2000, about 24 years ago, and even back then people complained about how much slower it was than Office 97. (But there are a few useful features in Office 2000 that weren't in Office 97, e.g. nested tables.) There is so much stuff that is so slow now!

    • @dmitripogosian5084
      @dmitripogosian5084 Год назад +1

      My wife wrote her first book in 1992 on a 386 laptop with 5 MB (that is mega) RAM and a 20 MB (again, mega) hard drive using Word2, and the latest one with modern Word 365 for Mac. She basically says there is no fundamental difference in her experience/workflow, nor any noticeable change in speed. Still not instantaneous, as we all thought GUIs would be within 10 years, back in 1992.

    • @blenderpanzi
      @blenderpanzi Год назад

      @@dmitripogosian5084 I'm surprised that she doesn't say it got slower.

    • @dmitripogosian5084
      @dmitripogosian5084 Год назад

      @@blenderpanzi Actually, I asked her. And she says her smoothest experience (subjectively, by memory) was on an IBM ThinkPad laptop running XP and Office either 2000 or 2003, which she had somewhere between 2005 and 2011 (she switched to Mac in 2011). Then she used Office 2003 for Mac for quite some time after 2011 until forced to upgrade.
      The original Word2 on a 20 MHz 386 served her until 1998, and the main limiting factor was actually not the 5MB RAM or the CPU, but the 20MB hard drive. I regretted many times that we cheaped out in 1992 and did not get her a 40 MB drive :)

    • @blenderpanzi
      @blenderpanzi Год назад

      @@dmitripogosian5084 It has to be Office 2000. I remember when Office 2003 came out it was absolutely unusable on the same machine on which 2000 ran smoothly. Used so much more RAM, took so much longer to start, every click in the ribbons took ages to respond.

  • @Exilum
    @Exilum Год назад +2

    My main issue with the whole video you're reacting to (not yours) is that it's all about optimization. Code is a tool. How fast your code should be is not "as fast as it could be"; the answer is "just slightly faster than you need it to be". If you are running a program once every second, it wouldn't matter if it took a single cycle or an entire 10ms to run. Meanwhile, if your program draws UI on screen, you would reasonably expect it to at least match the framerate of your screen, which we'll, for the sake of argument, set at 500Hz, the highest refresh rate currently available. That would mean your draw calls should be faster than 2ms, logic, input handling, etc. included.
    Whether or not your code is 10x faster by not writing clean code, it doesn't matter.
    Need should drive optimization.
    If your code can be easier to read and maintain while being within your target speed, then why should that matter?

  • @davdev793
    @davdev793 Год назад +3

    Fun that his explanation of those clean code rules is actually pretty nice. He is not lying, but everything needs context. In general, clean code is still worth aiming at, in my limited experience. You just need to know when to apply and when to bend those guidelines. Personally, I believe optimization should not be done at the moment of inception: create clean, optimize dirty later. Maybe you're experienced enough to do both at the same time, but would the rest of the team, or the next ones to come after you, be that capable? Your take is also pretty nice; it's always good to know why you're doing something.

    • @iswm
      @iswm Год назад +1

      Your experience is limited but Casey's isn't. You should listen to Casey.

    • @davdev793
      @davdev793 1 year ago

      @@iswm I listen to many people, and of course Casey is one of those people, just not the only one. You have to learn to listen to every side and take what is good from each in order to get better on your own. There is no golden-hammer solution.

  • @GeorgeTsiros
    @GeorgeTsiros 1 year ago +2

    8:40 "i'm the first to admit, i'm not a huge math fan"
    uh oh. Red flag.

    • @CodyEngelCodes
      @CodyEngelCodes  1 year ago

      Thanks for the engagement on this video ☺️

  • @Gol_D_Roger_The_Pirate_King
    @Gol_D_Roger_The_Pirate_King 1 year ago +14

    DRY is good when you want your code to be consistent and updated in just one place. If you always repeat your code, you need to update all of the repeated copies once you update your logic. That is why there are a lot of bugs in game dev: the games are fast but hard to maintain.
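The "update in just one place" point can be sketched with a tiny (hypothetical) example: the shared formula lives in one function, so a balance change edits one spot instead of every call site.

```cpp
// Hypothetical sketch of the DRY point: the armor formula lives in one
// function, so changing it updates every caller consistently.
int applyArmor(int rawDamage, int armor) {
    int reduced = rawDamage - armor;
    return reduced > 0 ? reduced : 0; // clamp so armor never "heals"
}

// Both callers share the same logic; a change to applyArmor above
// propagates to melee and ranged damage without hunting for copies.
int meleeDamage(int base, int armor)  { return applyArmor(base * 2, armor); }
int rangedDamage(int base, int armor) { return applyArmor(base, armor); }
```

If the clamp rule had instead been copy-pasted into both damage functions, fixing a bug in one copy would silently leave the other wrong.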

    • @dire_prism
      @dire_prism 1 year ago +9

      It works the other way around too. If you change the logic of code called from different places you need to consider how it affects all callers.

    • @taragnor
      @taragnor 10 months ago

      @@dire_prism If you use proper OOP, though, that's not a big issue, because you still honor the interface. It's part of why there are so many rules like encapsulation in OOP: it allows you to change the internal workings of a class while minimizing side effects on other code. So if you have a function getCustomer(int id), as long as you don't change that interface and still return the Customer object, the user of the class doesn't need to worry whether it's implemented via an array, a hashmap, or a connection to a database.

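The getCustomer example above can be sketched concretely; this is a hypothetical repository, not code from the video, showing how the storage detail stays free to change while the interface stays fixed.

```cpp
#include <map>
#include <string>

// Hypothetical sketch of the comment's getCustomer point: callers see
// only the interface, so the storage can change from an array to a map
// (or a database connection) without touching user code.
struct Customer {
    int id;
    std::string name;
};

class CustomerRepository {
public:
    void addCustomer(const Customer& c) { byId_[c.id] = c; }

    // The stable interface: how the lookup works internally is hidden.
    Customer getCustomer(int id) const { return byId_.at(id); }

private:
    // Internal detail, free to change: today a std::map, tomorrow a DB handle.
    std::map<int, Customer> byId_;
};
```

Swapping `std::map` for a hash map or a remote query changes only the private section; every caller of `getCustomer` compiles unchanged.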
  • @MewPurPur
    @MewPurPur 1 year ago +17

    I'm a gamedev like him, although I work on small games, and I help a bit with the game engine on the side. I write my code in a way I would consider very clean. Who cares?! My game runs at 400 FPS on an old laptop, and after years of development and writing code for my game, that cleanliness costs 1% of my performance. Rendering costs something like 85%. I squeezed a thousand times more performance out of a small rendering optimization (10 lines of code) than anything I could have done to the rest of the code's runtime performance.
    When I initially put fog in the game, it cut the FPS in half. Now that's something to optimize.
    How about the game engine? There was a bug in the editor that made the curve editor reconstruct every single frame, and no one reported it for years because it wasn't noticeable. There are simply places where performance is of zero importance.

    • @duncanshania
      @duncanshania 1 year ago +3

      Actually, I'm legitimately curious now. May I ask the genre of the game and the general dynamic entity count specs?

    • @OmegaF77
      @OmegaF77 1 year ago +1

      @@duncanshania Knowing how indies are nowadays, I'm guessing a 2D pixel-art platformer but I could be totally wrong. At that point, hell yeah extreme optimization doesn't matter. However, if he is making a 3D game with high-definition graphics, then not following Casey Muratori's advice is a fool's errand.

    • @duncanshania
      @duncanshania 1 year ago +1

      @@OmegaF77 Open-world sandbox and RTS genres are where the real pushing gets done.

    • @iswm
      @iswm 1 year ago +1

      you may be a game dev, but you aren't like him.

  • @catcatcatcatcatcatcatcatcatca
    @catcatcatcatcatcatcatcatcatca 1 year ago +2

    From a consumer's perspective, it feels a bit disrespectful that we have to buy new hardware just to let developers make the same product more and more resource hungry, ultimately delivering a similar level of new features and updates as before, all for the sake of making those updates and features less bothersome to produce.
    I don't think companies as a whole are actually planning to deliver worse and worse software year after year. That seems like a very shortsighted strategy.
    Yet suggesting that this is the position of your average company is basically what "taking advantage of performance gains" translates to.
    It is also an environmental question. You could simply use less of the processing power of what can be millions or hundreds of millions of end users, not to mention datacenters.
    Even if the company's values align with delivering a worse and worse product while paying for more and more server space just to cut the cost of labour, and thus OOP and clean code are in their best interest in every way, those are still some very shitty values in my opinion.

    • @CodyEngelCodes
      @CodyEngelCodes  1 year ago +1

      Performance does not equate to a good product. It's one part of the product and certainly becomes more important as a product finds a market for itself. But just because you can rewrite something so it works more efficiently, doesn't mean that is the top priority.

  • @torarinvik4920
    @torarinvik4920 1 year ago +3

    Certain concepts like DRY, single responsibility, and small functions are generally considered good practice by most programmers. As a matter of fact, repeating yourself is considered one of the typical code smells, along with gotos, global variables, magic numbers, cryptic abbreviations, deeply nested control structures, not using const by default, etc. If you read the coding standards that NASA, Google, and other major players use, you will find many of these usual suspects. The data-oriented style that game engine programmers and the scientific computing folks use is specifically intended for working with big data and is tailored for that purpose. Uncle Bob's focus is on the enterprise software that the average programmer writes. Write maintainable, readable code, then profile and take action based on that. Do micro-optimisations when needed. OOP, though, is a tool just like FP; it is to be used, not abused. Be pragmatic.

  • @fernando-loula
    @fernando-loula 1 year ago +1

    What optimization flag did he use? His results seem rather contrived.

  • @lindesfahlgaming5608
    @lindesfahlgaming5608 1 year ago +3

    It's hard to watch the "I knew it" statements when the title of the original video says it already :) So the only discussion left is: should I use clean code if my application needs to run fast? It's a necessary question, because clean code is sometimes more of a religious dogma than something to keep in mind while coding. We will see whether the code neural networks write in the future is even readable or understandable by us :)

  • @moestietabarnak
    @moestietabarnak 8 months ago +1

    As a maintenance programmer, I mostly do refactoring around a bugfix, with K.I.S.S. I simplify, and usually I get better performance and more readable/maintainable code, which is also less buggy, because simple is easy.
    As for DRY, or "don't re-invent the wheel": Goodyear proved that one wrong.
    And I hate the 'do one thing' mantra that results in a stack trace 119 calls deep, like I see in most Java apps!
    How do you think those one-liners get assembled into an app? Init everything, load everything, save everything, log everything is quite a complex 'one thing' to do.

  • @u9vata
    @u9vata 1 year ago +18

    The key point is that "not clean" does not actually mean harder to understand. Also, the real issue is that if you do it "the clean code way" and later profile your application, you basically cannot see where it is slow - you will just see that it is slow. The main issue is that applying these rules creates not "hot spots" but "bad data organization in memory", which produces a horizontal slowdown instead of a deep vertical one at one place that you call. Then you cannot go to a spot and "try to optimize that only", because there is no such spot, yet the product is still inferior to others. I saw this happen multiple times, and when you tell the customer who hired you to optimize this "clean" mess that "you just need to rewrite the whole thing with different architectural decisions", they are not happy to hear it.
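The "bad data organization in memory" point is commonly illustrated by the array-of-structs vs. struct-of-arrays contrast; the particle layout below is a hypothetical sketch, not the commenter's code, showing why the slowdown is architectural rather than a single hot spot.

```cpp
#include <vector>

// Hypothetical sketch of the data-layout point: an "array of structs"
// drags cold fields through the cache on every pass over the hot data,
// while a "struct of arrays" keeps the hot field contiguous.

struct ParticleAoS {          // array-of-structs element
    float x, y, z;            // hot: updated every frame
    char  debugName[64];      // cold: rarely read, but loaded into cache anyway
};

struct ParticlesSoA {         // struct-of-arrays layout
    std::vector<float> x, y, z;   // hot fields packed contiguously
};

// Summing one hot field over the SoA layout touches only that one array,
// so the traversal is cache-friendly by construction.
float sumX(const ParticlesSoA& p) {
    float total = 0.0f;
    for (float v : p.x) total += v;
    return total;
}
```

No single function here is "the slow part": choosing the AoS layout spreads the cost across every loop that touches the data, which is exactly why a profiler shows no spot to optimize.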

    • @duncanshania
      @duncanshania 1 year ago +2

      THIS RIGHT HERE! I suck at explaining this exactly, but you did it perfectly. It's all over the indie game dev space and it drives me up the wall. I'd die to work in a team that is at least willing to try something different and self-improve. Sticking to just one pattern your whole life is utter insanity. Legitimately, is there a way you could DM me a way to contact you later on? It's hard finding people who have actually got this far. If I ever find an excuse to work with people like this, it would be awesome.

    • @LuaanTi
      @LuaanTi 1 year ago

      Yeah, exactly - but I wouldn't call that clean code! :D Performance must always be part of the design. But! I don't think the world would be better off without those game developers making their games. And they wouldn't ever make the game any other way.
      Part of clean code should be the ability to separate code (and data) from other code. That allows you to optimize where that optimization is actually worth it. But good abstraction is _hard_ . And it's very easy to get lost in that and never actually make a game. It's just as easy to get lost in code that never does any abstraction as in code that does poor abstraction; eventually, the frustration of working against the grain gets to you.
      I've done plenty of development with 8-bit and 16-bit machines. Yes, we did squeeze everything we could out of those machines - but only ever to find a way to make the features fit and work. If your code fit in memory (and you didn't have to share that space with any other code), you were done. If it no longer did... that was the time to start making things smaller. It was the same with speed - nobody bothered optimizing stuff that wasn't a problem for the customer. And yes, I've fixed code that only took a few hours of work to improve performance a thousand-fold and made the customer very happy - but they were still very happy with the old code too; it still saved them huge amounts of effort and time, and that's the point.
      What tends to be annoying is when that isn't true; when the software actually makes things take _longer_ . But even that can be worthwhile if it pays for that with things like initial investment, at least for a time. Most people don't want to be specialists in a hundred tiny fields :D
      Everything should pay for itself. If "clean code" means you're writing slow, bug-ridden, hard to write/maintain code... throw that out. Do better. It really helps having good, consistent tracking of both coding performance and code performance. Keep track of all those horrible parts of code noöne wants to touch, where dozens of new bugs spawn with every change and sometimes on a blue moon something starts yodelling.
      And keep in mind the costs of changing things later. Abstraction can be great for fixing exactly the kinds of performance problems you're talking about - if the data organization in memory is decoupled from how you use that data, you can change one without changing the other, at least in part. Heck, SQL is almost 50 years old now, this is no new radical idea. My software 3D renderer can easily and quickly do things Ken Silverman didn't dream of back in the day thanks to such high-level abstractions - and without the cost of maintaining ten different versions of "the same function" (with small tweaks). It's hard to overstate how extremely powerful managed memory can be _if you can afford it_ - and fortunately, the places where you can efficiently use managed memory are ever increasing. And of course, you always have to be ready to get a bit of grease on you when necessary. I've always disliked Java in particular because it makes it extremely expensive or outright impossible to manage memory a bit more tightly _when necessary_ - and unfortunately, a lot of "low-level" coders only ever had a passing familiarity with how horrible Java is. But again, nothing new there - it took a long time before C/C++-style programmers budged on the static vs. dynamic memory allocation debate. Mostly it took the old guys dropping out.

    • @u9vata
      @u9vata 1 year ago

      ​@@LuaanTi "It's just as easy to get lost in code that never does any abstraction as in code that does poor abstraction"
      - To be honest I think with some certainty that having wrong abstraction is actually even worse than having none in most cases...
      "nobody bothered optimizing stuff that wasn't a problem for the customer."
      - I actually disagree to some extent. Yes, you did not "consciously optimize" code, but it was muscle memory not to totally pessimize things. I mean, you did create better code just because you were more often exercising some kind of performance awareness - and you likely still produce better code than those who were never aware of performance at all. What people don't realize is that it does not take effort to do some things just right - people think that "optimization" is something that takes time away from features, which is only true if you want to outperform std::sort and the like, and honestly not even then, most of the time. But not writing "wrong stuff that no compiler and no CPU can ever make fast, by its very architecture" costs nothing - unless you do not have that skill. And everyone should have SOME basic skill in not pessimizing everything.
      Look at the Primeagen video where he goes through "making JavaScript fast one library at a time" to see how BAD the things people do are, just by not caring at all - many of those solutions are not even easier to read or write than their performant counterparts. Like using a regex instead of two if statements, where the latter is actually not just faster, but reads faster too...
      "Abstraction can be great for fixing exactly the kinds of performance problems you're talking about - if the data organization in memory is decoupled from how you use that data, you can change one without changing the other, at least in part"
      - No. If you build abstractions over a data organization in memory, you can hardly change it later - to be honest, at that point it is best to either have no abstraction at all, or to know at least the basics of the memory hierarchy - it's not really hard, and it is not extra effort to do things right "by muscle memory" in this respect - or at least not totally badly.
      Just imagine trying to change data spread around everywhere as references into something like an ECS. You just cannot make that change. The more abstractions you have in the "spread around everywhere" OO design, the harder that change becomes.
      "Heck, SQL is almost 50 years old now, this is no new radical idea. My"
      - Part of why SQL has held up so well is that it is by definition a data-oriented design. Compare its performance to some ORM solutions - and by ORM I do not mean the simple "there is a layer that lets you use different databases", but JPA-like things where there is literal mapping to and from objects. Even there, if you really want performance, you are better off not doing it the OOP way, but closer to how you would write it in SQL. And if someone builds "the OOP way" abstractions, you end up needing a bigger refactor than if you had not abstracted things, or had done it well.
      "My software 3D renderer can easily and quickly do things Ken Silverman didn't dream of back in the day thanks to such high-level abstractions - and without the cost of maintaining ten different versions of "the same function" (with small tweaks). "
      - Easily: yes, partly because of better software technology and tools, but partly because time has passed and more knowledge is available.
      - Quickly: this part is only there because of the hardware guys, to be honest.
      No one questions DRY - neither I, nor the video. Also, there are tools that help you write more efficient code than ever. It's just that people are totally not performance aware by default - and what I mean by that is that with the same mental effort they produce less performant code, despite all the gains. I tried old Visual Studio on my 2007 laptop to open a project people rarely need to touch... It was blazing fast on the old machine, while on a newer company PC the new version was slower. How come? I was doing the same things, but it was not only that "the new is more wasteful" - it is literally so wasteful that it did the same thing on a faster machine, slower!
      You can say: "Oh, but it is not doing the same thing"... Okay, but I used it the same way - so I surely did not ask it to do its extra bullshit. No one asked for that, and that should only penalize the people who really need some of those newer features. It's just horribly slow compared to how it was. It feels like they stopped knowing how to program...
      "It's hard to overstate how extremely powerful managed memory can be if you can afford it -"
      - It is easy to overstate. Also, guess where I find the biggest memory leaks on a daily basis? Not in C/C++, not even in the Free Pascal projects I come across, but in TypeScript and in Java. For various reasons I spend some of my time on performance-oriented stuff and some on "regular stuff" too, and it's a day-and-night difference.
      I find three main heavy issues in GC codebases:
      1.) People think GC solves everything: keeping a reference to something that the system should consider end-of-lifed is actually worse than crashing! I routinely see code where people add themselves to some event handler and never remove themselves from the event system. This is the single best way to create memory leaks AND very dangerous, hard-to-debug, ill-defined behaviour in huge codebases.
      2.) All the time I see people not realizing that if you generate junk memory CONSTANTLY, you will end up in a place where, when things are slow, you can never tell where the slowdown is coming from... Why? Because even if you profile the thing, the cost a badly written function creates does not show up in the function, but in the follow-up GC runs!!! This makes it awfully hard to optimize the code when it is finally needed.
      3.) Because GC-only people think that GC solves everything, I routinely see database connections, files, pipes, message queues, sockets, and all kinds of other resources mishandled. Mutexes forgotten and never released (at least that bug usually shows up fast), and many similar things. The last example was a bug I found in the official Electron package, where no one had tested the case of sending a one-shot message, because the API is usually used for "at least multiple things". I looked into how the resource was handled, and it would have been so easy to handle with RAII if it were C++ or Rust or something similar.
      Comparatively, in C++ or Rust - or even by implementing RAII in JS yourself (I did, for a project!) - you can make your event API such that being destroyed automatically removes you from the message bus: no smart pointers, no extra references, just plain objects and a destructor, and it works because of RAII. I might even blog about this just so people understand it better. It was helpful enough that I took the time to implement RAII in JS, where people had misused this so badly that I had to do something - then copied that code over to a TS project where people are not so bad, but it was still beneficial to have :-)
      "and unfortunately, a lot of "low-level" coders only ever had a passing familiarity with how horrible Java is"
      I have surely written over 100kloc of Java (just my own part was that big, and we had over 10 people each adding that much code) on very serious projects. It is both horrible and not. Also, you totally do not need to be coding in C++ to avoid awfully pessimizing your code - yes, with Java it will always be a bit of a memory hog and always a bit slower, but it can be made to perform not so badly just by following some simple rules.
      And it's good you refer to the dynamic vs. static memory debate as closed, as finally everyone knows that whenever possible it's better to just avoid dynamic memory :D
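The RAII event-subscription idea discussed above can be sketched in a few lines; this is a hypothetical bus, not code from any project mentioned, showing how a destructor makes "forgot to unsubscribe" impossible by construction.

```cpp
#include <functional>
#include <map>
#include <utility>

// Hypothetical sketch of the RAII-subscription idea: the subscription
// object removes itself from the event bus in its destructor, so a
// panel that dies can never keep receiving events (or leak).
class EventBus {
public:
    using Handler = std::function<void(int)>;

    int subscribe(Handler h) {
        handlers_[nextId_] = std::move(h);
        return nextId_++;
    }
    void unsubscribe(int id) { handlers_.erase(id); }
    void publish(int event) {
        for (auto& entry : handlers_) entry.second(event);
    }
    std::size_t count() const { return handlers_.size(); }

private:
    std::map<int, Handler> handlers_;
    int nextId_ = 0;
};

// RAII wrapper: the subscription lives exactly as long as this object.
class ScopedSubscription {
public:
    ScopedSubscription(EventBus& bus, EventBus::Handler h)
        : bus_(bus), id_(bus.subscribe(std::move(h))) {}
    ~ScopedSubscription() { bus_.unsubscribe(id_); }
    // Non-copyable: two copies must not double-unsubscribe one id.
    ScopedSubscription(const ScopedSubscription&) = delete;
    ScopedSubscription& operator=(const ScopedSubscription&) = delete;

private:
    EventBus& bus_;
    int id_;
};
```

When the `ScopedSubscription` goes out of scope, the handler is gone: later `publish` calls no longer reach it, which is exactly the lifecycle discipline the comment says GC languages tend to skip.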

    • @LuaanTi
      @LuaanTi 1 year ago

      @@u9vata Well, if you discount SQL as an example of abstraction helping things, you're already thinking about very different OOP than I am :) The same thing works for deciding data design strategies for other things, including, say, using an oct-tree or SVOs for storing volumetric data. Separating the query from the thing being queried affords you _more_ optimizations, not fewer.
      I don't have to imagine changing OOP engine into ECS, because I've done it. It was trivial. That's what good abstraction gives you. It also supports splitting data on the fly between different workers, even on different machines (handy for an MMO). I also have a query language that allows you to write nice queries to determine what kinds of chemical reactions happen all over the world as environmental conditions change (yes, it's a very weird sandbox game). This allows me to change storage and access strategies on the fly, based on runtime data from the servers, while allowing easy expansion (eventually I'd like the players to be able to design their own worlds).
      I've had that Visual Studio experience recently too, and the results were exactly the opposite. VC++ 5.0 on my new computer was much slower than VS 2022 - and that's with C++, which is stupidly hard to work with (C# is incomparably faster). And add to that that every thing I clicked on in VC++ 5.0 meant more waiting (because of course nothing was pre-loaded to save memory and loading times), and that it kept crashing pretty often. And don't even get me started on the debugger. As soon as I could port the project to 2022, I did, and never regretted it.
      Memory leaks? I doubt you'd really find a significant difference between unmanaged and managed code. The reasons they happen are largely the same if you write at least passable unmanaged code (managed code does have a somewhat lower threshold for what counts as passable, but we're talking about decent programmers, right?). The real problem has always been memory safety, and especially so when things appear to be working fine most of the time - things that would cause a hard immediate crash or simply wouldn't possibly happen in the first place in a managed environment will cause you endless suffering in an unmanaged environment.
      I'm not sure why you're putting Typescript and Java in the same sentence. What do they have in common? In any case, sure, you see a lot of poor Javascript code everywhere. Now imagine if the same people wrote the same libraries in unmanaged code (and before you say "they wouldn't", of course they would - just look at PHP).
      The memory leaks you're talking about are trivial to find and fix. That's what the managed environment gives you - information you simply don't have in the vast majority of unmanaged code, especially if it wasn't specifically built with debugging in mind. It's the same with the junk memory - if you can't find where it comes from, you're not using your tools very well. It's very easy to find, though of course not necessarily so easy to fix. As for unmanaged resources being handled wrong, sure. But - it's still easier to handle in most managed languages I've worked with than even with RAII in C++. I don't know why you'd compare code that doesn't do the barest minimum to C++ code which uses RAII - compare it to C++ code that _doesn't_ use RAII, that's the kind of programmers you're talking about.
      I quite like the Rust model; it's certainly a huge leap forward compared to C++. But... I definitely prefer select bits of unmanaged memory in a managed memory environment. And being able to defer the low-level decisions for later, when you have a better idea of what the application will actually be doing, is absolutely essential for quickly iterating on ideas. C++ programmers know that too - that's why they've been building throwaway prototypes (heck, sometimes even in managed languages).
      One of the key points here is to avoid doing it "the wrong way" if it's also a lot of effort. When I have to redo the "slow" code later on, no work is wasted. That's one of the reasons why my "slow" code doesn't look like Java Enterprise Code - the point is to be quick and flexible, so that it can later be changed to what's needed.

    • @u9vata
      @u9vata 1 year ago

      @Luaan Part 1: Please do not lump every abstraction (or any structure whatsoever) together as either OOP or "clean code"!
      "Well, if you discount SQL as an example of abstraction helping things, you're already thinking about very different OOP than I am"
      - Where did I say it is bad abstraction? What I said is that its actually more a data first abstraction than OOP and this is one of the reasons why it holds up in long time well.
      "Separating the query from the thing being queried affords you more optimizations, not fewer."
      - Yes. Now think for a bit about how this violates OOP's one main rule: encapsulating data + operations. SQL literally does the opposite of that design and clearly separates "the query and the thing being queried".
      - Again, you can say: but hey, that solution/abstraction is pretty clean - yet it's neither OOP nor "clean code" in any way. You can argue (and I accept) that it is clean code (in terms of readability, maintainability, and separation) but not "Clean Code" (as in the set of rules Martin writes books about).
      I see this as an issue, to be honest: he coined a set of development practices - some of which are hard rules, some less hard - as "clean code", and whenever people, or even he himself, get cornered a bit about this or that part of it not really being the best practice, solution, or pattern... defenders start talking about "clean" as in the English dictionary, where you look up what the word clean means... That is not the point. The point is that the set of rules advertised as achieving dictionary-sense cleanliness and called "clean code" (or OOP, for that matter) are not good rules for most cases, and better abstractions - shallower, but just as readable - can often be found.
      "I've had that Visual Studio experience recently too, and the results were exactly the opposite. VC++ 5.0 on my new computer was much slower than VS 2022"
      - VC 5.0 is not when VS peaked... Try something like VS2010 or around that time period. But on many projects VC5.x is actually faster if you run it in its original environment. I literally ran 2010 a week ago on a C++ project, and it's faster on the 2007 machine with shitty old Winblows than the new environment is. I would move the codebase just so that it's easier to configure a more modern compiler and that kind of thing - but then basically just move the whole thing onto Linux, where coding C++ is already much better.
      "Memory leaks? I doubt you'd really find a significant difference between unmanaged and managed code."
      - There is, and it's a HUGE difference. Modern C++ projects I see basically have zero (as in none) leaks. Very interesting. Even in the better TS project, where I consider the people on the team not that bad, I see leaks everywhere. Again, that is because C++ people in modern codebases literally do RAII everywhere, and do the same for event listeners, files, whatever. I see event listeners causing most of these leaks. That is... the GC does not see it as a leak, because there are still references to the data: say you open a new panel, and the panel adds itself onto some event chain (gives out a reference to itself); they totally never remove it when the panel's life cycle ends "normally", and it even keeps receiving random events.
      "The real problem has always been memory safety, and especially so when things appear to be working fine most of the time - things that would cause a hard immediate crash or simply wouldn't possibly happen in the first place in a managed environment will cause you endless suffering in an unmanaged environment."
      - I disagree that a crash is that bad... If you just valgrind it every now and then and code modularly, you can catch any of these even if you do not write modern C++; doing that, you barely even meet such crashes - even fewer in, say, Rust, though not that many fewer. What happens in the GC world, though? You still have pointers to data that is LOGICALLY DEAD (just physically kept alive by the GC), and guess what you get instead of crashes: data corruption... like, on disk... There was a case where some guy rolled out a change that was "corrupting some data on some machines", and it took two weeks before we started seeing it... much worse than a crash.
      Also much harder to check for - for these kind of issues there is no valgrind and stuff... Because there are valid references, the issue is that they should not be there.
      I give you an example: Again you add yourself to an event listener (this was the case) and because its GC language, people totally have no design / coding sense for proper life-cycle handling. I mean on C++ you do RAII for it because of your muscle memory tells you - without thinking... but where people think GC solves all and every, they do not even think or design much about life cycle of their entities / objects or panels. What happens? They add to event listener and never remove from it - because life cycle is not thought through and is ad-hoc in code - not in constructor / destructor with RAII, but randomly spread around, with added issue of "did they properly handle exceptions or not" around it (okay this is better in java because exception specifiers, but totally suck in TS/JS for example). What happens? Your code has been subscribed to events 2x, 3x the times and actually process it... You might see no issue in testing because you only test few things at the time - then in production some big load happens on the application and suddenly the user - or some process - change the value after the UI changed it - and expects it to be that now.... and the 2x, 3x subscribed panel handler overwrites it... Bumm. Data corruption and state that no one ever has planned for. Also even if this cannot happen, the objects never die and just collect up.
      You can say "but hey - you can do that same shit in C++" - but my point is that in C++ people think about life cycles by default, while in GC languages people generally do not. And you can leak memory, waste CPU time, and cause data corruption if you do not. People should be more educated about this... MUCH MORE.
      I also don't really think memory safety bugs are the most common ones. Also you seem to compare "how things were in malloc/free times" to proper RAII solutions that everyone is using now in practice on those languages - and even when not, document life cycles and design for it...

  • @rafagd
    @rafagd 1 year ago +2

    His video is yet another example of using truths to tell a lie. It's a collection of actually horrible arguments against the current consensus. The main pain point I have with these is that he doesn't talk at all about bottlenecks. Yes, you can make your UI code 2000x faster, but you're bottlenecked on I/O anyway, so now you have a pile of spaghetti to maintain AND you got no speedup for your work. Most software has bottlenecks you can't avoid, and more often than not, working around those is how you actually get substantial gains in performance, not writing everything in a single function and leaving a monster for the next person to tame.
    Keep his point in mind, remember these are possible optimizations, and benchmark your stuff. If you have really worked through the bottlenecks and you can see that your abstraction is the next one, then go ahead and apply his advice. Go down one level at a time and stop as soon as it is "good enough".
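The "benchmark your stuff" advice above can be sketched as a minimal timing helper; this is a hypothetical utility, not a real profiler, for confirming a section of code is actually the bottleneck before trading readability for speed.

```cpp
#include <chrono>
#include <functional>

// Minimal sketch of "measure before optimizing": time a candidate hot
// spot in microseconds, so the decision to de-clean it rests on data.
long long measureMicros(const std::function<void()>& work) {
    auto start = std::chrono::steady_clock::now();
    work();
    auto end = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::microseconds>(end - start)
        .count();
}
```

A real investigation would use a sampling profiler and repeated runs, but even this is enough to notice when the "slow" abstraction accounts for microseconds while I/O accounts for milliseconds.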

  • @rosen8757
    @rosen8757 1 year ago +5

    My biggest gripe with "clean code" is that it's so much harder to read and follow.
    And I really don't understand what's so bad about using functionality of the language, switch statements are great.

    • @CodyEngelCodes
      @CodyEngelCodes 1 year ago +1

      Clean Code when done improperly is convoluted and difficult to follow. I don't have any issues with switch statements, I think they are great.

  • @d_lollol524
    @d_lollol524 5 months ago +1

    I guess cloud providers will love to run our "clean SOLID code".

  • @jameswood4672
    @jameswood4672 1 year ago +2

    "It doesn't matter if my code is slow, hardware will make it fast" is a cope. And a terrible one at that. It's the reason software is garbage nowadays. And I've worked with fintech companies. They literally pay 350-400k for C++ engineers to come in and shave nanoseconds off their latency.

    • @anm3037
      @anm3037 1 year ago

      Hardware will make it fast is the anthem 😅

  • @darioc23c
    @darioc23c 1 year ago +1

    Casey is right. I started with a Commodore 64 and I think I know what he's talking about. And if somebody says speed is irrelevant, it's not. He makes a point; okay, for some software it is not important, but in general IT IS IMPORTANT.

  • @marco.garofalo
    @marco.garofalo 1 year ago +1

    Not sure what to take from the original video. I guess it's useful to know the performance trade-offs for that specific example, but it only shows one side of the coin, i.e. he's not showing or measuring what we actually gain in the exchange. It's not easy to judge whether it's worth it by looking at performance alone. If I may speculate, shipping software quickly can matter more than top performance (especially during the early stages of a product), and having to keep reverse-engineering the codebase in order to add new behaviour doesn't sound like a good compromise. In your opinion, what would you choose if gaining 25x in performance meant a 10x slowdown in delivery, and your competitors eat you alive? Besides that, it doesn't make sense to optimise every part of a piece of software, so make sure metrics are in place to monitor critical journeys/paths/parts, then make informed decisions about where to move away from "clean code", trading off ease of change for higher performance, or vice versa.

  • @axeld.santacruz4659
    @axeld.santacruz4659 6 months ago +1

    Best take on that video, this is what a good engineer would actually do.

  • @HerbertLandei
    @HerbertLandei 1 year ago +7

    Yeah, I understand why a game dev counts every millisecond, but that's a totally different context than 97.5% of normal business programming, where you can write almost all code following best practices (I don't say "clean code", as I don't agree with all of it), and only optimize the few cases where it really matters.

  • @thomaspeuss2045
    @thomaspeuss2045 7 months ago +1

    I played around with Casey's code, and when I activate compiler optimisations the differences between the versions become MUCH smaller. That is why I don't like Casey's video: he skips important information that doesn't support his case.

  • @joelbrighton2819
    @joelbrighton2819 1 year ago +2

    Towards the end you stated: "think for yourself". This totally hits the nail on the head! I like some of the Clean Code principles but others, frankly, are complete claptrap (for the environments in which I work). Bottom line, one has to be pragmatic!

  • @badwolf8112
    @badwolf8112 8 months ago

    There's no benchmark for what the clean-code rules in static analysis tools should be.
    Your appeal to intuition about the value of small functions is a good example. If code is hard to follow, you can just break it into paragraphs instead of adding more boilerplate with more functions and more function calls: there is no objective indicator of "clean".
    Honestly, it's nice seeing someone just come right out and say that clean is subjective and unmeasurable, because I can't escape how much reading code sucks even when someone has tried to make it clean.

  • @ardvan
    @ardvan 1 year ago +3

    In the end, all the abstractions of "clean code" exist with the intention of, first, helping complex software get done quicker (and by less brilliant people), and second, leaving it in a state that helps other people maintain it. Unfortunately, I would also compare some of these development abstractions to a bureaucracy that wants to "prevent" bad code by having strict rules about what is allowed.
    But even the best programming language can't help if the programmer isn't up to the task. Bad programmers always write bad code, just as bad craftspeople build faulty houses. These rules won't help.
    It's exactly like building a city. In both cases you invest creative energy into building a physical or metaphysical object, a house or a program. Some things won't be done if time constraints are too tight. Other things won't be done at all. Some people are lazy and/or procrastinate and do things only at the end, in a hurry, with no love for what they do.
    I like to compare a software project to the house you live in. Procedures and functions are the rooms of your house, variables and flags are the drawers, conditions and formulas are the appliances. They have a tendency to multiply or rot over time, and even to grow like heaps of dust.
    You need to tidy both the house and the software.

  • @PaulPetersVids
    @PaulPetersVids 1 year ago +1

    There are three types of cost to think about when creating software: 1. performance cost (how fast is it?), 2. development cost (how much time/effort is required to build it?), and 3. maintenance cost (how much time/effort does it take to maintain?).
    Clean code helps with 2 and 3. This guy seems to be concerned only with performance cost.

  • @Parker8752
    @Parker8752 1 year ago +2

    So, when it comes to slow software, I think the issue is that some software performs worse on modern hardware than older iterations of the same software did on older hardware (Visual C++ 2005 on decent hardware of its time vs. modern Visual Studio on today's decent hardware, for example: the former loads projects faster and has a far more responsive debugger). If you have a software product that you've been iterating on since the 1990s, and the latest version is less performant on a Core i9 than the version from 2000 was on a 1 GHz Pentium, then I'm not sure there's any excuse for that. For it to be merely equally performant, it had best be providing a hell of a lot more value.

    • @LuaanTi
      @LuaanTi 1 year ago

      Okay, I've actually been working with Visual C++ 5.0 recently while recovering a large old C++ project... and I call BS. As soon as I could get everything to work properly in VS 2022, I switched to that - it's far faster and more comfortable, and the debugger actually works. And mind, there's plenty of things I like about the old IDEs, and I have plenty of good memories ever since I first got to Turbo Pascal. And that's with the old Visual C++ running on _modern_ hardware, not a 1 GHz Pentium. Loading the same project in Visual C++ 5.0 took 7 seconds, while in 2022 it was just under 2 seconds (all of this is on an NVMe SSD).
      There was a painful hiccup when Visual Studio first switched to the WPF-like UI, but once that was smoothed over, VS has been faster than ever, even with the tons of added new features. Of course, it's by far the worst in C++ - C++ is _really hard to work with_ , which is a big part of why so many C++ programmers still use simple IDEs (if that).
      Worse, Visual C++ 5 doesn't actually load anything on project load, really. When the project loads, you don't even have a list of the classes in the project, or the resource files, or project settings... Every single click anywhere takes considerable time to load things up, over and over again. This was a good trade-off back in the day, but today it makes everything feel really soggy and painful to work with. Even if it took longer to load the project up (and it doesn't), it would be worth it for how much more responsive and faster the work actually is - I don't need to reload projects ten times a day, and the work I actually do throughout the day is much faster in 2022. And as painful as working with any C++ codebase is, it's still a lot better in 2022 than in 5.0.
      Of course, the old Visual C++ had a bit of the Unix school in it - it had to focus on loading times, because it crashed all the time :D
      It gets even worse when you start trying to actually do anything. Searching and navigating anything is very slow in 5.0. Building takes forever, and rebuilds are necessary more often. The debugger is extremely simplistic and most of the time you have to go back to splashing debug outputs and asserts everywhere rather than relying on the debugger to do its job. New Visual Studio is much faster in part because of things that weren't reasonably possible in 2000 (like a _load_ more memory available today), but also because a lot of things are done smarter. The people making Visual Studio are pretty good at their job.
      Now, Visual Studio could certainly be even faster. Ditching binary files for text files certainly slows things down, for example. But was it a good change? IMO the answer is definitely yes. There's plenty of corrupted project files that I've had to deal with over the years, and the slowdown from having text files in the first place doesn't come close to how much easier those files are to work with (especially considering that again, loading the project isn't something I have to wait for the vast majority of time).

  • @KoltPenny
    @KoltPenny 5 months ago

    It's easy to take one video out of the entire course and rip it out of context. Casey teaches how simple, small changes can improve your code's performance a lot. This episode in particular is about how practices that have nothing to do with engineering end up making things worse. He also complains that people frequently read "premature optimization is the root of all evil" as "optimizing is evil" and never care at all.

  • @joechuck74
    @joechuck74 1 year ago +6

    If the shapes example existed within a REST API, the performance improvements of the dirty code would be masked by the network request/response times. You'd have almost unreadable code with no change in user perception.
    To your point in the video: he must be working with hardware at a low level.

    • @CodyEngelCodes
      @CodyEngelCodes 1 year ago +3

      Thank you for your comment. While it's true that the network request/response times can impact the performance of a REST API, I believe that writing maintainable code is still an important goal for developers. While optimizing code for performance can lead to faster execution, it can also result in code that is difficult to understand and maintain. This can ultimately lead to increased development time and costs as developers struggle to maintain and update the code.
      In my opinion, the best approach is to strive for a balance between performance and maintainability. By using clear variable names, organizing code into logical functions and modules, and applying other best practices, developers can create code that is both fast and easy to understand. This not only makes it easier to maintain the code over time but can also reduce development time and costs in the long run. Ultimately, the goal should be to create code that is efficient, maintainable, and meets the needs of the application and its users.

    • @Elrog3
      @Elrog3 1 year ago

      I'm certain he would also argue that the design of the network is crap and is way slower than it should be.

    • @duncanshania
      @duncanshania 1 year ago

      @@Elrog3 Which it is. But really, in that situation you still want to minimize bandwidth more than anything, which still means being thoughtful about data layouts, and that is the real core of being mindful of performance while you code. Also, no surprise: polymorphism generally bloats objects, adds indirection, and skews the general intent of how the data should be used and moved; it's terrible to serialize. Passing objects built this way over the network is generally going to be more bloated. Latency and bandwidth both matter even for networking-centric work. Heck, a game server isn't much different from a web server in that respect; even if the tolerated latency varies, there's still a cap.

  • @shubhampawar7921
    @shubhampawar7921 1 year ago +1

    I think a person using JVM-based languages shouldn't debate performance against guys who have been writing C/C++ and doing game development.
    The reason people use garbage-collected languages is that they don't want to manage memory. They know this results in slower performance, but they tend to ignore it because it "enhances" development time. Now, I'm not saying it's bad to use garbage-collected or JVM-based languages, but there's no way I'm taking their opinions seriously when it comes to performance and optimization.

  • @DIYDaveOK
    @DIYDaveOK 8 months ago

    I was told by a very sage developer years ago never to write or design code up front for performance. This entire discussion validates that. "Don't write a switch statement today because the hardware tomorrow may be slower" is fairly stupid advice.

  • @m4rt_
    @m4rt_ 9 months ago +1

    How readable code is isn't the same for everyone. It depends on familiarity with that kind of code, how long you're willing to look at it, and so on.
    Just because you don't see it as readable at a glance doesn't mean it's not readable.
    Everything is hard to read when you aren't familiar with it.

  • @dorbie
    @dorbie 1 year ago +3

    He's not opposed to writing clean code, he's opposed to the doctrine under the label clean code that isn't pragmatic and pisses away performance on the altar of arbitrary rules that don't actually deliver on their promise. These ideas are presented and enforced with religious fervor by some zealots you might encounter in your career and these people "know" a lot of stuff that just isn't so. That blazingly fast super computer you're sitting in front of performs like a dog when doing seemingly simple stuff because the "clean code" advocates won silly arguments based on opinion and bullshit for the past two decades. Meanwhile that guy presenting HIS ideas is actually able to make that computer do vastly more complex things blazingly fast instead of spout bullshit about productivity.

    • @zacharychristy8928
      @zacharychristy8928 1 year ago

      He should probably CLARIFY that then, because the first 4 chapters of clean code are literally just "Name things well and be concise when possible" so it sounds kind of crazy to take so much exception to that, lol.

    • @dorbie
      @dorbie 1 year ago

      @@zacharychristy8928 The sheep's clothing. It's almost like you chimed in with a straw man without even listening to anything people actually said.

    • @zacharychristy8928
      @zacharychristy8928 1 year ago

      @@dorbie Because I hear more people complaining about "clean code dogma" than anyone actually pushing it. There are far more strawmen here in what people describe as "clean code". So it's important to be clearer in what he's advocating against.

    • @dorbie
      @dorbie 1 year ago

      @@zacharychristy8928 He went to great length to describe in painstaking detail EXACTLY what he was talking about. How about you go listen instead of pretending you did.

    • @zacharychristy8928
      @zacharychristy8928 1 year ago

      @@dorbie weird, because he took a lot of exceptions to things that absolutely aren't zealously or dogmatically pushed by "clean code" practitioners. Even the idea that clean code is some kind of dogma is something I see being complained about more than actually occurring.

  • @ddmozz
    @ddmozz 1 year ago +1

    I agree with the other guy though. I think performance is important in a spiritual way. "Computers are faster now" / "we have broadband" / "this code has to be maintained by others" etc. are just coping mechanisms. Maybe necessary for huge corporate software, but certainly not for the vast majority of software out there.

  • @eugenschabenberger5772
    @eugenschabenberger5772 1 year ago +1

    One day I had to fix a bug. Over 50 classes in even more files, all derived, small functions, and bugs nobody could find. Clean code, perhaps. I rewrote the whole thing as one large function. Then it worked. No more bugs. Nobody ever looked at the function again. Clean code is nice for people who are not sure what they are doing, producing only a bloated mess of classes.

    • @vast634
      @vast634 1 year ago

      On the one hand the guy here talks about cognitive overload; on the other he promotes splitting everything up into atomic small functions and dedicated classes. That is exactly what creates cognitive overload: a huge call stack and a huge number of class files to wade through.

  • @houseisforthesoul
    @houseisforthesoul 1 year ago +5

    Clean code should not be mutually exclusive with performance. I have seen plenty of design patterns implemented eloquently while still abiding by the coding standards adopted by a few companies. I have seen driver code for embedded system products implemented with the factory pattern (like a CAN bus). Could it have been optimized better? Sure. Every program can be optimized; that's the beauty of this field. You can always refactor and make improvements given enough time. That doesn't mean refactoring suddenly means writing things as complex as possible to make them run faster. You know what you can't do with enough time? Have someone who doesn't understand why your 5000+ lines of code all have to sit under main before the application executes go and work out the contextual history of why it's done that way, without handholding. You see how I took an extreme example and just trashed the argument for performance? That's exactly what Casey did here with clean code. His arguments are extreme.

    • @peezieforestem5078
      @peezieforestem5078 1 year ago +2

      You're straw manning his position. The actual point Casey is making here is that you should be aware of a large performance penalty of clean code. You might still make a decision to make that trade-off. The problem is almost no proponent of clean code ever discusses this aspect.
      If you believe people will make that trade-off, just tell them, there should be no penalty.
      Besides, if you've ever seen "AbstractToastHeaterGeneratorFactoryBuilderInterfaceImplementer", you know it is entirely possible to write abysmal code following the clean code patterns, so the benefits are not guaranteed.

    • @houseisforthesoul
      @houseisforthesoul 1 year ago +2

      @@peezieforestem5078 Clean Code never discusses "that" aspect because it's not supposed to be a trade-off against performance; they are not mutually exclusive. I am not strawmanning his argument; I am arguing just like him. As for your toaster example: yes, you would need to justify why you are making an abstract class for a toaster. I recommend reading up on SOLID and OOD if you're having difficulty understanding the use cases where abstracting an interface may be beneficial. In your case, a toaster seems like an odd choice.

    • @duncanshania
      @duncanshania 1 year ago

      @@peezieforestem5078 I'll give an example from my domain: games that need a touch of determinism or general mechanic flexibility can benefit from ECS, which is a child of data-oriented programming. For something like Overwatch, an RTS, or a sandbox game, it's actually vastly easier to maintain and add new features to, unlike the Actor and CS patterns under the OOP domain, which in these types of products would suck major balls. I constantly have to get on indie devs for never being willing to try any pattern that isn't OO, as if they'd die from it. In truth, they could actually meet the insane promises they made their own customers if they'd be willing to redo their code base in a pattern proper to the task. It would actually save them time and be a much more pleasant experience. But no. Software development is currently buried in echo chambers. This isn't a minority situation; most software I look at is too OO-centric to do its job well.

    • @andrewrobinson2985
      @andrewrobinson2985 1 year ago +2

      Demanding polymorphism and inheritance on everything in a code base doesn't necessarily make it any more readable than a functional approach, but it does necessarily make it slower when the CPU *has to* bounce around memory at runtime just to find where your next function call is.
      You can write well-documented, extremely readable code in a way that doesn't completely ignore how the CPU works internally. It feels like clean code is popular because it's hard to fuck up those rules as a developer who doesn't have a deep understanding of computers. At the very least, it's a step up from everyone's first uni project that was just a main.cpp file with hundreds of lines in main(), but no decent professional developer should need to adhere to these rules to write good code.

    • @duncanshania
      @duncanshania 1 year ago +1

      @@andrewrobinson2985 It's not hard to get people in my domain to adopt the ECS game programming pattern if you have sufficient authority over a project and just throw them in. And that's one of the more janky patterns. Or rather it's just jarring initially with how alien it looks.

  • @pmoohkt
    @pmoohkt 1 year ago +2

    I can imagine that for a company deploying their app on a service like google compute engine, where you pay by used CPU resources, a 10x to 20x reduction of hosting costs does matter.

  • @scottfranco1962
    @scottfranco1962 1 year ago +1

    I saw the vid. By the way, I don't subscribe to people who turn commenting off; if you can't take feedback, I'm not interested in watching you.
    I believe the net-net of the video is that abstraction in code has a price. The author was trying to imply that the costs are outrageous (orders of magnitude), but the examples he used were contrived and additive (let's see what happens when we do everything wrong in the same program). I am fine paying for abstraction if the cost is reasonable.

  • @PoppySeed84
    @PoppySeed84 8 months ago +1

    Unfortunately, clean code does lead to slower software. And yeah, maybe it doesn't matter sometimes. But the amount of speed you can gain by thinking in terms of how memory is actually accessed by a CPU is orders of magnitude greater than if you are only thinking about readability. The first time I ran into this was when I was trying to build a simple minimax AI for a board game similar to chess. My first version was readable and easy to follow, but it was so slow it was basically useless. That's when I actually had to pay attention to the individual bits within integers and put the entire board state into a one-dimensional array of those integers. Yeah, it was probably a bit harder to understand what was going on, but I could actually search the board space in a reasonable amount of time.

  • @peterjansen4826
    @peterjansen4826 1 year ago +1

    I liked his clean-code criticism video, but disabling comments is not done. YouTube should not even have that feature, no matter how bad commenting on YouTube is (also because of YT itself).

  • @edemkumah5248
    @edemkumah5248 1 year ago +4

    You're really struggling to argue against yourself, man. He's showing numbers; you're propounding subjective hypotheses.

  • @CyberWolf755
    @CyberWolf755 1 year ago +13

    Cody: **understood that CLEAN is shallow**
    Cody: **proceeds to make a shallow excuse to continue creating CLEAN code**

  • @ottomortadela8040
    @ottomortadela8040 1 year ago +4

    "it's not my job" is the slippery slope.

    • @CodyEngelCodes
      @CodyEngelCodes 1 year ago +1

      A slippery slope to work life balance, yes.

    • @ottomortadela8040
      @ottomortadela8040 1 year ago

      @@CodyEngelCodes Agree to disagree. I still liked the video and subscribed.

    • @Ignas_
      @Ignas_ 1 year ago +1

      "We who cut mere stones must always be envisioning cathedrals."

    • @duncanshania
      @duncanshania 1 year ago +1

      @@Ignas_ Even with my old job at a restaurant, i can very much agree with this. The point isn't to sacrifice our wellbeing for the whole that makes no sense, we do need to balance work life, but we need to self improve always to strive for a better whole. Collective good through strong individualism.

  • @Daniel_VolumeDown
    @Daniel_VolumeDown 8 months ago +1

    I wonder if this "clean code" is really more maintainable. I feel like more comments in the code and maybe some non-interrupting guidelines would solve a lot of problems. BUT I have written very few lines of code in my life, so I don't know.

    • @zhulikkulik
      @zhulikkulik 8 months ago +1

      Nobody edits comments. You could write one thing and some time later there's a completely different function at that place.

  • @jeeperscreeperson8480
    @jeeperscreeperson8480 1 year ago +3

    3:23 except when your clean code is already unoptimizable by the time you discover a performance issue.

  • @insoft_uk
    @insoft_uk 1 year ago +4

    It depends on the task at hand. If you're working with a microcontroller on something that requires good performance, then breaking a few rules is fine; though if you're working on a spreadsheet app, breaking a few rules is not so good, as the performance gains will in general never be noticeable to the end user and just make further updates harder for yourself.

  • @Karurosagu
    @Karurosagu 1 year ago +1

    I think performance IS important alongside maintainability and readability, but when a website runs slow as hell just because you don't have the latest, fastest hardware, it makes me want to puke.
    "No one's gonna notice it"
    "It's no big deal"
    "It runs on my machine"
    It's all a load of bullshit.
    Every piece of software WILL HAVE a performance hit, and it doesn't matter if it's in the backend or in your browser: someone's gonna pay the price.
    Anyways, nice video.

  • @Gengingen
    @Gengingen 11 months ago

    This is so context-specific. If it is a smaller IoT project with limited resources and little maintenance needed, then optimize away. Otherwise, with larger projects, you need to think about the poor guy who gets called in to fix a pesky bug caused by some "wise" guy who, years ago, contorted the code quite badly to shave a few stupid cycles.

  • @PythonPlusPlus
    @PythonPlusPlus 1 year ago +1

    You know what’s most disgusting? People that use a high level programming language instead of Assembly. The compiler will never be able to create the most optimal assembly. Your programs will be much faster if you program the assembly directly. Heck, if it were possible, I’d be writing all my programs with nand gates.

    • @CodyEngelCodes
      @CodyEngelCodes 1 year ago +1

      Why stop there? Let's go back to the days of vacuum tubes.

  • @infosecholsta
    @infosecholsta 1 year ago +7

    15:00 I write in-house SaaS code, and the threshold for full page responses is 100ms (except when the user does a massive search), because people's time and focus are expensive. I hate waiting a second for a computer.

    • @duncanshania
      @duncanshania 1 year ago

      If I weren't an ADHD brain, I'd ask your company to hire me. Thank you for your service.

  • @imdeadserious6102
    @imdeadserious6102 8 days ago

    9:13 Brother, I'm not even in bootcamp yet and I can take a guess. Every shape uses width times height, so don't abstract that away; the compiler can then do its best to optimize whatever differs in the equation depending on the shape's relevant values. A table stores the part of each shape's equation that is different.

  • @testingcoder
    @testingcoder 1 year ago +2

    Indeed, there's a clear conflict between code "cleanness" and "efficiency". The biggest problem is that some people don't understand there's no one right way to do things. There are many different programming languages and many different priorities. For some, it's important to release features as fast as possible (and they are probably going to see more and more competition from AI); for others, efficiency matters most.
    I believe the most important takeaway is: never assume you know what matters; ask stakeholders what matters to them in a particular project or product.

  • @Rick-mf3gh
    @Rick-mf3gh 8 months ago +2

    Principles are there to guide you. Context is king. SOLID is fine - but it is not a law.
    I can see why he turned off comments. 😉

  • @GeorgeTsiros
    @GeorgeTsiros 1 year ago +1

    10:39 Yes, because using AVX, or any CPU feature for that matter, means you know about the internals not only of the software but of the hardware, which are things that, as you yourself said, you should not need knowledge of.
    What are you doing, Cody, what are you doing?
    Who taught you programming?