So funny, because we love the "idea" of it, but implementing it is another matter entirely. I'd chalk much of that up to the fact that just getting things working, and making them efficient, is prioritized far above writing elegantly clean code and refactoring in general; some places rush you so hard there isn't even time to write tests before the code.
I don't see why this guy is criticizing the Quake fast inverse square root. That's about as clean as it gets. It's a pure function that uses primitive data types. There's nothing to maintain there. Sure, it's hard to understand, but from a maintenance PoV, it's literally one standalone function. Nothing else you change in your codebase will alter how it works.
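For reference, here's the widely circulated Quake III version (comments paraphrased from the original; note the pointer casts are UB under modern C++ aliasing rules - std::memcpy is the well-defined equivalent today):

```cpp
// Approximates 1/sqrt(x): bit-level initial guess + one Newton-Raphson step.
// ("long" was 32 bits on the original targets; use std::int32_t nowadays.)
float Q_rsqrt(float number) {
    long i;
    float x2, y;
    const float threehalfs = 1.5F;

    x2 = number * 0.5F;
    y  = number;
    i  = *(long*)&y;                        // reinterpret the float's bits as an integer
    i  = 0x5f3759df - (i >> 1);             // the famous magic-constant guess
    y  = *(float*)&i;                       // back to float
    y  = y * (threehalfs - (x2 * y * y));   // one Newton-Raphson iteration
    return y;
}
```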
Yeah, and it's not esoteric or anything. It just applies mathematical approximation using series to compute something much faster with an acceptable error margin. I had forgotten most of my mathematics from college after I left, but even then, the first time I saw that function I knew it was an approximation, I just didn't remember how it worked... it's more a case of the programmer not knowing the theory used in the code logic than of the code being bad (as you said, it's far from it)... never skip math day in college, kids :)
I don't think the article is calling the fast inverse square root bad code, just unreadable code. As argued in the article, performant-over-readable code should be used only in performance-critical sections, like the one the fast inverse square root is used in.
It's more of "if there's a bug in the system how sure can you be that it's not this piece of code?". Now after years of analysis and discussions it's pretty clear why that function works but imagine someone implementing magic like this to, for example, send the request to the server and you come to maintain it and have some error around communication with the server. The function might be doing exactly what it says and the error might be somewhere else but it'd take you ages to actually confirm that.
@@kilisinovic Not really. It's a pure function that takes a single argument and calculates something mathematical for which there are already slower, but definitely correct, implementations. So you can pretty easily just toss a battery of unit tests at the thing and compare the outputs to the slower-but-correct function to check for any unacceptable deviations. Generate a million random floats along with a few important specific numbers like 0, 1, and so forth, feed them to both functions, and compare the two, calculating the average deviation, the maximum deviation from the correct answer, and an array of the top 10 worst cases. That'll give you a great idea of how accurate it is.
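A minimal sketch of that harness, assuming the Q_rsqrt quoted above is linked in (special values like 0 and denormals would need their own checks, and I've left out collecting the top-10 worst cases for brevity):

```cpp
#include <cmath>
#include <cstdio>
#include <random>

float Q_rsqrt(float number);  // the function under test, defined elsewhere

int main() {
    std::mt19937 rng(42);
    std::uniform_real_distribution<float> dist(1e-6f, 1e6f);
    const int N = 1'000'000;
    double sum = 0.0, worst = 0.0;
    for (int n = 0; n < N; ++n) {
        float x = dist(rng);
        double exact = 1.0 / std::sqrt(static_cast<double>(x));  // slow but correct reference
        double rel = std::abs(Q_rsqrt(x) - exact) / exact;       // relative deviation
        sum += rel;
        if (rel > worst) worst = rel;
    }
    std::printf("avg rel err: %.3g, worst rel err: %.3g\n", sum / N, worst);
}
```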
@@taragnor that only makes you pretty confident, not 100% confident, which is the point I was trying to make. What good does it do to check edge cases for numbers when you're doing shenanigans with bits? There may be none, or there may be more than a few numbers out there for which the function breaks, but you can never be sure unless you really understand the logic and math behind the function, or you examine the function for absolutely every input.
"Paying our developers less because they can do more work for the money is more important then the performance of our apps." .... And that's the reason why my multicore-n-ghz-smartphone lags the F out with displaying a 350px wide webpage, while my singlecore 900mhz Pentium in 2002 had no problem with that. I'm just happy that when I learned programming we had to learn to watch our resource-usage. Today everybody is like "ok let's import a 500kb library to concat 2 strings together. More time to *sips latte, writes blogpost about stuff.*" And then I am sitting here and people wonder how I can search fulltext through 5000 rows of data and respond in ms. Like ... that's not magic, I just don't burden myself with 300 frameworks and ALL the abstractions known to man in between just to tell SQL to do a where. And that code is still maintainable.... if something looks like magic, write a comment there on the why and what... done.
It is even funnier when your multicore, multi-GHz modern PC doesn't even do the heavy lifting of rendering the page now, as all the actually heavy tasks are handled by a dedicated GPU :D
I worked on a system at the NAIC that was written in C++ and implemented all of its polymorphism via templates rather than via inheritance. The result was wrappers inside of wrappers inside of wrappers, and the error messages were horrendous due to extensive use of nested templates. We had to pipe our gdb messages through stlfilt just to figure out what types were involved whenever we had a failure. There is a happy medium between code that is inscrutable because it is monolithic spaghetti code, and code that is inscrutable because it has been atomized into a fine mist of components. Ironically, the reason monolithic spaghetti code is hard is the same reason overly abstracted code is hard: there are too many things to keep track of in your head, the only difference being what you're tracking.
Btw, did you know that you can run Python inside gdb? You could have prepared a Python setup for your team to do custom pretty-printing for this - or, if stlfilt was good enough, just auto-run it via Python commands you added to gdb. Also, I'm not sure it has to be this way, because I've done static polymorphism like that (maybe with a different architecture, though) and didn't have this issue.
Well, there is a difference: highly abstracted code can be accessed and modified at multiple levels while ignoring the finer (or higher-level) details, while the monolithic... well, it is just one big mess and you are forced to understand it all. I always find it funny that developers are fine using the 20 or so levels of abstraction that CPUs, OSes, programming languages, and libraries provide, but then they go crazy if you add another 2 or 3 layers on top of that. Maybe the problem is not the abstraction itself but its quality and the experience you have with it. In the end, complex behavior will require abstraction whether you like it or not. And good abstraction will make it easier to understand, not harder.
The fast inverse square root is self contained in its own lil function so it’s pretty clean™️ and you don’t have to worry about the details until you have to.
Yeah, that is what I was thinking: as long as the spaghetti code you are writing has a good reason to exist, as in the fast inverse square root case, and as long as it doesn't intersect with other parts of the code/logic, then no one should have any problem with it, Clean Code enthusiast or not. The problem, imo, is when the spaghetti code is responsible for branching logic/decision making and routing in the app. When that is the case, no performance consideration that leads to spaghetti code should be entertained at all, as you will pay the price tenfold later on and be forced to change it anyway.
@@ahmad_alhallak It's not even spaghetti. It's straightforward, linear code, which is trivial to read and understand. What is not trivial is knowing why the answer is correct, which requires knowing how floating point types work, and how Newton's Method works. If you do not understand the mathematics behind it, it might seem like magic, but that goes for most math. If you saw Vec2_Dot(normal, point) > 0, you could understand the code for it, but you wouldn't know what it represents.
@@Aidiakapi The dot product returns the cosine of the angle between two unit vectors. It is assumed that anyone working with dot() > 0 knows that it represents the front-facing side. The problem is that there should be a comment explaining the behavior for people _not_ familiar with it:
* Output ranges between [-1 … +1]
* < 0 back facing
* = 0 orthogonal
* > 0 front facing
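Something like this, as an illustration (Vec2_Dot is the name from the thread; the rest is made up):

```cpp
struct Vec2 { float x, y; };

// For *unit* vectors a and b, the dot product is the cosine of the angle
// between them, so the output ranges over [-1 ... +1]:
//   < 0  back facing,   == 0  orthogonal,   > 0  front facing
float Vec2_Dot(Vec2 a, Vec2 b) {
    return a.x * b.x + a.y * b.y;
}

bool IsFrontFacing(Vec2 normal, Vec2 dirToPoint) {
    return Vec2_Dot(normal, dirToPoint) > 0.0f;
}
```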
Precisely. The fast inverse square root code is self-contained and while it isn't at all obvious how it works to the uninitiated, it is clear what it is *supposed* to do. Should you experience a bug in your rendering code and you suspect it might be that function, it's trivial to replace it with a more explicit implementation for debugging purposes. Should that replacement not resolve the bug, you know that your bug is elsewhere. This is exactly how you want code to work - as mutable. The simpler your code, the less restrictions you have to changing it, and changing code to sniff out a bug is a perfectly reasonable thing to do. It's just much, much harder to do with huge object hierarchies and abstractions.
This article does the classic defence of Clean Code bait-and-switch where one moment they're talking about Clean Code™ and the next moment they're talking about regular run-of-the-mill clean code.
There was that one article talking about how "clean" means anything and everything nowadays. Since then, every time I think something is or isn't "clean", I catch myself and try to actually elaborate what I mean. "It's cleaner" - what does that mean? Does it make the code more maintainable? Is it easier to understand, more readable? Or is it just a vague good feeling with no actual reason behind it, coming down to subjective personal preference? Usually it's the former, but on the rare occasions where it's the latter, I'm thankful that I got myself to think this way. Though now whenever someone says "it's cleaner" I get infuriated, because it means nothing.
@@Speykious yeah, 'clean' is the least helpful adjective to describe code. Luckily, we are adjective people, professionally predisposed to actually meaningful descriptions. Or at least we should be. I dunno what lazy shithead thought 'clean' was helpful. I'm guessing it's a borrow from gym bros and their 'clean eating' crap.
As many in the chat pointed out, the TF2 coconut JPG thing was a joke that people took seriously. TF2 YouTuber shounic has a video explaining this, but the short answer is that while there is a coconut image in TF2's files, it's just part of an unused "coffee bean" particle, and deleting it has no effect on the game's stability.
I was gonna say: the Source engine seems like a really bad example of "poor" code, given the sheer number of games that have been made with it over such a long period. I'm sure it has pain points, but it's nowhere near a good example of "bad".
@@temper8281 It's actually quite a "personalized" code base, from the perspective of Valve. I'd imagine many of the same programmers have been working on the games that were made with it, so they were probably quite familiar with its aspects. Source 2 is/was a great opportunity to rethink the codebase and it's starting to mature with Counter-Strike 2 and S&box now.
@XeZrunner Valve are a passionate company and their tools are beloved, but in general I'm not super happy w/ the code practices of the broader gaming industry. Plus the work culture...
@@scvnthorpe__ Everything wrong with AAA gaming: * *2003:* _I used to go into a store to find a game,_ * *2023:* _Now I go into a game to find a store._
I think it's wise here to take the good aspects and discard the bad. Writing a million 3-line helpers = bad. Giving things reasonable names and understanding how and when to use an interface = good. Writing shouldQueryIfConditionIsCorrect() for something that could be expressed as a ternary conditional = bad. Using enums and structs liberally with descriptive names instead of a million hardcoded constants = good. It helps to work in a codebase with very experienced colleagues who all have their own individual takes; that way you avoid any cargo-culting of ideas from books.
@@reed6514 Yep, but how else would either side (clean coders or "code rebels") sell books/talks/workshops or make YouTube videos if everyone took a more nuanced approach? /s Edit: Damn, I necro'd a comment thread
Simple and readable code is much more important to me than clean code. I've seen design patterns and uber-atomized components, especially in React codebases, that didn't make any sense to me and made it very difficult to provide any value or build new features.
Clean code was a GOAL. Clean Code (capital) is the religion. Not in any way different from the "Manifesto for Agile Software Development" goals vs Agile. It's always a process where you take something that has good intentions and turn it into a clusterfsck of dogmas and rituals and ceremonies. Some more cynical people might say that it happens because you can't capitalize good intentions, but you sure can turn "the clusterfsck" into a set of books, courses, talks, etc... Maybe what you need is just a "utilitarian" version of it all. Pragmatic CC, Pragmatic Agile, etc... The pragmatic part being "does it make sense?", "do we need it?", "have we gone too far off the deep end?", etc... At my place we have a running joke about a certain component that we refer to as "the best hammer ever". It can hammer in any nail under any conditions. Only about 15% of its potential has ever been needed/used. Nobody truly groks to the full extent how it works. As a testament to the skill of the creator, it's rock solid and never fails. And thank god, because everyone who ever went through it in debug had to resort to some alcohol-fueled "therapy" after. We just marked it DontTouchJustWorksTM because it does, and we always "Step over", never "Step into" ;)
Yes. There is so much push back on this (pun not intended). Most people will never hit the theoretical performance characteristics of more advanced data structures. My personal saying: Never pessimize your code on purpose, but don't try to optimize it unless you prove a need.
@@raven-a Depends on the language. Bjarne was the primary designer of C++, and still has a pretty major role in its development. Under the hood, a vector is a really efficient dynamic array (unless it's a vector of bools, which I believe uses bitwise operations on a small number of integers under the hood to save memory space). This makes it really fast to retrieve an arbitrary value within the vector. A C++ list, on the other hand, is typically a linked list under the hood. This makes it really fast to insert or remove values from within it, but actually accessing the data is significantly slower. Unless you're constantly adding and removing values to and from within a list, meaning that elements within a vector would need to keep being moved around, the vector is probably the more performant option.
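A rough sketch of why traversal favors the vector (same algorithm, very different memory behavior):

```cpp
#include <list>
#include <numeric>
#include <vector>

// std::vector: elements are contiguous, so the prefetcher streams through cache.
long long sum_vec(const std::vector<int>& v) {
    return std::accumulate(v.begin(), v.end(), 0LL);
}

// std::list: each step chases a pointer to a separately allocated node -
// typically a cache miss per element. Identical code, far slower in practice.
long long sum_list(const std::list<int>& l) {
    return std::accumulate(l.begin(), l.end(), 0LL);
}
```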
Java fucking sucks, that's all I'm gonna say... There's a cult of bad OOP out there, and they're using esoteric JS "frameworks" nowadays because they can't really dominate backend programming anymore, because people just use whatever. They are the real enemy.
@@gracjanchudziak4755 It means you shouldn't prematurely limit yourself to abstractions just because... your abstractions should come up naturally, as needed.
The bit at the end about refactoring something and arriving at the same conclusion as the original programmer is a trap I’ve fallen into several times. At this point I’ve learned that if I see some code that on its face seems just extremely WEIRD that there almost always is a reason. Sometimes I don’t always arrive at the same conclusion but I’ll understand the original reasoning more. Honestly it’s helped me get my ego in check, too.
It's hilarious to me just how frequently people have to re-discover concepts that have been around for decades, just because they're not directly applicable knowledge that actually gets taught. It would take one class period to teach the concept of Chesterton's Fence to people in 5th grade and solve at least 23% of the world's problems.
I have this experience with my own code 😅. I look at my code, think I made a mistake, change it, and then I remember the reason why I coded it weirdly.
I always prioritize writing clean and readable code first, as this will make it much easier for you and your team to debug. Most applications don't need the performance gain. You see, it doesn't make a difference to the end user whether they get the answer 1ms or 1 second after clicking the button. But it does make a difference whether you solve a bug in 1 hour or 1 week. There are specific scenarios where performance makes a difference, and we should only spend effort on those specific scenarios: games, complex reports, and highly concurrent applications/features are examples. Even so, you can still write code that is both good and fast; you don't always need to trade one for the other. Equilibrium is the key to everything in life. Try writing readable code from the beginning, but allow yourself to write worse code if necessary.
Clean code is a set of guidelines. I don't see why people are taking it as an all-in kind of thing. I don't think Uncle Bob intended it that way either. I think in terms of performance, OOP is the real thing people have issue with. Sure, CC encourages the use of OOP's primary "benefits", but I think that's a result of being written for the kind of development most developers were doing at the time. CC principles can still be applied to FP, for instance. I also think that the performance of any OOP or even any high-level programming language is always going to be suboptimal. Frankly, if you truly need performance, ANY language other than assembly is going to have its cons. Performance is also not the same as memory usage; memory leaks are going to happen irrespective of CC or OOP. If anything, I'd say CC will help you at least create abstractions that make memory issues easier to find. Smaller objects instead of larger objects should in theory improve memory usage - something CC encourages. I think people are putting the blame on CC unfairly... look at OOP and the programming languages themselves as the real "enemies" of performance.
2:50 I will never be tired of you talking about this. I was hesitant to start coding seriously for like 10 years because I thought that since I can't hold so much in my head, that means I'm a bad software engineer and I don't belong there.
I believe it comes down to the priorities of the project or of the company. There are several things to consider that can have very different priorities depending on the company:
1) Performance. It might be of very high concern if you build a game that needs top-notch performance and a wow effect (not all games are like that), or data processing that needs to crunch terabytes of data in a short time to create reports. It might be low priority if you build JS for a web page that isn't about animations, where nanoseconds vs milliseconds don't matter. It might be very low priority if you write a script to process a small amount of images or data and it doesn't matter whether it finishes in seconds or minutes.
2) Time to market. It matters if you either build a thing in time for a campaign or lose client(s), or if another company will launch a similar service sooner. It might even be a matter of life or death for a startup in a bad position. Also, a startup might not be in a good negotiating position with a big company requesting needed changes asap, or the opportunity is gone. On the other hand, if you have a working, reputable product and only work on enhancements, time to market won't matter.
3) Budget. You might have a fixed budget and either cancel the project or try to finish it without sufficient funds to do it very well. Happens pretty often in startups with investors.
4) Maintainability, quality of code, developer experience. May be a top concern for a long-run strategy. May not be if your company is just trying not to drown and needs to change fast and cheap or die.
5) User experience. Usually it is very important, but it might be moved to the back seat if other considerations are deemed more important.
a) critical - whether the thing is even usable at all (might not matter at all in, for example, early PoC projects; usually top importance)
b) normal - whether the thing is easy and nice to use for typical use cases
I currently work on projects where the order is something like 5a, 2, 3, 5b, 4, 1 (working startup) or 3, 2, 1, 5a (PoC projects). This means there is a lot of rather bad code. It is not totally unmaintainable, but it is also very far from SOLID or clean code. We rarely have time to refactor. There are also the smaller PoC subprojects we do - there we make no concessions to code maintainability. We make quick scripts with code from GitHub and GPT, stitch it together, and see whether it works. If it works and the project is accepted to go further, we rebuild a production version with a somewhat different approach (with better architecture and code), taking only some parts of the PoC. I have worked on projects with priorities like 5a, 5b, 4, 1, 3, 2, which were into quality - mostly quality for the end user or client, then quality of code (but nearly at the same level, as those are usually correlated). This dictated a totally different approach. Same person working, different projects, different decisions - both about the general architecture and the code. And it is not like I evolved, or rather de-evolved: I know more about writing good-quality code now than I did back when I worked on the quality-focused projects. It is a thing dictated by the business circumstances of the project rather than a decision based on taste and knowledge. I have not worked on games, but I think in games it is most usually 5a, 1, 5b, 2, 3, 4, with 2 sometimes climbing the ladder when a deadline is near.
They need to make the game playable, create wow effects, and make it to market quickly enough / on time. This dictates yet another approach. Another thing is that you can overdo any of these and make matters worse:
1) You can overdo performance and lose quality in other areas - your code runs fast, but the results are less precise, or it is too fast for the user or the controlling process.
2) You can overdo time to market and add features faster than users can grasp and accommodate the changes. You can see it in some mobile games - they quickly grow to have too many options, many with some currency/payment attached. It gets overwhelming, especially for new players.
3) Budget. Sometimes the cheapest option is to take ready-made code or services from the internet... and then you get locked in and have a hard time changing things or changing provider, etc. Something might be cheapest in the short-to-middle run but not cheap in the long run once you count the cost of adaptation.
4) Maintainability. There is the problem of how to measure maintainability directly; we really only have approximations, like "less repetition is usually more maintainable", "single responsibility is usually more maintainable", "more abstraction is often more maintainable". You can easily go too far with those proxy measures and get code that is actually less maintainable and harder to grasp. Also, as stated in the video, you can lose performance or user experience.
5) User experience. You can add a lot of complexity trying to achieve the best user experience possible. You can also fall into the trap of thinking that something very usable for you will also be usable for end users, and build some custom thing instead of relying on proven, typical solutions. It is hard to think like someone else or like the general target group. A/B testing is here to help, of course.
one aspect of the constant striving for perfect, clean code that gets overlooked is the impact on developer morale. especially when you know that when you open a PR it's just going to get trashed by someone who gets a high off making others write overengineered code. I see a lot of junior engineers lose so much confidence and their performance slows to a halt because of practices like this
Yes, any senior devs reading this need to realise that if they want to keep their staff, they should allow others as much freedom as possible, because otherwise you will take away the biggest joy of programming and make people resentful.
This is something I dread potentially happening. At the only programming job I've had so far, I was the only person working on projects and nobody cared to check my code; the results were all that mattered.
"If you hire ten developers to solve the same problem, you will have eleven different solutions!" That's the real problem! Software is the creative expression of how a person thinks, how they interpret the world and the context around them. As you create abstractions, abstractions of abstractions, it will be a nightmare to maintain something that, theoretically, should be simple!
Clean Code is Taylorism (the practices that led to factory productivity) for software. For small teams, or companies following service design with lots of small teams collaborating, it doesn't make *any* sense, just as it doesn't make sense to build an assembly line for one guy to run back and forth along, trying to get the work done. The context it was *supposed* to work in was one where you have a large team and a programmer just works the "widget X" station of the assembly line. However, it turns out that code is *not* an assembly line, even if we make 5 layers (assembly line stations). Lowercase "clean code", on the other hand (writing code that isn't obfuscated), is worth fighting for.
3:18, you (23:54) and Casey said something that I already knew: wait for the need before abstracting. This is the right way to design: focused on the practical needs. And that means building abstractions for defensive purposes too - before somebody starts to advocate FP. 3:35, one of the basic optimizations made by the machine is to inline functions. The only concern is reaching a limit, after which it won't inline or starts doing extra calculations about it - but one can configure that. The actual problem with helper functions is the option of calling them by mistake, or the felt need to memorize them, as options, when thinking about the whole thing. To avoid this extra stress, a class to hide them can help, but that starts to require more boilerplate. The best solution is writing lambdas (within the function, of course) if those blocks are called more than once. Otherwise, I keep them inside the function, as explicit code, if they only make sense there. 5:03, if 2 apps differ by milliseconds for each action, people will feel better with the faster one. So it's not just for speed-critical projects: even common ones can benefit. Meta said that _"people feel more engaged when the app is faster"_ (probably about ms and above).
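A minimal sketch of that local-lambda idea (all names made up):

```cpp
#include <string>
#include <vector>

std::string render_timings(const std::vector<double>& samples) {
    // The helper lives only inside this function, so it can't be called by
    // mistake elsewhere and never has to be memorized as an "option".
    auto fmt = [](double ms) { return std::to_string(ms) + " ms"; };

    std::string out;
    for (double s : samples)
        out += fmt(s) + "\n";
    return out;
}
```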
Every time I see the "Clean Code" book on a developer's table, I know that at some point (sooner rather than later) I'll feel a strong urge to bash its owner's head with that book. Never been wrong. I have a feeling Bob Martin did much more harm than good to the software development industry.
People don't even understand what a Cartesian product is... So we had a product with something like 25 customizable items (color, texture, etc.), and each item had between 10 and 50 options. Some options are not compatible - for example, you cannot get the pink color with the wood texture. There was a function to select the first viable combination. Said function was stupid and would compute a large part of the Cartesian product before finishing. I estimated that it would complete after about 150 years... I extracted the responsibility of computing the Cartesian product into a separate function (and optimized it to discard non-viable combinations before computing the whole combination). In that new function, I used the vocabulary of the Cartesian product, as that function's sole responsibility was computing the Cartesian product... The developer who owned the module asked me to rename all the variables to match the business domain, because he didn't understand what a Cartesian product was. So I cannot imagine the average developer properly understanding the abstractions from the GoF...
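A hypothetical reconstruction of that refactor - walk the options depth-first and prune non-viable partial combinations, instead of materializing the product up front:

```cpp
#include <cstddef>
#include <functional>
#include <vector>

using Combo = std::vector<int>;  // chosen option index, one per item

// Returns true and leaves the first viable combination in `combo`.
bool first_viable(const std::vector<int>& option_counts,
                  const std::function<bool(const Combo&)>& viable_so_far,
                  Combo& combo, std::size_t item = 0) {
    if (item == option_counts.size()) return true;  // complete combination found
    for (int opt = 0; opt < option_counts[item]; ++opt) {
        combo.push_back(opt);
        if (viable_so_far(combo) &&                 // prune incompatible prefixes early
            first_viable(option_counts, viable_so_far, combo, item + 1))
            return true;
        combo.pop_back();                           // backtrack
    }
    return false;
}
```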
Prime's take on this implies it's impossible to both have a good architecture _and_ follow clean code principles. I don't recall any clean code principles telling you to pick a shitty architecture. As a matter of fact, IIRC one of the very first things you see in the book is the saying "every programming problem can be solved with another layer of abstraction... except having too many layers of abstraction." Which, while cheeky, is clearly telling you not to go to the insane lengths of abstraction that Prime is talking about here. That would be like dismissing all of functional programming because people are currying, or something, too much, to the point where the code is hard to understand.
I don't understand this whole mindset of "with pattern X I can't keep the whole problem in my head"; the point of patterns is that you don't need to keep it in your head anymore. With a good pattern, instead of specific memory layouts, index ordering, specific formulae, or whatever other specific details, a coder can now write to a contract. If the abstraction isn't providing a contract, then it is a bad abstraction. If the abstraction is providing a bad contract for the situation, then another could be better. If the programmer is constantly trying to pierce a good abstraction, maybe they are just doing it wrong.
good points about following the team's practices. it was hard for me at first but i learned to slow down and see the reasons for things in the code that made no sense to me at first. the best thing is to read a lot more code than you write, and put yourself in the minds of the people who came before you
I definitely think you're right about "typing fast allows you to explore more"; there is only so much you can figure out from reading the code and looking at the involved systems. Once you make the change, you increasingly gain more context on the problem and might figure out that the approach isn't correct, has unforeseen side effects, etc.
Your code should be easily testable. If you can achieve that without "clean code", do it. There's a great talk called "Clean Coders Hate What Happens to Your Code When You Use These Enterprise Programming Tricks" by Kevlin Henney; I think he's criticizing "clean code" in the sense you mean (and many enterprise programmers too).
Every CPU cycle you waste is a tiny bit of energy thrown away. Making your system 20 times slower means making the entire energy consumption of your service 20 times higher (at least - more once you factor in the cost of cooling your hardware). So if you're a company that cares about the environment and your carbon footprint, you should start by educating your developers on how to put performance first, ahead of their egos.
I recently tried to read a project that was presented as an exemplar of clean code. It has literally hundreds of classes for a relatively simple web project. I found it frankly impossible to understand what the bloody thing was doing. The cognitive overhead was overwhelming.
Here's my hot take: Get good at planning ahead AND learn how to type fast. There's no reason to pick one or the other unless you're physically unable to.
I worked at a big DDD shop. They were all about clean code, abstracting everything, strict encapsulation and of course whiteboarding out everything. This always hinged on the requirements being correct. As we all know, requirements change and as you stated this leads to very time consuming and difficult refactors. At my current job we don't practice any of those things and I'd say I'm 5-10x more productive as we just build stuff with very little long term planning. It may not make you feel like you're playing 4D chess but you can refactor it really quickly. Not only that, you can look at it and understand what is happening much quicker.
02:14 yes, that's why webpacked and minified JS is sometimes more readable to me than many JS files (having access to both the many JS files and the webpacked-and-minified JS is still always better)
Seriously. Instead of worrying whether you're doing it "right", write that 50-line method and then take a second look. The first attempt should always just 'get shit working', because half the time you might not know exactly what's entailed. Once you see it on the page, it's easier to recognize whether there's a pattern and it's worth a refactor.
@@anthonyparks505 Indeed. Get a _slow reference version working first,_ because it *doesn't matter how fast you get the wrong answer.* By then you'll have a deeper understanding of the problem and can write a good solution. It is almost as if everyone forgets _The Mythical Man-Month's_ advice: *Plan to throw one away; you will anyways.* Who knew prototyping was an old concept! /s
Performant code can be written by anyone capable of thinking about hardware a bit, you can master it in a few college courses. Writing clean and maintainable code comes only from experience. Clean Code TM on the other hand can be taught in schools, which means you can get rid of the expensive, experienced devs and replace them with starving interns.
Hi Prime. Good video, and I get what you mean about how "Clean Code" makes it difficult for you to understand what exactly goes on in the flow. And this is okay, because one doesn't need to read 100% of the code most of the time. It is similar to the fact that we don't read every single line of a newspaper; we skim through the headlines (function names) before we decide which article (the function body) to read. In Clean Code, you don't get function calls like `array.holyHandGrenade(index)`, so there is no point in reading every single function body if it doesn't seem relevant. If you're talking about Clean Architecture, that's a separate story. AFAIK, the motive behind that architecture isn't writing Clean Code but making yourself independent of tools and frameworks (as Uncle Bob puts it, you can swap out the dummy database for a proper DB, etc. That is, you only need to rewrite a single layer of your codebase to replace your ReactJS app with a SolidJS app).
In my experience, you never do just "skim through the headlines" though, because when you're examining code, method names are generally insufficient to fully describe the contents, and more often than not are named poorly. Furthermore these methods are often thoughtlessly partitioned, with inter-dependencies, which is a big no. I find it annoying and disruptive to have to keep navigating to definitions. For these reasons, I think it is simpler and safer to just write monolithic long procedures. KISS is generally all you need. As for your clean architecture example: that just seems like typical premature over-engineering. KISS.
@@tarquin161234 the reason is that you have only dealt with poor-quality code. When reading clean code, you can bet your life that a printUserInfo is not going to increment the user's age. What you described is exactly what clean, structured code will help you avoid.
I think part of the unspoken ideal of clean code is not having to understand certain parts of the code and not minding, as long as it works - but that breaks down the moment you need to maintain (modify) it.
You did the best summary at the end: "do what the team does". If you're a performance junkie trying to squeeze extra cycles out of your react component in a "Clean Code" team building the next facebook, everyone will hate you. If you're a Clean Coder working on ethernet controller firmware team, abstracting everything, everyone will hate you. Collaboration > dogma.
I wonder when we started to program abstractions instead of machines. We've lost it. The computer is a simple machine that expends resources in order to transform data, if there is a way to do the same transform while using less resources, that should be the default. If there are ways to expend less resources with different tradeoffs, those should be exposed. If those options are harder to implement or read, or make future portability or maintenance difficult, the problem is in our tools.
Creating functions is making abstractions, so there is no conflict here. Making abstractions is what programming is about. As you said about doing abstractions correctly: "...put 15 functions in that file, do that until you kind of hit that point where you're feeling I'm a little frustrated with my code..." It is the basic step-by-step process of creating and knowing your code as you create it. Even Uncle Bob says exactly the same thing in his Clean Code whatever. He even goes long on the problems of abstracting too early, when we have the illusion that we understand the problem before having written any code. It is just some principles based on pragmatic experience, and SRP and the other wannabe principles are just vocabulary about the structure of code. You take what you want from it and do what you want with it, as needed. I think the problem is that the so-called "clean code" you saw was written by people who didn't understand clean code. They were maybe more old-school Java programmers who like to abstract things for the cheap feeling of it rather than because the code needs it. That means it was not clean code. Code with too much abstraction is dirty code, nothing else. Code as you wish and abstract as you need: that is clean code. Understanding the vocabulary and the principles is just like the design patterns - only low-quality programmers would say they should be applied exactly the same way to every problem.
One of the things I realized lately about all the code talks is that a lot depends on what a person works in. If you work in TS (JS+), you will look very differently at anything OOP-related than someone who works in an OOP language. And you will look very differently at OOP within an OOP language depending on whether you use the paradigms because they are "high level" versus because you need them and make them useful.
I'm in high-performance computing and scientific computing, and obviously speed matters to us. There is a rule of thumb in HPC that says any clock cycle your CPU, GPU, TPU, or IPU does not spend on floating-point arithmetic is a wasted clock cycle. Obviously this is not entirely true - you need to read data from storage, you need to send data between nodes, and so on. However, it turns out that you can more or less reach this ideal by abstracting the calculations you want to do into operations on matrices, vectors, or multi-dimensional tensors. The big issue with the code I see a lot of my physicist colleagues write is that they do not abstract enough and end up wasting clock cycles on index calculations and conditional statements that shouldn't be necessary in the first place if they had found the right abstractions. The point I'm trying to make is that abstraction is not necessarily the enemy of performance; it is rather a question of finding the right abstractions (which admittedly can be very hard). Anyway, I think this discussion is important, and yes, in some areas of software development performance is not a big concern, but in others it is.
Oh, I like planning ahead. I don't like it when I've implemented the plan and it turns out I can throw away 80% of it, make it easier, and do it another way, because of that single edge case that shows up when I'm almost finished... But I still like to plan ahead XD
I feel like a lot of it is: well, don't go for performance if you don't have to, but maybe keep some eye on performance, but then again sometimes performance is paramount. This doesn't really say anything. Here's an idea: think about your design constraints and goals, and use your best judgement.
Martin Fowler himself already explained that not all code needs to be clean, and that no programmer, even himself, writes clean code first. First you write code that works. Then you clean it; the parts that genuinely must be highly optimized can stay as they are, and in those cases it is OK to leave a comment just explaining what you did. Another point is that clean code does not mean SOLID. What clean code stands for is that the code must be readable (understandable), and that can change from company to company, team to team. Regarding abstraction, clean code just says that you shouldn't mix levels of abstraction. Levels of abstraction refer to how directly or indirectly code communicates its purpose; it has nothing to do with class abstraction principles or best practices in this context. Clean code does not mean slow code. If someone is saying that, then that person didn't understand clean code. Martin argues that writing clean code is crucial for long-term maintainability and that performance should generally be a secondary concern in the initial stages of development. Make it work, make it right, make it fast! (Kent Beck). Performance-first can lead to dirty and bad code. This is not advocating slow code: instead, Fowler says that poorly written code generally leads to badly performing code, because it's hard to identify bottlenecks and implement effective optimizations. On the other hand, well-structured, clean code is easier to profile and optimize because you can more clearly understand its behavior. There are situations that require careful attention to performance from the beginning, especially in systems with strict performance constraints; even in those cases, however, the code should be as clean as possible to facilitate understanding and future changes. I think people who attack clean code just haven't understood what it is, or only read the cover of the book!
CC apologists always argue that looking for nanoseconds of perf is silly, but in reality I find we are often talking about 400ms per request vs 4ms. And when your app gets hundreds or thousands of requests per second, it starts to matter almost instantly. CC is an excuse used to explain bad code; it could be clean and fast, it's just easier to explain your bad code as clean code.
Are you sure he said that? Because it wouldn't make much sense, since the two are not exclusive or even competing. OO and functional are. Procedural just refers to a more, let's say, exploratory way to code, as opposed to TDD or something with a lot of organizational overhead.
@@haraldbackfisch1981 yes I think he has. Regardless, it would be an interesting topic for a video. What the differences are, when to use one or the other, different strategies, etc
@@nchomey The answer is simple: don't do OOP (inheritance and polymorphism) unless you are forced to by the codebase you are working on. Encapsulation is a good principle and not specifically OOP. If you have the chance, write as much code as possible in non-OOP languages, e.g. Zig and Odin. It will force you to think in procedural/data-oriented patterns rather than OOP. If you do it properly, you will find that your code becomes easier to read and maintain.
A couple of months ago I was working on a personal project in JS that needed to run relatively fast. It was a CLI tool that did a few operations on about 3 million database objects and had to keep a state while it was running. I ran multiple jobs at the same time with a sync at the end that updated the state. Like a good boy I followed the "good practice" of having an immutable state, so every time there was an update to the state, I created a new object. The code was running super slow because I was doing a lot of operations. Then I did a little test and started mutating my current state. Suddenly the code started running infinitely faster. So, I don't know, best practices sometimes are not the best option.
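The project was JS, but the effect is language-agnostic; a C++ analogue of the two styles (State here is a made-up stand-in for my actual state object):

```cpp
#include <map>
#include <string>

using State = std::map<std::string, long>;

// "Immutable" style: every update copies the entire state before changing it.
// Millions of updates means millions of full copies and allocations.
State with_increment(State s, const std::string& key) {  // pass-by-value = copy
    ++s[key];
    return s;
}

// Mutating style: one in-place bump, no copies - the version that ran fast.
void increment(State& s, const std::string& key) {
    ++s[key];
}
```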
But... if one needs to jump even once to understand what code does, it's NOT clean code anymore, IMHO. What you've described in this part were design patterns (and the inheritance overload typical of Java), and that is completely different from clean code principles.
I agree: I've seen "extremely clean code style" systems that are very, very hard to navigate because everything is spread around so much... The clean code thing I hate most is rules like "no function may be longer than 3 lines"... A friend of mine worked on a project that had not just mediator patterns, for example, but "InterMediator" patterns that mediate between the mediators, and that kind of overbloat :D Also, this guy seems not to have heard that Casey doesn't mean everyone should optimize: just don't pessimize like crazy. Also, there are countless ways of doing software other than inheritance-based virtual polymorphism - not just static dispatch, but many ways that are both maintainable and much better performing. Also, the guy went from gamedev to non-gamedev, and guess what, they have easier bugs? Maybe not because of clean code, but because gamedev is 10x more complex...
@@williamdrum9899 Total bullshit. Especially when battery usage on mobile devices comes into question - anything embedded, power tools, all kinds of tools - very little software out there deserves to be totally pessimized with no regard for performance... And if you run anything in the cloud, you pay a very high price in your bills for wasted compute too...
@@williamdrum9899 Nonsense. I'm working on a project that's only used internally by 10-15 business people. You'd think that for a project like that, the performance wouldn't matter at all. But no, the users are complaining about the performance and have even given me the green light to improve it.
3-line functions? That's, seriously, a BS rule. But again, as the book said, you don't need to be dogmatic. My personal rule is that a function should not need a scroll bar to see it fully: if you have to scroll, you have to refactor. Again, that's for functions or methods, not classes.
I love your approach and agree with it; having readable code that follows simple patterns is waaaay better than over-engineered code that I need to memorize. I once worked with someone who hated inline nesting, so over-engineering made him nest folders and files instead. Example: `Core/backend/src/intensive/runtime/postprocessor/Placer.ts`, which is linked to `Core/Placements/Positioner.ts` and linked to `Core/frontend/src/Calculations/Math/OBB.ts` - and all that function did was center a mesh at V3 0,0,0.
Going to extremes is almost always a bad idea. Being an extreme "clean coder" just makes things less readable for others who are "less clean coders" than you. But we can all agree that cleaning the code to some degree makes it more readable, not less. We can apply some of the rules of Clean Code where they fit well while not being dogmatic about the others... Just take the middle path: care for the code so that it stays as readable as possible without sacrificing too much performance, and you'll be fine. The middle path is very often my path when approaching different ways of looking at problems in life.
The table approach in the original video is so good!! Up until the point where your boss steps in and says "Can we do complex shapes as well?" And then the junior dev who inherited the stuff from the inventor genius who happened to leave the company scratches his head.
I never had a problem with Carmack's sqrt. I always thought it was kind-of interesting. It's precisely the kind of thing you'd want to optimise. In my own job recently I first wrote, tested and profiled "clean code". Then I identified areas where significant improvement could be made with AVX2 and then I implemented that code path as a compile-time switch. That code is less clean and less easy to understand but the "clean" version is also there in the file right next to it so you can see what the intention is. The AVX2 code is 7 x faster!
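Not my actual code, but the shape of the pattern (the scalar loop doubles as the readable reference, and the AVX2 path is compiled in only when the target supports it):

```cpp
#include <cstddef>
#ifdef __AVX2__
#include <immintrin.h>
#endif

// dst[i] += src[i] * k
void scale_add(float* dst, const float* src, float k, std::size_t n) {
    std::size_t i = 0;
#ifdef __AVX2__
    const __m256 vk = _mm256_set1_ps(k);
    for (; i + 8 <= n; i += 8) {                    // 8 floats per iteration
        __m256 prod = _mm256_mul_ps(_mm256_loadu_ps(src + i), vk);
        _mm256_storeu_ps(dst + i, _mm256_add_ps(_mm256_loadu_ps(dst + i), prod));
    }
#endif
    for (; i < n; ++i)                              // clean fallback / remainder
        dst[i] += src[i] * k;
}
```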
I think the best approach to software development is to straight up write one to throw away. 1. You have the option to get to market faster. 2. All the unknowns are now known. 3. You've learned a lot solving the problem. You can get so much done by just having a conversation about a feature... going off and doing it... making decisions on your unknowns and coming back with something done... then saying: what do you guys like, what don't you like? I had these questions; these are the decisions I made. Then you go back and fix those issues... make sure the customer is happy. Then get some automated integration tests around it. Then you engineer it. After that you can optimize it.
Yes! ECS is the way to go! When you have N interconnected objects (geometry in some scene, actors in some simulation, etc.) where the state of one can depend on another, ECS is the only reasonable way to deal with all the unpredictable dependencies. OOP can be very inefficient and often introduces bugs that are hard to localise.
@@RogerValor This depends on a level. You can still have a request-response API or object, and an ECS underneath. For example, most databases resemble an ECS more than objects, but you have a request-response API on top of it.
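A minimal sketch of the idea (nothing like a full ECS framework): components live in parallel arrays, and a "system" is just a tight loop over the arrays it needs - cache-friendly, with no virtual dispatch.

```cpp
#include <cstddef>
#include <vector>

struct Positions  { std::vector<float> x, y; };    // one entry per entity
struct Velocities { std::vector<float> dx, dy; };

// The "move system" touches only the data it needs.
void move_system(Positions& p, const Velocities& v, float dt) {
    for (std::size_t i = 0; i < p.x.size(); ++i) {
        p.x[i] += v.dx[i] * dt;
        p.y[i] += v.dy[i] * dt;
    }
}
```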
Yeah, pointing at Quake's FISR function is not super relevant in the context of clean code principles. I'd argue that the FISR code existing inside its own function is the most appropriate thing they could have done to make that code readable. I'd agree with the author if that code were wedged inside another function that's doing something else, but simply pointing out that it's complicated doesn't mean it was unreadable due to gaming-industry practices and clean code violations - it's a complicated problem space that encompasses advanced mathematics. There's a reason they hired a mathematician to write that piece of code at the time instead of having a developer smash their head against it.
I’d be interested to see some examples of “clean code” vs something you’d write. I hear this argument all the time and leave very confused. Abstractions are supposed to separate concerns, and I find that they help significantly because I also have trouble keeping that much state, but with abstractions, I don’t have to worry about implementation details when I intuitively know what something should do, unless it’s not working properly, in which case I can test it on its own. I’m wondering if there might be a disconnect due to my lack of diverse experience
A different take on using a `Set` is it communicates to the reader it's a "set" of unique values, even when small. This might sometimes be a useful assumption that can be made.
You can't be pragmatic and at the same time put abstractions everywhere for no reason, just because "in the future we might need a different implementation".
"it's also scattered around for my l1 and l2" right? that is my biggest problem with oot is when you read it it's hard to mentalize what's even going on.
But if the code that is written is too inefficient to begin with, you can't optimize later if the need does arise without just rewriting everything from scratch. Yes, you shouldn't hyper optimize everything as you go. But you also shouldn't be ok with things being horribly inefficient even if they happen to be easy to understand and maintain.
I don't care about Clean Code or how applications are built as long as they are performant. I will be very happy if Microsoft sends me (free of charge) a workstation PC - AMD Threadripper Pro 64-core, 2 TB RAM, 8 TB RAID 0 card, and 3 GPUs - to run their slow applications, because Microsoft thinks Electron is modern tech.
I think many if not most projects take major Ls from not optimizing enough at the requirements level, before writing any code or planning any architecture. There's nothing faster and cleaner than not having the code in the first place, so if you can find a way to do less and still have a good business outcome, you'll do better in every metric. Go back and read The Rise of Worse is Better every time you want to start a project or do something clever.
I can remember, 20 years ago, my algorithms professor teaching everything using recursive algorithms and dynamic memory. And in the last few classes he showed us that every algorithm could be done using arrays, and it was always faster.
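The classic illustration of that recursion-vs-iteration gap (Fibonacci, purely as an example):

```cpp
#include <cstdint>

// Naive recursion: exponential time from recomputing the same subproblems.
std::uint64_t fib_rec(unsigned n) {
    return n < 2 ? n : fib_rec(n - 1) + fib_rec(n - 2);
}

// Iterative version: linear time, constant memory, no call-stack growth.
std::uint64_t fib_iter(unsigned n) {
    std::uint64_t a = 0, b = 1;
    for (unsigned i = 0; i < n; ++i) {
        std::uint64_t next = a + b;
        a = b;
        b = next;
    }
    return a;
}
```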
UI/UX === cadence:: user interaction => figure out where you have to cache/ do stuff, so that you get within your cadence-beat. Misdirect if you have to. Make it work. Make it fast. Super easy! Barely an inconvenience!
FYI: William Kahan and K.C. Ng at Berkeley wrote an unpublished paper in May 1986 describing how to calculate the square root using bit-fiddling techniques followed by Newton iterations.[4] In the late 1980s, Cleve Moler at Ardent Computer learned about this technique[5] and passed it along to his coworker Greg Walsh. Greg Walsh devised the now-famous constant and fast inverse square root algorithm. Gary Tarolli was consulting for Kubota, the company funding Ardent at the time, and likely brought the algorithm to 3dfx Interactive circa 1994.
Polymorphism with optimizing JITs like Hotspot and Graal is a very performant thing because it's inlined. It is something Casey M. completely ignores. Makes me wonder why?
Clean Code should be simple code. If it is not simple it is not clean. If your definition of clean is adding complexity then you took the wrong things away from the book. With that said, I do not really blame people for taking away the wrong things from that book, the book doesn't do itself any favors.
The thing is, you can usually do both; the fast inverse square root is an example of this - it makes ONE THING very fast and keeps it in a well-defined box. I've written my fair share of "fast but ugly" code. Most code should be clear and easy to read. If you can't do this, write good docs and hide it behind a nice API.
In the airline industry mechanics and pilots all have preflight checklists that are very methodical, step by step. Clean Code taken to the extreme is like a clipboard covered in post its.
> Disregard the algorithm - use an array
Yes, but no. I mean, it's nice to have an underlying array for your octree instead of a crapload of linked lists. BUT! First I need an octree, if 3D spatial mapping is implied - and that means you've chosen the right algorithm. Can an array hold a BST? Sure - if you know it's the optimal data structure to store your open set for an A* search over your octree. So: get the best algorithm, and then do it with an array.
"Just use an array" can work well anytime the data set is small, because often times even an inefficient algorithm will still work okay with the benefit of cache locality that you get from an array, but once you deal with any kind of large data set the strength of the algorithm becomes paramount.
Yeah, this kind of sums up my main problem as a SWE. I cannot wrap my head around very complex architectures. Sure, it's easy if I coded everything myself, but when it's already been done by someone else, it's HELL to step in. This doesn't mean I write 1000-line functions - you do want to separate responsibilities and have reusable components - but if you spend too much time jumping between files to understand where something comes from, it becomes really hard for others to understand and really shitty to maintain.
Clean code tends to create so many abstractions that you can't keep them all in your head at the same time, nevermind understand what they actually do under the hood.
Back when I cut my teeth programming, I prided myself on writing clean, readable, self-documenting code, as opposed to some crap that looked like someone ran it through a JavaScript obfuscator (this was in the dark days before automatic formatting, and also before JavaScript, so not that dark). I also prided myself on good application of good design and programming principles. These were (and to me still are) separate concepts. I've never read Uncle Bob's book, but from what I gather, it seems to me that, in violation of his own "clean coding" principles, he has mashed together separate concerns and concepts into "clean code", which is now a loaded term. Perhaps he should have called it Good Code. It would be priceless to hear people arguing against writing "Good Code".
Performance is critical for the backend. For example, points of sale have timeouts for transactions, so any significant delay will cause the transaction to time out. I worked for a company that had a process which summarized data. They stopped running it when it took more than a day to process a day's worth of transactions. After some analysis and rework, I got it down to less than 5 minutes per day. I was let go within 6 months of doing this.
Clean Code is developers programming for the developer, with little thought given to the impact on the end product. It's like writers writing for writers rather than their audience - an exercise in self-aggrandizement for self-important people. You can write maintainable code that makes sense for the individual scenario while not neglecting performance in all areas. Abstraction layers should serve a specific purpose, like decoupling a specific runtime API implementation from the consumer code. A good example would be a graphics renderer API: maybe you want to give your program the ability to switch renderers between multiple target architectures and systems. Abstraction serves a direct purpose there. Another good example is an entity component system, because you can assemble complex objects piecewise at runtime based on a data set that changes completely independently of the ECS implementation. Some people think clean code and single responsibility mean literally every object can only do one thing, rather than a logical module that serves a clearly defined, easy-to-understand purpose. If you're just making an object for the sake of separating functionality without any discernible purpose, ask yourself why.
Here's my humble take on the matter: Clean Code is not a holy grail. I have read all 17 chapters of the book; it contains very good takes on good, testable and readable software development. However, it also contains bad takes, especially "Chapter 3: Functions". Much of that chapter should not be adhered to at all: functions can be too small as well as too large. I like the ideas of Locality of Behaviour, and of organizing files and folders by feature rather than by type, better. Having multiple layers of abstraction that don't serve the purpose of encapsulating complexity, such that one has to go source-diving to figure out what is happening, is not a good idea either. In addition to Locality of Behaviour, you need behavioural transparency to a certain extent too, and one cannot achieve that transparency without also taking a minute to name things well. Also, I disagree with the HTMX essay that says Locality of Behaviour doesn't agree with ideas like Don't Repeat Yourself (DRY) or Separation of Concerns. I don't know where they got that from.
We use PHP, Docker, HTML/CSS and JS at my job. Ironically, compared to the rest of the codebase, which takes more of a functional programming perspective, writing even OOP with loose rules has sped up quite a few pages performance-wise. A lot of it has been preventing queries from running on pages that would never use the data.
ECS (Entity Component System) was created for access pattern based thread scheduling as well as polymorphism without the OOP drawbacks. It's widely used in the game industry where statements such as "You just didn't plan ahead enough." will get you thrown out of a room by designers, managers and programmers alike. It is both flexible and efficient, but, like Redux or Rust, has a steep, then flat, learning curve
I think maybe there’s a bit of conflation here. “Clean Code” isn’t a monolithic standard. When I advocate for it at my work, I’m advocating for code that is *more* SOLID. But mostly it’s that I want to be able to look at someone else’s code and quickly understand what they are doing. Part of that is naming variables clearly - not using “x” “_x” and “__x” as variable names within the same scope. Part of it is inverting some if statements to reduce nesting. Some of it is using standard or accepted code styles and design patterns. I’m sure another dev could have higher standards and want everything tucked away under 10 levels of abstraction. I get that. And I get how hyper meticulous adherence to SOLID principles and design patterns can leave a sour taste and spoil the idea of “Clean Code”. But just like a customer may not always ask for the right thing, or they may use vague language to describe a problem, there’s an underlying “need” that I want addressed when I say I want clean code. That need is for extensibility, stability and maintainability in the codebase. The tricky part is how we can come to agree as a team on what that standard should be because it’s ultimately subjective. The guy that writes sloppy spaghetti code with single letter variables and 6,000 lines in a method may still be able to quickly find things in his own code and that system may work for him, but other devs are in for a tough time if they have to touch that code.
Doesn't matter if the sloppy guy can read his own code. The writer always knows his code intimately, no matter how bad, for a few months at least. As you're getting at, the objective is making the code readable for other people.
Highly abstracted code necessarily cannot deliver features faster unless every single feature request lives at the edge of your codebase. Good luck when a regulatory change swings by that none of your abstractions deals with. You cannot predict the future. Don't pretend you can with Clean Code™. Abstractions are NOT extensible.
> Reduce developer time
How in the world does writing abstraction on top of abstraction on top of abstraction in Clean Code™ result in "lower development time" over just writing the code that needs to be written? That makes zero sense. I would like a Clean Code™ advocate to actually measure these absurd claims.
The abstraction hell that you talk about is what it’s like to work with PLCs that have been programmed by major system OEMs. Trying to figure out why a machine is refusing to do what it should, and finding a magic binary number, and a comparison to a value that gets moved through multiple variables, it can be difficult to find the sensor causing the issue. Especially when there’s no maintenance personnel documentation about what the goddamn magic binary number does.
It's easier than any other way of coding: very flexible, easy to change, and it still gives you constant cache hits and makes it super easy to parallelize.
@@Cara.314 Just out of curiosity... have you ever built any non-trivial data oriented project and maintained it for at least several years while it experienced some significant requirement changes?
Clean code is about declarative and intuitive APIs. All the greatest APIs are a perfect example of what clean code is. Interface implementations can get hectic, but consistent API contracts are what matter most.
18:21 "Parts of apps that need of performance" - Casey also debunked the whole "you just need to optimize the hot spots" argument. You can only do so much without completely refactoring or even rewriting the whole application.
I'm thinking the main issue with the backlash to clean code, agile, etc. is that people take them to the nth degree. Sure, they act as hard and fast rules, but rules only make sense when context allows them to. It's the adherence-at-any-cost that causes issues and friction. Planning things to make them quick, easy, clean, and effective (and only picking two or three of those) should really be the north star to guide things (imo). Hope that makes sense.
2:46 If you are able to "plan ahead", it's a sign that the problem you are working on is not very complex. If you actually work on things that are interesting, you can't plan ahead.
There’s no guaranteed way to always do abstractions correctly. Many assume something that comes up much later when a lot of the system uses that abstraction. A good example is any abstraction over distributed components. Those are designed differently for a reason so can have extremely different fault tolerant design that needs to be accounted for in your app.
First time I saw "Scrum Master" as a title I really thought it was something dirty and a 12 year old had hacked the employee directory again.
Again? lol
Now I know better. It certainly is something dirty.
They recently changed it from scrum master to agile coach where I work
Me too, totally
@@iamhardlineraka retards who haven't coded in a decade and are great at inflating delivery costs
Over the years, people have talked about Clean Code for more hours than anyone has actually written Clean Code
🤣🤣🤣
By Clean Code, you mean TDD, of course.
I think what we really need is clear code instead.
I like how Brian Will described fine-grained encapsulation, "The princess is always in another castle."
Sounds to me like "The Legend of Zelda" in a nutshell 😆
It's a reference to Mario @@AlesNajmann
Ah, that classic star wars episode with john luke picard
need more quotes like this to get the point across
"Paying our developers less because they can do more work for the money is more important then the performance of our apps." .... And that's the reason why my multicore-n-ghz-smartphone lags the F out with displaying a 350px wide webpage, while my singlecore 900mhz Pentium in 2002 had no problem with that.
I'm just happy that when I learned programming we had to learn to watch our resource-usage. Today everybody is like "ok let's import a 500kb library to concat 2 strings together. More time to *sips latte, writes blogpost about stuff.*" And then I am sitting here and people wonder how I can search fulltext through 5000 rows of data and respond in ms. Like ... that's not magic, I just don't burden myself with 300 frameworks and ALL the abstractions known to man in between just to tell SQL to do a where. And that code is still maintainable.... if something looks like magic, write a comment there on the why and what... done.
I'm infuriated! This is slander against programmers that drink lattes!
Haha, this guy thinks 500 KB of bloat is a lot; if he discovers Electron he will go American Psycho mode 🥶
@@PanosPitsi Me too. A 500 KB file to do WHAT? Does the library come with a tokenizer so I can use a special operator to concat?
@@PanosPitsi: Yeah... there is a reason I don't like that thing very much.
It's even funnier when your multicore multi-GHz modern PC doesn't even do the heavy lifting of rendering the page now, as all the actually heavy tasks are handled by a dedicated GPU :D
I worked on a system at the NAIC that was written in C++ and implemented all of its polymorphism via templates rather than via inheritance. The result was wrappers inside of wrappers inside of wrappers, and the error messages were horrendous due to extensive use of nested templates. We had to pipe our gdb messages through stlfilt just to figure out what types were involved whenever we had a failure. There is a happy medium between code that is inscrutable because it is monolithic spaghetti, and code that is inscrutable because it has been atomized into a fine mist of components. Ironically, the reason monolithic spaghetti code is hard is the same reason overly abstracted code is hard: there are too many things to keep track of in your head. The only difference is what you're tracking.
Btw, did you know that you can run Python inside gdb? You could have prepared a Python setup for your team for custom pretty-printing this, or, if stlfilt was good enough, just auto-run it via Python commands you added to gdb.
Also, I am not sure it has to be this way, because I did static polymorphism like that (maybe on a different architecture, though) and did not have this issue.
@@u9vata at the time (2003ish) I'd never heard of Python.
I love that I haven't googled stlfilt and I'm confident in what that tool does
@@lerubikscubetherubikscube2813 :)
Well, there is a difference: highly abstracted code can be read and modified at multiple levels, ignoring the finer (or higher-level) details, while the monolith is just one big mess and you are forced to understand it all. I always find it funny that developers are fine using the 20 or so levels of abstraction that CPUs, OSs, programming languages and libraries provide, but then they go crazy if you add another 2 or 3 layers on top of that. Maybe the problem is not the abstraction itself but its quality and the experience you have with it. In the end, complex behavior will require abstraction whether you like it or not. And good abstraction will make it easier to understand, not harder.
The fast inverse square root is self contained in its own lil function so it’s pretty clean™️ and you don’t have to worry about the details until you have to.
Yeah, that's what I was thinking: as long as the spaghetti code you are writing has a good reason to exist, as in the fast inverse square root case, and as long as it doesn't intersect with other parts of the code/logic, then no one should have a problem with it, Clean Code enthusiast or not.
The problem, imo, is when the spaghetti code is responsible for branching logic/decision making and routing in the app. When that is the case, no performance consideration that leads to spaghetti code should be taken into account at all, as you will pay the price tenfold later on and be forced to change it anyway.
@@ahmad_alhallak It's not even spaghetti. It's straightforward, linear code, which is trivial to read and understand. What is not trivial is knowing why the answer is correct, which requires knowing how floating point types work, and how Newton's Method works.
If you do not understand the mathematics behind it, it might seem like magic, but that goes for most math. If you saw Vec2_Dot(normal, point) > 0, you could understand the code for it, but you wouldn't know what it represents.
@@Aidiakapi The dot product returns the cosine of the angle between two unit vectors. It is assumed that anyone working with dot() > 0 knows that it represents the front-facing side. The point is that there should be a comment explaining the behavior for people _not_ familiar with it (sketched below):
* Output ranges between [-1 … +1]
* < 0 back facing
* = 0 orthogonal
* > 0 front facing
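Something like this, as a sketch of the documented helper being described (Vec2_Dot is the name used in the thread; the struct and everything else is illustrative):

```cpp
struct Vec2 { float x, y; };

// Dot product. For *unit* vectors this is the cosine of the angle
// between them, so the output ranges over [-1 ... +1]:
//   < 0  back facing
//   = 0  orthogonal
//   > 0  front facing
inline float Vec2_Dot(Vec2 a, Vec2 b) {
    return a.x * b.x + a.y * b.y;
}
```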
Precisely. The fast inverse square root code is self-contained and while it isn't at all obvious how it works to the uninitiated, it is clear what it is *supposed* to do. Should you experience a bug in your rendering code and you suspect it might be that function, it's trivial to replace it with a more explicit implementation for debugging purposes. Should that replacement not resolve the bug, you know that your bug is elsewhere.
This is exactly how you want code to work: as something mutable. The simpler your code, the fewer restrictions you have on changing it, and changing code to sniff out a bug is a perfectly reasonable thing to do. It's just much, much harder to do with huge object hierarchies and abstractions.
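A sketch of that debugging swap, assuming the published Quake III constant and structure (rsqrt_reference is an invented name for the stand-in):

```cpp
#include <cmath>
#include <cstdint>
#include <cstring>

// The published Quake III routine (memcpy replaces the original pointer
// cast so the type punning is well-defined in modern C++).
float Q_rsqrt(float number) {
    float x2 = number * 0.5f;
    float y  = number;
    std::uint32_t i;
    std::memcpy(&i, &y, sizeof i);
    i = 0x5f3759df - (i >> 1);      // the famous bit-level first guess
    std::memcpy(&y, &i, sizeof y);
    y = y * (1.5f - x2 * y * y);    // one Newton-Raphson refinement step
    return y;
}

// Slow but obviously correct stand-in for debugging: swap it in, and if
// the rendering bug survives, the bug is somewhere else.
float rsqrt_reference(float number) {
    return 1.0f / std::sqrt(number);
}
```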
A pure function.
This article does the classic defence of Clean Code bait-and-switch where one moment they're talking about Clean Code™ and the next moment they're talking about regular run-of-the-mill clean code.
There was that one article talking about how "clean" means anything and everything nowadays. Since then, every time I think something is or isn't "clean", I catch myself and try to actually elaborate what I mean. "It's cleaner": what does that mean? Does it make the code more maintainable? Is it easier to understand, more readable? Or is it just a vague good feeling with no real reason behind it, coming down to subjective personal preference? Usually it's the former, but on the rare occasions when it's the latter, I'm thankful that I got myself to think this way. Though now, whenever someone says "it's cleaner", I get infuriated, because it means nothing.
@@Speykious Yeah, 'clean' is the least helpful adjective to describe code. Luckily, we are adjective people, professionally predisposed to actually meaningful descriptions. Or at least we should be. I dunno what lazy shithead thought 'clean' was helpful. I am guessing it's a borrow from gym bros and their 'clean eating' crap.
@@Speykious I prefer the term "idiomatic".
@@protox4 *idiotic
Sorry, I couldn't help myself.
I think I have a very low tolerance for file length. Most files I want under 200 lines
As many in the chat pointed out, the TF2 coconut JPG thing was a joke that people took seriously. TF2 YouTuber shounic has a video explaining this, but the short answer is that while there is a coconut image in TF2's files, it's just part of an unused "coffee bean" particle, and deleting it has no effect on the game's stability.
I was gonna say: the Source engine seems like a really bad example of "poor" code, given the sheer number of games that have been made with it over such a long period. I'm sure it has pain points, but it's nowhere near a good example of "bad".
@@temper8281 It's actually quite a "personalized" code base, from the perspective of Valve. I'd imagine many of the same programmers have been working on the games that were made with it, so they were probably quite familiar with its aspects.
Source 2 is/was a great opportunity to rethink the codebase and it's starting to mature with Counter-Strike 2 and S&box now.
@XeZrunner Valve are a passionate company and their tools are beloved but in general I'm not super happy w/ the code practices of the broader gaming industry
Plus the work culture...
@@scvnthorpe__ Everything wrong with AAA gaming:
* *2003:* _I used to go into a store to find a game,_
* *2023:* _Now I go into a game to find a store._
So basically a lava flow
I think it's wise here to take the good aspects and discard the bad. Writing a million 3-line helpers = bad. Giving things reasonable names and understanding how and when to use an interface = good. Writing shouldQueryIfConditionIsCorrect() for something that could be expressed as a ternary conditional = bad (see the sketch below). Using enums and structs liberally with descriptive names instead of a million hardcoded constants = good. It helps to work in a codebase with very experienced colleagues who all have their own individual takes; that way you avoid cargo-culting ideas from books.
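A tiny sketch of that ternary point, using the hypothetical shouldQueryIfConditionIsCorrect name from the comment above (the timeout values are made up):

```cpp
// Illustrative only. First, the 3-line-helper style being called out:
// a whole named function wrapping a single comparison.
bool shouldQueryIfConditionIsCorrect(int retries, int maxRetries) {
    return retries < maxRetries;
}

int timeoutViaHelper(int retries, int maxRetries) {
    return shouldQueryIfConditionIsCorrect(retries, maxRetries) ? 50 : 500;
}

// The same thing said directly: nothing to jump away to.
int timeoutDirect(int retries, int maxRetries) {
    return retries < maxRetries ? 50 : 500;
}
```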
Well said. It's all context dependent & there's no one size fits all solution imo.
@@reed6514 Yep, but how else should either side (clean coders or "code rebels") sell books/talks/workshops or make youtube videos, if all would take a more nuanced approach? /s
Edit: Damn, I necro'd a comment thread
Simple and readable code is much more important to me than clean code. I've seen design patterns and uber-atomized components, especially in React codebases, that didn't make any sense to me, where it was very difficult to provide any value or build new features.
Clean code was a GOAL. Clean Code (capitalized) is the religion. Not in any way different from the "Manifesto for Agile Software Development" goals vs Agile. It's always the same process: you take something that has good intentions and turn it into a clusterfsck of dogmas and rituals and ceremonies. Some more cynical people might say that it happens because you can't capitalize good intentions, but you sure can turn "the clusterfsck" into a set of books, courses, talks, etc...
Maybe what you need is just a "utilitarian" version of it all. Pragmatic CC, Pragmatic Agile, etc... The pragmatic part being "does it make sense?", "do we need it?", "have we gone off the deep end?", etc...
At my place we have a running joke about a certain component that we refer to as "the best hammer ever". It can hammer in any nail under any conditions. Only about 15% of its potential has ever been needed/used. Nobody truly groks, to the full extent, how it works. As a testament to the skill of its creator, it's rock solid and never fails. And thank god, because everyone that ever stepped through it in debug had to resort to some alcohol-fueled "therapy" after. We just marked it DontTouchJustWorksTM, because it does, and we always "Step over", never "Step into" ;)
One of my favorite Bjarne tips was "just use a vector"
Yes.
There is so much push back on this (pun not intended).
Most people will never hit the theoretical performance characteristics of more advanced data structures.
My personal saying: Never pessimize your code on purpose, but don't try to optimize it unless you prove a need.
@@khatdubellthe pun is pretty good. Appreciate it.
Why vectors over lists? Someone care to enlighten me? I've got a Google interview in two weeks or so and I'm not doing this Leetcode crap 😂🎉
@@raven-a Depends on the language. Bjarne was the primary designer of C++, and still has a pretty major role in its development. Under the hood, a vector is a really efficient dynamic array (unless it's a vector of bools, which I believe uses bitwise operations on a small number of integers under the hood to save memory space). This makes it really fast to retrieve an arbitrary value within the vector. A C++ list, on the other hand, is typically a linked list under the hood. This makes it really fast to insert or remove values from within it, but actually accessing the data is significantly slower. Unless you're constantly adding and removing values to and from within a list, meaning that elements within a vector would need to keep being moved around, the vector is probably the more performant option.
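A minimal illustration of that difference, assuming all you need is to traverse the data (function names invented):

```cpp
#include <list>
#include <numeric>
#include <vector>

// Identical algorithm, different memory behavior. The vector's elements
// are contiguous, so the hardware prefetcher streams through them; each
// list node is a separate allocation reached through a pointer, so
// every step risks a cache miss.
double sum_vector(const std::vector<double>& v) {
    return std::accumulate(v.begin(), v.end(), 0.0);
}

double sum_list(const std::list<double>& l) {
    return std::accumulate(l.begin(), l.end(), 0.0);
}
```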
Abstractions should be discovered, not designed
beautiful take
Why does this comment have so few likes, damn...
I don't get it, what do you mean by discovered?
Java fucking sucks, that's all I'm gonna say... There's a cult of bad OOP out there, and they are using esoteric JS "frameworks" nowadays because they can't really dominate backend programming anymore, because people just use whatever. They are the real enemy.
@@gracjanchudziak4755 It means you shouldn't prematurely limit yourself to abstractions just because... your abstractions should come up naturally, as needed.
The bit at the end about refactoring something and arriving at the same conclusion as the original programmer is a trap I’ve fallen into several times.
At this point I’ve learned that if I see some code that on its face seems just extremely WEIRD that there almost always is a reason. Sometimes I don’t always arrive at the same conclusion but I’ll understand the original reasoning more.
Honestly it’s helped me get my ego in check, too.
Yep. My current rule is, unless there's a comment in the file saying "clean this up later", I leave mystery code alone.
It's hilarious to me just how frequently people have to re-discover concepts that have been around for decades, just because they're not directly applicable knowledge that actually gets taught. It would take one class period to teach the concept of Chesterton's Fence to 5th graders and solve at least 23% of the world's problems.
Those places in code must be well documented. If you use a hack, then add a comment above it to indicate that it's a hack. That's what XXX markers are for.
I have this experience with my own code 😅. I look at my code, think I made a mistake, change it, and then I remember the reason why I coded it weirdly.
The good thing is now that you understand the code, you can leave comments explaining it.
I always prioritize writing clean and readable code first, as this will make it much easier for you and your team to debug.
Most applications don't need the performance gain.
You see, it makes no difference to the final user whether they get the answer 1 ms or 1 second after clicking the button.
But it makes a difference to the user whether you solve a bug in 1 hour or 1 week.
There are specific scenarios where performance will make a difference, and we should only spend effort on those specific scenarios.
Games, complex reports and highly concurrent applications/features are examples.
Even so, you can still write code that is both good and fast; you don't always need to trade one for the other.
Equilibrium is the key to everything in life.
Try writing readable code from the beginning, but allow yourself to write worse code if necessary.
Too much padding; there isn't much meat in the article.
Clean code is a set of guidelines. I don't see why people are taking it as an all-in kind of thing; I don't think Uncle Bob intended it that way either. In terms of performance, OOP is the real thing people have an issue with. Sure, CC encourages the use of OOP's primary "benefits", but I think that's a result of being written for the kind of development most developers were doing at the time. CC principles can still be applied to FP, for instance. I also think that the performance of any OOP or even any high-level programming language is always going to be suboptimal. Frankly, if you truly need performance, ANY language other than assembly is going to have its cons. Performance is also not the same as memory usage; memory leaks are going to happen irrespective of CC or OOP. If anything, I'd say CC will at least help you create abstractions that make memory issues easier to find. Smaller objects instead of larger objects should in theory improve memory usage, something CC encourages. I think people are putting the blame on CC unfairly... look at OOP and the programming languages themselves as the real "enemies" of performance.
2:50 I will never be tired of you talking about this.
I was hesitant to start coding seriously for like 10 years, because I thought that since I can't hold that much in my head, I must be a bad software engineer and I don't belong there.
I believe it comes down to the priorities of the project or of the company. There are several things to consider, and they can have very different priorities depending on the company:
1) Performance. It might be of very high concern if you build a game that needs top-notch performance and a wow effect (not all games are like that), or data processing that needs to crunch terabytes of data in a short time to create reports. It might be low priority if you write JS for a web page that is not about animations, where nanoseconds vs milliseconds don't matter. It might be very low priority if you write a script to process a small amount of images or data and it does not matter whether it finishes in seconds or minutes.
2) Time to market. It might matter if you either build a thing in time for a campaign or lose client(s), or if you either build something in time or another company will launch a similar service sooner. It might even be a matter of life or death for a startup in a bad position. Also, a startup might not be in a good negotiating position with a big company requesting needed changes asap, or the opportunity is gone. On the other hand, if you have a working, reputable product and only work on enhancements, time to market won't matter much.
3) Budget. You might have a fixed budget and either cancel the project or try to finish it without sufficient funds to do it very well. Happens pretty often in startups with investors.
4) Maintainability, quality of code, developer experience. May be a top concern for a long-run strategy. Maybe not if your company is just trying not to drown and needs to change fast and cheap or die.
5) User experience. Usually it is very important, but might be moved to the back seat if other considerations are deemed more important.
a) critical: whether the thing is usable at all (usually of top importance, though it might not matter much in, for example, early PoC projects)
b) normal: whether the thing is easy and nice to use for typical use cases
I currently work on projects where the order is something like 5a, 2, 3, 5b, 4, 1 (working startup) or 3, 2, 1, 5a (PoC projects).
This means that there is a lot of rather bad code. It is not totally unmaintainable, but it is also very far from SOLID or clean code. We rarely have time to refactor.
There are also the smaller PoC subprojects that we do; there we don't take code maintainability into consideration at all. We make quick scripts with code from GitHub and GPT, stitch it together and see if it works. If it does and the project is accepted to go further, we rebuild a production version with a somewhat different approach (better architecture and code), taking only some parts of the PoC.
I used to work on projects with priorities like 5a, 5b, 4, 1, 3, 2, which were really into quality. Mostly quality for the end user or client, then quality of code (but nearly at the same level, as those are usually correlated). This dictated a totally different approach.
Same person working, different projects, different decisions, both about the general architecture and about the code. And it is not like I evolved, or rather de-evolved: I know more about how to write good quality code now than I knew back when I worked on the projects that were very into code quality.
I feel it is a thing dictated by the business circumstances of the project, rather than a decision based on taste and knowledge.
I did not work on games, but I think that in games it is usually 5a, 1, 5b, 2, 3, 4, with 2 sometimes going up the ladder when a deadline is near.
They need to make the game playable, create wow effects, and make it to the market quickly enough / on time. This dictates yet another approach.
Another thing is that you can overdo any of these things and make matters worse.
1) You can overdo performance and lose quality in other areas. Like, your code runs fast, but the results are less precise, or it is too fast for the user or the controlling process.
2) You can overdo time to market and add features faster than users can grasp and accommodate to the changes. You can see it in some mobile games: they grow fast into having too many options, many with some currency/payments attached. It gets overwhelming, especially for new players.
3) Budget. Sometimes the cheapest option is to take ready-made code or services from the internet... and then you get locked in and have a hard time changing things or changing providers, etc. Something might be cheapest in the short to middle run, but not cheap in the long run once you count the cost of adaptation.
4) Maintainability. There is a problem with measuring maintainability directly; we really only have approximations, like: less repetition is usually more maintainable, single responsibility is usually more maintainable, more abstraction is often more maintainable. You can easily go too far with those proxy measures and get code that is actually less maintainable and harder to grasp. Also, as stated in the video, you can lose on performance or user experience.
5) User experience. You can add a lot of complexity trying to get the best user experience possible. You can also fall into the trap of thinking that something very usable for you will also be usable for end users. You then make some custom thing instead of relying on proven, typical solutions. It is hard to think like someone else, or like the general target group. A/B testing is here to help, of course.
One aspect of the constant striving for perfect, clean code that gets overlooked is the impact on developer morale, especially when you know that when you open a PR, it's just going to get trashed by someone who gets high off making others write overengineered code. I see a lot of junior engineers lose so much confidence, and their performance slows to a halt because of practices like this.
This is a very good point. I find it very draining to have to respond to or follow the requests of my bureaucracy-loving colleagues.
Yes. Any senior devs reading this need to realise that if they want to keep their staff, they should allow others as much freedom as possible. Otherwise you will take away the biggest joy of programming and make people resentful.
This is something I dread potentially happening. At the only programming job I've had so far, I was the only person working on projects and nobody cared to check my code; the results were all that mattered.
If the code is hard to understand and maintain, then it's not, by definition, a clean code.
I mean, nice to say, but there are many different definitions. We cannot assume that Uncle Bob, who coined the term, means the same thing as anybody else.
"If you hire ten developers to solve the same problem, you will have eleven different solutions!" That's the real problem! Software is the creative expression of how a person thinks, how they interpret the world and the context around them. As you create abstractions, abstractions of abstractions, it will be a nightmare to maintain something that, theoretically, should be simple!
Clean Code is Taylorism (the practices that led to factory productivity) for software. For small teams, or companies following service design with lots of small teams collaborating, it doesn't make *any* sense, just as it doesn't make sense to build an assembly line for one guy to run back and forth along, trying to get the work done.
The context in which it was *supposed* to work was one where you have a large team and a programmer just works the "widget X" station of the assembly line. However, it turns out that code is *not* an assembly line, even if we make 5 layers (assembly line stations). Lowercase "clean code", on the other hand (writing code that isn't obfuscated), is worth fighting for.
3:18, you (23:54) and Casey said something that I already knew: wait for the need before abstracting. This is the right way to design: focused on the practical needs. And that includes building abstractions for defensive purposes too, before somebody starts to advocate FP.
3:35, 1 of the basic optimizations made by the machine is to inline f()s. The only concern is reaching a limit, after which it won't inline, or starts doing extra calculations about it (but 1 can raise that limit). The actual problem with helper f()s is the option of calling them by mistake, or feeling the need to memorize them as options when thinking about the whole thing. To avoid this extra stress, a class to hide them can help, but that requires more boilerplate. The best solution is writing lambdas (within the f(), of course) if those blocks are called more than once; otherwise I keep them inside the f(), as explicit code, if they only make sense there.
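For instance, a small C++ sketch of that lambda-over-helper idea (the function and its contents are invented for illustration):

```cpp
#include <algorithm>
#include <vector>

// The "clamp to range" block is needed twice inside this one function
// and nowhere else, so it lives here as a local lambda instead of
// becoming another file-level helper that could be called by mistake
// elsewhere.
void normalize(std::vector<float>& lows, std::vector<float>& highs) {
    auto clamp01 = [](float v) {
        return std::min(1.0f, std::max(0.0f, v));
    };

    for (auto& v : lows)  v = clamp01(v);  // first call site
    for (auto& v : highs) v = clamp01(v);  // second call site
}
```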
5:03, if 2 apps have a difference of ms for each action, people will feel better with the faster 1. So it's not just for speed-critical projects: even common ones can benefit. Meta said that _"people feel more engaged when the app is faster"_ (probably about ms and above).
Every time I see the "Clean Code" book on a developer's table, I know that at some point (sooner rather than later) I'll feel a strong urge to bash its owner's head with that book. Never been wrong.
I have a feeling that Bob Martin did much more harm than good to the software development industry.
Yes. to all of this.
People don't even understand what a Cartesian product is... We had a product with something like 25 customizable items (color, texture, etc.), and each item had between 10 and 50 options. Some options are not compatible; for example, you cannot get the pink color with the wood texture. There was a function to select the first viable combination. Said function was stupid and would compute a large part of the Cartesian product before finishing. I estimated that it would complete after about 150 years... I extracted the responsibility of computing the Cartesian product into a separate function (and optimized it to discard non-viable combinations before computing whole combinations). In that new function I used the vocabulary of Cartesian products, as the function's sole responsibility was computing the Cartesian product... The developer that owned the module asked me to rename all the variables to match the business domain, because he didn't understand what a Cartesian product was. So I cannot imagine the average developer properly understanding the abstractions from the GoF...
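The original code isn't shown, so purely as a sketch of the "discard non-viable combinations early" idea described above (all names hypothetical):

```cpp
#include <cstddef>
#include <functional>
#include <vector>

// Instead of materializing the full Cartesian product of all option
// lists, pick one item at a time and prune every partial combination
// that is already non-viable (e.g. pink + wood). Whole subtrees of the
// product are then never visited.
using Combo = std::vector<int>;  // chosen option index, one per item

bool firstViable(const std::vector<std::vector<int>>& options,
                 const std::function<bool(const Combo&)>& viableSoFar,
                 Combo& current, std::size_t item = 0) {
    if (item == options.size()) return true;  // complete viable combo found
    for (int opt : options[item]) {
        current.push_back(opt);
        if (viableSoFar(current) &&
            firstViable(options, viableSoFar, current, item + 1))
            return true;                      // keep the first hit
        current.pop_back();                   // backtrack and try the next option
    }
    return false;
}
```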
Prime's take on this implies it's impossible to both have a good architecture _and_ follow clean code principles.
I don't recall any clean code principles telling you to pick a shitty architecture.
As a matter of fact, IIRC one of the very first things you see in the book is the saying "every programming problem can be solved with another layer of abstraction... except having too many layers of abstraction".
Which, while cheeky, is clearly telling you not to go to the insane lengths of abstraction that Prime is talking about here.
That would be like dismissing all of functional programming because people are currying too much, or something, to the point where the code is hard to understand.
I don't understand this whole mindset of "with pattern X I can't keep the whole problem in my head". The point of patterns is that you don't need to keep it all in your head anymore. With a good pattern, instead of specific memory layouts, index orderings, specific formulae, or whatever other specific details, a coder can now write to a contract. If the abstraction isn't providing a contract, it is a bad abstraction. If the abstraction is providing a bad contract for the situation, another could be better. If the programmer is constantly trying to pierce a good abstraction, maybe they are just doing it wrong.
Real communism has never been tried.
Good points about following the team's practices. It was hard for me at first, but I learned to slow down and see the reasons for things in the code that made no sense to me initially. The best thing is to read a lot more code than you write, and put yourself in the minds of the people who came before you.
I definitely think you're right about "typing fast allows you to explore more"; there is only so much you can figure out from reading the code and looking at the systems involved.
Once you make the change, you get more and more context on the problem, and you might figure out that the approach isn't correct, has unforeseen side effects, etc.
Total bullshit. How much code do you have to write to explore something? Maybe a thousand lines. Your typing speed will not be your greatest obstacle.
Your code should be easily testable. If you can achieve that without "clean code", do it. There's a great talk called "Clean Coders Hate What Happens to Your Code When You Use These Enterprise Programming Tricks" by Kevlin Henney; I think he criticizes "clean code" in the sense you mean (and many enterprise programmers too).
Every CPU cycle you waste is a tiny bit of energy thrown away. Making your system 20 times slower means making the entire energy consumption of your service at least 20 times higher (more, once you factor in the cost of cooling the hardware). So if you're a company that cares about the environment and your carbon footprint, you should start by educating your developers to put performance first, ahead of their egos.
I recently tried to read a project that was presented as an exemplar of clean code. It had literally hundreds of classes for a relatively simple web project. I found it frankly impossible to understand what the bloody thing was doing. The cognitive overhead was overwhelming.
Here's my hot take: Get good at planning ahead AND learn how to type fast. There's no reason to pick one or the other unless you're physically unable to.
I worked at a big DDD shop. They were all about clean code, abstracting everything, strict encapsulation and of course whiteboarding out everything. This always hinged on the requirements being correct. As we all know, requirements change and as you stated this leads to very time consuming and difficult refactors. At my current job we don't practice any of those things and I'd say I'm 5-10x more productive as we just build stuff with very little long term planning. It may not make you feel like you're playing 4D chess but you can refactor it really quickly. Not only that, you can look at it and understand what is happening much quicker.
02:14 Yes, that's why webpacked and minified JS is sometimes more readable to me than many JS files (having access to both the many JS files and the webpacked-and-minified JS is still always better).
premature code cleaning is the root of all evil
And not cleaning code early enough is the root of your startup shooting itself in the foot
This is so true
Seriously. Instead of worrying about whether you're doing it "right", write that 50-line method and then take a second look. The first attempt should always just "get shit working", because half the time you might not know exactly what's entailed. Once you see it on the page, it may be easier to recognize whether there is a pattern and it's worth a refactor.
@@anthonyparks505 Indeed. Get a _slow reference version working first_, because it *doesn't matter how fast you get the wrong answer.* By then you'll have a deeper understanding of the problem and can write a good solution.
It is almost as if everyone forgets _The Mythical Man-Month’s_ advice.
*Plan to throw one away, you will anyways.*
Who knew prototyping was an old concept! /s
So there was no evil in the world before we invented computers and started programming them. Got it.
As a Pastafarian, I find all this discussion of Clean Code very upsetting. All of my code is spaghetti code.
Performant code can be written by anyone capable of thinking about the hardware a bit; you can master it in a few college courses.
Writing clean and maintainable code comes only from experience.
Clean Code TM on the other hand can be taught in schools, which means you can get rid of the expensive, experienced devs and replace them with starving interns.
Hi Prime. Good video, and I get what you mean about how "Clean Code" makes it difficult for you to understand what exactly goes on in the flow.
And this is okay, because one doesn't need to read 100% of the code most of the time. It is similar to how we don't read every single line of a newspaper; we skim the headlines (function names) before we decide which article (function body) to read.
In Clean Code, you don't get function calls like `array.holyHandGrenade(index)`. So there is no point in reading every single function body if it doesn't seem relevant.
If you're talking about Clean Architecture, that's a separate story. AFAIK, the motive behind that architecture isn't writing Clean Code but making yourself independent of tools and frameworks (as Uncle Bob puts it, you can swap out the dummy database for a proper DB, etc. That is, you only need to rewrite a single layer of your codebase to replace your ReactJS app with a SolidJS app).
In my experience, you never really do just "skim the headlines", because when you're examining code, method names are generally insufficient to fully describe the contents, and more often than not they are named poorly. Furthermore, these methods are often thoughtlessly partitioned, with inter-dependencies, which is a big no. I find it annoying and disruptive to have to keep navigating to definitions. For these reasons, I think it is simpler and safer to just write long monolithic procedures. KISS is generally all you need.
As for your clean architecture example: that just seems like typical premature over-engineering. KISS.
@@tarquin161234 The reason is that you have only dealt with poor-quality code. When reading clean code, you can bet your life that a printUserInfo is not going to increment the user's age.
What you described is exactly what clean, structured code will help you avoid.
@@SufianBabri Exactly, listen to this guy, real communism has never been tried.
Man, the content you produce is so rich: entertaining, insightful, and educational at times. I love it.
I think part of the unspoken ideal of clean code is not having to understand certain parts of the code, and not minding, as long as it works. But that breaks down the moment you need to maintain (modify) it.
You did the best summary at the end: "do what the team does".
If you're a performance junkie trying to squeeze extra cycles out of your react component in a "Clean Code" team building the next facebook, everyone will hate you. If you're a Clean Coder working on ethernet controller firmware team, abstracting everything, everyone will hate you.
Collaboration > dogma.
I wonder when we started to program abstractions instead of machines. We've lost it. The computer is a simple machine that expends resources in order to transform data; if there is a way to do the same transform while using fewer resources, that should be the default. If there are ways to expend fewer resources with different tradeoffs, those should be exposed. If those options are harder to implement or read, or make future portability or maintenance difficult, the problem is in our tools.
Creating functions is making abstractions. So there is no conflict here.
Doing abstractions is what programming is about.
As you said, doing abstractions correctly: "... put 15 functions in that file, do that until you kind of hit that point where you're feeling I'm a little frustrated with my code..."
It is the basic step by step process of creating and knowing your code as you create it.
Even Uncle Bob says exactly the same thing in his Clean Code whatever. He even goes long on the problems of trying to abstract too early, when we have the illusion of understanding the problem before having written any code.
It is just some principles based on pragmatic experience, and SRP and the other wannabe principles are just vocabulary for talking about the structure of code. You take what you want from it and do what you want with it, as needed.
I think the problem is that the so-called "clean code" you saw was written by people who didn't understand clean code. They were maybe more old-school Java programmers who like to abstract things for the cheap feeling of it rather than for the needs of the code. That means it was not clean code. Code with too much abstraction is dirty code, not anything else.
Code as you wish and abstract as you need. That is clean code.
Understanding the vocabulary and the principles is just like design patterns: only low-quality programmers would say those should be applied exactly the same way to every problem.
One of the things I realised lately about all these code talks is that a lot depends on what a person works in. If you work in TS (JS+), you will look at anything OOP-related very differently from someone who works in an OOP language. And within an OOP language, you will look at OOP very differently if you use its paradigms because they are "high level" vs. when you actually need them and make them useful.
I'm in high-performance computing and scientific computing, and obviously speed matters to us. There is a rule of thumb in HPC that says any clock cycle your CPU, GPU, TPU or IPU does not spend on floating-point arithmetic is a wasted clock cycle. Obviously this is not entirely true: you need to read data from storage, you need to send data between nodes, and so on. However, it turns out that you can more or less reach this ideal by abstracting the calculations you want to do into operations on matrices, vectors or multi-dimensional tensors. The big issue with the code I see a lot of my physicist colleagues write is that they do not abstract enough, and end up wasting clock cycles on index calculations and conditional statements that shouldn't be necessary in the first place if they had found the right abstractions. The point I'm trying to make is that abstraction is not necessarily the enemy of performance; it is rather a question of finding the right abstractions (which admittedly can be very hard). Anyway, I think the discussion is important, and yes, in some areas of software development performance is not a big concern, but in others it is.
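A tiny example of the kind of wasted work being described, assuming a 1-D stencil (not code from any particular project): restating "test the boundary on every element" as "operate on the interior range" removes both the branch and the index gymnastics.

```cpp
#include <cstddef>
#include <vector>

// Branch inside the hot loop: the boundary test runs for every element.
void smoothBranchy(const std::vector<double>& in, std::vector<double>& out) {
    for (std::size_t i = 0; i < in.size(); ++i)
        if (i > 0 && i + 1 < in.size())
            out[i] = 0.5 * (in[i - 1] + in[i + 1]);
}

// Same result restated as "apply the stencil to the interior range":
// no per-element conditional, and the loop vectorizes cleanly.
void smoothInterior(const std::vector<double>& in, std::vector<double>& out) {
    for (std::size_t i = 1; i + 1 < in.size(); ++i)
        out[i] = 0.5 * (in[i - 1] + in[i + 1]);
}
```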
Oh, I like planning ahead. I don't like it when I've implemented the plan and it turns out I can throw away 80% of it and do it another, easier way, because of that single edge case that shows up when I'm almost finished... But I still like to plan ahead XD
I feel like a lot of it is: well, don't go for performance if you don't have to, but maybe keep some eye on performance, but then again sometimes performance is paramount. This doesn't really say anything. Here's an idea: think about your design constraints and goals, and use your best judgement.
Martin Fowler himself already explained that not all code needs to be clean.
And no programmer, not even him, writes clean code from the start. First you write code that works. Then you clean it. The parts of the code that truly need to be highly effective can stay the way they are; in those cases it is OK to just leave a comment explaining what you did.
Another point is that clean code does not mean SOLID. What clean code stands for is that the code must be readable (understandable), and that can change from company to company, team to team. Regarding abstraction, clean code just says that you shouldn't mix levels of abstraction.
Levels of abstraction here refer to how directly or indirectly code communicates its purpose; it has nothing to do with class abstraction principles or best practices in this context.
Clean code does not mean slow code. If someone is saying that, then this person didn't understand clean code.
Martin argues that writing clean code is crucial for long-term maintainability and that performance should generally be a secondary concern in the initial stages of development.
Make it work, make it right, make it fast! (Kent Beck).
Performance-first can lead to dirty and bad code. This is not about writing slow code; rather, Martin Fowler says that poorly written code generally leads to poorly performing code: it's hard to identify bottlenecks and implement effective optimizations. On the other hand, well-structured, clean code is easier to profile and optimize, because you can more clearly understand its behavior.
There are exceptions, especially in systems with strict performance constraints. Even in these cases, however, the code should be as clean as possible to facilitate understanding and future changes; but yes, there are situations that require careful attention to performance from the beginning.
I think the people who attack clean code just don't understand what it is, or only read the cover of the book!
CC apologists always argue that looking for nanoseconds of perf is silly, but in reality I find we are often talking about 400 ms per request vs 4 ms. And when your queue app handles hundreds or thousands of requests per second, it starts to matter almost instantly. CC is an excuse used to explain bad code; it could be clean and fast, it's just easier to explain your bad code as clean code.
Prime, I think you've said you like procedural code as opposed to object oriented. It would be cool if you could do a video expanding upon that!
R u sure he said that? Bc it wouldn't make much sense, since the two are not exclusive or even competing. OO and functional are. Procedural just refers to a more, let's say, exploratory way to code, as opposed to TDD or something with a lot of organizational overhead.
@@haraldbackfisch1981 Yes, he did say that many times.
@@serhiipylypenko that he prefers procedural over OOP?
@@haraldbackfisch1981 yes I think he has. Regardless, it would be an interesting topic for a video. What the differences are, when to use one or the other, different strategies, etc
@@nchomey The answer is simple. Don't do OOP (inheritance and polymorphism) unless you are forced to by the codebase you are working on. Encapsulation is a good principle and not specifically OOP. If you have the chance, write as much code as possible in non-OOP languages, e.g. Zig and Odin. It will force you to think in procedural/data-oriented patterns rather than OOP. If you do it properly, you will find that your code becomes easier to read and maintain.
A couple of months ago I was working on a personal project in JS that needed to run relatively fast.
It was a CLI tool that did a few operations on about 3 million database objects and had to keep a state while it was running.
I ran multiple jobs at the same time with a sync at the end that updated the state.
Like a good boy I followed the "good practice" of having an immutable state, so every time there was an update to the state, I created a new object.
The code was running super slow because I was doing a lot of operations.
Then I did a little test and started mutating my current state.
Suddenly the code started running infinitely faster.
So, I don't know, best practices are sometimes not the best option.
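The anecdote is JS, but the trade-off reproduces in any language. An analogous C++ sketch (invented names, not the commenter's code): copy-on-update pays for a full copy of the state on every operation, while in-place mutation touches one entry.

```cpp
#include <string>
#include <unordered_map>

using State = std::unordered_map<std::string, long>;

// "Immutable" style: every update copies the entire state. Over
// millions of updates that is an O(n) copy each time.
State updatedCopy(const State& s, const std::string& key, long value) {
    State next = s;   // full copy of the whole map
    next[key] = value;
    return next;
}

// Mutating style: one hash lookup, no copy. Amortized O(1).
void updateInPlace(State& s, const std::string& key, long value) {
    s[key] = value;
}
```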
But... if one needs to jump even once to understand what code does, it's NOT clean code anymore, IMHO. What you've described in this part were design patterns (and the inheritance overload typical of Java), and that is completely different from clean code principles.
I agree: I've seen "extremely clean code style" systems that are very, very hard to navigate, because everything is spread around to an extreme degree... The clean-code thing I hate most is rules like "you cannot have functions longer than 3 lines"... A friend of mine worked on a project that had not just mediator patterns, for example, but "InterMediator" patterns that mediate between the mediators, and that kind of bloat :D
Also, this guy seems not to have heard that Casey does not say all people should optimize: just don't pessimize like crazy.
Also, there are countless other ways of doing software than inheritance-based virtual polymorphism... Not just static dispatch, but many ways to do things that are both maintainable and much better performing.
Also, the guy went from gamedev to non-gamedev, and guess what, they have easier bugs there? Maybe that's not because of clean code, but because gamedev is 10x more complex...
Non gamedev code doesn't have to be efficient
@@williamdrum9899 Total bullshit. Especially when battery usage on mobile devices comes into question; anything embedded, power tools, all kinds of tools. There is generally very little software that can be totally pessimized with no regard for performance... And if you run anything in the cloud, you pay a very high price for wasted compute in your bills too...
@@u9vata Ok, not as efficient.
@@williamdrum9899 Nonsense. I'm working on a project that's only used internally by 10-15 business people. You'd think that for a project like that, the performance wouldn't matter at all. But no, the users are complaining about the performance and have even given me the green light to improve it.
3-line functions? That's seriously a BS rule. But again, as the book said, you don't need to be dogmatic. My personal rule is that a function should not need a scroll bar to see it fully; if you have to scroll, you have to refactor.
Again, that's for functions or methods, not classes.
I love your approach and agree with it; having readable code that follows simple patterns is waaaay better than an over-engineered codebase that I need to memorize.
I once had someone who hated inline nesting, so over-engineering made him nest folders and files instead. Example: `Core/backend/src/intensive/runtime/postprocessor/Placer.ts`, which is linked to `Core/Placements/Positioner.ts`, which is linked to `Core/frontend/src/Calculations/Math/OBB.ts`, and all that function did was center a mesh at V3 0,0,0.
Going to extremes is almost never a good idea. Being an extreme "clean coder" just makes things less readable for others who are "less clean coders" than you. But we can all agree that cleaning code to some degree makes it more readable, not less. We can apply some of the rules of Clean Code where they fit well, while not being dogmatic about the others... Just take the middle path: care for the code so that it stays as readable as possible without sacrificing too much performance, and you'll be fine. The middle path is very often my path when I approach different ways of looking at problems in life.
The table approach of the original video is so good!! Up until the point where your boss steps in and says "Can we do complex shapes as well?" And then the junior dev that inherited the stuff from the inventor genius who happened to leave the company scratches his head.
I never had a problem with Carmack's sqrt. I always thought it was kind of interesting. It's precisely the kind of thing you'd want to optimise. In my own job recently I first wrote, tested and profiled "clean code". Then I identified areas where significant improvement could be made with AVX2, and implemented that code path as a compile-time switch. That code is less clean and less easy to understand, but the "clean" version is also there in the file right next to it, so you can see what the intention is. The AVX2 code is 7x faster!
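The nice part of that setup is that the clean version doubles as a test oracle for the fast one. The SIMD part doesn't translate to TypeScript, but the shape of it does (all names here are made up):

```typescript
// Readable reference implementation: sum of squared differences.
function sumSqDiffClean(a: number[], b: number[]): number {
  return a.reduce((acc, x, i) => acc + (x - b[i]) ** 2, 0);
}

// "Fast path": a manual loop over typed arrays, standing in for the SIMD path.
function sumSqDiffFast(a: Float64Array, b: Float64Array): number {
  let acc = 0;
  for (let i = 0; i < a.length; i++) {
    const d = a[i] - b[i];
    acc += d * d;
  }
  return acc;
}

// Check the fast path against the clean one on random inputs.
const xs = Array.from({ length: 1000 }, () => Math.random());
const ys = Array.from({ length: 1000 }, () => Math.random());
const clean = sumSqDiffClean(xs, ys);
const fast = sumSqDiffFast(Float64Array.from(xs), Float64Array.from(ys));
console.assert(Math.abs(clean - fast) < 1e-9, "fast path diverged");
```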
I think the best approach to software development is to straight up write one to throw it away. 1. You have the option to get it to market faster. 2. You have all the unknowns now known. 3. You've learned a lot solving the problem.
You can get so much done by just having a conversation about a feature... going off and doing it... making decisions on your unknowns and coming back with something done... Then saying: what do you guys like, what don't you like? I had these questions; these are the decisions I made.
Then you go back and fix those issues. Make sure the customer is happy. Then get some automated integration tests around it. Then you engineer it. After that you can optimize it.
Yes! ECS is the way to go! When you have N interconnected objects (geometry in some scene, actors in some simulation, etc.) where the state of one can depend on another, ECS is the only reasonable way to deal with all the unpredictable dependencies. OOP may be very inefficient and often introduces bugs that are hard to localise.
But ECS is not a solution for most daily problems; the majority of the work getting done follows a request-response pattern.
@@RogerValor That depends on the level. You can still have a request-response API or object, and an ECS underneath. For example, most databases resemble an ECS more than objects, but you have a request-response API on top.
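For anyone who hasn't seen one: the core of an ECS is just components in flat arrays keyed by entity id, with systems iterating over the arrays instead of over objects. A toy TypeScript sketch (hypothetical names, not any particular ECS library):

```typescript
// Entities are just indices; components are parallel arrays ("structure of arrays").
const MAX_ENTITIES = 10_000;
const posX = new Float32Array(MAX_ENTITIES);
const posY = new Float32Array(MAX_ENTITIES);
const velX = new Float32Array(MAX_ENTITIES);
const velY = new Float32Array(MAX_ENTITIES);
const alive = new Uint8Array(MAX_ENTITIES); // 1 = entity has these components

function spawn(id: number, x: number, y: number, vx: number, vy: number): void {
  posX[id] = x; posY[id] = y; velX[id] = vx; velY[id] = vy; alive[id] = 1;
}

// A "system" is one tight loop over contiguous data - no virtual dispatch,
// and trivially splittable across threads by index range.
function movementSystem(dt: number): void {
  for (let id = 0; id < MAX_ENTITIES; id++) {
    if (!alive[id]) continue;
    posX[id] += velX[id] * dt;
    posY[id] += velY[id] * dt;
  }
}
```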
Yeah, pointing out Quake's FISR function is not super relevant in the context of clean code principles. I'd argue that the FISR code existing inside its own function is the most appropriate thing they could have done to make that code readable. I'd agree with the author if that code were wedged inside another function that's doing something else, but simply pointing out that it's complicated doesn't suggest it was unreadable due to gaming industry practices and clean code violations - it's a complicated problem space that encompasses advanced mathematics. There's a reason they hired a mathematician to write that piece of code at the time instead of having a developer smash their head into it.
I’d be interested to see some examples of “clean code” vs something you’d write. I hear this argument all the time and leave very confused. Abstractions are supposed to separate concerns, and I find that they help significantly because I also have trouble keeping that much state, but with abstractions, I don’t have to worry about implementation details when I intuitively know what something should do, unless it’s not working properly, in which case I can test it on its own. I’m wondering if there might be a disconnect due to my lack of diverse experience
A different take on using a `Set` is that it communicates to the reader it's a "set" of unique values, even when small. That can sometimes be a useful assumption for the reader to rely on.
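For example (hypothetical snippet):

```typescript
// An array says "a list of things"; a Set says "each thing appears at most once".
const seenUserIds = new Set<string>();

function markSeen(id: string): boolean {
  if (seenUserIds.has(id)) return false; // duplicate - the Set guarantees uniqueness
  seenUserIds.add(id);
  return true;
}
```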
You can't be pragmatic and at the same time put abstractions everywhere for no reason, just because "in the future we might need a different implementation".
"it's also scattered around for my l1 and l2" right? that is my biggest problem with oot is when you read it it's hard to mentalize what's even going on.
You always start with the goal of writing easy to understand, maintainable code first. Only optimize as the need arises.
But if the code that is written is too inefficient to begin with, you can't optimize later if the need does arise without just rewriting everything from scratch. Yes, you shouldn't hyper optimize everything as you go. But you also shouldn't be ok with things being horribly inefficient even if they happen to be easy to understand and maintain.
My first C# gig: a simple app had about 7 layers of abstraction. Coming from classic ASP, I was lost.
I don't care about Clean Code or how applications are built as long as they are performant.
I will be very happy if Microsoft sends me (free of charge) a workstation PC - AMD Threadripper Pro with 64 cores, 2 TB RAM, an 8 TB RAID 0 card and 3 GPUs - to run their slow applications, because Microsoft thinks Electron is modern tech.
Yes many of those software companies have zero respect for their customers' resources
I think many if not most projects take major Ls from not optimizing enough at the requirements level, before writing any code or planning any architecture. There's nothing faster and cleaner than not having the code in the first place, so if you can find a way to do less and still have a good business outcome, you'll do better in every metric. Go back and read The Rise of Worse is Better every time you want to start a project or do something clever.
I can remember, 20 years ago, my algorithms professor teaching everything using recursive algorithms and dynamic memory. And in the last few classes he showed us that every algorithm could be done using arrays, and it was always faster.
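The classic illustration of that lesson, sketched in TypeScript (Fibonacci stands in for whatever the professor actually used):

```typescript
// Naive recursion: elegant, but exponential time and deep call stacks.
function fibRec(n: number): number {
  return n < 2 ? n : fibRec(n - 1) + fibRec(n - 2);
}

// The same computation flattened into an array: linear time, no recursion.
function fibArr(n: number): number {
  if (n < 2) return n;
  const f = new Array<number>(n + 1);
  f[0] = 0; f[1] = 1;
  for (let i = 2; i <= n; i++) f[i] = f[i - 1] + f[i - 2];
  return f[n];
}

console.log(fibRec(30) === fibArr(30)); // true - same answer, wildly different cost
```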
UI/UX === cadence:: user interaction => figure out where you have to cache/ do stuff, so that you get within your cadence-beat. Misdirect if you have to. Make it work. Make it fast. Super easy! Barely an inconvenience!
FYI:
William Kahan and K.C. Ng at Berkeley wrote an unpublished paper in May 1986 describing how to calculate the square root using bit-fiddling techniques followed by Newton iterations.[4] In the late 1980s, Cleve Moler at Ardent Computer learned about this technique[5] and passed it along to his coworker Greg Walsh. Greg Walsh devised the now-famous constant and fast inverse square root algorithm. Gary Tarolli was consulting for Kubota, the company funding Ardent at the time, and likely brought the algorithm to 3dfx Interactive circa 1994.
Polymorphism with optimizing JITs like HotSpot and Graal can be very performant, because hot call sites get devirtualized and inlined. It is something Casey M. completely ignores. Makes me wonder why.
Clean Code should be simple code. If it is not simple it is not clean. If your definition of clean is adding complexity then you took the wrong things away from the book. With that said, I do not really blame people for taking away the wrong things from that book, the book doesn't do itself any favors.
The thing is, you can usually do both; the Fast Inverse Square Root is an example of this - it makes ONE THING very fast and keeps it in a well-defined box.
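The box is small enough that it ports almost verbatim. Here's a TypeScript rendition of the well-known Quake III routine, with typed-array aliasing standing in for the C pointer cast:

```typescript
// Reinterpret the same 4 bytes as float32 or uint32.
const buf = new ArrayBuffer(4);
const asF32 = new Float32Array(buf);
const asU32 = new Uint32Array(buf);

function fastInvSqrt(x: number): number {
  asF32[0] = x;
  asU32[0] = 0x5f3759df - (asU32[0] >>> 1); // the famous magic-constant bit hack
  let y = asF32[0];
  y = y * (1.5 - 0.5 * x * y * y); // one Newton-Raphson refinement step
  return y;
}

console.log(fastInvSqrt(4));   // ~0.499
console.log(1 / Math.sqrt(4)); // 0.5
```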
I've written my fair share of "fast but ugly" code. Most code should be clear and easy to read. If you can't do this, write good docs, and hide it behind a nice API.
In the airline industry mechanics and pilots all have preflight checklists that are very methodical, step by step. Clean Code taken to the extreme is like a clipboard covered in post its.
> Disregard algorithm - use an array
Yes, but no. I mean, it's nice to have an underlying array for your octree instead of a crapload of linked lists. BUT! First I need to have the octree, if 3D spatial mapping is implied - and that means you've chosen the right algorithm. Can an array hold a BST? Sure - if you know that's the optimal data structure to store your open set for an A* search over your octree.
So: get the best algorithm. And then do it with an array.
"Just use an array" can work well anytime the data set is small, because often times even an inefficient algorithm will still work okay with the benefit of cache locality that you get from an array, but once you deal with any kind of large data set the strength of the algorithm becomes paramount.
Yea, this kind of sums up my main problem as a SWE.
I cannot wrap my head around very complex architectures. Sure, it's easy if I coded everything myself, but when it's already done by someone else it's HELL to step in.
This doesn't mean I write 1000-line functions; you do want to separate responsibilities and have reusable components. But if you spend too much time jumping between files to understand where something comes from, the code becomes really hard to understand for others and really shitty to maintain.
Clean code tends to create so many abstractions that you can't keep them all in your head at the same time, nevermind understand what they actually do under the hood.
Back when I cut my teeth programming, I prided myself on writing clean, readable, self-documenting code. This as opposed to some crap that looked like someone ran it through a JavaScript obfuscator (this was in the dark days before automatic formatting, and also before JavaScript, so not that dark). I also prided myself on good application of good design and programming principles. These were (and to me still are) separate concepts. I've never read Uncle Bob's book, but from what I gather, it seems to me that, in violation of his own "clean coding" principles, he has mashed together separate concerns and concepts into "clean code," which is now a loaded term. Perhaps he should have called it Good Code. It would be priceless to hear people arguing against writing "Good Code."
Performance is critical for the backend. For example, points of sale have timeouts for transactions so any significant delay will cause the transaction to timeout.
I worked for a company that had a process which summarized data. They stopped running it when it took more than a day to process a day's worth of transactions. After some analysis and rework, I got it down to less than 5 minutes per day. I was let go within 6 months of doing this.
The most exciting part about learning to program is finally understanding the memes
Check out the video "How NASA writes space-proof code" (made by Low Level Learning!)
You'll have a good time!
Clean Code is developers just programming for the developer, with little thought given to the impact on the end product. It's like writers writing for writers rather than their audience, it's an exercise in self-aggrandizement for self-important people.
You can write maintainable code that makes sense for the individual scenario while not neglecting performance in all areas. Abstraction layers should serve a specific purpose, like decoupling a specific runtime API implementation from the consumer code. A good example would be a graphics renderer API: maybe you want to give your program the ability to switch renderers between multiple target architectures and systems. Abstraction serves a direct purpose there. Another good example is an entity component system, because you can assemble complex objects piecewise at runtime based on a data set that changes completely independently from the ECS implementation.
Some people think clean code and single responsibility mean literally every object can only do one thing, rather than a logical module that serves a clearly defined, easy-to-understand purpose. If you're just making an object for the sake of separating functionality, without any discernible purpose, ask yourself why.
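A minimal sketch of that renderer example (hypothetical names; real backends would obviously carry far more state):

```typescript
// The abstraction earns its keep: consumer code doesn't change
// when the target graphics API does.
interface Renderer {
  drawTriangles(vertices: Float32Array): void;
}

class WebGLRenderer implements Renderer {
  drawTriangles(vertices: Float32Array): void { /* WebGL calls here */ }
}

class WebGPURenderer implements Renderer {
  drawTriangles(vertices: Float32Array): void { /* WebGPU calls here */ }
}

// Written once, runs against either backend.
function renderScene(r: Renderer, mesh: Float32Array): void {
  r.drawTriangles(mesh);
}
```

Note the interface exists because there genuinely are two backends - not because one might hypothetically appear someday.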
Here's my humble take on the matter:
Clean Code is not a holy grail. I have read all 17 chapters of the book; it contains very good takes on good, testable and readable software development. However, it also contains bad takes, especially "Chapter 3: Functions". Much of that chapter should not be adhered to at all. Functions can be too small and can also be too large. I like the idea of Locality of Behaviour, and of organizing files and folders by feature rather than by type, better. Having multiple layers of abstraction that don't serve the purpose of encapsulating complexity - such that one doesn't need to go source-diving to figure out what is happening - is not a good idea either. In addition to Locality of Behaviour, you need behavioral transparency to a certain extent too. And one cannot achieve that behavioral transparency without also taking a minute to name things well.
Also, I disagree with the HTMX essay that says Locality of Behaviour doesn't agree with ideas like Don't Repeat Yourself (DRY) or Separation of Concerns. I don't know where they got that from.
We use PHP, Docker, HTML/CSS, and JS at my job. Ironically, compared to the rest of the code base, which takes more of a functional programming perspective, writing even OOP with loose rules has sped up quite a few pages performance-wise.
A lot of it has been preventing queries from running on pages that would never use the data.
ECS (Entity Component System) was created for access pattern based thread scheduling as well as polymorphism without the OOP drawbacks. It's widely used in the game industry where statements such as "You just didn't plan ahead enough." will get you thrown out of a room by designers, managers and programmers alike. It is both flexible and efficient, but, like Redux or Rust, has a steep, then flat, learning curve
I think maybe there’s a bit of conflation here. “Clean Code” isn’t a monolithic standard. When I advocate for it at my work, I’m advocating for code that is *more* SOLID. But mostly it’s that I want to be able to look at someone else’s code and quickly understand what they are doing. Part of that is naming variables clearly - not using “x” “_x” and “__x” as variable names within the same scope. Part of it is inverting some if statements to reduce nesting (see the sketch after this comment). Some of it is using standard or accepted code styles and design patterns. I’m sure another dev could have higher standards and want everything tucked away under 10 levels of abstraction. I get that. And I get how hyper-meticulous adherence to SOLID principles and design patterns can leave a sour taste and spoil the idea of “Clean Code”.
But just like a customer may not always ask for the right thing, or they may use vague language to describe a problem, there’s an underlying “need” that I want addressed when I say I want clean code. That need is for extensibility, stability and maintainability in the codebase. The tricky part is how we can come to agree as a team on what that standard should be because it’s ultimately subjective. The guy that writes sloppy spaghetti code with single letter variables and 6,000 lines in a method may still be able to quickly find things in his own code and that system may work for him, but other devs are in for a tough time if they have to touch that code.
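On the inverted-if point, a made-up before/after in TypeScript:

```typescript
type Order = { paid: boolean; inStock: boolean; address?: string };

// Nested: the happy path is buried three levels deep.
function shipOrderNested(order: Order): string {
  if (order.paid) {
    if (order.inStock) {
      if (order.address) {
        return `shipping to ${order.address}`;
      }
    }
  }
  return "cannot ship";
}

// Inverted: guard clauses bail out early; the happy path reads top to bottom.
function shipOrderFlat(order: Order): string {
  if (!order.paid) return "cannot ship";
  if (!order.inStock) return "cannot ship";
  if (!order.address) return "cannot ship";
  return `shipping to ${order.address}`;
}
```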
Doesn't matter if the sloppy guy can read his own code. The writer always knows his own code intimately, no matter how bad, for a few months at least. As you're getting at, the objective is making the code readable for other people.
Highly abstracted code necessarily cannot deliver features faster unless every single feature request lives at the edge of your codebase. Good luck when a regulatory change swings by that none of your abstractions deals with.
You cannot predict the future. Don't pretend you can with Clean Code ^TM. Abstractions are NOT extensible.
>Reduce Developer time
How in the world does writing abstraction on top of abstraction on top of abstraction in Clean Code ^TM result in "lower development time" over just writing the code that needs to be written? That makes zero sense. I would like for a Clean Code ^TM advocate to actually measure these absurd claims.
20:48 this is what messed up the code base I am working on right now. Thank you very much.
The abstraction hell that you talk about is what it's like to work with PLCs that have been programmed by major system OEMs. When you're trying to figure out why a machine is refusing to do what it should, and you find a magic binary number and a comparison to a value that gets moved through multiple variables, it can be very difficult to find the sensor causing the issue.
Especially when there’s no maintenance personnel documentation about what the goddamn magic binary number does.
Well, as I always say: you can solve problems in a way more complicated than they actually are, but not the other way around.
7:07 DATA-ORIENTED DESIGN, my favorite! tho not that useful on web. but love it!
Very hard to do correctly. And very inflexible for future changes...
it's easier than any other way of coding
very flexible, easy to make changes
and a breeze
and it still just gives you constant cache hits, and makes it super easy to parallelize
@@nomadshiba I disagree. If it were that easy, it would be more widely used...
@@vladimirkraus1438 ad populum fallacy
@@Cara.314 Just out of curiosity... have you ever built any non-trivial data oriented project and maintained it for at least several years while it experienced some significant requirement changes?
Clean code is about declarative and intuitive APIs. All the greatest APIs are a perfect example of what clean code is. Interface implementations can get hectic, but consistent API contracts are what matter most.
18:21 "Parts of apps that need of performance" - Casey also debunked the whole "you just need to optimize the hot spots" argument. You can only do so much without completely refactoring or even rewriting the whole application.
I think the main issue with the backlash to clean code, agile, etc. is that people take them to the nth degree. Sure, they act as hard-and-fast rules, but rules only make sense when context allows them to. It's the adherence-at-any-cost that causes issues and friction. Planning things to make them quick, easy, clean, and effective (and only picking two or three) should really be the North Star guiding things (imo).
Hope that makes sense.
2:46 If you are able to "plan ahead", it's a sign that the problem you are working on is not very complex. If you actually work on things that are interesting, you can't plan ahead.
There’s no guaranteed way to always get abstractions right. Many bake in an assumption that only surfaces much later, when a lot of the system already uses the abstraction. A good example is any abstraction over distributed components: those are designed differently for a reason, and each can have an extremely different fault-tolerance design that needs to be accounted for in your app.