This guy /actually/ has standards and sticks to them when they make sense, and not just because 'education' forced them into popularity. I highly respect that.
He doesn't have standards, he has opinions. Any experienced OOP developer could go point for point with his opinions and provide well-rounded, reasonable, pragmatic rationales for why OOP is important. He doesn't have to agree, but I'd fire anyone who inlined OOP code like that on a commercial project just because he thought it looked better. He is welcome to his opinions, however.
@@BinaryReader In his other videos he shows that experienced, academic authorities even, are totally unable to refute any of the basic points, and in fact double down on working around all the problems and confusion that OO causes.
@@asdfdfggfd This just sounds like you personally don't know how to develop with OOP. For the most part, academic authorities don't know anything other than (OOP can do this and this and this) and rarely settle on "best practice". But I'll give you an example...

    interface IRepository {}
    interface IProducer {}
    interface ILogging {}

    class Server {
      constructor(
        private readonly repository: IRepository,
        private readonly producer: IProducer,
        private readonly logging: ILogging
      ) {}
    }

    ...

    const server = new Server(
      new MongoRepository(),
      new KafkaProducer(),
      new KabanaLogging()
    )

Tell me, how is this code improved using either pure procedural or pure FP? Show your example (you can use Haskell type classes or Rust traits if you wish). Pick your language (or poison), and show me the code.
@@BinaryReader Your code literally does not do anything but allocate a struct of pointers. It doesn't do anything. But anyhow, if I were to stupidly organize logic in this way with procedural code it would go something like this...

    #include <stdlib.h>

    struct Server {
        void *repository;  /* MongoRepository */
        void *producer;    /* KafkaProducer */
        void *logging;     /* KabanaLogging */
    };

    int main(void) {
        struct Server *yetanotherpointer = malloc(sizeof(struct Server));
        /* Then if I needed a bunch of servers organized like this, like I would in
           an HTTP or telnet server, I'd just make that an array of pointers to the
           Server struct, whereas a retarded OOP system would need another class and
           more helper methods */
        return 0;
    }

But like I said, that would be a stupid way to organize your code there, because you are doing it with OOP, without the barest clue about the problem that OOP was attempting to solve before it became one of the dumber religions in existence...
+James Bishop Let's not reject the possibility that this is what actually (maybe metaphorically) happened to poor Brian and that it is the cause of this epic rant series, in which case it's very human and understandable. Would you be able to preserve your sanity if Java somehow killed your beloved hedgehog?
I started doing software in the 80's, creating ERP systems for small organizations. This was before: big networks, object oriented programming, SQL, and widget/event centered user interfaces. My productivity then was DOUBLE or TRIPLE what it is now using modern tools. It drove me nuts dealing with one fad after another, each one slowing productivity and exploding complexity. The popular cloud-based DBMS I work with now feels like a ship with a thousand leaks. Nobody seems to get that Simplicity is the golden rule of software.
+Mark Mark For everyone simplicity means something different, which is also apparent in this video. Is it simpler to have one big function or several small functions that do the same but are not reused? For me, smaller functions are simpler, because the context is very small (if properly chosen) and I can very quickly grasp what the code wants to do, what I can change and what constraints exist. In the video, the opposite is proposed, and I've heard it often enough: one function is simpler than many functions.
He bashes mostly OOAD, not OOP language features well used, but yes... in an hour I mostly understood how the NES works (plus the basics of Go), and I had tried to get that from many similar emus with too much OO bloat (not to mention that a realtime thing like this certainly needs to be optimized, and here the optimization is NOT premature at all). It's excellent work!
To me the underlying problem is the conflict between theorists and those who actually need to create and maintain practical, working code. Theorists become more and more obsessed with their paradigms and adopt a "more object-oriented than thou" or "more functional than thou" view. But in real life there should only be one test: does using object-oriented design in your code make it simpler, clearer and cleaner? Then use it. If another program would be simpler, clearer and cleaner by using a procedural design, then use that. And if neither of those paradigms work then use a combination of the two (or make your own). Bottom-line: don't let any programming paradigm work against you in your code.
My friend, I wish you could speak your wisdom to all of my coworkers. I feel that many of us are unwilling to see when our tools are working against us rather than for us, especially when we’re only comfortable using that one tool.
Real theorists are busy with Haskell and Higher Kinded Types and Dependent Types etc., the really, really general code which is indeed difficult, as Brian said. They have no time for this OOP bs.
50:10 "In fact when I think about the complexity of a function, I don't really think so much of it in terms of cyclomatic complexity, of how deeply you're nesting loops and branches and so forth (I mean that is a concern), but for me the real measure of complexity is, well how many variables do I have to keep track of." Not sure how much of a fan you are of code metrics, but that sounds like an interesting idea for a complexity metric. How large is the state space, based on number of parameters, variables and references to globals.
For me complexity is interdependency. A single function with a lot of stuff going on might be complicated, but without any dependencies it's not complex. The important difference between the two is that complexity's price on maintenance and extensibility is polynomial, while complicatedness stays linear.
@@reoz2113 How is that different? Those micro-optimizations are all about data structures: Laying out and aligning data in a way that can be continuously prefetched by the CPU so you avoid cache misses, laying it out in a way that allows for safe concurrency - still while avoiding cache misses, ... Sure, you should have a good algorithm - but it doesn't matter that your algorithm is twice as fast if each miss can cost orders of magnitude more.
In Haskell, you can put a "where" keyword after a function definition and there you can describe all subfunctions (and each of them can have its own "where" block). They retain visibility of the parameters of the root function.
IMO the problem with Object-oriented is when you let it dictate your design and plan all of your classes before you've written your first line of code. Some languages force you to make a class first so you're already starting with a disadvantage.
@@chudchadanstud I would say it tends to have the opposite effect. There's no quicker way to lose sight of the simple fact that any software -- regardless of how sophisticated -- is doing nothing more than inputting and outputting and mutating data at the end of the day than OO design. The overarching design philosophy of OOP tends to make people quickly forget that, since it's not merely organizing data into bundles, but instead into capsules that hide the data.
Not only have you made very good points in your previous videos, but you also provided this example; kudos for this. I think most of the problems come from adhering too strictly to any particular paradigm, but I agree with the notion that imperative happens to be the one that gives the programmer the least amount of headache.
Here's my view on some of the things you said / showed:

a) Your criticisms of the Console pointer in CPU etc. are a bit... odd. It's not uncommon to have bidirectional child/parent relations. I don't see the problem in this case, especially since a real CPU is also connected to the rest of the system... otherwise it couldn't do much of anything. Okay, they could have chosen to pass in the current state of the memory each loop, I guess, but that would be even farther away from the real situation. What is especially odd to me about your criticism of this point: it seems you especially dislike OO design that sticks to principles just for the sake of it. I agree with that. Yet here you criticise this reference without any real reason except for principle. It's handy to give the CPU a pointer to the console, because then the CPU can easily access the memory. There's nothing confusing about that. Sure, it could have been a pointer to the memory, keeping it a bit cleaner... but that's about it.

b) Private, nested functions: I never saw the benefit of these. You still have the same work to do: define a complete function with a decent name and call it somewhere. The only difference, really, is you choose to put it at the beginning of the parent function and assume you'll never have to call that function from somewhere else. So what? It would be just as readable if these functions were defined on their own just above the parent function. On top of that, you wouldn't have to change your code if you realized later on that, contrary to your initial belief, you now have need of that function somewhere else. More important than where a function is defined is how intuitively understandable its purpose is.

c) I really don't see how inlining a lot of functions and creating longer functions helps readability. One example would be your main loop: I would much prefer a main method that basically gives an outline of what the program is doing by calling other functions in a specific order.
- InitialiseAudio(); InitialiseGL(); as separate functions I only need to look at if I actually care what's happening there. It's a really "boring" part of any program, as it doesn't add any understanding of its purpose. It's housekeeping. I don't put my vacuum cleaner next to my couch, because I don't need to see it all the time to know I use it to keep my place clean.
- MenuView / GameView taking care of their loop iterations on their own, for example, would clean up that main loop a lot... I really don't have to fully understand what each view does to get the big picture of the main loop. And if I do need to see what they do, I can easily check it out.

I like taking a "hierarchical" approach to understanding a program. Doing things like this, your main method would probably crumble to less than 100 lines of code, would still contain the essence of what it's supposed to be, and would be free of stuff that, to me, is irrelevant detail for my understanding of the main loop. I could then go and check out GameView and MenuView... since they tackle two separate issues: displaying a game and displaying the menu. (I'll sketch roughly what I mean below.)

I could go on, but perhaps you understand my point of view: inlining functions just for the sake of it doesn't help readability and understanding for me. As you yourself said in another video: when we try and understand the human body, we don't first look at microbiology. We look at the organs.
What your main loop here is doing is saying "there is a lung in your body and here is its microbiology" before telling me how the lung fits together with the rest of the organs. After watching this and your other two videos about how terrible OOP is, I have to say: you don't really make a strong point. Sure, you showed some really terrible OOP examples. Here, you showed a not-so-terrible example of OOP and turned it into an, IMO, not-so-great example of procedural programming. I would have liked your videos a lot more if you hadn't started with that "most important programming video you'll ever watch" bit only to continue without saying anything new: doing something bad is bad.
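To make point c) concrete, here's roughly the shape of main I'd rather read (toy Go sketch, every name hypothetical, untested):

    package main

    // Deliberately boring main: housekeeping first, then an outline of the loop.
    // Everything here is hypothetical; it's the shape I mean, not real code.

    type View interface {
        Step() View // run one loop iteration, return the next active view
    }

    type MenuView struct{}
    type GameView struct{}

    func (m MenuView) Step() View { /* draw menu, handle input, maybe return GameView{} */ return m }
    func (g GameView) Step() View { /* run one emulated frame and draw it */ return g }

    func initAudio() {}
    func initGL()    {}

    func main() {
        initAudio()
        initGL()

        var view View = MenuView{}
        for i := 0; i < 60; i++ { // stand-in for "until the window closes"
            view = view.Step()
        }
    }

The outline reads top-down; the microbiology lives in the views.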
Please read my comment again. I state that he dislikes OO design that sticks to principle just for the sake of it and ends up criticizing this specific design for no reason other than principle himself.
+Clairvoyant81 I understand what you mean. Especially this video weakens his own arguments, because he doesn't build on his own criticism from the first video. I think this is because he essentially just tries to disprove that OOP is the superior way to structure things. That is what everyone else is trying to convince you to think, as far as I've experienced it. In this particular example that you picked, the real problem is that OOP is supposed to be a working solution to the shared state problem by design. But that's simply not true. His example is simply a different approach with no big drawbacks. It seems he is not able to frame the real problems with OOP he describes on an already working project. Maybe he doesn't realize that himself. The biggest problems with OOP as I see it are the major limitations on dynamic growth as the full program unfolds.
+alexander kerbers I agree that it's a really bad idea to teach people that OO is the only way to go ever. But: I also think that it's a valid option one should always have in mind. Just as with languages, these paradigms are tools one should use when appropriate. Could you elaborate on the dynamic growth problem? As I see it, it's fairly hard to write code that grows well as the project continues, no matter what paradigm you use, but perhaps I misunderstood what you meant or you have some more specific issue with how OOP grows.
For me, OOP dogma really hindered my early years of programming. It made me hate programming and hate any code that I wrote and see it as utter dogshit, even if there was nothing technically wrong. I wasted so much time on questions such as... What work should be done in this class constructor? Should I split this class into two pieces? Should I move this piece of data from class X to class Y? Do these classes make sense philosophically? Should this function be a member function of class X or class Y? Should this member variable be public or private? Should it be const? Should it be accessed with getters and setters? Should I write const versions of my class member functions? Should helper functions be private? How do I write tests for private member functions? Then there were further questions like: should my functions be only 5 lines long? Should my classes have only 5 members? For the code I was writing none of this mattered! I should have been writing procedural code, and should have only started thinking about these questions if I wanted to wrap a public API around my code.
+Brian Will, I rather agree with your pushback against code "atomization" and the tyranny of overgeneralization and naming things that need not be named. Also subfuncs or use blocks sound nice. However, one of the main reasons I've heard for splitting up functions is to have smaller testable units. I'm wondering if you've found that writing such large functions impacts testability at all? Do you wish that you could write tests for subfuncs when you're coding in this style? Is unit testing something you approve of in the first place?
As a C99/Rust embedded dev, I can assure you, you cannot and should not test everything in your code. Of course utils, maths, physics functions or special modules should be heavily tested via unit tests (for power consumption, for instance), but if you want clean and readable code, you should not make every mid-process function testable. In almost all cases, it will lead to spaghetti code or to needing to simulate almost half your process in your unit tests. Instead, rely on performance tests or integration tests. Make, for instance, your process stop before and after your function and compare some well-designed features. A simple byte field with a clean environment can do the job. Of course you can't catch tricky errors with this method, but hey, it's mostly enough. Step by step, unless it's not.
I agree with what @zanzi8597 said, but I'd also like to add that in this hypothetical new language (or feature) that Brian Will proposes, I could also see how such a sub-section pseudo-function could be considered a separate testable unit by the language's compiler/build system/built-in testing framework. Like you could assign it some handle for identification, then in a separate section of the source file (or another source file entirely, but I'd prefer same-file), you could create your unit tests. But again, I'd say larger integration tests or unit tests of the outer function would often make more sense _overall._
I work in the automotive industry on safety critical software (e.g. a braking controller). We use the C language and it is horrible. Is it horrible because C is a bad language? No. It is horrible because inexperienced people (e.g. kids just out of school) write the code. No time to think about good software design, no time for refactoring etc. (Sometimes the existing bad code is used as an excuse to write even worse code.) But this seems to be the "industry standard", because those kids are so cheap and don't complain all the time... I don't think OO is bad, if used wisely. But when inexperienced, writing OO code usually ends up worse than writing procedural code...
Yes. C is a pretty horrible language to develop in, and no amount of "skill" makes it a good language to develop in. For the longest time it was the only game in town; that, however, doesn't make it good. It just makes it not as awful as writing raw assembly code. This whole notion that creators of major tools are somehow infallible is nonsense. If you distribute a program, and users start complaining that your program is shit, going around roasting your users is usually not a valid approach. Programming languages and coding methodologies are no different.
Following the systematic and clean approach used here, even the malloc calls and pointers of a small thing like this would be done well, by design; but anything with large amounts of data... neither C nor C++, IMHO.
@@antanaskiselis7919 I don't get it, but I'm only trying to learning programming. What's per se bad about c? c is the only thing that comes to my mind when i try to think of a language easy to read and therefore (re)write stuff in.
@@doesntmatter2017 I guess the most simply way to put it, because in modern day we have programming languages and tools which are that to C, what C was back in the day to writing forms of assembly. That's not to say that C is useless, far from it, there is loads of code written in C which is running pretty and still requires maintenance. However you would be hard pressed to see benefits of writing C code from scratch when starting something new now. And to extent in certain domains that's now even true for C++. Now that's not true for all domains, just huge chunk of them. Most of your common applications be in browser, desktop often even in stuff like IoT (internet of things) will not be developed using C or other closer to machine code language. Java, C#, Javascript, Python or even php will be preferred. Question is why? Well, because these languages handle a lot of stuff what languages like C does not. One of the more prominent factors would be memory management which is common concern not only for application breaking or causing undefined behavior but numerous security issues this causes. Ever wondered what all those updates in Windows for example do, even though no new features are added? Huge part is due to mismanagement of memory using languages like C or even C++. Here, microsoft report on it: visualstudiomagazine.com/articles/2019/07/18/microsoft-eyes-rust.aspx When we bump into common knee jerk arguments like Klaatu Barada Nikto implied. Which goes along the lines that it's just bad programmers doing this supposedly. Which is just blatantly false narrative. Reality is this, while C was workable back in the day, when applications were relatively simple. Now, when scale of said applications and interconnectivity between those applications became a thing, even the small errors made with C can be magnified too huge degree. On top of that we are also living in post-moor's law where multi-threading and parallelism is becoming more relevant, and languages like C has close to nothing on it. So if there is a way to remove the human error factor from it we should. That's not to say that there aren't domains were you kinda have to use C or C++ (the latter is fair to say is becoming better), however, due to certain design and backwards compatibility issues, you'll probably see languages like Rust which becoming 'competitor' to C / C++ becoming more prevalent in these domains.
The reason for the "Memory" pointer pointing to the "Console" -- at least in my opinion without having looked at the code -- is that the 6502 uses memory mapped IO and the "Console" structure actually represents the physical hardware connected to the CPU via the memory/IO bus . Lots of NES emulators, or 8 bit console emulators in general were conceptualized this way (I know this to be true as I've done a lot of work with NES emulation). In other words, the "Console" structure really represents the physical interconnects between the hardware and is used to facilitate data transfer between those components the same as the memory/IO traces on the mainboard.
Just to clarify: The "Console" structure should have really been called "Bus" or "MemoryBus", etc., and it's purpose is to act as a tree trunk connecting all the various hardware branches (CPU, PPU, MMC, APU, Cartridge, etc) together as there was a lot of direct communication between the various hardware in the system that completely bypassed the CPU. It's also worth pointing out that there were literally dozens of mappers for the NES, some of which are extremely simple while others added additional hardware to the system by directly interfacing with and extending the APU and PPU, in addition some were contained in add-on hardware but not the cartridges themselves. The CHR-ROM could also be instead RAM and since there was no direct way for the CPU to access the CHR memory it instead had to read and write to certain addresses on the memory bus and the Mapper would perform the read writes.
I've been doing procedural PHP programming for 25 years. Never understood the need for pages of getters and setters and abstractions - when you can just write straight A-to-B functions with half the code. Love these videos.
@@keidaron My problem with OOP is polymorphism and classes inside classes, layers upon layers of abstraction; add the verbosity of Java on top of it, and what would be a simple task becomes a mountain of ...
The Mappers is a perfect example of why it's better to use an interface instead of a switch. Sure, it's not so bad when you only want to support 4 mappers... but there are HUNDREDS of them, imagine what the code would look like if you supported most of them... Some of these mappers have very complex logic, putting the logic for all of them in one spot and doing a switch to decide which mapper's logic to execute just muddies the waters and makes the code less clear.
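Roughly what I mean, as a Go-style sketch (hypothetical and untested, with only two mappers shown):

    package nes

    // Each mapper lives in its own file and only knows about itself.
    type Mapper interface {
        Read(addr uint16) byte
        Write(addr uint16, value byte)
    }

    type NROM struct{ prg, chr []byte }

    func (m *NROM) Read(addr uint16) byte         { return m.prg[int(addr)%len(m.prg)] }
    func (m *NROM) Write(addr uint16, value byte) { /* NROM has no registers */ }

    type MMC1 struct {
        prg, chr       []byte
        shift, control byte
    }

    func (m *MMC1) Read(addr uint16) byte         { /* bank-switched read */ return 0 }
    func (m *MMC1) Write(addr uint16, value byte) { /* feed the serial shift register */ }

    // One small factory switch at ROM-load time, instead of a switch inside
    // every read and write on the hot path:
    func newMapper(id byte, prg, chr []byte) Mapper {
        switch id {
        case 0:
            return &NROM{prg: prg, chr: chr}
        case 1:
            return &MMC1{prg: prg, chr: chr}
        default:
            return nil // a real emulator has hundreds more cases, each mapper in its own file
        }
    }

The switch shrinks to a one-time dispatch, and each mapper's logic stays self-contained.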
+Evan Teran Good example. However, it would be unhelpful to claim that switch statements are almost always wrong, as Fred George has (saying he would write a case statement "once every 18 months" and "feel really bad about it").
+fburton8 You are correct. I over-generalized. I should have said "The Mappers is a perfect example of WHEN it's better to use an interface instead of a switch." switch statements are perfectly fine IMHO, and interfaces with virtual dispatch are also fine. All about picking the right tool for the job.
@@EvanTeran Your mistake is thinking that there are only two options! Interfaces aren't a replacement for switches in this case anyway because you still need, at some point, to actually attach an object. At that point, it doesn't matter if you're using an object implementing some interface or a function pointer.
You seem to have forgotten that functions exist. Also, have you ever written an emulator? It's very common to have each *machine instruction* defined inside a giant switch statement!
@@recompile yes I've written an emulator, several in fact. I didn't forget that functions exist, I was pointing out how a switch statement can be bad for maintenance and results in bad over coupling of mostly unrelated modules. Why should code for an MMC3 implementation have ANYTHING yo do with code for an MMC1? Why should they even be near each other? The answer is, they shouldn't, because it just makes things less clear. So sure, you can use functions (and you should) but if you're using function pointers, yeah that'll work... But now you've just manually implemented interfaces. Which you can do of course (it's what the Linux kernel does in many places), but you're still using interfaces... Just without the benefits of a more modern syntax allowing it to be written with less code.
Avoiding the unnecessary encapsulation like you've explained has actually made my programming even more reliable, extendable, testable, etc. than when I tried to fit everything to an OO paradigm
I know this is never gonna be seen by anyone, but I had a bit of a revelation while watching this so I'm gonna drop it here. I realized a big difference between people like Brian with a real distaste for OO, and people who absolutely fucking love OO. Brian's priority is to write "straightforward" code so someone coming behind him can easily understand the entire code base. A lot of these pure OO people do not care about that. Their priority isn't to have the entire system be digestible, but to structure the whole thing in such a way that any idiot can come behind them, only worry about some small corner of the code, and get something working in that little sandbox. Those two very different priorities are going to lead to very different styles.
I saw it and mostly agree. I'm in the camp that doesn't see OO design as useless, just grossly overrated and used far too often. The biggest problem IMO is the complexity that a mind which readily favors OO -- as the default design paradigm -- tends to introduce at the level of integration. If we're just working in the corner of the definition and implementation of a single object as a data capsule, then it seems like a no-brainer that the object is simplifying things. Yet when we step back and look at the big picture in cases where there are many objects with complex interactions with each other (that is to say, looking at things at the inter-object level instead of intra-object), that's when a lot of people will find the objects contributing more complexity than they reduce.

One of my main observations, even just using encapsulation (bundling hidden data together with exposed functions), is that when two or more capsules involved in a function/method interact with each other in subject-object relationships, at least one of those target capsules will generally become weaker in terms of its encapsulation. This weakening tends to happen very rapidly, to the point where the public interface becomes so leaky as to no longer justify the benefits of having it.

As a concrete example, consider a few hundred pages of design requirements like this in a simple video game:

>> Healing potion: heals 5 HP for any creature that drinks it.

At this point we might abstract away the concept of a Creature and a usable Consumable, of which HealingPotion is a subtype. Then in the concrete HealingPotion's overridden use(Creature) method, it heals the creature, so we need a generalized method in the Creature abstraction to modify its HP. Even the most generalized design is now already weakening the data encapsulation of the Creature's HP. As we get hundreds of pages of rules like this, the abstractions will tend to become less and less abstract and the encapsulations will become weaker and weaker: the designs will progressively become leakier.

Let's also throw a wrench into the process:

>> Healing potion: heals 5 HP for any creature that drinks it. However, trolls become poisoned rather than healed through its consumption, as they have an alkaloid allergy to the nightshade ingredient in the potion.

Already, any object-oriented design here is going to start getting ugly pretty fast, and we're still just dealing with one simple design requirement for a document that spans several hundred pages. Meanwhile, functional and even careful procedural designs don't progressively become exponentially more complicated as we introduce more and more design requirements like this.

It's not uncommon that the run-of-the-mill OO programmer will require tens of thousands of lines of code to implement even the original Super Mario Bros code on NES and produce binaries that span megabytes, when the original developers managed to do it in ~16K lines of 6502 assembly code (and assembly is way, way more verbose than even C). Meanwhile, C programmers have managed to do the same with several hundred lines of code, and functional programmers can probably do it in even less.
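For contrast, the boring procedural version of that requirement stays boring even after the troll exception shows up. A rough Go sketch (hypothetical names, not from any real codebase):

    package game

    type Race int

    const (
        Human Race = iota
        Troll
    )

    type Creature struct {
        Race  Race
        HP    int
        MaxHP int
    }

    // drinkHealingPotion is a plain function over plain data. The troll
    // exception is one visible branch here, not a renegotiation of which
    // object "owns" the behaviour and how much of Creature it may see.
    func drinkHealingPotion(c *Creature) {
        if c.Race == Troll {
            c.HP -= 5 // alkaloid allergy: poisoned instead of healed
            if c.HP < 0 {
                c.HP = 0
            }
            return
        }
        c.HP += 5
        if c.HP > c.MaxHP {
            c.HP = c.MaxHP
        }
    }

Each new rule from the design document adds a branch or a function, not a new layer of abstraction.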
+Paul Fox --- The "Don't Recompile" trick wouldn't work on me. That's the first thing I do: make sure I can build the project successfully. Cool article tho. Thanks for sharing. I like the idea about the "shoemaker has no shoes"!
You have no assurances you won't be replaced in any case. Sounds like a cop out excuse to write poor code. Seems to me that you're more likely to keep your job if you're good at it.
Interfaces are good exactly for the reasons you listed. If you write a single, relatively small program just by yourself, one that likely won't change much, then you don't need interfaces. But if you are a professional programmer, that's rarely the case. Usually you will work with others, on multiple large pieces of software, with changing requirements, that you may have to maintain for several years. That's a whole different beast. If you are going to work for years on a project, the overhead of proper interfaces and generalization is relatively small, while the productivity boost is huge.

As you mentioned, APIs should have proper interfaces. When you are working on a large project, it's usually best to start with writing your own APIs. This largely depends on the language. In C++ you can't really do anything without first building your own tools. If you use high level scripting languages, you may be able to skip this step, if your software is not too complex, because you get a bunch of tools for free (which usually use OOP under the hood). Maybe this is why you can't see the value of OOP.

You talk about utility functions, for which generalizations may be required. Now you are starting to understand what this is all about. If you write large software, or especially if you write multiple programs, you should have a lot of utility functions. You can even write your own library, then all projects can use the same stuff without copying and pasting all the time. When you have projects which are changing a lot during development, you might be better off if most of your code is generalized. You save a lot more time when you have to make major changes to the code than you spend on generalizing everything. Of course too much generalization is bad too. You waste time designing and writing code that will never be used, and also make it harder to use the parts you actually need. The trick is that usually you don't have to decide in advance. You can generalize parts of the code when it's clear it will pay off. But then you'd better do it right away, because if you wait too long, it can get too entangled with other parts, and then the cost of refactoring goes up exponentially.

You talk a lot about sub functions. What you want is pretty much the same as private functions in a class. By declaring them private, you can guarantee that they can't be called outside the class. Also they are not in the way in your main function, don't have access to local variables in other functions, and the order of declaration doesn't matter.

I think you are beginning to understand why smaller functions are better. You don't have to wonder about their purpose, if they are properly named. The function's name should tell you exactly what it does. If that doesn't clarify it enough, you can still write a comment. Again, the goal is to make the code readable, not blindly following one specific rule. Long functions have more local variables, as you mentioned, but for me that's not the worst problem. Long functions tend to do more than one thing, and then it's hard to tell which part does what, and how those parts interact. The other thing is, if you can't see the whole function at the same time, you have to work from memory, and that is much harder and more error prone. I've seen code where it was a challenge to figure out where one function ends and the next starts.

Not all languages have type switches. C++ surely doesn't. Anyway, you just reinvented polymorphism.
I think if you don't know what the code does, it's easier to understand a properly written polymorphic class, than a type switch, because the class and the method names should tell you exactly what's happening there. Extending the code is also easier, because the compiler will tell you if you forgot to implement a pure virtual method, but it won't tell you anything about type switches. And finally, this is again an example which doesn't really need much OOP in the first place.
Small, testable functions that operate only on their inputs are best. Using curly braces within one scope just to declare another scope means that that code should just be a function. I don't wanna read 800 lines trying to find the 50 I care about. A good procedural function is just a series of function calls.
There is a time and place for everything. It might come as a shocker to you, but even goto is valid and elegant in some cases. The problem is people like this who associate coding paradigm with their identity and become offended when someone contradicts them.
Well said. Too many functions without flowing code are actually harder to read, as you need to redirect your attention to understand each function. But with complex code with a large number of moving cogwheels, it's inevitable that readability suffers.
@@grimendancehall It's not about clicking, it's about potentially juggling multiple files and disconnected pieces and trying to glue them together in your head.
Having worked in legacy codebases using both styles, I MUCH prefer the OOP style. I feel like your style of inlining is optimized for an end-to-end reading of the code. I rarely have to do this. (I actually can't remember the last time I had to do this.) I'm usually looking for a specific piece of functionality. When logic is broken down into well-named pieces, it is easier for me to see, at a glance, what something is doing. Looking through the before and after NES code, it was much easier for me to find the main game loop in the OOP version. Visual Studio has great search features which make it pretty easy to follow calls around, and see how many places a method is called. Also, virtual methods are great. :)
I’ve always wondered about this one bit… he changed “console.PPU.step()” into “stepPPU(console.PPU)” for the sake of not having PPU be a class… but if it was Rust, that step function would be defined in the impl for the PPU struct, which means you would end up back with “PPU.step()”. How is that any different from PPU being a class?
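To be concrete about what I mean (rough Go sketch, hypothetical fields, untested):

    package nes

    type PPU struct {
        Cycle int
    }

    // Method form: console.PPU.Step()
    func (p *PPU) Step() {
        p.Cycle++
    }

    // Free-function form: stepPPU(console.PPU)
    func stepPPU(p *PPU) {
        p.Cycle++
    }

Mechanically those look identical to me, so I assume the difference he cares about isn't method-vs-function syntax but whether PPU's fields are treated as private state that only its own methods may touch, or as plain data that any function in the package can operate on. Is that it?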
24:05 I'm torn on this one. I like the OO notion of "This is an A and this is how it foo()'s, this is a B and this is how it foo()'s". But I also have to say from experience that having all of the foo logic in one place instead of having it strewn across a billion files is nice. I remember getting flamed on StackOverflow or something when I was complaining about not being able to find the line of code that was actually being executed because of a chain of like 5 classes where each inherited methods from the previous and overrode a bunch of them. The bug with the object of type A wasn't in A.foo() because B extended A, but the bug wasn't in B.foo() either because C extended B, but the ... etc. Had there just been a giant switch, it would have been a lot easier to find. Again, the usual caveat here where the codebase itself was just trash, not necessarily OOP's fault.
OOP basically replaces introspection and case (or switch) statements with compile time classes. That's it. The runtime cost of that case statement on a 1MHz eight bit CPU is obviously in the tens of microseconds for every time that the CPU has to decide which part of the code to call. Now do this on a 4GHz 64 bit CPU and the runtime cost is in the nanosecond range. It's completely irrelevant compared to the time it takes to fill the instruction/data caches and the instruction pipelines. Other optimizations will far outweigh the efficiency gains we can get from moving class dependent method selection into compile time. That's the reason why OOP was invented in the 1960s... because it made borderline sense. Today it doesn't make any sense, whatsoever.
My idea how to make this better is you can still have objects, but you have piping like in Elixir so it still has a similar syntax. In Java you have subject.verb(object) but I would do subject |> verb(object) which is the same as verb(subject, object). It has the benefits of both, and you can use it for more.
Loving the series. The hardest part of actually becoming an efficient programmer is unlearning all the OOP brainwashing. It can be useful for high-level structuring so I've been starting with C++ then reducing everything into procedural functions and tightly-packed data structs. Just by doing that I reduced static memory use and compiled program size at least 10-15%+ (which is a lot when you only have 32kb.) And holy damn, nearly 20 years of C and I never knew you could nest a function within a function, I had to try that right away.
You can't nest functions in standard C. It's just that GCC supports it, so this feature is compiler dependent. Or perhaps you were just talking about nesting functions in general, not in a C context.
You should just use C. Or use C++ as almost strict C (if you like the extras the C++ standard provides like parameter references, templates, the preprocessor, etc...). Which is probably what your code ends up looking like anyway.
TekkGnostic >didn't know you could nest a function within a function
Wat. I learned this in school, you know, the place that everyone says is pushing OOP to be super popular. And I only really started programming a couple years ago, at the same time as I started school. I think the main takeaway is that everything has its place.
I switched from a CS minor to a Math minor in college. If my classes were taught with the clarity that you present here, I probably wouldn't have had to. Granted, one difficulty was the fact that over half of the teaching assistants from India spoke such incomprehensible English, and explained things so badly, that the only way you could get any help from them was to point at a bit of code where you were stuck, and have them rewrite it for you.
I think one of the main things I've learned from the video is that there are plenty of people in the comment section that I wouldn't want to work with. 😂
47:00 This is a fantastic point. Atomised code is misleading because it looks like it is more meaningful, more thought-out and more general than it actually is. This is especially true in OOP, because if you write a small class, according to OOP, the class is meant to have responsibility of its own. It becomes an entity in itself, with a nice name and an intuitive description of its responsibilities. This is misleading because the class may only be used once and its methods may only be designed to work correctly in one particular case. The code is dissatisfying because it feels incomplete, and it becomes tempting to over-engineer the code so that the class is more robust and lives up to its name.
I'd add on top that such non-generalized small objects and functions -- especially if they produce side effects (or mutations to the internal object state) -- also cannot be thought of in isolation meaningfully without comprehending the bigger picture around them. The fact that they're all separate and teeny turns them into something akin to puzzle pieces that we have to piece them back together to even start approaching functionality that makes sense to a human in a reasonably high-level way. It's like the other day I was cleaning out my closet and I found this weird string. It looked kind of like a shoelace but it wasn't a shoelace: too thin and round for shoes. So I understand what strings do. It was self-explanatory and "self-documenting" in that sense... but WTF is this precise string for, exactly? And I had to dig through my closet to try to figure it out, and finally I did... aha! It's a string for my powerball (hand grip training equipment) to start rolling the ball inside the gyroscope. Finally, I get it, but it took a long time and a lot of digging and asking questions to figure it out because the string was designed only to work for that powerball and nothing else. It would have been so much simpler if it was integrated as part of the powerball and not a separate thing so that it's immediately obviously how and where it's supposed to be used.
I think there is a way around the fact that a local function can access the containing function's scope - at least this works for me in C#: If you have a function that's 500 lines, it's reasonable to put that in its own file anyway. But if the whole function is in its own file, then simply make it public, and for any local functions you'd like to have, instead make them global but private. Now it is clear to the reader that these private functions are only of concern in that file, but at the same time they don't have access to any state from the public function.
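Since the code in the video is Go, the analog there would be roughly this (sketch, names hypothetical, untested):

    package emulator

    type Console struct{}

    // RunFrame is the one exported entry point in this file.
    func RunFrame(c *Console) {
        cycles := stepCPU(c)
        stepPPU(c, cycles*3)
    }

    // Unexported helpers: visible anywhere in the package (so still "more code
    // to grok" than a true subfunc), but they cannot close over RunFrame's
    // locals; everything they need is passed in explicitly.
    func stepCPU(c *Console) int    { /* ... */ return 1 }
    func stepPPU(c *Console, n int) { /* ... */ }

Not quite file-private like the C# version, but the important property holds: the helpers can't silently touch the big function's local state.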
Long functions really defeat the point of functional programming. The reason you break up functions is so the dependencies of that block of code are clear. "This code relies on these two pieces of information, and returns this piece of information. Other than that, it doesn't need to worry." It makes code easier to think about. If the code is inlined, you don't know what local variables from the last 400 lines a particular block relies on or what it's doing to the next 400 lines. "Reducing the surface area" is not a worthwhile reason to discard that.
@Keeper Holderson No - the point is to make code easier to think about, more testable, and to reduce the number of levers which might go wrong. We discard classes because they're ambiguous - does this method rely on other code? Does it affect other code? The same principle applies to functions. The longer your functions, the more complex every single line in the function becomes, because it could potentially rely on (or affect) any of those other 400 lines.
For exploratory coding, sticking to large functions feels correct, since the pieces you can play with are readily available in scope. Until you are really settled into a design or you see massive repetition, splitting code along arbitrary boundaries is actively limiting your own experimentation. Also for future devs including yourself, changing features around is easier with fewer boundaries to rewrite. I really like pulling out a maximum of 20 functions that I use all the time as utility functions, where a reader starts to "get it" pretty quickly with my established vocabulary within my large procedures. I'm happy with this and don't feel obligated to narrow down functions, as long as I can do some E2E testing to confirm everything works. Not doing rocket science or servers though, just creative tools
@@molewizard Brian Will never talked about functional programming, he was talking about procedural programming. Are you confusing the two? Anyway, the separation of concerns and functional encapsulation you're advocating for, those he did acknowledge. That was the whole point about those inner sections of longer functions that he'd like to see; they are essentially inner functions in all but name, but without having outside local variables in scope without being explicitly passed in. They could even be considered testable units, with a sufficiently advanced testing framework.
The stepSeconds fn is a nightmare. "I would have moved these out, but they are only used once" -- just leaving the tester to die, or to wait for the appropriate refactor.
I saw a video recently by Anjana Vakil called 'Oops! OOP is not what I thought' that totally changed my perspective on OOP. The gist of it is that Alan Kay made a miscalculation (which he has himself lamented) when he coined the term "Object Oriented Programming" that has confused whole generations of programmers. The point wasn't to make objects supreme. Kay was coming from a molecular biology background, and his thought was that programs would be best built as structures composed of smaller structures which are composed of smaller structures and on and on. For this goal, messaging is the most important piece, NOT the objects. It's how the bits of code communicate that allows the pieces to be independent. Inheritance isn't a core principle, nor is polymorphism. Encapsulation is handy but not necessary, etc. Contrast that to what is taught today, and it's clear something went very wrong. Alan Kay's idea of OOP was kissing cousins with functional programming, not a polar opposite as it is done today.
20:51 "We've divided everything into these separate classes that are supposed to be self-contained and encapsulated and yet these objects are effectively reaching into each other and calling each others' methods in a way that totally defeats encapsulation." Well said.
Try that in any software development. You're gonna see how quickly modifying the code and sub-branching new mechanics will fuck you up, to the point you don't know what works and what doesn't. This dude never worked on huge workflows. Do not listen to this.
@@ph0ax497 Can you be more specific about what you mean when you say "modifying the code and sub-branching new mechanics"? I don't think this man is saying that you should make massive changes to an already working system just because the system is object oriented. I think he's just saying that the system will be very difficult to "modify" and "sub-branch new mechanics" onto because object oriented programming has imposed a murky, unnecessarily complex and unclear structure upon the program. Or are you saying something else?
@@richdog490 In every software development cycle you are bound to add new functionality to your code. That's the point. That's the programming. When you have an already working solution, and you are just rewriting it to your weird tastes, then it's whatever. Try it in massive application pipelines, like for example Photoshop. When you have a project with millions of lines of code, and an astonishing number of functionalities, you just can't do what this man is proposing. Dependencies upon dependencies will create problems and bugs which will be close to impossible to fix.
Your channel is a goldmine. I don’t know Go but I started to pick it up about halfway through as I was listening. And what a fascinating project through which to learn it!
Liking people just because they have strong opinions is not a good idea. Any authoritarian a**hole will have strong opinions and try to shove them down people's throats. It doesn't make them good ideas, and I wouldn't respect anyone for this. Quite the contrary for me: I immediately distrust anyone who walks in and starts being all in your face about what he thinks. That Brian Will, though, seems to know what he is talking about, so I'll give him the benefit of the doubt.
@@recompile That doesn't conflict with what I said. I didn't say people with strong opinions tend to be experts. I said experts will have strong opinions about the minute details in their field.
There's a way of declaring subfunctions in C++ (idk if it works in C). I saw it done by my friend. The general idea is to declare a struct inside which a function can be declared. Since you can declare structs inside functions, you can safely use it as a wrapper for your function-inside-function declaration. This has been done in MSVC but I believe it will compile in gcc too.
It would look somewhat like this (I don't remember it in detail, but it should be about right; note the member function has to be static so you can call it without an instance):

    void func1() {
        struct funcWrapper {
            static void func2() {}
        };
        funcWrapper::func2();
    }
There are lambda expressions in C++:

    auto subfunc = [](params) -> ret_type { };

I believe that is the syntax. You can also put references to variables between the square brackets to tell which variables you want to be accessible from the subfunction, or a single & if all of them should be accessible.
15:40 Just so you know, strobe is more of a clock/latch signal telling the controller when to reload the shift register with the current buttons (or at least that's what strobe usually means for the NES controllers). I don't know why it would be persistent in the struct though. Index is probably the bit in the shift register to expose to the CPU next.
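For reference, emulators usually model the controller roughly like this (Go sketch, field names hypothetical, untested), which would also explain why strobe gets stored in the struct:

    package nes

    // Controller models the NES joypad's shift register. Writing to $4016 sets
    // the strobe; while strobe is high the register keeps reloading from the
    // current button state, and once it drops, each read shifts out one bit.
    type Controller struct {
        buttons [8]bool // A, B, Select, Start, Up, Down, Left, Right
        strobe  byte
        index   int // which bit the next read will return
    }

    func (c *Controller) Write(value byte) {
        c.strobe = value & 1
        if c.strobe == 1 {
            c.index = 0 // keep reloading from the current buttons
        }
    }

    func (c *Controller) Read() byte {
        var value byte
        if c.index < 8 && c.buttons[c.index] {
            value = 1
        }
        c.index++
        if c.strobe == 1 {
            c.index = 0
        }
        return value
    }

The strobe has to persist between the write and the later reads, which is presumably why it ends up as a struct field.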
Being the lead designer of a larger app (2m lines of code as of 3 years ago), I like to say we use C+, because C++ breaks down in the real world. I'm happy to use encapsulation when it fits well. But developers that use OO just for OO-ness' sake get their hands slapped. So in our app small classes like PhoneNumber and SIN make sense. Large classes like UserInterface also work nicely (we talk to specialty hardware like forklifts and such). So, it may be all coded in C++ but basic C developers wouldn't have too much of an issue with most of it. I don't think OO is garbage. It's just that a lot of people use it in inappropriate ways. When all you have is a hammer, everything looks like a nail. So if you use OO on everything then you sometimes end up with garbage.
Dan Kelly Yea, Java has some issues. But be careful with the "Use the right tool for the job" kind of thinking cuz you'll just end up making all the Java developers unemployed. :p
Lucid Moses We know that Java came about because of the financial benefit of having a language that will run on any platform. Which IMO is the only upside of Java.
Dan Kelly Well, I don't think Java is the best there either. Just to pick a sample of radically different languages, I think C, Cobol and Forth all do better at cross platform than Java. Of course it is on a lot of the current popular ones. I'm not 100% on this, but I think Java is written in C, so Java can't even exist on an architecture that there isn't a C on. So, again, if you're going with best of breed on cross platform..... Java developers are unemployed.
+Lucid Moses C is not cross-platform in the same way that Java is. With C code you have to compile for the individual platforms. For Java - assuming you're using only the subset that is truly cross-platform and not relying on platform-specific stuff - you compile once and the bytecode is interpreted/JITted on every device. Fortunately we now have alternatives to Java that make it simpler to compile once, run (almost) anywhere. Mono lets me write programs on my Windows PC that will happily run on a fairly wide variety of platforms.
Learned with C as a kid and then never got anything done as a hobbyist for 30 years because of OOP. I overthink things, and OOP had me spending most of my time designing and bulletproofing my code rather than finishing anything. Obsessing about details and consistency as I learn new language features.

I tried to use Unreal, and I couldn't do it. Everything derives from a first person shooter. Tried to make a simple 2.5D card-game-like mobile app, and I've got to derive it from the first person shooter class. I jest, but it sucks. Then Unity is a fragmented mess of shit.

But my new approach is lean and clean since I am the only dev. I don't write or think about anything that is not necessary. No asserts, null checks, etc. unless there is a runtime condition that might cause it. Why, all these years, did I assert things that I know I will never get wrong, or things that already throw an exception? I must remember that C# in Unity is a scripting frontend to a C++ codebase. C# tricks and language features are sometimes unnecessary and undesired due to how Unity does some things.

I hate OOP and how it screwed me up so that I never made it as a self-taught developer. Finally getting things done since I gave up spending hours implementing generic interfaces and nonsense for a game that I am the only programmer for! Code I will never reuse in whole, and will just maybe copy/paste some pieces with variable names changed. OOP is for people writing animal and car simulators. Thanks for the video. I hope you survived the latest round of layoffs at Unity. That engine is a mess too, and now I wonder if ECS will continue to be developed.

edit: like why make properties with {get; set;} when a public field is fine? As someone scripting the Unity engine where I'm the only dev, I never went to school and got the class on how to avoid spending hours figuring out if I should write my get/set with line breaks or not, and whether I should go back and fix every source file every time I change my mind. Do they teach how to avoid that in college programming? Is it by not giving a shit about design patterns?
I don't have an answer for a lot of that, but as for properties: most of the time, a public field is the best option. However, if the variable needs to always be in a valid state (say, a pointer that can't be set to null), then you would want a getter or setter to enforce that. But then again, the getter and setter would be an interface for the clients of your code, not for yourself.
OOP is literally just handles and functions. What's so hard about that? What were you overthinking? Interfaces are just function pointers but nicer and safer. OOP provides a way to have safer handles rather than throwing around naked pointers and hoping someone remembers to clean them up later.
With the approach you have here, this is great if you know that the program you have isn't going to change significantly beyond a few hotfixes. With a huge code base, your approach would take quite some time to refactor, but it's not impossible. I suppose the best medium is to either start out in an OOP mindset for the main framework and clean up later, or plan everything out initially to arrive at the final result of 9 files and 59 functions, but that would take a lot of foresight that even a god would consider impossible, because of how all the individual components behave with one another at run-time. I understand your approach and it's wonderful if you know the entire code base front and back. However, programmers who come into this revised project may be confused as to why everything is hot pressed into some very big functions.

For those hating on the video and calling it "garbage" and "utter dog shit": do not dismiss the basic code base refactoring lessons that this video presents, such as removing functions that are only used once/never and instead inlining the code from that function into the place where it is used, among other methods typical of his procedural methodology. If everyone is so concerned with the extreme reduction of the code base from inlining and still wants to visualize the code in little chunks, consider this: if the function is only used in one place in the code, move that function to the place where it is used and make it a static inline function (although this is more particular to languages that support this kind of inlining). If you notice that there are too many static inline functions, that is when you have to step back and reconsider how the code base is structured at a high level.

Having said all this, this is a much better emulator tutorial than most, because it also considers the architectural decisions that are made when working with this big code base while illustrating the general interaction of these components and why they act the way they do. Highly underrated video on removing needless OOP components and balancing procedural practices with OOP in this code base.
Don't know why you would start with OOP and switch over, since OOP is way more inflexible than more imperative styles of programming. It makes way more sense to start imperative and rewrite the parts of the code base which operate like classes into an OOP style. The only issue with that is: why waste time rewriting perfectly good code to fit an OOP style with no intrinsic benefit other than fitting some backwards sense of elegance?
@@rosangelaserra4552 Good fundamental code design - sensible high-level decisions that focus on the data being used instead of obsessing over OOP UML nonsense (see any Tech with Tim programming stream) - means you're not forced into a massive refactoring like the one in the video, or stuck living with the code that originally inspired it. If it can be done in an elementary way and it accomplishes the job, so be it, but don't be a slave to a dogma or paradigm. Make code as simple as possible, but no simpler, even if that means manually inlining functions that are only used once. You might accidentally find some bugs after inlining, like John Carmack did.
OOP should be thought of as a way to glue algorithms together in a modular, maintainable form. Obviously, algorithms themselves don't need OOP (say, pathfinding in a game), and they suffer from it. But if you want to take an algorithm and make it integrable into a larger system in a way that is extensible, maintainable (by a team of more than one person) and well documented, then OOP is pretty good for that. This is why, whenever people bash OOP, they give examples of small self-contained programs/scripts (written by one programmer) that do just one little thing. Of course OOP isn't needed much there; it's like using a cannon to shoot a sparrow. People misusing OOP for everything doesn't render OOP garbage.
OOP apologists merely DECLARE that OOP makes things extensible and maintainable. Where is the proof? What percentage of software needs to be "extensible"? Why is it whenever OOP produces a mess it's always because "OOP isn't the right tool for the job" or "people misuse OOP". Then why is it taught and promoted as a universal paradigm that everyone should use? When exactly IS OOP "the right tool" for the job? Any concrete examples? Any concrete examples of people not "misusing OOP"?
So it might be better to think of Object-Assisted Programming rather than Object-Oriented Programming: things like the CPU or the View - anything that represents data/state it can act upon or be acted upon through - are tools to use in your program, instead of a goal where every piece of code is sub-sub-subdivided into one-liners that need do-er 'objects' to work. Also, I see a lot of static/singleton possibilities in the code that would prevent having to pass classes around, which is exactly what caused the weird structure in the original code.
Rik Schaaf i like that phrase "object assisted". It is how I approach my c++ code. it encourages me to think about my problem and not make classes for everything ... my code is usually simpler and faster to write
Around 32:11 you mentioned having inner functions at the end of their enclosing function rather than the beginning. This is actually possible in JavaScript due to function hoisting (all the functions declared in a scope are "hoisted" to the top of the scope ahead of any code during interpretation) and I've found it quite useful in practice.
Hoisting is actually considered a flaw dating back to the early stages of JavaScript's history, and it's largely avoided in modern (ES6+) style, mainly because using something before declaring it in code can not only be confusing, but can also lead to unexpected behaviour. For example, variables declared with "var" are allocated when hoisted but have no value yet, so code like this is generally bad: doSomething(); var doSomething = _ => Math.random(); - the call fails because doSomething is still undefined at that point. Using "let" or "const" instead of var turns that into an immediate ReferenceError rather than a confusing "not a function" TypeError, but then you don't get hoisting at all.
The only thing I'm fully mystified by is the discussion of "subfunctions". He says around minute 33 that he wants them hoisted and unable to see the enclosing scope. So then... why not just write them as private functions? He doesn't want private functions because he says that's more code to grok and requires jumping from one place to another when reading. But doesn't having hoisted nested anonymous functions result in the exact same thing? Small tiny functions, and function composition, please.
Hm i'd say it does not. In his version, all the 'private' functions are contained in the one function that actually uses them. If you are not interested in that function, you can just skip over its specification and thereby also skip over all private functions that just concern the one function you're not interested in in the first place. I really do think this is indeed more clean
@@samuvisser That's just garbage programming. Try modifying the code and adding additional mechanics, functions, and all the other stuff considered "code development". Good luck with that when all your code is interdependent. Fucking yikes.
@@ph0ax497 What the heck are you saying? That you want to account for the possibility that whoever maintains the code might want to use stepPPU outside of the StepSeconds functions? Why would you want that... We have one function that increments the nes logic. There should never be a need to call a part of that logic outside of said function. The point of the subfunction was that it's complicated/involved enough that he doesn't want to make it a big, inline chunk, but it _will only ever_ be used inside this function, so he clarifies that as well to whomever reads the code in the future.
@@volbla what's the point in using overly complicated code? If something is overly complicated, then it means its not factorised properly and you need sit down again and rethink the problem. 99.9% of succesful code was the most simple solution to the problem. I ve just pointed out, that OOP was made to sustain decades of software development, without need to scrap whole code, and write it from the beginning, which non OOP solution on the end always give. My friend work with Media Composer as software development, and without QT and strictly OOP, it would just be impossible to achieve. And there is code that was written pre 2000 year. Almost everything changed since then, and yet it is still compatible. Try that on functions only. Good luck.
@@ph0ax497 Did i defend overly complicated code? I don't think i did. These two versions of the nes emulator do the same thing, so they're equally complicated. They go through all the same steps. They're just structured differently. His idea of sub-functions _is_ factorization. They're just confined to the only scope where they're relevant. Kinda like methods... Do you not think that improves readability? And is there really any use in factorizing it further? Exactly how small does each segment need to be?
Maybe a data bus abstraction would get rid of the recursive references. Also, code is data too. And you don't have to interpret a function as being a generalisation, you can think of it as a way of ordering code.
Awesome video, I loved watching it. In my experience, there are many situations where, like you pointed out, procedural style makes things easier and prevents you from overthinking and overgeneralizing the problem you are trying to tackle. However, in some cases, object-oriented programming removes unnecessary conditions and switches that make your code harder to read. Especially in complex game engines where you deal with a bunch of objects which interact in diverse ways to the environment, other objects and the physics engine. In a procedural style, a program like this would become an unmanageable clutter of flags, variables and switch-statements. Therefore, the statement "Object-Oriented Programming is Garbage" is an unnecessary generalization. Object-oriented programming is a tool programmers can use - and just like you would not use pliers to get a nail into a wall, you should not force yourself to use object-oriented programming to solve every problem at hand. Instead, you use it when it is appropriate and necessary. Nevertheless, i would like to hear how you would realize such a complex program. Maybe I'm wrong and procedural programming is the best solution in any case - but right now, I think you need to differentiate situations which require a procedural style from those that require an object-oriented style.
Actually I think game logic is one of the best examples of a case where object oriented programming style doesn't make sense, because for complex interactions between gameobjects they almost always need to share a global world state, which breaks the primary goal of OOP, that is, encapsulation. A better way to deal with these interactions is using an Entity-Component-System, which is a more procedural style of programming. Most game engines use ECS these days.
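A bare-bones sketch of that ECS-style, data-first approach in Go (names invented for illustration): components are plain data in flat slices, and a "system" is just a function that walks them, so shared world state is passed explicitly rather than hidden inside objects.

package game

type Position struct{ X, Y float64 }
type Velocity struct{ X, Y float64 }

type World struct {
    Positions  []Position
    Velocities []Velocity // index i belongs to entity i
    Gravity    float64
}

// MovementSystem is just a procedure over the world's data.
func MovementSystem(w *World, dt float64) {
    for i := range w.Positions {
        w.Velocities[i].Y += w.Gravity * dt
        w.Positions[i].X += w.Velocities[i].X * dt
        w.Positions[i].Y += w.Velocities[i].Y * dt
    }
}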
As you said, this is an example of "forced OO". The fact that OO isn't the best way to do everything doesn't make it garbage. An emulator is obviously one of the worst places to use OO.
If we are to prefer long functions and minimal files for our application, do you have any rules of thumb on when to split functionality into a new file/module? Theoretically, an entire application could be written in one file, but that doesn't seem very practical.
The original code had some issues, such as cross-dependencies; those should instead have been delegated upward. As for methods, I work by the principle that if code is never reused and always goes through the same procedure, then it can be written procedurally - not doing so just introduces unnecessary complexity. The exception is when you can give a very good description of what a code section does and can easily refactor it; at that point, though, we're likely in functional territory rather than OO territory, although the lines do get a bit blurred in some cases.
A bad OO implementation does not prove that OO design is shit. That said, I'm very slow to reach for multiple inheritance - mostly one level, sometimes two, and rarely three levels deep - and I use more has-a than is-a. I often have lots of static methods too, all of which are very functional. I recall doing OO design with structs and pointers to functions in C on the Mozilla browser back in '94... I'm sick of managed code these days, but I digress... thanks for the vid.
I don't know about Go, but in C/C++, if you wish to hide your extracted functions you can always use an anonymous namespace in the .cpp file, or private static functions of some class. The namespace solution seems more appropriate. Also, why bash GLFW? Why is it not professional enough? If you wish to do some really platform-specific stuff you can always get the native handle from it and operate on that, and a lot of common functionality is still covered. Smells like 'DIY is the path of true developers'.
Agree with most of this, but not the inlining of all of the functions. I like code to read like plain English. Inlining all those functions reduces testability and navigability, which are both extremely important. It also increases indentation, which can make code more difficult to comprehend.
Your version is easier to understand? OK, I know nothing about the NES and I don't even know Go, but suppose I want to change the CPU clock. In the original project I open nes/, see there is a cpu.go file, open it and - wow! - there is a CPUFrequency constant right at line 8. Now, where is it in your version? I don't even know where to start, as the names of your files give no clue. I opened cpu_instructions. Didn't find it there, but my god, executeInstructions() is great! Please explain to me how defining the instruction set inside the function that is supposed to execute them is easier to understand or track. It is even hard to track where the function ends if you don't have a proper text editor. I also understand that spending hours scrolling text is easier than opening the proper file... On the other hand, I could probably figure out the entire original project using Windows Explorer and Notepad. The thing is, code is not a fantasy book that you are supposed to read cover to cover. It's like a machine with a couple of levers - I want to quickly find the right lever for the job, not think about how the entire machine works, as long as it works correctly.

For example:

for j := 0; j < ny; j++ {
    for i := 0; i < nx; i++ {
        x := float32(ox + i*sx)
        y := float32(oy + j*sy)
        index := nx*(j+view.scroll) + i
        if index >= len(view.paths) {
            continue
        }
        path := view.paths[index]
        tx, ty, tw, th := view.texture.Lookup(path)
        drawThumbnail(x, y, tx, ty, tw, th)
    }
}

The main reason you want drawThumbnail as a separate function, even if it's called in only this single place, is that you can clearly see what is inside the scope of your loops. Why should I care how a thumbnail is drawn? It is easy to test that single function, and if something is wrong and I'm sure the function is OK, then some other code in the loop is broken - it's much easier to focus on the problem. Another reason is that it's easier to keep and track the logical order of instructions. Sure, you can line-comment what the following code does, but in real life you also temporarily comment out actual code lines or parts of a function. What if I want to move the drawThumbnail code above the previous instruction or set of instructions? It's hard to track where it ends in that scenario. Reason number three is that your version of those loops barely fits on my screen, and the only clue I see is "//draw thumbnail" somewhere in the middle of it. I found that comment - is that all those loops do? Don't know, need to keep reading... On the other hand, I have only 10 lines of original code to look at to know what it does.

Now, when I want to know when this is executed, I scroll a few lines up, see update() right beside MenuView, and I'm in "menuview.go". You don't need to be Sherlock to guess what that means, and that if you are interested in memory management, for example, you're in the wrong place. Where is this code in your version? Um, I'm somewhere in run.go, I scroll up, scroll up, scroll up, scroll up, scroll up - aha! "case *MenuView:"! But a case of what? I don't even see any other cases... scroll up, scroll up, scroll up, scroll up - shit, beginning of the file! - scroll down, scroll down, scroll down... I can't imagine how you could say this is easier to understand, unless you want to know every single line so you can just open a file and scroooooll. And even if you did want to know all the code eventually, it is still better to learn it from bigger pieces.
It's like wanting to know how a car works: after finding out that it has a steel body, you probably want to know that it has wheels and an engine and so on - not why it's steel, what steel is made of, how it's melted and formed, and which properties of the chemical elements make them suitable for steel... I could argue with almost every statement in your videos, but I refuse to believe you are not a troll. The problem is that there surely are some people who have just started programming, and they may think you are showing best practices...
God forbid you actually read what you’re working on
Interesting - I tend to agree with you regarding the OOP stuff. However, simply using huge functions with everything inlined has a side issue when it comes to testing: testing smaller functions is usually easier. Of course this all depends on the code you're writing; it might still be testable if it is long. It's just that you didn't mention testing (I think?). Also, regarding "putting sub-functions after the last return value": to me, that doesn't really warrant changing a language to accommodate it. Rather, just put it after your function and use naming or whatever to make it clear it is used nowhere else. The code will be just as readable, without making the language more complex.
+Brian Will In all the examples you show, you are converting code already written as an OO program. Wouldn't it only be a true comparison if you coded a program from scratch as a procedural program and 'challenged' another, equally skilled programmer to code the same program object-oriented - and, along with lines, functions, etc., time were also a category?
The specs would have to be excruciatingly precise. Why not remake a working program using two different techniques? And OO tends to need more radical refactoring when a change in the spec affects the chosen class/code organisation. That can generally be a problem in time/budget-constrained projects, so such a surprise change should be part of the test to make it more realistic.
what I personally don't like about these presentations is the general lack of slides in relation with the speech duration . The exposed ideas do not entirely reflect the contents of the slides thus it is easy to "fall asleep" and lose the audience (like on those painful school presentations we had to go through when we were young). Perhaps a diagram or more slides to capture the off-topic ideas would be better.
Pure FP (immutability) is great "in theory", but like OOP, it forces you to work around the problem in an FP way instead of just solving the problem itself. Also, extremely expressive type systems are nice but can encourage you to make everything super generic - he talks near the start about how generic code should be reserved for libraries, while actual application code is likely not going to be so generic. Again, over-thinking the problems. However, FP does encourage simplifying data and keeping it distinct from the functions that mutate it, so in that respect FP is better than OOP. I believe that FP and related type systems also deliver "reusability" and "modularity" (buzzwords) better than OOP, even though the premise of OOP was to solve those problems. Runtime polymorphism, one of the central properties of OOP, can be represented in FP languages.
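That last point about runtime polymorphism without classes can be illustrated even in Go: a function value chosen at runtime plays the role a virtual method would. A tiny sketch, names invented for illustration only:

package render

import "fmt"

type Shape struct {
    Kind string
    W, H float64
}

// AreaFunc is selected at runtime; different behaviours are just different functions.
type AreaFunc func(Shape) float64

func RectArea(s Shape) float64     { return s.W * s.H }
func TriangleArea(s Shape) float64 { return s.W * s.H / 2 }

// Describe doesn't care which behaviour it was handed - that's the polymorphism.
func Describe(s Shape, area AreaFunc) {
    fmt.Printf("%s: %.2f\n", s.Kind, area(s))
}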
I totally agree that general solutions have to be specifically engineered. This is one reason why it is better to refactor and generalize solutions over time.
You have to ask yourself why you weren't trying a general approach in the first place. Most examples for OOP are along the lines of databases. They define a general animalClass and then derive dogClass and catClass from it. Why? What's the point? Just make ONE more general form and let someone write "Dog" or "Cat" or "Platypus" into the species field. Instead of writing two versions of the grooming() function, just write one and have an if(species=="Dog") statement take care of the differences between dog and cat and platypus. It's also far more likely to produce software that leaves room for user workarounds, just in case the architect gets the use case wrong (which the architect always does) and somebody decides to use the code to groom cows. :-)
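A minimal Go sketch of that suggestion (names invented): one general record plus one function, with a switch for the few real differences, instead of a base class and a subclass per species.

package grooming

import "fmt"

type Animal struct {
    Name    string
    Species string // "Dog", "Cat", "Platypus", ...
}

func Groom(a Animal) {
    fmt.Println("brushing", a.Name)
    switch a.Species {
    case "Dog":
        fmt.Println("trimming claws")
    case "Cat":
        fmt.Println("keeping a safe distance")
    default:
        // Unknown species still get the generic treatment, which leaves room
        // for the cow-grooming use case nobody planned for.
    }
}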
If this were a serious application I was responsible for maintaining I would prefer having the 321 test points rather than 59. I also want classes for dependency injection.
Using a code revision to make your point? Kind of unfair. I've done code revisions and cut the number of lines of code by half. Does that make my technique better? No - it simply means I had the time to review the code and restructure it to achieve its final goal. If you really want to make your point, challenge an OO programmer to write a medium-sized program (10,000+ lines of code) and see who comes up with the best solution in terms of performance, readability and maintainability.
while in general I agree with many of the things that you said against oop. in this case, the original version is more readable because of much lower indentation level
Why is having fewer files better? I'd rather have one class per file - even if they're sparse, you don't need to worry about a file you aren't in at that moment. Hypothetically, OO should get you code that is less to worry about and easier to change via inheritance. Hypothetically, in a "functional style" you could extend functions by passing a function as an argument (if the language supports it - I haven't used Go).
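Go does support that. A small sketch with invented names - the caller extends Process by handing it a function, no inheritance involved:

package pipeline

import "strings"

type Transform func(string) string

func Process(lines []string, extra Transform) []string {
    out := make([]string, 0, len(lines))
    for _, l := range lines {
        l = strings.TrimSpace(l)
        if extra != nil {
            l = extra(l) // caller-supplied behaviour
        }
        out = append(out, l)
    }
    return out
}

// usage: cleaned := Process(lines, strings.ToUpper)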
A bit of a side thing, but don't be afraid to rewrite code. If it's not super well written you do have to go through and make sure you abide by the changes everywhere, but you can get code that is not only more understandable, but more efficient afterwards. Also if it doesn't work, that's why we have version control
Have you tried OCaml? It seems to fix many of the problems that you have issues with. For example, you can define nested functions and use nested let bindings to do simple computations without adding local variables to the scope.
Sorry, Brooks and Linus missed it... it's not all about tables/data and it's not all about flow/control... you need to think about both and expose both. As for OO, the best OO code I've seen was written in the early 60s... yes, before it was named/invented. It had 6 objects and a number of functions, and despite being written in (macro) assembly it was well commented, easy to understand, and very efficient. The key was that the objects were well designed and represented the 6 fundamental things the code needed to manipulate... virtually all the OO code I have seen since it was named is rubbish.
Im of the mindset that there’s often more than one right way to do things. Between procedural and OO I think different use cases can determine that one or the other will work better. In this instance, I’m not actually sure which one I prefer, especially because I’m frankly not a fan of the way Brian shoves stuff into huge functions. I get for him that makes it easier to understand, but for me personally I prefer things more split up, whether they are procedural or OO. And so in this case I think the original OO repo has better code (I’ve looked through them both), even if I don’t have a strong conviction that you couldn’t do as good or a better job in straight procedural code.
I think you'd really like BASIC - nested DEFSUBs (functions) don't have closures and are difficult to return from, and they can also be put at the bottom of a parent DEFSUB. The only thing is you can't use brace nesting, since the language doesn't have it.
You make a lot of very solid points. In your refactoring of the Mapper interface into a type switch, though: what is the point of still using a declared interface? If you are disregarding extensibility anyway (which would mean adding to the internal type switch rather than conforming a possible new struct to an interface), why not just make Mapper of type interface{} and add a (failing) default case to your switch?
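For anyone unsure what that question is describing, a minimal Go sketch (types invented here, not the project's actual code): the declared interface is dropped, the dispatcher takes a plain interface{}, and the default case fails loudly.

package nes

import "fmt"

type Mapper1 struct{}
type Mapper4 struct{}

func (m *Mapper1) read(addr uint16) byte { /* mapper-1 banking logic */ return 0 }
func (m *Mapper4) read(addr uint16) byte { /* mapper-4 banking logic */ return 0 }

func mapperRead(m interface{}, addr uint16) byte {
    switch m := m.(type) {
    case *Mapper1:
        return m.read(addr)
    case *Mapper4:
        return m.read(addr)
    default:
        panic(fmt.Sprintf("unsupported mapper type %T", m))
    }
}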
Looked at your nes repo - you straight up butchered the readability of the project. Even if you don't do OO, you should at least follow some principles of making your code readable and reusable, even if it's just the bare minimum.
In C++ you can use lambdas as sub functions in your functions and they wont be able to access any variables declared in the outer scope unless you explicitly have the lambda 'capture' them.
31:23 Javascript allows you to declare "sub-functions" at the end of a function (even after the return!). Unfortunately you still can't exclude variables from the enclosing function's scope
I might have missed it, but was there any change in performance with the two programs? Obviously performance isn't the only parameter that is important, but in the end it's what my boss cares about most in the age of cloud computing. One would assume that if you don't start fracturing data into many different instances of a class, but instead you're operating on larger continuous arrays of data, you'll end up with less I/O lags and cache misses and thus keep the CPU busy a lot more resulting in better performance. Of course this is different to a case-by-case basis, but I'd be interested if this rewrite resulted in more than just a more readable and probably better maintainable code base.
If you are working with large data, especially in the cloud, you will be using a distributed paradigm like Spark, and you won't be doing I/O writes unless you explicitly write to disk. You'll be more interested in reducing the shuffling and sorting of data in memory to cut computation, and in bottlenecks around coalescing the data before persisting. Cloud cost and performance is definitely data-related first, in my experience: how you move, sort and modify the data matters far more than the programming paradigm used. Your paradigm doesn't really affect it much, because a lot of the frameworks you use will optimize via built-in optimization engines or whole-stage code generators.

In the cloud, CPU usage mostly goes to running containers, VMs and batch writes. If you are doing long-running batch processing with writes, you can possibly reduce CPU usage by designing your pipelines to match the chosen persistence layer - an RDBMS requires indexing, blob storage doesn't. Refactoring from OOP to functional etc. is less important than first understanding the actions of the pipeline, the data structure and its persistence. So while I agree that refactoring for programming paradigm isn't that important, there is no real way to compare it unless you use the exact same data processing. My guess is the memory overhead of holding pointers to classes, functions etc. would be low.
There's a corollary concept related to OOP that you might be interested in addressing - fully normalizing an SQL database. Is the "pure" normalized database actually desirable in the real world? I no longer do that at this point, and I still use SQL databases even though I've adopted MongoDB for various purposes, too.
At work we recently went through the process of normalizing our database because it was causing real world performance issues, so it's definitely not purely theoretical dogma, but I don't think we adhere to any notions of having "pure" normal forms unless there are real practical reasons for it. I don't think every database necessarily needs to be normalized, but doing so totally makes sense to me. It gives stronger guarantees at the database level that the data will be correct that you'd otherwise have to enforce in code, and it can improve performance and space usage. On the other hand, we do plenty of de-normalizing too since it can also improve performance or just make things simpler by putting commonly used data together in the same table. So I guess like all things it depends on the situation, but unless there's a reason not to, personally I'm inclined toward more normalization than less, just without being dogmatic about it.
@@telaferrum From an OOP point of view, MongoDB is clearly more closely related to objects than an SQL database is - the so-called "document" is exactly like an "object." If OO programming is so good and desirable, then its natural permanent representation is a Mongo database. Since there are numerous issues with that in the real world, I think it makes sense to question OO methodology at the fundamental level.
@@Jollyprez For the purposes of this discussion, OOP and representation of data in databases isn't particularly related though. The problem with OOP is that it combines data with logic/functions/behavior which often does not make sense. However, before objects there were still structs for structuring data. Data without behavior is not an object. So neither relational or NoSQL databases are object oriented.
In modern C++ you can write a lambda and give it an explicit capture list, so your suggestion about internal blocks can be implemented with a lambda; you can also return a value from it, or even modify variables outside it.
19:10 Surely using class fields to hold pointers does make sense if you have an object model where interaction with one class requires an update to another. What's the alternative - explicitly parameterise (25:00)? but the tight coupling is still there whichever strategy is used
the sub functions, if declared as methods privately in a class, could just be declared after usage. By inlining the code you have just shot yourself in the foot. Also, not saving a lot of lines for the whole project, while creating "need to scroll past the code" issues, doesn't really help your case.
Well, the difference is that with functional programming you get to return functions that can be passed to other functions in order to create data transformations, without worrying about data corruption, dynamic state changes, messaging, etc. Not to mention functional programming has a natural, built-in ability to segregate data from state. OOP requires you to create special classes and managers to describe all the potential state changes as data objects are passed to other objects, directly or indirectly by reference, which introduces the problem that OOP cannot separate data and its shape from state changes/transformations. Hence more code has to be written, with complex design patterns, to mitigate the fundamental handicaps that come with the paradigm.
@@georgeokello8620 more code isn't necessarily a bad thing though. And state is something that has to change sometimes. When just analyzing data, change of state isn't much needed, but data is changing. So somewhere the state has to change.
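A quick sketch of the composition idea from a couple of comments up, in Go with invented names: small pure functions over their inputs, composed into one transformation, with no shared object state mutated along the way.

package transform

type Step func([]int) []int

// Compose returns a new Step that runs the given steps left to right.
func Compose(steps ...Step) Step {
    return func(in []int) []int {
        for _, s := range steps {
            in = s(in)
        }
        return in
    }
}

func Double(in []int) []int {
    out := make([]int, len(in))
    for i, v := range in {
        out[i] = v * 2
    }
    return out
}

func OnlyPositive(in []int) []int {
    var out []int
    for _, v := range in {
        if v > 0 {
            out = append(out, v)
        }
    }
    return out
}

// usage: pipeline := Compose(OnlyPositive, Double); result := pipeline(data)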
Hi Brian - first, props to your voice, it sounds gorgeous. On all your points against the OOP methods used in the original nes project, I mostly think the same way as you. Throughout all of it, I kept wondering about your view on the business side of programming. If everybody programmed sanely, wouldn't many, many people lose their jobs in the industry? I don't think we are at a point, as a community, where we could handle that much intelligent code. We need broken, confusing and bad code to make sure a lot of people stay in their jobs. Don't you think?
In Visual Studio you can define #regions and collapse them in the UI. That would solve the issue of the functions needing to be defined at the top of a function: you can simply hide them. However, what I do is define a private method and place it at the bottom of the class; that way it explicitly limits its scope and tucks the function underneath, out of the way. I don't name the function with any particular naming convention, but if you really did want to explicitly mark a function as existing only for a single use, you could put a prefix on the function name like '||' or "su_" or something.
Now do the opposite. Take a program which is written as very few files and try to split it up in order to extend it. See what is easier and see why it is better to aim for "over-splitting" rather than "under-splitting". :)
You have absolutely no idea what you're talking about. The aim is not to intentionally split functionality in multiple locations. You have to write the code as is, make it easy for the CPU to execute your instructions, and the code structure will mature into a proper API naturally while you bundle data in logical groups. Once you go to higher levels of architecture design it will become even more apparent how files should be structured, especially when you're building multiplatform or hot-reloadable software. The fact that you're lobbying for "over-splitting", and even thinking in terms of "over" or "under splitting" shows me you've never done real low level multiplatform work.
@@Borgilian lol calm down "expert" Your rant about what you think I think is only in your head, just like your idea of my experience as a software developer.
speeddefrost I mean you can still use OOP for some things, no?
Next video's title will be: Object-oriented programming is literally hitler.
+Dawid Cz "Object-oriented programming killed my dog"
+Dawid Cz
HAHA!!!! Glad to see others also noticing the ridiculous escalation.
+Dawid Cz It happened. Scroll to bottom. medium.com/@brianwill/how-to-program-without-oop-74a46e0e47a3#.1rsj4p5c5
lol good one...
+Mark Mark Damn straight.
+Mark Mark preach it
+Mark Mark Yes, I agree., although OOP is very useful in programming GUI screen objects. For most other stuff, Procedural is just fine.
+Mark Mark. This guy knows his stuff
Came for bashing OOP, stayed for NES emulation code.
+TheMRJewfro same, this is the best "how to make an emulator" example i have ever seen.
He bashes mostly OOAD, not OOP language features used well. But yes - in an hour I even understood most of how the NES works (plus the basics of Go), after trying to get that from many similar emulators with too much OO bloat. (Not to mention that such a realtime thing genuinely needs to be optimized, and here it is NOT premature at all.) It's excellent work!
To me the underlying problem is the conflict between theorists and those who actually need to create and maintain practical, working code. Theorists become more and more obsessed with their paradigms and adopt a "more object-oriented than thou" or "more functional than thou" view. But in real life there should only be one test: does using object-oriented design in your code make it simpler, clearer and cleaner? Then use it. If another program would be simpler, clearer and cleaner by using a procedural design, then use that. And if neither of those paradigms work then use a combination of the two (or make your own). Bottom-line: don't let any programming paradigm work against you in your code.
Architectural astronauts, they are called.
My friend, I wish you could speak your wisdom to all of my coworkers. I feel that many of us are unwilling to see when our tools are working against us rather than for us, especially when we’re only comfortable using that one tool.
I would probably call such theorists rather terrorists, even nazis, in some cases
That makes the assumption that OOP ever makes things simpler. It doesn't.
Real theorists are busy on Haskell and Higher Kinded Types and Dependent Types etc, the really really general code which are indeed difficult as Brian said. They have no time for this OOP bs.
50:10 "In fact when I think about the complexity of a function, I don't really think so much of it in terms of cyclomatic complexity, of how deeply you're nesting loops and branches and so forth (I mean that is a concern), but for me the real measure of complexity is, well how many variables do I have to keep track of."
Not sure how much of a fan you are of code metrics, but that sounds like an interesting idea for a complexity metric. How large is the state space, based on number of parameters, variables and references to globals.
Nice point
For me, complexity is interdependency. A single function with a lot of stuff going on might be complicated, but without any dependencies it's not complex. The important difference between the two is that complexity's price on maintenance and extensibility is polynomial, while complicatedness stays linear.
It would be interesting to measure but it seems to me what is important is not how much state is available but how much is actually used
The Fred Brooks quote reminds me of Linus' saying: "bad programmers think about the code, good programmers think about data structures."
Yeah, yeah cool stuff bro, but he is totally different ruclips.net/video/MShbP3OpASA/видео.html ;)
True, once you have decided the data structure and the program structure, the code just flows.
@@reoz2113 How is that different? Those micro-optimizations are all about data structures: Laying out and aligning data in a way that can be continuously prefetched by the CPU so you avoid cache misses, laying it out in a way that allows for safe concurrency - still while avoiding cache misses, ... Sure, you should have a good algorithm - but it doesn't matter that your algorithm is twice as fast if each miss can cost orders of magnitude more.
BREAKING NEWS from year 2023: OOP still sucks badly, but they still use it.
In Haskell, you can put a "where" clause after a function definition and describe all the subfunctions there (and each of them can have its own "where" block). They retain visibility of the parameters of the root function.
IMO the problem with Object-oriented is when you let it dictate your design and plan all of your classes before you've written your first line of code. Some languages force you to make a class first so you're already starting with a disadvantage.
You're thinking of Java but it's not that bad in Java. You can put procedural code in a class container.
You're supposed to plan before you write your code. OOP lets you see the relationship between your data easily.
@@chudchadanstud I would say it tends to have the opposite effect. There's no quicker way to lose sight of the simple fact that any software - regardless of how sophisticated - is, at the end of the day, doing nothing more than inputting, outputting and mutating data, than OO design.
The overarching design philosophy of OOP tends to make people quickly forget that, since it's not merely organizing data into bundles, but instead into capsules that hide the data.
Not only you've made very good points in your previous videos, but also provided this example, kudos for this.
I think most of the problems come from adhering too strictly to any particular paradigm, but I agree with the notion that imperative happens to be the one that gives the programmer the least amount of headache.
Here's my view on some of the things you said / showed:
a) Your criticisms of the Console pointer in the CPU etc. are a bit... odd. It's not uncommon to have bidirectional child-parent relations. I don't see the problem in this case, especially since a real CPU is also connected to the rest of the system... otherwise it couldn't do much of anything. Okay, they could have chosen to pass in the current state of the memory each loop, I guess, but that would be even further from the real situation. What is especially odd to me about your criticism of this point: it seems you especially dislike OO design that sticks to principles just for the sake of it. I agree with that. Yet here you criticise this reference for no real reason except principle. It's handy to give the CPU a pointer to the console, because then the CPU can easily access the memory. There's nothing confusing about that. Sure, it could have been a pointer to the memory, keeping it a bit cleaner... but that's about it.
b) Private, nested functions: I never saw the benefit of these. You still have the same work to do. Define a complete function with a decent name and call it somewhere. The only difference, really, is you choose to put it at the beginning of the parent function and assume you'll never have to call that function from somewhere else. So what? It would be just as readable if these functions were defined on their own just above the parent function. On top of that, you wouldn't have to change your code if you realized later on that, contrary to your initial belief, you now have need of that function somewhere else.
More important than where a function is defined is how intuitively understandable its purpose is.
c) I really don't see how inlining a lot of functions and creating longer functions helps readability.
One example would be your main loop:
I would much prefer a main method that basically gives an outline of what the program is doing by calling other functions in a specific order.
- InitialiseAudio(); InitialiseGL(); as separate functions I only need to look at if I actually care what's happening there. It's a really "boring" part of any program, as it doesn't add any understanding of its purpose. It's housekeeping. I don't put my vacuum cleaner next to my couch because I don't need to see it all the time to know I use it to keep my place clean.
- MenuView / GameView taking care of their loop iterations on their own, for example, would clean up that main loop a lot... I really don't have to fully understand what each view does to get the big picture of the main loop. And if I do need to see what they do, I can easily check it out. I like taking a "hierarchical" approach to understanding a program.
Doing things like this, your main method would probably crumble to less than 100 lines of code, would still contain the essence of what it's supposed to be and would be free of stuff that, to me, is irrelevant detail for my understanding of the main loop. I could then go and check out GameView and MenuView... since they tackle two separate issues: Displaying a game and displaying the menu.
I could go on, but perhaps you understand my point of view: Inlining functions just for the sake of it doesn't help readability and understanding for me. As you yourself said in another video: When we try and understand the human body, we don't first look at microbiology. We look at the organs. What your main loop here is doing is saying "there is a lung in your body and here is its microbiology" before telling me about how the lung fits together with the rest of the organs.
After watching this and your other two videos about how terrible OOP is, I have to say: You don't really make a strong point. Sure, you showed some really terrible OOP examples. Here, you showed a not so terrible example of OOP and turned it into a, IMO, not so great example of procedural programming.
I would have liked your videos a lot more if you hadn't started with that "most important programming video you'll ever watch" bit only to continue without saying anything new:
Doing something bad is bad.
+Clairvoyant81
Its not uncommon but it violates general oop rules.
That's the whole point of this.
Please read my comment again. I state that he dislikes OO design that sticks to principle just for the sake of it and ends up criticizing this specific design for no reason other than principle himself.
+Clairvoyant81 I understand what you mean. This video especially weakens his own arguments, because he doesn't build on his own criticism from the first video.
I think this is because he essentially just tries to disprove that oop is the surperior way for structure.
That is what everyone else is trying to convince you to think as far as I experienced it.
In this particular example that you picked, the real problem is that OOP is supposed to be, by design, a working solution to the shared-state problem.
But that's simply not true.
His example is simply a different approach with no big drawbacks.
It seems he is not able to frame the real problems with OOP that he describes within an already-working project.
Maybe he doesn't realize that himself.
The biggest Problems with oop as I see it are the major limitations for dynamic growth as the full program unfolds.
+alexander kerbers
I agree that it's a really bad idea to teach people that OO is the only way to go ever. But: I also think that it's a valid option one should always have in mind. Just as with languages, these paradigms are tools one should use when appropriate.
Could you elaborate on the dynamic growth problem?
As I see it, it's fairly hard to write code that grows well as the project continues, no matter what paradigm you use, but perhaps I misunderstood what you meant or you have some more specific issue with how OOP grows.
For me, OOP dogma really hindered my early years of programming. It made me hate programming and hate any code that I wrote and see it as utter dogshit, even if there was nothing technically wrong. I wasted so much time on questions such as... What work should be done in this class constructor? Should I split this class into two pieces? Should I move this piece of data from class X to class Y? Do these classes make sense philosophically? Should this function be a member function of class X or class Y? Should this member variable be public or private? Should it be const? Should it be accessed with getters and setters? Should I write const versions of my class member functions? Should helper functions be private? How do I write tests for private member functions?
Then there were further questions like: should my functions be only 5 lines long? Should my classes have only 5 members?
For the code I was writing none of this mattered! I should have been writing procedural code, and should have only started thinking about these questions if I wanted to wrap a public API around my code.
I feel ya
+Brian Will, I rather agree with your pushback against code "atomization" and the tyranny of overgeneralization and naming things that need not be named. Also subfuncs or use blocks sound nice. However, one of the main reasons I've heard for splitting up functions is to have smaller testable units. I'm wondering if you've found that writing such large functions impacts testability at all? Do you wish that you could write tests for subfuncs when you're coding in this style? Is unit testing something you approve of in the first place?
As a C99/Rust embedded dev, I can assure you, you can not and should not test everything in your code.
Of course utils, maths, physics functions or special modules should be heavily tested via unit tests (for power consumption, for instance), but if you want clean and readable code you should not make every mid-process function testable. In most cases it will lead to spaghetti code, or to a need to simulate almost half your process in your unit test.
Instead, rely on performance tests or integration tests. For instance, make your process stop before and after your function and compare some well-designed features.
A simple byte field with a clean environment can do the job. Ofc you can't stop tricky errors with this method, but hey, it's mostly enough. Step by step unless it's not
I agree with what @zanzi8597 said, but I'd also like to add that in this hypothetical new language (or feature) that Brian Will proposes, I could also see how such a sub-section pseudo-function could be considered a separate testable unit by the language's compiler/build system/built-in testing framework. Like you could assign it some handle for identification, then in a separate section of the source file (or another source file entirely, but I'd prefer same-file), you could create your unit tests. But again, I'd say larger integration tests or unit tests of the outer function would often make more sense _overall._
I work in the automotive industry on safety critical software (e.g. braking controller)
We use C language and it is horrible. Is it horrible because C is a bad language? No.
It is horrible because inexperienced people (e.g. kids just out of school) write the code.
No time to think about good software design, no time for refactoring etc. (Sometimes the existing bad code is used as excuse to write even worse code.)
But this seems to be the "industry standard", because those kids are so cheap and don't complain all the time...
I don't think OO is bad, if used wisely. But in inexperienced hands, OO code usually ends up worse than procedural code...
Yes. C is a pretty horrible language to develop in, and no amount of "skill" makes it a good one. For the longest time, however, it was the only game in town; that doesn't make it good, it just makes it not as awful as writing raw assembly code.
This whole notion that creators of major tools are somehow infallible is nonsense. If you distribute a program, and users start complaining that your program is shit, going around roasting your users is usually not the valid approach. Programming languages, coding methodologies are no different.
Following the systematic and clean approach used here, even the malloc and pointer handling in something this small would be done well, by design; but for anything with large data... no C nor C++, IMHO.
@@antanaskiselis7919 I don't get it, but I'm only trying to learn programming. What's so bad about C per se? C is the only thing that comes to mind when I try to think of a language that's easy to read and therefore easy to (re)write stuff in.
@@doesntmatter2017 I guess the simplest way to put it: we now have programming languages and tools that are to C what C was, back in the day, to writing assembly. That's not to say C is useless - far from it; there are loads of code bases written in C that are still running and still require maintenance. However, you would be hard pressed to see the benefit of writing C code from scratch when starting something new now. In certain domains that's now even true for C++.
That's not true for all domains, just a huge chunk of them. Most common applications - browser, desktop, often even IoT (internet of things) - will not be developed in C or another close-to-the-machine language. Java, C#, JavaScript, Python or even PHP will be preferred. The question is why.
Well, because these languages handle a lot of things that a language like C does not. One of the more prominent factors is memory management, which is a common cause not only of applications breaking or exhibiting undefined behaviour, but of numerous security issues. Ever wondered what all those Windows updates do even though no new features are added? A huge part is due to memory mismanagement in languages like C or even C++. Here's a Microsoft report on it: visualstudiomagazine.com/articles/2019/07/18/microsoft-eyes-rust.aspx
Then we bump into the common knee-jerk argument Klaatu Barada Nikto implied, which goes along the lines of "it's supposedly just bad programmers doing this". That's a blatantly false narrative. The reality is that while C was workable back when applications were relatively simple, now that the scale of applications and the interconnectivity between them have grown, even small errors made in C can be magnified to a huge degree. On top of that, we are living in a post-Moore's-law world where multi-threading and parallelism are becoming more relevant, and C has close to nothing for them. So if there is a way to remove the human-error factor, we should take it.
That's not to say there aren't domains where you still have to use C or C++ (the latter, to be fair, is getting better); but due to certain design and backwards-compatibility issues, you'll probably see languages like Rust - which is becoming a 'competitor' to C/C++ - become more prevalent in those domains.
@@doesntmatter2017 Don't listen too much to the high-level fans, who probably don't really understand how huge the world of low-level programming - made of assembly and C - is beneath them. C is actually great and powerful, and someone has to do that work. I agree with (almost) everything that was said: you cannot write a Portal 3 in C, but some essential parts of everything have to be written in C, where you need more "power" and have to interact directly with real memory, and where an error fucks up the whole thing... But it's unavoidable: you cannot write Java and pretend your CPU will understand it, lol.
The reason for the "Memory" pointer pointing to the "Console" -- at least in my opinion without having looked at the code -- is that the 6502 uses memory mapped IO and the "Console" structure actually represents the physical hardware connected to the CPU via the memory/IO bus . Lots of NES emulators, or 8 bit console emulators in general were conceptualized this way (I know this to be true as I've done a lot of work with NES emulation). In other words, the "Console" structure really represents the physical interconnects between the hardware and is used to facilitate data transfer between those components the same as the memory/IO traces on the mainboard.
Just to clarify: The "Console" structure should have really been called "Bus" or "MemoryBus", etc., and it's purpose is to act as a tree trunk connecting all the various hardware branches (CPU, PPU, MMC, APU, Cartridge, etc) together as there was a lot of direct communication between the various hardware in the system that completely bypassed the CPU.
It's also worth pointing out that there were literally dozens of mappers for the NES, some of which are extremely simple while others added additional hardware to the system by directly interfacing with and extending the APU and PPU, in addition some were contained in add-on hardware but not the cartridges themselves. The CHR-ROM could also be instead RAM and since there was no direct way for the CPU to access the CHR memory it instead had to read and write to certain addresses on the memory bus and the Mapper would perform the read writes.
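A rough Go sketch of the "Console is really a memory bus" point above - heavily simplified and with invented types, not the project's actual code, but the address ranges follow the broad NES memory map: reads are dispatched by address range to whichever piece of hardware owns it, which is why every chip ends up holding a reference to the bus.

package nes

type PPU struct{}

func (p *PPU) ReadRegister(addr uint16) byte { return 0 }

type Mapper interface {
    Read(addr uint16) byte
}

// Bus is the trunk that wires the chips together (what the original code calls Console).
type Bus struct {
    RAM    [2048]byte
    PPU    *PPU
    Mapper Mapper
}

// Read dispatches a CPU read to whatever hardware is mapped at that address.
func (b *Bus) Read(addr uint16) byte {
    switch {
    case addr < 0x2000:
        return b.RAM[addr%0x0800] // internal RAM, mirrored every 2 KB
    case addr < 0x4000:
        return b.PPU.ReadRegister(0x2000 + addr%8) // PPU registers, mirrored every 8 bytes
    case addr >= 0x6000:
        return b.Mapper.Read(addr) // cartridge space; the mapper decides what lives here
    default:
        return 0 // APU, controllers, expansion - omitted in this sketch
    }
}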
I've been doing procedural PHP programming for 25 years. Never understood the need for pages of getters and setters and abstractions when you can just write straight A-to-B functions with half the code. Love these videos.
It's very hard to comprehend and do tests
You actually don't NEED getters or setters.
That's one of the things I hate about people saying OOP is SHIT because of this!
Try without! It's possible! :D
Is PHP still worth learning in 2022?
@@keidaron My problem with OOP is polymorphism and classes inside classes, layer upon layer of abstraction - add the verbosity of Java on top of it, and what would be a simple task becomes a mountain of ...
@@felipegomes6312 sure!
The Mappers is a perfect example of why it's better to use an interface instead of a switch. Sure, it's not so bad when you only want to support 4 mappers... but there are HUNDREDS of them, imagine what the code would look like if you supported most of them... Some of these mappers have very complex logic, putting the logic for all of them in one spot and doing a switch to decide which mapper's logic to execute just muddies the waters and makes the code less clear.
+Evan Teran Good example. However, it would be unhelpful to claim that switch statements are almost always wrong, as Fred George has (saying he would write a case statement "once every 18 months" and "feel really bad about it").
+fburton8 You are correct. I over-generalized. I should have said "The Mappers is a perfect example of WHEN it's better to use an interface instead of a switch." switch statements are perfectly fine IMHO, and interfaces with virtual dispatch are also fine. All about picking the right tool for the job.
@@EvanTeran Your mistake is thinking that there are only two options! Interfaces aren't a replacement for switches in this case anyway because you still need, at some point, to actually attach an object. At that point, it doesn't matter if you're using an object implementing some interface or a function pointer.
You seem to have forgotten that functions exist. Also, have you ever written an emulator? It's very common to have each *machine instruction* defined inside a giant switch statement!
@@recompile yes I've written an emulator, several in fact. I didn't forget that functions exist; I was pointing out how a switch statement can be bad for maintenance and results in over-coupling of mostly unrelated modules.
Why should code for an MMC3 implementation have ANYTHING to do with code for an MMC1? Why should they even be near each other? The answer is, they shouldn't, because it just makes things less clear.
So sure, you can use functions (and you should) but if you're using function pointers, yeah that'll work... But now you've just manually implemented interfaces. Which you can do of course (it's what the Linux kernel does in many places), but you're still using interfaces... Just without the benefits of a more modern syntax allowing it to be written with less code.
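For anyone following along, the two shapes being argued over look roughly like this in Go (made-up names, not code from either repo). Option A is the interface; option B is the type switch:

// Option A: an interface, with each mapper's logic living in its own file.
type Mapper interface {
    Read(addr uint16) byte
    Write(addr uint16, value byte)
}

// Option B: concrete structs plus a type switch at every dispatch site.
type Mapper1 struct{ prg []byte } // hypothetical MMC1 state
type Mapper4 struct{ prg []byte } // hypothetical MMC3 state

func mapperRead(m interface{}, addr uint16) byte {
    switch m := m.(type) {
    case *Mapper1:
        return m.prg[int(addr)%len(m.prg)] // real MMC1 banking logic would replace this
    case *Mapper4:
        return m.prg[int(addr)%len(m.prg)] // real MMC3 banking logic would replace this
    default:
        panic("unsupported mapper")
    }
}

With option A, supporting a new mapper means adding one new type; with option B it means touching every switch like this one. How much that trade-off matters depends on how many mappers, and how many dispatch sites, you actually have.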
Avoiding the unnecessary encapsulation like you've explained has actually made my programming even more reliable, extendable, testable, etc. than when I tried to fit everything to an OO paradigm
I know this is never gonna be seen by anyone, but I had a bit of a revelation while watching this so I'm gonna drop it here. I realized a big difference between people like Brian with a real distaste for OO, and people who absolutely fucking love OO. Brian's priority is to write "straightforward" code so someone coming behind him can easily understand the entire code base. A lot of these pure OO people do not care about that. Their priority isn't to have the entire system be digestible, but to structure the whole thing in such a way that any idiot can come behind them, only worry about some small corner of the code, and get something working in that little sandbox. Those two very different priorities are going to lead to very different styles
I saw it and mostly agree. I'm in the camp that doesn't see OO design as useless, just grossly overrated and used far too often. The biggest problem IMO is the complexity that a mind which readily favors OO -- as the default design paradigm -- tends to introduce at the level of integration.
If we're just working in the corner of the definition and implementation of a single object as a data capsule, then it seems like a no-brainer that the object is simplifying things. Yet when we step back and look at the big picture in cases where there are many objects with complex interactions with each other (that is to say, looking at things at the inter-object level instead of intra-object), that's when a lot of people will find the objects contributing more complexity than they reduce.
One of my main observations, even just using encapsulation (bundling hidden data together with exposed functions), is that when two or more capsules involved in a function/method interact with each other in subject-object relationships, at least one of those target capsules will generally become weaker in terms of its encapsulation. This weakening tends to happen very rapidly, to the point where the public interface becomes so leaky as to no longer justify the benefits of having it.
As a concrete example, consider a few hundred pages of design requirements like this in a simple video game:
>> Healing potion: heals 5 HP for any creature that drinks it.
At this point we might abstract away the concept of a Creature and a usable Consumable, of which HealingPotion is a subtype. Then in the concrete HealingPotion's overridden use(Creature) method, it heals the creature, so we need a generalized method in the Creature abstraction to modify its HP. Even the most generalized design is now already weakening the data encapsulation of the Creature's HP. As we get hundreds of pages of rules like this, the abstractions will tend to become less and less abstract and the encapsulations will become weaker and weaker: the designs will progressively become leakier.
Let's also throw a wrench into the process:
>> Healing potion: heals 5 HP for any creature that drinks it. However, trolls become poisoned rather than healed through its consumption, as they have an alkaloid allergy to the nightshade ingredient in the potion.
Already, any object-oriented design here is going to start getting ugly pretty fast, and we're still just dealing with one simple design requirement for a document that spans several hundred pages. Meanwhile, functional and even careful procedural designs don't progressively become exponentially more complicated as we introduce more and more design requirements like this.
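A minimal sketch of the alternative being hinted at (in Go, with hypothetical names): keep the creature as plain data and let the whole rule, troll exception included, live in one function instead of leaking through a Creature interface.

type Creature struct {
    Species  string
    HP       int
    Poisoned bool
}

// drinkHealingPotion implements the design-doc rule in one place, so the
// Creature never has to expose a general "change my HP however you like" method.
func drinkHealingPotion(c *Creature) {
    if c.Species == "troll" {
        c.Poisoned = true // alkaloid allergy to the nightshade ingredient
        return
    }
    c.HP += 5
}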
It's not uncommon that the run-of-the-mill OO programmer will require tens of thousands of lines of code to implement even the original Super Mario Bros code on NES and produce binaries that span megabytes, when the original developers managed to do it in ~16K lines of 6502 assembly code (and assembly is way, way more verbose than even C). Meanwhile, C programmers have managed to do the same with several hundred lines of code, and functional programmers can probably do it in even less.
But what about the issue of job security? If your code is easily readable by outsiders, you have no assurance you won't be replaced.
+Paul Fox --- I always wondered why print() was used for displaying text to a screen ?
+Paul Fox --- Whoops... my newInstance() always returns an actual new instance. Sorry guys, my mistake !!!
+Paul Fox --- The "Don't Recompile" trick wouldn't work on me. That's the first thing I do: make sure I can build the project successfully. Cool article tho. Thanks for sharing. I like the idea about the "shoemaker has no shoes!"
@Paul Fox : more of a guide to getting fired. The tricks there aren't subtle enough to go unnoticed.
You have no assurances you won't be replaced in any case. Sounds like a cop out excuse to write poor code. Seems to me that you're more likely to keep your job if you're good at it.
Interfaces are good exactly for the reasons you listed. If you write a single, relatively small program just by yourself, one that likely won't change much, then you don't need interfaces. But if you are a professional programmer, that's rarely the case. Usually you will work with others, on multiple large pieces of software, with changing requirements, that you may have to maintain for several years. That's a whole different beast. If you are going to work for years on a project, the overhead of proper interfaces and generalization is relatively small, while the productivity boost is huge. As you mentioned, APIs should have proper interfaces. When you are working on a large project, it's usually best to start with writing your own APIs. This largely depends on the language. In C++ you can't really do anything without first building your own tools. If you use high-level scripting languages, you may be able to skip this step if your software is not too complex, because you get a bunch of tools for free (which usually use OOP under the hood).
Maybe this is why you can't see the value of OOP.
You talk about utility functions, for which generalizations may be required. Now you are starting to understand what this is all about. If you write large software, or especially if you write multiple programs, you should have a lot of utility functions. You can even write your own library; then all projects can use the same stuff without copying and pasting all the time. When you have projects which change a lot during development, you might be better off if most of your code is generalized. You save a lot more time when you have to make major changes to the code than you spend on generalizing everything.
Of course too much generalization is bad too. You are wasting time on designing and writing code that will never be used, and also making it harder to use the parts you actually need. The trick is that usually you don't have to decide in advance. You can generalize parts of the code when it's clear it will pay off. But then you'd better do it right away, because if you wait too long, it can get too entangled with other parts, and then the cost of refactoring goes up exponentially.
You talk a lot about sub functions. What you want is pretty much the same as private functions in a class. By declaring them private, you can guarantee that they can't be called outside the class. Also, they are not in the way in your main function, they don't have access to local variables in other functions, and the order of declaration doesn't matter. I think you are beginning to understand why smaller functions are better. You don't have to wonder about their purpose if they are properly named. The function's name should tell you exactly what it does. If that doesn't clarify it enough, you can still write a comment.
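In Go terms (a rough sketch with made-up names, since Go's privacy unit is the package rather than the class), that's an exported method delegating to unexported ones:

type CPU struct {
    pc     uint16
    cycles uint64
}

// Step is the only entry point other packages can call.
func (c *CPU) Step() {
    c.fetch()
    c.execute()
}

// fetch and execute are unexported: they can't be called from outside the
// package, they don't see Step's locals, and their declaration order doesn't matter.
func (c *CPU) fetch()   { c.pc++ }
func (c *CPU) execute() { c.cycles++ }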
Again, the goal is to make the code readable, not to blindly follow one specific rule. Long functions have more local variables, as you mentioned, but for me that's not the worst problem. Long functions tend to do more than one thing, and then it's hard to tell which part does what and how those parts interact. The other thing is, if you can't see the whole function at the same time, you have to work from memory, and that is much harder and error-prone. I've seen code where it was a challenge to figure out where one function ends and the next starts.
Not all languages have type switches. C++ surely doesn't. Anyway, you just reinvented polymorphism. I think if you don't know what the code does, it's easier to understand a properly written polymorphic class, than a type switch, because the class and the method names should tell you exactly what's happening there. Extending the code is also easier, because the compiler will tell you if you forgot to implement a pure virtual method, but it won't tell you anything about type switches.
And finally, this is again an example which doesn't really need much OOP in the first place.
small, testable functions that operate only on their inputs are best.
Using curly braces within one scope just to declare another scope means that that code should just be a function
I don't wanna read 800 lines trying to find the 50 I care about. A good procedural function is just a series of function calls
There is a time and place for everything. It might come as a shocker to you, but even goto is valid and elegant in some cases.
The problem is people like this who associate coding paradigm with their identity and become offended when someone contradicts them.
For many cases OOP has a heavy overhead. But as I learned the hard way, in many others it can save a huge deal of time and be more practical.
Well said. Too many functions without flowing code is actually harder to read as you need to redirect your attention to understand the function. But with complex code with a large number of moving cogwheels, it's inevitable that readability suffers.
oh no you gotta click F12
@@grimendancehall It's not about clicking, it's about potentially juggling multiple files and disconnected pieces and trying to glue them together in your head.
@@grimendancehall what is F12
Having worked in legacy codebases using both styles, I MUCH prefer the OOP style. I feel like your style of inlining is optimized for an end-to-end reading of the code. I rarely have to do this. (I actually can't remember the last time I had to do this.) I'm usually looking for a specific piece of functionality. When logic is broken down into well-named pieces, it is easier for me to see, at a glance, what something is doing. Looking through the before and after NES code, it was much easier for me to find the main game loop in the OOP version.
Visual Studio has great search features which make it pretty easy to follow calls around, and see how many places a method is called.
Also, virtual methods are great. :)
I can't wait for a video on Rust from you.
I’ve always wondered about this one bit… he changed “console.PPU.step()” into “stepPPU(console.PPU)” for the sake of not having PPU be a class… but if it was Rust, that step function would be defined in the impl for the PPU struct, which means you would end up back with “PPU.step()”. How is that any different from PPU being a class?
This video may be very long, but it's well worth watching. Thanks for putting in the time and effort to make this possible :)
24:05 I'm torn on this one. I like the OO notion of "This is an A and this is how it foo()'s, this is a B and this is how it foo()'s"
But I also have to say from experience that having all of the foo logic in one place instead of having it strewn across a billion files is nice
I remember getting flamed on StackOverflow or something when I was complaining about not being able to find the line of code that was actually being executed because of a chain of like 5 classes where each inherited methods from the previous and overrode a bunch of them. The bug with the object of type A wasn't in A.foo() because B extended A, but the bug wasn't in B.foo() either because C extended B, but the ... etc. Had there just been a giant switch, it would have been a lot easier to find.
Again, the usual caveat here where the codebase itself was just trash, not necessarily OOP's fault.
OOP basically replaces introspection and case (or switch) statements with compile time classes. That's it. The runtime cost of that case statement on a 1MHz eight bit CPU is obviously in the tens of microseconds for every time that the CPU has to decide which part of the code to call. Now do this on a 4GHz 64 bit CPU and the runtime cost is in the nanosecond range. It's completely irrelevant compared to the time it takes to fill the instruction/data caches and the instruction pipelines. Other optimizations will far outweigh the efficiency gains we can get from moving class dependent method selection into compile time. That's the reason why OOP was invented in the 1960s... because it made borderline sense. Today it doesn't make any sense, whatsoever.
My idea how to make this better is you can still have objects, but you have piping like in Elixir so it still has a similar syntax. In Java you have subject.verb(object) but I would do subject |> verb(object) which is the same as verb(subject, object). It has the benefits of both, and you can use it for more.
Loving the series. The hardest part of actually becoming an efficient programmer is unlearning all the OOP brainwashing. It can be useful for high-level structuring so I've been starting with C++ then reducing everything into procedural functions and tightly-packed data structs. Just by doing that I reduced static memory use and compiled program size at least 10-15%+ (which is a lot when you only have 32kb.)
And holy damn, nearly 20 years of C and I never knew you could nest a function within a function, I had to try that right away.
You can't nest functions in standard C. It's just that GCC supports it, so this feature is compiler-dependent. Or perhaps you were just talking about nesting functions in general, not in a C context.
You should just use C. Or use C++ as almost strict C (if you like the extras the C++ standard provides like parameter references, templates, the preprocessor, etc...). Which is probably what your code ends up looking like anyway.
TekkGnostic >didn't know you could nest a function within a function
wat. I learned this in school. you know the place that everyone says is pushing OOP to be super popular. and I only really started programming couple years ago. at the same time as I started school.
I think the main takeaway is that everything has its place.
I switched from a CS minor to a Math minor in college. If my classes were taught with the clarity that you present here, I probably wouldn't have had to.
Granted, one difficulty was the fact that over half of the teaching assistants from India spoke such incomprehensible English, and explained things so badly, that the only way you could get any help from them was to point at a bit of code where you were stuck and have them rewrite it for you.
I think one of the main things I've learned from the video is that there are plenty of people in the comment section that I wouldn't want to work with. 😂
Maybe LinkedIn profiles should include links to comments people make on programming videos. ;)
That's what I am thinking too!
47:00 This is a fantastic point. Atomised code is misleading because it looks like it is more meaningful, more thought-out and more general than it actually is. This is especially true in OOP, because if you write a small class, according to OOP, the class is meant to have responsibility of its own. It becomes an entity in itself, with a nice name and an intuitive description of its responsibilities. This is misleading because the class may only be used once and its methods may only be designed to work correctly in one particular case. The code is dissatisfying because it feels incomplete, and it becomes tempting to over-engineer the code so that the class is more robust and lives up to its name.
I'd add on top that such non-generalized small objects and functions -- especially if they produce side effects (or mutations to the internal object state) -- also cannot be thought of in isolation meaningfully without comprehending the bigger picture around them. The fact that they're all separate and teeny turns them into something akin to puzzle pieces that we have to piece them back together to even start approaching functionality that makes sense to a human in a reasonably high-level way.
It's like the other day I was cleaning out my closet and I found this weird string. It looked kind of like a shoelace but it wasn't a shoelace: too thin and round for shoes. So I understand what strings do. It was self-explanatory and "self-documenting" in that sense... but WTF is this precise string for, exactly? And I had to dig through my closet to try to figure it out, and finally I did... aha! It's a string for my powerball (hand grip training equipment) to start rolling the ball inside the gyroscope. Finally, I get it, but it took a long time and a lot of digging and asking questions to figure it out because the string was designed only to work for that powerball and nothing else. It would have been so much simpler if it was integrated as part of the powerball and not a separate thing so that it's immediately obviously how and where it's supposed to be used.
I think there is a way around the fact that a local function can access the containing function's scope - at least this works for me in C#:
If you have a function that's 500 lines, it's reasonable to put that in its own file anyway. But if the whole function is in its own file, then simply make it public, and for any local functions you'd like to have, instead make them global but private. Now it is clear to the reader that these private functions are only of concern in that file, but at the same time they don't have access to any state from the public function.
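The closest Go equivalent would be unexported package-level functions sitting next to the exported one (a sketch with hypothetical names; note that Go's visibility boundary is the package, not the file, so this is only an approximation of the C# trick):

// Process is the public entry point; itemLength is unexported, so callers in
// other packages can't reach it, and it sees nothing of Process's locals
// except what is explicitly passed in.
func Process(items []string) int {
    total := 0
    for _, item := range items {
        total += itemLength(item)
    }
    return total
}

func itemLength(s string) int {
    return len(s)
}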
Long functions really defeat the point of functional programming. The reason you break up functions is so the dependencies of that block of code are clear.
"This code relies on these two pieces of information, and returns this piece of information. Other than that, it doesn't need to worry."
It makes code easier to think about. If the code is inlined, you don't know what local variables from the last 400 lines a particular block relies on or what it's doing to the next 400 lines. "Reducing the surface area" is not a worthwhile reason to discard that.
@Keeper Holderson No - the point is to make code easier to think about, more testable, and to reduce the number of levers which might go wrong.
We discard classes because they're ambiguous - does this method rely on other code? Does it affect other code? The same principle applies to functions. The longer your functions, the more complex every single line in the function becomes, because it could potentially rely on (or affect) any of those other 400 lines.
If the function follows a logical sequence of steps, it really doesn't matter how long it is. But I agree, he might have gone slightly overboard.
For exploratory coding, sticking to large functions feels correct, since the pieces you can play with are readily available in scope. Until you are really settled into a design or you see massive repetition, splitting code along arbitrary boundaries is actively limiting your own experimentation. Also for future devs including yourself, changing features around is easier with fewer boundaries to rewrite. I really like pulling out a maximum of 20 functions that I use all the time as utility functions, where a reader starts to "get it" pretty quickly with my established vocabulary within my large procedures. I'm happy with this and don't feel obligated to narrow down functions, as long as I can do some E2E testing to confirm everything works. Not doing rocket science or servers though, just creative tools
@@molewizard Brian Will never talked about functional programming; he was talking about procedural programming. Are you confusing the two?
Anyway, he did acknowledge the separation of concerns and functional encapsulation you're advocating for. That was the whole point about those inner sections of longer functions that he'd like to see; they are essentially inner functions in all but name, but with outside local variables only in scope when explicitly passed in. They could even be considered testable units, with a sufficiently advanced testing framework.
what are your thoughts on Rust? i feel like it holds up many of the principles you were pointing to as the default and seems to encourage good design.
The stepSeconds fn is a nightmare. "I would have moved these out, but they are only used once" just leaves the tester to die or wait for the appropriate refactor.
I saw a video recently by Anjana Vakil called 'Oops! OOP is not what I thought' that totally changed my perspective on OOP.
The gist of it is that Alan Kay made a miscalculation (which he has himself lamented) when he coined the term "Object Oriented Programming", and it has confused whole generations of programmers.
The point wasn't to make objects supreme. Kay was coming from a molecular biology background, and his thought was that programs would be best built as structures composed of smaller structures which are composed of smaller structures and on and on.
For this goal, messaging is the most important piece, NOT the objects. It's how the bits of code communicate that allows the pieces to be independent.
Inheritance isn't a core principle, nor is polymorphism. Encapsulation is handy but not necessary, etc.
Contrast that to what is taught today, and it's clear something went very wrong. Alan Kay's idea of OOP was kissing cousins with functional programming, not a polar opposite as it is done today.
It’s sad what it’s been turned into :(
20:51 "We've divided everything into these separate classes that are supposed to be self-contained and encapsulated and yet these objects are effectively reaching into each other and calling each others' methods in a way that totally defeats encapsulation." Well said.
Try that in any software development. You're gonna see how quickly modifying the code and sub-branching new mechanics will fuck you up, to the point where you don't know what works and what doesn't. This dude never worked on huge workflows. Do not listen to this.
@@ph0ax497 Can you be more specific about what you mean when you say "modifying the code and sub-branching new mechanics"? I don't think this man is saying that you should make massive changes to an already working system just because the system is object oriented. I think he's just saying that the system will be very difficult to "modify" and "sub-branch new mechanics" into because object-oriented programming has imposed a murky, unnecessarily complex and unclear structure upon the program. Or are you saying something else?
@@richdog490 In every software development cycle you are bound to add new functionality to your code. That's the point. That's programming. When you have an already-working solution and you are just rewriting it to your weird tastes, then it's whatever. Try it in massive application pipelines, like for example Photoshop. When you have a project with millions of lines of code and an astonishing number of functionalities, you just can't do what this man is proposing. Dependencies upon dependencies will create problems and bugs which will be close to impossible to fix
Your channel is a goldmine. I don’t know Go but I started to pick it up about halfway through as I was listening. And what a fascinating project through which to learn it!
You have some strong opinions. I like it, even tho I don't always agree with you.
Liking people just because they have strong opinions is not a good idea. Any authoritarian a**hole will have strong opinions and try to shove them down people's throats. It doesn't make them good ideas, and I wouldn't respect anyone for this. Quite the contrary for me: I immediately distrust anyone who walks in and starts being all in your face about what he thinks.
That Brian Will though seems to know what he is talking about, so I'll give him the benefit of the doubt
@@KingGrio If someone doesn't have strong opinions or opinions about every little thing, you know they aren't an expert in their field.
@@Elrog3 I've found that people with very strong opinions tend to know very little about their area of alleged expertise.
@@recompile That doesn't conflict with what I said. I didn't say people with strong opinions tend to be experts. I said experts will have strong opinions about the minute details in their field.
There's a way of declaring subfunctions in C++ (I don't know if it works in C). I saw it done by a friend. The general idea is to declare a struct inside which a function can be declared. Since you can declare structs inside functions, you can safely use it as a wrapper for your function-inside-function declaration.
This has been done in MSVC, but I believe it will compile in gcc too.
It would look somewhat like this (I don't remember it in detail, but this should be about right):
void func1() {
    struct funcWrapper {
        static void func2() {
            // local helper; cannot see func1's variables
        }
    };
    funcWrapper::func2();
}
There are lambda expressions in C++:
auto subfunc = [](params) -> ret_type {
};
I believe that is the syntax. You can also list the variables you want to capture between the square brackets (prefix one with & to capture it by reference) to control which variables are accessible from the subfunction, or use a single & if all of them should be accessible by reference.
15:40 just so you know, strobe is more of a clock signal to the controller when to reload the shift register with the current buttons (or at least that’s what strobe usually means for the nes controllers.) I don’t know why it would be persistent in the struct though. Index is probably the bit in the shift register to expose to the cpu.
Haskell provides your subfunctions, which can be placed after the return value, with the where syntax.
+AscendingSerb Well yeah, everything google produces is shit
Why do you think bigtable is closed source?
Fuck Haskell.
@@TocnaelStarcraft how is that relevant to what he said?
Being the lead designer of a larger app (2m lines of code as of 3 years ago), I like to say we use C+, because C++ breaks down in the real world. I'm happy to use encapsulation when it fits well. But developers that use OO just for OO-ness' sake get their hands slapped. So in our app, small classes like PhoneNumber and SIN make sense. Large classes like UserInterface also work nicely (we talk to specialty hardware like forklifts and such). So, it may be all coded in C++, but basic C developers wouldn't have too much of an issue with most of it. I don't think OO is garbage. It's just that a lot of people use it in inappropriate ways. When all you have is a hammer, everything looks like a nail. So if you use OO on everything, then you sometimes end up with garbage.
+Lucid Moses Agreed, altho I would go so far as to say that Java is garbage, period.
Dan Kelly Yea, Java has some issues. But be careful with the "Use the right tool for the job" kind of thinking cuz you'll just end up making all the Java developers unemployed. :p
Lucid Moses We know that Java came about because of the financial benefit of having a language that will run on any platform, which IMO is the only upside of Java.
Dan Kelly Well, I don't think Java is the best there either. Just to pick a sample of radically different languages, I think C, Cobol, and Forth all do better at cross-platform than Java. Of course Java is on a lot of the currently popular platforms.
I'm not 100% on this but I think Java is written in C so Java can't even exist on an architecture that there isn't a C on.
So, again, if you're going with best of breed on cross-platform..... Java developers are unemployed.
+Lucid Moses C is not cross-platform in the same way that Java is. With C code you have to compile for the individual platforms. For Java - assuming you're using only the subset that is truly cross-platform and not relying on platform-specific stuff - you compile once and the bytecode is interpreted/JITted on every device.
Fortunately we now have alternatives to Java that make it simpler to compile once, run (almost) anywhere. Mono lets me write programs on my Windows PC that will happily run on a fairly wide variety of platforms.
Learned with C as a kid and then never got anything done as a hobbyist for 30 years because of OOP. I overthink things, and OOP had me spending most of my time designing and bulletproofing my code rather than finishing anything. Obsessing about details and consistency as I learn new language features. I tried to use Unreal, and I couldn't do it. Everything derives from a first-person shooter. Tried to make a simple 2.5D card-game-like mobile app, and I've got to derive it from the first-person shooter class. I jest, but it sucks. Then Unity is a fragmented mess of shit. But my new approach is lean and clean since I am the only dev. I don't write or think about anything that is not necessary. No asserts, null checks, etc. unless there is a runtime condition that might cause it. Why, all these years, did I assert things that I know I will never get wrong, or things that already throw an exception? I must remember C# in Unity is a scripting frontend to a C++ codebase. C# tricks and language features are unnecessary and undesired sometimes due to how Unity does some things. I hate OOP and how it screwed me up, and I never made it as a self-taught developer. Finally getting things done since I gave up spending hours implementing generic interfaces and nonsense for a game that I am the only programmer for! Code I will never reuse in whole, and maybe just copy/paste some pieces with variable names changed. OOP is for people writing animal and car simulators. Thanks for the video. I hope you survived the latest round of layoffs at Unity. That engine is a mess too, and now I wonder if ECS will continue to be developed.
edit: like why make properties with {get; set;} when a public field is fine? As someone scripting the Unity engine where I'm the only dev. I never went to school and got the class on how to avoid spending hours figuring out if I should write my get/set with line breaks or not and if I should go back and fix every source file every time I change my mind? Do they teach how to avoid that in college programming? Is it by not giving a shit about design patterns?
I don't have an answer for a lot of that, but as for properties, most of the time, a public field is the best option, however, if the variable needs to always be in a valid state (say a pointer that can't be set to null), then you would want a getter or setter to avoid that issue, but then again, the getter and setter would be an interface for the clients of your code, not for yourself.
OOP is literally just handles and functions. What's so hard about that? What were you overthinking? Interfaces are just function pointers but nicer and safer. OOP provides a way to have safer handles rather than throwing around naked pointers and hoping someone remembers to clean them up later.
With the approach you have here, this is great if you know that the program you have isn't going to change significantly beyond a few hotfixes. With a huge code base, your approach would take quite some time to refactor, but not impossible. I suppose the best medium is to either start out in an OOP mindset for the main framework and clean up later or plan everything out initially to have the final result of 9 files and 59 functions, but that would take a lot of foresight that even a God would consider impossible to predict because of how all the individual components would behave with one another on run-time.
I understand your approach and it's wonderful if you know the entire code base front and back. However, for programmers who may come into this revised project, they may be confused as to why everything is hot pressed into some very big functions. For those hating on the video and calling it "garbage" and "utter dog shit", do not dismiss the basic code base refactoring lessons that this video presents such as removing functions that are only used once/never and instead in-lining the code in that function into the place that they are used in, among other methods typical of his procedural methodology. If everyone is so concerned with the extreme reduction of the code base in-lining and still want to visualize the code into little chunks, consider this: if the function is only used in one place in the code, move that function to the place that it is used in and make it a static inline function (Although this is more particular for languages that support this kind of in-lining). If you notice that there are too many static inline functions, that is when you have to step back and reconsider how that code base is structured at a high level.
Having said all this, this is a much better emulator tutorial than most because it is also considering the architectural decisions that are made when working with this big code base while also illustrating the general interaction of these components and why they act the way they do. Highly underrated video on removing needless OOP components and balancing procedural practices with OOP in this code base.
I don't know why you would start with OOP and switch over, since OOP is way more inflexible than more imperative styles of programming. It makes way more sense to start imperative and rewrite the parts of the code base which operate like classes into an OOP style. The only issue with that is: why waste time rewriting perfectly good code to fit OOP, with no intrinsic benefit other than fitting some backwards sense of elegance?
OP took 3 giant paragraphs to make his point. What do we learn from this? xDDD
@@rosangelaserra4552 Good fundamental code design by way of sensible high level decisions that focus on the data being used instead of obsessing with OOP UML nonsense (see any Tech with Tim programming stream) makes it so you're not forced to make such a massive refactoring like in the video or live with the code that originally inspired the video. If it can be done in an elementary way and it accomplishes the job, so be it, but don't be a slave to a dogma or paradigm. Just make code simpler, but no more simpler than that, even if it means inlining functions used once manually. You might accidentally find some bugs after inlining like John Carmack.
OOP should be thought of as a way to glue algorithms together in a modular, maintainable form. I think, obviously, algorithms don't need OOP (say, path finding in a game), and they suffer from it. But if you want to take an algorithm and make it integratable into a larger system in a way which is extensible, maintanable (by a team of more than 1 person) and well documentable, then OOP is pretty good for that. This is why I see whenever people bash OOP, they give examples of small self-contained programs/scripts (programmed by 1 programmer) that do just one little thing. Of course, OOP there isn't needed much. It's like using a cannon to shoot a sparrow. People misusing OOP for everything doesn't render OOP garbage.
OOP apologists merely DECLARE that OOP makes things extensible and maintainable. Where is the proof? What percentage of software needs to be "extensible"? Why is it whenever OOP produces a mess it's always because "OOP isn't the right tool for the job" or "people misuse OOP". Then why is it taught and promoted as a universal paradigm that everyone should use? When exactly IS OOP "the right tool" for the job? Any concrete examples? Any concrete examples of people not "misusing OOP"?
So, it might be better to think of Object-Assisted Programming rather than Object-Oriented Programming, meaning that things like the CPU, View, or anything else that represents data/state which can act or be acted upon are tools to use in your program, instead of having a goal where every piece of code is sub-sub-subdivided into one-liners which need do-er 'objects' to work.
Also, I see a lot of static/singleton possibilities in the code that would prevent having to pass around classes, which exactly fixes what caused the weird structure in the original code.
Rik Schaaf i like that phrase "object assisted". It is how I approach my c++ code. it encourages me to think about my problem and not make classes for everything ... my code is usually simpler and faster to write
Around 32:11 you mentioned having inner functions at the end of their enclosing function rather than the beginning. This is actually possible in JavaScript due to function hoisting (all the functions declared in a scope are "hoisted" to the top of the scope ahead of any code during interpretation) and I've found it quite useful in practice.
+Patman128 Yes, useful indeed. As for isolating functions from the scope they're defined in, JAI should deliver on that when it is released.
Hoisting is actually considered a major flaw in the early stages of Javascript history, being largely abandoned by ES6+ standards, mainly because using something before declaring it in code can not only be confusing, but can also lead to unexpected behaviour. For example: Variables declared with "var" in JS will be allocated when hoisted, but will have no value. So this code below is generally bad:
doSomething(); // TypeError: only the var declaration is hoisted, so doSomething is still undefined here
var doSomething = _ => Math.random();
while using "let" or "const" instead of var would fix that issue, but would not allow for hoisting.
The only thing I’m fully mystified by is the discussion of “subfunctions”. He says around minute 33 he wants them hoisted and to not see enclosing scope. So then....why not just write them as private functions. He doesn’t want private functions because he says that that’s more code to grok and requires me to jump from one place to another when reading. But isn’t having hoisted nested anonymous functions result in the exact same thing?
Small tiny functions, and function composition, please.
Hm i'd say it does not. In his version, all the 'private' functions are contained in the one function that actually uses them. If you are not interested in that function, you can just skip over its specification and thereby also skip over all private functions that just concern the one function you're not interested in in the first place. I really do think this is indeed more clean
@@samuvisser that's just garbage programming. Try to modify the code and add additional mechanics, functions, and any other stuff considered "code development". Good luck with that when all your code is co-dependent. Fucking yikes.
@@ph0ax497 What the heck are you saying? That you want to account for the possibility that whoever maintains the code might want to use stepPPU outside of the StepSeconds functions? Why would you want that... We have one function that increments the nes logic. There should never be a need to call a part of that logic outside of said function.
The point of the subfunction was that it's complicated/involved enough that he doesn't want to make it a big, inline chunk, but it _will only ever_ be used inside this function, so he clarifies that as well to whomever reads the code in the future.
@@volbla what's the point in using overly complicated code? If something is overly complicated, then it means it's not factorised properly and you need to sit down again and rethink the problem. 99.9% of successful code was the most simple solution to the problem.
I've just pointed out that OOP was made to sustain decades of software development without needing to scrap the whole code base and write it from the beginning, which non-OOP solutions in the end always lead to.
My friend works on Media Composer software development, and without QT and strictly OOP it would just be impossible to achieve. And there is code there that was written before the year 2000. Almost everything has changed since then, and yet it is still compatible. Try that with functions only. Good luck.
@@ph0ax497 Did i defend overly complicated code? I don't think i did. These two versions of the nes emulator do the same thing, so they're equally complicated. They go through all the same steps. They're just structured differently.
His idea of sub-functions _is_ factorization. They're just confined to the only scope where they're relevant. Kinda like methods... Do you not think that improves readability? And is there really any use in factorizing it further? Exactly how small does each segment need to be?
Maybe a data bus abstraction would get rid of the recursive references. Also, code is data too. And you don't have to interpret a function as being a generalisation, you can think of it as a way of ordering code.
Awesome video, I loved watching it. In my experience, there are many situations where, like you pointed out, procedural style makes things easier and prevents you from overthinking and overgeneralizing the problem you are trying to tackle.
However, in some cases, object-oriented programming removes unnecessary conditions and switches that make your code harder to read. Especially in complex game engines where you deal with a bunch of objects which interact in diverse ways to the environment, other objects and the physics engine. In a procedural style, a program like this would become an unmanageable clutter of flags, variables and switch-statements.
Therefore, the statement "Object-Oriented Programming is Garbage" is an unnecessary generalization. Object-oriented programming is a tool programmers can use - and just like you would not use pliers to get a nail into a wall, you should not force yourself to use object-oriented programming to solve every problem at hand. Instead, you use it when it is appropriate and necessary.
Nevertheless, I would like to hear how you would realize such a complex program. Maybe I'm wrong and procedural programming is the best solution in any case - but right now, I think you need to differentiate situations which require a procedural style from those that require an object-oriented style.
+dailykouki I'm curious how you feel about Java, that will tell me all I need to know.
+Dan Kelly We get it, you hate Java. Did you have anything to say about OOP apart from endlessly spouting on about how terrible Java is?
Actually I think game logic is one of the best examples of a case where object oriented programming style doesn't make sense, because for complex interactions between gameobjects they almost always need to share a global world state, which breaks the primary goal of OOP, that is, encapsulation. A better way to deal with these interactions is using an Entity-Component-System, which is a more procedural style of programming. Most game engines use ECS these days.
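For readers who haven't met the term, a bare-bones ECS looks something like this (a toy Go sketch, not how any particular engine implements it): entities are just IDs, components are plain data, and systems are functions that iterate over the components they care about.

type Entity int

type Position struct{ X, Y float64 }
type Velocity struct{ DX, DY float64 }

type World struct {
    Positions  map[Entity]*Position
    Velocities map[Entity]*Velocity
}

// movementSystem reads and writes only the data it needs; interactions with
// other systems happen through the shared World, not through object methods.
func movementSystem(w *World, dt float64) {
    for e, v := range w.Velocities {
        if p, ok := w.Positions[e]; ok {
            p.X += v.DX * dt
            p.Y += v.DY * dt
        }
    }
}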
As you said, this is an example of "forced OO". The fact that OO isn't the best way to do everything doesn't make it garbage. An emulator is obviously one of the worst places to use OO.
If we are to prefer long functions and minimal files for our application, do you have any rules of thumb on when to split functionality into a new file/module? Theoretically, an entire application could be written in one file, but that doesn't seem very practical.
I have now looked at your videos about the problems with OOP. They give me a lot of food for thought. Thanks.
The original code had some issues such as cross-dependencies. It should instead have been delegated up.
Oh, and in regards to methods, I work by the principle that if code is never reused and should always go through the same procedure, then the code can be written as procedural. Not doing so is just introducing unnecessary complexity. That is, unless you can have a very good description of what a code section does and you can easily refactor it. However, then we're likely in functional territory rather than OO territory, although the lines do get a bit blurred in some cases.
A bad OO implementation does not prove OO design is shit. With that said, I'm very slow to make solutions with multiple inheritance. Mostly one level, sometimes two, and rarely three levels deep. I use more has-a than is-a, and I often have lots of static methods too, all of which are very functional. I do recall doing OO design with structs and pointers to functions on the Mozilla browser in C back in '94... I'm sick of managed code these days, but I digress... thanks for the vid.
I don't know about Go, but in C/C++ if you wish to hide your extracted functions you could always use anonymous namespace in cpp or private static functions of some class. The namespace solution seems more appropriate.
Also, why bash GLFW? Why is it not professional enough? If you wish to do some really platform-specific stuff, you can always get the handle from it and operate on it, but still a lot of common functionality is covered. Smells like 'DIY is the path of true developers'.
Agree with most of this, but not the inlining of all of the functions. I like code to read like plain English. Inlining all those functions reduces testability and navigability, which are both extremely important. It also increases indentation, which can make code more difficult to comprehend.
Your version is easier to understand? OK, I don't know anything about the NES and I don't even know Go, but suppose I would like to change the CPU clock. In the original project I open nes/ and see there is a cpu.go file; I open it and wow!, there is a CPUFrequency constant right at line 8. Now, where is it in your version? I don't even know where to start, as the names of your files give no clue. I opened cpu_instructions. Didn't find it there, but my god, the executeInstructions() is great!
Please explain to me how defining the instruction set inside the function that is supposed to execute them is supposed to be easier to understand or track? It is even hard to track where the function ends if you don't have a proper text editor.
I also understand that spending hours scrolling through text is easier than opening the proper file...
On the other hand, I could probably figure out the entire original project using Windows Explorer and Notepad, but the thing is, code is not a fantasy book that you're supposed to read from cover to cover. It's like a machine with a couple of levers - I like to be able to quickly find out what is the proper lever for the job, and not to think about how the entire machine works if it works correctly. For example:
for j := 0; j < ny; j++ {
for i := 0; i < nx; i++ {
x := float32(ox + i*sx)
y := float32(oy + j*sy)
index := nx*(j+view.scroll) + i
if index >= len(view.paths) {
continue
}
path := view.paths[index]
tx, ty, tw, th := view.texture.Lookup(path)
drawThumbnail(x, y, tx, ty, tw, th)
}
}
The main reason why you want drawThumbnail as a separate function, even if it's called only in this single place, is that you can clearly see what is inside the scope of your loops. Why should I care how a thumbnail is drawn? It is easy to test this single function, and if something is wrong and I'm sure the function is ok, this means some other code in this loop is broken. It's much easier to focus on the problem.
Another thing: it is easier to keep/track the logical order of instructions. Sure, you can just line-comment what the following code does, but in real life you also temporarily comment out actual code lines or parts of a function. What if I want to move the drawThumbnail code above the previous instruction or set of instructions? It's hard to track where it ends in such a scenario.
Reason number three is that your version of those loops barely fits on my screen, and the only clue I see there is "//draw thumbnail" somewhere in the middle of it. I've found this comment; is that all those loops do? Don't know, need to keep reading... On the other hand, I have only 10 lines of original code that I need to look at to know what it does.
Now, when I want to know when this is executed, I scroll a few lines up and see update() right beside MenuView, and I'm in "menuview.go". You don't need to be Sherlock to guess what it means, and that if you are interested in memory management, for example, you are in the wrong place.
Where is this code in your example? Um, I'm somewhere in run.go, I scroll up, scroll up, scroll up, scroll up, scroll up, Aha! "case *MenuView:"! But it is the case of what? I don't even see any other cases... scroll up, scroll up, scroll up, scroll up, shit - beginning of the file! - scroll down, scroll down, scroll down...
I can't imagine how you could say it's easier to understand unless you want to know every single line so you can just open a file and scroooooll. And even if you would want to know all the code eventually, it is still better to learn it from bigger pieces. Like if you want to know how car is working, after you found out that it has steel body you probably would like to know that it has wheels and engine etc. and not why it's steel, what a steel is made of, how it's melted and formed, what properties of chemical elements make it suitable for a steel and so on...
I could argue with almost all statements in your videos but I refuse to believe you are not a troll. Problem is that there surely are some people that just started programming and they may think you are showing best practices...
God forbid you actually read what you’re working on
Interesting, I tend to agree with you regarding the OOP stuff. However, simply using huge functions with everything inlined has a side issue when it comes to testing. Testing smaller functions is usually easier. But of course this all depends on the code you're writing. It might still be testable if it is long. It's just that you didn't mention testing (I think?).
Also, regarding "putting sub-functions after the last return value": to me, it doesn't really warrant changing a language to accommodate this. Rather, just put it after your function and use naming or whatever to make it clear it's used in no other place. The code will still be just as readable, without making the language more complex.
+Brian Will All the examples you show are conversions of code already written as an OO program. Wouldn't it only be a true comparison if you coded a program from scratch as a procedural program and 'challenged' another, equally skilled programmer to code the same program object oriented, with lines, functions, etc., and time also being measured?
+Blake Denham That's true because, naturally, revision will lead to better design. So the videos aren't strictly a fair comparison.
The specs would have to be excruciating precise.
Why not remake a working program using two different techniques?
And OO tends to need more radical refactoring when a change in the spec affects the chosen class/code organisation. Generally this can be a problem in time/budget-constrained projects, and such a surprise change should be part of the test to make it more realistic.
"naturally, revision will lead to better design"
Lol, no. Most programmers would take a procedural design and overcomplicate it with OOP.
What I personally don't like about these presentations is the general lack of slides relative to the speech duration. The exposed ideas do not entirely reflect the contents of the slides, so it is easy to "fall asleep" and lose the audience (like those painful school presentations we had to sit through when we were young). Perhaps a diagram or more slides to capture the off-topic ideas would be better.
THIS presentation was worth every penny to stay awake ))
Only 12 minutes in and it’s no wonder I seem to prefer C and Python for personal projects....
Interesting stuff, what's your take on functional programming?
+Jacek Hury It's garbage too, but more reasonable than OOP.
Pure FP (immutability) is great "in theory", but like OOP, it forces you to work around the problem in a FP way instead of just solving the problem itself. Also, extremely expressive type systems are nice but can encourage you to make everything super generic, which he talks about near the start about how generic code should be used for libraries, while actual application code is likely not going to be so generic. Again, over-thinking the problems.
However FP does encourage simplifying data and keep it distinct from the functions that mutate it, so in that respect FP is better than OOP. I believe that FP and related type systems also encourage "reusability" and "modularity" (buzzwords) better than OOP, even though the premise of OOP was to solve these problems. Runtime polymorphism, one of the central properties of OOP, can be represented by FP languages.
JBeja M why? are funcs not good?
Milo Turner what if I'm performing the same task over and over again in the same program? should I use functions then?
functional programming does not mean just programming with functions. This link gives a good overview docs.python.org/2.7/howto/functional.html
I totally agree that general solutions have to be specifically engineered. This is one reason why it is good to try to refactor and generalize solutions over time.
You have to ask yourself why you weren't trying a general approach in the first place. Most examples for OOP are along the lines of databases. They define a general animalClass and then derive dogClass and catClass from it. Why? What's the point? Just make ONE more general form and let someone write "Dog" or "Cat" or "Platypus" into the species field. Instead of writing two versions of the grooming() function, just write one and have an if(species=="Dog") statement take care of the differences between dog and cat and platypus. It's also far more likely to produce software that leaves room for user workarounds, just in case the architect gets the use case wrong (which the architect always does) and somebody decides to use the code to groom cows. :-)
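In code, the suggestion amounts to something like this (a toy sketch): one record type with a species field, and the differences handled inside one function rather than a class per animal.

type Animal struct {
    Name    string
    Species string
}

// groom keeps the species-specific differences in one place instead of
// spreading them across a class hierarchy.
func groom(a Animal) string {
    if a.Species == "Dog" {
        return "brush fur, trim nails"
    }
    if a.Species == "Cat" {
        return "attempt brushing, accept defeat"
    }
    return "generic grooming" // cows, platypuses, and whatever else shows up later
}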
If this were a serious application I was responsible for maintaining I would prefer having the 321 test points rather than 59. I also want classes for dependency injection.
Using code revision to nail your point?
Kind of unfair. I've done code revisions and cut the number of lines of code by half. Does that make my technique better? No.
It simply means I had the time to review the code and re-structure it to achieve its final goal.
If you really want to nail your point, challenge an OO programmer to write a medium-sized program (10,000+ lines of code) and see who comes up with the best solution in terms of performance, readability and maintainability.
So, why didn’t you put everything in one giant function anyway?
While in general I agree with many of the things that you said against OOP, in this case the original version is more readable because of its much lower indentation level.
*8* levels of nested control flow? A dozen+ variables in scope?? We have VERY different ideas of 'simple'.
I can't even imagine trying to test this.
Yeah, it's impossible to comprehend
Why is having fewer files better? I'd rather have one class per file. Even if they're sparse, you don't need to worry about that file at that moment. Hypothetically, OO should get you code that is less to worry about and easier to change via inheritance. Hypothetically, in a "functional style" you could extend functions by passing a function as an argument (if the language supports it; I haven't used GoLang).
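For what it's worth, Go does support that: functions are values and can be passed as arguments. A trivial sketch:

// apply extends behaviour by taking the varying step as a parameter
// instead of requiring a subclass override.
func apply(values []int, f func(int) int) []int {
    out := make([]int, len(values))
    for i, v := range values {
        out[i] = f(v)
    }
    return out
}

// usage: doubled := apply([]int{1, 2, 3}, func(v int) int { return v * 2 })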
In C# you can declare local (nested) methods anywhere you like.
A bit of a side thing, but don't be afraid to rewrite code. If it's not super well written you do have to go through and make sure the changes are applied everywhere, but afterwards you can end up with code that is not only more understandable but also more efficient. And if it doesn't work, that's why we have version control.
Have you tried OCaml? It seems to fix many of the problems that you have issues with. For example, you can define nested functions and use nested let bindings to do simple computations without adding local variables to the scope.
Sorry, Brooks and Linus missed it ... it's not all about tables/data and it's not all about flow/control ... you need to think about both and expose both ... as for OO, the best OO code I've seen was written in the early 60's ... yes, before it was named/invented ... it had 6 objects and a number of functions ... despite being written in (macro) assembly it was well commented, easy to understand, and very efficient. The key was that the objects were well designed and represented the 6 fundamental things the code needed to manipulate ... virtually all the OO code I have seen since it was named is rubbish ..
Lots of dogma in the software world. I'm pretty skeptical of OOP, but I'm afraid to say so at work.
I'm of the mindset that there's often more than one right way to do things. Between procedural and OO, I think different use cases can determine that one or the other will work better.
In this instance, I’m not actually sure which one I prefer, especially because I’m frankly not a fan of the way Brian shoves stuff into huge functions. I get for him that makes it easier to understand, but for me personally I prefer things more split up, whether they are procedural or OO.
And so in this case I think the original OO repo has better code (I’ve looked through them both), even if I don’t have a strong conviction that you couldn’t do as good or a better job in straight procedural code.
Using the interface as an open enum is just brilliant.
Haskell has the where keyword for locally scoped functions.
It's a shame that scope pollution is not addressed in a lot of programming languages.
I think you'd really like BASIC - nested DEFSUBs (functions) don't have closures and are difficult to return, and they can also be put at the bottom of a parent DEFSUB. The only thing is you can't use brace nesting, since the language doesn't have it.
You make a lot of very solid points.
In your refactoring of the Mapper interface to a type-switch, though: what is the point of still using a declared interface here? If you are disregarding extensibility anyway (which would require adding to the internal type switch rather than conforming a possible new struct to an interface), why not just make Mapper of type interface{} and add a (failing) default case to your switch?
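For comparison, a rough Go sketch of the interface{}-plus-default-case alternative being suggested (the mapper type names and the readMapper function are purely illustrative, not taken from either repo):

```go
package main

import "fmt"

// Hypothetical mapper types standing in for the emulator's mapper structs.
type Mapper1 struct{ prg []byte }
type Mapper2 struct{ prg []byte }

func (m *Mapper1) Read(address uint16) byte { return m.prg[address%uint16(len(m.prg))] }
func (m *Mapper2) Read(address uint16) byte { return m.prg[address%uint16(len(m.prg))] }

// readMapper takes a plain interface{} and type-switches on the concrete
// type; an unknown mapper is only caught at runtime by the failing default.
func readMapper(m interface{}, address uint16) byte {
	switch mapper := m.(type) {
	case *Mapper1:
		return mapper.Read(address)
	case *Mapper2:
		return mapper.Read(address)
	default:
		panic(fmt.Sprintf("unsupported mapper type %T", m))
	}
}

func main() {
	m := &Mapper1{prg: []byte{0xA9, 0x01}}
	fmt.Printf("%#02x\n", readMapper(m, 0))
}
```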
Looked at your nes repo, you straight up butchered the readability of the project. Even if you don't do OO, you should at least follow some principles of making your code readable and reusable, even if it's just the bare minimum.
Fuck you, victor.
@@edwin3928ohd == master programmer
Unreasonably shitty in one short, single line.
Real mature. @@edwin3928ohd
In C++ you can use lambdas as sub-functions within your functions, and they won't be able to access any variables declared in the outer scope unless you explicitly have the lambda 'capture' them.
Same with Python, for reference.
31:23 JavaScript allows you to declare "sub-functions" at the end of a function (even after the return!). Unfortunately you still can't exclude variables from the enclosing function's scope.
So does C#.
So does C, but I think it's a GCC extension, so it won't be standard and portable.
The language construct you are speaking about at the 51-minute mark is lambdas, available since C++11.
Can you compare the binary file sizes?
@briantwill so what's your opinion on clean-code?
I might have missed it, but was there any change in performance between the two programs? Obviously performance isn't the only parameter that matters, but in the end it's what my boss cares about most in the age of cloud computing. One would assume that if you don't fracture data into many different instances of a class, but instead operate on larger contiguous arrays of data, you'll end up with fewer stalls and cache misses and thus keep the CPU busy a lot more, resulting in better performance. Of course this differs on a case-by-case basis, but I'd be interested to know whether this rewrite resulted in more than just a more readable and probably more maintainable code base.
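To illustrate the layout point being made here (a generic sketch, nothing from either repo; whether it actually wins is exactly the kind of thing you'd have to benchmark case by case):

```go
package main

import "fmt"

type Particle struct {
	X, Y, VX, VY float64
}

// Contiguous storage: one allocation, values laid out next to each other,
// which tends to be friendlier to the cache when iterating.
func sumContiguous(ps []Particle) float64 {
	total := 0.0
	for i := range ps {
		total += ps[i].X
	}
	return total
}

// Pointer-per-object storage: each element may live in a separate
// allocation, so iteration can hop around the heap.
func sumScattered(ps []*Particle) float64 {
	total := 0.0
	for _, p := range ps {
		total += p.X
	}
	return total
}

func main() {
	flat := make([]Particle, 3)
	boxed := []*Particle{{X: 1}, {X: 2}, {X: 3}}
	fmt.Println(sumContiguous(flat), sumScattered(boxed))
}
```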
If you are working with large data, especially in the cloud, you will be using a distributed paradigm like Spark, and you won't be doing I/O writes unless you're explicitly writing to disk. Moreover, you'll be more interested in reducing the shuffling or sorting of data in memory to cut computation, or in coalescing the data before persisting to avoid bottlenecks. Cloud cost and performance are definitely data-related first in my experience, and how you move, sort and modify the data matters far more than the programming paradigm used. The paradigm doesn't really affect it much, because a lot of the frameworks you use will optimize via built-in optimization engines or code generation. In the cloud, CPU usage mostly goes to running containers, VMs and batch writes. If you are doing long-running batch processing with writes, you could possibly reduce CPU usage by designing your pipelines to match your chosen persistence layer - an RDBMS requires indexing, blob storage doesn't. Refactoring from OOP to functional etc. is not as important as first understanding the actions of the pipeline, the data structure and its persistence. So while I agree that refactoring for programming paradigm is not that important, there is no real way to compare unless you use the exact same data processing. My guess is the memory overhead of holding pointers to classes, functions etc. would be low.
There's a corollary concept related to OOP that you might be interested in addressing - fully normalizing an SQL database. Is the "pure" normalized database actually desirable in the real world? I no longer do that at this point, and I still use SQL databases even though I've adopted MongoDB for various purposes, too.
At work we recently went through the process of normalizing our database because it was causing real world performance issues, so it's definitely not purely theoretical dogma, but I don't think we adhere to any notions of having "pure" normal forms unless there are real practical reasons for it.
I don't think every database necessarily needs to be normalized, but doing so totally makes sense to me. It gives stronger guarantees at the database level that the data will be correct that you'd otherwise have to enforce in code, and it can improve performance and space usage. On the other hand, we do plenty of de-normalizing too since it can also improve performance or just make things simpler by putting commonly used data together in the same table.
So I guess like all things it depends on the situation, but unless there's a reason not to, personally I'm inclined toward more normalization than less, just without being dogmatic about it.
@@telaferrum From an OOP point of view, MongoDB is clearly much more closely related than an SQL database is. The so-called "document" is exactly like an "object." If OO programming is so good and desirable, then its natural persistent representation is a Mongo database. Since there are numerous issues with that in the real world, I think it makes sense to question OO methodology at the fundamental level.
@@Jollyprez For the purposes of this discussion, OOP and the representation of data in databases aren't particularly related, though. The problem with OOP is that it combines data with logic/functions/behavior, which often does not make sense. However, before objects there were still structs for structuring data. Data without behavior is not an object. So neither relational nor NoSQL databases are object oriented.
OOP is much like design patterns in that the aim is good, and there are times when it is good, but it can be abused by early-stage developers.
22:00 Shouldn't the instructions be constants rather than variables? I think this whole part of the code needs rethinking.
In modern C++ you can write a lambda function and define the list of captured variables, so your suggestion about an internal block can be implemented using a lambda; you can also return a value from it, or even change variables outside it.
19:10 Surely using class fields to hold pointers does make sense if you have an object model where interaction with one class requires an update to another. What's the alternative - explicitly parameterise (25:00)? But the tight coupling is still there whichever strategy is used.
The sub-functions, if declared privately as methods in a class, could just be declared after usage. By inlining the code you have just shot yourself in the foot.
Also, not saving a lot of lines for the whole project, while creating "need to scroll past the code" issues, doesn't really help your case.
Functional programming is not inherently bad. Neither is OOP. But there are bad designs in both camps.
Well, the difference is that with functional programming you get to return functions that can be passed to other functions in order to create data transformations without worrying about data corruption, dynamic state changes, messaging, etc. Not to mention functional programming has natural built-in ways to segregate data from state.
OOP requires you to create special classes and managers to describe all the potential state changes as data objects are passed to other objects, directly or indirectly by reference. That introduces the problem of OOP not being able to separate data and its shape from state changes/transformations. Hence more code has to be written, with complex design patterns, to mitigate the fundamental handicaps that come with that paradigm.
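A tiny Go sketch of the "transformations without shared mutable state" point (names are made up; Go isn't a pure FP language, but first-class functions are enough to show the idea):

```go
package main

import (
	"fmt"
	"strings"
)

// compose chains two pure string transformations into a new function,
// with no shared mutable state between them.
func compose(f, g func(string) string) func(string) string {
	return func(s string) string { return g(f(s)) }
}

func main() {
	trim := func(s string) string { return strings.TrimSpace(s) }
	upper := func(s string) string { return strings.ToUpper(s) }

	normalize := compose(trim, upper)
	fmt.Println(normalize("  hello  ")) // "HELLO"
}
```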
@@georgeokello8620 more code isn't necessarily a bad thing though. And state is something that has to change sometimes. When just analyzing data, change of state isn't much needed, but data is changing. So somewhere the state has to change.
@@georgeokello8620 how would you go about writing an ERP system in a functional manner?
@@georgeokello8620 and writing complex design patterns is usually a bad idea regardless of what paradigm you are using.
47:52: What's with that airborne vehicle(?) flying past the Moon in what looks like an ancient fantasy setting?
Hi Brian, first props to your voice, it sounds gorgeous.
On all your points against the OOP methods used in the original nes project, I mostly think the same way as you.
Throughout all of what you said, I kept asking myself about your view on the business side of programming. If everybody programmed in the most sane way, many, many people would probably lose their jobs in the industry - don't you think? I don't think we are at a point, as a community, where we could handle so much intelligent code. We need broken, confusing and bad code to make sure that a lot of people stay in their jobs.
Don't you think?
Insane
In Visual Studio you can define #regions and collapse them in the UI. That would solve the issue of the functions needing to be defined at the top of a function - you can simply hide them. However, what I do is define a private method and place it at the bottom of the class; that way it explicitly limits its scope and tucks the function underneath, out of the way. I don't name the function with any particular naming convention, but if you really did want to explicitly mark a function as single-use, you could put a prefix on the function name like '||' or "su_" or something.
Some editors (Scintilla-based, wx) even allow collapsing on the {} boundaries too.
Now do the opposite. Take a program which is written as very few files and try to split it up in order to extend it. See what is easier and see why it is better to aim for "over-splitting" rather than "under-splitting". :)
You have absolutely no idea what you're talking about. The aim is not to intentionally split functionality in multiple locations. You have to write the code as is, make it easy for the CPU to execute your instructions, and the code structure will mature into a proper API naturally while you bundle data in logical groups. Once you go to higher levels of architecture design it will become even more apparent how files should be structured, especially when you're building multiplatform or hot-reloadable software. The fact that you're lobbying for "over-splitting", and even thinking in terms of "over" or "under splitting" shows me you've never done real low level multiplatform work.
@@Borgilian lol calm down "expert"
Your rant about what you think I think is only in your head, just like your idea of my experience as a software developer.
@@fededevi1985 Actually, he was completely right. Your lack of experience is showing.