always a delight listening to Kevlin Henney
yeah.. but maybe they should let programmers speak, not gurus/authors. this guy has no clue what he's talking about, and "the gang of four" has done more damage to programming than anything else. OOP is for dummies and people who only think about code, not writing it. i'm pretty sure no mars robot (critical infrastructure) uses OOP. no game engine uses OOP. they use component/data models and functional concepts.. i wonder why! since rendering is one of the more complex tasks, i would assume these guys know what they are doing. this talk was so damn basic. you should properly name functions? no shit sherlock.. this is the first thing you learn. try to find any code from this guy, not just talks.. he really has no fucking clue what he's talking about. he never refactored ANYTHING by himself other than bash scripts to sort his porn folders.
Kevlin Henney is one of the best speakers currently on the circuit. Looking forward to settling down with a glass of something festive this evening and listening.
4:00 I really like this perspective of “imagining technical debt as debt”.
As in, in business, finance, investment and so on, it feels so obvious that the first rule is "debt is good", and your ability to take on lots of good debt is one of the most important factors in your ability to get money moving.
Meanwhile, I've never even thought about the concept of "good technical debt". But now, thinking of it, the ability of a team to quickly take out a big "technical debt" when necessary is a very strong attribute.
Haven't seen this yet, but for a fairly long time I've been saying that refactors and rewrites are the most important thing in software architecture, to the extent that you should write the code once as a proof of concept and only then write the finished version. It only takes maybe 20% more time, but the end result is often way better and easier to read. It helps you understand the problem from a practical perspective, at which point you can get rid of all the useless code you used to get there, leaving only a pure problem definition.
33:50 I never expected a Kevlin Henney - Adam Neely crossover, but here we are.
Waited 1:03:43 for Mr Henney to walk out the left side and come back in on the right side, Asteroids fashion. Disappointed.
😂
We used to say we were rewriting code. And the bosses would freak out, and whine about budgets, and "why are you doing things twice" and such. So we called it refactoring, and the bosses thought it was a neat sounding technical buzzword, and left us in peace. Now, after being out of the game a while, it looks like a whole theology has popped up around it while I was away, complete with holy books. Which is... fine?
Depends. If you are rewriting a search/sort etc. algorithm to make it O(n log n) instead of O(n^2), for instance, then it's fine, especially if you leave the old code in place, just in case it has fewer bugs than the new code. If you are simply doing it because you don't like somebody else's choice of method names, then it's completely wrong. What people call refactoring is usually something between these two extremes, but a whole lot of it seems to be more about aesthetics than actual functionality.
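For a concrete toy example of that kind of rewrite (mine, not from the talk): a duplicate check taken from a quadratic double loop to sort-then-scan.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// O(n^2): compare every pair.
bool hasDuplicateQuadratic(const std::vector<int>& v) {
    for (std::size_t i = 0; i < v.size(); ++i)
        for (std::size_t j = i + 1; j < v.size(); ++j)
            if (v[i] == v[j]) return true;
    return false;
}

// O(n log n): sort a copy, then look for adjacent equal elements.
bool hasDuplicateSorted(std::vector<int> v) {
    std::sort(v.begin(), v.end());
    return std::adjacent_find(v.begin(), v.end()) != v.end();
}
```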
@@lepidoptera9337
Refactoring is more about data structures, how data is accessed, and the management of responsibility.
For responsibility: e.g. move a certain part of the program out into a new method/class, or, the other way around, delegate a task to the caller of the method/class.
For data management: e.g. instead of always pulling the data out of a string from a text file, read the file once, parse the string, and save the result into a dictionary/hashmap, then access the dictionary/hashmap from then on.
Refactoring changes the form of the code, not the logic. It makes it easier to work in the code base in the future.
What you are referring to is optimization.
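A minimal sketch of that data-management refactoring (the key=value file format and names here are made up for illustration):

```cpp
#include <fstream>
#include <string>
#include <unordered_map>

// Parse the file once into a map, instead of re-scanning the raw
// string every time a value is needed.
std::unordered_map<std::string, std::string> loadConfig(const std::string& path) {
    std::unordered_map<std::string, std::string> config;
    std::ifstream in(path);
    std::string line;
    while (std::getline(in, line)) {
        auto eq = line.find('=');
        if (eq == std::string::npos) continue;  // skip malformed lines
        config[line.substr(0, eq)] = line.substr(eq + 1);
    }
    return config;
}

// Every later lookup is a hash access, not another parse:
//   auto config = loadConfig("settings.txt");
//   std::string host = config["host"];
```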
@@MrDavibu "Refactoring is more about data structures and how data is accessed and management of responsibility."
That's my point. If refactoring moves the implementation or maintenance of a method from one team (member) to another, that's inefficient. People who were familiar with the code are now out of the loop and somebody has to learn it from scratch.
Objects are, in many aspects, a false abstraction. In the real world "things" usually don't perform actions on themselves. A dog, for instance, doesn't walk itself. It is being walked. It may be walked by more than one person. It may be walked all by itself or together with multiple other dogs, which may not even belong to the same owner. So, where does walkingTheDog() belong? To personClass? To a dogOwnerClass that derives from personClass? But then we also need a dogWalkerClass that also derives from personClass but does not have a fixed number of dogs as member variables, right? And no, one cannot even exclude a more general walkingTheAnimal() method from catClass, because there are plenty of people who walk cats on a leash. :-)
Linking actions to data is, in my opinion, a highly inflexible strategy that forces architects and teams to invent strange hierarchies that don't describe the real world well. Is it beneficial to press everything into such a narrow scheme? I doubt it. The only applications where I find classes and objects useful are GUIs. Everything else constantly tries to break out of this scheme in my experience.
There is, of course, a need for libraries to isolate well defined groups of actions from each other. File IO is a different library than the low level SATA interface hardware library, is different from the file system library, which is different from a string library which is different from a word processor library. There is a natural hierarchy there. The word processor will never need to know about the block structure of a hard drive and the way the file system uses it. But then, again, these functions are so different that we would not be doing ourselves much of a favor by subclassing wordDocumentClass from stringClass from fileIOClass from blockDeviceClass from hardwareDriverClass.
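To put the walking-the-dog point in code: the action can live as a free function over plain data, with no hierarchy at all (a sketch with made-up names):

```cpp
#include <cstdio>
#include <string>
#include <vector>

struct Person { std::string name; };
struct Dog    { std::string name; };

// The action is a free function over plain data: any person can walk
// any group of dogs, with no ownership hierarchy baked into the types.
void walk(const Person& walker, const std::vector<Dog>& dogs) {
    for (const Dog& dog : dogs)
        std::printf("%s walks %s\n", walker.name.c_str(), dog.name.c_str());
}
```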
"why are you doing things twice?"
Because Fred Brooks said you will anyway, and he used to be a manager at IBM so he probably knew what he was on about.
Refactoring is about reducing constraints.
What a great talk!
36:43 that's obviously kanji, not hiragana; in hiragana it would look like かいぜん
Great talk, but a minor nitpick on the production: should've had more cameras, or told Kevlin not to move outside the camera boundaries.
Yeah, it is kind of weird, but at the same time I'm glad they don't have one of those camera setups that tracks the speaker as they walk around, has a 2-second lag so it's almost never framed properly anyway, and gives everyone watching motion sickness when they watch it back.
Honestly, for me, I liked not always being able to see the speaker. Made me focus more on what he was talking about
36:13 and don't forget about excremental development, a.k.a. Microsoft SharePoint
While I agree with much of this video, I find the go-to solution of using heavy libraries (like regex) rather than lightweight string parsing disconcerting. The fact that the example was far messier than it should have been (with duplicate steps) seems almost contrived to justify switching to something much heavier. Now maybe if your standard runtime is already an over-bloated mess (e.g. Java, .Net), it may not make much difference. But this kind of circular thinking is probably what created those bloated runtime libraries in the first place. I've also seen too much code over the years where programmers have used a bunch of high-level functions to do what would have been much simpler (and less error prone) to do at a lower level.
Basically it's the philosophy of "why use a screwdriver that you can carry in your pocket everywhere when you can lug around a big heavy toolbox full of tools that let you fix specific problems faster?"
And don't get me started on the domino dependency problem.. some code uses one class/function from a library. That [often bloated] library also contains other APIs (which you don't use) that have their own dependencies.. and so on. So after the effort of installing and/or compiling all those dependencies (hoping you don't hit a "DLL Hell" style version conflict in the process).. you finally get to use that one API, just so you can save writing the ten lines of code that would do the same thing (without any extra dependencies). Reusing someone else's implementation rather than writing your own is great.. until it's not.
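To make the trade-off concrete, here's a toy comparison (my own illustration; the date format is arbitrary) of the regex route versus plain string handling:

```cpp
#include <regex>
#include <string>

// Heavyweight: pull in the whole regex engine for a fixed "YYYY-MM-DD" layout.
bool parseDateRegex(const std::string& s, int& y, int& m, int& d) {
    static const std::regex re(R"((\d{4})-(\d{2})-(\d{2}))");
    std::smatch match;
    if (!std::regex_match(s, match, re)) return false;
    y = std::stoi(match[1].str());
    m = std::stoi(match[2].str());
    d = std::stoi(match[3].str());
    return true;
}

// Lightweight: direct indexing, no extra machinery (digit validation
// omitted for brevity).
bool parseDatePlain(const std::string& s, int& y, int& m, int& d) {
    if (s.size() != 10 || s[4] != '-' || s[7] != '-') return false;
    y = std::stoi(s.substr(0, 4));
    m = std::stoi(s.substr(5, 2));
    d = std::stoi(s.substr(8, 2));
    return true;
}
```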
Brancusi was a Romanian sculptor, not a poet. That being said, great presentation!
Refactoring code really should be seen as increasing security. Code that is easier to read is easier to audit. Looking at code more times means more opportunities to see bugs that could lead to security concerns. That being said, it has to be done carefully so as not to introduce new bugs, of course...
Refactoring doesn't guarantee any of that. What it does guarantee is that everybody will be looking at code they have never seen after the refactoring is done, so people have to re-learn their own code. How in the frelling universe does that increase security and readability? Don't buy into the bullshit. Think on your own.
@@lepidoptera9337 the point of refactoring is to make complicated, potentially poorly written code clearer to understand. It doesn't guarantee anything, but if done well it should increase security. No one is forcing giant changes that confuse people on anyone.
@@wdavid3116 Refactoring doesn't make poor code better. It's a completely algorithmic operation at the lowest level of code comprehension. All it does is untangle poor architectural choices that were made without an understanding of the actual dependencies. The reason why it's needed in OOP is the OOP workflow itself, which requires the architect to have god-like foresight because he is tasked with creating ultra-strong dependencies through inheritance. And because OOP boxes the coders in very strongly, very early on, the team ends up in all the wrong boxes. The only way out is frequent refactoring. In addition, for sufficiently large systems refactoring necessarily breaks more than it fixes. That's the problem with strongly coupled systems. Good architecture avoids strong coupling. In other words: inheritance is evil. Don't do it. Make the system flexible, with as few dependencies as possible.
@@lepidoptera9337 You are operating with a very specific, limited and incorrect definition of refactoring. If you google define: refactoring, you'll get "restructure (the source code of an application or piece of software) so as to improve operation without altering functionality." I take "operation" in that sense to include requiring less maintenance. If refactoring isn't providing benefit, it isn't refactoring, or it's a bad job of refactoring.
@@wdavid3116 I can tell that you got your software engineering degree on the internet and not from a good university. ;-)
Yahoo, that's well - take care. NDC :)
I saw this in the recommendations of the Primeagen
Pascal is the stuff of nightmares. No, thank you, I don't care what problems it fixes.
"If you're using an integer to count through a list in a language developed after 1980, you're doing something wrong." Yes, you're using a language developed after 1958. :)
You can type what you like, but the machine will be using that same integer, regardless.
@@ernststravoblofeld That is not even close to the truth. In C++ and C dialects with proper type libraries, a uint8_t is very, very different from a long. It has a different memory layout and completely different runtime behavior, numerically and especially in cases where it's used as an array index. A seasoned professional will never use plain char, int, long, etc. but will always explicitly declare the width of these variables.
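A tiny demonstration of the width difference (an illustration of the claim above, using the fixed-width types from <cstdint>):

```cpp
#include <cstdint>
#include <cstdio>

int main() {
    std::uint8_t small = 255;  // exactly 8 bits, by definition
    long big = 255;            // at least 32 bits, platform-dependent
    ++small;                   // wraps around to 0 (modular arithmetic)
    ++big;                     // becomes 256
    std::printf("%u %ld\n", static_cast<unsigned>(small), big);  // prints: 0 256
    std::printf("%zu %zu\n", sizeof small, sizeof big);          // e.g.: 1 8
    return 0;
}
```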
@@lepidoptera9337 It's all going in the same register. You can buzzword all you like.
@@ernststravoblofeld Why are you telling me that you have CS DK? I really don't give a frell. :-)
@@lepidoptera9337 You keep yammering on a bunch of nonsense, so apparently you want some correction.
2:30 document your code
In Roman numerals, this year is MMXXII - interesting. 🙂
MMM, Roman numerals!
Also, in _actual_ Roman numerals, four is written as "IIII", not "IV". That's a rather modern invention. Which makes sense, of course - adding numbers in Roman numerals is rather simple - just concatenate the numerals together and shorten as needed. The modern "Roman" numerals are clearly not something actual people ever used as numbers in any practical way. Heck, it's said the BBC started to show the "when was this show produced" numbers in "Roman" numerals so that people wouldn't notice they're watching reruns :D
Needless to say, this makes code that converts between Roman numerals and binary numbers rather simpler than Kevlin's example.
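For example, a purely additive converter really is just a sum over the symbols (a toy sketch of my own, not Kevlin's code):

```cpp
#include <string>

// Purely additive Roman numerals ("IIII" style): just add up the symbols.
int fromAdditiveRoman(const std::string& s) {
    int total = 0;
    for (char c : s) {
        switch (c) {
            case 'I': total += 1;    break;
            case 'V': total += 5;    break;
            case 'X': total += 10;   break;
            case 'L': total += 50;   break;
            case 'C': total += 100;  break;
            case 'D': total += 500;  break;
            case 'M': total += 1000; break;
        }
    }
    return total;  // e.g. "IIII" -> 4, "MMXXII" -> 2022
}
```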
@@LuaanTi Apparently the Romans used subtractive notation, but inconsistently. For example, one of the Colosseum's entrances was numbered XLIIII.
Bro how can you give a talk with all that noise? I’m going insane
funny I didn’t even notice
sadly commitstrip seems dead
Refactoring is a pain in the a.... It happens because the architect messed up, and what it really is, in practice, is the willful breaking of a working system for the sake of correcting architectural failure or, worse, perceived "ugliness". If it can be avoided (at almost any cost), then it should be.
You made self-contradicting statements.
"the willful breaking of a working system for sake of correcting architectural failure". By definition can architectural failure be working ? And if it is working now, will it also be forever and ever ? How do you even define architectural failure ? Beside refactoring is pretty much defined as maintaining functionality as it is. If it is breaking anything, it's not refactoring.
"If it can be avoided (at almost any cost), then it should be." And it is avoided pretty much at the cost of refactoring. Nobody writes perfect code at first. Worst than that, there's no such thing as perfect specifications either. In fact there's no such thing thing as perfect code. It might be sufficient at one point, but requirements and constraints change. That's the soft part of software. Code will have to evolve, and when it needs to, it better be easy to understand and change.
@@willyjacquet4436 I agree with your last part: a lot of refactoring has to be done in OOP to add functionality that wasn't required when the initial architectural decisions were made. So the system may be working, it just doesn't do everything that is required of it for the new version.
My point is that if we have to recompile software, then there is always a potential for breaking it. Just because the code is theoretically equivalent doesn't mean that the runtime behaves the same. Bugs that are due to memory layout (like buffer overflows) may show up in one version but not in the other. Memory layout can also introduce hard-to-track-down performance issues because of cache misses: I had to deal with a piece of code in my PhD that ran ten times faster simply by swapping two inner loops... the guy who wrote it was a newbie who knew nothing about cache misses. That isn't even refactoring. That's just a code transformation that even modern compilers can do on their own. Refactoring can cause similar issues with cache misses on the program segment.
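Schematically, the kind of loop swap I mean (a generic illustration, not the actual code from my PhD):

```cpp
// Row-major 2D array: a[i][j] and a[i][j+1] are adjacent in memory.
constexpr int N = 1024;
static double a[N][N];

double sumColumnMajor() {  // cache-hostile: jumps N doubles per step
    double sum = 0.0;
    for (int j = 0; j < N; ++j)
        for (int i = 0; i < N; ++i)
            sum += a[i][j];
    return sum;
}

double sumRowMajor() {     // cache-friendly: unit stride
    double sum = 0.0;
    for (int i = 0; i < N; ++i)
        for (int j = 0; j < N; ++j)
            sum += a[i][j];
    return sum;
}
```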
I am not advocating that one can get around architectural changes one way or another, but refactoring is not the cure. It's merely a symptom of the problem and in OOP specifically it seems to be a permanent malaise.