I love it every time Dave makes a "what not to do" video because I know he's going to describe all my workplaces to a T.
New features at the expense of quality is a big one. New features are shiny and attract attention; quality, sadly, does not. That makes this an uphill battle, but a worthwhile endeavour.
I've worked in places that demand forward progress at the expense of everything.
You can also take this too far. I've seen engineers put in charge who plan, hedge, and quibble over quality while the investment and time disappear into nothing, with little to show for their efforts.
Good software isn't written slowly. But running too fast is reckless and results in garbage.
This is usually a misunderstanding between business and engineering. Business always assumes that engineering will implement solutions with quality; business will never come with a requirement to build a software product poorly. They will always ask for new features, because that's what brings revenue. It is engineering's responsibility to implement them with a level of quality that ensures new features can be added at a reasonable cost, and that each incremental feature doesn't cost as much as the original. If the quality is not there, it is an engineering failure, not a business one.
Developers shouldn't give their managers / product owners etc. a choice between features and quality. That is not something they are qualified to micro-manage. This is one reason why the task list should be off-limits to these people, and why "technical debt" tasks shouldn't be on the story board.
@@TheEvertw managers: "we were selling the product to a customer, and promised it could do X, implement X in the next 2 weeks, so we can make a sale"
@@defeqel6537 Product Owner to Managers: your over-selling isn't my priority.
PO's have ONE job: isolate the dev team from this kind of sh!t.
The biggest project in years at my company had a huge, secretive team that rush-developed a large, complex product, working 7 days a week for 18 months. When they wanted to go live, they just told us a date and refused to discuss their implementation plan. They were told a flat no. Once they were forced to give us access to their implementation plan, we discovered, unsurprisingly, that it was deeply flawed and had zero chance of succeeding. It took 3 more months to get it into an acceptable state. Once implemented, almost none of our millions of customers used it, despite masses of advertising. Despite the lack of use, it had a big impact on our production environment. After a few months it was decided to completely re-engineer it. Two years later the new version was rolled out. Two years after that, the whole thing was quietly scrapped.
Excellent examples. I would also add "Not knowing when to stop". I've seen many cases where the software is done, but we keep tinkering, adding useless features, just to have a new release.
Features vs. quality is a good point. Quality does not just mean the absence of defects; it means the product or service meets requirements, including features, but also reliability, adaptability, compliance, etc. Quality is not only about user requirements; it should include other aspects and stakeholder perspectives.
Thanks, Dave, great video! You’ve validated decades of my career as a developer and entrepreneur delivering secure, quality software. Your data and insights align with my experience, and they’ll be invaluable in helping me advocate for quality over quantity in those critical discussions. Appreciate your continued contributions!
Building for automated regression testing is fine for greenfield projects, but often can't be done for legacy, non-browser-based software.
As a guy who uses a lot of obscure and cut features, the world sucks a lot more for me without them, and we are seeing a huge decline in those "extra features". Sometimes those are what give something its staying power, even if it's not immediately recognisable.
1. Not building what users want
2. Big teams
3. Delaying feedback/working in big steps
4. Features over quality
5. Manual Regression Testing
I know this video is aimed more at programmers, but I think there is one fundamental mistake: building a program without understanding the context and what it's all about, just churning out features. I work at a large shipbuilding company, and for my own needs I built a management system. The company decided to adopt it, so I started working with professional programmers. I explained everything about how it works, but it was pointless; they just wanted to know what the action is if you push button X. Now, 5 years later, the thing is still running as it did initially.
I find listening to your videos calming and thought provoking. Continual learning, making micro changes daily or weekly has become my approach to self improvement. Thank you.
Dave,
Thanks for the last 2 items and their associated points. It is a shame that American software companies don't follow Deming's advice more. The only thing I'd debate is automation: some companies/people think you need to automate everything (100% automation), instead of thinking about how to strategically pick and build what is needed to remove the redundant work. They don't think about how to get automated test execution to run in phases instead of all at once. Automation is about efficiency gains.
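As a sketch of that phasing idea, pytest markers are one common mechanism; the test bodies below are toy stand-ins, not real production checks:

```python
# Sketch only: phasing an automated suite with pytest markers, so a fast
# tier runs on every commit and the slow tier runs later (e.g. nightly).
# Register the markers in pytest.ini to avoid warnings:
#   [pytest]
#   markers =
#       smoke: fast checks, run on every commit
#       regression: slow checks, run in a later phase
import pytest

def add(a, b):  # toy stand-in for real production code
    return a + b

@pytest.mark.smoke
def test_add_fast():
    assert add(2, 2) == 4

@pytest.mark.regression
def test_add_exhaustive():  # imagine a slow end-to-end test here
    assert all(add(i, 1) == i + 1 for i in range(100_000))
```

Phase one is then `pytest -m smoke`; everything deferred runs later with `pytest -m "not smoke"`.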
In general I agree with all of these. But... my projects were all embedded controllers with dedicated hardware that had to be developed in parallel with the software. Also, there was almost always a significant safety aspect that had to be thoroughly tested before turning it loose on a customer. So throwing something together and putting it in front of a user to find out if it was what they wanted was not generally a practical option.
I love it. How do you find all those studies and pick out the interesting stuff? Great work compiling it all for us. Thanks!
Such a great mention of S. Cray's comment on the IBM memo. I come back to this memo quite often.
T.J. Watson's memo talks about the introduction of the CDC 6600. I once owned one of these: it was being thrown away by our company and I won it. I pulled out a stack of PCBs and left the rest. The PCBs were perfectly square and had a hole dead center; they are about 40x40 cm. (I recognised this love for elegant details, also seen at Apple, and something I do too.) I made a cube of 6 of these PCBs and hung it from the ceiling in my small attic office.
I’m burnt out working on half baked ideas.
Sounds like home
The Explorer pattern plays especially well in business-facing Low-Code solutions. Certain platforms in this space really allow developers to rapidly prototype and help business users leap ahead while avoiding large-scale planning.
The problem is what I call a feature-based promotion ladder: most promotions are awarded after completing a new project/feature. This creates a stupid incentive to "deliver" at great cost to the long-term viability of projects.
Most people care about putting a shiny new title on their resume, and if things go south, no problem; they just change the company/project they work for, and let the chips fall where they may.
I hadn't been as impressed by your shirts as the others so far, but now I need to recalibrate.
Chuck Moore, in his 1999 talk "1x Forth", said: "Don't leave openings in which you are going to insert code at some future date when the problem changes, because inevitably the problem will change in a way that you didn't anticipate. Whatever the cost, it's wasted. Don't anticipate; solve the problem you've got."
The problem is that many developers confuse good design with over-engineering, and poor design with "solving the problem you've got". For example, some developers think putting all your code in one class/function is good because it's the fastest and easiest way to solve the problem; to them, applying design principles such as SRP is over-engineering.
I agree; that is why I think the better way to grade "good design" is by ease of change. ALWAYS prefer code and systems that are easy to change. That means Modularity, Cohesion, Separation of Concerns, Abstraction, and managed Coupling.
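To make "grade by ease of change" concrete, here is a toy sketch (all names invented) where each concern can change without touching the others:

```python
# Toy example: separation of concerns so each part is easy to change.

def parse_orders(lines):
    """Parsing concern: text in, numbers out; knows nothing about totals or output."""
    return [float(line.strip()) for line in lines if line.strip()]

def total(amounts):
    """Business-rule concern: adding tax or discounts stays local to this function."""
    return sum(amounts)

def render(value):
    """Presentation concern: formatting changes never touch parsing or rules."""
    return f"Total: {value:.2f}"

if __name__ == "__main__":
    print(render(total(parse_orders(["10.50", "4.25", ""]))))  # Total: 14.75
```

The single-function version would work just as well today; the split earns its keep the first time one concern has to change independently of the others.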
Funny, sir, how we always seem to end up in an alliance-friendly bar come Unification Day
Lol, planners vs explorers is like scrum in my job vs agility
HOLY SH!T Sir David Farley turned out to be a fellow BROWNCOAT😮
Every CTO should watch this
I would love to hear how exactly you would implement some of this advice. How to collect fast feedback? Should we ask end users of the product to immediately test every little change? Is that practical? How to know for sure if a feature is really useless? Perhaps the tester thinks it is but it might turn out to be useful for many people?
We include the business directly in testing, explain the basics of what idea they had, then what we built to support it, then ask them to try it out. We use Microsoft Teams and in-channel comms for this as it is visible to everyone. If you asked for a feature and we built it that day and you didn't try it for 2 weeks, then your boss sees that. We can immediately bring this up to explain to leadership why we deprioritize certain projects.
@@JohnHall Wow, great example, thank you very much!
Surveying users is not the best, or the only, way to do it; you gather feedback by measuring something that defines success. Maybe up-time, latency, money earned, customers recruited, usage of a new version vs. an old one, and so on. For each feature, you define what measurement will demonstrate the success of that feature, and then you measure that in production. Many orgs do this on a fully automated basis, including Amazon, Netflix, Meta, etc.
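A minimal sketch of that idea, with an in-memory Counter standing in for a real metrics system (feature and event names here are invented):

```python
# Sketch: define a success measurement per feature, record it in production.
# A real system would ship these events to something like Prometheus/StatsD.
from collections import Counter

metrics = Counter()

def track(feature: str, event: str) -> None:
    metrics[(feature, event)] += 1

def completion_rate(feature: str) -> float:
    shown = metrics[(feature, "shown")]
    return metrics[(feature, "completed")] / shown if shown else 0.0

# Called from the code paths for each version of the feature:
track("checkout_v2", "shown")
track("checkout_v2", "completed")
track("checkout_v1", "shown")

print(completion_rate("checkout_v2"))  # 1.0
print(completion_rate("checkout_v1"))  # 0.0
```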
Great video Dave!
Bug at 6:33: Dave says "8 people or fewer" but text on screen says "< 8 people". That's an off-by-one error.
On part of the screen? Or Dave?
I wouldn't know; I haven't read the book.
"There are only two really hard problems in computer science: cache invalidation and naming things. And off by one errors."
About regression testing: software developers should do more manual testing and pay very close attention to what a system is doing; this is a skill that must be practiced. Especially under time pressure, developers move on to the next thing too quickly. Automated tests don't cover everything.
Intuitively I love the advice given on this channel. What I don't understand is the claim that a lot of the advice on this channel is based on science. I get that there are studies out there, but I also think they are hard or impossible to trust fully. They are often based on fairly subjective data, and they can never account for the huge number of variables in companies and software teams. They also rarely, if ever, run counterfactuals. That said, I agree with most of the advice intuitively and from experience.
Which is why getting a fast feedback is a good idea :)
I don't claim that the advice "is based on science" I say that the engineering discipline that I describe is based on scientific style rationalism, by which I mean the practice of science, not the findings.
The practice of science is based on some key ideas. Always assume that you can be, and almost certainly are, wrong. Work to find out how, and where you are wrong and try something new to fix it.
Make progress in small steps, and do your best to falsify, or validate each step (falsification is usually a more profound test).
Make progress as a series of experiments. Being experimental means having a theory about what you are doing and why, figuring out how you will determine whether your theory is sound before you begin the experiment, and controlling the variables enough that you will understand the results you get back from it.
There is a fair bit more, but that is what I mean by being "scientific". The only study that I am aware of that I believe has a decently strong claim to being scientifically defensible is the DORA study, described in the "Accelerate" book by Nicole Forsgren et al.
The other vitally important aspect of a more scientific approach is to use what David Deutsch calls "Good Explanations".
According to Deutsch a "Good Explanation" is...
1. *Hard to Vary*: If you can change parts of the explanation while still making it work, the explanation is not considered robust or deep. A good explanation has little flexibility in its structure; any change would render it inadequate or false.
2. *Not Merely Predictive*: A good explanation goes beyond mere prediction. Many theories or models can predict outcomes (e.g., using formulas or data), but a good explanation delves into why something is happening, in a way that is resistant to arbitrary alteration.
3. *Truth-Seeking*: It aims to accurately represent the reality of the phenomenon it explains, rather than just being a convenient or pragmatic model.
4. *Problem-Solving*: A good explanation not only fits existing data but also solves the problem it was created to address. It reduces the mysteries, clarifying why things happen the way they do.
@@ContinuousDelivery Oh thank you for the feedback, that's a really helpful answer and giving me food for thought. Will explore those resources you have mentioned as well!
Quoting Deming! I love it. Someone knows about the history of lean...
All these things you mention here should be common sense in the minds of managers and SWEs, and yet, I'm willing to bet that the vast majority of companies are doing most if not all of these things.
A four-man squad is the ideal team size. Or a pair of three-man squads, if you're marines.
Product manager performance reviews are based on features shipped. They are going to add features.
True, but that only means we need to educate them that that is a stupid idea, at least if they are only counting the rate of feature production sprint by sprint. That is not the sensible thing to optimise for; the sane response is to optimise for ongoing, maintainable throughput. Queuing theory tells us that if you keep a queue permanently full, you go at a slower rate overall. You need some slack in the system; in our case, that means time to do a good job on quality, to work to make our code easy to change, and so on. Without that, you go slower and slower until you stop, and many companies fall into this trap and find it next to impossible to change their software. I worked with a company once that hadn't released ANY software for over 5 years; as a result of this mistake, their code was in such a mess that you couldn't safely change it!
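The queuing-theory point can be made concrete with the textbook M/M/1 result: average time in system is W = 1/(μ − λ), which blows up as utilisation λ/μ approaches 1. The numbers below are purely illustrative:

```python
# M/M/1 queue: W = 1 / (mu - lambda). As utilisation approaches 100%,
# the time each task spends in the system grows without bound:
# the "keep the queue permanently full" trap.
service_rate = 10.0  # tasks the team can finish per week (illustrative)

for utilisation in (0.5, 0.8, 0.9, 0.95, 0.99):
    arrival_rate = utilisation * service_rate
    time_in_system = 1.0 / (service_rate - arrival_rate)
    print(f"{utilisation:.0%} busy -> {time_in_system:.2f} weeks per task")
# 50% -> 0.20, 80% -> 0.50, 90% -> 1.00, 95% -> 2.00, 99% -> 10.00
```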
Given that Microsoft's ideas include "boil the oceans to shoehorn regurgitative AI into every corner of the product", I'm quite in favour of Microsoft employees working on things that, in hindsight, don't meet the corporate goals.
Love the channel
Powershell. Is that on the list?
From a developer's perspective easily the most time-wasting impediments are non-voluntary meetings - especially the god-awful daily "stand-ups" (where everyone is either sitting down or remote) and where newbies feel obliged to extend the vapid tedium with some inane contribution.
I wonder if plumbers and electricians have these issues. Think im in the wrong job.
6. Watching this channel
Wow! Fantastic summary. Thanks Dave.
All for small teams but the trick is also having the right experts in that team. 1 expert with 7 monkeys is also bad.
7:04 - "It certainly says something about the rate of development."
No it doesn't. It's a terrible metric, like you said. You can only really say exactly what the metric says:
They've reached 100K lines, and nothing else.
Everything else you infer from this is just pulled out of your ass. I can give you countless examples of why that is the case:
- Some devs write a lot of comments
- Code style affects lines of code (longer lines vs short lines)
- Teams who refactor
- Copy pasted code
- Dead code
- Teams who do and do not write tests
I can keep going...
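To illustrate just the code-style point from that list: the two functions below behave identically under the same tests, yet differ several-fold in line count:

```python
# Same behaviour, very different LOC: one reason raw line counts say
# little about the rate of real development.

def evens_terse(numbers):
    return [n for n in numbers if n % 2 == 0]

def evens_verbose(numbers):
    # Identical behaviour, written with an explicit accumulator.
    result = []
    for n in numbers:
        remainder = n % 2
        is_even = remainder == 0
        if is_even:
            result.append(n)
    return result

assert evens_terse([1, 2, 3, 4]) == evens_verbose([1, 2, 3, 4]) == [2, 4]
```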
The most useful thing a "100 kloc written in some period" metric tells you is how fast build times are growing, but even that's problematic, because the dependency graph (especially cycles) greatly impacts the build time.
I've observed periods of rapid growth in LOC, followed by refactoring/elimination where the size stays fairly stable while the functionality changes markedly, for better and/or worse.
Honestly... I'm gonna stop watching this channel. It just makes me depressed when I go to work 😅
😂
1. SCRUM 2. Micro services 3. Meetings 4. Postmortems 5. ...
5. Fake productivity: demos for the sake of looking good to others (usually higher-ups), causing energy to be focused on PowerPoint and presentation prep rather than getting a quality job done on time.
@@iaminterestedineverything 6. Estimation poker using feelings and story points
Why are postmortems on there?
@@MohamedSamyAlRabbani adding more things to lists than requested? 😉
@@iaminterestedineverything The Agile mindset is very inspiring.
It's down to people. Most of my career pain points have been in dealing with leaders and decision makers who are incompetent and in turn are liabilities to the business, the team, or themselves.
100%.
Yep, anecdotal of course, and a bit of a rant here... but I went from a medium-sized company with some decent processes and experienced leaders to a scaleup. The scaleup people are all incredibly academic (nearly all have master's degrees, and some have PhDs); I don't, even though I excel here. It is far, far worse than the medium-sized company, and I put it down to the leadership/decision makers.

They plan absolutely everything ahead, as in they try to get the absolute maximum detail on absolutely everything and then execute, rather than ever doing any kind of investigation or exploration. They also barely ever test systems, even though the whole codebase is very brittle. They often prioritise new tech/services over tried-and-tested solutions. And they push features over quality constantly...

The result? The software is a mess; it is garbage. I feel they are essentially incompetent when it comes to software development, because I know the people at the medium-sized company would never have run into half of the issues we have here, and the quality would be far, far higher. These people are very "intelligent", but they are incredibly impractical in their decision making, not pragmatic at all, so they don't put in processes or organise things properly. It really is like the leaders/decision makers are all chaotic thinkers, whereas at the medium-sized company people would make sure there was organisation and a process to things. I know things will always be slightly different in a more mature company, but I'm talking about the psychology of people and how they decide to lead or make decisions, and some people are really, really bad at this.
Initial commit!!
By the way, making an empty initial commit in git is a good practice.
@@alexeykrylov9995 with a readme!
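(For reference, the command for the empty initial commit mentioned above is `git commit --allow-empty -m "initial commit"`; the `--allow-empty` flag permits a commit with no staged changes.)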
Why was SCRUM not mentioned? 😂
I could get to 100'000 lines of code in one month. Am I as effective as an 8-person team?
so can I if I copy and paste instead of using `for` loops
AI features.
3:10 - Who's that a photo of? I can say for sure it's not Steve Jobs 🤔🤣 Made with AI?