What if we measured MBA productivity? We’ll make the KPI “number of words it takes to explain something” or maybe “fluffy buzzwords per article”
Judging by the internal newsletter, those sound like KPIs for our marketing department 😂
The hypocrisy is astounding
Complete lack of self-awareness
How many meetings could have been an email? How many just-in-time decisions have added to the backlog and long-tail maintenance?
Ten motivational minutes wasted per meeting. Yeah boy, shut the fuck up and let me work with the designer to make functional choices
It just needs a catchy acronym. FBpA? It'll be a success!
It's really easy to track productivity. The more developers complain about what they're doing, the more productive they are.
"Everything is great!" - Suspicious
"I want to kill whoever made this" - Very productive
"Who the hell wrote this pasta? They will be made to pay!" - Frustratingly productive.
Got to have some dog in ya to code efficiently
The best one: "What total idiot implemented this huge pile of bullshit!?! He should be fired immediately!" - opens Annotate - "Well... when did I do that myself? Must have been in my early days..." - looks at the timestamp - "...uhh, last week? As I said, my early days..."
That's so on point 🤣
What if I get let go 6 months in, after working on our main deploy app and thinking "whoever made this is... insane"? I'm out of their league, right?
Nothing says "my company treats engineers like shit" quite like hiring McKinsey to do a "productivity audit".
Hahaha! Yes!
I shuddered. And I work for a Consultancy myself LOL.
This sounds like one of those "consultants" who are paid insane amounts of money to help analyze your workspace and then leave without doing anything, but management is happy because now they have some PDFs with pie charts.
Deflecting blame
What do you mean? They have increased deployment frequency.
Commit: "add NOP to remove later"
@@ShrirajHegde Smart people, when given these metrics, will game the system.
It's human nature.
Why don't those folks know that?
It's a pretty basic concept.
This article is living proof that McKinsey can be successfully replaced by McGPT. Most likely with tremendous added value, stellar productivity enhancements, outperforming tracking metrics, and 420% ROI for all shareholders. Sky-high ROI. So much ROI that the global economy will get ROI just by up-optimizing the consulting processes.
They're a consultancy. That's why they "need" to measure performance: so they know which underperforming underperformer they need to fire due to making one presentation a year instead of once every 4 months.
Never want to measure their own performance
Spoiler alert: it’s abysmal
They're a consultancy. That means you hire them when you want to do something, and when shit happens, you blame it on them. Isn't that their bread and butter?
@@FranzAllanSee that’s the real game they are in
CYA insurance for a hefty fee
Let them take the blame and not you, neat grift
No one ever got fired hiring McKinsey
Will that ever change?
Working with these companies is sickening. But you're right in that this is their actual business model, passing the buck. Allows management to justify whatever they want and blame someone else.
@@orbatos the traditional boomer American way
Currently in collapse mode
Yes, you can measure Art Productivity!!!
Every creative industry begins to weep...
Yes, but once you make a metric a goal, you get the metric rather than the real thing the metric used to correspond to.
There’s a name for this law
@@ChrisAthanas Goodhart's law - most often phrased as "When a measure becomes a target, it ceases to be a good measure"
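To make that concrete, here is a toy sketch in Python with entirely made-up numbers, using lines of code as the example metric: once the metric becomes the target, it inflates while the output it used to proxy for does not.

```python
# Toy Goodhart's-law demo; all numbers are hypothetical.
# While LOC is merely observed it loosely tracks output; once LOC becomes
# the target, it balloons while the actual output does not.

observed = [(800, 4), (950, 5)]       # (lines_of_code, features_shipped) per week
targeted = [(4200, 4), (6100, 3)]     # same team after LOC-based bonuses were introduced

def loc_per_feature(weeks):
    return [round(loc / features) for loc, features in weeks]

print(loc_per_feature(observed))  # [200, 190]   -> metric roughly tracks the work
print(loc_per_feature(targeted))  # [1050, 2033] -> metric exploded, output didn't
```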
We can’t measure what we care about, so we care about what we measure
For anyone wondering - this is what happens when you lose your soul to the man. You end up writing articles like this.
Prime is still ignoring our pleas for a video about how he has managed to escape seemingly silly scrum things like sprints and story points 😢
We need more likes for this comment
I love sprints. There wasn't a single sprint where everything was completed, or where the tasks in the sprint didn't change along the way, but it sure gives the project managers the feeling that they are doing something, which is nice
Trust. Hire good people. Let teams self-organize. Establish a rapport between devs and user/clients. => Successful team.
This is why we have so many roles in tech, and everyone is important to increasing the productivity of a team. Take the scrum master: most scrum masters are useless, but the role itself is really important, and if they are not doing their job, someone else usually has to step up to be the glue. A good scrum master makes all the difference. It's good to have tools and metrics, but ultimately you need to identify gaps and make sure all the different roles in your SDLC are filled properly, otherwise things don't go smoothly and the momentum is (gradually) destroyed. It's not about the metrics, it's about the gaps and creases that you need to smooth out.
Cargo cults are also efficient
Man the coconut radio tower!
step 1: choose the wrong performance metric (spoiler: all of them are wrong)
step 2: find someone to blame
step 3: fire the wrong person every single time, because the problem is probably not at the individual level
step 4: repeat step 3 enough times to thoroughly fuck up the business
step 5: find a new company to tear down with bullshit metrics
REPEAT
seems like good business to me haha
Why do companies think they are entitled to 100% of my productivity anyways?
It’s a serious question
I work at 60% of my potential and am praised for my achievements. It's a win-win situation.
If I'm getting better, it's not to give more to my employer but to do the same work in fewer hours. I'm not putting more value into the company unless I get a raise.
Because they pay you 100% of your salary every month?
@@EdwinMartin But if they employed you in the first place, it means you generate more value for them than what they pay you
@@mek101whatif7 and? That explains literally every single employee. It's not an excuse to slack.
Goodhart's Law: Once a metric is used as a basis for decision-making or control, it loses its reliability as an accurate measure. 'nuff said
? What else would a metric be used in a business context for?
@TheSaintsVEVO Monitoring, correlations, and early indicators for other metrics.
That is different from taking the metric itself as a factor in decision-making.
E.g., if lines added is a metric tied to a business incentive, you get garbage lines written to meet the metric. If you instead use it as an indicator that someone is working on a project at all, that's fine.
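A minimal sketch of that distinction (Python, hypothetical numbers and author names): the metric is only allowed to answer "is anything happening on this project at all?", never "who should get the bonus?".

```python
# Sketch: "lines added" used purely as an activity indicator, never as a
# score to rank or reward people by. All data below is made up.

weekly_lines_added = {"alice": 340, "bob": 0, "carol": 12}

def activity_flags(lines_by_author: dict[str, int]) -> dict[str, bool]:
    # The only question this metric answers: is anyone touching the project?
    return {author: lines > 0 for author, lines in lines_by_author.items()}

print(activity_flags(weekly_lines_added))
# {'alice': True, 'bob': False, 'carol': True}
# A False is a prompt to go ask why (blocked? pulled onto something else?),
# not an input to a performance review.
```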
A software architect at my workplace literally used Copilot to write 3 entire classes. And all I could think about was how I would have to fix the issues in those classes.
Actually, as an architect myself, I can explain this one. We had plans on paper, probably rough code already, but actually writing out the classes and types for those is extremely tedious work, which we will evade whenever we can to keep from burning out on worthless labor. So 3 classes is already tame. When I bootstrapped a project a while back, I had the entire project's data classes generated, so all I needed to do was check whether they were correct. That saved me a lot of sanity so I could work on some actual engineering.
@@MIO9_sh I understand what you are saying. But what I saw was him actively letting Copilot run and do whatever, without any guidance, rough code, or even a plain-text description of how the functions should work.
@@eshanchatterjee1250 whoa then that's a great misuse there
This gets back to a fundamental problem with hierarchy - everything must be legible to the higher levels of the structure. Everything must be boiled down, flattened, and reduced to numbers or bite-sized pieces of information that managers, executives, and shareholders can understand. Everything that doesn't fit into their spreadsheets and PowerPoints gets erased. The question is not "can/should stuff get measured" - it's who is measuring what, and for what purpose. This conversation often blows past the fact that individual developers and teams probably understand their own productivity far better and in much deeper ways than their superiors ever will. We jump straight to debating what types of reports should be generated for the higher-ups to see without stopping to ask why those higher-ups are so out of touch in the first place.
Also you can change the filter settings on a site-by-site basis under the "More" tab of Dark Reader. (14:27)
I think this article was written by chat gpt
Guaranteed it was involved
prime gets jippitied once again
The serendipity of prime reading an article by mckinsey and getting tilted after I was forced to do 3 hours of mckinsey "improvement" courses today
I love the irony that McKinsey hasn't paid its web devs enough to be able to cope with dark mode correctly. Maybe it wasn't important to their customer satisfaction metrics, though?
They should hire themselves to have a look at it and maybe in the end they will fire themselves and all lose their jobs.
"You thought copilot was your new slave, but it's actually your new taskmaster" --unfun mid-level manager
Throwing out the baby with the bathwater when you're dismissing the DORA metrics, imho. Deployment frequency on its own is indeed senseless as a metric. But combined with lead time, change failure rate, and mean time to recovery, they become trustworthy metrics. They did the research.
McKinsey claiming they can measure individual performance while mentioning dora metrics is the dumbest thing I’ve ever read.
They are useful. But what McKinsey is carefully avoiding is the fact that the DORA metrics are a classification of these results, not a precise measurement. E.g., you're probably a low-performance team if you're deploying to production every 6 months and a high-performance team if you deploy every few days. All your metrics need to agree to ensure you're not trying to game the system. But it is otherwise a good indicator based on the research done.
What it cannot tell you is if you're "better" because you reduced your deployments from every 6 months to every 5.5 months. You might have actually made things significantly worse. (Go big or go home, MOFOs.) Your numbers were so bad to begin with and the change so insignificant that it is difficult to measure an actual change occurring.
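Roughly what that classification looks like as code; the bucket cutoffs below approximate the published DORA tiers but vary by report year, so treat the exact thresholds as assumptions.

```python
# Rough sketch of the classification idea: DORA-style buckets, not a
# precise score. Exact thresholds vary by report year, so treat these
# cutoffs as assumptions.

def classify_deploy_frequency(deploys_per_year: float) -> str:
    if deploys_per_year >= 365:   # on demand, roughly daily or more
        return "elite"
    if deploys_per_year >= 52:    # at least weekly
        return "high"
    if deploys_per_year >= 12:    # at least monthly
        return "medium"
    return "low"                  # every six months or less often

print(classify_deploy_frequency(2))    # every 6 months -> low
print(classify_deploy_frequency(120))  # every few days -> high
# Going from every 6 months to every 5.5 months (~2.2/year) still prints
# "low": the classification can't see that kind of "improvement".
```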
Finally a sensible reply amid all the anti-metric/anti-consultancy vitriol. Metrics are just tools and can be used well or poorly. Sure, the urge to chase metrics is strong, and exacerbated by the snake oil salesmen of the consultancy world, but to simply dismiss them because of that is indeed throwing the baby out with the bathwater. Furthermore, I believe the earnest search for developer productivity metrics is a virtuous one. Having objective measures to improve against is a foundation of engineering progress. And I worry that rants like these may dissuade developers from being open to new ideas for metrics in the future just from a dogmatic gut reaction.
Those metrics do not take complexity into account.
Changing text or a color is easy to do. Do a deployment for each small change and you get a high score.
A team that changes the core of a large 24/7 critical system will take a much longer time to get their change into production and have a weak score.
Those metrics don't take that into account.
And then you have external actors that influence the playing field, like architects that require a certain way of working that limits productivity, accountants that prevent the purchase of helpful tooling, or a contract with a specific supplier that is not cooperative. These are not visible in those metrics.
@@SPeeSimon you are correct that they don’t take complexity into account, directly. Indirectly, they do: an architect constraining for no good reason increases lead time.
You make a good point that these kinds of issues, which are beyond a team's field of influence, are more difficult to expose with the 4 basic DORA metrics. That's why you should also look at all the other metrics that influence the 4 when you're stuck on local optimizations that aren't getting you improvements.
You can bet your a$$ that none of the managers will implement it with the proper understanding; they'll just add pointless overhead and decrease everyone's productivity.
I think you know your company sucks at productivity when they have the audacity to pay someone to make those types of metrics up xD
The rule I would love to see is for a team of 10-20 people to always have one person working on tooling/enhancement of the project (not a dedicated position, but more like: at any point in time, there is one person doing this).
Like setting up tools for code checking, adding abstractions for testing, making a script to automate something, etc. This could be a good rule of thumb for measuring how understaffed your team is (or maybe overstaffed).
The problem is not just that such an arrogant company is creating stupid and misleading metrics; it is equally, or even more, on their clients for signing up for this shit and then ultimately destroying their own organization from within.
We need management productivity measurement. You can't measure software productivity, because the actual development (programming) is a tiny part of solving an issue. You can't measure finding solutions and refactoring other people's crap, let alone creativity, which is what engineering is.
6:15 I've caught so much shit from MBAs for saying "ackchyually, what you need here is the median, not the average" whenever they file a JIRA ticket asking for some kind of pie chart with averages of some obviously skewed data set.
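For anyone wondering why the median is the right call there, a tiny example with made-up cycle times where one stuck ticket drags the average way up:

```python
# Hypothetical ticket cycle times in days; one pathological ticket skews
# the mean while the median still describes the typical case.
import statistics

cycle_times_days = [1, 1, 2, 2, 3, 3, 4, 45]  # one ticket got stuck for 45 days

print(statistics.mean(cycle_times_days))    # 7.625 -> "tickets take over a week!"
print(statistics.median(cycle_times_days))  # 2.5   -> typical ticket is done in 2-3 days
```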
Productivity is nice, but how do you measure their dicts in bytes?
These are the same metrics we’ve been trying and failing to measure for years, just with a new acronym to throw out in meetings to make people think you’re onto something new that will change the organisation.
The same company probably recommends low-code RPA tools to big corporates… would love to see Prime losing his mind to one of those articles… would also love to see developers are humans to cover the topic
"I haven't put down story points in ten years"
Wtf?! Story points might be the dumbest idea ever but I have yet to work at a company that did not have this in some form or fashion. I need to work at Netflix...
I love reciting the Manifesto for Agile Software Development when I'm alone, not so much because it sounds pleasant but because it's simple, pragmatic, and fully opposed to this article.
When I get a job, I imagine this scenario in my head where they say
"First, I think we should separate our roles"
"MANIFESTO FOR AGILE SOFTWARE DEVELOPMENT, WE IMPROVE AT DEVELOPING SOFTWARE BY ..."
[bonk]
I'm not sure it's a good idea, though. I've got to do it at least once.
I wonder if inserting social media garbage into the DOM whenever the user selects text is a good indicator of the underlying article being garbage as well.
Drives me nuts. Though I guess it’s the same bell curve graph situation from the video, where the normies in the middle of the graph think it’s a great feature to have in a fuckin blog
@@TheSaintsVEVO Another way they break established UI behavior is by injecting garbage into the clipboard in addition to the selected text when copying. Microsoft Teams does this. It's simply user-hostile software, an enemy of productivity.
@@salvatoreshiggerino6810 oh BROTHER, tell me about it. Shitty ass MS teams
20-30% fewer reported defects. Seems these guys broke their reporting.
More like they demotivated their entire org and now everyone has stopped shipping code fast
Counting lines of code, obviously. 🥳
You actually can measure them. We do it at work, but we use a point system from agile development. It works very well, once you are able to define the mean value.
Irrespective of that fact, we use it to measure team performance, not individual performance.
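A minimal sketch of that kind of team-level tracking (sprint numbers are hypothetical): compare the team's latest sprint against its own running mean, and never break it down per person.

```python
# Sketch of team-level velocity tracking with made-up sprint data.
import statistics

completed_points_per_sprint = [21, 34, 26, 30, 28, 25]

baseline = statistics.mean(completed_points_per_sprint[:-1])  # mean of earlier sprints
latest = completed_points_per_sprint[-1]

print(f"team baseline: {baseline:.1f} points/sprint, latest sprint: {latest}")
# Useful as a planning signal for the team; meaningless for comparing
# individuals or different teams, since points aren't calibrated across them.
```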
I worked as a manager/coordinator/tech lead/devops (you know the drill, right?) at a company (~300 employees) that decided to buy the source code of their ERP because of the number of customizations they asked for (needed!?). The constant battle with the higher-ups was showing my team's productivity; those guys were used to dealing with salespeople, where you either sell the amount you're asked to or you're fired (not sure that's fair either).
Long story short, it was cool to work there for other reasons, but I ended up accepting a job elsewhere.
How can we trust what Prime has to say about this? He spends all day on Netflix when he's at work.
My own productivity measurement:
10pm-4am: Extremely productive, 90% of progress but unreadable code. JDSL but robust.
Literally any other time: 700x slower progress but human-readable code that breaks in the funniest ways that only 1am-me can fix with a too-genius-to-be-understood-by-mere-mortals solution
If anyone is looking: I'm accidentally hosing that prod
I went to the site and scrolled WAAAAAAY down to the bottom. No comments section. Right. Any media outlet that is afraid to have people call them stupid on their own site deserves to be called stupid.
We need to spend more time on administrative bullshit so that we can be more productive! Sometimes, when clients implement these micromanagement tools, they increase the administrative overhead to such a degree that I spend maybe 10% of the entire time actually delivering something. The rest is meetings, communication, rewriting docs, redesigning stuff due to changes, adding story points consumed per story/epic/sprint to "keep track on a daily basis", demos of half-assed features that will be changed tomorrow, writing test scripts for uninitiated testers who have never seen the system...
The team going "this guy ain't cutting it" metric is probably the best individual measure there is!
Set goals for teams. If they are inching closer to those goals, performance is good. If not, let people experiment. That's it.
Sounds like there are HUGE productivity gains possible for this company just by firing the MBA staff who only do these kinds of bullshit metrics and hiring x more developers for the same money :D
They should consider measuring the productivity of whoever the f wrote this never-ending foreplay, jeez...
Been a developer for 40 years in three countries and four very different industries ... never heard of DORA
Let the management write the specs and unit tests, and we'll do the work to turn everything green.
Better yet: software engineers need to/should leave companies like that and work as contractors with clear-cut boundaries and responsibilities, and fundamentally sell their service. Dollar metrics: simple, clear-cut.
What about buzzwords from the management and the legal team?
I think Claude (Anthropic's AI assistant) is pretty excellent when you give it a pattern for an operation that needs to be performed on a decent-sized data set. If you have a format for a set of functions you need, it can generate them for you without having to write a code-gen algorithm.
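Roughly what that workflow looks like, as a sketch only. It assumes the anthropic Python SDK with an API key in the environment; the model name, field list, and validator pattern are placeholders of mine, not anything from the comment.

```python
# Sketch: generate a family of similar functions from a pattern instead of
# hand-writing a code generator. Assumes the `anthropic` SDK is installed
# and ANTHROPIC_API_KEY is set; the model name is a guess and may need updating.
import anthropic

FIELDS = ["first_name", "last_name", "email", "signup_date"]  # hypothetical

PROMPT = f"""Here is the pattern I use for one field:

def validate_first_name(record: dict) -> list[str]:
    errors = []
    if not record.get("first_name"):
        errors.append("first_name is required")
    return errors

Write the same style of validator for each of these fields: {FIELDS[1:]}.
Return only Python code."""

client = anthropic.Anthropic()
reply = client.messages.create(
    model="claude-3-5-sonnet-latest",
    max_tokens=1024,
    messages=[{"role": "user", "content": PROMPT}],
)
print(reply.content[0].text)  # still review the output before committing it
```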
I quit a job over these and didn't even know it was a thing. They also tracked nonsensical metrics that were trivial to fake, like number of deploys, "contributions" (a commit and a comment would both count as 1), etc. It was batshit insane, to the point that multiple people were taking unpaid leave for their mental health while still being brainwashed into thinking it was all great.
I love how he spent 18 minutes reading this and didn't even get to the "Getting Started" section. This felt like me trying to hit the word limit on my university paper because I did it at the last second. Absolutely useless article.
this is such a great way to state what happened
The amount of SEO'ing in this article shows how interested their core audience (Top 500 C-suite) is in this topic - grim, would avoid like the plague
Measuring productivity is often just a way for middle management to justify their job.
"Do we hire ten engineers to check on the ten engineers that do the work?" YES! That's the whole point. It's McKinsey, a CONSULTANCY company. They sell hours. They've got just the right people to do that for you. So that if you fuck up and everything falls apart you can say "Hey, I could not possibly have done more, I even hired McKinsey to get to the bottom of this gridlocked mess. We need new developers."
This is good for job rotation, folks. If we don't leave because we hate the toxicity, someone will fire us based on pure science.
I feel like this whole thing was written by AI. I guess I would just review git commits and see how big the changes have been over time.
Every metric can and will be abused. Tickets done, lines of code added, lines of code removed, whatever.
I'd love for someone to measure CEOs' work... they make some dumb decisions which can crush the company... maybe they need to improve as well.
There is a measurement for CEOs, it is called profits.
Copilot for testing is great! Just saying.
See, ya know ya done goofed when Prime says "I'm not even mad, I just think that the best thing you could contribute to the industry is an object lesson."
If you do it correctly, ChatGPT can generate maintainable code. For one, give it sample code that meets your style preferences to set the context.
60% point improvement in customer satisfaction? Sounds like sales got their s$#t in order and stopped promising things to customers that the product does not do...
How do we know if engineers spend their time on activities that truly drive value? Easy, the product team have researched and planned out functionality that has a good ROI before it even gets to engineers...
9:31 That first DORA sounded exactly like Steve Carell
Aren't McKinsey consultants famous even among consultants for being terrible?
This was hilarious; the outrage over the MBA speak on the page was funny simply because other engineers think the same thing. I guess the DORA metrics were so good and so accurate that GitHub and MS only needed to add an entirely new framework of metrics on top of DORA to make it more effective.
This entire space is occupied by efficiency consultants that have never built anything in their lives.
Please, do a review of the Accelerate book by Forsgren, Humble and Kim
Yup, I did read that book; it's a really nice book, and it resonates with what Prime said.
Wow this was even worse than I was expecting
Okay. Here's how it works. You tell software engineers what you measure and how you measure it, and they will adapt to that measurement, and you will get numbers that improve the measurement, even if it costs you something you didn't think to measure.
Yep, that's how it goes. Sergey Bubka springs to mind, the legendary pole vaulter who had a contract with Nike which included a $50k bonus every time he broke the world record so, naturally, he would only have the bar raised 1 cm for every attempt. During his career the WR went from 5.88m to 6.14m.
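Taking the comment's figures at face value, the incentive math is easy to check:

```python
# Back-of-the-envelope using the figures in the comment above
# (5.88m -> 6.14m in 1 cm increments, $50k per world record).
records = round((6.14 - 5.88) / 0.01)   # 26 separate 1 cm world records
print(f"{records} records x $50k = ${records * 50_000:,}")  # 26 records x $50k = $1,300,000
```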
The author of the article was measuring its quality by the number of words.
It's easiest to think of the problem in the case of a legacy service maintained by a single developer. Is he productive? No one knows. Is he good? Dunno. You can't measure it in LOC; maybe his single-line bugfix for the month would take anyone else weeks to figure out and verify. Even peer feedback is missing much of the picture. All you can do is improve the macro environment: SDLC, tooling, dev feedback, customer feedback.
This is a consulting firm, and this is an advertisement for their product, which costs a ton of money, so it has to sound almost like magic knowledge.
I was like "lol, he invented his own loops to sound fancy. I'm sure he could make a pointless graph as well." And then...
That article was 95% word salad!
Lost interest around the third time they didn't get to the point.
I literally had to quit, myself.
"I have no idea how you do the things you're paid to do. But I'm going to track and score how well you do it." - Big Brain MBA
OMG I was wondering what the heck that unlabeled table of DORASPACE was. LMAO.
Deployment frequency alone is stupid. Deployment frequency controlled by the rate of failure brings quality. And guess what, the rate of failure is part of the Dora metrics. In order for the Dora metrics to work, you must optimize all of them.
As for cache busting, you don't necessarily bust the cache each time you deploy.
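A minimal sketch of "deployment frequency controlled by the rate of failure": only credit the frequency when the change failure rate stays healthy. The 15% threshold is invented for illustration, not an official DORA cutoff.

```python
# Sketch: a higher deployment frequency only "counts" if the change failure
# rate stays under a chosen threshold (15% here is purely illustrative).

def frequency_counts(deploys: int, failed_deploys: int, max_failure_rate: float = 0.15) -> bool:
    failure_rate = failed_deploys / deploys
    return failure_rate <= max_failure_rate

print(frequency_counts(deploys=40, failed_deploys=3))   # True: shipping often AND safely
print(frequency_counts(deploys=40, failed_deploys=12))  # False: frequency alone is just churn
```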
In a sufficiently large organisation, for any given metric, there is at least one person trying to game it.
I wonder if there is a good article on the Personal Software Process (PSP) for Prime to react to
It is possible to measure developer productivity, but is it practical? I don't think so. I think your best hope is to get a stellar engineer at the top of the team and call it a day. They also have to know how to lead the team, so basically it's not easy to find one of these guys.
I always believe measuring these metrics is like measuring possession and shots on target in a football match.
Deployment frequency is actually great
Using customer satisfaction as a measure of developer productivity is batcrap crazy... how could a developer who is highly efficient and works long hours be measured via customer satisfaction if, say, they were working on API interfaces, cloud infrastructure, changing the stack or the database technology, or even simply making code maintainable?
None of their work would be visible to the customer... also, a product team can feed developers terrible work, the features could be dumb; does that make the dev unproductive?
In the end they did what consultants do: try to fool other business folks into thinking they did something useful.
Nice Loop, Dawg! 😂
I'm with you, 100%.
Please do an Accelerate book review; it talks a lot about how organisational performance correlates with devops.
I wouldn't wish reading that "book" on my worst enemy. It's just 200 pages of saying "devops practices are good and be nice to each other" in slightly different ways. And then a bizarre case study at the end about people using "obligoogoo" boards or whatever.
So they consulted Elon about this during/after Twitter acquisition?
I’m always extremely skeptical of statistics when they are all perfectly round numbers
Dude went "RAAAAAAAAAA" at 5:49 and my dog freaked out 🙄😂
Customer satisfaction as a developer metric? Sure, I'll refuse to deploy any A/B tests... 😀
Metrics and stuff like that are the bane of the developer's existence, put in place by people who do not know what they are doing, trying to apply stuff they learned in a PM class or an MBA to creating software. It just does not work, because those project management skills and techniques only work for things like factories or construction.
You can measure productivity. Give 10 people the exact same task inside the same project and measure the quality and the time taken. The quality is much more important though.
How are you measuring the quality?
@@JamesSmith-cm7sg 1. is the code readable, 2. is the code performant, 3. is the code adaptable to new conditions.
All these things are important. If your code is not readable, your teammates will spend way too much time whenever they have to touch your code. The second point is clear. The last one is super important: your clients want to change things up, so how much of a rewrite (if any) does the code need to implement the new feature? Some code is so tightly coupled that touching one thing breaks everything else, and it's a disaster.
Also, some people write 10 times more code to do the same thing. It's then neither performant, nor readable, nor adaptable. That kind of code leads to too many problems later on: bugs popping up here and there, things sometimes not working for unknown reasons, and small changes breaking things somewhere else. It's a disaster. It's a question of skill at the end of the day. Some people go crazy with TypeScript, for example. Why? They spend waaaay too much time on a task, and then nobody can read it, and then the data model changes and they have to spend days changing that monster they have created. Been there, done that, so to speak :D
I don't just believe, I know for a fact that there is such a thing as code quality as I have to deal with it every day.
Just ask the developers on the team what the pain points are. How hard can it be?
Let's start measuring the productivity of CEOs and politicians ...
"How NASA writes space proof code" by Low Level Learning
S P A C E
N
A
S
A
FANCY
That video is a bit disappointing. He could've added so many details: languages used by NASA, what the rules actually mean in the given languages, etc.
Yes, you can measure dev productivity, just simply run random() * 100
'The Meat' can be found at 17:58
wait, this guy works for netflix?
You just need an EngineerHiringFactoryFactoryFactory
oh yeah, isn't this the company that was fixing bread prices?
That's an easy one... am I being yelled at (Y/N) = perceived productivity level.