I've always been someone who spends a longer time writing more carefully designed software. My software rarely has bugs and generally works very reliably. The trouble is that there's no easy metric for the problems that didn't happen because my software was good, but it's easy for people to invent a mythical opportunity cost for what I could have done in the time if I worked faster with less care. I'm not saying my approach is better (I'm just a bit of a perfectionist and would find startup work awful) but it's a constant struggle to fight against some of these stupid metrics.
@nexussays you run it and see if it keeps working. It's like having the 9s of reliability in a service level agreement. This is a measure that indicates robust infrastructure and bug-free code. In a more esoteric context, functional programs can be proven correct mathematically. If you write programs for a living, you should give some thought to how you know if they are good.
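As an aside on "how you know if they are good": short of full mathematical proof, property-based testing is one practical middle ground. Below is a minimal sketch using Python's third-party hypothesis library; the sort function and its properties are illustrative assumptions, not anything from the video.

```python
# Sketch: check a program against properties, not hand-picked examples.
# Requires: pip install hypothesis pytest
from hypothesis import given, strategies as st

def my_sort(xs: list[int]) -> list[int]:
    # Stand-in for the code under test; illustrative only.
    return sorted(xs)

@given(st.lists(st.integers()))
def test_sort_properties(xs: list[int]) -> None:
    result = my_sort(xs)
    # Property 1: the output is ordered.
    assert all(a <= b for a, b in zip(result, result[1:]))
    # Property 2: the output is a permutation of the input.
    assert sorted(xs) == sorted(result)
```

Run with pytest; hypothesis generates hundreds of random inputs and shrinks any failure to a minimal counterexample.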
Managerial consultants basically just tell management what they want to hear. C-suites will pay millions so that they can try some PR spin and tell their staff that their “consultants” told them to do the thing they were already planning on doing anyway. If it goes sideways they can blame “bad consulting”. In reality, management never had an open mind from the start!
As for sales performance - I do remember one organization that used to be worth around $10m, until they closed a sales deal of $100m: overnight, the value of the company went up by an order of magnitude. The whole company, everyone together, worked hard to produce the pitch that would close the deal. Ultimately, the sales rep who closed the deal got the profit share, and the others got bogged down to no end in work to fulfil the contract. You can guess who retired early, and who left in frustration. Who benefits from measuring an individual on a team outcome? Only the person who is able to claim the award. Both the other team members (who might have done more than 90% of the work) and the company as a whole lose. Win-lose scenarios are bad, and individual performance metrics create them.
@@pedrob3953 Too often it works out like that. Winning the Tour de France is a team effort where the whole cycling team works together to put their #1 rider in a good position for the sprint at the finish line. The winner gets all the attention and the hype, NOT the team that made him win. When you don't know anything about cycling it just looks like he did it on his own.
At the 12:52 mark, Dave hits the nail on the head without actually naming Goodhart's Law, simply referring to gaming the metric: "Any measure that becomes a goal ceases to be an effective measure."
Nonsense, yes. And dangerous when the C-suite starts to believe it. Tech leaders need to understand why this kind of false flag metric is problematic and more importantly, what to propose in its place that makes sense. If not, CEOs will make even more mistakes when deciding technical directions.
I think you miss the point. C-suite people paid for the paper and then use it as an excuse to "let go" staff. It's not surprising since many C level execs went to the same schools and they're scratching each other's backs.
The troubling part is that managers will embrace this article as if it had descended from heaven on tablets of stone accompanied by angels with trumpets. Managers as a class are addicted to having a sense of control, even if it's an illusion. Having a simple metric for judging performance provides this illusion.
Designing and developing simple performance metrics is possible, but it is really hard and takes more time, effort and money than many people or companies are willing to invest. Much simpler to search the internet for half an hour to see what "they" say are good things to measure for performance.
Unfortunately, yes. The best example I used to give to show the issue with metrics is a watch. Any watch - a wall clock, the cheapest Citizen, an Apple Watch, an Omega Seamaster - measures time well enough. But it doesn't help a person staring at it to manage TIME: there is no chance to slow down time to meet a deadline; one has to find a way to deliver faster. BTW, DORA metrics are all the same - they're just indicators, not the targets or forces. And that's one of the criteria to tell a good manager from a lousy one: what they measure and why.
The best software engineering managers are ex-devs who are ardent users of the product. They snatch code from the devs, build their own releases, and are trying out new features even before the devs have said 'done'. The worst have never even used the product, don't know how to install it, and are doing everything by numbers: lines of code, story points, whatever; they like to imagine these are telling them something.
The irony for me in this is that I've experienced McKinsey twice, in two separate corporations, and both times their suggestions were implemented they were absolutely catastrophic failures in time, money and wasted man-power. A chop-shop if ever I saw one. They are not called McConsey for nothing.
I've just watched them apply Agile as a fully business-wide idea to a company, everyone included, not just software devs. It failed miserably within 12 months.
Thank you for not mincing your words and calling things for what they are. Unfortunately these bizarre ideas are often born in mid management and, when sold to the upper floors of a company, have a devastating effect on its culture and eventually productivity as well.
Productivity is measurable as it is just the rate of output given input. Overemphasis of productivity will get you in trouble in creative fields, though. Focusing on individual productivity in a collaborative field will get you in more trouble. Delivering value over the long term is what matters and you need to get your developers to focus on that. Overfocusing on productivity will get them to focus on their activities, not company objectives.
Accenture low-balls projects for the US government. Then, once Accenture has its hooks in the project, costs skyrocket, with no end in sight, and no real help or product for the end user. The US government doesn't care, because so many high-level government employees own stock in Accenture. See the cycle? See the problem?
Hi Dave, a sporadic viewer here. I'm retired but still writing software, still learning, after 50 years. My minor thesis was on programmer productivity tools, in the '80s. Meaning that I've had an interest in this topic for 30 years. After watching your vid I went and read the report. Apart from the issues that you highlight (I haven't chased up the other critiques), the biggest issue I see is a lack of transparency about how they came up with these "measures", what they actually measure, and how they have measured the performance of this model. Productivity was a hot issue when I started looking at it 30 years ago, and a lot of people have done serious work on it, but McKinsey say they've cracked it? They don't even provide examples of the kind of corporate culture that their approach was tried in.

A second issue is that they, like almost everyone involved in software, seem to lump all development work into a basket they call "coding". As I have taught in my programming classes: every line of code is a place for a bug. It's not entirely true, but it focusses beginners on thinking before they code. Well thought-through, well-_designed_ systems have fewer _opportunities_ for bugs. And it's usually faster too. This emphasis on coding misses that point.

I don't doubt that they've implemented it and seen improvements. If the existing approach is poor, and we implement almost any systematic change with a generally laudable goal and appropriate incentives, then we WILL see short-term improvements. But they don't say how they measured the initial productivity, or the period of measurement. So my conclusion is that the purpose of the report is to reduce everyone else's productivity as we critique it, then they can say "see, our way is better"!

'Till next time, Andy
Very often, this type of consultant is not really interested in solving the real problem of their client. Instead, they care more about pleasing the management of the company who hired them, so they can gain new sales in the future. Imagine a case where the company's management wants to downsize staff, but they want some criteria to do it. So they hire some "experts" to give them the advice they need to do what they want. It all depends on the real motivation behind the scenes.
Spot-on analysis! What really counts is the outcome: the business value added by the product increment that a team, or a bunch of teams, is working on. A team of only super-specialists does not make a winning team. Their success will depend on their ability to cooperate as a whole team and interact with their customers, so that they build a product that meets their customers' needs and that the customers love to use.
I almost entirely agree with you. When I first started working I was spending about 95% of my time debugging code that I inherited. However, as I wrote more unit tests and cleaned up the code, that has dropped to less than 5% of the time, and most of the time is spent on actual development. Features are delivered faster and the bug rate has dropped to almost nothing. The one area I don't entirely agree with is that when doing software based on math modeling, it can often be worth taking some additional time to go over the math design and check all the assumptions, stability, ways of solving it, alternative approaches, etc. I have run into too many cases where someone just started implementing a model, with some pretty bad consequences. I like iterative coding and delivery, but I really want to at least try to make sure the math we are going to implement is the right math to solve the problem we really have.
The vignette at 8:27 seems almost universally applicable. I'm in construction and always have the issue with sales over-promising and trying to force production into time frames and parameters that can't be met without great cost to the project. But the sale is made, the salesman has gotten his commish, and upper management hasn't the snap to discern the trap sales left for someone else to clean up.
It looks like the Soviet Union to me! A company I interviewed with wanted to test how fast I could type on a keyboard! Lengthy meetings without any structure are a time thief. If you want to increase the performance of a team there are so many things you can do. It is very difficult to measure the quality of code from an individual, since it is part of a big codebase where things interact between the individuals. Good, insightful leadership is essential. Having worked as a software engineer both in Sweden and in London, I was amazed by how much better the organization and structure were in the London team. It was led by a guy of Indian background who coached the team to be their best.
One very important point here is to challenge the premise and assumptions of the question before engaging with the answers… that's why I think it's very important to question the need for, and the possibility of, individual productivity measurements. Good work, Dave!
That's the human way to do it, but it's a trap. I'm managed by people who don't understand my job. They can listen, but in the end they will tell me to accept it or leave xD
Thanks for a great review of that article. I'm working on recreating a goals/assessment framework for a mid-size organization of developers (~250), and have been looking at a number of sources for ideas on how to approach this. We're definitely prioritizing a team approach to performance (and expect to heavily leverage the DORA framework), but we also need to find some mechanism to rationally assess individual developer contribution so that the allocation of limited bonus funds is done fairly, responsibly, and in a manner that does reward exceptional work. Can you point to a framework you think would be valuable for this?
In my early years as a SW PM, I was often asked to provide arbitrary metrics like the report suggests. They were rarely of any use. The only metric I've ever found that works is: is the customer getting consistent quality quickly? The only good way I've seen this happen is through strong CI/CD.
Given it's from McKinsey they probably didn't have the time and will to do the required real-world checks. And the author (I didn't check) was likely a brilliant 24-year-old PhD in International Management.
My main occupation in my job is correcting production bugs. Almost nobody else in my team likes doing this, so I treat bugs almost exclusively. What I found is that correcting a bug often consists of adding one or two lines, or adding a condition somewhere. The more code you change, the more chance you have of doing something wrong. Then you write a test that reproduces the bug, and that's it. It sometimes takes me hours of analysis to understand a production bug. Anyway, I think it would be very hard to measure my productivity through the code I write.
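To illustrate the workflow this commenter describes (a one-or-two-line fix pinned by a test that reproduces the bug), here is a minimal sketch; `parse_duration` and its bug are hypothetical, invented purely for illustration.

```python
# Sketch of "fix one line, then pin it with a regression test".
# parse_duration and the bug described are hypothetical examples.

def parse_duration(text: str) -> int:
    """Parse strings like '90s' or '5m' into seconds."""
    value, unit = int(text[:-1]), text[-1]
    # The production fix was a one-line change: 'm' previously mapped to 100.
    factor = {"s": 1, "m": 60}[unit]
    return value * factor

def test_minutes_convert_to_seconds():
    # Regression test reproducing the original bug:
    # '5m' used to return 500 instead of 300.
    assert parse_duration("5m") == 300
```

The diff is tiny, but the hours of analysis behind it are invisible to any line-counting metric, which is exactly the commenter's point.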
Good software development is not always good business - therein lies the problem. I've seen plenty of poorly constructed pieces of software receive high rates of revenue and the businesses sell for multi-millions. I've seen good software crumble when the business shifts from a focus on building a good product, to building a profitable business. Makes sense for the business as they often are successful - at the expense of their customers.
I've seen that too. As a supplier you often need to lie to get a foot in the door and outmaneuver your competitor, just to get the work or sell your product (which doesn't exist yet). And customers are asking to be lied to: they want the rosy picture, not the reserved, realistic, conservative story. They want solutions, not more problems. A report like the one discussed here is just another manifestation of that. "New methodology". You feel you have no control? Good news! You CAN be in control after all! And we have just the experts to help you with that! See you in ten to twenty years when you've figured out it wasn't all it was cracked up to be; we'll have some other story ready for you by then.
Development expertise can only be effectively assessed by developers. We accept that this is true of other types of engineers, we accept that it’s largely true of doctors and PhD candidates and many other technical fields. But for some reason, in spite of it being generally accepted that software development is a challenging technical pursuit, the managerial class at large is obsessed with the idea that they can measure the quality of developers without knowing really anything about software development or programming or anything. Developers *can* be assessed on their social skills and other non-technical axes by non-developers, of course, and those are certainly a large part of the job and developers should be cognizant of that.
To speak to a classic developer measurement problem, as somebody who spends a lot of time working on tragic legacy code, it can easily be the case that I’ll spend 3-4 days just understanding how some part of the system works before I resolve the problem with a dozen lines of code plus some unit tests. Alternatively, if I’m working on a greenfield feature or a tool or something like that, I can easily commit 500 or a thousand lines per day, especially if there’s a significant amount of UI work. Somebody who’s not a developer might not grasp this at all, whereas every experienced developer should understand it well (if they can’t, I’d argue that they’re not an experienced developer). Depending on the window in which somebody assesses my “LOC productivity”, they would label me either a superstar or a drain on the company.
I'm not sure someone from outside a team is in a position to assess whether someone's non-technical skills are good or not. That apparently non-social programmer might actually be very good at the important team interactions that the manager assessing would never witness.
@@loganmedia1142 Fair point! I'd always encourage programmers to find good ways to manage outward perceptions just as general career advice, but I certainly agree that positive and constructive internal team relationships are much more relevant to the success of projects than making good impressions across teams.
There are certain aspects you can measure, and yes, they are mostly team-based. I think measuring the amount of time a task takes makes sense ONLY if it's meant to be used as a handbrake for the teams. For example, when a story or bundle of tasks is rapidly adding up hours due to requirements changes or uncertainty, the team can use time spent as a signal to stop such tasks. I've had success implementing similar metrics in engineering teams. Flow efficiency and other metrics provide much better insights, though.
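For what it's worth, here is a minimal sketch of the "handbrake" idea and the flow efficiency metric mentioned above; the 1.5x threshold and all names are assumptions for illustration, not anything from the video.

```python
# Sketch: time-spent as a handbrake, plus flow efficiency.
# The 1.5x budget threshold is an arbitrary, illustrative choice.

def should_stop(logged_hours: float, budgeted_hours: float,
                factor: float = 1.5) -> bool:
    """Handbrake: pause and re-scope a story once logged time
    exceeds its budget by `factor`."""
    return logged_hours > budgeted_hours * factor

def flow_efficiency(active_hours: float, elapsed_hours: float) -> float:
    """Fraction of elapsed calendar time the item was actively worked on."""
    return active_hours / elapsed_hours if elapsed_hours else 0.0

print(should_stop(logged_hours=40, budgeted_hours=20))     # True: re-scope
print(flow_efficiency(active_hours=12, elapsed_hours=60))  # 0.2
```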
I'd love to hear your more detailed thoughts on the SPACE metrics, once you've had a chance to think through them more. I've seen them come up a few times, have some initial thoughts, but haven't used any of them. Really curious where you land on them.
They were paid by CEOs to lie again and take the heat, but allow the CEO to fire people. I feel the need for a way to point out to my boss who needs a chat. I agree that the things pointed out here don't really work. However, I still feel the need to measure individuals in some way; otherwise you allow one dev to spend weeks on centering a button while the other team members work super hard and finish on schedule. Whenever I get a task that needs to be finished fast and effectively, I actively avoid certain team members because they are not fast enough for the deadline. Again, I am measuring the individual, using all the factors I know about them: willingness to learn, speed, business knowledge, capabilities in the given stack based on previous experience, how willing they are to follow the plan (if they can't give good counter-arguments), how the tight timeline will affect them, and whether they will be able to function with potential overtime. I concede that I may be part of the problem. We have no system to game, but you are in some way gaming me. If you make good stuff that I don't have to constantly fix or leave 30 review comments on, you are just better than the person who doesn't.
Applying the measures to IT MANAGER productivity would be interesting. Would a top score make for a top manager? As the level of abstraction has advanced from bits via programming logic to configuring tooling and will soon be prompting AI, developers are becoming more like managers. Can you score a prompt engineer on the same points?
I feel like it's all in all bollocks. These people really just don't want to work, ultimately. A proper dev team manager, call him what you want, has to actually delve in deep, understand everything and work with the people on site to be useful. Yes, there absolutely are useful tools and metrics for him, but ultimately he works with people, not with numbers. But this? This smells of "I never saw you, your work or the result of it. I am gonna manage you now!". That is the mindset that makes me fume and consider violence.
I have found that, generally, the actual purpose of sprints and daily public sprint reports is not to schedule work efficiently but to put your nose to the grindstone, every minute of every day, with a side of public shaming. If you beat your estimate you get rewarded with more work, and possibly demerits for overestimating (= lazy). If you exceed it you get demerits for being too slow and underestimating.
Those DORA metrics, at least as presented there, are not at all good metrics either. They generally follow the pattern "if things are better, then these metrics are expected to improve", which sounds nice until you realise that nothing is said about them changing for any other reason, including that the improved approaches might produce much bigger changes in other forms than the improvements themselves. For instance, if you improve your bug detection, such as by making it easier for users to report bugs, you will get a far higher failure rate, even though finding and fixing those bugs really does improve stability.

This means that to make use of those measurements, you have to have laboratory-level control of all the other factors in play, and the result you get might still be only weakly correlated with what you want, and may still contain a lot of noise you were not able to fully control. That in turn means that for most of the things you would want to use such a measure for, you just cannot get a result you can trust, or come close to arguing is statistically significant. This leaves it measuring things that have very little direct effect on the metric itself, but rather affect developer productivity and, through that, the metric. Here we are talking about things like the amount of noise in a room; but even then, higher noise levels could make the developers more nervous, which might prompt them to make their changes in smaller chunks, which would look like increased productivity to the metric but really just be a false signal.

In total, the DORA metrics are both way too noisy and too easy to disrupt to be trusted much even in the best-case scenarios, and since they are also pitifully easy to game, being hugely affected by other things, they are extremely poor things to set known targets against.
So I have spent most of my life learning about writing software (I'm 53; started at 19). However, I have never had a job as a developer/software engineer. However (x2), I have been writing software, on and off, at the company I work for. My manager allows me to do these things. What is my approach? Get acquainted with a need for a particular software or automation project, write the software, ensure it works, and make sure the owner of the project, or requestor, is very happy with the final outcome. Then I tell my manager that this is what I have done. I don't get into the nitty-gritty of the code; it just confuses people. No one wants to know this stuff. 😊 I just try to make sure that people are happy.
Does anyone remember when McKinsey killed off their clients around the year 2000 with a system that said, "Invest in whatever is working"? People increased budgets in R&D and advertising until they went bankrupt or fired McKinsey. McKinsey forgot about diminishing returns. Several of my students work at McKinsey, but the chief McKinsey resource is their network of former employees who become clients. Their attitude is arrogance. Two days after starting at McKinsey, my former student thought that he knew more about modeling than me.
How I love watching your videos! When you talked about measurement in sales teams it was a relief for me, because that's what I thought when I read the McKinsey article.
I largely love this video, *but* it remains a reality that some developers are in fact lemons that damage team efficiency, quality, and morale - significantly. I can see *some* of those individual metrics being useful in debugging what is going on if higher level team metrics aren't healthy or if you are hearing a lot of complaints from the team. Curious your take on how one *should* assess developers and identify bad hires.
I love the sales team example. We must find the context and act upon it, not take an absolute opinion and apply it to everything. The factor that really matters is the context, not a standard metric. I faced this problem with agile and agile coaches over dev productivity and quick response to change vs. quality 😅
McKinsey is not really in the business of making things work better. They are largely in the business of A) telling management and ownership what they want to hear, and B) suggesting unsavory or unpopular ways of increasing profits, often to launder the blame and reputation of the people who hire them. It doesn't take a genius to realize that you can increase short term profitability by cutting wages or laying off a bunch of people (which is a very common 'suggestion' from consultancy firms like this), but it's not very popular. If you hire a fancy consulting firm and pay them a bunch of money, then they tell you to do it, suddenly you can claim you're just following expert advice. Given that their goal is never to actually solve real problems in the way that most people would conceive of them, it's very easy to see why they release this kind of commentary.
The article looks to be written by managers so they can make sure there is enough busywork for them in tracking developer productivity and they can keep their jobs, or in McKinsey's case, their contracts.
The main thing I don't like about DORA is that it does not predict the future well and thus suggests only very conservative changes. If you embark on a major system redesign, your score will go down for quite some time, and eager PMs will point at the effort as bad right away, trying to kill it. However, once you do finish that redesign, quite often your productivity goes up, because you now have a better, more extendable, more reliable system. Honestly, any metric that claims to capture software productivity without including complexity, technical debt, and openness to extension will fail to give proper meaning to measurements. Quite often you can ship crap code fast that has a reasonably low crash rate, but that only measures the "good enough" state, and actively creates a barrier to jumping from one "good enough" to another that might open up better results. That is not to say that highly focused metrics are not useful, but their impact should be limited to the scope they actually measure.
In my opinion, DORA metrics help spark conversations. If a metric goes down and as a team we feel that it is OK to be in this state for a while, so be it. If it continues to be in this state month after month, that's when alarm bells should ring and the team needs to rethink what they are doing. Without DORA metrics, these conversations wouldn't even happen.
@@mrinalmukherjee1347 Sounds really great, and I do wish that were the default case, but so far I've seen more of the negative cases where metrics were used to micro-manage or limit development, sadly.
Thanks for this video. I liked most the analysis of the metrics table starting at 10:05. This will help me in discussions within my company and also gives me further ideas for looking closer at DORA and SPACE.
I love your analysis of the McKinsey metrics. Unfortunately the damage is done and these kinds of insidious productivity metrics are becoming more widely used. I believe they all tie back to making managers look good, not the engineers. Look at the individual metrics, for example: they are management concerns, not productivity concerns.
As for metrics: I'm building up a comprehensive system of KPIs which can be used in software development organisations at a large scale. BUT: 1) There are no individual metrics, and that's no coincidence. 2) The purpose of development KPIs is to highlight problems and discussions that need to be had, not to achieve conformity or reach a universal baseline. What we need to look at is the performance of the system, more than the performance of the individuals. McKinsey's article, however, is ultimately achieving the opposite: measuring things that are _not_ a problem, generating universal baselines and putting people into boxes, which is the most counterproductive thing one can do in software development.
While I agree that many organizations have plenty of issues in their processes, communications, and systems that will very quickly diminish productivity overall, I think it's short-sighted to think that individuals are never at fault for poor productivity or misuse of processes. A couple of bad actors in a small team can totally tank a team's productivity even with a great set of processes and communication. This movement of never looking at individual performance is right to shift some blame to processes, but we should also be able to hold individuals responsible. After all, a process is only as good as the team's willingness to operate within it, and if the whole team isn't on board, or is misaligned, then performance will suffer.
Hi @@Eric-vh4qg, having no individual "metrics" still provides the opportunity to talk about individuals' "contributions". --- Processes are guidelines in practice, left to the mercy of individuals, as no one can make anyone follow the words without standing behind their shoulder all the time. Especially not clever, trained, experienced software developers ;-) (also, irregularities may happen, etc., but that belongs to another topic.) --- There was an interview with two engineers whose job is to visit the steel bridges in the region and check and tighten each and every bolt on each bridge according to the blueprint, every year or every second year. The reporter asked why engineers are needed, as anyone with the tool could do it. The answer started with a question: would you believe they really did do each bolt with the proper force? We understand the consequences.
I've been a developer for 8 years now and management has this insane obsession with measuring productivity: OKR systems, estimates that are impossible to predict due to scope size or lack of prior analysis of the problem. Meanwhile, there is little to no effort on improving the process. My current boss wants estimates that are accurate to the hour. Considering that I have to work on two gigantic apps, that the only person with experience in them quit 6 months ago, and that there is no support or understanding that what they are expecting is impossible, it makes me want to quit the industry forever.
It amazes me that seemingly smart people keep asking for such accurate "estimates", when they are literally requiring clairvoyance. They should at least mention this essential requirement in the job ads.
Maybe it's time to have a quiet chat with your micro-managing manager to explain to him what problems you are encountering. It might be that HE is also being micro-managed by a higher-up. Maybe make clear that he is about to lose you too, so there will be nobody left working to improve and understand the software you are responsible for.
@@jfverboom7973 I've tried that. I've pointed out serious issues: tickets not being descriptive, the whole team being relatively new to the project (6 months or less), QA being a bottleneck, the architect being a major asshole. And instead of stopping for a moment and trying to address some of those issues, we get a new metric to obey. I'm just speechless. I'm looking for a new gig; the money is not worth my mental health.
Definitely! There's so many great conversations going on over there 😃 One of the reasons for setting up the Discord is to engage with the community Dave has built - long may it continue!
I used to be a CAD building designer, one of our pitfalls was the company wanted near-constant update notes on what you were designing, (like every 10 minutes) I’m all for documentation, but not to the extent that it prevents accomplishing anything. They ultimately looked at these notes for proof of productivity, and by comparison almost completely ignored the actual work product.
Beautifully delivered... and so many comments are worth reading too. Also, one wonders why people take this sort of report seriously, given who prepared it and their background, or rather lack of one.
I had already read this "article" and some things seemed illogical, but you provided the critique in such a structured way that it all seems obvious, even to inexperienced developers. Thank you.
When I went to business school, I remember being told a rule of thumb for planning the amount of time for developing software, taking all things into consideration. They said: ask a programmer how much time it takes, multiply it by 3, and up the unit by one. So 3 days becomes 9 weeks. hehe
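Tongue in cheek, that rule of thumb translates to code roughly as below; the unit ladder is my assumption of what "up the unit by one" means.

```python
# Playful sketch of the business-school estimating rule:
# multiply the programmer's estimate by 3, then bump the unit one step.
UNITS = ["hours", "days", "weeks", "months", "years"]  # assumed ladder

def business_school_estimate(value: float, unit: str) -> str:
    bumped = UNITS[min(UNITS.index(unit) + 1, len(UNITS) - 1)]
    return f"{value * 3:g} {bumped}"

print(business_school_estimate(3, "days"))  # "9 weeks"
```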
Universal Basic Income. Do not mail checks. Instead, you must go to a federal bank to get the cash. My point: Lots of government sponsored programs that need lots of software and user interfaces would not be needed, because those programs would be shut down. Why have 10x admins for SNAP when we could just shut down SNAP altogether and just give recipients the cash instead? In short, governments are driving the need for systems and software engineers. Once we remove more and more government from society, costs plummet and only the best software engineers survive.
Yeah, glad you're talking about this, but it's nothing new. Large companies always have dumb KPIs that make no sense; the problem is that most managers/CTOs don't actually understand tech.
The thing that aggravates me more and more frequently is that the people working in / with tech don't understand tech. They think they do (and on a skin-deep level they have something of an overview, and they manage to stay on the surface with that), so they don't bother to realise that there is (much) more.
Only one question: where on earth do you get your t-shirts? I love 'em!!!... and your content too, btw. And in regards to the content of the video, indeed it is really hard to measure developer productivity. You can puke out a thousand lines of code that are kind of useless, or you can push 10 lines that make a world of difference. In our job things take time, and they take a lot of thinking sometimes; not everything is about lines of code. I believe y'all get the point.
Do you have a link to "Dave explains the Dora Metrics"???? I see you took my positive comment about the value of a swear word in the thumbnail to heart. ;)
Meh, there is some value in developer productivity; a "rockstar" can save projects, and way too many rely on those rockstars. And a team of people who produce 3 lines of basic code a day in a good week will not deliver software no matter how well they organize their work. But as usual, McKinsey, as smart and sharp as many of them are, is operating in a very simple abstraction of reality.

The most valuable developers in many teams I've seen have horrible individual productivity because they multiply their team, making the rest better or less bad, or setting up all the best practices (CI, CD, testing); that is hard to measure. And it's really hard to come up with reportable metrics which identify bad developers and slackers while not catching those team multipliers. And in the world of McKinsey and Elon Musk, everyone not reaching the metric is often fired automatically.

All that said, "slackers" in teams are a problem, and it ruins teams if one or two people suck the fun out of it by not performing at all and pulling the others down with them, and it's hard to act on stuff like that in a timely manner without objective arguments. So I wish there was a way to measure some of that stuff, but it's almost impossible. A developer who needs 5 days for a task, but thought about every name and every interface and created maintainable code, is magnitudes better than someone who does the task in 1.5 days but whose code is a total mess and makes any future change impossible. How do you measure maintainability on an individual level? As if somebody at McKinsey has a f'ing clue about that.
Inner vs outer loop is a good example of how these metrics don't work, because it's too gameable. I want to improve my dev pipeline to decrease the time others spend on the outer loop activity, so I class development of the pipelines as inner loop activity and using them as outer loop. On another team, improvement in pipelines is seen as unimportant so development of it is classed as outer loop. The work is done begrudgingly by the developer with the least social capital, probably the most inexperienced member of the team, and with little attention paid or review.
A client I‘ve worked with for over 10 years has McKinsey stampede through every couple of years, sent by corporate. I always have to pick up the pieces afterwards and fix the damage they made.
So I struggled a bit with some of the things presented in the video, and now I see I was struggling over the wrong things. Thank you very much for helping us understand software development and engineering!
Very nice video, I think you do a good job arguing against the usual sort of metrics used on software engineers. However... it's a plain fact that some devs are really good, some are just so-so, and some just plain suck. They vary in drive, knowledge, and analysis talent. If you told me I had to stack-rank my devs and get rid of the bottom third, and give a raise to the top third, it would be for me as their manager an easy task, because I work with them closely every day. The real question is whether there is an automated/easy metric that can be gathered, which is as useful as the "gestalt" a manager gets over time. Thoughts?
If that were possible the company would not need your function. Any Metrics will only show whether there are anomalies, which require further analysis. Don't use the metrics for reward or punishment.
Any company that takes this McKinsey report seriously will find themselves suffering from an acute shortage of software engineers. No doubt the C-suite will scratch their heads, and then ask McKinsey to come in and "solve" their talent retention problems...
Goodhart's law: when a measure becomes a target, it ceases to be a good measure. My belief is that individual metrics are not bad to have. It is just that they result in bad behavior if you use them as targets with financial incentives. The problem here is that once you have metrics, they almost always tend to be used to judge performance, rather than to help teams improve collaboratively.
Over the last 25 years of my software development and test career, I’ve noticed the greatest predictor of success is the clarity of the concrete objective. The greatest predictor of failure is ephemeral objectives.
I wonder how many actually read the article. It’s pretty much an entry-level piece, aiming to make people think about measuring and metrics. And it actually specifically states that most metrics are contextual; and it’s totally fine - like most tools they need to be used the right way to get results. It’s the job of a manager to correctly interpret those metrics. Bad managers take those literally and as a single source of truth and you get your salesmen gaming the system and developers getting under-appreciated for the maintenance work. While good managers would be using any metric as just one of the numerous data points that can hint at something not working right or just indicate a change in project/team dynamic. Most important thing this article does is it tries to encourage to think about these things and bridge that manager-developer relations gap; which can be done much more efficiently and graciously, no doubt about that, but disregarding this notion altogether is as backwards-thinking as measuring productivity by lines of code written.
Thank you for making this video. The premise of the McKinsey article is wrong and dangerous. It promotes a short-term-focused style of management that will hurt developers and development teams. The selection of the metrics on which people are evaluated sets the course for the team, usually in ways that have undesired effects that are difficult to predict in advance, because those subjected to the metrics will be creative and think out of the box to optimize their personal performance, instead of what is desirable for the team / company.
Business school grad here, turned software engineer. Most B-school grads have no coding experience; they can't do anything on a command line interface, don't know what a schema is, what a JSON file is, or what an API is. They don't know anything that a software engineer does. It doesn't stop them from throwing around buzzwords like IT, Cloud, Agile, AI, Machine Learning, Data, API, cyber security, open source.
Be careful with overusing the Mythical Man-Month card. It only works during specific time frames and contexts, eventually, you need to add more engineers if you want to tackle more things in parallel. Should Google have 5 engineers in total? Not even the best DORA metric numbers would do the trick.
There are ways of scaling SW that work, and ways that don’t. In general, SW dev scales VERY POORLY unless you do some rather complicated things. Specifically organise it so that teams can work independently of one another. When people are looking at “individual dev productivity” as a useful metric, we are a VERY long way from the levels of sophistication necessary to scale effectively by adding more people.
I totally agree with you. Still, I think that promoting your book even before providing counter-arguments to the article is a bad move and hinders the credibility of the video, as it gives detractors the chance to suspect this is a marketing video.
As a 15+ year developer and tech lead, I still have some problems with CI. If I imagine a very large system already in motion, with enough space, developers, infrastructure and money to make it work, you could even make daily commits to main, no problem (I'll not discuss Git flows vs trunk-based here). But for small projects, small teams, lack of money, big problems, situations that demand a lot of research, and different levels of seniority among developers, there is no room for rushing deliveries. In my view, DORA metrics are only effective under specific conditions. Outside of those circumstances, I find them to be an inadequate measure of a development team's performance. As of now, I'm not aware of a one-size-fits-all metric that accurately captures the nuances of a development team's effectiveness across all scenarios.
Well, having worked this way myself in teams ranging from the tiny (4 devs) to the large (a few thousand), in all sorts of industries and on lots of different kinds of software, I don't agree. It doesn't take a lot of money or a lot of effort, and it allows you to build better software more quickly. So I don't understand where you think "Change Failure Rate" and "Mean Time to Recover" from a failure, which are the DORA metrics that measure Stability (Quality), and "Lead Time" and "Deployment Frequency", which are the Throughput metrics, are "inadequate to measure team performance"? Sure, there are other things that we are interested in, but they vary so much that they don't create a valid baseline for comparison. It's nice to know if our SW makes money, is resilient, and so on, but FIRST we need to deliver stuff that works, and that is what the DORA metrics measure. How on earth is measuring "delivery of stuff that works" inadequate?
@@ContinuousDelivery So how would you measure delivery in areas where features take days or weeks to be ready, or in an embedded scenario where you cannot just deploy quickly? I'm really trying to understand the intrinsic value of CD. For example, a couple of years ago I was working on a bug in a video streaming record system on embedded Linux, and it took almost a month just to get to the root cause; delivering a proper solution took even more time! Does that mean this 1.5 months of work had no value because it wasn't fast enough?
@@cronnosli CD is about "working so that our software is always in a releasable state"; that doesn't necessarily mean pushing to production every few seconds. My team built one of the world's highest-performance financial exchanges, and the authorities rather frown on "half an exchange" with other people's money in it. It was 6 months before we released anything, but we practiced CD from day one. At every stage, our software was of sufficient quality, tested enough, audited enough and deployable enough to be released from day one.

So you do that. How do you measure it? Use the DORA metrics. You will score poorly on "Deployment Frequency", but as long as you "could" deploy, I would count that for now, and that will get you on the right road.

The "intrinsic value" is in multiple dimensions, but when you are in the mode of not being able to release frequently, for whatever reason, then working so that you are capable of releasing your software after every small change is a higher-quality way of working, and stores up less trouble for when you do want to release. It is measurably (see DORA again) more efficient and more effective. As you develop the ability to do this, HOWEVER COMPLEX YOUR SYSTEM, you get better at it, until ultimately you can release more often, if you choose to, even if that isn't your primary goal. This means that whatever the change, you can get to the point of release optimally, because your SW is always in a releasable state, and the only way you can achieve that is to MAKE IT EASY TO MAKE RELEASABLE SW!

On your last example: no, it doesn't mean "this 1.5 months of work had no value because it wasn't fast enough", but if you had been practicing CD, it would probably have been easier for you to find the problem, because your code would have been better tested and so easier to automate when you were bug hunting. This isn't a guarantee, but more a statistical probability. Let's be absolutely clear about this: CD works perfectly well, actually better than that, CD works BEST, even for VERY complex software and hardware systems. SpaceX haven't flown their Starship recently, but the software is still developed with CD.
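As a concrete footnote to this thread: here is a minimal sketch of how the four DORA metrics named above could be computed from a team's deployment log. The record format is invented for illustration (Python 3.10+), not any official DORA tooling.

```python
# Sketch: computing the four DORA metrics from hypothetical deployment records.
from dataclasses import dataclass
from datetime import datetime
from statistics import mean

@dataclass
class Deployment:
    commit_at: datetime            # when the change was committed
    deployed_at: datetime          # when it reached production
    failed: bool                   # did this change cause a production failure?
    restored_at: datetime | None   # when service was restored, if it failed

def hours(delta) -> float:
    return delta.total_seconds() / 3600

def dora_metrics(deploys: list[Deployment], window_days: int) -> dict:
    failures = [d for d in deploys if d.failed]
    return {
        # Throughput
        "deployment_frequency_per_day": len(deploys) / window_days,
        "lead_time_hours": mean(hours(d.deployed_at - d.commit_at) for d in deploys),
        # Stability
        "change_failure_rate": len(failures) / len(deploys),
        "mean_time_to_recover_hours": (
            mean(hours(d.restored_at - d.deployed_at) for d in failures)
            if failures else 0.0
        ),
    }
```

Note how, per the objections elsewhere in this thread, these are team- and system-level numbers: nothing in the log attributes anything to an individual developer.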
McKinsey: a corporation which is linked to US decline in my mind. If you want to know why the US military has a hard time divesting itself of Chinese-made gear*, look at these kinds of consultancy companies which focus on financialization. To paraphrase the old adage: "Those who can, do; those who can't, consult." Inept managements hire consultants, so we all suffer the double whammy. ps I am not referring to those who are hired as temporary workers and given the misnomer "consultant" (I have been one of those myself). *Did nobody notice that China is in the thrall of a communist party?
I take issue with the idea that how you structure and organize work is the primary indicator. In fact, I would argue that this perspective does more harm than good, and that the effort spent on organization is better focused on technical excellence, with focusing on the right work coming next in priority. Technical competence is in far shorter supply than people's willingness to shuffle story cards around endlessly.
Well, this is based on research carried out by Google on over 180 teams. So to "take issue" you need to have more evidence or show how this research is flawed. This is not what they expected at the start, so this is not really a bias; it is based on research. Actually this backs up evidence from earlier research: Belbin found in the 1960s that it takes a lot more than just assembling a group of "high performers" to succeed.
@@ContinuousDelivery I don't disagree with the assertion that putting high performers together doesn't guarantee success. But I also know that Google has always favoured being able to hire less skilled, cheaper graduates than experienced people from various backgrounds. I see plenty of companies chasing the promises of being able to replicate this success and they fail miserably despite putting endless effort into how they organize their work. Then when everyone thinks organization is a silver bullet, at the far end, out comes unplanned and avoidable technical debt. Shameful levels really because nobody wants to admit that you still have to be competent before you start fiddling with other factors.
There are too many non-engineers managing software engineering. This would never happen in civil engineering or medicine. Fundamentally that's our problem.
It's been the problem since business schools started teaching programming in the 1970s.
I suffered that when I was in that company ...
agree 100% 👍
I think it can be problematic when you have coded before and try to extrapolate your knowledge to others; sometimes you are making false assumptions based on things that were valid 10 years ago.
It can happen even with engineers managing. All it takes is one non-engineer higher up who won't be talked down. Or maybe a customer that requires metrics on how their project is going and a salesperson who always says yes. These things are pervasive even with good people in place who know better. Granted, eventually the good people move on and all you have left are those who don't.
I recommend giving "When McKinsey Comes to Town" a read for pretty much all the reasons to disregard anything they produce, not because they are bad at what they do, but because they are very good at what they do. It just so happens what they do is drive profits and cut costs for companies/governments that hire them, and if that comes at the cost of employees, customers, the environment, or with a literal genocide in the background - oh well, it's just business.
Given McKinsey’s role in the opioid epidemic I think anyone who brings them in to consult at this point should be assumed to be a sociopath with little interest in anything but money
I worked somewhere that suddenly cut staff because McKinsey gave them some bogus profitability metric. The company was in fact already profitable and in no danger of running at a loss. I'm honestly not even sure why they needed to pay consultants to analyse the company.
Except they aren't "good" at what they do; they are idiots who have no real-world experience running the businesses they advise. They save a dollar today but cost you $100 in a decade; by the time anyone realizes the impact of what they advised, the management has changed three times and no one knows who is to blame for the current predicament.
It sounds great to senior management that they can have visibility into the teams in their organization using metrics, but in reality the metrics are only useful in measuring the progress of a team. Trying to compare teams across an organization is fraught with peril: not all systems are the same; all have different challenges, and development happens at a different rate depending on the phase of development. The reality is that teams need more autonomy and outcomes should be measured. At some point metrics detract from actually completing the task, and in many instances are more of a hindrance than a help. By all means let's try to measure some things, but at the end of the day it shouldn't be the be-all and end-all of what you are doing, and the trap is that it will get distilled to one or two lines on a report and will impact how and what is done.
how come no movie on this yet?! 😅
@@loganmedia1142 companies always try to maximize profits, no matter how well they already do. This is just how capitalism works. If we want to change that we must try to abolish capitalism.
"This article is rubbish, completely misses the point, and is probably dangerous." Yeah, that's a good summary.
“so let's take a look” *sigh*
And yet managers throughout keep trying to push these measures, and if you question them you are considered to be 'the problem'
@@MrMattberry1 That they use such metrics might be a hint that they are looking for an "under-performance" excuse to cut you loose.
@Vzzdak somehow my metrics were always pretty good, however me and 400 others in my company have now been made redundant. 😞
Sandbagging quotas, sniping accounts, quarter lumping to reach accelerators, premature revenue recognition, commandeering sales resources, and the list goes on. Salespeople are very clever and they know how to play the game.
@@DemPilafian I used to travel every quarter to support Sun's main distribution center's software. I'd watch them fill orders to make their quarterly sales numbers and then cancel them.
Should just fire the sociopaths
what gets measured, gets gamed
@@haraldc70 it's a bit more subtle than that. Whatever creates incentives gets gamed. You can measure something accurately only as long as you don't incentivise the measure. Which is easier said than done.
The McKinsey report shows a lack of systems thinking. It's best captured in this quote from Russell Ackoff: “A system is never the sum of its parts. It's the product of their interaction.”
McKinsey is a team of guys who make money giving advice on topics they have little to no expertise in. Does it come as a surprise that they sell a lot of snake oil?
With the classic sole exception: a system with only one part.
@@tiagodagostini is that the apocryphal anonymous old timer in the basement who makes the whole thing work…until that one day?
@@enginerdy Let me give you a simple example that covers 50% of the software companies in the world: ONLY 1 DEVELOPER...
@@tiagodagostini good example!
I love how they acknowledge that "failing to move past old mindsets" is a pitfall and then propose a bureaucracy framework to overcome it. Oh, the irony!
It's a very scalable solution though - the bureaucracy will expand, to meet the needs of the expanding bureaucracy.
And not everything is a system.
Could it be that they are talking about moving past old mindsets in the other direction, to even older mindsets…
McKinsey being McKinsey, in other words
The only thing consulting companies understand is bureaucracy. So it's no surprise that would be the only option they have to measure or control anything.
I walked out of an interview once when a team lead asked me how many lines of code I write during my work day! 🤐 Don't want to work with dumb people.
I would have answered somewhere between -100 and -300, or on a good day -500 to -1000, where it counts. (Yes I work with legacy code a lot.) Wonder what they would have said to that.
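To put a rough number on that "negative productivity": here's a minimal sketch, assuming a local git checkout and a hypothetical author name, that sums lines added minus lines deleted across a repo's history.

```python
# Sketch: net lines of code per author from git history.
# Deleting more than you add yields a negative total -- often the best work.
import subprocess

def net_loc(repo: str, author: str) -> int:
    """Lines added minus lines deleted by `author` in the repo at `repo`."""
    log = subprocess.run(
        ["git", "-C", repo, "log", f"--author={author}",
         "--pretty=tformat:", "--numstat"],
        capture_output=True, text=True, check=True,
    ).stdout
    total = 0
    for line in log.splitlines():
        cols = line.split("\t")
        if len(cols) == 3 and cols[0].isdigit() and cols[1].isdigit():
            total += int(cols[0]) - int(cols[1])  # binary files show "-" and are skipped
    return total

print(net_loc(".", "Jane Legacy"))  # e.g. -740 after a good week of deleting
```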
The correct answer is, “the least amount of code to deliver the requested work.” Right??
There is no correct answer, but we should enjoy attempts to answer it with some sarcasm.
You dodged a bullet. Well done on the walk out.
It might have been a test to see how you handle ideas that go contrary to your beliefs. In team work all kinds of situations can arise.
I've always been someone who spends a longer time writing more carefully designed software. My software rarely has bugs and generally works very reliably. The trouble is that there's no easy metric for the problems that didn't happen because my software was good, but it's easy for people to invent a mythical opportunity cost for what I could have done in the time if I worked faster with less care. I'm not saying my approach is better (I'm just a bit of a perfectionist and would find startup work awful) but it's a constant struggle to fight against some of these stupid metrics.
Why can't you measure that the code you write has fewer bugs than other code?
That will be $500/hr
@nexussays you run it and see if it keeps working. It's like having the 9s of reliability in a service level agreement. This is a measure that indicates robust infrastructure and bug-free code. In a more esoteric context, functional programs can be proven correct mathematically. If you write programs for a living, you should give some thought to how you know if they are good.
If nothing else your colleagues will appreciate you for writing maintainable code, or at least they should do. You are improving THEIR productivity!
@@tzadiko Tracking defects per programmer is very difficult, is a long-term project, and almost no one cares to do it
Management consultants basically just tell management what they want to hear. C-suites will pay millions so that they can try some PR spin and tell their staff that their "consultants" told them to do the thing they were already planning on doing anyway. If it goes sideways they can blame "bad consulting". In reality, management never had an open mind from the start!
I mean this is like, actually the business model. It's hardly a secret. And it's not "basically" what they do. It's what they do. It's their job.
@@neildutoit5177 “Uuhhmm ackschually it’s NORMAL that consulting is a corrupt racket 🤓☝️”
Yes. This is all there is, telling the managers what they want to hear.
As for sales performance - I do remember one organization that used to be worth around $10m, until they closed a sales deal of $100m: Overnight, the value of the company went up by an order of magnitude.
The whole company, everyone together, worked hard to produce the pitch that would close the deal.
Ultimately, the sales rep who closed the deal got the profit share, and the others got bogged down no end in the work to fulfill the contract.
You can guess who retired early, and who left in frustration.
Who benefitted from measuring an individual on a team outcome? Only the person who is able to claim the award. But both the other team members (who might have done more than 90% of the work) and the company as a whole lose.
Win-lose scenarios are bad, and individual performance metrics create them.
As if the award for winning the World Cup goes only to the player who scored the winning goal.
@@pedrob3953 Too often it works out like that.
Winning the Tour de France is a team effort where the whole cycling team works together to put their #1 rider in a good position for the sprint at the finish line.
The winner gets all the attention and the hype, NOT the team that made him win. When you don't know anything about cycling it just looks like he did it on his own.
At the 12:52 mark, Dave hits the nail on the head without actually naming Goodhart's Law, simply referring to gaming the metric: "Any measure that becomes a goal ceases to be an effective measure."
Exactly. Metrics should serve the goals, not be the goals themselves.
@@pedrob3953 It is a case of the tail wagging the dog.
Any metrics will only show whether there are anomalies, which require further analysis.
Nonsense, yes. And dangerous when the C-suite starts to believe it. Tech leaders need to understand why this kind of false flag metric is problematic and more importantly, what to propose in its place that makes sense. If not, CEOs will make even more mistakes when deciding technical directions.
The problem is that c-suites start believing many things when expensive "experts" recommend them.
I think you miss the point. C-suite people paid for the paper and then use it as an excuse to "let go" staff. It's not surprising since many C level execs went to the same schools and they're scratching each other's backs.
Never ever let a CEO decide technical directions.
If the CEOs make mistakes and their projects are failing, they clearly need to hire expensive consultants!
Just like Scrum / SAFe. Nonsense that was believed by C-suite
The troubling part is that managers will embrace this article as if it had descended from heaven on tablets of stone accompanied by angels with trumpets. Managers as a class are addicted to having a sense of control, even if it's an illusion. Having a simple metric for judging performance provides this illusion.
Designing and developing simple performance metrics is possible, but it is really hard and takes more time, effort and money than many people or companies are willing to invest. Much simpler to search the internet for half an hour, to see what "they" say are good things to measure for performance
Well, anything is better than the usual, which is just going on feelings and who you like and dislike.
@@timh8324 No, you can simply check up on the employees and see who's goofing off and who's actually working.
Unfortunately, yes.
The best example I used to give to show the issue with metrics is a watch.
Any watch, wall clock, cheapest Citizen, iWatch, Omega Seamaster - they all measure time well enough.
But it doesn't help a person staring at them to manage TIME - there's no chance to slow down time to meet a deadline; one has to find a way to deliver faster.
BTW, DORA metrics are all the same - they're just indicators, not the targets or forces.
And that's one of the criteria for telling a good manager from a lousy one - what they measure and why.
Well of course you _can_ measure productivity of developers. For example, you can count how many lines they write per unit of time.
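To take the sarcasm one step further, a toy illustration: two functionally identical functions whose authors differ tenfold on the lines-per-unit-of-time scale.

```python
# Two functionally identical implementations. By the lines-written metric,
# Developer A is roughly ten times more "productive" than Developer B.

# Developer A: maximally productive
def sum_of_even_squares_verbose(numbers):
    result = 0
    for n in numbers:
        if n % 2 == 0:
            square = n * n
            result = result + square
    return result

# Developer B: a slacker, apparently
def sum_of_even_squares(numbers):
    return sum(n * n for n in numbers if n % 2 == 0)

assert sum_of_even_squares_verbose(range(10)) == sum_of_even_squares(range(10))
```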
The best software engineering managers are ex-devs who are ardent users of the product. They snatch code from the devs, build their own releases, and are trying out new features even before the devs have said 'done'. The worst have never even used the product, don't know how to install it, and are doing everything by numbers: lines of code, story points, whatever; they like to imagine these are telling them something.
The irony for me in this is that I've experienced McKinsey twice, in two separate corporations, and both times their suggestions were implemented, and both times they were absolutely catastrophic failures in time, money and wasted man-power. A chop-shop if ever I saw one. They are not called McConsey's for nothing.
I've just watched them apply Agile as a fully business-wide idea to a company, everyone included, not just software devs. It failed miserably within 12 months.
“This Mckinsey report is harmful drivel”
Man, this guy does NOT pull punches. Love it.
Thank you for not mincing your words and calling things what they are. Unfortunately these bizarre ideas are often born in mid-management and, when sold to the upper floors of a company, have a devastating effect on its culture and eventually its productivity as well.
Productivity is measurable as it is just the rate of output given input. Overemphasis of productivity will get you in trouble in creative fields, though. Focusing on individual productivity in a collaborative field will get you in more trouble. Delivering value over the long term is what matters and you need to get your developers to focus on that. Overfocusing on productivity will get them to focus on their activities, not company objectives.
This! Underrated observations.
"Focusing on individual productivity in a collaborative field will get you in more trouble." Glorified factory floor managers will never get this.
Too true.
How does one measure the productivity of Leonardo da Vinci?
Answer: you don't; you look at the value his work brought to the world.
It's McKinsey. Someone wanted this conclusion. They measure "productivity", divide it by hourly cost, and "optimize the spend".
Accenture low balls projects for the US government.
Then, once Accenture has its hooks in the project, costs skyrocket, with no end in sight, and no real help or product for the end user.
The US government doesn't care, because so many high level government employees own stock in Accenture.
See the cycle? See the problem?
Hi Dave, a sporadic viewer here.
I'm retired but still writing software, still learning, after 50 years. My minor thesis was on programmer productivity tools, in the '80s. Meaning that I've had an interest in this topic for 30 years. After watching your vid I went and read the report.
Apart from the issues that you highlight (I haven't chased up the other critiques) the biggest issue I see is a lack of transparency for how they came up with these "measures", what they actually measure, and how they have measured the performance of this model. Productivity was a hot issue when I started looking at it 30 years ago, and a lot of people have done serious work on it, but McKinsey say they've cracked it? They don't even provide examples of the kind of corporate culture that their approach was tried in.
A second issue is that they, like almost everyone involved in software, seem to lump all development work into a basket they call "coding". As I have taught in my programming classes: every line of code is a place for a bug. It's not entirely true, but it focusses beginners on thinking before they code. Well thought-through, well-_designed_ systems have fewer _opportunities_ for bugs. And it's usually faster too. This emphasis on coding misses that point.
I don't doubt that they've implemented it and seen improvements. If the existing approach is poor, and we implement almost any systematic change with a generally laudable goal and appropriate incentives, then we WILL see short term improvements. But they don't say how they measured the initial productivity, or the period of measurement.
So my conclusion is that the purpose of the report is to reduce everyone else's productivity as we critique it, then they can say "see, our way is better"!
'Till next time,
Andy
I’m so glad you’re part of our industry Dave. Keep doing this brilliant work. You’re helping us!
Very often, these kinds of consultants are not really interested in solving the real problem of their client. Instead, they care more about pleasing the management of the company who hired them, so they can gain new sales in the future. Imagine a case where the company's management wants to downsize staff, but they want some criteria to do it. So they hire some "experts" to give them the advice they need to do what they want. It all depends on what the real motivation is behind the scenes.
Spot on analysis! What really counts is the outcome: the business value added by the product increment that a team, or a bunch of teams, is working on. A team of only super specialists is not automatically a winning team. Their success will depend on their ability to cooperate as a whole team and interact with their customers, so that they build a product that meets their customer needs and that the customer loves to use.
I almost entirely agree with you. When I first started working I was spending about 95% of my time debugging code that I inherited. However, as I wrote more unit tests and cleaned up the code that has dropped to less than 5% of the time and most of the time is spent on actual development. Features are delivered faster and the bug rate has dropped to almost nothing.
The one area where I don't entirely agree is that when doing software based on math modeling it can often be worth it to take some additional time to go over the math design and check all the assumptions, stability, ways of solving it, alternative approaches, etc. I have run into too many cases where someone just started implementing a model, with some pretty bad consequences. I like iterative coding and delivery but I really want to at least try to make sure the math we are going to implement is the right math to solve the problem we really have.
The vignette at 8:27 seems almost universally applicable. I'm in construction and always have the issue with sales over-promising and trying to force production into time frames and parameters that can't be met without great cost to the project.
But the sale is made, the salesman has gotten his commish, and upper management hasn't the snap to discern the trap sales left for someone else to clean up.
It looks like the Soviet Union to me! A company I interviewed for wanted to test how fast I could type on a keyboard! Having lengthy meetings without any structure is a time thief. If you want to increase the performance of a team there are so many things you can do. It is very difficult to measure the quality of code from an individual since it is part of a big codebase where things interact between the individuals. Good insightful leadership is essential. Having worked as a software engineer both in Sweden and in London, I was amazed by how much better the organization and structure were in the London team. It was led by a guy with an Indian background and he coached the team to be their best.
One very important point here is to challenge the premise and assumptions of the question before engaging in their answers … that’s why I think it’s very important to question the need and possibility of individual productivity measurements
Good work Dave
That's the human way to do it
But it's a trap
I'm managed by some people who don't understand my job
They can listen, but in the end, they will tell me to accept it or leave xD
@@sebastiencretin6278 I feel you man.
Patience in these situations is crucial to convey your point … or keep applying in other places 😅
Thanks for a great review of that article. I'm working on recreating a goals/assessment framework for a mid-size organization of developers (~250), and have been looking at a number of sources for ideas on how to approach this. We're definitely prioritizing a team approach to performance (and expect to heavily leverage the DORA framework), but we also need to find some mechanism to rationally assess individual developer contribution so that the allocation of limited bonus funds is done fairly, responsibly, and in a manner that does reward exceptional work. Can you point at a framework you think would be valuable for this?
In my early years as a SW PM, I was often asked to provide arbitrary metrics like the report suggests. They were rarely of any use. The only metric I've ever found that works is: Is the customer getting consistent quality quickly? The only good way I've seen this happen is through strong CI/CD.
Given it's from McKinsey they probably didn't have the time and will to do the required real-world checks. And the author (I didn't check) was likely a brilliant 24-year-old PhD in International Management.
My main occupation in my job is correcting production bugs. Almost nobody except me likes to do this in my team, so I treat bugs almost exclusively. What I found is that correcting a bug often consists of adding one or two lines, or adding a condition somewhere. The more code you change, the more chance you have of doing something wrong. Then you write a test that reproduces the bug and that's it.
It sometimes takes me hours of analysis to understand a production bug. Anyway, I think it would be very hard to measure my productivity through the code I write.
Good software development is not always good business - therein lies the problem. I've seen plenty of poorly constructed pieces of software receive high rates of revenue and the businesses sell for multi-millions. I've seen good software crumble when the business shifts from a focus on building a good product, to building a profitable business. Makes sense for the business as they often are successful - at the expense of their customers.
I've seen that too. As a supplier you often need to lie to get a foot in the door and outmaneuver your competitor, just to get the work or sell your product (that doesn't exist yet). And customers are asking to be lied to, they want the rosy picture, not the reserved, realistic, conservative story. They want solutions, not more problems. A report like the one discussed here is just another manifestation of that. "New methodology". You feel you have no control? Good news! You CAN be in control after all! And we have just the experts for you to help you with that!
See you in ten to twenty years when you've figured out it wasn't all it was cracked up to be; we'll have some other story ready for you by then.
This is so true.
Unfortunately a lot of managers regard everything McKinsey produces as the dog's bollocks - and the rest of us suffer for it.
20 to 30 percent reduction in customer-reported product defects. Hmm. "Never trust data without error bars" - Sabine Hossenfelder.
The timing of this video coinciding with my current struggles with C-level management could not be better. Thank you.
Development expertise can only be effectively assessed by developers. We accept that this is true of other types of engineers, we accept that it’s largely true of doctors and PhD candidates and many other technical fields. But for some reason, in spite of it being generally accepted that software development is a challenging technical pursuit, the managerial class at large is obsessed with the idea that they can measure the quality of developers without knowing really anything about software development or programming or anything.
Developers *can* be assessed on their social skills and other non-technical axes by non-developers, of course, and those are certainly a large part of the job and developers should be cognizant of that.
To speak to a classic developer measurement problem, as somebody who spends a lot of time working on tragic legacy code, it can easily be the case that I’ll spend 3-4 days just understanding how some part of the system works before I resolve the problem with a dozen lines of code plus some unit tests.
Alternatively, if I’m working on a greenfield feature or a tool or something like that, I can easily commit 500 or a thousand lines per day, especially if there’s a significant amount of UI work.
Somebody who’s not a developer might not grasp this at all, whereas every experienced developer should understand it well (if they can’t, I’d argue that they’re not an experienced developer). Depending on the window in which somebody assesses my “LOC productivity”, they would label me either a superstar or a drain on the company.
I'm not sure someone from outside a team is in a position to assess whether someone's non-technical skills are good or not. That apparently non-social programmer might actually be very good at the important team interactions that the manager assessing would never witness.
@@loganmedia1142 Fair point! I'd always encourage programmers to find good ways to manage outward perceptions just as general career advice, but I certainly agree that positive and constructive internal team relationships are much more relevant to the success of projects than making good impressions across teams.
As far as the managerial class is concerned, software developers are just glorified factory workers.
There are certain aspects you can measure and yes they are mostly team based. I think measuring the amount of time a task takes makes sense ONLY if it's meant to be used as a handbrake for the teams. For example, a story/bundle of tasks are rapidly adding up hours due to requirements changes or uncertainty, the team can use time spent as a measurement to stop such tasks. I've had success implementing similar metrics in the engineering teams. Flow efficiency and other metrics provide much better insights though.
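As a sketch of one of those more insightful metrics, here is flow efficiency (the share of a task's lead time spent actively working on it); the task durations below are invented purely for illustration.

```python
# Minimal sketch of flow efficiency: active work time / total lead time.
from datetime import timedelta

def flow_efficiency(active: timedelta, lead_time: timedelta) -> float:
    """Fraction of the elapsed lead time spent actively working (0.0-1.0)."""
    return active / lead_time

# A task that sat in queues: 10 calendar days elapsed, 2 days of real work.
print(f"{flow_efficiency(timedelta(days=2), timedelta(days=10)):.0%}")  # 20%
```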
I'd love to hear your more detailed thoughts on the SPACE metrics, once you've had a chance to think through them more. I've seen them come up a few times, have some initial thoughts, but haven't used any of them. Really curious where you land on them.
I was expecting that kind of answer to McKinsey from you, and as always it was very valuable.
They were paid by CEOs to lie again and take the heat, but allow the CEO to fire people.
I feel the need to find a way to point this out to my boss, who needs a chat.
I agree that the things pointed out here don't really work. However I still feel the need to measure individuals in some way otherwise you allow one dev to spend weeks on centering a button while the other team members work super hard and finish on schedule.
Whenever I get a task that needs to be finished fast and effectively, I actively avoid certain team members because they are not fast enough for the deadline. Again, I am measuring the individual, using all the factors I know about them: willingness to learn, speed, business knowledge, capabilities in the given stack given previous experience, how willing they are to follow the plan (if they can't give good counterarguments), how the tight timeline will affect them, whether they will be able to function with potential overtime.
I concede that I may be a part of the problem. We have no system to game, but you are in some way gaming me. If you make good stuff that I don't have to constantly fix or leave 30 review comments on, you are just better than the person whose work I do.
Applying the measures to IT MANAGER productivity would be interesting. Would a top score make for a top manager? As the level of abstraction has advanced from bits via programming logic to configuring tooling and will soon be prompting AI, developers are becoming more like managers. Can you score a prompt engineer on the same points?
I feel like it's all in all bollocks. These people really just don't want to work, ultimately. A proper dev team manager, call him what you want, has to actually delve in deep, understand everything and work with the people on site to be useful. Yes, there absolutely are useful tools and metrics for him, but ultimately he works with people, not with numbers. But this? This smells of "I never saw you, your work or the result of it. I am gonna manage you now!". That mindset is the one that makes me fume and consider violence.
Literally today I argued with my manager because higher management complains about the number of issues I closed with my code. What nonsense that is.
Well said, Dave, as usual. And that shirt is amazing!
I have found that, generally, the actual purpose of sprints and daily public sprint reports is not to schedule work efficiently but to put your nose to the grindstone, every minute of every day, with a side of public shaming. If you beat your estimate you get rewarded with more work, and possibly demerits for overestimating (= lazy). If you exceed it you get demerits for being too slow and underestimating.
Those DORA metrics, at least as presented there, are not at all good metrics either. They generally follow the pattern "if things are better, then these metrics are expected to improve", which sounds nice until you realise that nothing is said about them changing for any other reason, including that the improved approaches might produce much bigger changes in forms other than the metric improvements.
For instance if you improve your bug detection, such as by making it easier for users to report them, you will get a far higher failure rate, even though finding and fixing those bugs really does improve the stability.
This means that to make use of those measurements, you have to have laboratory-level control of all the other factors that play in, and the result you get might still be only weakly correlated to what you want, and may still contain a lot of noise you were not able to fully control. This in turn means that for most of the things you would want to use such a measure for, you just cannot get a result you can trust, nor come close to arguing it is statistically significant. That leaves measuring things that have very little direct effect on the metrics themselves, but instead affect developer productivity and, through that, the metric. Here we are talking about things like the amount of noise in a room; but even then, higher noise levels could make the developers more nervous, which might prompt them to make their changes in smaller chunks, which would look like increased productivity to the metric but really just be a false signal.
In total, the DORA metrics are both way too noisy and too easy to disrupt to be trusted much even in the best-case scenarios, and since they are also pitifully easy to game, being hugely affected by other things, they are extremely poor things to set targets on.
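To put the bug-detection example above in numbers, a toy sketch (all figures invented):

```python
# Toy sketch: Change Failure Rate depends on failure *detection*, not just
# on how many failures actually exist.
def change_failure_rate(deployments: int, failures_reported: int) -> float:
    return failures_reported / deployments

deployments, real_failures = 100, 10

# Clunky reporting surfaces only 40% of the real failures.
print(change_failure_rate(deployments, int(real_failures * 0.4)))  # 0.04

# Easier bug reporting surfaces 90% of them. Stability is unchanged,
# yet the measured failure rate more than doubles.
print(change_failure_rate(deployments, int(real_failures * 0.9)))  # 0.09
```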
So I've spent most of my life learning about writing software (I'm 53; started at 19). However, I have never had a job where I was a developer/software engineer.
However (x2), I have been writing software, on and off, with the company I work for.
My manager allows me to do these things.
What is my approach? Get acquainted with a need for a particular software or automation project, write the software, ensure it works, and make sure the owner of the project, or requestor, is very happy with the final outcome.
Then I will tell my manager, that this is what I have done.
I don't get involved in the nitty-gritty of the code; it just confuses people. No one wants to know this stuff.
😊
I just try to make sure that people are happy.
Does anyone remember when McKinsey killed off their clients around the year 2000 with a system that said, "Invest in whatever is working"? People increased budgets in R&D and advertising until they went bankrupt or fired McKinsey. McKinsey forgot about diminishing returns. Several of my students work at McKinsey, but the chief McKinsey resource is their network of former employees that become clients. Their attitude is arrogance. Two days after starting at McKinsey my former student thought that he knew more about modeling than me.
How I love watching your videos! When you talked about measurements in sales teams it was a relief for me, because that's what I thought when I read the McKinsey article.
I largely love this video, *but* it remains a reality that some developers are in fact lemons that damage team efficiency, quality, and morale - significantly. I can see *some* of those individual metrics being useful in debugging what is going on if higher level team metrics aren't healthy or if you are hearing a lot of complaints from the team. Curious your take on how one *should* assess developers and identify bad hires.
I love the sales team example. So we must find the context and act upon it, not take an absolute opinion and apply it to everything.
The factor that really matters is the context, not a standard metric.
I faced this problem with agile and agile coaches: dev productivity and quick response to change vs. quality 😅
The age of cargo cults is over! All hail cargo think-tanks!
McKinsey is not really in the business of making things work better. They are largely in the business of A) telling management and ownership what they want to hear, and B) suggesting unsavory or unpopular ways of increasing profits, often to launder the blame and reputation of the people who hire them. It doesn't take a genius to realize that you can increase short term profitability by cutting wages or laying off a bunch of people (which is a very common 'suggestion' from consultancy firms like this), but it's not very popular. If you hire a fancy consulting firm and pay them a bunch of money, then they tell you to do it, suddenly you can claim you're just following expert advice.
Given that their goal is never to actually solve real problems in the way that most people would conceive of them, it's very easy to see why they release this kind of commentary.
The article looks to be written by managers so they can make sure there is enough busywork for them to track developer productivity and keep their jobs, or in the case of McKinsey, their contracts.
The main thing I don't like about DORA is that it does not predict the future well and thus suggests only very conservative changes. If you embark on a major system redesign, your score will go down for quite some time, and eager PMs will point at the effort as bad right away, trying to kill it. However, once you do finish that redesign, quite often your productivity goes up because you just have a better, more extendable, more reliable system.
Honestly, any metric that claims to know software productivity without including complexity, technical debt, and openness to extension will fail to give any proper meaning to measurements. Quite often you can ship crap code fast that has a reasonably low crash rate, but that only measures the "good enough" state, and actively creates a barrier to jumping from one "good enough" to another that might open up better results.
That is not to say that highly focused metrics are not useful, but their impact should be limited to the scope they actually measure.
In my opinion, DORA metrics help spark conversations. If the metric goes down and as a team we feel that it is OK to be in this state for a while, so be it. If it continues to be in this state month after month, that's when alarm bells should ring and the team would need to rethink what they are doing.
Without DORA metrics - these conversations wouldn't even happen.
@@mrinalmukherjee1347 Sounds really great, and I do wish that were the default case, but so far I've sadly seen more of the negative cases, where metrics were used to micro-manage or limit development.
Thanks for this video. I liked the analysis of the metrics table starting at 10:05 the most. It will help me in discussions within my company and also gives me further ideas for looking closer at DORA and SPACE.
"Computers can tell you how much money your decisions are making. They can not tell you how much they are losing." - Sam Walton
I love your analysis of the McKinsey metrics. Unfortunately the damage is done and these kinds of insidious productivity metrics are becoming more widely used. I believe they all tie back to making managers look good, not the engineers; look at the individual metrics for example, they are management concerns, not productivity concerns.
As someone who's worked in sales, and may again, I really appreciate your observation on it.
As for metrics: I'm building up a comprehensive system of KPIs which can be used in software development organisations at a large scale.
BUT:
1) There are no individual metrics - and that's no coincidence.
2) The purpose of development KPIs is to be able to highlight problems and discussions that need to be had, not to achieve conformity or reach a universal baseline.
What we need to look at is the performance of the system, more than the performance of the individuals.
Ultimately, McKinsey's article, however, is achieving the opposite: measuring things that are _not_ a problem, generating universal baselines and putting people into boxes, which is the most counterproductive thing one can do in software development.
While I agree that many organizations have plenty of issues in their processes, communications, and systems that will very quickly diminish productivity overall, I think it's shortsighted to think that individuals are never at fault for poor productivity, or for the misuse of processes. A couple of bad actors in a small team can totally tank a team's productivity even with a great set of processes and communication. This movement of never looking at individual performance is right to shift some blame to processes, but we should also be able to hold individuals responsible. After all, a process is only as good as the team's willingness to operate within it, and if the whole team isn't on board or is misaligned, then performance will suffer.
Hi @@Eric-vh4qg, having no individual "metrics" still provides the opportunity to talk about individuals' "contributions". --- Processes are guidelines in practice, left to the mercy of individuals, as no one can make anyone follow the words without standing behind their shoulder all the time. Especially not clever, trained, experienced software developers ;-) (also irregularities may happen, etc., however that belongs to another topic) --- There was an interview with two engineers whose job is to visit steel bridges in the region and check and tighten each and every bolt on the bridge per the blueprint, every year or two. The reporter asked why engineers are needed, as anyone with the tool can do it. The answer started with a question: would you believe they really did it, and tightened each bolt to the proper force? We understand the consequences.
In Farley we Trust!
I've been a developer for 8 years now and management has this insane obsession with measuring productivity: OKR systems, estimates that are impossible to predict due to scope size or lack of prior analysis of the problem. Meanwhile, there is little to no effort on improving the process. My current boss wants estimates that are accurate to the hour; considering that I have to work on two gigantic apps, that the only person with experience in them quit 6 months ago, and that there is no support or understanding that what they are expecting is impossible, it makes me want to quit the industry forever.
It amazes me that seemingly smart people keep asking for such accurate "estimates", when they are literally requiring clairvoyance.
They should at least mention this essential requirement in the job ads.
Maybe time to have a quiet chat with your micro-managing manager to explain to him what problems you are encountering.
Might be that HE is also being micro-managed by a higher-up.
Maybe make clear that he is about to lose you too, so there will be nobody left working to improve and understand the software you are responsible for.
@@jfverboom7973 I've tried that. I've pointed out serious issues: tickets not being descriptive, the whole team being relatively new to the project (6 months or less), QA being a bottleneck, the architect being a major asshole; and instead of stopping for a moment and trying to address some of those issues, we get a new metric to obey. I'm just speechless.
I'm looking for a new gig; the money is not worth my mental health.
Cool to see this prompted by the Discord!
Definitely! There's so many great conversations going on over there 😃 One of the reasons for setting up the Discord is to engage with the community Dave has built - long may it continue!
I quite often find the Discord thought provoking, I am enjoying it, probably more than I expected to be honest.
I used to be a CAD building designer, one of our pitfalls was the company wanted near-constant update notes on what you were designing, (like every 10 minutes) I’m all for documentation, but not to the extent that it prevents accomplishing anything. They ultimately looked at these notes for proof of productivity, and by comparison almost completely ignored the actual work product.
Yeah, just count the lines of code written / time unit = productivity, simple. 🥴
Beautifully delivered... and so many comments are worth reading too.
Also, one wonders why people take this sort of report seriously, given who prepared it and their background, or rather the lack of one.
I had already read this "article" and some things seemed illogical, but you provided the critique in such a structured way that it all seems obvious, even to inexperienced developers. Thank you.
When I went to business school, I remember being told a rule of thumb for planning the amount of time for developing software taking all things into consideration. They said, ask a programmer how much time it takes and multiply it by 3 and up the unit by one. So 3 days becomes 9 weeks. hehe
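For fun, that rule of thumb as code; the unit ladder is my own guess at the intended escalation.

```python
# The business-school estimator: triple the number, bump the unit by one.
UNITS = ["hours", "days", "weeks", "months", "years"]

def manager_estimate(amount: int, unit: str) -> str:
    """3 days in -> 9 weeks out. Bumping past 'years' raises IndexError - fitting."""
    return f"{amount * 3} {UNITS[UNITS.index(unit) + 1]}"

print(manager_estimate(3, "days"))  # '9 weeks'
```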
Universal Basic Income.
Do not mail checks. Instead, you must go to a federal bank to get the cash.
My point:
Lots of government sponsored programs that need lots of software and user interfaces would not be needed, because those programs would be shut down. Why have 10x admins for SNAP when we could just shut down SNAP altogether and just give recipients the cash instead?
In short, governments are driving the need for systems and software engineers. Once we remove more and more government from society, costs plummet and only the best software engineers survive.
Yeah, glad you're talking about this, but it's nothing new. Large companies always have dumb KPIs that make no sense; the problem is that most managers/CTOs don't actually understand tech.
The more and more frequently aggravating thing for me is that the people working in/with tech don't understand tech. They think they do (on a skin-deep level they have something of an overview, and they manage to stay on the surface with that), so they don't bother to realise that there is (much) more.
Metrics proposed at meetings I've attended:
Lines of code
Number of check-ins
Yes, those will surely measure something alright…
Thanks for the video, I think it is the best response I have heard so far.
"It promotes strong reactions from a lot of people... so now it's my turn" Hehe
Spot on, Dave. Thank you for saying this so clearly!
only one question, where on earth do you get your t-shirts? I love 'em!!!... and your content too btw
And in regards to the content of the video, indeed it is really hard to measure developers productivity. You can puke a thousand lines of code that are kind of useless or you can push 10 lines that make a world of a difference. In our job things take time and they take a lot of thinking sometimes; not everything is about lines of code.
I believe y'all get the point
Do you have a link to "Dave explains the Dora Metrics"????
I see you took my positive comment about the value of a swear word in the thumbnail to heart. ;)
Here is one of my takes on the DORA metrics ruclips.net/video/hbeyCECbLhk/видео.html
Meh, there is some value in developer productivity; a "rockstar" can save projects, and way too many projects rely on those rockstars. And a team of people who produce 3 lines of basic code a day in a good week will not deliver software no matter how well they organize their work. But as usual, McKinsey - as smart and sharp as many of them are - is operating in a very simple abstraction of reality. The most valuable developers in many teams I've seen have horrible individual productivity because they are multipliers in the team, making the rest better or less bad, or setting up all the best practices (CI, CD, testing); that is hard to measure. And it's really hard to come up with reportable metrics which identify bad developers and slackers while not catching those team multipliers. And in the world of McKinsey and Elon Musk, everyone not reaching the metric is often fired automatically.
All that said, "slackers" in teams are a problem, and it ruins teams if one or two people suck the fun out of it by not performing at all and pulling the others down with them, and it's hard to act on stuff like that in a timely manner without objective arguments. So I wish there was a way to measure some of that stuff, but it's almost impossible. A developer who needs 5 days for a task but thought about every name and every interface and created maintainable code is magnitudes better than someone who does the task in 1.5 days but leaves the code a total mess that makes any change in the future impossible. How do you measure maintainability on an individual level - as if somebody at McKinsey has a f'ing clue about that.
"9 women can't make a baby in one month" - can't put into words how true this rings!
Inner vs outer loop is a good example of how these metrics don't work, because it's too gameable. I want to improve my dev pipeline to decrease the time others spend on the outer loop activity, so I class development of the pipelines as inner loop activity and using them as outer loop. On another team, improvement in pipelines is seen as unimportant so development of it is classed as outer loop. The work is done begrudgingly by the developer with the least social capital, probably the most inexperienced member of the team, and with little attention paid or review.
A client I've worked with for over 10 years has McKinsey stampede through every couple of years, sent by corporate. I always have to pick up the pieces afterwards and fix the damage they've done.
So I struggled a bit with some of the things presented in the video, and now I see I was struggling over the wrong things. Thank you very much for helping us understand software development and engineering!
Very nice video, I think you do a good job arguing against the usual sort of metrics used on software engineers. However... it's a plain fact that some devs are really good, some are just so-so, and some just plain suck. They vary in drive, knowledge, and analysis talent. If you told me I had to stack-rank my devs and get rid of the bottom third, and give a raise to the top third, it would be for me as their manager an easy task, because I work with them closely every day. The real question is whether there is an automated/easy metric that can be gathered, which is as useful as the "gestalt" a manager gets over time. Thoughts?
If that were possible the company would not need your function.
Any metrics will only show whether there are anomalies, which require further analysis.
Don't use the metrics for reward or punishment.
Any company that takes this McKinsey report seriously will find themselves suffering from an acute shortage of software engineers. No doubt the C-suite will scratch their heads, and then ask McKinsey to come in and "solve" their talent retention problems...
They can outsource it all to Tata
The thing about measurement in particular is that its real use is as a diagnostic. If it's a vanity metric, it'll easily be gamed.
Great discussion. So glad to hear a real point of view!
The bit about over-promising sales.... oh god, it's too real, please 😭
Goodhart's law: When a measure becomes a target, it ceases to be a good measure.
My belief is that individual metrics are not bad to have. It is just that they result in bad behavior if you use them as targets with financial incentives.
The problem here is that once you have metrics, they almost always tend to be used to judge performance, rather than for teams improving collaboratively.
Over the last 25 years of my software development and test career, I’ve noticed the greatest predictor of success is the clarity of the concrete objective. The greatest predictor of failure is ephemeral objectives.
I wonder how many actually read the article.
It’s pretty much an entry-level piece, aiming to make people think about measuring and metrics.
And it actually specifically states that most metrics are contextual; and it’s totally fine - like most tools they need to be used the right way to get results.
It’s the job of a manager to correctly interpret those metrics. Bad managers take those literally and as a single source of truth and you get your salesmen gaming the system and developers getting under-appreciated for the maintenance work.
While good managers would be using any metric as just one of the numerous data points that can hint at something not working right or just indicate a change in project/team dynamic.
The most important thing this article does is try to encourage people to think about these things and bridge that manager-developer relations gap; which can be done much more efficiently and graciously, no doubt about that, but disregarding the notion altogether is as backwards-thinking as measuring productivity by lines of code written.
Few iterations more and the fellas will invent the D&D character sheet as an employees' description :)
Thank you for making this video. The premise of the McKinsey article is wrong and dangerous. It promotes a short-term focused style of management that will hurt developers and development teams.
The selection of the metric on which people are evaluated sets the course for the team, usually in ways that have undesired effects that are difficult to predict in advance, because those subjected to the metrics will be creative and think outside the box to optimize their personal performance, instead of what is desirable for the team/company.
Business school grad here turned Software Engineer.
Most B-school grads have no coding experience, can't do anything on a command line interface, and don't know what a schema is, what a JSON file is, or what an API is. They don't know anything that a Software Engineer does.
It doesn't stop them from throwing around buzzwords like IT, Cloud, Agile, AI, Machine Learning, Data, API, cyber security, open source.
Be careful with overusing the Mythical Man-Month card. It only works during specific time frames and contexts, eventually, you need to add more engineers if you want to tackle more things in parallel. Should Google have 5 engineers in total? Not even the best DORA metric numbers would do the trick.
There are ways of scaling SW that work, and ways that don’t. In general, SW dev scales VERY POORLY unless you do some rather complicated things. Specifically organise it so that teams can work independently of one another. When people are looking at “individual dev productivity” as a useful metric, we are a VERY long way from the levels of sophistication necessary to scale effectively by adding more people.
I totally agree with you. Still, I think that promoting your book even before providing counterarguments to the article is a bad move and hinders the credibility of the video, as it gives detractors the chance to suspect this is a marketing video.
From McKinsey's perspective the report is excellent as companies will hand them large sums of cash.
As a 15+ year developer and tech lead, I still have some problems with CI. If I imagine a very large system already in motion, with enough space, developers, infrastructure and money to make it work, you could even make daily commits to main, no problem (I'll not discuss Git flows vs trunk-based here). But for small projects, small teams, lack of money, big problems, situations that demand a lot of research, and different levels of seniority among developers, there is no room for rushing deliveries.
In my view, DORA metrics are only effective under specific conditions. Outside of those circumstances, I find them to be an inadequate measure of a development team's performance. As of now, I'm not aware of a one-size-fits-all metric that accurately captures the nuances of a development team's effectiveness across all scenarios.
Well, having worked this way myself in teams ranging from the tiny (4 devs) to the large (a few thousand), in all sorts of industries and on lots of different kinds of software, I don't agree. It doesn't take a lot of money or a lot of effort, and it allows you to build better software more quickly.
So I don't understand where you think "Change Failure Rate", "Mean Time to Recover from a failure", which are the DORA metrics that measure Stability (Quality), and "Lead time" and "Deployment Frequency", which are the Throughput metrics, are "inadequate to measure teams performance"?
Sure there are other things that we are interested in, but they vary so much that they don't create a valid baseline for comparison. It's nice to know if our SW makes money, is resilient, and so on, but FIRST we need to deliver stuff that works, and that is what the DORA metrics measure. How on earth is measuring "delivery of stuff that works" inadequate?
@@ContinuousDelivery So how would you be able to measure delivery in areas where features take days or weeks to be ready, or an embedded scenario where you cannot just deploy quickly? I'm really trying to understand what the intrinsic value of CD is. For example, a couple of years ago I was working on a bug in a video streaming record system on embedded Linux, and it took almost 1 month just to get to the root cause, and delivering a proper solution took even more time! Does that mean this 1.5 months of work has no value because it wasn't fast enough?
@@cronnosli CD is about "working so that our software is always in a releasable state"; that doesn't necessarily mean pushing to production every few seconds. My team built one of the world's highest performance financial exchanges; the authorities rather frown on "half an exchange" with other people's money in it. It was 6 months before we released anything, but we practiced CD from day one. At every stage, our software was of a quality, tested enough, audited enough and deployable enough to be released. So you do that.
How do you measure it? Use the DORA metrics, you will score poorly on "Deployment frequency" but as long as you "could" deploy, I would count that for now, and that will get you on the right road.
The "intrinsic value" is in multiple dimensions, but when you are in the mode of not being able to release frequently, for whatever reason, then working so that you a capable of releasing your software after every small change is a higher-quality way of working, and stores up less trouble for when you do want to release. It is measurably (see DORA again) more efficient and more effective.
As you develop the ability to do this HOWEVER COMPLEX YOUR SYSTEM you get better at it until ultimately you can release more often, if you choose to, even if that isn't your primary goal. This means that whatever the change, you can get to the point of release optimally, because your SW is always in a releasable state, and the only way that you can achieve that, is to MAKE IT EASY TO MAKE RELEASABLE SW!
On your last example, no, it doesn't mean "this 1.5 months of work has no value because it wasn't fast enough", but if you had been practicing CD, it would probably have been easier for you to find the problem, because your code would have been better tested and so easier to automate when you were bug hunting. This isn't a guarantee, but more a statistical probability. Let's be absolutely clear about this: CD works perfectly well, actually better than that, CD works BEST, even for VERY complex software and hardware systems. SpaceX haven't flown their Starship recently, but the software is still developed with CD.
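To make "always in a releasable state" slightly more concrete, a minimal sketch of the idea as an executable gate run on every commit to trunk; the individual commands are illustrative placeholders, and a real pipeline would have its own build, test and audit steps.

```python
# Sketch of a trunk gate: the build is "releasable" only if every check
# passes on every commit. Commands here are illustrative placeholders.
import subprocess
import sys

GATES = [
    ["pytest", "-q"],           # all automated tests pass
    ["python", "-m", "build"],  # a deployable artifact can be produced
]

def releasable() -> bool:
    """Return True only if every gate command exits with status 0."""
    return all(subprocess.run(cmd).returncode == 0 for cmd in GATES)

if __name__ == "__main__":
    sys.exit(0 if releasable() else 1)
```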
McKinsey: a corporation which is linked to US decline in my mind. If you want to know why the US military has a hard time divesting itself of Chinese-made gear*, look at these kinds of consultancy companies which focus on financialization. To paraphrase the old adage: "Those who can, do; those who can't, consult". Inept managements hire consultants, so we all suffer from the double whammy. PS: I am not referring to those who are hired as temporary workers and given the misnomer "consultant" (I have been one of those myself). *Did nobody notice that China is in the thrall of a communist party?
Oh yeah you can measure my productivity.. when I leave and stop saving other devs from their own incompetence.
I take issue with the idea that how you structure & organize work is the primary indicator.
In fact, I would argue that this perspective does more harm than good, and that the energy spent on organization is better focused on technical excellence, with focusing on the right work coming next in priority.
Technical competence is in far shorter supply than people's willingness to shuffle story cards around endlessly.
Well, this is based on research carried out by Google in over 180 teams. So to “take issue” you need to have more evidence or show how this experiment is flawed. This is not what they expected at the start, so this is not really a bias, this is based on research.
Actually this backs up evidence from earlier research. Belbin found in the 1960s that it takes a lot more than just assembling a group of "high performers" to succeed.
@@ContinuousDelivery I don't disagree with the assertion that putting high performers together doesn't guarantee success.
But I also know that Google has always favoured hiring less skilled, cheaper graduates over experienced people from various backgrounds. I see plenty of companies chasing the promise of replicating this success, and they fail miserably despite putting endless effort into how they organize their work.
Then, when everyone thinks organization is a silver bullet, at the far end out comes unplanned and avoidable technical debt. Shameful levels, really, because nobody wants to admit that you still have to be competent before you start fiddling with other factors.