This needs way, way more views.
One of the best talks on agile history, its ideas, and its purpose. Clean and clear. Good job, Uncle Bob.
This is one of the most interesting and engaging videos I have watched recently!
“The purpose of Agile is to destroy hope… and replace it with DATA.” If only this was the message that spread…
59:02 There he goes!! That's the best part of Uncle Bob.
Thank you so much :)
50:51 Agile usually delivers bad news; we want the bad news as early as we can get it.
Hope the slides can be downloaded.
There is a 5th knob that managers prefer to turn: give smaller numbers to the estimates!
I have a question regarding a burndown.
How do we avoid a large analysis stint before development just to get our backlog and points together?
I've run into the situation where, once we have our fully fleshed-out backlog with all the points and data, a requirement changes at a foundational level and all the requirements built on top of it are affected. Now we have to go and re-work/re-estimate all of those stories.
How do you avoid that and still have a meaningful burndown?
As Uncle Bob says, agile is about producing data so you know how fast you're going. If you stop producing meaningful data you stop doing agile.
So given that your requirements changed, your estimates must change too. Whatever the ripple effect is, re-estimating, and even trashing stories and creating new ones, ensures you keep producing the meaningful data needed to predict your completion date.
If a foundational change or re-spec happens frequently, then that would be something to remedy for sure. Honestly, this happens to all of us, but we try to mitigate it as best we can. I don't have any tips other than to confer with stakeholders frequently (i.e. exploration is always happening as new learnings come in).
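To make the "data" part concrete, here's a minimal sketch (Python, made-up numbers, not from the talk) of how re-estimated points feed the completion projection:

    from datetime import date, timedelta

    # Points finished in the last few iterations -- this is your real data.
    completed_per_iteration = [21, 18, 24]
    # Whatever the (re-estimated) backlog adds up to today.
    remaining_points = 160
    iteration_days = 14

    velocity = sum(completed_per_iteration) / len(completed_per_iteration)
    iterations_left = remaining_points / velocity
    projected_end = date.today() + timedelta(days=iterations_left * iteration_days)

    print(f"velocity ~ {velocity:.1f} pts/iteration")
    print(f"projected completion: {projected_end}")

When a foundational change blows up the backlog, only remaining_points changes; the projection moves, and that moving date is exactly the bad news you want as early as possible.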
I don't recommend doing things this way, I would suggest looking into #NoEstimates. But to answer your question as best I can the way Bob would:
You should be re-estimating the story points for features regularly, as well as changing the features themselves regularly as you get more data: removing ones that aren't relevant anymore, adding new ones that will be needed, splitting features apart, re-estimating because your architecture changed, etc. You can re-estimate completed features as well: if you called something a 5 but then once you started it you realized it was really a 2, then you should go back into the spreadsheet and adjust it to a 2.
You typically do this after the first few iterations by choosing "yardstick features" and comparing future features against them. "This seems like the work involved will be very similar to that feature we did in March, and that was a 3, so let's call this a 3."
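As a trivial sketch of that bookkeeping (hypothetical stories; any backlog tool or spreadsheet works the same way):

    # "done" points feed your velocity history; the rest feed the burndown.
    backlog = [
        {"name": "export report", "points": 5, "done": True},   # called it a 5...
        {"name": "audit log",     "points": 8, "done": False},
        {"name": "sso login",     "points": 3, "done": False},
    ]

    backlog[0]["points"] = 2   # ...it was really a 2: correct your velocity data
    backlog[1]["points"] = 13  # architecture changed: re-estimate the pending story

    done = sum(s["points"] for s in backlog if s["done"])
    remaining = sum(s["points"] for s in backlog if not s["done"])
    print(f"{done} done, {remaining} remaining")  # -> 2 done, 16 remaining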
If you aren't re-estimating and changing your feature backlog in response to what's happening during genuine iterations, then your backlog is just a roundabout way of doing water-scrum-fall.
All that being said, again, look into #NoEstimates. Contrary to what Bob said, #NoEstimates is absolutely an agile way of working. Indeed, it is much more agile than the version Bob is proposing here, and it is also better aligned with data. Estimates as Bob describes here are not data; they are proven by research to be total bullshit, whatever nice confirmation-bias feelings Bob has about them.
You can start moving towards #NoEstimates by changing your range of available story points to "1" and "too fucking big, split it up", and the research shows that the numbers you'll get that way in terms of burndown will be more accurate than any other pointing (Fibonacci, T-shirts, etc).
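For a taste of what forecasting looks like once everything is a 1, here's a sketch (invented throughput numbers) of the Monte Carlo style of projection often used in the #NoEstimates community: count finished stories per iteration and sample from that history:

    import random

    stories_done_per_iteration = [7, 9, 8, 10, 6]  # just counts, no points
    stories_remaining = 45                         # after splitting the too-big ones

    def iterations_needed():
        done, n = 0, 0
        while done < stories_remaining:
            done += random.choice(stories_done_per_iteration)
            n += 1
        return n

    runs = sorted(iterations_needed() for _ in range(10_000))
    print("50% chance of finishing within", runs[len(runs) // 2], "iterations")
    print("85% chance of finishing within", runs[int(len(runs) * 0.85)], "iterations")

You get a range with probabilities instead of a single made-up date, and the only input is work you actually finished.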
Yet my teachers at university keep teaching that Agile is "doing things fast!"
These are boomers who grew up in the "glory" days of waterfall. Fuck them. Follow Uncle Bob, he's one of us.
Great talk, as always, Uncle Bob.
37:40 Sad to see that demographics is even mentioned in 2020. At first I thought it was about demographics of programmers: the diversity of viewpoints, not about body properties of people like blood type, chromosome pairs, age and cell pigmentation.
MFW people in 2020 care about and criticize the demographics of a programmers' meeting from 2001: 🤦
Interesting perspective at ruclips.net/video/4JihsBOBbdI/видео.html where he explains that waterfall concepts are, in fact, burnt into the Agile Manifesto. Some waterfall can be good. Waterfall is not intrinsically bad.
eh... yes, waterfall is intrinsically bad. The separation of stages into long periods of preparation, followed by implementation, and then followed by delivery, is bad.
@@matswessling6600 Right. Agreed. But even Winston Royce did not advocate that (and he was working on truly mission-critical things). The problem is that Agile reductionists have gone to the other extreme, where they think ANY level of planning is bad. Most people find the right level of planning, with the right planning horizon, beneficial or even welcome.
@@andrewsheehy2441 ? I don't see anyone saying that all planning is bad. Agile is not against planning; it just prevents too much planning and enables replanning.
@@matswessling6600 One of the many problems - which have contributed to the death of Agile - is that the field has been overpopulated by people who have no formal project, product, or (most important) engineering experience. The invaders bring 2x2 charts, checklists, books, and no end of other nonsense that drives the actual engineers crazy.
Amazing talk, but the political agenda at 37:25 wasn't necessary...
Having your life taken is not something we normally deal with in computing. Horrible!
Uncle Bob keeps pushing TDD. IMHO, if you have simple, str8-to-the-point components (classes, elements, etc.), TDD is mostly a waste of time; however, it's just magical when you use it on complicated algorithms and/or systems.
To contradict myself to a certain degree, for some, a simple 'if' statement warrants TDD.
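For example, test-driving even a one-'if' function (hypothetical example, nothing from the talk) pins down the boundary before you write it, which is exactly where simple code goes wrong:

    import unittest

    def access_level(age):
        # the "simple if" in question -- >= vs > is the whole bug surface
        return "adult" if age >= 18 else "minor"

    class TestAccessLevel(unittest.TestCase):
        def test_boundary_is_adult(self):        # written first, watched it fail
            self.assertEqual(access_level(18), "adult")

        def test_below_boundary_is_minor(self):
            self.assertEqual(access_level(17), "minor")

    if __name__ == "__main__":
        unittest.main()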
If he's pushing it then he has a good reason. And he always says that TDD is a discipline, not something you only use when you think the code is complicated, because you don't write the tests only for yourself but for the whole team and posterity. What if the originally simple class gets more complicated over time and there are no tests to cover the changes? What if the originally simple stuff is not as simple as you thought? And what if you and the whole team know that the tests cover only a few parts of the app? Then passing tests don't mean much, do they, since your code might be broken elsewhere.
Yes, some people can write clean, working code without tests, but once they leave and somebody younger replaces them, they will go through hell doing changes on that system.
I would recommend checking out one of the Clean Coders case studies if you want to see what he's talking about with TDD in practice. London vs. Chicago is probably the best for seeing two approaches to the discipline on the same project. Sandro Mancuso from Codurance says in the first video that "there's what I teach and what I do". The principles give you the foundation, but experience informs your approach, without sacrificing the core idea of having a robust set of tests at the end. There is a lot of wiggle room to adjust your goals of good vs. fast vs. cheap vs. done.
@Liviu SANDULACHE I asked him on the matter. He said that startups for example (epitome of rapid change) are the best place to adopt Clean Architecture. That includes testing!
There is a really fun talk from Kevlin Henney (I think it's called "The Error of Our Ways") where he shows the problem with left-pad (a tiny JS library whose removal broke the internet). I'd suggest having a look at it and maybe consider that it's not size that matters. Perhaps how your code is used matters more!
If all we had to build was simple "str8" to the point components, none of us would have jobs.