For user stories in the requirements sections, do you focus more on a BRD, like you would for a scope document, or on functional requirements?
Dear Dave, I have no idea how you manage to so concisely sum up in 17 minutes the mental model that I have built up for myself over the years and fail to convey to the teams I work with on a daily basis. But I know that in the future I will simply refer to your video for these questions. And I will add the phrase "Requirements that define how the system works aren't requirements" to my vocabulary. Thanks for these videos and please keep up the good job.
Be careful. User stories are a kind of requirement, focused on the "what". But as a development team, you might want to solve the "how" as well. So, unless you want each developer in the team solving it in their own way, you will need more details, managed internally of course. Also, what Dave is describing is an "analyst developer" (not just a "developer") who KNOWS about the business and its problems and is close to the user. Many companies have no place for such a role (they actually prefer a DevOps engineer), leaving the business knowledge exclusively to the product owner (or worse, to a manager).
@Cerebradoa Spot on. This is the main reason why agile has created more problems than it is solving - not that anyone says it out loud, though. The only principle that works in any project is that either everyone works on everything, OR everyone has a specific specialty to work on and delivers their component. The leads have to be strong to keep it glued together and continue the journey.
As a software developer, I want user stories with testable acceptance criteria, so that I can confidently create acceptance tests that confirm the user story behavior. I think of acceptance criteria as the requirements and the user story as the narrative that provides context for those criteria.
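For instance, a minimal sketch of how I'd capture that shape (the password-reset domain and details here are invented, just for illustration):

```gherkin
Feature: Reset a forgotten password
  As a registered user
  I want to reset my password myself
  So that I can regain access to my account without contacting support

  # Each scenario below is one testable acceptance criterion;
  # the narrative above gives the context for why it matters.
  Scenario: A reset link is sent to a known email address
    Given a registered user with the email "ana@example.com"
    When a password reset is requested for that address
    Then a single-use reset link is sent to "ana@example.com"

  Scenario: Unknown addresses are not revealed
    Given no account exists for "nobody@example.com"
    When a password reset is requested for that address
    Then the response does not reveal whether an account exists
```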
Do you include non-functional requirements in your acceptance tests... e.g. do you test that the system can handle 1 million concurrent users? Or that it is fault tolerant?
@alanjohnson7374 I've not had a lot of great use cases for valuable NFRs, sadly. And I've never been comfortable with them being in that category. If the system needs to be secure, or "fast", I just think of those as acceptance criteria. It's weird; in my waterfall days, this concept of "non-functional" never came up. We as product/business people just said everything that we needed to happen.
Not only should user stories express the "what" of the requirement, a good user story should also express the "why" of the requirement, "so that" developers can understand the problem that gets solved by the story. I worked for an organization in which the marketing guy basically designed how the product worked by providing a list of features. That's what passed for a product owner there. But I wanted to hear about the problems that customers were trying to solve. If you just ask a customer what they want, they may tell you that they want it to be faster, cheaper or some other increment of what they already have. To them, the product is just a tool to do their work. However, the development team might see a different solution, because they understand what the product could do if they understand what the customer actually needs.
The problem is that often the developers' incentives are not the same as the product owner's. We work with external developers, and if we deliberately left everything high level, the developer might end up developing in a direction that is potentially going to benefit them in the future. For example, I have seen developers create short-term solutions which caused large-scale issues in the long term.
@SM-cs3nt You would hope the architecture of any new product is reviewed by a solution architect so that such technical debt doesn't take place. You would have this in place in most orgs.
Pure gold. Thanks. I would suggest you expand this video with a Part 2 covering Requirements vs Features (although I guess there are fewer misunderstandings there than with the current topic).
As a senior business analyst, I have thrown out user stories as a tool. I have retooled my BA toolkit to include RML, the Requirements Modeling Language. Visual requirements modeling is the best way to define "requirements".
Please, we need to be clear and specific about requirements and have them documented before going to implementation. Not this Scrum thing telling you that you don't need documentation and don't need to know what you are doing.
This sounds like serious professional work to me. I'm not sure what RML is, but based on my experience and studies it is a serious approach, more professional than "user stories".
Any decent BA would have a visual representation/process map/diagram of sorts for at least some part of the product. However, you can't visualise everything, and user stories are still a valid and useful tool...
I have only been in software for 7 years but I am already my team's senior lead. I'm not saying this because I think I'm some special flower, but because so much of what has made me a good developer is encapsulated in this video. I never thought I would enjoy translating customers' vague wishes into computer capabilities. It's extremely rewarding to give others a product they enjoy using and capabilities they didn't know were possible.
A user story is a description of the problem in the context of the user. A specification is a description of a proposed solution in the context of the dev; it is the developer story. They are dependent but separate things. The better the dev's understanding of the user context, the better (== faster) the result. In my way of working, the developer story is a general outline of the solution app or feature expressed as narrative (no implementation details). The most important skill I have developed as a dev is the user interview. Another important skill is the ability to look things up. Both skills are simply the ability to ask the right question. You ask the right question by understanding the context.
This used to be a clear thing to software engineers and software professionals. Then the Scrum-cert charlatans came, and now it is an ancient, forgotten and forbidden art.
@@errrzarrr in my (limited) experience, formal requirements function mainly as focal points for disputes. Feature X doesn't exist because it wasn't properly specified, feature Y was properly specified but hasn't been implemented. The one becomes a bargaining chip for the other when the fighting starts (shortly after kickoff!). I'd love to see it done properly. But I'm pretty sure that (for example) successful off-the-shelf office suites are developed using ad-hoc hacking in parallel to a pantomime process.
I always had a hard time teaching our new POs to request functionality rather than specific solutions. In the future this video will be one of my go-tos when topics like that turn up. Thanks, Dave - great job! I already shared the vid several times!
I think this is quite an interesting topic! As a product manager I put a lot of effort into creating proper tickets, with tech-agnostic user stories but clear, testable acceptance criteria. However, I feel I'm in a difficult position when it comes to including design thinking in this. Acceptance criteria often already express some sort of solution, when earlier in the video we wanted to stay open about understanding the user and their problems, and come up with a multitude of possible solutions. I guess it would be interesting to see what the lifecycle looks like. Maybe a ticket should start with the user story and the description of the problem. Then design and engineering can work on the "best guesses" to solve the problem and define the acceptance criteria based on that understanding, and then the development lifecycle would begin? I would love some opinions about this. Jim in the comments mentioned he wants testable criteria. I was wondering how you would define them when you're at the early stage of describing the problem, without proposing a solution immediately. I'm usually trying to give designers the freedom to come up with whatever they see fit and to work with engineering to make sure it is also feasible. I think this all touches on good discovery processes that should have happened before the creation of the story itself.
I have worked in a team which used BDD to test business rules related to different types of holdings in a bank (stocks, bonds, cash etc). They were using Cucumber, a tool which can be used in many different ways: as executable specifications, as functional/acceptance tests, or even for testing implementation details. Our team unfortunately sometimes used Cucumber the wrong way, using it to test that files were written etc, meaning we were testing implementation details.

One technique used in DDD is called event storming. I have no practical experience with it, but I am eager to try. Using this technique it is possible to create executable specifications, which will guide developers to design a system using BDD/TDD. Event storming should be performed in a couple of coherent sessions, each taking about 2 hours, gradually refining the user story. The gist of the first such session is not to delve too much into technical stuff, but instead to focus on the users' mental picture of what the system should do. Everybody in the team should be able to provide input and learn from a session, meaning BAs as well as devs and testers should be able to participate. The "WHAT" the system should "do" (written on sticky notes) should gradually be refined into a simple business model of the core domain (what is most important) and sub-domains (domains the core depends on). The next step is to further refine the core domain into simple "UML diagrams", which can now even be understood by BAs. We should never write these kinds of "technical diagrams" at the beginning of the event storming, since BAs seldom understand them and only experienced senior developers would fully participate. Throughout the process you are better off using paper sheets, not a whiteboard, as you will probably want to re-open a session from last time, and the whole process may take 1-2 weeks. In this way the design process can save a lot of time, and you may also produce "executable specifications" that are more correct, because everybody in the team will participate and learn/contribute.
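To illustrate the Cucumber misuse I mentioned above (the holdings and numbers here are invented, not our real rules): the first scenario specifies observable business behaviour, the second leaks implementation details into the specification.

```gherkin
# Behaviour-focused: survives a change of storage or architecture.
Scenario: Cash and stock holdings are included in the portfolio valuation
  Given a client holds 1000 EUR in cash
  And the client holds 10 shares of ACME priced at 50 EUR each
  When the portfolio is valued
  Then the total valuation is 1500 EUR

# Implementation-focused (the anti-pattern): couples the spec to file output.
Scenario: A valuation file is written
  When the portfolio is valued
  Then a CSV file exists in the export directory
```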
As someone who has been on the "user story" side of the house and not involved with "requirements", I never really put that much thought into this until all that changed recently. Now, being heavily involved with product ownership, I am seeing how vitally important a clear understanding of both of these is, and how that impacts traceability in the end.
We’ve moved towards User Stories and Acceptance Criteria, all with emphasis on the “what” and not the “how”. What I like about this approach is that it is lightweight, flexible and objective rather than subjective. It’s measurable, which our senior team likes, and sets us up well for UAT. And of course we no longer produce long documents that nobody reads.
Couldn't agree more. This is such a great video. User stories are about determining what a user needs to be able to do, as simply as possible; they are not a bureaucracy exercise to justify why something can't be done or why it isn't a developer's fault that they cannot implement it.
Regarding pressing buttons, it seems that some car manufacturers are implementing physical buttons again in their dashboards instead of touchscreens because they are considered safer. Buttons offer better feedback, so the driver can keep their eyes on the road instead of diverting their attention to make sure that whatever they pressed on the screen is the correct thing and that it actually got pressed.
My requirement was for a succinct explanation of best practice that could change my mind (when it should be changed). Requirement delivered. Thanks as always, Dave!
I use both user stories and classical requirements for my projects. Basically, I define user stories for what the users want or need, and requirements for what must be there. A simple example (please mind this is a YouTube comment and not a formal document): Story: As an employee, I want to log on to the application without the use of a password. Requirement: The system must support FIDO as a sign-in mechanism. The point is that the end user doesn't care about security, but the organisation might, so you need both.
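If it helps, the two could sit side by side in something like this (again, just an illustrative sketch, not a formal document):

```gherkin
Feature: Passwordless sign-in
  As an employee
  I want to sign in to the application without a password
  So that I can start work quickly and without password-reset friction

  # User-facing acceptance criterion (the story's "what")
  Scenario: Sign in with a registered security key
    Given an employee has registered a security key
    When they sign in using that key
    Then they reach their dashboard without entering a password

  # Organisational requirement (the "must") attached to the same feature
  Scenario: Sign-in uses a FIDO-compliant mechanism
    Given an employee has registered a security key
    When they sign in using that key
    Then the authentication uses a FIDO/WebAuthn ceremony rather than a shared secret
```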
This summarizes my worries so well! I have learned that a US is "an invitation for a conversation". It did not start out as that. As a beginner you learn about specifications. I'm in a project, a rewrite of a product, where we are handed a pretty finished specification as "User Stories". Project managers communicate with customers. We developers implement the "US" from a technical perspective, and there is little focus on the product as a whole. The discussions are left to the PMs, who are always on us about not making enough progress. I just wish that they understood the real value of agile and that US are not simply specifications. That would make the developers approach development differently - seeing the product and daring to make choices independently when needed. I'm aware that it is the customer who decides how we communicate with them.
Thank you for another very clear exposition. My concern is this: there seems to be a lot of ambiguity in so many IT discussions now over the use of the term "user". For many years I was involved in both development and support of systems - usually individually "simple", but intimately interlinked with others to create a complete and coherent system in the most demanding OLTP environments. The term "user" normally denoted the head of the relevant area of the business: as you suggest, these people often didn't really understand what the day-to-day users of the system needed - no user stories there, but we managed, given the relatively straightforward interfaces between full-time, well-trained users on simple screens and the centralised system with a common look and feel.

However, the whole question of "who is a user?" has got infinitely more complex now, where systems are generally directly exposed to the public through the Internet. Is the user still the person in the business who is commissioning this work, and presumably paying for it, as is implied especially in some talks I see on DevOps? Or me, as a potential customer of the organisation, who is expected to understand what I see when I log in? Fifteen years after retirement I consider myself a seasoned "user" of online systems, and I am still constantly confused by what is put before me. (The most common fault is assuming that I know my way round the website, and where I'm going next. As I am not psychic, I often get lost, and once this happens it is too often impossible to get back to a point of stability.)

Once your users are the general public, the paradox arises that the people who best understand the "desired business interaction" are probably the least qualified to understand what idiots like me out there are really going to need in order to blunder round and achieve it. And from my own experience, the more you pour information and guidance onto the screen, the more confused the recipient becomes. In my book Good Code is Not Enough, I seriously recommend the use of a stroppy teenager to look at this area. (I had always referred to this activity as external design, but the term User Stories is much better and I wish I had known about it before.)
Thirty years ago (and longer) W. Edwards Deming was telling everybody who would listen that not only do you need to understand your users' requirements, you need to understand why they require them.
"People don't know what they want" - that's so important to understand! it's a crucial part of the (devs) job to help and support them to find out to eventually provide a satisfying product. unfortunately in the modern industrialized development process this is often not addressed properly and devs are rather just seen/treated/used as code-monkeys - with the predictable not-so-happy outcome for all sides.
We have a user persona, so we can use their empathy-map context as we write the user stories, and we write them to build on the user journey map to get truly accurate results. Each story covers:
- What is the job they do, and that our system should do, in process detail?
- Why is that the right sequence for the process?
- What constraints do they fear facing? And what results (risks/losses) would they face?
- How do we measure those losses properly, to detect, diagnose and notify the user if they occur?
- How much (in resources: time or money) did each step usually take before?
- How much reduction of that loss should we aim for to make it useful and profitable for them? (That's the lean part.)
- Lastly, ask them: if this user story didn't exist or wasn't released, would the solution still function? (Initial prioritization.)
We then put a clear one-liner user story definition in the standard format at the top, with each point added beneath it. After the whole user journey with the defined stories is gathered and clarified, we re-prioritize all the user stories with a weight from 0 to 5: critical ones get 4-5, 3 for not critical but important, 2 for nice-to-have, 1 for a wasteful coding effort that is still useful for the client, and 0 for NEVER going to have - those classified from the risk and fear questions and all the bad assumptions you could make, rejected by the users themselves in a brainstorming session at the end of that meeting. Yeah, that's how I do it. Probably not perfect, and in need of more simplification for the more experienced fellows here, but it works and I'm open to advice too.
I remember fondly the cavernous abyss between user’s capability to describe what they need from a system and the practical reality of what they are actually supposed to be doing. This chasm is cultivated under the guise of “not my job” reinforcing the idea that users have that they do not need to understand what a computer does or how software operates, they just need to demand the darn thing does what they wish in their heart of hearts, without any effort on their part to formulate the question.
You are lucky if the actual user is the one describing their issues. That's why "user stories" often aren't user stories at all: it's someone else making things up and dictating implementation (telling devs what to do and how to do it, instead of actually describing the user's issue). Remember, your job is to understand the user's problem and translate it into solutions. Don't ask the user for solutions, because that's hell for you and for them.
from my experience there are many levels of stakeholders in between "what" and "how", and all of them need a unique blend of what + how. Also "how" may be evolving as we understand the technicalities of our solutions. It's something like a V shaped diagram, where "how" increases and "what" decreases as we move down the levels.
That was enlightening. In the past 16 years I have never had a chance to see proper stories. I will be sharing this video with my colleagues now. I wonder if you have any advice for the case where we have stories that describe no inputs or buttons, and then very strict designs that are very difficult to connect with the stories?
Part of this approach is to try and move the design responsibility into the team. I think that UI specifics are better communicated as "style-guides" and case-by-case collaboration with usability experts when you hit something the style guide doesn't cover. Stories should NOT be specifying UI detail, if they do, we are back to coding by remote control, which never really works very well.
Parent: what & context. Child: how. Based on my experience, the most important thing is to understand the concept above. It doesn't really matter what you call it or how you want to structure it. Let's take a user story. It identifies a user, states what the user wants, why the user wants it, and the acceptance criteria for the solution. Of all that information, what the user wants is the most important. Everything else is a constraint or provides context for creating the solution. If you have multiple users, you can end up with different users asking for the same thing for similar or different reasons. You can also have the same user asking for the same thing for different reasons, and different constraints from different users. How you want to structure that information is up to you.
Using user stories is great for defining some items. Story boards are also good. Both are great documentation in a waterfall methodology, passing into the build cycles. If you are struggling to use any documentation method, then switch to using Agile.
Great exposition. I do however have a question. At what point do you add acceptance criteria to your stories? An approach I had my teams use was to come up with acceptance criteria after a user story. This didn't seem to help, as it sort of forced us into thinking about how the system will work. Any ideas?
The way I've been structuring acceptance criteria is using the "Given, When, Then" format. Given is the context: what are they doing? What environmental conditions need to be true for them to proceed? When is the execution of the functionality they desire. Then describes the sequence of events that transpire after the execution. I pitch it as a fantasy: "imagine you're about to do the thing you want to do. What does the environment look like? What do you envision yourself doing when you use this functionality? What happens after you start this process?" It's harder to keep acceptance criteria agnostic of the system they are built within. I work with some very specific, complex data models that, if kept agnostic, defeat the point of writing the story in the first place. So the acceptance criteria we've been writing lately have system-specific details. But that's working for us.
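A made-up example of the shape I mean (the expense-claim domain and limits are hypothetical):

```gherkin
Scenario: Submit an expense claim under the auto-approval limit
  # Given: the context that must be true before the user acts
  Given a logged-in employee with a draft claim of 40 EUR
  And the auto-approval limit is 50 EUR
  # When: the action the user takes to exercise the functionality
  When the employee submits the claim
  # Then: what they observe happening afterwards
  Then the claim is approved automatically
  And the employee sees a confirmation of the approval
```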
Use of the term "requirements" allows confusion between what a user wants and the most efficient and cost-effective business-area response to an event. An event is the recognition of the arrival of data from an external entity at a business area. A temporal event is when a time is reached, such as someone's policy being about to expire. The requirement is the business area's response to each event, as defined by data and process - a process that often includes some form of automation. A requirement is not what a user wants. By identifying what data arrives (an event) at a business area, the total data and process response can be defined as a set of rules rather than a list of 'wants and needs'.
The challenge with implementation details is that they need to be discussed as a team and documented somewhere. Researching, designing and documenting is a big job and should be done ahead of time, not when it's time to code. Whether that lives with the user story ticket or outside it is irrelevant. The big mistake people make is when one person writes everything by themselves and hands it off to the developers. That leaves a lot of gaps and no ownership from the team.
Well, that's not how it works in most modern big, successful SW companies. I don't think that you get to build really complex systems that way, because it means that you need to understand the solution before you have built it, and you don't understand it until you have built it. Engineering is an iterative process of learning, and great SW engineering is too.
@ContinuousDelivery When I'm talking about implementation details, I'm mainly talking about designing the product. It takes 100x less time to change a mockup or specification document than the fully finished code. This is still iterative of course, but I've never seen a successful SW company that does not prepare its projects ahead of time to some extent. I don't think this is incompatible with being agile.
Love this channel, but I've sadly never found any company whatsoever that works this way. Quite simply: I've never seen a company without a "product design" team, and I've never seen a person with title "product designer" who doesn't feel their job is to run off and think up every feature the developers will be asked to implement, down to detailed specifications of interaction and "UX" with detailed mock-ups. Reading Kent Beck's "Extreme Programming Explained", especially the superior 1st edition, it seems like the main idea of Agile is the same as Lean: lean process management suggests moving all the ownership of process and decisions to the people closest to the work. In software, that is the developers. When I read EPE, it's clear that Beck has taken all the steps of Waterfall: System Requirements, Software Requirements, Analysis, Design, Coding, Testing, Operation, Maintenance, and then recognized that it must be developers, and no one else, who takes charge of defining and controlling every step of this process: collapsing all these steps into "coding", as part of short iterative cycles that are controlled entirely by developers through their day-to-day work. TDD is obviously a way of bringing testing, analysis and design into the coding part. And stories are a way of bringing the system requirement, software requirements, and analysis parts of waterfall into a shorter cycle that is again part of a developer's work rather than another "product design" team. What is the role of a "product designer" when developers interact directly with user stories and come up with their own plans and features to address those stories?
Do you have a way, after giving a story, for the team to give back more detail to help with the technical parts or changes? Where do you store or manage those choices so they can be maintained and developed?
To put all of that in a nutshell: user stories help the developer fill the shoes of the user, and put the developer in the context of what their code actually delivers to the end user's experience and results.
I think it depends on the organization and the scope of the software development project. Stand-alone user stories can be requirements. However, I still think there should be an additional requirements document that provides a top-level overview and also covers any other items that might not be covered in user stories. Sometimes creating a document like this surfaces scenarios and issues that weren't addressed initially.
Hi, thanks for these videos. I've watched for a long time and they've really helped me gain a better understanding of the fundamentals of software development outside the IDE. To anyone who has the experience to answer: I'm on a small team, and so I look to minimize resource waste. Like many teams we use tools like JIRA. These systems really seem to be a large time sink if you're using them 'as advertised': creating user stories with clear acceptance criteria + justifications, splitting that up into tasks, organizing the work into sprints, having everyone mark their tickets correctly, etc. My plan is to have user stories and bug tickets only, keep all the acceptance criteria in the tests, and use passing tests as the indication that work is complete. Does anyone have lessons learned from doing something like this?
With many years of experience in the Jira context, helping customers and teams to implement and optimally use the system, I know the issues and understand the reputation the tool has with many. And yes, it can be a real time eater. In 95 percent of cases, though, it's not the tool but the processes that were mapped into it, or how they were mapped. The most effective teams I've worked with have used Jira solely to plan and track features, stories, bugs, etc. - all without a single line of requirements definition, specification, solution design or acceptance criteria in the ticket itself. Everything else was documented elsewhere, e.g. in wikis, or, as in your plan, with the acceptance criteria in the assigned test cases. And as always, there is no perfect answer; you have to experiment for yourself and find the way that fits you best.
You have to keep in mind that at its core, JIRA has always been an issue tracking system, not an application lifecycle management system. Think Bugzilla on nuclear steroids. Its main focus is flexibility and traceability, not ease of use. That's why it usually takes more user interactions to perform a task than it does in other systems. This goes even deeper if you try to use any form of agile framework, as JIRA's support for even Scrum and Kanban is quite rudimentary. For a small team, I would recommend you use anything but JIRA. None of the reporting, permissions, traceability, time accounting and similar features that enterprises love would help you in any way, and the limitations on being truly agile will quickly become apparent as soon as you outgrow its basic Scrum or Kanban boards. I've been coaching teams to be agile for well over a decade now, and I am yet to see a single JIRA instance that doesn't get in their way. And every single one of them would change it in a flash if it weren't so deeply embedded in the rest of the corporate red tape. Try something simple like Trello, or even the Planner app in Office 365. If that's too limiting, maybe Kanbanize, Azure DevOps, CA Agile, TargetProcess or any other tool. But for the love of everything you hold dear, stay away from JIRA unless you're forced to by policy (in which case you'll find ways to subvert it, as most people do).
@ "And as always, there is no perfect answer, you have to experiment for yourself and find the way that fits best for you." can you tell my product owner and project manager this? I make any kind of suggestion and apparently "it's not agile enough" even though they have zero issues with the software that we actually have built nor did we fail to meet our deadlines, but our "jira ticketing is not up to par"'. In other words, their definition of agile seems to go against one of the core values of "working software over documentation". They placed all sorts of dead weights on my team of people who literally took 1week worth of scrum/product-owner classes and now those people claim they are experts. I am now packing my bags
We are working on getting some of these for our training course. There is a video already on the channel that shows some real world examples: ruclips.net/video/9P5WG8CkPrQ/видео.html These are a bit technical for my taste, but they are real examples, and this company, Adaptive, is doing great with them.
Thank you, this is really insightful. In my experience, people think of user stories as just brief SRS documents, and it's always tempting for my colleagues to add extra bits of detail and business rules. I have even seen SQL queries in the acceptance criteria. I'm just wondering: where does the UI/UX designer step in? When they create a mockup for a user story, they implicitly state how the system should work without recording any logic in text. So when the developer has a vague user story and a mockup that doesn't capture corner cases, should the developer talk to the designer to figure out the solution?
I am not a big fan of UI/UX as a separate team or responsibility. This is part of the development process, not the specification process, in my opinion. So you need that in the dev team as the SW is being built, not before. Clearly not all devs are great at UX, so use guidelines, or common components that define a "house style" - these can be extremely detailed. That should cover 80% of the normal work. When you hit the novel 20%, then draft in UI/UX people to help in the team, to do the work.
I'm still not clear on: 1. How a requirement is different from a user story (other than the format). 2. Whether product owners are required to write both. 3. How dev teams follow both.
I can't agree more with what you explain. I have a dozen examples from my job of observing how the dev team pushes business users to provide details of the solutions. Then the business users, based on their limited technical knowledge of the systems, try to figure out by themselves how to address their own requirements. A 100% recipe for disaster.
Your videos are unbelievably helpful. Thank you so much. I just want to know: once we speak to a client, do we not need some kind of formal documentation signed off first? Do we not need an SRS document of some kind? Is it correct to assume that even in Agile, the requirements phase pretty much happens first?
Sometimes you will meet dumb clients, and have to comply, even if what they are asking for is irrational. But apart from those circumstances, if you want to do the best job of SW dev, you need to collaborate in the decision making, not act as servants to someone else. To build good software you need to understand the problem you are working on very well, you can't delegate that to other people. So the more specific, more formal, the requirements or specifications, the less likely you are to be able to do a good job. SW dev is full of surprises, and some fine-grained thing can impact on decisions at a bigger more strategic level. This is nothing at all like a production line, we need to make decisions as we face the problems that need solving, so need the freedom to go back and say, "we can't do what we discussed last week, but how about this instead".
@ContinuousDelivery Thank you for this. One last question, promise :) Just to put this into context, how would we charge a client? Would it be based on an iteration of work, I guess? My only concern is: what if we get a client who deliberately takes the piss and changes their mind after everyone agrees, and then says "now I don't want this"? I would be happy to change, but it would mean I would still need to be paid for the previous iteration of work, right?
@TheMyth2.9 It is a very difficult problem; we struggled with this when I was at ThoughtWorks. We attempted a few different strategies. The reality is that it is generally in no-one's interest to attempt to fix time and scope, and so fix the bill, because SW doesn't work like that, so inevitably one of you will lose. Depending on the contract, either the dev org will lose because they under-bid, or the client will lose, because they either don't get what they want, or they have to keep paying for extras to get what they want. IT'S IRRATIONAL TO ATTEMPT TO FIX TIME AND SCOPE! With great clients you can have this conversation, and agree that collaborating and working on a kind of "pay as you go" scheme is in everyone's interest. Treat this more like venture-capital funding: pay for some initial work, then pay more when you want to add to it, and keep going until you get what you want, then stop. I think this is probably the most rational approach. At ThoughtWorks, we experimented with a few clients where we shared risk with them. We liked their idea for the system we would build, so we took some of the financial risk of building it, in exchange for a share of the payback once the system was released. We were investing in the project with our work, and got paid more at the end than we would otherwise have earned. Obviously, this is very, very dependent on the project and the nature of the relationship with the client.
Maybe we could describe the "how" after requirement/user stories as a "technical hypothesis" expressing "this is how we are guessing we can solve it technically".
I get the reason for user stories, but at some point, code has to be written down, and requirements at the user level have to be broken down into actual tasks. It's easy to understand that the user has a need for a shopping cart. But do we then have a whole series of other user stories for creating the technical components of the shopping cart? How do you even come to agree on what those components are? How do you even relate a user to anything occurring on the backend or database? Should all user stories be treated as if they were independent? Regarding the last question, could it be that all user stories are equal, but perhaps there's an implied sequence based on how many points they're worth, and the points get reevaluated every sprint? As more gets done, some stories become more accomplishable, potentially fitting into a sprint?
Hi, I don't claim to know the answers you are looking for, but I can give my thoughts. User stories are descriptions of the problems your customers are facing and generally contain the who, what and why. Of course there are more questions to answer, such as architecture and implementation, but that information doesn't really belong in your ticket system; I would suggest it belongs in your code, either as comments, or docs in your repo, or using a tool like Confluence. However, I would question how much documentation you really need - you really just need to know the technical 'why'. If your project management system has acceptance criteria such as "Should show the message 'Thank you for your purchase' when the user clicks pay", then to me that sounds more like the PM doing the development and you just writing the code for him.

Discussing what the components are and how you will solve the user story is, I think, best done face to face with limited documentation of what you're going to do. If you've just gotten the team together to discuss the user story and decided what you are going to do, then personally I think it's a waste of time writing task tickets for those things, as hopefully they'll be complete within such a short time frame that you don't need a ticket for them.

I would say that yes, user stories are best when independent. It sounds like you are asking about having a user story for this sprint, and then another as a follow-up for the sprint after, which builds on the previous feature. I would say that this normally shouldn't occur. If you have a user story for this sprint, propose a solution and implement it, you first want to evaluate whether your change was successful. So you don't want to build on that change in the next sprint, because now you should be waiting to see if the change worked and what the feedback is. By the time you've got that information and the feedback, it's likely that the second user story you would have had is no longer appropriate. I would suggest that you only give points to stories you are doing this sprint, or not at all if management will allow it. Anyway, just my 2c, best of luck out there.
If you think of user stories as a model of the problem and your design as a model of the solution, it's pretty clear what should be in story form and what should not. It would not make sense for a change in architecture to affect your story map, for example. A story need not be the only documentation of what there is to do: if a story takes 3 days to implement, surely you will want to break it down into technical steps. It's just that those steps are not stories. Well, that's my take on it.
In terms of Scrum, e.g. you break down the stories into (more technical) tasks per sprint, and you design and implement your solution by means of those tasks.
A requirements specification should focus on defining the system's desired functionality rather than its specific implementation details. I've come across numerous SRS documents created by experienced business/system analysts that emphasize UI specifics, like "When the user clicks on the place order button, the system responds with a message blah blah...". These documents seem more like user manuals than requirements specifications, in my opinion.
There are already a few on the channel, but more from the perspective of - "here's how to do automated testing well" and then you won't need manual regression testing. There's a playlist here: ruclips.net/video/Nmu4URA7pSM/видео.html
I'd like to comment on a couple of your points and am interested in your views: teams rejecting requirements/user stories, and stakeholders changing their minds.

Firstly, I definitely agree that a user story should describe the "what" rather than the "how". There are times now and then where I may add some "how" (as a suggestion to junior developers, pointing at some useful code perhaps), but these should be very rare. My ideal user story process is:
1. The stakeholder creates a user story describing the "what".
2. The user story goes through a team elaboration involving the stakeholder, QA and probably other developers. The purpose is to output a set of acceptance criteria for the user story. Any user story going into a sprint without sufficient acceptance criteria can be rejected by the team, as the criteria indicate how it is to be tested (definition of done).
3. After the developers complete the work and the agreed acceptance criteria pass, the user story is complete.

Anything outside the agreed acceptance criteria (where the stakeholder changes their mind, perhaps) would be handled by a further user story in normal circumstances. E.g. create a registration form: if the requirement of verifying password complexity only comes up after the work is done and wasn't included in the agreed acceptance criteria, then it constitutes a new requirement (a sketch of such criteria follows below). While stakeholders might not know exactly what they want at the beginning, this prevents too much context switching and ensures changes are small. Since each user story is generally a single releasable change, this also shouldn't affect any release cycle. Thoughts?
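To make the registration-form example concrete, the criteria agreed at elaboration might look something like this (illustrative only; the email and password rule here are invented):

```gherkin
Feature: User registration

  Scenario: Register with valid details
    Given a visitor on the registration form
    When they submit the email "new.user@example.com" and a password of at least 12 characters
    Then an account is created
    And a verification email is sent to "new.user@example.com"

  # If password-complexity checking was not agreed here, asking for it
  # after the work is done becomes a new story, not a failure of this one.
  Scenario: Reject an already-registered email address
    Given an account already exists for "new.user@example.com"
    When a visitor tries to register with that email address
    Then registration is refused with an explanation
```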
Yes, that sounds about right to me. The aim is to separate the acts of specifying what is needed, from how we are going to meet the need. The second part is too often attempted in a single bound, via requirements. This is a much more fragile way of doing things, because now every change invalidates everything, requirements, tests and implementation. We should decouple those things so that each changes more independently of the others.
Hi, in agile, are there recommendations for formalizing the requirements that a system (not a user) has on the system you're building? In other words, how do you formalize API requirements? Can't we use user stories in that case too? I'd guess the user in that case could be the developer?
We use SysML for that task. The outer system is then an actor like end users, and via the use case diagrams, you can formulate how the systems interact with each other.
I think it depends on what you are trying to do with your API. If your API is really internal, aimed at providing some lower-level service for a bigger system, I think that you have to be very careful about design. It is way too easy to fall into the trap of fixing your design assumptions into API boundaries. Focusing on the high-level value can be useful, even if the "user" is a step or two away from the API. If you are publishing an external API, then you more clearly have real users - users of the API - even if the technical interactions are via code. What do those users want? Focus on that, and you will not only have better stories, but also better APIs and better tests. Overall, what I mean by the bigger value, over and above user stories, is the idea of separating what the code needs to do from how it does it. That is true in every case, and is a property of good design at every level. Your API should abstract the problem so that it is easy to use and hides detail; that's always true.
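For example (purely illustrative, not from the video), for a published API the "user" is the integrating developer, and the story can still stay at the level of what they need rather than how we build it:

```gherkin
Feature: Retrieve order status via the public API
  As a developer integrating a web shop with our platform
  I want to look up the current status of an order
  So that I can show customers where their delivery is without calling support

  Scenario: Look up an existing order
    Given an order has been placed and accepted
    When the integrator requests the status of that order
    Then they receive the current status and the time it last changed

  Scenario: Look up an unknown order
    Given no order exists for the supplied reference
    When the integrator requests its status
    Then they receive a clear "not found" response they can handle in code
```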
I'm not a software developer or programmer, but I'm just trying to understand this. How can the user not know what he or she wants? Wouldn't the customer's requirements be explicit in the requirements and SOW? Certainly I can see, in the requirements validation process, going back to the customer and suggesting changes to the requirements. In the example of how fast the software loads, wouldn't the customer generally specify that time in the requirements? If I were the customer, I would think I'd specify load time, DO-178 DAL level, all of the interactions of the software with the system... I hadn't heard of user stories before, but a book I'm currently reading discusses things like "every piece of code must trace to a requirement, and every requirement must be traceable to code". Thanks for any explanation.
Yes, but it's a sound bite, and not enough - "I promise to have a conversation"; all clear now? No, so we need more than that. The conversation is extremely valuable, but if stories were only promises, then my story here is the only one we'd ever need.
@ContinuousDelivery Ah, so to me a story is an XP thing, so that promise means that once the story is pulled, all the required people are available for pairing and can document the discoveries of the conversation as working code in one go.
@m13v2 Yes, exactly that. The story is a statement of the problem, and its context, and our job as developers is to figure out how we will solve it, so we need all the skills necessary when we are defining the solution, not when we are capturing the need.
I often find that behind every requirement there is still a "why?" to be answered. Drilling down to discover the why behind the requirement can often fundamentally change both the user's and the team's understanding of the requirement. E.g., yes, the user wants a weekly report generated, but why? What will they use this report for? Stepping back to discover this can often get you to more "truthful" requirements.
Sorry for commenting on an old video! I'm 3 years into working on a pretty complex system, as a UI/UX designer working together with the requirements team. The turnover of developers on the project is pretty high, and we have a lot of issues with a lack of documentation of technical/implementation details when a dev needs to work on functionality initially developed by someone else. This is on top of a pretty complicated/specific domain, since it's used for processes with lots of bureaucracy and many different user profiles with different permissions. When we write user stories at a "higher level" without going into detail, the devs always get confused... it stresses me out so bad, hahaha. I guess I'd like to know what the best practices are for maintaining at least the bare necessary technical documentation (maybe schema diagrams? API docs? I'm not a professional dev so I don't have a lot of reference points) alongside the requirements in user story format...
Sometimes tech debt piles up because of messy programming. Code can always be "refactored"; however, it is always easier to do it before things become interdependent and get tangled up (again, in code).
Question: once the software is released into production and the customer actually uses it, how should we specify the changes the customer wants to the visual aspect of the software? As user stories it doesn't seem right, so what other means do you suggest?
It's part of the design of the solution; it doesn't specify the need. I see a very common anti-pattern of separate UI teams defining solutions and giving them to dev teams as though they were requirements. They are just someone else's guess at how to give the user what they really want. The approach I recommend is to capture what the user really wants as the requirement, then organise work on that to find solutions that can work. If consistency is important, have a style guide that teams use, but I have never seen pixel-by-pixel UI designs really work as a sensible input to development.
Making the calibration machine more user-friendly is a nice idea, but it should be easy to do, or could be done entirely outside the machine (e.g. detect power consumption and hook up a Raspberry Pi). It is not a good example for forcing a team to switch to user stories when they are happy coding requirements into firmware. I worked in telco and we had testing cycles running 6 hours on an expensive proprietary hardware farm, similar to your calibration device example. We got tired of ivory-tower consultants telling us that we do continuous delivery wrong because our testing takes more than a few minutes.
Can we force everyone to obey this fellow's wisdom? In 25 years of pharma validation of environmental recording, I had one customer who specified and verified that actual staff could use an operational SOP as written. Everybody else has user specs that are mostly system specs with some vague admin text. Especially in the last handful of years, "teams" have become online discussion groups with no defined goals for the recurring meetings. And this is for pharma! You know: stuff that could kill you.
I would do it differently: not asking viewers to like and subscribe at the start, before they have gotten any value out of the content. By the end of the video, it would sound more realistic and fair.
I have no idea what you're trying to say. However, as an indie developer I have found the best way to develop my own game is: 1. Try to create the game ideas I have in mind. 2. Try to make the game playable. 3. Share screenshots and small videos of the game. 4. Listen to feedback, but not every piece of feedback; sometimes people send feedback that doesn't fit the game. 5. When the game becomes playable, release it as early access or release a free demo in order to get as much feedback as possible. 6. When the game is ready, market and sell it - though I haven't managed to get to that last point yet. So I don't know what you're trying to say with all the story stuff, but the way I develop is working so far. I still don't know how the marketing works yet, but I'll figure it out when the time comes.
How would you approach user stories for a more distributed system? Let's say you have multiple services, of which one handles the connection to some IoT devices and another concerns itself with the user's view of the data collected from the IoT device. From the perspective of the IoT service itself, should the user stories be told from the end user's perspective or from the system's perspective (keeping in mind that the IoT team might not know how the data is used by the other services)? So, is it: "As a user, I would like to see temperature data from the device, so I know the component is not overheating"? Or: "As a service, I would like to acquire temperature data from the device, so I might show it to the user in some form"? Or something else?
The idea of a user story is to be a "story". Neither of your examples is telling a story; they are describing a solution. I think that the real story is more like: "The device overheats" - we could just say that! Or we could say what the user expects: "As an IoT user, I'd like to be told when the system exceeds the safe operating temperature, so that I can shut it down", or "As an IoT user, I'd like the system to shut down when it exceeds the safe operating temperature, so that my house doesn't burn down".

Your first point, on distributed systems, has two parts; this is more about org design and system design than requirements. There are two answers. One is to divide the system (and your teams) so that each part is independently deployable, and so loosely coupled - microservices. The boundaries between microservices are, by definition, bounded contexts, and so are natural translation points in your design, so you can always create requirements at these boundaries that represent natural conversations with the users of your service. Your example is too technically detailed, too focused on implementation, to make a good requirement, and raising the level of abstraction - "I don't want my house to burn" - helps focus on what really matters, AND DOESN'T CHANGE WHEN YOU CHANGE THE DESIGN OF YOUR DEVICE. The second approach is to treat the whole system as one thing, and test and specify it all together. There are nuances to all of this, of course.
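As an illustration (the thresholds are invented, just to show the shape), that higher-level story could become an executable specification something like this:

```gherkin
Feature: Protect the device from overheating
  As an IoT user
  I want the system to shut down when it exceeds its safe operating temperature
  So that my house doesn't burn down

  Scenario: Automatic shutdown above the safe temperature
    Given a device with a safe operating limit of 80 degrees Celsius
    When its reported temperature rises to 85 degrees Celsius
    Then the device is shut down
    And the user is notified of the shutdown and the reason

  # Note that nothing here says which service reads the sensor or how the
  # data flows between services - that remains a design decision.
  Scenario: Normal operation below the safe temperature
    Given a device with a safe operating limit of 80 degrees Celsius
    When its reported temperature is 60 degrees Celsius
    Then the device keeps running
```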
@ContinuousDelivery This simple, succinct explanation has cleared up what consultancies bringing Scrum and user-story writing into companies had muddled in my mind. Do you have a course which covers this in more depth, from a simple to a detailed approach? It has seriously made me think, and I can see where my business analysts need support. Thank you for your help...
@Aksamsons Yes, my "ATDD & BDD - Stories to Executable Specifications" course is mostly about this. It comes in 3 versions, aimed at different groups:
1. Dev teams - need to understand all this and make it work technically: "ATDD & BDD - Stories to Executable Specifications" courses.cd.training/courses/atdd-from-stories-to-executable-specifications
2. POs, QAs, BAs - need to specify things, and do it in a way that supports dev, actively involved in the creation of stories and specifications: "Acceptance Testing with BDD" courses.cd.training/courses/AT-with-BDD
3. Domain experts, biz people - need to understand how stories work, and a little about why this matters for dev teams: "Understanding Acceptance Testing" courses.cd.training/courses/understand-AT
You did not conclude why you were having a dig at the over-reliance on manual testing in the process at the beginning - wasn't it one of the main issues?
No, that wasn't the focus of this episode. I talk about some of that in this one though: ruclips.net/video/XhFVtuNDAoM/видео.html The main problem with manual testing is that it is slow, inefficient and low-quality because it is very limited, compared to automated testing, and it finds problems too late in the process. We need to find problems as close to the point where we create them as possible, not after we think we are finished. As Deming famously said "you can't inspect quality into a product, you need to build it in".
@ContinuousDelivery I'm all for automation, but the comment that manual testing "is slow, inefficient and low-quality because it is very limited, compared to automated testing, and it finds problems too late in the process" depends entirely on the type of SUT and the context. Obviously at a lower level unit testing is very useful, but at the end of the day automation is not "testing", it's just a glorified form of checking.
@ITConsultancyUK Manual testing has a role, but I think it is only better than automated testing for exploratory testing. This doesn't depend on the nature of the SUT, in my experience. You can use automated testing in vastly more cases than most people think. SpaceX do this for space rockets; I have done this for scientific instruments and medical devices. The problem is that we often don't treat testing as what it is: a tool for SW dev.
@ContinuousDelivery I don't think you can realistically achieve and maintain 100% test coverage with automation. Management see automated regression as a panacea; I think at best it can give an indication that the software meets certain "baseline" acceptance criteria. How you implement automation based on ROI depends greatly on the SUT, e.g. web GUI automation vs some kind of embedded-systems automation requires completely different skill sets. In my experience of delivering high-quality, robust software for corporate defence companies, a skilled human tester is far more effective in many regression-testing scenarios.
@ITConsultancyUK Well, lots of companies like yours would disagree. SpaceX (not defence, but similar) and the USAF are currently using continuous delivery and high levels of automated testing for fighter jets, Tesla for cars. The difference is that you have to build the testing into the development process. Sure, people may be cheaper if you do it after the fact, but this isn't how it works for the examples I have given. In these cases you design the system to be testable from day one.

I was involved in building one of the world's highest-performance financial exchanges; we ran roughly 100,000 test cases every 30-40 minutes. No army of people can match that coverage. Google run 104,000 test cases per minute. I have helped manufacturers implement these techniques for medical devices, scientific instruments, computer systems, chip manufacturers, cars - the list goes on. So we aren't talking about "toy websites" here; these are complex, regulated, safety-critical systems. What I am trying to describe here is a genuine engineering approach for SW dev in these contexts.

Sure, you can never test 100%, whatever your strategy, but automated testing will always cover orders of magnitude more cases than manual testing, unless you do it really poorly. Tesla recently released a significant design change to the charging of their Model 3 car. It was a software change, test-driven, using ONLY automated tests to validate the change. The change went live in under 3 hours, and after that the (software-driven) Tesla production line was producing cars with a redesigned charging mechanism that changed the max charge rate from 200 kW to 250 kW. That would be simply impossible if it relied on manual testing. I think that humans have no place in regression testing, so I am afraid that we will have to disagree on this.
Often I see a gap at the ideation phase. Between a requirement and long before a line of even pseudo-code is put down, there is a phase where design/UX/dev do quick paper prototyping / mocks, while some also gather additional user feedback/input in parallel. Then this goes into implementation in as small deliverable chunks as possible (not always releasable, though), and the team does testing, or as I call it, validation: is it what we thought we wanted?
If you’re going to bother to throw a quote on the screen, would you be so kind as to leave it on screen long enough to read it, or even long enough to pause it before it vanishes again?
This would work great if you have flexible budgets and timelines and can afford to let everyone involved contribute their opinion on what the system should do and then reconcile it all.
I think that you are assuming that this takes longer. This is how the most efficient team that I have ever seen worked, so it is certainly not inherently slower. I think that this approach greatly contributed to my team's efficiency. Einstein once said "If I had an hour to save the world, I'd spend 55 minutes understanding the problem, and then save it in the last 5".
This presentation is obvs. software specific but covers material that can & should be used in a discipline-agnostic fashion, i.e. irrespective of software, hardware or hybrid solutions. I'm surprised you didn't introduce the notion of a DSL when discussing testing via the unknowing user.
I think that this is broader than only SW. I think that this is how all good engineering works. You have to keep your focus on the real goal. The solutions are just your best guesses so far of how to achieve it. Sometimes it is hard to get the scope of a video right, I am nearly always tempted to expand into related topics, because I think of SW as a very holistic exercise. Sorry if I cut off too short in this one, but there are others that cover that topic 😉
No, a story isn't a "technical task". I also wouldn't record this, or add a card to my card wall or add it to Jira or whatever. This kind of activity is a side-effect of doing something useful, not the goal of your work. So you are either doing this as a tidy-up, or you are doing this for some reason that matters to a user, in which case that reason is your user story.
@@ContinuousDelivery So, is it OK to add a new card that depends on the story, so I can describe what I will do about some point of the story? If I don't create a card I will forget my implementation plan.
@@DamjanDimitrioski I'd advise against "publishing" it anywhere, it is just your way of organising your work. I keep very rough, free-hand notes in my notebook for that kind of thing. The risk is formalising this, so that sometime later your boss comes along and says "so how are you getting along reworking your Dockerfile?". It is an invitation to micro-management. If you can keep conversations with people that aren't directly engaged in the work to only being about externally visible, useful features, it works a lot better.
@@DamjanDimitrioski Nope! 😁 My advice is to try to avoid technical stories as far as you can. The time when you can't do this is when you are already in a mess, with lots of technical debt; now you are in "cut your own arm off to save your life" territory, so you have to do what you have to do. But the aim should be to work on features that are useful to users. Plan and talk about those, then EVERYTHING that you need to do to achieve those outcomes, including refactoring, is part of that story.
Microsoft: ai.stanford.edu/~ronnyk/2013%20controlledExperimentsAtScale.pdf Standish: Can't find the original right now, here is a critique that references the study www.linkedin.com/pulse/why-45-all-software-features-production-never-used-david-rice/
Given the number of people flocking into the industry, product owners who don't come from a technical background are often the first to turn into an obstacle, and in some cases the reason behind the downfall of the project.
Every time I see a story with "as a User" I know it's going to be garbage. What user, internal, external, customer, admin? So I had a laugh at your little example there.
I cringe whenever I see "As an architect|developer|product owner...". I know what follows isn't going to describe a user story at all and is probably going to be filled with unnecessary implementation details.
I noticed “as a user” in the video too… and felt “garbage” too - but then I noticed it was describing the user story template, not an example story. This was the best user story vs requirement explanation I’ve ever seen.
The issue is usually not in the approach but in the general understanding of the product by the people who write user stories. User stories can be a beneficial and lightweight approach to eliminate the waste of too much work. But there is big preparation work before you have user stories, where you understand the different types of users, the different conditions, the pains you want to help with, etc.
Dave, a software company that I once worked for has recently adopted Waterfall - go figure :| I put this down to the fact that the entire management team is non-technical, doesn't understand the complexities of software development, and there is zero trust in developers. How do you suggest a company like this can be turned around from a purely process-driven dialogue to a results-driven one?
It is certainly possible that they can't be turned around, not all orgs are saveable. I think that you need to frame the argument in terms that make sense to the people that you are trying to convince. There's no point explaining tech processes, to a non-tech person. If the person is commercial, talk about the commercial advantages, if they are worried about risk, talk about quality and safety, if they are nervous about security or compliance, explain those things. I think that the DORA (formerly State of DevOps Reports) and in particular the "Accelerate" book, are great sources of information to help you structure your arguments in these terms. I have a video that is a bit related to this "How to Manage Your Boss" ruclips.net/video/bCEmRwsmmlY/видео.html
Sometimes when I get "requirements" that are just technobabble, I ask the person "If instead of a (whatever prescriptive thing they're asking for) I had a magic lamp, what would that magic lamp give you?"
Does user always mean end-user? I think not. If you have hundreds of developers split into small teams working on different components of a system then the user might be another development team. It becomes necessary to constrain the interactions through technical requirements to ensure all the components of the system work together. This is where it can all go wrong with excessive design leaking into the requirements. The lightest possible touch is needed. However, there is a balance required.
Well it really does depend. If the split between devs and teams is arbitrary or specifically technical, this can cause problems. All these problems are really down to coupling, between the code and the teams. If the code is something like a platform, then that is a bit different, and then treating stream-aligned dev teams as users makes sense. But this is a slippery slope and is more commonly done poorly rather than well. How you divide up work between people has a massive impact on how well you can do things. I have a couple of videos that may help: Team topologies: ruclips.net/video/pw686Oyeqmw/видео.html Platform Teams: ruclips.net/video/_zH7TIXcjEs/видео.html
Hi, I've watched many of your videos and I would often refer to them for advice, but I have some disagreements, mostly based on the reality of the industry. Within this reality, the level of engineers engaging with the "end 2 end process" is not up to par with the requirements you imply. E.g. many developers only want to code. Right or wrong, it doesn't matter. Many product owners and managers don't know what they want and expect devs to figure it out. Developers are not UX designers. Management has made choices against automation. I know there is space for improvement, but the availability of excellence that can drive your implied advice is not there. With that in mind, it really depends what the priority and goal is, something that needs to be set by product management. E.g. is it to deliver on promised dates? Is it to deliver functionality when available? The reality with customers is that it will probably be promised dates, just because this is how sales and customers work. With that in mind, requirements need to be fine-tuned enough that there is a technical counterpart, with its compromises and a specific scope to code and test against, that can be estimated for delivery on a specific date. When the above varies, this goal is compromised. It is a well-known fact in the industry that terms are used however anyone wants. So with that in mind, my number one concern for an engineering team is that the goals for the "next" release are clear and the estimations are realistic to achieve. Any perceived wasted time in preparation for this - that is, requirements definition, user stories, UX etc - is there to achieve the promised date with quality. And all this needs to be adjusted to the people on the team. Do you work with top devs? Do you work with long-term internals? Do you want to scale on demand with externals? My point is that theory is nice and, don't get me wrong, interesting to follow. Reality, though, forces messy situations and a compromise is needed. But the biggest issue of them all is not having a well-defined set of higher goals and restrictions for how the team works, e.g. delivery dates, security requirements etc.
Don't confuse the culture you are exposed to with industry reality. There is a difference between writing code and being a software developer/engineer. Beginners are expected to require direction, but not experienced developers. They need to master the art of creating software to solve problems, not merely write code. This is why we say "do you have 10 years of experience, or 1 year repeated 10 times?"
The whole over-reliance on user stories irks me, honestly. They are always overbroad, and there are many times where either I had to gather further information, or I implemented something and no, that's not what the author meant. At a certain point I really considered just implementing something that fits nothing and checks no box, just to see what would happen. There were also times when I read one and all I could say was "What the fuck does this even mean?", "What the fuck is this trying to accomplish?", "Do you have literal brain damage, because this sounds like a disaster waiting to happen." This happens especially when you are adding more functionality into an existing process flow, because do you branch off from it, do you branch back in, or do you just make it a standalone flow that might end up re-doing the same exact thing? Why wasn't this just defined originally? This is doubly annoying when certain things are literally matters of legal compliance. For example, for the user to apply for and get a license they need to meet requirements that are defined by statute, and our product owner managed to get those completely wrong, so I had to go back and change a bunch of stuff. At that point I just started asking for the statute so I could see what we legally need from the users.
If you would like a FREE guide to help you Write Better User Stories, sign up here ➡www.subscribepage.com/write-user-stories
Thanks! The link seems to be dead though. :(
@@janisimila Glitch - It is working now.
for user stories in the requirement sections do you focus more on brd like you would for a scope document or functional requirement?
Dear Dave, I have no idea how you manage to so concisely sum up in 17 minutes the mental model that I have built up for myself over the years and fail to convey to the teams I work with on a daily basis. But I know that in the future I will simply refer to your video for these questions. And I will add the phrase "Requirements that define how the system works aren't requirements" to my vocabulary. Thanks for these videos and please keep up the good work.
Be careful. User Stories are a kind of requirement, focused on the "what". But as a development team, you might want to solve the "how" as well. So, unless you want each developer in the team solving it in their own way, you will need more details, but of course, managed internally.
Also, what Dave is describing, is an "analyst developer" (not just a "developer") who KNOWS about the business and its problems and is close to the user. Many companies have no place for such role (actually they prefer a DEVOP) leaving the business knowledge exclusively to the product owner (or worse to a manager).
@@Cerebradoa Spot on. This is the main reason why agile has created more problems than it is solving. Not to say it out loud though. The only principle that works in any project is everyone works on everything OR everything has a specific specialty to work on and they deliver their component. The leads have to be strong to keep it glued together and continue the journey.
As a software developer, I want user stories with testable acceptance criteria, so that I can confidently create acceptance tests that confirm the user story behavior.
I think of acceptance criteria as the requirements and the user story as the narrative that provides context for those criteria.
As a software developer, I want the acceptance criteria to also not include any technical details, just like the story itself.
As a product owner, that’s how I like to write stories. The story is the context of what and why, the acceptance criteria is how we know it’s working.
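For illustration only, here is a minimal sketch (in Python) of how a criterion written purely in terms of the "what" might become an automated acceptance test. CartTotals and its methods are invented names for this example, not taken from the video or any commenter's system; amounts are in pence to keep the arithmetic exact.

```python
# Illustrative sketch: a testable acceptance criterion expressed as an
# automated acceptance test, with no "how" (no buttons, screens or databases).

class CartTotals:
    """Toy stand-in for the real system under test."""
    def __init__(self, delivery_fee):
        self._delivery_fee = delivery_fee
        self._item_prices = []

    def add_item(self, price):
        self._item_prices.append(price)

    def total_payable(self):
        return sum(self._item_prices) + self._delivery_fee


def test_customer_sees_total_including_delivery():
    # Story: As a customer, I want to see my order total including delivery,
    # so that I know exactly what I will pay before I confirm.
    # Acceptance criterion: the total is the item prices plus the delivery fee.
    cart = CartTotals(delivery_fee=499)
    cart.add_item(1000)
    cart.add_item(501)
    assert cart.total_payable() == 2000
```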
Do you include non functional requirements in your acceptance test... e.g. do you test that the system can handle 1 million concurrent users? Or that it is fault tolerant?
@@alanjohnson7374 I’ve not had a lot of great use cases for valuable NFRs, sadly. And I’ve never been comfortable with them being in that category. If the system needs to be secure, or “fast”, I just think of those as acceptance criteria.
It’s weird, in my waterfall days, this concept of “non-functional” never came up. We as product/business people just said everything that we needed to happen.
@@alanjohnson7374 NFRs should be separate stories with their own testable acceptance criteria and associated test cases.
Not only should user stories express the “what” of the requirement, a good user story should also express the “why” of the requirement, “so that” developers can understand the problem that gets solved by the story. I worked for an organization in which the marketing guy basically designed how the product worked by providing a list of features. That’s what passed for a product owner there. But I wanted to hear about the problems that customers were trying to solve. If you just ask a customer what they want, they may tell you that they want it to be faster, cheaper or some other increment of what they already have. To them, the product is just a tool to do their work. However, the development team might see a different solution, because they understand what the product could do if they understand what the customer actually needs.
What a mess
The problem is, that often the developers incentives are not the same as the product owners. We work with external developers, and if we deliberately left everything high level the developer might end up developing into a direction that is potentially going to benefit them in the future. For example I have seen developers create short term solutions which caused large scale issues in the long term
@@SM-cs3nt you would hope the architecture of any new product is reviewed by a solution architect so that such technical debt doesn't take place. You would have this in place in most orgs.
Pure gold. Thanks. I would suggest you expand this video with a Part 2, including Requirements vs Features (although I guess there are fewer misunderstandings there than with the current topic).
My personal summary: distinguish between problem space and solution space. Your tips and explanations give good practices for achieving that.
As a senior business analyst, I have thrown out user stories as a tool. I have retooled my BA toolkit to include RML. RML is requirements modeling language. Visual requirements modeling is the best way to define “requirements”.
Please, we need to be clear and specific about requirements and have them documented before going to implementation. Not this Scrum thing telling you that you don't need documentation and don't need to know what you are doing.
This sounds like serious professional work to me. I'm not sure what RML is, but according to my experience and studies it is a serious approach, more professional than "user stories".
So in essence you do everything in the opposite way of what the guy in the video just said? Does that work out well for you?
any decent BA would have a visual representation/process map/diagram of sorts of at least some part of the product. However, you can't visualise everything and user stories are still a valid and useful tool...
@@errrzarrr that is not what the agile manifesto tells you, and it's not what you would learn from any introduction to agile work.
That is the BEST description of what should - and should not - be in a User Story that I have ever heard. Thank you.
Glad it was helpful!
I have only been in software for 7 years but I am already my team's senior lead. I'm not saying this because I think I'm some special flower, but because so much of what has made me a good developer is encapsulated in this video. I never thought I would enjoy translating customers' vague wishes into computer capabilities. It's extremely rewarding to give others a product they enjoy using and capabilities they didn't know were possible.
A user story is a description of the problem in the context of the user. A specification is a description of a proposed solution in the context of the dev. It is the developer story. They are dependent but separate things. The better the dev's understanding of the user context, the better (== faster) the result. In my way of working, the developer story is a general outline of the solution app or feature expressed as a narrative (no implementation details).
The most important skill I have developed as a dev is the user interview. Another important skill is the ability to look things up. Both skills are simply the ability to ask the right question. You ask the right question by understanding the context.
That is why I love your channel: a few-minute clip expresses the essence of the topic, enhanced by your professional experience.
Very nice. I've been saying for years that exactly zero people know how to write requirements. This is a much more lucid and nuanced assessment.
Sadly enough you just don't learn that in software engineering or computer science classes or courses.
This used to be a clear thing among software engineers and software professionals. Scrum-cert charlatans came along and now it is an ancient, forgotten and forbidden art.
@@errrzarrr in my (limited) experience, formal requirements function mainly as focal points for disputes. Feature X doesn't exist because it wasn't properly specified, feature Y was properly specified but hasn't been implemented. The one becomes a bargaining chip for the other when the fighting starts (shortly after kickoff!). I'd love to see it done properly.
But I'm pretty sure that (for example) successful off-the-shelf office suites are developed using ad-hoc hacking in parallel to a pantomime process.
This was a fantastic explanation of how focusing on the user's problems rather than the solution is a crucial beginning to any product development.
I always had a hard time teaching our new POs to request functionality rather than specific solutions. In the future this video will be one of my go-tos when topics like that one turn up. Thanks, Dave - great job! I already shared the vid several times!
Glad it was helpful!
I think this is quite an interesting topic!
As a product manager I put a lot of effort into creating proper tickets, with tech-agnostic user stories but clear, testable acceptance criteria.
However, I feel I'm in a difficult position when it comes to bringing design thinking into it.
Acceptance criteria often express some sort of solution already, when earlier in the video we wanted to stay open about understanding the user and their problems, and come up with a multitude of possible solutions. I guess it would be interesting to see what the lifecycle looks like. Maybe a ticket should start with the user story and the description of the problem. Then design and engineering can work on the "best guesses" to solve the problem and define the acceptance criteria based on that understanding, and then the development lifecycle would begin?
I would love some opinions about it. Jim in the comments mentioned he wants testable criteria. I was wondering how you would define them when you're at the early stage of describing the problem, without proposing a solution immediately. I'm usually trying to give designers the freedom to come up with whatever they see fit and to work with engineering to make sure it is also feasible. I think this all comes back to a good discovery process that should have happened before the creation of the story itself.
I have worked in a team which used BDD to test business rules related to different types of holdings in a bank (stocks, bonds, cash etc). They were using Cucumber, a tool which can be used in many different ways: as executable specifications, as functional/acceptance tests, or even for testing implementation details. Our team unfortunately sometimes used Cucumber the wrong way, using it to test that files were written etc, meaning we were testing implementation details. One technique used in DDD is called event storming. I have no practical experience of it but I am eager to try. Using this technique it is possible to create executable specifications, which will guide developers to design a system using BDD/TDD. Event storming should be performed in a couple of coherent sessions, each taking about 2 hours, gradually refining the user story. The gist of the first such session is not to delve too much into technical stuff, but instead to focus on the user's mental picture of what the system should do. Everybody in the team should be able to provide input and to learn from a session, meaning BAs as well as devs and testers should be able to participate. The "WHAT" the system should "do" (written on sticky notes) should gradually be refined into a simple business model of the core domain (what is most important) and sub-domains (domains that the core depends upon). The next step is to further refine the core domain into simple "UML diagrams", which can now even be understood by BAs. We should never write these kinds of "technical diagrams" at the beginning of the event storming, since BAs seldom understand them and only experienced senior developers would fully participate. Throughout the process, you are better off using paper sheets, not a whiteboard, as you will probably want to re-open a session from last time; the whole process may take 1-2 weeks. In this way the design process can save a lot of time, and you may also produce "executable specifications" that are more correct, because everybody in the team will participate and learn/contribute.
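To make the "domain language, not implementation details" point concrete, here is a rough sketch assuming an invented Portfolio domain model (the names add_holding and total_value are made up for the example). The check is phrased in terms of a business rule about holdings, never in terms of whether a file was written.

```python
# Illustrative executable-specification style test, phrased in the problem
# domain (holdings in a bank) rather than in implementation details.

class Portfolio:
    """Toy domain model standing in for the real system under test."""
    def __init__(self):
        self._holdings = []

    def add_holding(self, kind, quantity, price):
        self._holdings.append((kind, quantity, price))

    def total_value(self):
        return sum(quantity * price for _, quantity, price in self._holdings)


def test_portfolio_value_includes_all_holding_types():
    # Given a portfolio with stocks and bonds
    portfolio = Portfolio()
    portfolio.add_holding("stock", quantity=10, price=5.0)
    portfolio.add_holding("bond", quantity=2, price=100.0)
    # When the portfolio is valued
    value = portfolio.total_value()
    # Then the valuation covers every holding type
    assert value == 250.0
```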
As someone who's been on the "user story" side of the house and not involved with "requirements", I had never really put that much thought into this until all that changed recently. Now, being heavily involved with product ownership, I am seeing how vitally important a clear understanding of both of these is, and how that impacts traceability in the end.
We’ve moved towards User Stories and Acceptance Criteria, all with emphasis on the “what” and not the “how”. What I like about this approach is that it is lightweight, flexible and objective rather than subjective. It’s measurable, which our senior team likes, and sets us up well for UAT. And of course we no longer produce long documents that nobody reads.
It is a VERY nice way to work! I am pleased that you like it.
Couldn't agree more. This is such a great video. User stories are about determining what a user needs to be able to do, as simply as possible; they are not a bureaucracy exercise to justify why something can't be done or why it isn't a developer's fault that they cannot implement it.
Regarding pressing buttons, it seems that some car manufacturers are implementing physical buttons again in their dashboards instead of touchscreens because they are considered safer. Buttons offer better feedback, so the driver can keep their eyes on the road instead of diverting their attention to make sure that whatever they pressed on the screen is the correct thing and actually got pressed.
My requirement was for a succinct explanation of best practice that could change my mind (when it should be changed).
Requirement delivered. Thanks as always Dave!
I use both user stories and classical requirements for my projects. Basically I define user stories for what the users want or need, and requirements for what must be there. A simple example (please mind this is a YouTube comment and not a formal document):
Story
As an employee I want to log on the application without the use of a password.
Requirement
The system must support FIDO as a sign-in mechanism.
The point is that the end-user doesn't care about security, but the organisation might, so you need both.
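As a sketch of how that split might look in automated checks (everything here is invented: the SignIn class is a hypothetical stand-in, not a real FIDO implementation), the user story and the organisational requirement could each get their own test, keeping the "what the employee needs" separate from the mandated mechanism.

```python
# Minimal sketch, assuming a hypothetical SignIn service.

class SignIn:
    """Toy stand-in: signs users in via whatever mechanism is configured."""
    def __init__(self, mechanism="FIDO2"):
        self.mechanism = mechanism

    def authenticate_with_registered_device(self, user):
        # No password is ever requested; the registered device vouches for the user.
        return {"user": user, "signed_in": True, "password_prompt_shown": False}


def test_employee_signs_in_without_a_password():
    # From the story: the employee's need, with no mention of the mechanism.
    result = SignIn().authenticate_with_registered_device("alice@example.com")
    assert result["signed_in"] and not result["password_prompt_shown"]


def test_sign_in_mechanism_is_fido():
    # From the organisational requirement: the constraint on the "how".
    assert SignIn().mechanism == "FIDO2"
```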
NFRs are a different thing
This summarizes my worries so well!
I have learned that US is “an invite for a conversation”. It did not start as that. As a beginner you learn about specifications.
I’m in a project, a rewrite of a product, where we are handed a pretty finished specification as “User Stories”. Project Managers communicate with Customers. We developers implement the “US” from a technical perspective and there is little focus on the product as a whole. The discussions are left to the PMs who are always on us about not making enough progress. I just wished that they understood the real value of agile and that US are not simply specifications. That would make the developers approach development differently - seeing the product and daring to make choices independently when needed.
I’m aware that the customer is deciding how to communicate with them.
What you describe is an org that simply renamed their old methods of working and understanding, with agile and/or scrum terms.
@@-Jason-L Exactly.
Excellent summary of a crucial topic. If you want to save yourself and your team a lot of time and emotional anxiety, follow this advice.
Thank you for another very clear exposition. My concern is this: there seems to be a lot of ambiguity in so many IT discussions now over the use of the term "user". For many years I was involved in both development and support of systems - usually individually "simple" but intimately interlinked with others to create a complete and coherent system in the most demanding OLTP environments. The term "user" normally denoted the head of the relevant area of the business: as you suggest, these people often didn't really understand what the day-to-day users of the system needed - no user stories there, but we managed, given the relatively straightforward interfaces between full-time, well-trained users on simple screens and the centralised system with a common look and feel.
However, the whole question of "who is a user?" has got infinitely more complex now, where systems are generally directly exposed to the public through the Internet. Is the user still the person in the business who is commissioning this work, and presumably paying for it, as is implied especially in some talks I see on DevOps? Or me, as a potential customer of the organisation, who is expected to understand what I see when I log in? Fifteen years after retirement I consider myself as a seasoned "user" of online systems, and I am still constantly confused by what is put before me. (The most common fault is assuming that I know my way round the website, and where I'm going next. As I am not psychic, I often get lost, and once this happens it is too often impossible to get back to a point of stability.)
Once your users are the general public, the paradox arises that the people who best understand the "desired business interaction" are probably the least qualified to understand what the idiots like me out there are really going to need, to blunder round and achieve it. And from my own experience, the more you pour information and guidance onto the screen, the more confused the recipient becomes. In my book Good Code is Not Enough, I seriously recommend the use of a stroppy teenager to look at this area. (I had always referred to this activity as external design, but the term User Stories is much better and I wish I had known about it before.)
Thanks Dave. Nice and clear. What's your take on use cases?
Thirty years ago (and longer) W. Edwards Deming was telling everybody who would listen that not only do you need to understand your users' requirements, you need to understand why they require them.
"People don't know what they want" - that's so important to understand!
It's a crucial part of the dev's job to help and support them in finding that out, to eventually provide a satisfying product.
Unfortunately, in the modern industrialized development process this is often not addressed properly, and devs are instead just seen/treated/used as code monkeys - with the predictable not-so-happy outcome for all sides.
We have a User Persona, and use their empathy map context as we write the User Stories, and we write them to build on the User Journey map to get truly accurate results. It consists of:
- What is the job they do, and what should our system do, in process detail?
- Why is that the right sequence of the process?
- What are the constraints they fear facing? And what are the results (risks/losses) they'll face?
- How do we measure those losses properly to detect, diagnose and notify the user in case they occur?
- How much (resources: time or money) does each step usually take today?
- And how much reduction of that loss should we aim for to make it useful and profitable for them? (That's the lean part)
- Lastly, ask them: if this user story didn't exist or wasn't released, would the solution still function or not? (Initial prioritization)
We then put a clear one-liner user story definition in the standard format on top of that, and then each point is added.
And after the whole user journey with its defined stories is gathered and clarified, we re-prioritize all the user stories with a weight from 0 to 5: the critical ones are 4 or 5, 3 is not critical but important, 2 is nice to have, 1 is a wasteful coding effort but still useful for the client, and 0 is never going to happen - the last being classified from the risk and fear questions and all the bad assumptions you could make that were rejected by the users themselves in a brainstorming session at the end of that meeting.
Yeah, that's how I do it. Probably not perfect and in need of more simplification for the more experienced fellows here, however it works, and I'm open to advice too.
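Purely as an illustration of the 0-5 weighting described above (the example stories and weights are made up), a backlog ordered by that scheme might be produced like this:

```python
# Tiny sketch of the 0-5 weighting: 0 ("never going to happen") is dropped,
# the rest are ordered so the critical (4-5) items come first.

stories = [
    {"story": "Pay an invoice online", "weight": 5},
    {"story": "Export history as CSV", "weight": 2},
    {"story": "Dark mode", "weight": 1},
    {"story": "Blockchain everything", "weight": 0},
    {"story": "Get notified when a payment fails", "weight": 4},
]

backlog = sorted(
    (s for s in stories if s["weight"] > 0),  # 0 = rejected in the session
    key=lambda s: s["weight"],
    reverse=True,
)

for s in backlog:
    print(f'{s["weight"]}: {s["story"]}')
```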
Simple and Effective explanation. I always try to jump to solution...
I remember fondly the cavernous abyss between user’s capability to describe what they need from a system and the practical reality of what they are actually supposed to be doing. This chasm is cultivated under the guise of “not my job” reinforcing the idea that users have that they do not need to understand what a computer does or how software operates, they just need to demand the darn thing does what they wish in their heart of hearts, without any effort on their part to formulate the question.
You are lucky if the actual user is the one describing their issues. That's why "user stories" often aren't user stories at all; it's someone else making things up and dictating implementation (telling devs what and how to do things instead of actually describing the user's issue).
Remember, your job is to understand the user's problem and translate it into solutions. Don't ask the user for solutions, because that's hell for you and for them.
from my experience there are many levels of stakeholders in between "what" and "how", and all of them need a unique blend of what + how. Also "how" may be evolving as we understand the technicalities of our solutions. It's something like a V shaped diagram, where "how" increases and "what" decreases as we move down the levels.
That was enlightening. For the past 16 years I never had a chance to see proper stories. I will be sharing this video with my colleagues now. I wonder if you have any advice for the case where we have stories that describe no inputs or buttons, and then very strict designs that are very difficult to connect with the stories?
Part of this approach is to try and move the design responsibility into the team. I think that UI specifics are better communicated as "style-guides" and case-by-case collaboration with usability experts when you hit something the style guide doesn't cover. Stories should NOT be specifying UI detail, if they do, we are back to coding by remote control, which never really works very well.
@@ContinuousDelivery Thank you
Great content. I recommend my fellow engineers read about NLP applied to User Stories.
Parent: What & Context
Child: How
Based on my experience, the most important thing to understand is the concept above. It doesn't really matter what you call it or how you want to structure it.
Let's take a user story. It identifies a user, states what the user wants, why the user wants it, and acceptance criteria for the solution. Of all that information, what the user wants is the most important. Everything else is a constraint or provides context for creating the solution.
If you have multiple users, you can end up with different users asking for the same thing for similar or different reasons. You can also have the same user asking for the same thing for different reasons, and different constraints from different users. How you want to structure that information is up to you.
Using user stories is great for defining some items. Story boards are also good. Both are great documentation in a waterfall methodology, passing into the build cycles. If you are struggling to use any documentation method, then switch to using Agile.
Great exposition. I do however have a question. At what point do you add acceptance criteria to your stories? An approach I had my teams use was to come up with acceptance criteria after a user story. This didn't seem to help, as it sort of forced us into thinking about how the system will work. Any ideas?
The way I've been structuring acceptance criteria is using the "Given, when, then" format.
Given is the context, what are they doing? What environmental conditions need to be true for them to proceed.
When is the execution of the functionality they desire.
Then describes the sequence of events that transpire after the execution.
I pitch it as a fantasy, "imagine you're about to do the thing you want to do. What does the environment look like? What do you envision yourself doing when you use this functionality? What happens after you start this process?"
It's harder to keep acceptance criteria agnostic of the system they are built within. I work with some very specific, complex data models that, if kept agnostic, defeat the point of writing the story in the first place. So the acceptance criteria we've been writing lately have system-specific details. But that's working for us.
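As a small, hedged example of that "Given, when, then" shape (the Account class and its rule are invented purely for the sketch), the criterion maps directly onto an ordinary automated test:

```python
# Sketch: a Given/When/Then acceptance criterion expressed as a test.

class Account:
    """Toy account standing in for the system under test."""
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            return "declined"
        self.balance -= amount
        return "approved"


def test_withdrawal_beyond_balance_is_declined():
    # Given: an account holder with a balance of 50
    account = Account(balance=50)
    # When: they try to withdraw 80
    outcome = account.withdraw(80)
    # Then: the withdrawal is declined and the balance is unchanged
    assert outcome == "declined"
    assert account.balance == 50
```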
Use of the term "requirements" allows confusion between what a user wants and the most efficient and cost-effective response by the business area to an event.
An event is the recognition of the arrival of data from an external entity at a business area. A temporal event is when a time is reached such as someone's policy about to expire.
The requirement is the business area's response to each event as defined by data and process, a process that often includes some form of automation. A requirement is not what a user wants.
By identifying what data arrives (an event) at a business area, the total data and process response can be defined as a set of rules rather than a list of 'wants and needs'.
The challenge with implementation details is that they need to be discussed as a team and documented somewhere. Researching, designing and documenting is a big job and should be done ahead of time, not when it's time to code it. Whether it lives with the user story ticket or outside it is irrelevant. The big mistake people make is writing everything by themselves and handing it off to developers. That leaves a lot of gaps and no ownership from the team.
Right. Scrum has us believing that it is a bad thing.
Well, that's not how it works in most modern big, successful SW companies. I don't think that you get to build really complex systems that way, because it means that you need to understand the solution before you have built it, and you don't understand it until you have built it. Engineering is an iterative process of learning, and great SW engineering is too.
@@ContinuousDelivery When I'm talking about implementation details, I'm mainly talking about designing the product. It takes 100x less time to change a mockup or specification document than the fully finished code. This is still iterative of course, but I've never seen successful SW companies that do not prepare their projects ahead of time to some extent. I don't think this is incompatible with being agile.
Not to be petty sir, but verification != validation and these terms should not be confused, ever. Great content, I love your videos.
Love this channel, but I've sadly never found any company whatsoever that works this way. Quite simply: I've never seen a company without a "product design" team, and I've never seen a person with title "product designer" who doesn't feel their job is to run off and think up every feature the developers will be asked to implement, down to detailed specifications of interaction and "UX" with detailed mock-ups.
Reading Kent Beck's "Extreme Programming Explained", especially the superior 1st edition, it seems like the main idea of Agile is the same as Lean: lean process management suggests moving all the ownership of process and decisions to the people closest to the work. In software, that is the developers. When I read EPE, it's clear that Beck has taken all the steps of Waterfall: System Requirements, Software Requirements, Analysis, Design, Coding, Testing, Operation, Maintenance, and then recognized that it must be developers, and no one else, who takes charge of defining and controlling every step of this process: collapsing all these steps into "coding", as part of short iterative cycles that are controlled entirely by developers through their day-to-day work.
TDD is obviously a way of bringing testing, analysis and design into the coding part. And stories are a way of bringing the system requirement, software requirements, and analysis parts of waterfall into a shorter cycle that is again part of a developer's work rather than another "product design" team.
What is the role of a "product designer" when developers interact directly with user stories and come up with their own plans and features to address those stories?
Do you have a way, after giving a story, for the team to give back more detail to help with the technical parts or changes? Where do you store or manage those choices so they can be maintained and developed?
to put all of that in a nutshell: user stories help developers fill the shoes of the user, putting developers in the context of what their code actually delivers to the end-user's experience and results
I like that you've spent as much money with Last Exit To Nowhere as I have
I think it depends on the organization and the scope of the software development project. Stand-alone user stories can be requirements. However, I still think there should be an additional requirements document that provides a top-level overview and also covers any other items that might not be covered in user stories. Sometimes creating a document like this surfaces scenarios and issues that weren't addressed initially.
Hi, thanks for these videos, I've watched for a long time and they've really helped me gain a better understanding on the fundamentals to software development outside the IDE.
To anyone who has the experience to answer: I'm on a small team, and so look to minimize resource waste. Like many teams we use tools like JIRA. These systems really seem to be a large time sink if you're using them 'as advertised'. Creating user stories with clear acceptance criteria + justifications and then splitting that up into tasks and organizing that work into sprints and having everyone mark their tickets correctly, etc etc.
My plan is just to have user stories and bug tickets only, keeping all the acceptance criteria in the tests and using passing tests as the indication that work is complete.
Anyone have any lessons learned on doing something like this?
With many years of experience in the Jira context, helping customers and teams to implement and optimally use the system, I know the issues and understand the reputation the tool has with many. And yes, it can be a real time eater. In 95 percent of the cases, it's not the tool, but the processes that were mapped in it or how they were mapped.
The most effective teams I've worked with have used Jira solely to plan and track features, stories, bugs, etc. All without a single line of requirements definition, specification, solution design, or acceptance criteria in the ticket itself. Everything else was documented e.g. in wikis, or as your thought process was, the acceptance criteria in the assigned test cases.
And as always, there is no perfect answer, you have to experiment for yourself and find the way that fits best for you.
@ Very informative, thank you for your response.
You have to keep in mind that at its core, JIRA has always been an issue tracking system, not an application lifecycle management system. Think Bugzilla on nuclear steroids. Its main focus is flexibility and traceability, not ease of use. That's why it usually takes more user interactions to perform a task than it does in other systems. This goes even deeper if you try to use any form of agile framework, as JIRA's support for even Scrum and Kanban is quite rudimentary.
For a small team, I would recommend you use anything but JIRA. None of the reporting, permissions, traceability, time accounting and similar features that enterprises love would help you in any way, and the limitations on being truly agile will quickly become apparent as soon as you outgrow its basic Scrum or Kanban boards.
I've been coaching teams to be agile for well over a decade now, and I have yet to see a single JIRA instance that doesn't get in their way. And every single one of them would change it in a flash if it weren't so deeply embedded in the rest of the corporate red tape.
Try something simple like Trello, or even the Planner app in Office 365. If that's too limiting, maybe Kanbanize, Azure DevOps, CA Agile, TargetProcess or any other tool. But for the love of everything you hold dear, stay away from JIRA unless you're forced to by policy (in which case you'll find ways to subvert it, as most people do).
@ "And as always, there is no perfect answer, you have to experiment for yourself and find the way that fits best for you." can you tell my product owner and project manager this? I make any kind of suggestion and apparently "it's not agile enough" even though they have zero issues with the software that we actually have built nor did we fail to meet our deadlines, but our "jira ticketing is not up to par"'. In other words, their definition of agile seems to go against one of the core values of "working software over documentation". They placed all sorts of dead weights on my team of people who literally took 1week worth of scrum/product-owner classes and now those people claim they are experts. I am now packing my bags
Could we get some examples on requirements and user stories that are done correctly?
We are working on getting some of these for our training course. There is a video already on the channel that shows some real world examples: ruclips.net/video/9P5WG8CkPrQ/видео.html
These are a bit technical for my taste, but they are real examples, and this company, Adaptive, is doing great with them.
Thank you, this is really insightful. In my experience, people think of user stories as just brief SRS documents, and it's always tempting for my colleagues to add extra bits of detail and business rules. I have even seen SQL queries in the acceptance criteria.
Just wondering, where does the UI/UX designer step in? When they create a mockup for a user story, they implicitly state how the system should work without recording any logic in text. So when the developer has a vague user story and a mockup that doesn't capture corner cases, should the developer talk to the designer to figure out the solution?
I am not a big fan of UI/UX as a separate team or responsibility. This is part of the development process, not the specification process, in my opinion. So you need that in the dev team as the SW is being built, not before. Clearly not all devs are great at UX, so use guidelines or common components to define a "house style" - these can be extremely detailed. That should cover 80% of the normal work. When you hit the novel 20%, then draft in UI/UX people to help in the team, to do the work.
The WHY is very much key, as well as the WHAT... As a WHO, I want WHAT to happen WHEN... because I want this OUTCOME.
Still not clear on:
1. How is a requirement different from a user story (other than the format)?
2. Are product owners required to write both?
3. How do dev teams follow both?
I can't agree more with what you explain. I have dozens of examples from my job where I observed how the dev team pushes business users to provide details of the solutions. Then the business users, based on their limited technical knowledge of the systems, try to figure out by themselves how to address their own requirements. A 100% recipe for disaster.
Your videos are unbelievably helpful. Thank you so much. I just want to know: once we speak to a client, do we not need some kind of formal documentation signed off first? Do we not need an SRS document of some kind?
Is it correct to assume that even in Agile, the requirements phase pretty much happens first?
Sometimes you will meet dumb clients, and have to comply, even if what they are asking for is irrational. But apart from those circumstances, if you want to do the best job of SW dev, you need to collaborate in the decision making, not act as servants to someone else. To build good software you need to understand the problem you are working on very well, you can't delegate that to other people. So the more specific, more formal, the requirements or specifications, the less likely you are to be able to do a good job. SW dev is full of surprises, and some fine-grained thing can impact on decisions at a bigger more strategic level. This is nothing at all like a production line, we need to make decisions as we face the problems that need solving, so need the freedom to go back and say, "we can't do what we discussed last week, but how about this instead".
@@ContinuousDelivery Thank you for this. One last question promise :)
So just to put this into context, how would we charge a client? Would it be based on an iteration of work, I guess?
My only concern about this is: what if we get a client who deliberately takes the piss and changes their mind after everyone agrees, and then says "now I don't want this"?
I would be happy to change, but it would mean I would still need to be paid for the previous iteration of work, right?
@@TheMyth2.9 It is a very difficult problem, we struggled with this when I was at ThoughtWorks. We attempted a few different strategies.
The reality is that it is in no-one's interest, in general, to attempt to fix time and scope, and so fix the bill. Because SW doesn't work like that, inevitably one of you will lose. Depending on the contract, either the dev org will lose because they under-bid, or the client will lose, because they either don't get what they want, or they have to keep paying for extras to get what they want. IT'S IRRATIONAL TO ATTEMPT TO FIX TIME AND SCOPE!
For great clients you can have this conversation, and agree that collaborating and working on a kind of "pay as you go" scheme is in everyone's interest. Treat this more like venture-capital funding, pay for some initial work, then pay more when you want to add to it and keep going until you get what you want then stop. I think this is probably the most rational approach.
At ThoughtWorks, we experimented with a few clients where we shared risk with them. We liked their idea for the system we would build, so we took some of the financial risk of building it, in exchange for a share of the payback, once the system was released. We were investing in the project with our work, and got paid more than we would have otherwise earned at the end. Obviously, this is very, very dependent on the project and the nature of the relationship with the client.
@@ContinuousDelivery Thank you for taking the time to explain.
The most common mistake I have seen here is that requirements are often treated as synonymous with a technical specification document.
Are you able to provide a link to the studies that you reference, such as the work by Microsoft and Standish?
Maybe we could describe the "how" after requirement/user stories as a "technical hypothesis" expressing "this is how we are guessing we can solve it technically".
Do you have a video on how to define an agile epic?
I get the reason for user stories, but at some point, code has to be written down, and requirements at the user level have to be broken down into actual tasks. It's easy to understand that the user has a need for a shopping cart. But do we then have a whole series of other user stories for creating the technical components of the shopping cart? How do you even come to agree on what those components are? How do you even relate a user to anything occurring on the backend or database? Should all user stories be treated as if they were independent?
Regarding the last question, could it be that all user stories are equal, but perhaps there's an implied sequence based on how many points they're worth, and the points get reevaluated every sprint? As more gets done, some stories become more accomplishable, potentially fitting into a sprint?
Hi, I don't proclaim to know the answers you are looking for but I can give my thoughts.
The user stories are the descriptions of the problems your customers are facing and generally contain the Who, What and Why.
Of course there are more questions to answer, such as the architecture and implementation, but this information doesn't really belong in your ticket system and I would suggest it really belongs with your code. Either as comments, or docs in your repo, or using a tool like Confluence. However, I would question how much documentation you really need. You really just need to know the technical 'why'.
If your project management system has acceptance criteria such as "Should show the message "Thank you for your purchase" when the user clicks pay" then this to me sounds more like the pm is doing the development and you're just writing the code for him.
Discussing what the components are and how you will solve the user story is, I think, best done face to face, with limited documentation of what you're going to do. If you've just gotten the team together to discuss the user story and decided what you are going to do, I personally think it's a waste of time writing task tickets for those things, as hopefully they'll be complete within such a short time frame that you don't need a ticket for them.
I would say that yes user stories are best when independent. It sounds like you are asking about: If you have a user story for this sprint, and then another that is a follow up for the sprint after, which builds on the previous feature.
So I would say that this normally shouldn't occur. If you have a user story for this sprint and you propose a solution and implement it, then you first want to evaluate whether your change was successful. So you don't want to build on that change in the next sprint, because now you should be waiting to see if the change worked and what the feedback is.
By the time you've gotten that information and the feedback, it's likely that the second user story you would have had is no longer appropriate.
I would suggest that you really only have points for stories you are doing this sprint or not at all if management will allow it.
Anyway just my 2c, best of luck out there.
If you think of user stories as a model of the problem and your design as a model of the solution, it's pretty clear what should be in story form and what should not. It would not make sense for a change in architecture to affect your story map for example. A story need not be the only documentation of what there is to do: if a story takes 3 days to implement, surely you will want to break it down into technical steps. It's just that they are not stories.
Well it's my take on it
In terms of Scrum, e.g. you break down the stories into (more technical) tasks per sprint, and you design and implement your solution by means of those tasks.
The requirements specification should focus on defining the system's desired functionality rather than its specific implementation details. I've come across numerous SRS documents created by experienced business/system analysts that emphasize UI specifics, like "When the user clicks on the place order button, the system responds with a message blah blah...". These documents seem more like user manuals than requirements specifications, in my opinion.
Well explained - Thanks!
Will there also be an episode about the mentioned "Over-Reliance on Manual Testing" as a source of problems?
There are already a few on the channel, but more from the perspective of - "here's how to do automated testing well" and then you won't need manual regression testing.
There's a playlist here: ruclips.net/video/Nmu4URA7pSM/видео.html
I'd like to comment on a couple of your points and am interested in your views:
- Teams rejecting requirements/user stories
- Stakeholders changing their minds
Firstly, I definitely agree that a user story should describe the "what" rather than the "how". There are times now and then where I may add some "how" (as a suggestion to junior developers to point at some useful code perhaps), but these should be very rare.
My ideal user story process is:
1. Stakeholder creates a user story describing "what".
2. User story goes through a team elaboration containing the stakeholder, QA and probably other developers. The purpose is to output a set of acceptance criteria for the user story. Any user story going into a sprint without sufficient acceptance criteria can be rejected by the team as this indicates how it is to be tested (definition of done).
3. After the developers complete the work and the agreed acceptance criteria passes, then the user story is complete. Anything outside of the agreed acceptance criteria (where the stakeholder changes their mind perhaps) would be handled by a further user story in normal circumstances.
E.g. create a registration form. If the requirement of verifying password complexity only comes after the work is done and wasn't included in the agreed acceptance criteria, then this constitutes a new requirement. Whilst stakeholders might not know exactly what they want at the beginning, this prevents too much context switching and ensures changes are small. Since each user story is generally a single releasable change, this also shouldn't affect any release cycle.
Thoughts?
Yes, that sounds about right to me. The aim is to separate the acts of specifying what is needed, from how we are going to meet the need. The second part is too often attempted in a single bound, via requirements. This is a much more fragile way of doing things, because now every change invalidates everything, requirements, tests and implementation. We should decouple those things so that each changes more independently of the others.
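As an illustration of handling a late-arriving criterion, like the password-complexity example a few comments above, as its own story: here is a rough sketch. is_acceptable_password and its rule are hypothetical, invented for the example; in practice the actual complexity rules would come from the stakeholder, not the developer.

```python
# Sketch: a new acceptance criterion, added as a new story after the
# registration form shipped, captured as an automated acceptance test.
import re


def is_acceptable_password(password):
    """Hypothetical rule: at least 10 characters, containing a letter and a digit."""
    return (
        len(password) >= 10
        and re.search(r"[A-Za-z]", password) is not None
        and re.search(r"\d", password) is not None
    )


def test_short_or_simple_passwords_are_rejected():
    assert not is_acceptable_password("abc123")
    assert not is_acceptable_password("onlyletters")


def test_long_mixed_passwords_are_accepted():
    assert is_acceptable_password("correct horse 99")
```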
Hi,
In agile, are there recommendations for formalizing requirements that a system (not a user) has on the system you're building? In other words, how do you formalize API requirements? Can't we use user stories in that case too? I'd guess the user in that case could be the developer?
APIs often still have an end user at the end of the day.
We use SysML for that task. The outer system is then an actor like end users, and via the use case diagrams, you can formulate how the systems interact with each other.
I think it depends on what you are trying to do with your API. If your API is really internal, aimed at providing some lower-level service for a bigger system, I think that you have to be very careful about design. It is way too easy to fall into the trap of fixing your design assumptions into API boundaries. Focusing on the high-level value can be useful, even if the "user" is a step or two away from the API.
If you are publishing an external API, I think that now you more clearly have real users, users of the API, even if the technical interactions are via code. What do those users want? Focus on that, and you will not only have better stories, but also better APIs and better tests.
Overall, what I mean by the bigger value, over and above user stories, is the idea of separating what the code needs to do from how it does it. That is true in every case, and is a property of good design at every level. Your API should abstract the problem so that it is easy to use and hides detail; that's always true.
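A rough sketch of what that can look like in practice, as a consumer-focused test against an imaginary ordering API. OrderClient, place_order and order_status are made up for illustration, not a real library:

# Stand-in client for an imagined ordering API; a real one would go over HTTP,
# but the test only expresses the outcome the API user cares about.
class OrderClient:
    def __init__(self):
        self._orders = []

    def place_order(self, item, quantity):
        order_id = len(self._orders) + 1
        self._orders.append({"id": order_id, "item": item, "quantity": quantity})
        return order_id

    def order_status(self, order_id):
        known = any(order["id"] == order_id for order in self._orders)
        return "ACCEPTED" if known else "UNKNOWN"

# The story is about the API user's need (place an order and know it was
# accepted), not about endpoints, payload formats or storage.
def test_api_user_can_place_an_order_and_see_it_accepted():
    client = OrderClient()
    order_id = client.place_order(item="book", quantity=1)
    assert client.order_status(order_id) == "ACCEPTED"

Because the test says nothing about how the order is transported or stored, the API design can change without invalidating the requirement.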
I’m not a software developer or programmer, but I’m just trying to understand this. How can the user not know what he or she wants? Wouldn’t the customer’s requirements be explicit in the requirements and SOW? Certainly I can see, in the requirements validation process, going back to the customer and suggesting changes to the requirements. In the example of how fast the software loads, wouldn’t the customer generally specify that time in the requirements? If I were the customer, I would think I’d specify load time, DO-178 DAL level, all of the interactions of the software with the system… I haven’t heard of user stories before, but a book I’m currently reading discusses things like every piece of code must trace to a requirement and every requirement must be traceable to code. Thanks for any explanation.
Thx a lot for that very inspiring clip. love it😁
A user story is a promise for a conversation. Alistair Cockburn
Yes, but it’s a sound bite, not enough - "I promise to have a conversation": all clear now? No, so we need more than that. The conversation is extremely valuable, but if stories were only promises, then my story here is the only one we’d ever need.
@@ContinuousDelivery ah. so to me a story is an XP thing, so that promise means that once the story is pulled, all the required people are available for pairing and can document the discoveries of the conversation as working code in one go.
A conversation that never happens. Charlatans rule the world
@@errrzarrr find out why! :)
@@m13v2 Yes, exactly that. The story is a statement of the problem, and its context, and our job as developers is to figure out how we will solve it, so we need all the skills necessary when we are defining the solution, not when we are capturing the need.
I often find that behind every requirement there is still a "why?" to be answered. Drilling down to discover the why behind the requirement can often fundamentally change both the user's and team's understandings of the requirement.
E.g., yes, the user wants a weekly report generated, but why? What will they use this report for? Stepping back to discover this can often get to more "truthful" requirements.
sometimes I learn more from comments!
Sorry for commenting on an old video! I’m 3 years into working on a pretty complex system, as a UI/UX designer working together with the requirements team. The turnover for developers on the project is pretty high, and we have a lot of issues with lack of documentation for technical/implementation details when a dev needs to work on functionality initially developed by someone else. This is on top of a pretty complicated/specific domain, since it’s used for processes with lots of bureaucracy and many different user profiles with different permissions. When we write user stories at a "higher level" without going into detail, the devs always get confused… it stresses me out so bad, hahaha. I guess I’d like to know what the best practices are for maintaining at least the bare necessary technical documentation (like maybe schema diagrams? API docs? I’m not a professional dev so I don’t have a lot of reference) alongside the requirements in user story format…
Sometimes tech debt piles up because of messy programming. Code can always be "refactored"; however, it is always easier to do it before things become interdependent and get tangled up (again, in code).
Question: once the software is released into production and the customer is actively using it, how should we specify the changes the customer wants to the visual aspect of the software? As user stories it doesn't seem right, so what other means do you suggest?
It's part of the design of the solution; it doesn't specify the need. I see a very common anti-pattern of separate UI teams defining solutions and giving them to dev teams as though they were requirements. They are just someone else's guess at how to give the user what they really want. The approach I recommend is to capture what the user really wants as the requirement, then organise work on that to find solutions that can work. If consistency is important, have a style guide that teams use, but I have never seen pixel-by-pixel UI designs really work as a sensible input to development.
Making the calibration machine more user-friendly is a nice idea, but it should be easy to do, or could be done entirely outside the machine (e.g. detect power consumption and hook up a Raspberry Pi). It is not a good example of forcing a team to switch to user stories when they are happy coding requirements into firmware.
I worked in TelCo and we had testing cycles running 6h on an expensive proprietary hardware farm similar to your calibration device example.
We got tired of Ivory Tower Consultants telling us that we were doing Continuous Delivery wrong because testing takes more than a few minutes.
Can we force everyone to obey this fellow's wisdom?
In 25 years of pharma validation of environmental recording, I had one customer who specified and verified that actual staff could use an operational SOP as written.
Everybody else has User Specs that are mostly system specs with some vague admin text.
Especially in the last handful of years, "teams" have become online discussion groups with no defined goals for the recurring meetings.
And this is for Pharma! You know: stuff that could kill you.
Question... where do the UI designs end up in relation to user stories?
We always put them in the story for the developers to use
They should not be in user stories. UI should be in another document tailored for devs.
such a great video. thanks
Glad you liked it!
User stories can be developed into numbered requirements which are traceable through the SDLC. Stories are too high level but provide useful context.
I smell RUP and big upfront design :)
I would do it differently: not asking to like and subscribe from the start before the viewer has gotten any value yet out of the content. By the end of the video, it would sound more realistic and fair.
So, where should the requirements be?
I have no idea what you're trying to say.
However, as an indie developer, I have found the best way to develop my own games:
1. Try to create the game ideas I have in mind
2. Try to make the game playable
3. Share screenshots and small videos of the game
4. Listen to feedback, but not every piece of feedback - sometimes people send feedback that doesn't fit the game
5. When the game becomes playable, release it as early access or release a demo for free in order to get as much feedback as possible
6. When the game is ready, try to market and sell it - however, I have not managed to get to that last point yet
So I don't know what you're trying to say with all the story stuff, but the way I develop my games is working so far. I still don't know how the marketing works yet, but whatever, I will figure it out when the time comes.
How would you approach user stories for a more distributed system? Let's say you have multiple services, of which one handles a connection to some IoT devices, and another concerns itself with the user view of the data collected from the IoT device. From the perspective of the IoT device itself, should the user stories be told from the end-user perspective or from the system perspective (keeping in mind that the IoT team might not know how the data is used by the services)?
So, is it: "As a user, I would like to see temperature data from the device, so I know the component is not overheating"
Or: "As a service, I would like to acquire temperature data from the device, so I might show it to user in some form"
Or something else?
The idea of a user story is to be a "story". Neither of your examples is telling a story, they are describing a solution. I think that the real story is more like:
"The device overheats" - we could just say that!
or we could say what the user expects,
"As an IoT user, I'd like to be told when the system exceeds the safe operating temperature, so that I can shut it down" or
"As an IoT user, I'd like the system to shut down when it exceeds the safe operating temperature, so that my house doesn't burn down".
Your first point, on distributed systems, has two parts; this is more about org design and system design than requirements. There are two answers. Divide the system (and your teams) so that each part is independently deployable, and so loosely-coupled - Microservices. The boundaries between microservices are, by definition, Bounded Contexts, and so are natural translation points in your design, so you can always create requirements at these boundaries that represent natural conversations with the users of your service. So your example is too technically detailed, too focused on implementation, to make a good requirement, and raising the level of abstraction - "I don't want my house to burn" - helps focus on what really matters, AND DOESN'T CHANGE WHEN YOU CHANGE THE DESIGN OF YOUR DEVICE.
The second approach is to treat the whole system as one thing, and test, and specify, it all together, there are nuances to all of this of course.
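To make that concrete, here is a very rough sketch of the higher-level story expressed as an executable specification. The fake device, the threshold value and the names are all assumptions for the example, not from the video:

# A fake device standing in for the real IoT hardware; the tests exercise the
# behaviour the user cares about, not how temperature is measured or reported.
SAFE_OPERATING_TEMPERATURE = 80.0  # assumed threshold, for illustration only

class FakeDevice:
    def __init__(self):
        self.temperature = 20.0
        self.running = True

    def check_safety(self):
        # "As an IoT user, I'd like the system to shut down when it exceeds
        # the safe operating temperature, so that my house doesn't burn down."
        if self.temperature > SAFE_OPERATING_TEMPERATURE:
            self.running = False

def test_device_shuts_down_when_it_overheats():
    device = FakeDevice()
    device.temperature = 95.0
    device.check_safety()
    assert not device.running

def test_device_keeps_running_at_a_safe_temperature():
    device = FakeDevice()
    device.temperature = 35.0
    device.check_safety()
    assert device.running

The specification stays the same whether the shutdown happens on the device, in a service, or in an app - which is the point about the requirement not changing when the design changes.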
@@ContinuousDelivery This simple, succinct explanation has cleared up what consultancies bringing Scrum and user story writing into companies have messed up for me in my mind. Do you have a course which covers this in more depth, from a simple to a detailed approach? It has seriously made me think, and I can see where my business analysts need support. Thank you for your help...
@@Aksamsons Yes, my "ATDD & BDD - Stories to Executable Specifications" course is mostly about this. It is in 3 versions, aimed at different groups: 1) Dev Teams - need to understand all this, and make it work technically; 2) POs, QAs, BAs - need to specify things, and do it in a way that supports dev, actively involved in the creation of stories and specifications; and 3) Domain Experts, Biz people - need to understand how stories work, and a little about why this matters for dev teams.
1. "ATDD & BDD - Stories to Executable Specifications" courses.cd.training/courses/atdd-from-stories-to-executable-specifications
2. "Acceptance testing with BDD" courses.cd.training/courses/AT-with-BDD
3. "Understanding Acceptance Testing" courses.cd.training/courses/understand-AT
Great video
Glad you enjoyed it
You did not conclude why you were having a dig at the over-reliance on manual testing in the process at the beginning - but it was one of the main issues?
No, that wasn't the focus of this episode. I talk about some of that in this one though: ruclips.net/video/XhFVtuNDAoM/видео.html
The main problem with manual testing is that it is slow, inefficient and low-quality because it is very limited, compared to automated testing, and it finds problems too late in the process. We need to find problems as close to the point where we create them as possible, not after we think we are finished. As Deming famously said "you can't inspect quality into a product, you need to build it in".
@@ContinuousDelivery I'm all for automation, but the claim that "manual testing is slow, inefficient and low-quality because it is very limited, compared to automated testing, and it finds problems too late in the process" depends entirely on the type of SUT and the context. Obviously at a lower level unit testing is very useful, but at the end of the day automation is not "testing", it's just a glorified form of checking.
@@ITConsultancyUK Manual testing has a role, but I think it is only better than automated testing for exploratory testing. This doesn't depend on the nature of the SUT in my experience. You can use automated testing in vastly more cases than most people think. SpaceX do this for space rockets; I have done this for scientific instruments and medical devices. The problem is that we often don't treat testing as what it is: a tool for SW dev.
@@ContinuousDelivery I don't think you can realistically achieve and maintain 100% test coverage with automation. Management see automated regression as a panacea; I think at best it can give an indication that the software meets certain "baseline" acceptance criteria. I think how you implement automation based on ROI depends greatly on the SUT, e.g. web GUI automation vs some kind of embedded systems automation require completely different skill sets. In my experience of delivering high-quality, robust software for corporate defense companies, a skilled human tester is far more effective in many regression testing scenarios.
@@ITConsultancyUK Well, lots of companies like yours would disagree. SpaceX (not defence, but similar) and the USAF are currently using continuous delivery and high levels of automated testing for fighter jets. Tesla for cars. The difference is that you have to build the testing into the development process. Sure, people may be cheaper if you do it after the fact, but this isn't how it works for examples like I have given. In these cases you design the system to be testable from day one.
I was involved in building one of the world's highest performance financial exchanges, we ran roughly 100,000 test cases every 30-40 minutes. No army of people can match that coverage. Google run 104,000 test cases per minute.
I have helped manufacturers implement these techniques for medical devices, scientific instruments, computer systems, chip manufacturers, cars, the list goes on. So we aren't talking about "toy websites" here, these are complex, regulated, safety-critical systems. What I am trying to describe here is a genuine engineering approach for SW dev in these contexts.
Sure, you can never test 100%, whatever your strategy, but automated testing is always going to be orders of magnitude more cases than manual testing, unless you do it really poorly.
Tesla recently released a significant design change to the charging of their Model 3 car. It was a software change, test-driven, using ONLY automated tests to validate the change. The change went live in under 3 hours, and after that the (software-driven) Tesla production line was producing cars with a re-designed charging mechanism that changed the max charge-rate from 200 kW to 250 kW. That would be simply impossible if it relied on manual testing.
I think that humans have no place in regression testing, so I am afraid that we will have to disagree on this.
Often I see a gap at the ideation phase. From a requirement, long before a line of even pseudo-code is put down, there is the phase where design/UX/dev do quick paper prototyping / mocks, while some also gather additional user feedback / input in parallel. Then this goes into implementation in as small deliverable chunks as possible (not always releasable though), and the team does testing - or, as I call it, validation: is it what we thought we wanted?
Subscribing because of the shirt
If you’re going to bother to throw a quote on the screen, would you be so kind as to leave it on screen long enough to read it, or even long enough to pause it before it vanishes again?
Where is the link to the new course?
This would work great if you have flexible budgets and timelines, and can afford to let everyone involved contribute their opinion on what the system should do and then reconcile it all.
I think that you are assuming that this takes longer. This is how the most efficient team that I have ever seen worked, so it is certainly not inherently slower. I think that this approach greatly contributed to my team's efficiency. Einstein once said "If I had an hour to save the world, I'd spend 55 minutes understanding the problem, and then save it in the last 5".
This presentation is obvs. software specific, but covers material that can & should be used in a discipline-agnostic fashion, i.e. irrespective of software, hardware or hybrid solutions.
I'm surprised you didn't introduce the notion of DSL when discussing testing via the unknowing user
I think that this is broader than only SW. I think that this is how all good engineering works. You have to keep your focus on the real goal. The solutions are just your best guesses so far of how to achieve it.
Sometimes it is hard to get the scope of a video right, I am nearly always tempted to expand into related topics, because I think of SW as a very holistic exercise. Sorry if I cut off too short in this one, but there are others that cover that topic 😉
You're so great!
Do you need to write stories from a user perspective if you need to optimize the Dockerfile?
No, a story isn't a "technical task". I also wouldn't record this, or add a card to my card wall, or add it to Jira or whatever. This kind of activity is a side-effect of doing something useful, not the goal of your work. So you are either doing this as a tidy-up, or you are doing it for some reason that matters to a user, in which case that reason is your user story.
@@ContinuousDelivery So, is it ok to add a new card that depends on the story, so I can describe what I will do about some point of the story?
If I don't create a card I will forget my implementation plan.
Also, can there be stories for a refactoring process?
@@DamjanDimitrioski I'd advise against "publishing" it anywhere; it is just your way of organising your work. I keep very rough, free-hand notes in my notebook for that kind of thing. The risk is formalising this, so that sometime later your boss comes along and says "so how are you getting along reworking your Dockerfile?". It is an invitation to micro-management. If you can keep conversations with people that aren't directly engaged in the work to only being about externally visible, useful features, it works a lot better.
@@DamjanDimitrioski Nope! 😁 My advice is to try to avoid technical stories as far as you can. The time when you can't do this is when you are already in a mess, with lots of technical debt; now you are in "cut your own arm off to save your life" territory, so you have to do what you have to do. But the aim should be to work on features that are useful to users. Plan and talk about those, then EVERYTHING that you need to do to achieve those outcomes, including refactoring, is part of that story.
Microsoft: ai.stanford.edu/~ronnyk/2013%20controlledExperimentsAtScale.pdf
Standish: Can't find the original right now; here is a critique that references the study www.linkedin.com/pulse/why-45-all-software-features-production-never-used-david-rice/
Given the number of people flocking into the industry, product owners who don't come from a technical background are the first to turn into an obstacle, and in some cases the reason behind the downfall of the project.
Again, Dave shows us that common sense is not so common, and that our terms often mean the wrong things and miss the things that matter.
Every time I see a story with "as a User" I know it's going to be garbage. What user, internal, external, customer, admin? So I had a laugh at your little example there.
I cringe whenever I see "As an architect|developer|product owner...". I know what follows isn't going to describe a user story at all and is probably going to be filled with unnecessary implementation details.
There was a time when I kept seeing "as (the name of the company) ...". Completely meaningless.
I noticed “as a user” in the video too… and felt “garbage” too - but then I noticed it was describing the user story template, not an example story. This was the best user story vs requirement explanation I’ve ever seen.
The issue is usually not in the approach but in the general understanding of the product by the people who write user stories. User stories can be a beneficial and lightweight way to eliminate the waste of too much work. But there is big preparation work before you have user stories, where you understand the different types of users, different conditions, the pains you want to help with, etc.
If you're looking for garbage you will probably find it. I'm looking for useful patterns.
Dave, a software company that I once worked for has recently adopted Waterfall - go figure :| I put this down to the fact that the entire management team is non-technical, doesn't understand the complexities of software development, and there is 0 trust in developers. How do you suggest a company like this can be turned around from a purely process-driven dialog to a results-driven one?
It is certainly possible that they can't be turned around, not all orgs are saveable. I think that you need to frame the argument in terms that make sense to the people that you are trying to convince. There's no point explaining tech processes, to a non-tech person. If the person is commercial, talk about the commercial advantages, if they are worried about risk, talk about quality and safety, if they are nervous about security or compliance, explain those things.
I think that the DORA (formerly State of DevOps Reports) and in particular the "Accelerate" book, are great sources of information to help you structure your arguments in these terms.
I have a video that is a bit related to this "How to Manage Your Boss" ruclips.net/video/bCEmRwsmmlY/видео.html
Sometimes when I get "requirements" that are just technobabble, I ask the person: "If instead of a (whatever prescriptive thing they're asking for) I had a magic lamp, what would that magic lamp give you?"
👍
Does user always mean end-user? I think not. If you have hundreds of developers split into small teams working on different components of a system then the user might be another development team. It becomes necessary to constrain the interactions through technical requirements to ensure all the components of the system work together. This is where it can all go wrong with excessive design leaking into the requirements. The lightest possible touch is needed. However, there is a balance required.
Well it really does depend. If the split between devs and teams is arbitrary or specifically technical, this can cause problems. All these problems are really down to coupling, between the code and the teams. If the code is something like a platform, then that is a bit different, and then treating stream-aligned dev teams as users makes sense. But this is a slippery slope and is more commonly done poorly rather than well. How you divide up work between people has a massive impact on how well you can do things. I have a couple of videos that may help:
Team topologies: ruclips.net/video/pw686Oyeqmw/видео.html
Platform Teams: ruclips.net/video/_zH7TIXcjEs/видео.html
All of this is great as long as you don't work in a bureaucratic company, where it's "this is the task, get onto it". 😐
Hi, I've watched many of your videos and I would often refer to them for advice, but I have some disagreements, mostly based on the reality of the industry. Within this reality, the level of engineers engaging with the "end 2 end process" is not up to par with the implied requirements you make, e.g. many developers only want to code. Right or wrong, it doesn't matter. Many product owners and managers don't know what they want and expect devs to figure it out. Developers are not UX designers. Management has made choices against automation. I know there is space for improvement, but the availability of excellence that can drive your implied advice is not there.
With that in mind, it really depends what the priority and goal is, something that needs to be set by product management. E.g. is it to deliver on promised dates? Is it to deliver functionality when available? The reality with customers is that it will probably be promised dates, just because this is how sales and customers work. With that in mind, requirements need to be fine-tuned enough that there is a technical counterpart, with its compromises and specific scope to code and test for, that can be estimated to be delivered on a specific date. When the previous vary, this goal is compromised. It is a well-known fact in the industry that terms are used however everyone wants. So with that in mind, my number one concern for an engineering team is that the goals for the "next" release are clear and the estimations are realistic to achieve. Any perceived wasted time in preparation for this - that is, requirements definition, user stories, UX etc. - is there to achieve the promised date with quality. And all this needs to be adjusted to the people of the team. Do you work with top devs? Do you work with long-term internals? Do you want to scale on demand with externals?
My point is that the theory is nice and, don't get me wrong, interesting to follow. Reality, though, forces messy situations and a compromise is needed. But the biggest issue of them all is not having a well-defined set of higher goals and restrictions for how the team works, e.g. delivery dates, security requirements, etc.
Don't confuse the culture you are exposed to, as industry reality. There is a difference between writing code, and being a software developer/engineer. Beginners are expected to require direction, but not experienced developers. They need to master the art of creating software to solve problems, not merely write code. This is why we say "do you have 10 years experience, or 1 year repeated 10 times?"
The whole over-reliance on user stories irks me, honestly. They are always overbroad, and there have been many times where either I had to gather further information, or I implemented something and no, that's not what the author meant. At a certain point I really considered just implementing something that fits nothing but checks off the box, just to see what would happen. There were also times when I read a story and all I could say was "What the fuck does this even mean?", "What the fuck is this trying to accomplish?", "Do you have literal brain damage, because this sounds like a disaster waiting to happen."
This happens especially when you are adding more functionality into an existing process flow because do you branch off from that, do you branch back in, or do you just make it a standalone flow that might just end up re-doing the same exact thing? Like why wasn't this just defined originally?
This is doubly annoying when certain things are literally matters of legal compliance. For example for the user to apply for and get a license they need to meet requirements that are defined by statute and our product owner managed to get those completely wrong so I had to go back and change a bunch of stuff. At that point I just started asking for the statute so I can see what legally we need from the users.
I deserve an award, I have been your 1000th like. 🤣
and your award is... my eternal gratitude. 😁
"This isnt a rant about evil products owners"
:(