Where Do The Software Bugs Come From?
- Published: 3 Dec 2024
- Where do software bugs come from? What does ‘quality’ mean when we talk about quality code or quality systems? How do we create code with high quality? High-quality software is easier to work on, nicer to use, more flexible in use, more robust under stress, more secure, easier to change as we learn more, and more fun to work on, but how do we achieve it?
In this episode, Dave Farley, expert in Continuous Delivery, DevOps and software engineering in general, describes what high quality in software really means, and the ideas that we need to focus on to achieve it. He then describes how best to organise the responsibilities in software development teams to achieve the high-quality outcomes that high-end software development and best practice can deliver.
-------------------------------------------------------------------------------------
📚 BOOKS:
📖 Dave’s NEW BOOK "Modern Software Engineering" is now available on
Kindle ➡️ amzn.to/3DwdwT3
(Paperback version available soon)
In this book, Dave brings together his ideas and proven techniques to describe a durable, coherent and foundational approach to effective software development, for programmers, managers and technical leads, at all levels of experience.
📖 "Continuous Delivery Pipelines" by Dave Farley
paperback ➡️ amzn.to/3gIULlA
ebook version ➡️ leanpub.com/cd...
📖 The original, award-winning "Continuous Delivery" book by Dave Farley and Jez Humble
➡️ amzn.to/2WxRYmx
-------------------------------------------------------------------------------------
Also from Dave:
🎓 CD TRAINING COURSES ➡️ bit.ly/DFTraining
📧 JOIN CD MAIL LIST ➡️ bit.ly/MailListCD
to get regular updates, advice and offers from Dave and Continuous Delivery!
-------------------------------------------------------------------------------------
LINKS:
USENIX “Simple Testing Can Prevent Most Critical Failures” ➡️ www.usenix.org...
"High performers spend 44% more time on new features” is a quote from the Accelerate book based on the State of DevOps reports: "Accelerate, The science of Lean Software and DevOps", by Nicole Fosgren, Jez Humble & Gene Kim ➡️ amzn.to/2YYf5Z8
-------------------------------------------------------------------------------------
CHANNEL SPONSORS:
Equal Experts is a product software development consultancy with a network of over 1,000 experienced technology consultants globally. They increase the pace of innovation by using modern software engineering practices that embrace Continuous Delivery, Security, and Operability from the outset ➡️ www.equalexper...
Harness helps engineers and DevOps teams simplify and scale CI/CD. Sign up for your free account at ➡️ harness.io
Octopus are the makers of Octopus Deploy, the single place for your team to manage releases, automate deployments, and automate the runbooks that keep your software operating. ➡️ octopus.com/
SpecFlow is Behavior Driven Development for .NET. SpecFlow helps teams bind automation to feature files and share the resulting examples as Living Documentation across the team and stakeholders. ➡️ go.specflow.or...
Senior Software Engineer here with 20+ years experience. I share your videos regularly with my team. Your videos are always on point, on topic and directly part of our discussions in real world scrum meetings and retrospectives. I just wanted to say thank you.
Thank you, I am pleased that you find them helpful.
09:36 I'd say a product owner can't prevent bugs with requirements, but they can certainly reduce the likelihood that they'll come about. Bugs can quite often be traced back to requirements. Anything ambiguous, misleading, missing or contradictory in requirements could lead to bugs in the code. So I'd say some bugs could be prevented at the requirements stage. A useful approach is to have the QA team test the requirements before any code has been written, to highlight the above risks. The developer can then assess the risks and make changes as required.
Some bugs are part of the domain too; it's not all in the code. A product owner can come up with requirements that create contradictions within the domain of the product or business. For example, creating a new user story that says an email or notification should be sent to all users of the system, but forgetting there is a feature where some users can opt out of notifications, so that unwanted notification would be sent to them too (and would be reported as a bug by those users).
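To make that concrete, a minimal sketch (all names invented for illustration) of how the naive reading of that story collides with the existing opt-out feature:

```python
from dataclasses import dataclass

@dataclass
class User:
    email: str
    notifications_opted_out: bool = False

def recipients_naive(users):
    """The story as written: 'send to all users'."""
    return [u.email for u in users]

def recipients_respecting_opt_out(users):
    """What the existing opt-out feature actually requires."""
    return [u.email for u in users if not u.notifications_opted_out]

users = [User("a@example.com"), User("b@example.com", notifications_opted_out=True)]
assert recipients_naive(users) == ["a@example.com", "b@example.com"]  # the reported "bug"
assert recipients_respecting_opt_out(users) == ["a@example.com"]      # the intended behaviour
```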
@@gonzalowaszczuk638 absolutely. Your example is what I love about whole team quality: everyone considering the risks, issues and constraints with everything everyone does as they do it.
But having a distinct requirements "phase" is a waterfall smell. Often, the questions that lead to requirements don't actually come up until you're actually writing code. Obscure state transitions that seemed impossible on first inspection, but which turn out to be inevitable in practice. Rather than trying to guess at all of these in the first place, there needs to be a continuous, open line of communication between dev and the design owner, so that as these things come up, decisions can be made quickly.
That design owner (product owner, project manager, designer, etc.) needs to be open to diving deep on the new problems as they come up, and can't just abdicate by saying things like "do what's easiest," or proposing bandaid solutions.
One man's "high quality" makes another man say "nah, you're just being a perfectionist"...
One person's "nah, you're just being a perfectionist" is most people's "why has my project ground to a halt and I can't change anything any more?" 😉
@@ContinuousDelivery That actually can be the same person too... I just gave up. Actually, I reached your vids trying to research how to do integration testing for cloud projects, but I barely have time to actually do anything with all the regular work going on.
yOu'Re OvErEnGiNeErInG
Well, when you've had an entire system grind to a halt because a single space was in the wrong place, I call "perfectionism" being "good at my job"...
13:38 - I would say that testers focus on risk analysis and on exploring those risks, whether subjective or objective. Experiencing a product, using exploratory techniques, opens up a whole area of learning about the product's status that automated checks cannot reach (because computers can't learn and do critical thinking).
There's one flaw in this analysis, in my opinion: developers frequently have a conflict of interest with bug-free code, namely that what constitutes a "bug" is tremendously subjective. One of the biggest disagreements we had on the Internet Explorer team was what to do with a CSS colour specification that did not exist, such as a five-digit hex code or a misspelled name. According to the standard, there were two things you could do, depending on a semantic subtlety.
If this were an ILLEGAL colour specification, it MUST be ignored. The CSS rule is discarded entirely and we act as though it does not exist. This is the fastest and easiest option, so many developers were much in favour of it, to reduce workload if nothing else.
But if it is a MALFORMED colour specification, it MAY be reinterpreted to meet the desires of the author. Given a five digit hex code, you might add a zero on either end, or arbitrarily select either the first or last three digits. Given a misspelled colour name, you might make some attempt to determine what colour they meant.
The CSS standard never defined "malformed" in any way. We spent days arguing about whether these specifications were always illegal, or some could be considered malformed, and in the latter case we disagreed broadly about WHICH specifications were malformed. Ultimately, we decided to treat certain specifications as malformed, and the update was sent into the wild.
We received a great many bug reports to the effect that we were correcting CSS rules other browsers ignored as illegal. Web developers were apparently writing bad CSS that didn't work, leaving it in the files for some reason, and then being surprised when Internet Explorer assumed they had left it in the file because they actually WANTED it to work.
There was also something of a religious discussion about this in many developer communities, where people argued about whether any computer should ever try to "fix" the code you wrote. The division fell largely on experiential lines: the older, more experienced developers wanted their code left alone. The younger, less experienced developers appreciated any effort the computer made to improve their code.
There was also a central idea that when developers write bad code, where "bad" is an inherently nebulous concept, the code should fail catastrophically and force them to fix it. At its extreme, some older developers thought if there was a single error in a CSS file, the entire page should refuse to render - or, at the very least, the entire file should be ignored. More lenient approaches included ignoring the block of code where the error appeared, at gradually decreasing scope, until you reached the standard's position of "only the line with the error."
So if we assume the behaviour implemented in IE was in fact a bug, what exactly IS the bug? To identify a bug effectively, you must not only identify the incorrect behaviour, you must also identify the EXPECTED behaviour. But nobody could agree on the expected behaviour. Even if you agreed the expected behaviour was to treat the specification as malformed, is the incomplete hex code #89FF supposed to become #0089FF, #89FF00, #8899FF, or #99FFFF?
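To make the ambiguity concrete, here is a throwaway sketch (illustrative only, not what any browser actually shipped) that derives all four "reasonable" repairs named above, each producing a different colour:

```python
def candidate_repairs(spec: str) -> dict:
    """Four 'reasonable' reinterpretations of an incomplete hex colour.
    The standard defines none of them, which is exactly the problem."""
    d = spec.lstrip("#")
    return {
        "pad front":        "#" + d.rjust(6, "0"),
        "pad back":         "#" + d.ljust(6, "0"),
        "first 3, doubled": "#" + "".join(c * 2 for c in d[:3]),
        "last 3, doubled":  "#" + "".join(c * 2 for c in d[-3:]),
    }

print(candidate_repairs("#89FF"))
# {'pad front': '#0089FF', 'pad back': '#89FF00',
#  'first 3, doubled': '#8899FF', 'last 3, doubled': '#99FFFF'}
```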
In my opinion, and bear in mind I am a technical project manager which necessarily means I am an arrogant jackass who wants to be in charge of EVERYTHING, these decisions need to be taken out of the hands of development and put into the hands of a QA department. The PM sits in the middle, identifies that dev needs a verdict from QA, and enforces that QA's verdict is treated with proper respect. Developers get tunnel vision - they look at the problem through the lens of getting the code done, not accomplishing the desired task. (Hence the concurrency fix you describe.) QA, on the other hand, is looking at the problem solely from the standpoint of getting the job done, and should really be writing the initial tests for the code.
Essentially, QA writes acceptance tests before there is anything to test, and everything fails because nothing is implemented. As code is implemented, each side typically treats the other as a black box without examining their code; devs don't read test source, QA doesn't read application source. Dev and test leads have access to the source, in the event there are disputes about whether a test is wrong or the code is wrong, and this forms a little iron triangle with the PM. If dev and test can agree on whether the test or the code is wrong, all is well, and the PM just remains apprised of the situation. If they cannot agree, the PM can cast a deciding vote, potentially after consulting stakeholders.
This has its own issues, of course - every approach does - but it evades most of the potential abuses in the system, provided you have a large enough team. This was never a problem at Microsoft, of course, but a smaller company may not be able to implement this effectively.
"Hard in training; easy in battle." -- Alexander Suvorov
I want to subject my code to every possible nightmare case I can think of in unit testing. If it behaves as I expect in these "training" scenarios, then I hope it will handle anything that happens to it in battle ... production.
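In that spirit, a minimal sketch (the function and cases are invented for illustration) of drilling a unit with awkward inputs alongside the boundary cases:

```python
import pytest

def parse_quantity(text: str) -> int:
    """Hypothetical unit under test: parse a non-negative integer quantity."""
    value = int(text.strip())
    if value < 0:
        raise ValueError("quantity must be non-negative")
    return value

# "Training" scenarios: the nightmare inputs, not just the happy path.
@pytest.mark.parametrize("nightmare", ["", "   ", "-1", "NaN", "0x10", "1e9999", "12.5"])
def test_rejects_nightmare_input(nightmare):
    with pytest.raises(ValueError):
        parse_quantity(nightmare)

@pytest.mark.parametrize("text,expected", [("0", 0), (" 7 ", 7), ("2147483648", 2147483648)])
def test_accepts_edge_cases(text, expected):
    assert parse_quantity(text) == expected
```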
💯! I want to test all the awkward cases.
That moment where you think "I'll fix that another time" and never do
Yep, add it to the backlog
In software, there's nothing as permanent as a temporary solution.
I deliberately use the words "hack" and "bodge" to make people uncomfortable when they ask me to do this.
@@edwardcullen1739 In my experience, management can often understand that something is hacky but insist on it to get something out the door.
If the public knew how software is made, they might never get in that aeroplane or make that purchase.
Without getting too technical, I will describe it as a huge pattern made of a line of dominoes that will fail if any single domino doesn't tip over to push the next one. And while you're trying to set up all the dominoes, there's a cat walking around.
I'm not sure I agree bug prevention is solely a developer responsibility. The number of bugs I have seen stemming from overly complex user interactions, as well as unclear user specifications, is significant.
What if our requirements were in the form of acceptance criteria/tests? Don't just provide a user story. Provide unambiguous specifications that can be confirmed automatically. The ability to test the code too is just a nice side effect.
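One hedged sketch of what such an executable requirement might look like (the domain and all names are invented for illustration); the test states the requirement unambiguously and can be confirmed automatically:

```python
import pytest

class InsufficientFunds(Exception):
    pass

class Account:
    def __init__(self, balance: int):
        self.balance = balance

    def withdraw(self, amount: int) -> None:
        if amount > self.balance:
            raise InsufficientFunds()
        self.balance -= amount

# The requirement itself, stated as an unambiguous, automatable check:
def test_withdrawal_is_refused_when_it_would_overdraw_the_account():
    account = Account(balance=100)
    with pytest.raises(InsufficientFunds):
        account.withdraw(150)
    assert account.balance == 100  # refusal leaves the balance unchanged
```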
The requirements can't put the bugs into the code, only developers can do that. The requirements may be terrible, and make it hard for the developer to understand the problem well, but it is still the developer who puts the bugs into the code.
@@ContinuousDelivery One thing I don't think you defined that's kind of important: what is a bug? That's a question that borders on being philosophical. I'll take a quick stab at it. A bug is behavior that's inconsistent with the requirement specifications. And there's the point of the original post: if the requirement specifications are vague, ambiguous, incomplete, inconsistent, etc., how can we possibly avoid bugs in the code? When requirements are less than perfect, then everyone has his/her own interpretation of what the system should do ... conflicting interpretations. It's not until multiple people get their hands on the product that those differing interpretations surface. This is one of the advantages of the tight feedback loop.
Shifting left with better requirements would help with reducing bugs as well. Garbage In/Garbage Out.
As for more traditional bugs, where the requirements are good but an edge case was missed, I agree with you that we need automated testing for as much of this as possible. At one time in my career I didn't see the advantage of this, but I'm all in for as much automated testing as possible now.
@@jimhumelsine9187 I guess the point that I am trying to make is that to me the requirements don't "belong" to someone else, they are owned by the development team. Sure, other people can suggest changes, but if the developers don't understand the problem well enough to build software that is in some sense functionally coherent that is still a development problem to me. The developers are the ones that are closest to the solution, whatever that may be.
The problem specified may be the wrong problem to fix, and I think that is something that can be sensibly outside of the development team, but whatever software we create needs to make sense within our understanding of the problem and its solution. So that means that, to me, a "bug" is something that is within our understanding of the system but where it doesn't work properly in some way. Missing something in the requirements is a gap in our understanding of the system, but not really a bug, by that definition.
Sure, pragmatically what I have described is not how lots of teams work, but I still think it makes sense as a model. So I aim to ensure that the system works, by which I mean that it fulfils all of the behavioural needs that we have identified, and got around to implementing so far. We will inevitably still miss things, but if they cause a failure on the delivery of the behaviour of the system that we have so far they are bugs, and if they are gaps in our understanding of the requirements I'd see those as "yet to be delivered features".
I think that makes sense?
@@ContinuousDelivery Fair enough. I think it's a distinction between "it's not doing what it's supposed to do" versus "it's not doing what it ought to do". The first one is a bug. The second one is customer expectations.
Thanks for the great advice! I miss the old chime sound you used to start your videos with. It got me relaxed and in the learning mindset. The new banging, crashing sound… not so much.
You're completely right. Too bad most companies are resistant to implementing these practices
That's the law of inertia. I'd say ignore it and work your way up until you can roll your own.
Love the video! Picky opinion: I do think that the PO can reduce bugs by making requirements that are more clear, and leveraging SQA in that phase to lay out edge cases and clear up ambiguous requirements. There is definitely a subset of bugs that comes from misunderstanding the reqs or not understanding the implications of a req.
Really nice video and thanks as it's very informative.
For constructive criticism and hopefully interesting debate, I'd say this may be a great video for software quality but is avoiding part of the question of where bugs come from. To me that video would be oriented around the thought processes of developers, what is it in our psychology that results in us making coding mistakes or perhaps misunderstanding requirements.
Also, I've never personally seen the value in the version of TDD where there is a rule of writing tests before the code. To me this is an inconvenient version of writing the code and then the tests. As long as you write the tests at some point, or even just establish a list of test cases first and then write unit/automated tests for them after coding, I feel the same benefit is reached. Would be interested to see other people's opinions; I've asked others about it before and I get a different answer every time, but a respectful debate never seems to happen to completion hehe.
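For readers weighing that debate, a tiny invented sketch of the test-first claim: the failing test is written before the code exists and forces a design decision (here, the cap) rather than documenting one after the fact.

```python
# Red: this test is written first; it fails because apply_discount doesn't exist yet.
def test_discount_is_capped_at_half_price():
    assert apply_discount(price=100, percent=80) == 50

# Green: the simplest implementation that passes, shaped by the test.
def apply_discount(price: int, percent: int) -> int:
    capped = min(percent, 50)  # the cap the test forced us to decide on up front
    return price - price * capped // 100
```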
If everything is the responsibility of the dev, what are the other departments for? I find it funny that everything is a dev job and at the end, when everything is finished, it was a team effort... like real life.
Feature selection, roadmaps, business stuff like that.
I think he draws a fair boundary around what is the dev team's sole domain to worry about, be good at, and be kept responsible for.
This definitely puts more burden on the developers but also gives them more autonomy and power. The effort of the organization should shift to providing the dev teams with quality, detailed, vetted information as input to their process, rather than chucking things over the wall to the dev teams and fixing it in the test cycle. That way, the burden on the developers shifts to ensuring quality code rather than design, interpreting specs, coordinating with product owners, etc.
Having responsibility doesn't mean you do all the work. For example, boards of directors are legally responsible for the company, but that doesn't mean they do all the legal paperwork themselves. At the end of the day, developers are the only ones who can change or effect quality, but product owners are responsible for communicating quality from customers and QA is responsible for investigating/reporting on quality from the product itself.
@@DavinStewart You make a good point. I wonder if anyone has written or spoken on this topic in any detail. Most contemporary theories of agile and quality trace their roots to Kent Beck and XP. This is before the role of product owner emerged, and the starting point of the sprint was a single hand written statement on an index card. This approach implies that the following things are developed during the sprint: acceptance criteria, behavior specifications based on AC, automation of behavior specifications, user experience design, wireframes and UI assets and specifications of their behavior etc. In many cases the contemporary gurus (of agile, TDD, BDD etc) hint that these things could be done ahead of the sprint, but they also cling tightly to the purity and reductive extreme of XP. @Continuous Delivery do you have any thoughts on the starting points of dev sprints?
As always one more clever analysis about software, thank you for the video.
At the beginning you said software is less constrained by physics, just to proceed with an example of limited processing power (made worse by a synchronization mechanism), which is a physical limit of the CPU! 😁
I did say "less constrained" not "unconstrained" 😉
Thank you very much for the great video, Dave!
Thanks
I have code running on production at work this very moment, where I fixed concurrency by forcing synchronization. I did it on purpose though, because I know the hardware can handle it, and removing the complexity was worth it. It's just funny how often this happens 😁
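A minimal sketch of that trade-off (the example is invented): one lock removes the data race entirely, at the cost of serialising the work, which is exactly the deal being described.

```python
import threading

_lock = threading.Lock()
_counter = 0

def record_event() -> None:
    """Deliberately serialise all updates: correctness and simplicity
    over throughput. Fine while contention is low; a bottleneck if not."""
    global _counter
    with _lock:
        _counter += 1  # the read-modify-write is now atomic across threads

threads = [threading.Thread(target=record_event) for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert _counter == 100  # no lost updates, at the price of one-at-a-time execution
```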
This might bite you in the future.
Bottlenecking to a single computation at a time may seem negligible now, but what about when requirements change?
Will it be resilient enough?
@@OggerFN Yeah, I've made the mistake of thinking a piece of code wasn't performance critical enough to be written properly only to figure out that that wasn't really the case after all later down the line.
Most bugs don't happen during the initial development and release of a feature.
It mostly happens months or years later, when the original author or a colleague tries to modify the code.
They all happen then; they just may not be noticed till later if your tests are poor.
Automated tests don't find issues now. They help find issues in the future.
Hey Dave, there was this guy on O'Reilly (he was the instructor in one of the live courses that I attended) who was teaching DevOps testing, and he claimed that unit testing is all but useless.
He said that most software failures are not at the unit level.
For example, today we had a failure where one of our developers misconfigured a DB through Terraform.
How does one handle failures like this? How do you even test for this kind of infrastructure misconfiguration?
Well, at least some of the data says he is wrong. But I agree that you should test that stuff. I think that unit testing alone is not enough and that functional testing alone is not enough, you need both. In my "Continuous Delivery" and "CD Pipelines" books I describe a deployment pipeline that includes both "Commit stage" testing (mostly unit tests) and "Acceptance stage" testing (functional tests). There is a play-list of videos on this channel on Acceptance testing and BDD that will describe the tests that find the config errors.
I'd say that's a cope from the school of "I'm god's gift to software".
The biggest benefit of having *some* unit tests, even if coverage is poor, is that it makes bug find/fix much easier.
It's also _known_ that most bugs appear in the least tested code...
Infrastructure code testing is tricky, as it's inherently difficult to unit test.
The only answer I'm aware of is to ensure that you adhere to the DevOps (sorry Dave 😉) principle of having minimal variance between environments. This is always tricky because people don't want to pay the costs of fully-duplicated environments, but that's what TF etc. are for - to enable you to stand-up and throw-away test environments "rapidly" and to maintain consistency between them.
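Another common approach, for what it's worth, is to test the plan rather than the apply: render the Terraform plan as JSON and assert policies over it before anything reaches a real environment. A rough sketch follows; the resource type and attribute are just examples to adapt to your provider and policy, and it assumes a plan was saved with `terraform plan -out=tfplan`.

```python
import json
import subprocess

# Assumes a saved plan: terraform plan -out=tfplan
plan = json.loads(
    subprocess.run(
        ["terraform", "show", "-json", "tfplan"],
        capture_output=True, text=True, check=True,
    ).stdout
)

violations = []
for rc in plan.get("resource_changes", []):
    after = (rc.get("change") or {}).get("after") or {}
    # Example policy (adapt to your stack): no database may be publicly accessible.
    if rc.get("type") == "aws_db_instance" and after.get("publicly_accessible"):
        violations.append(rc["address"])

if violations:
    raise SystemExit(f"Policy violation, publicly accessible databases: {violations}")
print("plan passes the policy checks")
```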
💯 from under my bed. I knew this since childhood
This video is literally a childhood to adulthood journey 😂
The gatekeeper attitude and responsibilities are an after-effect of some of the structured programming models: it's in the V-model, it's in the waterfall model. However, I will tell you that in modern software, on most teams, testers hate this responsibility. Why? Because testers believe their job is to interrogate the code to determine bits of information that are relevant to share with the team, so it can make an informed decision about whether it is ready to release or not. Sure, we may find a thousand bugs and delay the release, but then that responsibility isn't going to come up very often, is it? And the reality is that even though many companies still use this gated-structure model, managers and leadership often like to override the QA process, or hamstring it so that testers have so little time to test that they don't find anything until it's got into production.
Hi, love the vids that you put out. I'm a junior PHP dev and I've never seen my boss talk about TDD, ever. I want to show him the importance of TDD in our projects, but I don't know how I can apply the stuff that you show in your videos to my projects. Maybe if you showed a code snippet in your videos to illustrate the point you are making, I would comprehend a lot more.
I'm a visual learner myself so... just a suggestion for future videos, so the topics and ideas are more welcoming to new coders :)
Check out several of my older videos. My first on the channel was exactly that, but the sound and video editing has improved since ruclips.net/video/xUi2951ufaw/видео.html
Is your boss a software engineer or a "software manager"? My guess would be the latter. (Though, not all "software engineers" seem able to comprehend the value of TDD...)
This is an endemic problem in the industry, sadly, as it's really difficult to get someone to understand the value of things like TDD if they've never had to fix code someone else wrote.
(One of the most overlooked benefits of unit testing/TDD is that when you have *some*, it makes identifying and verifying bug fixes much quicker...)
If you don't get a positive response, I'd recommend not over-egging, just try to do your best to implement best practice in the tasks you're given. This can be extremely tricky, so you may need to compromise until you've got enough experience to move-on.
One thing is for certain: in your next interview, ask if they do TDD. If the answer is "no", then your answer should be "thanks, but no thanks" - because you'll only be dissatisfied in the role anyway.
This is the hardest thing to do when you're starting out, but trust me, the people you WANT to work for are the people who will appreciate that question/response.
@@ContinuousDelivery Loved that video, and may I say the quality of your videos has come a long way!
Maybe an updated version of that video with a simple CRUD system in mind would be very good. Congrats on the evolution of the channel.
@@edwardcullen1739 To be honest I'm not sure what he is. I know that he has 15+ years of experience with PHP and was a dev just like me, but at some point he decided to open a company.
But I think that he kinda got stuck in time, if you know what I mean. Sometimes I feel it's necessary to refactor a whole section of the code that he wrote, not because it's necessarily wrong but because it's "old code": it's not that readable, or he uses the vanilla way instead of using the framework "shortcut".
But the real problem to me is that when we finish everything that the client asked for, we need to test the whole system, and I'm the one who does that whole thing basically by myself. Of course we send it to the client so they can test with "their own hands", but the testing process is a pain, and it takes way too long to fully do it, and I feel like everything could be fully automated: things like testing creating forms, deleting users, etc.
And if I don't get a positive response, I don't plan on staying at this job for long anyway. It helps me pay my bills, so I won't even bother trying to talk my boss into it too much hahaha.
@@Zacpowerr Good plan. Look at it as an opportunity to practice the art of persuasion - asking questions is a good starting point 😉
"Quality is value to some person who matters." - Gerald Weinberg
This is fantastic advice. Now ... how can I get developers to care about all these quality issues?
How do you get everyone in the chain to care about quality issues? If the focus is to keep pushing changes through, without allowing time to address quality at the start you'll always have a problem. The quality issue will just move to a different team.
British Diplomat: "President Neville Chamberlain, Hitler's army has invaded Poland."
Dave Farley: "This could have been prevented with Test-Driven Development and Continuous Integration."
In that case, I am not sure that it could 🤣
*watches video as I try to fix some bugs in production* 👀
it all comes down to: shorten the cycles.
Do you believe this captures all the nuance of the discussion and provides all the actionable advice one needs to succeed?
Good video, would just mention that software quality falls to devs and designers, especially for more visual products.
Great video, thank you.
Thanks
They’re very agile.
They come from the bunny. The bug's bunny.
You are Anya Jenkins and I claim my five pounds!
@@barneylaurance1865 ???
@@totocaca7035 You had a song blaming bunnies.
@@barneylaurance1865 Me? Who? Sorry, I don't get your reference.
@@barneylaurance1865 Who is Anya Jenkins, and what is this bunny thing? I am very curious.
Maybe if we stopped putting testers in a separate team and then stopped calling them QA. Just a thought.
I think I said that in the video, not the names, I don’t much care if you call testers testers or QA people, but separate testing teams is certainly an anti-pattern.
@@ContinuousDelivery Depends. In the hardware world (using hardware description languages) it is very often a requirement that testers and designers be separate teams. Generally, testing is referred to as verification in the ASIC/FPGA world. Designers all have blind spots on their own designs that are best caught by someone else. There isn't as much overlap in the skillsets of design and verification as one might think. Proper verification takes substantially more man-hours than design, and an incredible amount of CPU time to simulate the hardware.
I think saying they should be separate teams might be too simplistic. Really it depends on the consequences of bugs (respinning an ASIC is very expensive). Failures in safety-critical systems are even worse. To say separate testing teams are an anti-pattern is, I think, a harmful attitude. Now, that isn't to say designers don't have a responsibility to write good, testable code. It doesn't mean they write zero tests. It just means modules get verified by different engineers with a specialized skillset, which is proven in the hardware industry to catch a ton of bugs (this is why your CPU works so well!). I'm not 100% sure how this translates into the SW world, as I mostly work in hardware (but shamelessly steal ideas from the SW development world).
This is a very long way of saying the answer to whether or not you should have a separate testing team is "it depends". Everything else in your video I agreed with, just couldn't resist replying to the comment that my entire industry partakes in an anti-pattern :)
@morech11 However, in safety-critical systems you effectively have one try, or people may die. In remote systems, you may have the opportunity to upgrade/patch, but the cost is much higher. I imagine financial systems also have a similar level of rigor, since mistakes could be very expensive, but truthfully I am ignorant of that domain. I agree a lot of software (possibly most!) does not need this level of rigor, and thus gatekeeping, as you succinctly put it, is not needed. But sometimes it is needed, and this is the point I'm trying to get across: verification is needed, and separate verification teams are not an anti-pattern. However, perhaps I'm confusing definitions, and the broader definition of test isn't the rigorous gatekeeping process I am used to.
Wow, what a radical concept!
Surely no one already came up with this idea - say 20 years ago - because if they had, "the QA team" would be a thing of the past, right?
Five QAs have watched this video.
I like his shirt
A bug happens whenever the mental model doesn't match the behavior of the code.
umm... Where can I get that shirt?
Developers have all these responsibilities, but are they actually given the corresponding power and autonomy to take care of them?
You can't have responsibilities without power. If management/business doesn't give developers the power to act on the quality of the system as you mention, then the devs aren't really responsible for it; you can't be responsible for something you have no way to achieve and no control of.
This is very true. This is why so many older devs just start working for themselves...
That is true, but what we can do is take control of the things that we are in control of, and not abdicate responsibility for those things. I often see dev teams convincing themselves that they need to cut corners on quality. Few managers will say that explicitly, so take them at their word and take the time to build high-quality software.
We devs choose to cut corners because we are buying into the wrong idea that it will save time; it won't. So if your manager says "we need to go fast", then work with quality, because that is fastest!
CDPR developers of Cyberbugs 2077 game should watch this video. That game almost killed them. 😂
The developers or the management who set the budget?
@@michaelnurse9089 The investors. Too bad the developers and management are incompetent.
Security is not really measurable, and there's no such thing as complete security to even discuss. Rather than asking *how much* security you need, projects need to define *security goals which depend on context* and on how a thing is intended to be used. Changes to how it's used (which, together with lasting longer than you expected, you could call the mark of successful software) change the security.
I do like the shirt
Bugs? I make them.
Thank you for the informative videos you do, I like them a lot, except for one thing: PLEASE stop the screensaver-style background graphics. They are really distracting and make it hard for the viewer to focus on your delivery, at least that is how I experience it. Thank you.
Continuous misery
Brilliant programming insights, Dave, but Douglas Adams is for 15-year-olds.
And feeling superior to advice/insight for young people is for immature adults.
What are you talking about? No one was discussing “advice for young people.”
Agile software cannot be quality software. Waterfall software can be quality software. The simple, standard, accepted definition of quality is conformance to specification. A quality hole can be defined as a 1" diameter plus or minus 1/1000th of an inch. You could have more criteria. If that is your specification, then it is quality if it meets the specification. Agile software throws away specifications for flexibility to adapt to new knowledge. Perfectly sane, but it cannot produce a quality product. There is no standard to which it can be measured. You could also say that you can't have a quality book. Which doesn't mean that some books are not very good. Great art. The quality you speak about is marketing quality. Some subjective, fuzzy measure of goodness, wonderfulness and usability that will change from person to person. For software a senior official can simply declare it a quality product because the budget is exhausted and no more is to be done. Software is too complicated to 100% test, so you can set the bar at 99.99% or 73%. Waterfall software could at least theoretically be tested to specification. Just go with the art of software design.
Words have multiple definitions - and in most cases they don't really work by a system of definitions at all - they work by fuzzy pattern matching. Clearly the definition you've given for 'quality' doesn't capture the usage of the term in this video.
With both methods is perfectly possible to create an abomination, something that does not reflect reality. However with Agile you find that out a bit sooner and have processed in place to adjust course and keep quality high.
Waterfall on the other hand...
@@Xorgye It's always great when people are concerned with quality. I think we are all pretty close to agreement. I like to articulate things definitively rather than be vague and emotional, so I think in terms of specification rather than satisfaction and usability.
Software is a two-step process (at least). Step one: define the software that you are going to write. Step two: see if the users are satisfied with it. You can go through these steps multiple times. Step one, next iteration: actually write the software and test it. Step two, next iteration: have users test the software and make comments. Continuous Delivery implies that you will go through this loop numerous times. Step one is heavily rational. Step two can be emotional, fuzzy and squishy.
An important part of design is to break software down (decompose is such a lovely engineering word) into manageable pieces. Each piece should be easily understood and have no more functionality than necessary. No "four tools in one". We can call these small pieces "units", which goes nicely with unit test. A unit can be an object, function or subroutine. Each unit should start with a complete and accurate description of the unit's function. This isn't fluff; it forces you to think hard about design. It is effectively a specification for the unit. You should be able to write the code that fits the specification. You should be able to write a unit test that has excellent test coverage.
Every software test is checking for "conformance to specification". All the automated tests you run over and over again are doing this. You can't test every possible input and output, but the art of software engineering is to pick test cases that exercise the unit well. For a signed 32-bit number this would typically include -1, 0, 1, the maximum and minimum possible values, a few prime numbers, a few even numbers and some random (but repeatable) numbers. The more testing you do, the more apparent which numbers tend to crash things. For memory tests, 0h, 5h, Ah and Fh as wide as necessary are very good at finding bits next to each other shorted together or stuck.
As you integrate units together you should keep writing complete and accurate descriptions of the larger pieces. Agile may not be written to a specification, but it can create a specification (description) as it goes. This will be invaluable to sustaining engineers, and even to the original engineer if he hasn't touched that piece of code for a month. Becoming the sustaining engineer because no one else can understand the code is career death.
Software tests simply don't check for emotional, subjective answers. They don't tell you if the software is intuitive, easy to use and has the right mix of functionality, not too much, not too little. Humans will always be involved in the soft squishy aspects of software testing.
Sorry, but this just comes across as semantic gibberish.
You know that the original description of Waterfall was a critique, right?
You also fundamentally misunderstand Agile.
Agile is not the absence of specification; it's the development of specifications closer to the time they will be implemented. When you're doing Agile, you should still have a thorough requirements specification for the features you've implemented/are implementing; you just develop them together.
This differs from Waterfall, in that:
1. Doing waterfall implies up-front analysis of a problem in a changing world, where the time between analysis and implementation can be years. This leads to the problem of requirements churn, where "management" keep changing their minds as the (perceived) needs of the organisation change.
2. Developing the requirements and implementation in the light of what actually exists, rather than what "will". This works from both the business and technical perspectives.
You get a new requirement that requires fundamental re-design? If you've been Agile, you'll have implemented only what you needed up to that point and so the "loss" is minimal.
SSADM/Waterfall is NO protection/guarantee against such technical redesign requirements. In fact, it was the frequency of these fundamental re-designs in Waterfall projects that contributed to the birth of Agile in the first place!
@@camgere personally I wouldn't be so quick to rely on specifications or discount emotions. Emotions aren't vague either, if someone is so angry with an application that they throw their monitor out the window, I'd say there's nothing vague about how they feel about its quality.
Both Juran and Deming advocate quality as customer satisfaction, not specification. Weinberg describes quality as "value to some person", a highly subjective viewpoint. Pirsig describes quality as relative between the subjective (feelings/emotions) and objective (software/specification).
Without any specification, can software still have quality? I'd say yes, of course, and not everything in a software program can be in the specification anyway.