Presents a deeper understanding of software architecture. He is onto something. Brilliant.
What a wonderful new perspective. Thank you for bringing it to my attention, Barry.
This talk is interesting. It sounds to me a bit like playing devil's advocate over and over to see how the proposed architecture survives edge cases that we usually do not think about, and then taking a greedy approach to improving its resilience.
- I am not a software architect, but I wonder if there is anything similar in other fields of engineering or architecture.
- How do we know that we have found out enough stresses (sample size in the circle area estimation analogy)?
- How do we know our random simulation is random enough (whether the distribution of probes is uniform in the analogy)?
- Do we do this alone, as a team, or do we run a small contest within the team to try to break the architecture?
- Do we do this only when we build new software, or every time when we change the architecture?
- This approach sounds a bit at odds with the KISS or YAGNI principle. Would the resulting architecture be more complicated than it initially needs to be? Or do we just use this as a tool to reimagine our core architecture and leave space for future known-unknowns?
Sorry if I have asked too many questions.
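The circle-area analogy behind two of these questions (how many samples are enough, and whether the probes are uniform) can be made concrete with a quick Monte Carlo sketch. This is illustrative only; the function name and sample counts are my own, not taken from the talk:

```python
import random

def estimate_pi(n, rng=random.random):
    """Estimate pi by sampling n uniform points in the unit square
    and counting how many land inside the quarter circle x^2 + y^2 <= 1."""
    inside = 0
    for _ in range(n):
        x, y = rng(), rng()
        if x * x + y * y <= 1.0:
            inside += 1
    # Area of quarter circle / area of square = pi/4, so scale by 4.
    return 4 * inside / n

random.seed(0)
print(estimate_pi(100))      # small sample: noisy estimate
print(estimate_pi(100_000))  # larger sample: typically within ~0.01 of pi
```

More samples shrink the variance of the estimate (the sample-size question), and if `rng` were not uniform over the square, the estimate would be biased no matter how many samples you drew (the uniformity question).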
Hi!
These are very common questions. All should be answered in the book on LeanPub (I don't think it’s possible to post links here).
interesting talk, thanks for putting me on to residuality theory
I lost interest in the video when he claimed pickup truck drivers dislike electric car drivers based on their voting preferences. For someone who should rely on science and principles, this is disappointing.
Why do you think he relies on science and principles? I thought he was discussing software architecture.
Why? Just because he has a degree?
This just looks like another way to make a "risk analysis" table and to see if its risks can be catered for and mitigated. The presenter seems a bit unhinged, drinking his own Kool-Aid without being able to take any questions that dare challenge his theory. That is not what science is about - you should be able to enter into discussions and debate the merits of something instead of just blindly following.
What is it that you want to debate?
The idea differs from standard risk analysis by not using probability or impact guesses and discards the actual inputs as irrelevant at the end, which is a pretty radical approach that hasn’t been used before.
On a scientific level all the information required for replication and refutation has been made available through peer reviewed sources.
@DrPierredelaMora This assumes that risk analysis is done separately and is just presented as a checklist of things to "CYA" in case of a problem in the future, and how to handle it at that point. I am referring to doing these risk analyses or "what if" scenarios up front, embedded in and part of the design process and the design output. That seems very much aligned with this?
This is very different from a risk analysis because it’s not analyzing risk or interested in protecting against risk - it’s looking for gaps in our understanding of a problem, how that relates to any possible solution, and any weaknesses in an architecture that will be revealed in uncertain environments. Simply assessing risk during a design process is still burdened with ideas about probability that stifle exploration. The key is the use of random simulation - which gives the weird result that architectures get stronger than when we employ traditional risk strategies. My research shows that this weird result is actually to be expected theoretically and is replicable experimentally - this is a long way from risk management, and it makes the people who work with that very angry.
This is a very different way of thinking than traditional approaches, but it’s not for everyone. If it feels uncomfortable or feels like something else it’s probably best not to adopt it.
A good read is the article “The It’s Just Like….Heuristic” which you can find on the web.
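To make the contrast mechanical, here is a deliberately toy sketch of random stressing versus a fixed up-front risk list. Every stressor name, mitigation, and architecture set below is hypothetical, invented purely for illustration and not taken from the talk or the research:

```python
import random

# Hypothetical stressor dimensions -- illustrative only.
STRESSORS = {
    "traffic": ["normal", "10x spike"],
    "network": ["stable", "partitioned"],
    "vendor":  ["available", "discontinued"],
}

def random_stressor(rng):
    """Draw one combined stressor uniformly -- no probability or impact
    estimate is attached to any outcome."""
    return {dim: rng.choice(vals) for dim, vals in STRESSORS.items()}

def survives(architecture, stressor):
    """Toy check: the architecture survives if it contains a residue
    for each non-nominal condition in the stressor."""
    mitigations = {"10x spike": "queue",
                   "partitioned": "retry",
                   "discontinued": "adapter"}
    needed = {mitigations[v] for v in stressor.values() if v in mitigations}
    return needed <= architecture

rng = random.Random(0)
trials = [random_stressor(rng) for _ in range(1000)]

naive = {"queue"}                         # only the one risk guessed up front
stressed = {"queue", "retry", "adapter"}  # shaped by repeated random stressing

print(sum(survives(naive, s) for s in trials) / len(trials))
print(sum(survives(stressed, s) for s in trials) / len(trials))
```

The point of the sketch is only the procedure: the naive design survives just the stressor it anticipated, while the design shaped by uniformly sampled stressors survives combinations nobody ranked by likelihood in advance.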
@DrPierredelaMora "The key is the use of random simulation - which gives the weird result that architectures get stronger than when we employ traditional risk strategies."
That's exactly what I said. Implementing risk analysis at the start and during the design of a system is a concept very similar to what you are discussing. It's not new, and many companies already practice this. So far, all you're doing is trying to sell us on your theory. My question is: where is the empirical validation of this theory? I would like to review those results and, as scientists do, reproduce them. We don't want another LK-99 situation on our hands.
I guess you didn’t read the “It’s Just Like…” article then.
There’s a huge difference between random simulation and risk analysis. There’s a huge body of literature behind that statement, so if you like to do what scientists do you should start by reading the literature.
Now, certain senior architects, as I mention in the talk, eventually figure out that random simulation gives better results, but they’ve never written it down, never understood the implications, never formulated a theory - instead we stumble around with half-baked definitions of risk and risk analysis. That some people have figured this out intuitively is actually part of the talk, so I’m not sure what point you think you’re adding.
If you’re already doing this, then you should know that it works. It seems you’re making two arguments: one is that this isn’t replicable, and the other is that you’re already doing it. Contradicting yourself isn’t really the basis for a good discussion, and it seems like you’re attacking the idea for the sake of it. Since you say you already do this and it doesn’t work, I’d suggest you spend some time thinking about that. I’m not trying to sell you on the theory, but I would love to see an actual argument against it.