An Introduction to Residuality Theory - Barry O'Reilly - NDC London 2024

  • Published: 14 May 2024
  • This talk was recorded at NDC London in London, England. #ndclondon #ndcconferences #developer #softwaredeveloper
    Attend the next NDC conference near you:
    ndcconferences.com
    ndclondon.com/
    Subscribe to our YouTube channel and learn every day:
    / @NDC
    Follow our Social Media!
    / ndcconferences
    / ndc_conferences
    / ndc_conferences
    #architecture #cloud #software
    Residuality theory is a revolutionary new theory of software design that aims to make it easier to design software systems for complex business environments. Residuality theory models software systems as interconnected residues - an alternative to component and process modeling that uses applied complexity science to make managing uncertainty a fundamental part of the design process.
  • Science

Comments • 15

  • @blazjerebic8097
    @blazjerebic8097 17 days ago +4

    interesting talk, thanks for putting me on to residuality theory

  • @megaloadian
    @megaloadian 17 days ago +6

    I lost interest in the video when he claimed pickup truck drivers dislike electric car drivers based on their voting preferences. For someone who should rely on science and principles, this is disappointing.

  • @ThomasBergersen-ji4gs
    @ThomasBergersen-ji4gs 14 days ago +1

    This just looks like another way to make a "Risk analysis" table, and to see if the risks can be catered for and mitigated. The presenter seems a bit unhinged, drinking his own Kool-Aid without being able to take any questions that dare to question his theory. Which is not what science is about - you should be able to enter into discussions and debate the merits of something, instead of just blindly following.

    • @Barry-ru9kf
      @Barry-ru9kf 14 days ago

      What is it that you want to debate?
      The idea differs from standard risk analysis in that it does not use probability or impact guesses, and it discards the actual inputs as irrelevant at the end, which is a pretty radical approach that hasn’t been used before.
      On a scientific level all the information required for replication and refutation has been made available through peer reviewed sources.

    • @ThomasBergersen-ji4gs
      @ThomasBergersen-ji4gs 13 days ago

      @@Barry-ru9kf This is assuming that risk analysis is done separately, and that it is just presented as a checklist of things to "CYA" in case of a problem in the future, and how to handle it at that point. I am referring to doing these risk analyses or "what if" scenarios up front, embedded in and part of the design process and the design output. That seems very much aligned with this?

    • @Barry-ru9kf
      @Barry-ru9kf 13 days ago

      This is very different from a risk analysis because it’s not analyzing risk or trying to protect against risk - it’s looking for gaps in our understanding of a problem, how those gaps relate to any possible solution, and any weaknesses in an architecture that will be revealed in uncertain environments. Simply assessing risk during a design process is still burdened by ideas about probability that stifle exploration. The key is the use of random simulation, which gives the weird result that architectures get stronger than when we employ traditional risk strategies. My research shows that this weird result is actually to be expected theoretically and is replicable experimentally - this is a long way from risk management, and it makes the people who work with that very angry.
      This is a very different way of thinking from traditional approaches, but it’s not for everyone. If it feels uncomfortable or feels like something else, it’s probably best not to adopt it.
      A good read is the article “The It’s Just Like….Heuristic” which you can find on the web.

    • @ThomasBergersen-ji4gs
      @ThomasBergersen-ji4gs 13 days ago

      @@Barry-ru9kf "The key is the use of random simulation - which gives the weird result that architectures get stronger than when we employ traditional risk strategies."
      That's exactly what I said. Implementing risk analysis at the start and during the design of a system is a concept very similar to what you are discussing. It's not new, and many companies already practice this. So far, all you're doing is trying to sell us on your theory. My question is: where is the empirical validation of this theory? I would like to review those results and, as scientists do, reproduce them. We don't want another LK-99 situation on our hands.

    • @Barry-ru9kf
      @Barry-ru9kf 13 days ago

      I guess you didn’t read the “It’s Just Like…” article then.
      There’s a huge difference between random simulation and risk analysis. There’s a huge body of literature behind that statement, so if you’d like to do what scientists do, you should start by reading the literature.
      Now, certain senior architects, as I mention in the talk, eventually figure out that random simulation gives better results, but they’ve never written it down, never understood the implications, never formulated a theory - instead we stumble around with half-baked definitions of risk and risk analysis. That some people have figured this out intuitively is actually part of the talk, so I’m not sure what point you think you’re adding.
      If you’re already doing this, then you should know that it works. It seems you’re making two arguments - one is that this isn’t replicable, and the other that you’re already doing it. Contradicting yourself isn’t really the basis for a good discussion, and it seems like you’re attacking the idea for the sake of it. I’d suggest, since you already do this and it doesn’t work, that you spend some time thinking about things. I’m not trying to sell you on the theory, but I would love to see an actual argument against it.
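
To make the distinction debated above concrete, here is a minimal sketch of what random simulation over an architecture could look like, as opposed to ranking a fixed list of risks by probability and impact. This is only an illustrative interpretation of the idea discussed in the thread; the components, stressors, and function names are hypothetical and do not come from the talk or from any published residuality theory material.

```python
import random

# Hypothetical architecture: a set of components (names are illustrative only).
components = {"order_api", "payment_gateway", "inventory_db", "email_worker"}

# Each stressor, when applied, knocks out some subset of components.
stressors = [
    {"payment_gateway"},               # e.g. third-party outage
    {"inventory_db", "order_api"},     # e.g. datastore saturation
    {"email_worker"},                  # e.g. queue backlog
    {"order_api", "payment_gateway"},  # e.g. traffic spike
]

def surviving_residue(architecture, stressor):
    """Return the part of the architecture left standing after a stressor hits."""
    return architecture - stressor

def random_simulation(architecture, stressors, trials=1000, seed=0):
    """Apply randomly chosen stressors and count how often each component survives."""
    rng = random.Random(seed)
    survival = {c: 0 for c in architecture}
    for _ in range(trials):
        stressor = rng.choice(stressors)
        for c in surviving_residue(architecture, stressor):
            survival[c] += 1
    return survival

if __name__ == "__main__":
    trials = 1000
    counts = random_simulation(components, stressors, trials=trials)
    # Components with low survival counts are removed by many arbitrary stressors,
    # regardless of how probable any single stressor was judged to be.
    for component, hits in sorted(counts.items(), key=lambda kv: kv[1]):
        print(f"{component}: survived {hits}/{trials} random stressors")
```

Running it prints per-component survival counts; the components with low counts are the parts of the design that many arbitrary stressors remove, which is the kind of gap the thread's "random simulation" argument is about, independent of any probability or impact estimate attached to individual risks.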