One concept plaguing software architecture and design (Part 3)

  • Published: 2 Jan 2025

Comments •

  • @Tony-dp1rl 7 months ago +8

    Coupling is best defined by how easy it is to replace. If it is easy to replace, it is loosely coupled and good. If it is hard to replace, it is tightly coupled and bad.

    • @mohamedchibani5935 7 months ago

      Agree. However, the problem is how we define "easy" and "hard".
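The replaceability idea in the thread above can be sketched in code. This is a minimal illustration with hypothetical names (`PaymentGateway`, `checkout`, and the implementations are all invented for the example): the caller depends only on an interface, so swapping the implementation is cheap, which is one concrete reading of "loosely coupled".

```python
from typing import Protocol

class PaymentGateway(Protocol):
    """Hypothetical port: callers depend only on this interface."""
    def charge(self, amount_cents: int) -> bool: ...

class StripeGateway:
    def charge(self, amount_cents: int) -> bool:
        # A real HTTP call is elided here; any class satisfying the
        # interface can be swapped in without touching callers.
        return True

class FakeGateway:
    def charge(self, amount_cents: int) -> bool:
        # Test double: succeeds for any positive amount.
        return amount_cents > 0

def checkout(gateway: PaymentGateway, amount_cents: int) -> str:
    # checkout is loosely coupled: it knows only the interface,
    # so replacing the gateway implementation is trivial.
    return "paid" if gateway.charge(amount_cents) else "declined"
```

The "easy vs. hard" question then becomes measurable in a small way: how many call sites would you have to edit to replace the implementation? Here, zero.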

  • @CraigLivingston 6 months ago +2

    First, thank you for these videos. I see them more as a discussion, kind of a "retrospective" in a way, rather than a discussion of "ok, based on what we've learned, let's do X".
    Part of the difficulty for me personally is... how do we measure the effectiveness of one approach vs. the other, and get visibility? For example: implementing a system using a more monolithic approach vs. a microservice approach. It's difficult to see the "real" costs of everything from start to end, and to get a true measurement of latency, performance, maintainability, all the side tasks related to devops, etc. I feel like with topics like this, the industry does a "let's try this for 10 years... ok, now let's try something else. Or something we did before. Or a hybrid." It all just seems like guesswork and experimentation; there's no way to get a clear answer when it comes to effectiveness, or we wouldn't be in this situation.

  • @FlaviusAspra 7 months ago +3

    The modulith really is about making logical boundaries, without the expense of microservices.

    • @RaVq91 3 months ago

      Isn't it about turning many logical boundaries into a single deployable unit in a more future-proof way, one that allows an easier shift to a more (micro)servicey model?

  • @maf_aka 7 months ago +7

    The video format is great, but it leaves so much to be desired.
    From a business standpoint, why is one approach better than the other? We're engineers because we solve real-world problems, not because we can code and design complicated architectures.
    It'd be better if you (and your guests) talked more about your decades of experience with certain engineering decisions you made and their impact on the internal team and/or the business, e.g. how did migrating from monolith to microservices (or argue it vice versa, whatever) affect OpEx, development velocity, new experimentation and innovative solutions, and the profitability of the business?
    Also, maybe have strongly biased opinions based on your decades of experience, and challenge your guests' perspective on how they do engineering. Backroom talks where two people just agree on everything aren't really exciting, TBH. Besides being more entertaining, it also gives more nuance - every approach must be appraised in context.

    • @CodeOpinion 7 months ago +3

      Great feedback! Appreciate it, sincerely.

  • @AkosLukacs42 7 months ago +1

    The db transaction boundary in a modular monolith is really important IMO.
    If you don't separate the actions that really must be in a single transaction (or a single CUD operation, if you are not using a relational db) from the rest, which can follow up via messaging, you will end up with an accidentally tightly coupled mess again.
    The "you can't scale a monolith" notion probably comes from a couple of things:
    Really huge, messed-up monoliths that take a long time to start and possibly contain several background or scheduled tasks - possibly half of those things not even used anymore, just everybody is afraid to change them. Vs. all the effort that went into reducing the runtime size of containers, AOT compilation, and other methods to reduce startup time.
    On-premises servers, where you had to spec up as much as you could, because later scaling up or out would take months. So scaling out dynamically probably was not even a priority for a lot of projects. Until they really needed it.
    The infrastructure 10+ years ago, when even if you had spare capacity, you had to spin up a VM and provision all requirements with whatever automation tools your company used at the time. Vs. container services now, or even just "basic" auto-scaling services that can spin up a new instance of the software and react faster. Things got progressively better over time, but it's just easier to contrast the current state with how things were 10 years ago.
    And of course our cloud overlords: if they can sell you dozens of instances of tiny machines that cost more than just a couple of semi-decent VMs, and there is also serialization and deserialization overhead and the latency of network calls, just push the industry that way! :D
    And the new and shiny thing is of course cleaner, faster, less messed up than something that has been in production for years, developed by people who left the company years ago, evolving with continuously changing requirements...
    Btw, containerization, Kubernetes, and the other things most often cited as advantages of microservices benefit monoliths as well. If you take advantage of all the things available, it's so easy to scale instances or deploy a complete test environment now!
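The transaction-boundary point above can be sketched concretely. This is a minimal illustration (the schema, `place_order`, and the outbox table are all hypothetical): only the writes that must be atomic share one transaction, and everything else reacts later to an outbox message, outside the boundary.

```python
import sqlite3

def place_order(conn: sqlite3.Connection, order_id: str, qty: int) -> None:
    """Hypothetical modular-monolith handler: only the writes that must
    be atomic share the transaction; follow-up work goes via an outbox."""
    with conn:  # one transaction: order + stock + outbox row commit together
        conn.execute("INSERT INTO orders VALUES (?, ?)", (order_id, qty))
        conn.execute(
            "UPDATE stock SET on_hand = on_hand - ? WHERE on_hand >= ?",
            (qty, qty),
        )
        conn.execute("INSERT INTO outbox VALUES (?)", (f"OrderPlaced:{order_id}",))
    # Emails, projections, and other modules consume the outbox message
    # later - they are deliberately kept outside the transaction boundary.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id TEXT, qty INTEGER)")
conn.execute("CREATE TABLE stock (on_hand INTEGER)")
conn.execute("INSERT INTO stock VALUES (10)")
conn.execute("CREATE TABLE outbox (message TEXT)")
place_order(conn, "o-1", 3)
```

If the follow-up work were pulled into the same transaction, every module touching it would silently become part of the atomic unit - the "accidentally tightly coupled mess" the comment warns about.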

  • @bobbycrosby9765 7 months ago

    I worked on a system that was implemented something like Service Weaver before. The basic way it worked: we had coarse-grained interfaces representing services. We had hand-coded implementations of those services, but we could also point a code generator at an interface to generate client and server code for it.
    This was because we had some remote and some local clients. Local clients could use the raw implementation; remote ones couldn't.
    The benefit was that it was easy to refactor while we were developing it. All our IDE refactoring tools were available to us. We also never had to write any HTTP client- or server-specific code - codegen handled it all. But you really need to work with an immutable design, otherwise subtle bugs will seep through.
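The local-vs-remote arrangement described above can be sketched like this. It is a toy illustration, not the commenter's actual system: `Greeter` stands in for a coarse-grained service interface, and `GreeterRemoteClient` stands in for what the code generator would emit (the wire hop is simulated by a direct call).

```python
from typing import Protocol

class Greeter(Protocol):
    """Coarse-grained service interface; both the hand-coded
    implementation and the generated client satisfy it."""
    def greet(self, name: str) -> str: ...

class GreeterImpl:
    # Hand-coded implementation, used directly by local (in-process) callers.
    def greet(self, name: str) -> str:
        return f"hello {name}"

class GreeterRemoteClient:
    """Stand-in for generated client code: same interface, but each
    call would go over the wire (simulated here by delegation)."""
    def __init__(self, server: Greeter) -> None:
        self._server = server  # in reality: an HTTP endpoint, not an object

    def greet(self, name: str) -> str:
        # serialize -> send -> deserialize would happen here
        return self._server.greet(name)

def use(greeter: Greeter, name: str) -> str:
    # Callers can't tell local from remote, which is why IDE
    # refactoring tools keep working across the whole codebase.
    return greeter.greet(name)
```

The immutability caveat in the comment applies here: if a caller mutates an argument after the call, the local path sees the mutation while the remote path (which serialized a copy) does not - exactly the kind of subtle bug that seeps through.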

  • @StevenEvans 7 months ago

    The "logical" vs. "physical" distinction is a common point of conflation in a lot of architectural views of systems. What are your thoughts on C4 models/Structurizr diagrams for illustrating a logical view of a system being deployed multiple ways?

  • @AkosLukacs42 7 months ago

    Oh, btw, didn't people break up systems when it made sense? For example, separating the public-facing side of your application + the internally used side of your application + background or scheduled jobs?

  • @FlaviusAspra 7 months ago

    A modulith is still distributed. It's highly available and has redundancies.
    It's not a monolith.
    But each worker server can fulfill any task.
    In the load balancer you can PREFER some of the workers for some of the tasks in order to maintain cache coherence, but in a degenerate state the LB can still forward requests to any other worker in the hope of recovering.

    • @bobbycrosby9765 7 months ago +2

      Just to clarify, monoliths can also be distributed, highly available, and have redundancies. Distributing a stateless monolith across N servers is as old as the web.

    • @FlaviusAspra 7 months ago

      @bobbycrosby9765 yeah, but people associate a monolith with something that runs on a single server, with maybe the db on a separate server.

    • @Rick104547 7 months ago

      And yet somehow a lot of people, even very experienced ones (10+ years), seem to have forgotten this.
      Instead it's microservices hell.
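The prefer-but-fall-back routing FlaviusAspra describes above can be sketched as a small routing function. This is an invented illustration (the worker names and `pick_worker` helper are hypothetical): each task key prefers a stable worker so its caches stay warm, but any healthy worker can take the request in a degenerate state.

```python
def pick_worker(task_key: str, workers: list[str], healthy: set[str]) -> str:
    """Prefer a stable worker per task key (to keep its caches warm),
    but fall back to any healthy worker when the preferred one is down."""
    # Illustrative affinity only: Python's hash() is randomized per
    # process, so a real LB would use a stable hash of the key.
    preferred = workers[hash(task_key) % len(workers)]
    if preferred in healthy:
        return preferred
    # Degenerate state: forward to any other healthy worker and hope
    # it can recover (rebuild its cache) for this task.
    for worker in workers:
        if worker in healthy:
            return worker
    raise RuntimeError("no healthy workers")
```

This also illustrates bobbycrosby9765's point: nothing here cares whether each worker runs a monolith or a modulith - any worker can fulfil any task.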

  • @os68 7 months ago +1

    Real decoupling is hard, and in some industries/domains probably impossible (insurance?), soooooo... you look for easy technology-based solutions... and that does not solve the problem, because the problem is a domain/business problem and not a technology problem to begin with...

    • @os68 7 months ago

      In 2000/2010 the only way to force some kind of decoupling was to force the use of an open protocol for the decoupled modules to speak with each other. HTTP was a good candidate, but the latency was just way too high to use it. Then came the cloud, JSON, embedded application firewalls and so on, and if you downscaled your services enough you could achieve low-latency service calls! And with that the community pivoted to microservices as the new standard architecture...

    • @neo17091 7 months ago +2

      Totally agree... sounds similar to Conway's Law... without changing the organizational structure it becomes very hard to achieve a better architecture.

    • @pierrelautrou1210 7 months ago +1

      I was recently asked to provide my point of view on microservices architectures. When I said that the awareness of business analysts regarding this type of architecture is a factor of success in its implementation, it raised some eyebrows.
      Coupling is not just an implementation problem; it's often how the business is organized that introduces coupling.

  • @madhattersc4051 7 months ago

    I think what has kind of been glossed over in the series is that while just turning things into "microservices" doesn't decouple a system, that doesn't mean it has no positive effect. Granted, that pattern can be, and quite often is, abused to the degree of making "nanoservices", but there is purpose and value in breaking up monolithic apps to allow for easier development and deployment. That does indeed come at a cost, and the detriment is what this series has pointed out: it's not decoupled, and you have introduced wire latency that previously did not exist. But it doesn't necessarily mean breaking it up was a bad decision. Like all architectural decisions, it's about knowing the tradeoffs and how to mitigate their negative effects.