Scaling a Monolith with 5 Different Patterns

  • Published: 19 Jun 2024
  • Want strategies for scaling monolithic applications? If you have a monolith that you need to scale, you have many options. If you're thinking of moving to microservices specifically for scaling, hang on. Here are a few things you can do to make your existing monolith scale. You'd be surprised how far you can take this.
    🔗 EventStoreDB
    eventsto.re/codeopinion
    🔔 Subscribe: / @codeopinion
    💥 Join this channel to get access to a private Discord Server and any source code in my videos.
    🔥 Join via Patreon
    / codeopinion
    ✔️ Join via RUclips
    / @codeopinion
    📝 Blog: codeopinion.com
    👋 Twitter: / codeopinion
    ✨ LinkedIn: / dcomartin
    📧 Weekly Updates: mailchi.mp/63c7a0b3ff38/codeo...
    0:00 Intro
    0:41 Up
    1:57 Out
    3:32 Queues
    5:52 Read DB
    7:00 Caching
    7:53 Multi-Tenant
    8:29 Hot Paths
    #softwarearchitecture #softwaredesign #eventdrivenarchitecture
  • Science

Comments • 35

  • @bobbycrosby9765
    @bobbycrosby9765 1 year ago +31

    We did a lot of this stuff for our monolithic Facebook apps back in the late '00s. Oh, the memories.
    During peak load we would have around 2k writes/sec to our database, and around 15k reads/sec even with heavy caching. I only mention this because people talk about "scaling" but rarely talk actual numbers.
    The database was really the hard part. The code was shared nothing and we could just pile another server on top, but the database was another story. We had something like 5 webapp+memcached servers, but a total of 9 MySQL servers.
    A seemingly mysterious problem we ran into was our working dataset no longer fitting into the database's memory. Previously instant queries started taking tens of ms, which is way too slow - this was an easy fix, we just bought more memory (eventually, 128GB per server). We also ran into a problem of replication lag - replicated servers couldn't keep up with the master since replication in MySQL was single threaded. To help, we had replicas dedicated to certain groups of tables that skipped the rest. We also made sure to hit the master and inject the user's freshest data where necessary.
    A problem with a lot of memory, at least in MySQL, was that cold restarts were painful. After coming back up, a database server wasn't ready to serve requests - it took it an hour or two to "warm up" before it could serve live requests without choking.
    I believe the not-in-cache problem is a "thundering herd" - as in, all the requests coming in stampede the database and kick over your sandcastle when it isn't in the cache. We resolved this by also adding a cache-lock key in memcached: if something isn't in the cache, before going to the database, you must set the cache-lock key. If you fail, you aren't allowed to go to the database. There are tradeoffs here - the process that has the lock key could die. We set it to expire at something somewhat reasonable for our system - around 100ms (see the sketch at the end of this comment).
    We were lucky in that our database schema was relatively mundane. It would have been much more difficult with some of the more complex schemas I've worked with over the years.
    It would be a lot easier to accomplish these days, at least for this particular project. There wasn't much in the way of actually transactional data, and for the hardest hit tables we could have easily gotten by with some of the NoSQL databases that came a bit later. And nowadays, hardware is much more powerful - I would have killed for an SSD instead of spinning rust.
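    A minimal sketch of that cache-lock idea in C#, assuming a hypothetical ICache client whose AddAsync() is an atomic add-if-not-exists (like memcached's "add" command); the names and timings are illustrative, not the original code:

    using System;
    using System.Threading.Tasks;

    // Hypothetical cache abstraction: AddAsync() must be atomic add-if-not-exists,
    // like memcached's "add" command.
    public interface ICache
    {
        Task<string?> GetAsync(string key);
        Task SetAsync(string key, string value, TimeSpan ttl);
        Task<bool> AddAsync(string key, string value, TimeSpan ttl); // false if the key already exists
    }

    public class CacheLockedReader
    {
        private readonly ICache _cache;
        private readonly Func<string, Task<string>> _loadFromDb; // stand-in for the real database query

        public CacheLockedReader(ICache cache, Func<string, Task<string>> loadFromDb)
        {
            _cache = cache;
            _loadFromDb = loadFromDb;
        }

        public async Task<string?> GetAsync(string key)
        {
            var cached = await _cache.GetAsync(key);
            if (cached is not null) return cached;

            // Cache miss: only the caller that wins the lock key may hit the database.
            var wonLock = await _cache.AddAsync($"{key}:lock", "1", TimeSpan.FromMilliseconds(100));
            if (!wonLock)
            {
                // Someone else is already repopulating the cache; back off briefly and
                // re-read it instead of stampeding the database.
                await Task.Delay(50);
                return await _cache.GetAsync(key);
            }

            var value = await _loadFromDb(key);
            await _cache.SetAsync(key, value, TimeSpan.FromMinutes(5));
            return value;
        }
    }

    The short lock TTL mirrors the ~100ms expiry mentioned above, so a crashed lock holder doesn't block other readers for long.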

    • @CodeOpinion
      @CodeOpinion  1 year ago

      Thanks for the comment and details. Really great insights and background info!

  • @SergijKoscejev
    @SergijKoscejev 1 year ago +9

    I love this channel. It changed my mind on software development

    • @CodeOpinion
      @CodeOpinion  1 year ago +1

      Glad to hear!

    • @essamal-mansouri2689
      @essamal-mansouri2689 1 year ago +2

      @@CodeOpinion How do we know that he doesn't mean to say it made him quit software development and follow his passion for something else?
      But seriously, I love the channel too. More than anything, it made me realize that I like software design and architecture way more than writing actual software.

    • @CodeOpinion
      @CodeOpinion  1 year ago +2

      That's a good point! Never thought about it that way 😂
      Thanks for your support! Appreciate it!

  • @roeiohayon4501
    @roeiohayon4501 1 year ago +4

    Hi Derek!
    I just wanted to say thank you for all of the useful and very educational content you upload.
    I am definitely a better programmer and software architect thanks to your videos.
    As a person who loves learning, your videos are amazing:)

  • @krskvBeatsRadio
    @krskvBeatsRadio 1 year ago

    Just love your integration with the EventStoreDB. Ultimately the integration I get the most value from 😂

  • @ShighetariVlogs
    @ShighetariVlogs 1 year ago

    Thank you for putting out this content!

  • @michaelslattery3050
    @michaelslattery3050 1 year ago

    For user-facing webapps you can switch from SSR to SSG (or CSR) + CDN. This offloads your app server from having to generate pages.
    Also, you can use edge cache servers for REST calls. This is like a CDN, but it's a caching reverse proxy. There are several such services (see the sketch below).
    With bounded contexts and/or vertical slicing, you can have multiple databases and therefore multiple database servers.
    In a monolith we once put some of our non-critical tables into an in-memory database (hsqldb). We did this with tables that didn't change or held data we didn't care if we lost.
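    A rough sketch of the edge-caching idea, assuming ASP.NET Core minimal APIs (the route and TTL are made up): a public Cache-Control header is what lets a CDN or caching reverse proxy serve repeat reads without touching the app server.

    // Program.cs - a GET endpoint whose response a CDN or caching reverse proxy may cache.
    var builder = WebApplication.CreateBuilder(args);
    var app = builder.Build();

    app.MapGet("/api/products/{id:int}", (int id, HttpContext ctx) =>
    {
        // "public" allows shared caches (CDN/edge) to store the response; max-age is the TTL in seconds.
        ctx.Response.Headers["Cache-Control"] = "public, max-age=60";
        return Results.Ok(new { Id = id, Name = "Sample product" });
    });

    app.Run();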

  • @PelFox
    @PelFox 1 year ago +5

    I feel like microservices, besides cloud scaling, are about scaling teams of developers. Having 200 developers working in the same monolith could become messy, whereas if each team owns certain services it's a non-issue.
    However, what I often see is 1 developer or 1 team making everything microservices and then they also have to manage all of that themselves, while the system has like 100 users...

    • @dandogamer
      @dandogamer 1 year ago +2

      Precisely - systems mimic the communication structure of our organisations (I believe this is explained in more detail in Team Topologies). That's not to say monoliths fall apart after X users in a company; Google is the most famous example of a monolith used successfully in a huge org. I've been in a company which was microservices everywhere, and it worked fine for ages until the company decided to axe 80% of the workforce. Then employees were outnumbered 10:1 per microservice, which as you can imagine caused an immense slowdown on new features as we were stretched so thin.

    • @oeaoo
      @oeaoo 1 year ago

      @@dandogamer I've never understood this mimic idea. I'd say, not necessarily. The way you map teams to runtime modules and functionality is completely up to you as an org. Isn't it so?

  • @jannishecht4069
    @jannishecht4069 1 year ago

    I would really enjoy a video about different ways to implement multi-tenancy and their implications. Thanks for the terrific content.

    • @CodeOpinion
      @CodeOpinion  1 year ago

      Thanks. There is this video as well with a different take on multi tenancy: ruclips.net/video/e8k6TynqGFs/видео.html

  • @yuliadubov2964
    @yuliadubov2964 1 year ago

    A very helpful summary, thanks a lot! Definitely applied some of these over the years to our monolith and probably will…
    From what I've read/heard so far, microservices have more to do with org structure than with performance or even physical boundaries. If your app is composed of several services, but they are always tested and deployed together - it's not microservices. It's making them independently deployable that produces the complexity. So I think that breaking up the monolith physically but keeping all the CI/CD/deployment flows together can be an extension of this list…

    • @CodeOpinion
      @CodeOpinion  1 year ago

      Correct. Physical boundaries aren't logical boundaries. I talk about that in this video: ruclips.net/video/Uc7SLJbKAGo/видео.html

  • @MichaelKocha
    @MichaelKocha 1 year ago +3

    Me, a 3d game artist thinking this was a video on procedural materials for game environment art, being insanely confused 3 minutes into the video wondering when the art stuff was going to start.

    • @CodeOpinion
      @CodeOpinion  1 year ago +2

      Weird video recommendation! But thanks for holding out 3 minutes 😂

    • @MichaelKocha
      @MichaelKocha 1 year ago +1

      @@CodeOpinion lol I know! I was thinking the same thing.

  • @abdmaster
    @abdmaster 1 year ago +1

    Could you please share the database UX video mentioned at 6:25 ?

    • @CodeOpinion
      @CodeOpinion  1 year ago +2

      Here you go: ruclips.net/video/wEUTMuRSZT0/видео.html

  • @haraheiquedossantos4283
    @haraheiquedossantos4283 1 year ago +1

    Regarding the part where you talked about the email sender not being reliable because of possible failures: should we use the outbox pattern to solve this problem?
    I think it's one of the possible ways to solve it.

    • @Isitar09
      @Isitar09 1 year ago +3

      Yes, write the mail you wanna send into a table / queue / whatever and have a hosted service processing those mails in the background (see the sketch below).
      If you do it in process you have to wait for the SMTP / MS Graph / whatever email sending system you're using before returning to the client. And that is kinda slow.
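      A minimal sketch of that shape, assuming EF Core and a BackgroundService; AppDbContext, OutboxEmail and IEmailSender are illustrative names:

      using System;
      using System.Linq;
      using System.Threading;
      using System.Threading.Tasks;
      using Microsoft.EntityFrameworkCore;
      using Microsoft.Extensions.DependencyInjection;
      using Microsoft.Extensions.Hosting;

      // Outbox row: the request handler inserts one of these in the same transaction
      // as its business data and returns immediately.
      public class OutboxEmail
      {
          public Guid Id { get; set; } = Guid.NewGuid();
          public string To { get; set; } = "";
          public string Subject { get; set; } = "";
          public string Body { get; set; } = "";
          public DateTime? SentAtUtc { get; set; }
      }

      public class AppDbContext : DbContext
      {
          public AppDbContext(DbContextOptions<AppDbContext> options) : base(options) { }
          public DbSet<OutboxEmail> OutboxEmails => Set<OutboxEmail>();
      }

      // Hypothetical sender wrapping SMTP / MS Graph / whatever you use.
      public interface IEmailSender
      {
          Task SendAsync(string to, string subject, string body, CancellationToken ct);
      }

      // Hosted service that drains unsent rows in the background.
      public class EmailOutboxProcessor : BackgroundService
      {
          private readonly IServiceScopeFactory _scopes;
          public EmailOutboxProcessor(IServiceScopeFactory scopes) => _scopes = scopes;

          protected override async Task ExecuteAsync(CancellationToken ct)
          {
              while (!ct.IsCancellationRequested)
              {
                  using var scope = _scopes.CreateScope();
                  var db = scope.ServiceProvider.GetRequiredService<AppDbContext>();
                  var sender = scope.ServiceProvider.GetRequiredService<IEmailSender>();

                  var pending = await db.OutboxEmails
                      .Where(e => e.SentAtUtc == null)
                      .Take(20)
                      .ToListAsync(ct);

                  foreach (var email in pending)
                  {
                      try
                      {
                          await sender.SendAsync(email.To, email.Subject, email.Body, ct);
                          email.SentAtUtc = DateTime.UtcNow;
                      }
                      catch
                      {
                          // Leave SentAtUtc null so this row is retried on a later pass.
                      }
                  }

                  await db.SaveChangesAsync(ct);
                  await Task.Delay(TimeSpan.FromSeconds(5), ct);
              }
          }
      }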

    • @geraldmaale
      @geraldmaale 1 year ago +1

      I usually push it to Hangfire and carry on with my activities. If it fails, it will retry the process x times (see the sketch below).
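      A tiny sketch of that with Hangfire (IMailer and SendWelcome are illustrative names); Enqueue persists the job and returns immediately, and Hangfire retries it automatically if it throws:

      using Hangfire;

      // Illustrative mail abstraction; Hangfire resolves it from DI when the job runs.
      public interface IMailer
      {
          void SendWelcome(string userEmail);
      }

      public class SignupHandler
      {
          public void Handle(string userEmail)
          {
              // Persist the job and return to the caller right away; a failed job is retried.
              BackgroundJob.Enqueue<IMailer>(m => m.SendWelcome(userEmail));
          }
      }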

  • @juhairahamed5342
    @juhairahamed5342 1 year ago

    I have 5 instances of an account microservice which transfer money from account A to account B and then update the data in the Postgres database. My problem is:
    A user sent five requests to the account service, and because the instances work in parallel the requests were spread across all 5 of them. I am already checking whether the user has enough balance,
    but after 2 requests the user no longer has enough balance, so I'm confused about how to check this and enforce data consistency before a request reaches another instance of the same microservice.
    Can you suggest a solution for the above situation?

  • @mohammadutd2323
    @mohammadutd2323 1 year ago +1

    Can you make a video about multi-database multi-tenant apps?

    • @CodeOpinion
      @CodeOpinion  1 year ago +4

      Take a look at: ruclips.net/video/e8k6TynqGFs/видео.html

  • @mahmutjomaa6345
    @mahmutjomaa6345 1 year ago

    How would you implement Replica (Leader/Follower) for EF Core? Would you use LeaderDbContext and FollowerDbContext that both inherit from the same DbContext and disable SaveChanges for the FollowerDbContext?

    • @Isitar09
      @Isitar09 1 year ago

      Something along those lines. I wouldn't call them leader and follower, more a read and a write context.

    • @mahmutjomaa6345
      @mahmutjomaa6345 1 year ago

      @@Isitar09 There are critical reads tho, like permissions or consistent data to calculate bookings.

    • @Isitar09
      @Isitar09 1 year ago

      Yes - in your write path (POST / DELETE / PUT / PATCH, etc.) you use your write context, even for reads, to stay consistent and have a single transaction. In your read path, so for GET requests in an API, you use your read replicas. Read replicas are replicated almost instantly (normally in a few ms), so the end user will not see stale data.

    • @CodeOpinion
      @CodeOpinion  1 year ago

      Ya, this is one approach: have different contexts and override SaveChanges(), or you can have different factories that provide a different connection string to the same DbContext (see the sketch below).
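      A minimal sketch of the two-context variant (the entity and connection-string names are illustrative):

      using System;
      using System.Threading;
      using System.Threading.Tasks;
      using Microsoft.EntityFrameworkCore;

      public class Order
      {
          public int Id { get; set; }
          public decimal Total { get; set; }
      }

      // Shared model so both contexts map exactly the same tables.
      public abstract class AppDbContextBase : DbContext
      {
          protected AppDbContextBase(DbContextOptions options) : base(options) { }
          public DbSet<Order> Orders => Set<Order>();
      }

      // Commands (POST/PUT/DELETE paths): points at the primary/writer.
      public class WriteDbContext : AppDbContextBase
      {
          public WriteDbContext(DbContextOptions<WriteDbContext> options) : base(options) { }
      }

      // Queries (GET paths): points at a read replica and refuses writes.
      public class ReadDbContext : AppDbContextBase
      {
          public ReadDbContext(DbContextOptions<ReadDbContext> options) : base(options) { }

          public override int SaveChanges() =>
              throw new InvalidOperationException("ReadDbContext is read-only.");

          public override Task<int> SaveChangesAsync(CancellationToken cancellationToken = default) =>
              throw new InvalidOperationException("ReadDbContext is read-only.");
      }

      // Registration (e.g. in Program.cs), each context with its own connection string:
      // builder.Services.AddDbContext<WriteDbContext>(o => o.UseSqlServer(writerConnectionString));
      // builder.Services.AddDbContext<ReadDbContext>(o => o.UseSqlServer(replicaConnectionString));

      The factory variant mentioned above works the same way with a single context type, just handing it a different connection string per request.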