Bro can you explain why some small product-based startups code their backend on their own by building jars and libraries, and don't use pre-built frameworks and libraries like Spring or Spring Boot? Is there any benefit to it, like customisation or something else? Btw great video
“It’s difficult to write spaghetti code with micro services” 9:03 (a little before that time stamp). No it isn’t difficult at all. You just end up with spaghetti spread over multiple processes instead of it being all together.
Cool video! In case of native dependencies for a Lambda function, you can bootstrap the functions beforehand and install required dependencies before the function gets executed. Also good to mention is that the cold start doesn't happen on every request, and you have the option to provision the Lambdas to avoid cold starts, but this affects the cost point 😄
Coming from an IBM Mainframe background and trying to learn web development. Mainframes are often considered Monolithic and outdated. But seeing this helps in discovering what works best, including Hybrid. Effective IT solutions are rarely just one solution. :) Thanks!
17:43 I would disagree: it can scale up to a certain point, but beyond that it just fails to scale, especially on burst traffic where cold starts are an issue (I am talking about 12k invocations per second).
9:00 One may add that traditional enterprise architectures look exactly the same, just that the different components are not spun out into their own server process. In some architectures they are grouped into layers (e.g. presentation, processing, database), which may live in their own container, but in others they live side by side in the same process. This is one of the reasons microservices got adopted quickly: in many cases there was nothing to change in the code, only in how it was packaged.
Omygosh!! What a pleasure to listen to. So many lecturers don't do it for me. I listened to the whole thing. Keep up the good work. Alternative to Lambda on AWS? Is there something I can grab from somewhere that would work on Docker or EC2s on AWS? Be well, Regards, Joel
Top quality content! Thanks for making this often-drab content seem really engaging! Also you have a really top speaking voice! Don't know if you've ever been told that?
I don't see any reason not to go with serverless if you are a startup with no prior experience with server infrastructure. Although my partner (ex-CTO) insists on going with microservices, Docker, and Apache Kafka. I am confused whether he insists on that because he likes to flex. I am more of a business person, like "do it the simplest way possible as long as it works", but I might be wrong. Are all programmers like that? IDK
Very good video. I am used to using one big server, or even clusters, but our new platform will use microservices, and that is all very new to me, so this helped a bit.
For Monolith, you didn't mention that you can easily host a managed app cluster that automatically deals with load balancing and all, so it's not complex at all; you just need to pay the bill. Then normally you have a process watcher: if the app crashes, it restarts it automatically.
Monoliths don't mean no horizontal scaling. They just mean that you don't have individual single-responsibility services, so instead of network hops for functionality, your service has everything it needs in the runtime. You'd be nuts not to have multiple hosts running your service, from an availability point of view (what happens if you have bad hardware?), and running your server with customers accessing the host directly, not through some sort of load balancer or gateway, is a huge security risk.
They also don't mean easy development. Tiny monoliths for your school science fair project sure are easy to develop, test and deploy, but anything at enterprise scale will be a terrible development experience.
Those are all good points and I agree. My main argument is that scaling a monolith horizontally doesn’t entirely cure the single-point-of-failure problem. All instances of the monolith run the same code, and if there’s a fatal uncaught runtime error (e.g. null pointer exception), then during high traffic you *may* find that all instances crash in the same window of time. If availability is a concern, then splitting logic into separate services (and scaling those horizontally) makes more sense IMO. Nonetheless, if you’re sticking to a pure monolith, then having multiple instances is certainly better, and using a load balancer (or cloudflare to proxy at the DNS level) is a good idea to obfuscate the server IP.
Sure, valid points, but I was addressing your slide at 5:16. I'm typically not a shill for monoliths, but that slide is just wrong. Tbh I didn't finish the video after that point, so I don't know if you corrected or addressed it after that slide. With respect to your point about uncaught runtime errors causing crashes in your servers, that argument still applies to poorly handled errors returned by microservices on (or off) the critical path. Like, what happens if your auth service has a bug? What microservices do provide is a lower chance of this scale of event happening, because you aren't necessarily touching the code of your critical and stable services every day. An exception to this that happens very often, though, is shared libraries between services. Anyway, a lot of things are not cut and dried; it depends a lot on how you have set up your processes, development, error handling and testing.
@@Pscribbled Yikes, you completely and totally missed his point. I work in a large telecoms company, and bro, without microservices it's bloody impossible to even imagine deploying our architecture as a monolith and scaling it. Trust me, if something goes wrong the entire spaghetti goes down, it will take us light years to fix, and the cost of that is unimaginable! The developer experience would be 0/100! The guy's slide is totally correct; with respect, you're wrong bro!
@@kaypakaipa8559 lmao telecoms doesn’t tell me anything about your credentials. That’s no flex. I work with micro services as well. My stance is to use whatever architecture makes sense for the scale, cost, performance and requirements of your service. Generally I shy away from monoliths but not because of their inability to scale horizontally… because they can scale horizontally… If I’ve completely missed the point on how I believe Cody is wrong on slide 5:16 where he says it’s not feasible to scale a monolith horizontally, please give me your data and not your anecdotes
Thank you for the video, it's very informative and useful. I started to develop an e-learning system at my company, and I also started with a monolith architecture, but in a modular way. I use NestJS for the backend and NextJS for the frontend. With NestJS it's very easy to develop the system in modules, so I hope that if it becomes necessary I can split it up into services. With NextJS I can also make serverless functions, but first of all I want to keep the business logic in one place; I can separate it out later.
The development process for monoliths can be a nightmare, because you always have to have the whole application open. And if you want to split it, the simplicity goes out the window. The same goes for anything that's not standard. I developed applications to run on JBoss and WebLogic as well as microservices (Spring Boot and Node stuff), and I'd never want to switch back to working on a monolith ever again - unless it's for moving it to microservices. However: if code reviews are not really great, it's easy to mess up microservices. If the developers aren't on the same page about when to create a new microservice and when to include something in an existing one, it can be a nightmare. Then again, integration tests can be easy depending on what you use and whether you split things correctly. For example, you can choose to have only E2E integration tests from the user perspective, and run only microservice-wide tests (and unit tests) within each microservice. Also: customers barely ever pay for refactorings, unless it's really necessary and you can explain the benefits in $$$ (many customers don't speak any other language). The problem is that sometimes you don't know for sure, and after a few more months or years the stuff becomes too expensive to refactor. At that point you end up patching stuff instead of refactoring it for real. I've seen this happen many times in my 14 years of software development.
Every question is an "it depends". Staffing has a lot of impact. A single team is better off with a monolith in most cases, but 20 teams are better off with 20 loosely coupled services. As you said, cost management is constant in any of these solutions. Predictable pricing points to a monolith in many cases.
One small tweak to thinking about the latency of Microservices - They are communicating (hopefully) over an intranet, not the internet. The problem with 3rd party APIs (like, say, using Google for Oauth) is that you have to traverse the internet to access it. To be clear about what I'm saying, here's an example: Accessing data from a microservice is like going to your neighbor's house in the same neighborhood (within a specific distance, like 1/4 of a mile, let's say). Accessing data from a 3rd party service, like Google Oauth, is like having to get on the highway. One is much less busy and a much shorter route while the other is a much longer route and potentially packed with traffic. Not saying that there is no latency with an intranet, but it's negligible compared to 3rd party services.
Monoliths can also have the best performance. And you can scale horizontally with monoliths, which, combined with their lower resource usage per action, will cover what more than 99% of servers need.
For strong domain boundaries with monoliths, just use multiple packages. It forces even junior devs to decouple. And you can then require inter-package communication to go through an event pattern to enforce loose coupling, as in the sketch below.
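A minimal sketch of that event pattern in Node (the names are illustrative; in a real monolith the bus would be whatever in-process eventing your framework provides):

```js
// Loose coupling inside a monolith: packages communicate through an event bus
// instead of importing each other directly.
const { EventEmitter } = require("node:events");
const bus = new EventEmitter();

// products package: publishes an event, knows nothing about subscribers
function createProduct(name) {
  const product = { id: Date.now(), name };
  bus.emit("product.created", product);
  return product;
}

// notifications package: subscribes; removing it touches no product code
bus.on("product.created", (p) => console.log(`notify admins: new product "${p.name}"`));

createProduct("Widget");
```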
The problems with scaling horizontally that were mentioned - caching, load balancers, etc. - are not specific to monoliths. They are horizontal-scaling issues and apply to microservices too.
You left out a dimension. Team Size. Past a certain number of developers a Monolith becomes a bottleneck as the entire team is within blast radius of each other and tied to the same deployment schedule. This was a driver for Amazon's initial move towards microservices. Microservices provide a level of isolation between teams letting them work with less friction as long as they keep the contracts between services fulfilled and have a rollback strategy.
@@codewithryan It depends on team size. I've also seen very small teams go nuts with Microservices when they're not necessary from an organizational perspective.
Good presentation, but a few points from another active practitioner:
* Typically, each microservice has its own database and makes up a "vertical slice". Multiple microservices sharing a database is often seen as an anti-pattern, sometimes informally called "miniservices". Generally, your architecture should have most requests use a single microservice, and "horizontal" movement should be limited, which mitigates latency issues.
* Docker/Kubernetes actually make integration tests a lot easier, since you can use declarative setups like docker compose to spin up a complete copy of your entire app in miniature, assuming you have Docker on your workstation/CI server.
* Kubernetes' ease of installation has come really far in the last few years with tools like KinD, k3s, and RKE2, where you can launch an entire cluster with a single command on each node, or even realistically run on a single node (though you don't get the server-level reliability that way).
* As you mentioned with cold starts, serverless actually tends to be less scalable, and more expensive under heavy load. It really only makes financial sense for extremely infrequently used functionality, and at that point you might as well just use a cron job or a batch workflow engine. The other use case, as you mentioned, is extremely small projects with a handful of users. I think Amazon did an article the other day basically coming to the same conclusion.
One suggestion: instead of just giving static good/bad grades, I'd recommend considering the grades under different circumstances. For example, monoliths have a great developer experience when you have one or two engineers, because it's so simple, but are absolutely miserable when you have 100 engineers, as you now all have to get in line for a maintenance window to deploy your changes and fixes.
Cold starts are not a per-request problem in serverless: if Lambda instances have already handled a request, they stay up for 15 minutes or more handling subsequent requests, and there are multiple solutions for minimizing cold starts to the point where their effect on users is negligible. Microservices, on the other hand, due to their architecture and separate databases as you mentioned, mean a single request might go through multiple servers, multiple authorizations and 3-way handshakes, plus multiple database look-ups in order to query or mutate some specific data - a performance hit that you will get on every request to that specific route. Also, "less scalable and more expensive" is not accurate; it depends on your use case and how you architected it.
There are limits to what can be done with monoliths in a sane way. The moment you introduce processes which aren't directly triggered by the user (e.g. payment confirmation, invoices, automated processes happening overnight...), you're better off doing that in a different "service", which can be a tiny thing (think a webhook). If you have a single process which takes a lot of processing (think image resizing, video/audio encoding, etc.), then you're better off not having your "clerk" (user account etc.) handle that, but a replicable dedicated "service" (Lambda, container, whatever). If you choose to go with "one service per entity/action", then you're in foot-gunning territory already.
If you run microservices hosted in Kubernetes, would you really say that Cost is that bad? It's the most efficient way of utilizing capacity, I would say.
My point of view: if you want to build an application and you expect a lot of users within a few days of release, with high security, the option is a monolith. If you don't expect a lot of users for a couple of months after release, then microservices are your suitable option. If you want to build a small to medium app with minimum security, then serverless is your option.
I don't understand the point you were making in the serverless functions section about whitelisting IP's. You would have the same problem using any other infrastructure. If you were spinning up microservice instances on AWS and you were hosting a database on Google Cloud, the exact IP addresses of your EC2 boxes would also be dynamic and you'd have the exact same whitelisting problem.
Why can't I scale a monolith horizontally (replicas)? I just put my app in a Docker container and run it with docker-compose replicas. On monolith reliability: fatal errors must be treated as unacceptable while you're developing the program. And of course you need to restart the app immediately when it crashes.
A minor point but I think a clearer term for what you call Reliability is Fault Tolerance. Reliability to me would be how many times a system fails, whereas Fault Tolerance would be how well it handles problems which is what you seem to be talking about.
For Serverless, I don't agree with the low rating on Development Experience or the native dependencies limitation. AWS has Docker images which we can download, add all the libraries and code to, and simply upload to work as the Lambda function. Most likely people will be using one vendor anyway, so the security limitation is not a solid point against serverless either. To make response times better we can aim for a more async architecture. In the end it's like choosing the correct tool for the correct architecture: everything works if the choice is right.
Good video. One mistake, however: you assume people will host their sites on Amazon, Azure or somewhere else. However, you can also just host it all on your own servers or in a VPS environment. It does mean that you have to do your own server administration, but once things are running, that's not too complex.

I have a setup at home with three MSI Cubi mini-desktops that have 8 GB, 8 GB and 64 GB of RAM, with the 64 GB machine being my main server and the other two for various other tasks. All have about 500 GB of disk storage. I also have a Synology NAS with a built-in DB/web server and a RAID-1 configuration giving me 16 TB of data storage (mostly for backup, database and large media served as a CDN). I also have a gigabit Internet connection, which I need anyway. So I don't even need Docker or Kubernetes for most of the things I do. (I use a Docker container for the PHPMyAdmin site as I don't want PHP on my main server.) This system is pretty powerful already, compared to what you get on those cloud services.

When you have to choose what kind of architecture you need, you should consider self-hosting too. An MSI Cubi 5 12M with 8 GB RAM, an Intel Core i3 and 256 GB of SSD storage costs less than 500 euros and should last at least three or more years. Just make sure you make daily incremental backups and weekly full backups to a NAS or other storage. (Which doesn't need to take much space if your database isn't too big. Your code should be in Git somewhere, so that's safe too.) Remove Windows and install Ubuntu on this hexa-core system and you have a very good server already.

Disadvantages of self-hosting are that you have to do backups yourself too, and you need to be able to replace your server if it breaks for whatever reason. But this system doesn't need a keyboard, mouse or monitor once Ubuntu Server is installed on it. All you need is SSH for a terminal connection, or install Webmin or Cockpit on it for remote management. Seriously, many developers keep looking at cloud services for hosting, but if you are a software developer and still learning, self-hosting is far easier.
In serverless, the best thing you can do is limit your code to under 50 MB, basically abandoning complex frameworks. If it's more than 250 MB, it's better off in a container. I think the hybrid approach is the best; relying on request patterns to decide whether something belongs in a microservice or serverless is good thinking.
Well, talking about reliability, it's only fair to compare the systems as a whole. To calculate it you need to multiply the reliability of every block, so the more blocks you have, the lower the reliability goes (you're multiplying probabilities between 0 and 1). That means even if a monolith's reliability per block is lower, the larger number of parts in a microservices setup can still make the overall system less reliable. Duplication helps, but with a cost overhead (see the worked example below). Also, it's worth mentioning master/slave or write/read replicas for a monolith. I agree that you just trade one complexity for another: with a monolith you need better software quality, with microservices you need better infrastructure management. Which high-quality experts are easier to find - devs or devops? :)
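To make the math concrete (the numbers are illustrative):

```latex
% A serial chain of blocks: every block must be up for the request to succeed
R_{\text{system}} = \prod_{i=1}^{n} R_i,
\qquad \text{e.g. } 0.999^5 \approx 0.995 \text{ for five 99.9\%-reliable blocks.}

% Duplication (k replicas of a block) pushes that block's reliability back up
R_{\text{block}} = 1 - (1 - r)^k,
\qquad \text{e.g. } 1 - (1 - 0.999)^2 = 0.999999 \text{ for two replicas.}
```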
It seems like all of the problems of microservices can be solved if you just learn Kubernetes and run your own server. With the monolith and the serverless functions, there are limitations in the infrastructure that physically prevent you from scaling, so getting smarter and more skilled doesn't solve the problem. That's why I like microservices. You front-load the effort and pain of learning in the beginning, but once you get good, everything becomes easy.
One point on microservices: reliability is not easy to obtain, I guess, because if one service is down, the services that depend on it will be affected. This has happened several times.
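A common mitigation for that cascade is a circuit breaker; here is a minimal sketch in Node (thresholds and names are illustrative, not any particular library's API):

```js
// Minimal circuit-breaker sketch: after repeated failures, stop calling the
// downstream service for a cool-down period and fail fast instead of cascading.
function circuitBreaker(fn, { maxFailures = 5, resetMs = 10_000 } = {}) {
  let failures = 0;
  let openedAt = 0;
  return async (...args) => {
    if (failures >= maxFailures && Date.now() - openedAt < resetMs) {
      throw new Error("circuit open: downstream marked unhealthy, failing fast");
    }
    try {
      const result = await fn(...args);
      failures = 0; // a success closes the circuit again
      return result;
    } catch (err) {
      failures += 1;
      openedAt = Date.now();
      throw err;
    }
  };
}

// usage (hypothetical downstream call):
// const safeGetUser = circuitBreaker(getUserFromAuthService);
```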
Serverless is more than just functions that scale horizontally. It is about abstracting the whole runtime environment away from the application. And you can also deploy a monolith into a serverless environment, if it is designed to be compatible. Monolith and serverless don't exclude each other.
It might be worth mentioning that Erlang & Elixir sort of blur the boundaries here. The BEAM is at least as reliable as the Kubernetes cluster manager, and you can run millions of preemptively scheduled processes in it that communicate via message passing, and which can be managed in more detail as well, so that you get the logical separation without needing them to run in separate containers. So you can have something that is a monolith from a devops point of view but decoupled microservices or serverless functions from an architectural point of view. Also, I am kind of disappointed to see REST or gRPC as the communication between microservices. RPC is not the only game in town and event streams can often be nicer especially if you are building something that is architecturally a pipeline. Websockets in particular are a great communication mode. Finally, for serverless, cost is a trap imho. It is cheap when you get going but becomes way more expensive if you scale up.
We used gRPC for binary message transport on a project I was on. The frustrating part was converting from the monolithic splintered systems: they kept wanting to add items before we even completed a microservice. It bothered me that we did not consider what we needed to transport service by service, or as we added more transferred functionality and grew the new applications. Microservices by the book, or by popular opinion within the community, never seemed to have a smooth transition or agreement within the projects. Drove me nuts.
Here are some things worth mentioning:
- When using a monolith in a production environment, you should definitely scale horizontally for increased reliability/uptime. It's still more reliable than running a single process.
- For single-threaded runtimes like Node.js, the process can only use a single thread, so you actually *have to* scale horizontally. Scaling Node.js vertically only helps with memory availability, but not CPU performance. (thx Shahab Dogar for pointing this out)
- In microservice architecture, the "pure" approach is for each service to have its own database/store so that the DB doesn't become the single point of failure. If my examples took that approach, there would be 2 databases: one for the Auth service and another for the Products service. However, in real life this approach isn't always taken, because you'd eventually end up with 20+ different databases that need to be secured, backed up, replicated, upgraded, etc. - a maintainability nightmare. Nonetheless, if you want full decoupling, then you should go with the pure approach.
I'll periodically update this comment with any other pertinent info/corrections.
May you pin it, please? Then it’s always on top of comment section. 🙂
Thank you for this addition. I almost wanted to add that Microservices were referencing the same DB, and it adds additional coupling that, in the world of microservices, is not acceptable, according to several posts that I've read earlier. But your video and this comment gave me a realization that in the real world, architectural paradigms adjust to factual matters. Also, this video brought to mind that different architectural ideas are like network topologies. Each with its pros and cons, but in the end, the Internet uses all of them.
Some details to clarify:
NodeJS is single threaded, but that is rarely a bottleneck outside FAANG scale when designed correctly, because by design you should only be using Node to dispatch asynchronous tasks, like DB queries.
It's not true that single-threaded apps can only scale horizontally!
NodeJS spawns *child tasks* on the EventQ that take advantage of additional cores, allowing it to handle a million+ requests per second in ideal cases.
NodeJS is single threaded, yes, but because you should use it to spawn "child tasks" for the EventQ, it's not crazy to treat it as a multithreaded program where Node is the main thread and the child tasks (normally written as promises) run on child threads.
@@jibreelkeddo7030 Am I right that managing NodeJS with pm2 (or doing the same things manually) to run it in cluster mode - where you use the machine's number of CPUs minus one (leaving one so the OS runs smoothly) as the number of spawned processes sharing the same network socket - means you can practically use the machine's CPU to the maximum of its capability? And then you can scale that vertically to get more memory/CPU power, or maybe even buy another machine with the exact same setup and put a load balancer in front of both of them?
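For reference, a minimal sketch of what pm2's cluster mode automates, using Node's built-in cluster module (the port and the minus-one worker count are illustrative choices):

```js
const cluster = require("node:cluster");
const http = require("node:http");
const os = require("node:os");

if (cluster.isPrimary) {
  // one worker per CPU, minus one to leave headroom for the OS, as suggested above
  const workers = Math.max(1, os.cpus().length - 1);
  for (let i = 0; i < workers; i++) cluster.fork();
  cluster.on("exit", () => cluster.fork()); // replace crashed workers
} else {
  // every worker shares the same listening socket; the primary distributes connections
  http.createServer((req, res) => res.end(`handled by pid ${process.pid}\n`)).listen(3000);
}
```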
NodeJS isn't single threaded; it's the event loop that has a single-threaded architecture.
libuv observes and reacts to multiple file/socket descriptors' status asynchronously, and its thread pool uses 4 to 1024 threads as required.
So the underlying OS networking/filesystem stack will benefit from the increased thread count, avoiding head-of-line blocking.
Also, to handle large buffers, you can always utilize additional worker threads to handle them on multiple event loops.
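A minimal sketch of that last point with Node's worker_threads - offloading CPU-heavy buffer work so the main event loop stays free (the checksum task is illustrative):

```js
const { Worker, isMainThread, parentPort, workerData } = require("node:worker_threads");

if (isMainThread) {
  // hand a large buffer to a worker so the main event loop stays responsive;
  // workerData is cloned into the worker
  const worker = new Worker(__filename, { workerData: Buffer.alloc(16 * 1024 * 1024) });
  worker.on("message", (sum) => console.log("checksum:", sum));
} else {
  // CPU-bound work happens here, on a separate thread with its own event loop
  let sum = 0;
  for (const byte of workerData) sum = (sum + byte) % 0xffff;
  parentPort.postMessage(sum);
}
```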
Microservices don't always mean strong domain boundaries; you can have a distributed monolith (aka a nightmare): services that are dependent on each other and not decoupled.
From what I've seen, this unfortunately happens a lot
ball of worms pattern
That's what we did for our own service T_____T
Tends to happen when companies build microservices first without fully knowing what they're building. I find that if you go the other way and cut a big monolith up into microservices, it works out better.
Help, am here
We had a CTO who was completely bought into microservices in their truest form, allowing different teams to code in different languages and on different backend infrastructure.
When those teams blow up and the CTO is fired, you're left supporting small pieces of functionality written in unfamiliar languages that you will end up rewriting in familiar ones.
The phrase "teams blow up and CTO is fired" basically just means "everything is going terribly", which means your comment reduces to "when everything is going terribly then everything is terrible" which is true but not a meaningful contribution to the discussion.
@@fennecbesixdouze1794 Plus there's not much info about "those teams". Are they people who also got fired with the CTO? Were they consultants whose contract was terminated when the CTO left? This sounds like a huge management issue, not related at all to the codebase. Don't get me wrong, I can empathize with the pain of having to fix someone's mess, but this is not the fault of microservices.
@@fennecbesixdouze1794 True, but in general it's better to keep a lid on how many languages and frameworks teams use. It's just less expensive over time, most of the time. Usually whatever advantage one language/framework has over another is negligible compared to the support cost when you can't leverage the org's know-how on a certain piece of software.
The first video that I've watched from your channel (somehow got recommended to me), very clear explanation (for someone who's only been working as a backend dev for
That is why I love the Elixir programming language. It makes writing monoliths easier because the language kind of behaves like microservices. The language uses something called supervisors, which spin up processes and monitor them (these are not OS processes, but Erlang VM processes). If a process is killed unexpectedly, then the supervisor will spin up a new process (kind of like how Kubernetes starts/restarts pods if they go down). This removes the single-point-of-failure issue you mentioned. Elixir also runs on top of the Erlang VM, which is already built to scale both vertically and horizontally. It also has ETS, which is like a built-in Redis. If you use Phoenix (which you should if you are building a backend) you also get PubSub out of the box, and it is set up to automatically connect to your cluster if your app is distributed. You don't get Rust or C++ level performance (even Go is still slightly faster than Elixir), but you do get a lot of other benefits.
The biggest thing people complain about when it comes to Elixir is that it is dynamically typed.
Interesting. I’ve never worked with Elixir but this makes me want to take a look!
nice. need to take a look.
Have you checked out Gleam? It's a typed BEAM lang. It's one of my favorite languages right now because it's simple and the BEAM is amazing. The biggest issues with it right now are that it's tiny, so there aren't many packages; the erlang package doesn't include bindings to ETS tables, timers, etc., so you need to do that yourself; and there are no macros, so you can't easily use a lot of Elixir libraries.
So I actually looked into using Elixir as a full-stack language. While I like its concurrency model and its functional paradigm, I don't like Phoenix.
So I looked into Phoenix as a potential replacement for Next.js, as I am a Front End Dev first, Back End Dev second.
At first Phoenix seemed promising, but later I realized that it is completely backend rendered, even for client side operations!
So if a user simply wanted to do a client-side operation like sort a list (where the order doesn't matter to the back-end), they would have huge Latency!
Imagine pressing sort and then 1 second later it sorts. The only solution is to somehow have CDN nodes distributed for every user, but that is complex.
So what I found to be the solution to the JavaScript problem (JS being a shit language to use) is to actually use ClojureScript on the front end and Clojure or Elixir or anything on the backend. ClojureScript with Reagent or Fulcro will give you that functional expressiveness and elegance on the front end while also giving you the option to SSR, CSR, or statically generate sites just like Next.js.
I would look into Clojure/Clojurescript if you want a good Full Stack Experience.
@@Nellak2011 Phoenix Liveview has something called Phoenix hooks where you can have javascript that purely runs on the client. So your example of sorting an array purely on the client is still possible using this method. There is even a video of someone using svelte connected to liveview using the phoenix hooks method. In your case it would seem that you probably didn't look at liveview (or maybe not as closely) but liveview does indeed allow you to do client side stuff either using the JS interop, or with hooks. This way you aren't needlessly sending requests to the backend.
Edit: When I said "using svelte connected to liveview" I meant that the svelte app still runs entirely inside Phoenix. It is still just 1 app that gets started using mix phx.server
For monolith scaling there is another factor to consider, which is language. Usually these days Node is used for everything, which is a single-threaded runtime. Having multiple CPUs (vertical scaling) has no impact on the process since it doesn't use those CPUs (at least not without the devs going out of their way to add workers and fork processes), and in this case horizontal scaling is actually the only available option. Just putting this here for future viewers; it's not always this simple.
Good point!
node is used for everything? in what world?
@@MyNameIsPetch Mostly startups. A lot of companies use Node for their backend so that the same team that builds the front end can also work on the backend and the team stays tight, while also saving the company money.
Another thing to add, though, is that there are technologies that allow servers to maximize CPU cores with single-threaded languages.
Concrete example: most Ruby on Rails apps use Puma, which can fork your Rails app to make use of all your CPU threads. At work I have a single EC2 instance able to run 5 instances of my app server.
However it seems that most NodeJS projects probably do not use technologies that allow for this type of scaling.
@@danielleedottech Yeah, it's a combination of most people not using this method to scale and the fact that extra setup is required in order to scale this way. Even with Ruby on Rails, for example, using Puma to make multiple process instances introduces complexity, and a lot of the time developers or management or both are not willing to take that complexity on, so they end up just deploying to multiple nodes instead.
It's hilarious to see people finally understanding round trip times and n* stacking round trip times. It's one of the primary things that drove me to Elixir.
I gotta check out Elixir
Hey Ryan, that was an amazing explanation. I am an intern at a startup and I was getting confused by all these terms they were using, and you really made them much clearer. Now I don't feel stupid, thanks!
11:35 Slight correction: you'd use a protocol buffer like protobuf, cap'n proto, FlatBuffers, etc. which are independent from gRPC. The protocol buffers are the IDL (Interface Description Language) for your binary data for SerDe, typically sent _over_ gRPC, but not necessarily.
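To illustrate that independence, a minimal sketch using the protobufjs npm package (one option among several; the message shape is illustrative) - the encoded bytes can travel over plain HTTP, a queue, or a file, with no gRPC involved:

```js
const protobuf = require("protobufjs");

// define a message type in code (normally you'd load a .proto file)
const root = protobuf.Root.fromJSON({
  nested: {
    Product: {
      fields: {
        id: { type: "int32", id: 1 },
        name: { type: "string", id: 2 },
      },
    },
  },
});
const Product = root.lookupType("Product");

// SerDe with no gRPC anywhere: just bytes you can send over any transport
const bytes = Product.encode(Product.create({ id: 7, name: "Widget" })).finish();
const roundTripped = Product.decode(bytes);
console.log(roundTripped.name); // "Widget"
```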
It’s great to see you back man, love your videos
This is a great video that covers the main differences between architectures. I work on a project that is almost fully serverless. Integration tests and e2e tests are key to building a stable system that minimizes breakages when introducing changes. Performance testing is also important to get your scaling configuration set to meet your goals while keeping costs minimized. We deploy every branch to an isolated stack to run e2e tests before it gets merged, and run most of the e2e tests when deploying to the production stack. Another important consideration is retry logic where needed. If you are calling an external service, what will you do if it is unavailable... retry or fail? SQS and Lambda make a great pair for implementing a scalable, fault-tolerant system. All of these architectures have their place... picking the right one for the task at hand is the most important first step.
You have an awesome speaking voice; just having you talk in the background creates an air of relaxation and confidence.
It always makes me smile when I see your face. I love the way you explain things, where I learn by understanding everything you say. Keep it up!
With Azure Functions, which are the Azure equivalent of Lambda functions, you can integrate them into your private network and also allocate dedicated infrastructure to run the functions, which solves the cold start problem you mentioned for Lambda functions in this video.
You gave an example that if the Auth service is down, we can still create products. I assume to create a product you will need the data of the Auth user to associate it with the created product.
How would you create a product if the Auth service is down?
I understand that it will continue working if the auth uses some kind of JWT. If you are already logged in, you just keep sending the JWT, and create-product uses it for the binding with the user; it will not call the auth service. If you are not already logged in, you can't use create-product.
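A minimal sketch of that flow, assuming the jsonwebtoken npm package and RS256 with the Auth service's public key distributed ahead of time (all names are illustrative):

```js
const jwt = require("jsonwebtoken");
const fs = require("node:fs");

// the Auth service's public key, shipped to the Products service out of band
const authPublicKey = fs.readFileSync("auth-public.pem");

function createProduct(authorizationHeader, body) {
  const token = authorizationHeader.replace("Bearer ", "");
  // verifies signature + expiry locally; throws if invalid -- no call to Auth needed
  const claims = jwt.verify(token, authPublicKey, { algorithms: ["RS256"] });
  return { ownerId: claims.sub, name: body.name }; // bind the product to the user
}
```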
The best video on the topic I have seen so far, great work man!
These topics have never made more sense than in this video. This is great!
I have just come across your channel and I have been looking for this type of content for ages even though it's above my level. It answers a lot of conceptual questions for me. thank you so much.
I think your final take on the video was spot on. The best back-end architecture is the hybrid architecture. I think once your app reaches a certain point of complexity, there is really no way you can go all in 1 paradigm of architecture. I have no doubt in Netflix that they probably have a "monolithic micro-service" somewhere in their tech stack. My current workplace we have a "monolithic API Gateway" where we extracted our own internal micro-services and use company wide micro-services who themselves are monolithic in scope / complexity.
Isn't an API gateway supposed to be a monolith?
The best way to clarify concepts is making comparisons. Thanks brother!!!
Great vid! The part where you said you "scraped 50k websites/sec using Lambda functions" - where could I find more on that? It would be immensely helpful for R&D for my lil bootstrapped startup!
Brilliant, your way of presentation with visuals is very useful for people like me. I'm kinda new in software, trying to learn front-end but I know that I'll need all the back-end knowledge and all of these features in no time. Thanks a lot.
This is the best video that compares the differences between each backend architecture in a clear and simple way. Thanks👍
I am obviously late to the party here, and my points probably have already been mentioned here, but I just saw the video and I need to write that down ^^
My first point would be that one very important point should always be mentioned when talking about these approaches, and that is the quantity of developers.
One of the main reasons why big companies like Netflix and Amazon implemented microservice architectures is that they had trouble scaling up their development teams when they were all working on the same codebase. So they added complexity by moving to microservices to allow their very large team to split up into smaller teams which could each work on very specific functionalities.
Of course this also provides benefits like becoming more flexible in what language to use in each service, but it is very important to know that this will always introduce more cost and a lot more complexity first. Not only do those services now need to communicate with each other (including authentication and proper contracts), but also the development teams. And as soon as one service is used by multiple other services (authentication is a good example for that) you not only have a possible single point of failure again, but you also have to consider all the consuming applications when you want to introduce any changes, which in turn can slow the whole development process down.
Talking about issues: I have never seen that a single service crashing was less problematic than the same problem happening in a monolith.
First: The error needs to be handled anyway. So if the consuming app doesn't handle a server exception properly, it will crash anyway.
Second: What is the difference between the outbox of the monolith crashing and the outbox service crashing? Both will prevent the users from using the outbox, but the rest of the application should still work in either case.
Third: Debugging in a microservice environment can be a bitch. I've had way too many occasions where a crash was first investigated in one application, then moved to another team because it apparently happened in their service, but they delegated it to another team and so on. Especially if you have a chain of services, this can get ugly. And don't even get me started if you need to handle a rollback through those services if one of the steps fails ...
I could go on and on about this and give a lot of practical examples of what I heard and experienced, but in the end I totally agree with your conclusion.
Just build your MVP as a monolith, but always keep a proper architecture. Keep to the SOLID principles and always consider YAGNI. Don't over-engineer/over-anticipate/over-complicate things in the beginning and keep in mind that you can always refactor when required.
When you then reach a point where one specific part of the app keeps slowing the whole application down and you reached the point where switching to another programming language would help way more than refactoring in the current one -> extract that one functionality into a service.
Or you see that one part of your application is the same as it is in multiple other applications in the company -> Consider extracting that one into a microservice so that this part doesn't have to be implemented and maintained in different applications multiple times.
Or your sixteen or more devs constantly keep running into each other when working on the application and it is hard to scale their work -> Consider extracting parts of the application into services that can be maintained by one team each.
Going from monolith to microservice is by the way not always that easy and you should be very careful with that.
I've seen multiple cases where that was very counterproductive.
Just check out the AWS blog, for example, where they recently posted about a case where they rolled back from a microservice architecture to a monolith and discovered that this not only reduced complexity but also cut costs by 90%.
Your channel and your videos are gems; you really have a talent for explaining.
Thank you for such a great video.
I wonder what you think about the monolith-serverless approach. You start with a single function that has modules to handle all API requests, and then you only split it if it is really needed (for example, if one particular API endpoint is used a lot compared to the others, or if one function requires specific dependencies that are not needed in the others). This way you limit the number of cold starts in case of usage fluctuations, as every instance of the cloud function/Lambda can handle any type of incoming request.
I have used this approach on a couple of projects so far and it really helps to start fast. On the other hand, all my projects were relatively small.
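A minimal sketch of that pattern, sometimes called a "lambda-lith" (the routes and module names here are hypothetical): one Lambda entry point dispatches to internal modules, so any warm instance can serve any endpoint.

```ts
// handler.ts — a single Lambda routes all API requests to internal modules
import type { APIGatewayProxyEvent, APIGatewayProxyResult } from "aws-lambda";

// Internal modules, bundled into the same deployment artifact (hypothetical)
import { listProducts, createProduct } from "./products";
import { login } from "./auth";

export async function handler(event: APIGatewayProxyEvent): Promise<APIGatewayProxyResult> {
  const route = `${event.httpMethod} ${event.path}`;
  switch (route) {
    case "GET /products":  return listProducts();
    case "POST /products": return createProduct(JSON.parse(event.body ?? "{}"));
    case "POST /login":    return login(JSON.parse(event.body ?? "{}"));
    default:               return { statusCode: 404, body: "not found" };
  }
}
```

Because every instance can handle every route, traffic to one endpoint keeps instances warm for all of them, which is exactly the cold-start benefit described above.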
I really liked this video; I finally understood the key differences between these architectures.
You have to understand that most of the "Cost" ultimately depends on "Developer Experience". Reliability can easily die for application-level reasons (who benefits from only the Auth server running while the Products server is down?). A monolith is also horizontally scalable enough as long as its functions keep good response times, though long-running tasks could certainly hurt scalability. Conclusion: think monolith first, then consider separating idiosyncratic features into microservices or serverless according to the requirements, technology, and dev resources.
Bro, can you explain why some small product-based startups build their backends themselves, creating their own JARs and libraries, instead of using pre-built frameworks and libraries like Spring or Spring Boot? Is there any benefit to it, like customization or something else? Btw, great video.
“It’s difficult to write spaghetti code with micro services” 9:03 (a little before that time stamp).
No it isn’t difficult at all. You just end up with spaghetti spread over multiple processes instead of it being all together.
Awesome video! It summarizes _a lot_ of knowledge, and the YouTube comments give you the missing pieces. A treasure on the internet!
Cool video! In the case of native dependencies for Lambda functions, you can bootstrap the functions beforehand and install the required dependencies before the function gets executed. Also good to mention: the cold start does not happen on every request, and you have the option to provision the Lambdas to avoid cold starts, but this affects the cost point 😄
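A minimal sketch of why the cold start is not paid on every request, assuming the AWS SDK v3 (the table name and event shape are hypothetical): everything at module scope runs once per instance and is reused by warm invocations. Provisioned concurrency is a deployment setting rather than code, so it doesn't appear here.

```ts
// Module scope runs once per cold start; warm invocations reuse it
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, GetCommand } from "@aws-sdk/lib-dynamodb";

const client = DynamoDBDocumentClient.from(new DynamoDBClient({})); // paid once per cold start

export async function handler(event: { userId: string }) {
  // Warm invocations skip the setup above and start here
  const result = await client.send(
    new GetCommand({ TableName: "users", Key: { id: event.userId } })
  );
  return result.Item ?? null;
}
```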
Coming from an IBM Mainframe background and trying to learn web development. Mainframes are often considered Monolithic and outdated. But seeing this helps in discovering what works best, including Hybrid. Effective IT solutions are rarely just one solution. :) Thanks!
I really enjoyed the explanation - clear and precise.
Plus, you have a great voice! 👍
Thank you for making the video!
17:43 I would disagree: it can scale up to a certain point, but past that point it just fails to scale, especially on burst traffic where cold starts are an issue (I am talking about when you have 12k invocations per second).
I've started learning Golang because of one of your videos. It's a really cool language
Glad to hear, Go is awesome ❤
9:00 One may add that traditional enterprise architectures look exactly the same, just that the different components are not spun out into their own server processes. In some architectures they are grouped into layers (e.g. presentation, processing, database), which may live in their own containers, but in others they live side by side in the same process. This is one of the reasons microservices got adopted quickly: in many cases there was nothing to change in the code, only in how it was packaged.
Oh my gosh!! What a pleasure to listen to. So many lecturers don't do it for me. I listened to the whole thing. Keep up the good work.
An alternative to Lambda on AWS? Is there something I can grab from somewhere that would work in Docker or on EC2s on AWS? Be well, Regards, Joel
Top quality content! Thanks for making this often-drab content seem really engaging! Also you have a really top speaking voice! Don't know if you've ever been told that?
Glad you enjoyed the video and I appreciate that!
I don't see any reason not to go with serverless if you are a startup with no prior experience with server infrastructure.
Although my partner (ex-CTO) insists on going with microservices, Docker, and Apache Kafka.
I can't tell if he insists on that because he likes to flex. I am more of a business person, like "do it the simplest way possible as long as it works", but I might be wrong. Are all programmers like that? IDK
Very good video. I am used to using one big server, or even clusters, but our new platform will use microservices, and that is all very new to me, so this helped a bit.
Hi dude, you need to make more web development videos, you have very good diction and explanation! 👍
For monoliths, you didn't mention that you can easily host a managed app cluster that automatically deals with load balancing and all, so it's not complex at all; you just need to pay the bill. Then normally you have a process watcher: if the app crashes, it restarts it automatically.
Great break down of the different backend architectures. Leaving a comment for the algorithm.
Monoliths don't mean no horizontal scaling. They just mean you don't have individual single-responsibility services, so instead of network hops for functionality, your service has everything it needs in its runtime.
You'd be nuts not to have multiple hosts running your service, from an availability point of view (what happens if you have bad hardware?), or to run your server with your customers accessing the host directly rather than through some sort of load balancer or gateway. Huge security risk.
They also don't mean easy development. Tiny monoliths for your school science fair project sure are easy to develop, test, and deploy, but anything at enterprise scale will be a terrible development experience.
Those are all good points and I agree.
My main argument is that scaling a monolith horizontally doesn’t entirely cure the single-point-of-failure problem.
All instances of the monolith run the same code, and if there’s a fatal uncaught runtime error (e.g. null pointer exception), then during high traffic you *may* find that all instances crash in the same window of time.
If availability is a concern, then splitting logic into separate services (and scaling those horizontally) makes more sense IMO.
Nonetheless, if you're sticking to a pure monolith, then having multiple instances is certainly better, and using a load balancer (or Cloudflare to proxy at the DNS level) is a good idea to obfuscate the server IP.
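A minimal sketch of that shared-bug failure mode, assuming a Node/Express monolith (the route and lookup are hypothetical): every replica runs identical code, so the same poison request can crash each one in turn.

```ts
import express from "express";

const app = express();

// Hypothetical lookup that can miss
async function findProduct(id: string): Promise<{ name: string } | undefined> {
  return undefined;
}

app.get("/products/:id", async (req, res) => {
  const product = await findProduct(req.params.id);
  // Bug: if the lookup misses, this line throws. In Express 4 an error thrown
  // in an async handler becomes an unhandled rejection, which crashes modern
  // Node by default; since every replica runs this same code, the same request
  // pattern can take the instances down one after another under load.
  res.json({ name: product!.name.toUpperCase() });
});

app.listen(3000);
```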
sure, valid points but I was addressing your slide at 5:16. I’m typically not a shill for monoliths but that slide is just wrong. Tbh I didn’t finish the video after that point so I don’t know if you corrected it or addressed it after that slide
With respect to your point about having uncaught runtime errors causing crashes in your servers, that argument still applies to poorly handled errors returned by micro services on (or off) the critical path. Like what happens if your auth service has a bug? What micro services do provide is a lower chance of this scale of event from happening because you aren’t necessarily touching the service code of your critical and stable services everyday. An exception to this that happens very often though is shared libraries between services. Anyways, a lot of things are not cut and dry and depends a lot on how you have set up your processes, development, error handling and testing.
@@Pscribbled Yikes, you completely and totally missed his point. I work in a large telecoms company, and bro, without microservices it's bloody impossible to even imagine deploying our architecture as a monolith and scaling it.
Trust me, if something goes wrong the entire spaghetti goes down, and it will take us light years to fix; the cost of that is unimaginable!
The developer experience would be 0/100!
The guy's slide is totally correct; with respect, you're wrong, bro!
@@kaypakaipa8559 lmao telecoms doesn’t tell me anything about your credentials. That’s no flex. I work with micro services as well. My stance is to use whatever architecture makes sense for the scale, cost, performance and requirements of your service. Generally I shy away from monoliths but not because of their inability to scale horizontally… because they can scale horizontally…
If I’ve completely missed the point on how I believe Cody is wrong on slide 5:16 where he says it’s not feasible to scale a monolith horizontally, please give me your data and not your anecdotes
Thank you for the video, it's very informative and useful. I started developing an e-learning system at my company, and I also started with a monolith architecture, but in a modular way.
I use NestJS for the backend and NextJS for the frontend. With NestJS it's very easy to develop the system in modules, so I hope that if it becomes necessary I can split it up into services.
With NextJS I can also make serverless functions, but first of all I want to keep the business logic in one place; later I can separate it out.
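A minimal sketch of that modular layout, assuming NestJS (the module and provider names are hypothetical): each feature lives in its own module, so it can later be lifted out into a service with its boundary already drawn.

```ts
// courses/courses.module.ts — one self-contained feature module
import { Module } from "@nestjs/common";
import { CoursesController } from "./courses.controller";
import { CoursesService } from "./courses.service";

@Module({
  controllers: [CoursesController],
  providers: [CoursesService],
  exports: [CoursesService], // the only surface other modules may depend on
})
export class CoursesModule {}
```

```ts
// app.module.ts — the monolith is just a composition of feature modules
import { Module } from "@nestjs/common";
import { CoursesModule } from "./courses/courses.module";
import { UsersModule } from "./users/users.module";

@Module({ imports: [CoursesModule, UsersModule] })
export class AppModule {}
```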
Great explanation! In your microservices diagram, the DB should be separated as well.
Dependency limitations on serverless can be mitigated with an extra step: using dockerized images to run Lambda. Downside: AWS forces the use of ECR.
Nicely done. You missed SOA (service-oriented architecture), which I would say sits between monolith and microservices!
The development process for monoliths can be a nightmare, because you always have to have the whole application open. And if you want to split it, the simplicity goes out the window. The same goes for anything that's not standard.
I developed applications to run on JBoss and WebLogic as well as microservices (Spring Boot and Node stuff), and I'd never want to switch back to working on a monolith ever again, unless it's for moving it to microservices.
However: if code reviews are not really great, it's easy to mess up microservices. If the developers aren't on the same page about when to create a new microservice and when to include something in an existing one, it can be a nightmare.
Then again, integration tests can be easy depending on what you use and whether you split the stuff correctly. For example, you can choose to have only E2E integration tests from the user perspective, and run only microservice-wide tests (and unit tests) within each microservice.
Also: customers barely ever pay for refactorings, unless it's really necessary and you can explain the benefits in $$$ (many customers don't speak any other language). The problem is that sometimes you don't know for sure, and after a few more months or years the stuff becomes too expensive to refactor. At that point you end up patching stuff instead of refactoring it for real. I've seen this happen many times in my 14 years of software development.
Every question is an "it depends". Staff has a lot of impact. A single team is better off with a monolith in most cases, but 20 teams are better off with 20 loosely coupled services. As you said, cost management is a constant concern in any of these solutions. Predictable pricing points to a monolith in many cases.
One small tweak to thinking about the latency of microservices: they are communicating (hopefully) over an intranet, not the internet. The problem with 3rd party APIs (like, say, using Google for OAuth) is that you have to traverse the internet to access them.
To be clear about what I'm saying, here's an example:
Accessing data from a microservice is like going to your neighbor's house in the same neighborhood (within a specific distance, like 1/4 of a mile, let's say).
Accessing data from a 3rd party service, like Google Oauth, is like having to get on the highway.
One is much less busy and a much shorter route while the other is a much longer route and potentially packed with traffic.
Not saying that there is no latency with an intranet, but it's negligible compared to 3rd party services.
Wow, your presentation skills have gotten so good!
Monoliths can also have the best performance. And you can scale horizontally with monoliths, which, combined with their lower resource usage per action, will cover more than 99% of servers' needs.
For strong domain boundaries in monoliths, just use multiple packages. It forces even junior devs to decouple. And you can then require inter-module communication to go through an event pattern to enforce loose coupling.
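A minimal sketch of that event pattern inside one process, using Node's built-in EventEmitter (the module and event names are hypothetical):

```ts
import { EventEmitter } from "node:events";

// A shared in-process bus: modules only know event names, never each other
const bus = new EventEmitter();

// orders module: publishes, never imports the billing module
function placeOrder(orderId: string, amount: number) {
  // ...persist the order...
  bus.emit("order.placed", { orderId, amount });
}

// billing module: subscribes, never imports the orders module
bus.on("order.placed", ({ orderId, amount }: { orderId: string; amount: number }) => {
  console.log(`invoicing order ${orderId} for ${amount}`);
});

placeOrder("o-42", 19.99);
```

If a module later gets extracted into its own service, the emit call becomes a message-broker publish and the boundary is already in place.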
The problems with horizontal scaling mentioned here, like caching, load balancers, etc., are not specific to monoliths. They are horizontal-scaling issues and apply to microservices too.
Yep, this video is incorrect.
You left out a dimension. Team Size. Past a certain number of developers a Monolith becomes a bottleneck as the entire team is within blast radius of each other and tied to the same deployment schedule. This was a driver for Amazon's initial move towards microservices. Microservices provide a level of isolation between teams letting them work with less friction as long as they keep the contracts between services fulfilled and have a rollback strategy.
Yeah that’s a big one that I wish I mentioned. Microservices are great from an organizational standpoint.
@@codewithryan It depends on team size. I've also seen very small teams go nuts with Microservices when they're not necessary from an organizational perspective.
You can highly encapsulate features, which dramatically increases the size a monolith can reach before the labor becomes an issue.
Excellent 🎉 love this style of comparing different architectures!
You can use Jelastic; they bill only based on CPU and RAM usage.
Also, by default my monolith has Redis and a load balancer, so that's not an issue.
Good presentation, but a few points from another active practitioner:
* Typically, each microservice has its own database, and makes up a "vertical slice". Multiple different microservices sharing a database is often seen as an anti-pattern, sometimes informally called "miniservices". Generally, your architecture should have most requests use a single microservice, and "horizontal" movement should be limited, which mitigates latency issues.
* Docker/Kubernetes actually make integration tests a lot easier, since you can use declarative setups like Docker Compose to spin up a complete copy of your entire app in miniature, assuming you have Docker on your workstation/CI server.
* Kubernetes ease of installation has come really far the last few years with tools like KinD, k3s, and rke2, where you can launch an entire cluster with a single command on each node, or even realistically run on a single node (though you don't get the server-level reliability that way).
* As you mentioned with cold starts, serverless actually tends to be less scalable, and more expensive under heavy load. It really only makes financial sense for extremely infrequently used functionality, and at that point you might as well just use a cron job or a batch workflow engine. The other use case, as you mentioned, is extremely small projects with a handful of users. I think Amazon published an article the other day basically coming to the same conclusion.
One suggestion: instead of just giving static good/bad grades, I'd recommend considering the grades under different circumstances. For example, monoliths have a great developer experience when you have one or two engineers, because it's so simple, but are absolutely miserable when you have 100 engineers, as you now all have to get in line for a maintenance window to deploy your changes and fixes.
Great points, especially about the monolith dev experience. For large teams, a monolith may become an organizational bottleneck.
Cold starts are not a per-request problem in serverless: if Lambda instances have already handled a request, they stay up for 15 minutes or more handling subsequent requests, and there are multiple solutions to minimize cold starts to the point where their effect on users is negligible. Microservices, on the other hand, due to their architecture and separate databases as you mentioned, mean a single request might go through multiple servers, multiple authorizations and three-way handshakes, plus multiple database lookups in order to query or mutate some specific data, and that is a performance hit you will take on every request to that specific route. Also, "less scalable and more expensive" is not accurate; it depends on your use case and how you architected it.
Aren't cold starts fixable these days? I've heard rumblings of various solutions, but you seem to suggest here it's not even a consideration.
There are limits to what can be done with monoliths in a sane way. The moment you introduce processes which aren't directly triggered by the user (e.g. payment confirmation, invoices, automated processes happening overnight...), you're better off doing that in a different "service", which can be a tiny thing (think a webhook). If you have a single process which takes a lot of processing (think of image resizing, video/audio encoding, etc.), then you're better off not having your "clerk" (user account etc.) handle that, but a replicable dedicated "service" (Lambda, container, whatever). If you choose to go with "one service per entity/action", then you're in foot-gunning territory already.
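A minimal sketch of that tiny webhook "service", assuming Express (the provider, route, and payload shape are hypothetical):

```ts
import express from "express";

const app = express();
app.use(express.json());

// A tiny standalone service: its only job is to react to payment confirmations,
// so a crash or redeploy here never touches the user-facing app
app.post("/webhooks/payment-confirmed", (req, res) => {
  const { orderId, status } = req.body as { orderId: string; status: string };
  if (status === "paid") {
    // ...mark the order paid, queue the invoice job...
    console.log(`order ${orderId} confirmed`);
  }
  res.sendStatus(200); // acknowledge quickly; do heavy work asynchronously
});

app.listen(4000);
```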
Great video! Would love you to do a follow up with cloud native monoliths
If you run microservices hosted in Kubernetes, would you really say that cost is that bad? It's the most efficient way of utilizing capacity, I would say.
Suggestion for a light video of making a language tier list based on your opinion
My point of view: if you want to build an application and you expect a lot of users within a few days of release, with high security, the option is a monolith. If you don't expect a lot of users for a couple of months after release, then microservices are your suitable option. And if you want to build a small-to-medium app with minimal security requirements, then serverless is your option.
Great summary. Thanks Ryan!
The best video on the topic I've seen! Thank you :)
I don't understand the point you were making in the serverless functions section about whitelisting IPs.
You would have the same problem using any other infrastructure.
If you were spinning up microservice instances on AWS and you were hosting a database on Google Cloud, the exact IP addresses of your EC2 boxes would also be dynamic and you'd have the exact same whitelisting problem.
Excellent content keep it coming Ryan
Why can't I scale a monolith horizontally (replicas)? I just put my app in a Docker container and run it with docker-compose replicas.
Monolith reliability: fatal errors must be treated as unacceptable while you are developing the program. And of course you need to restart the app immediately when it crashes.
With that radio voice, you're gonna get to a mil in a short time ;)
Great video Ryan, keep them coming, you're doing God's work! :)
Great rundown.
A minor point but I think a clearer term for what you call Reliability is Fault Tolerance. Reliability to me would be how many times a system fails, whereas Fault Tolerance would be how well it handles problems which is what you seem to be talking about.
For serverless, I don't agree with the low rating on development experience or the native-dependencies limitation. AWS has Docker images that we can download, add all our libraries and code to, and simply upload to run as the Lambda function.
Most likely people will be using one vendor anyway, so the security limitation is not a solid point against serverless either.
To improve response times we can aim for a more async architecture.
In the end it's like choosing the correct tool for the correct architecture. Everything works if the choice is right.
Good video. One mistake, however: you assume people will host their sites on Amazon, Azure, or somewhere else. But you can also just host it all on your own servers or in a VPS environment. It does mean that you have to do your own server administration, but once things are running, that's not too complex.
I have a setup at home with three MSI Cubi mini-desktops that have 8 GB, 8 GB and 64 GB of Ram with the 64 GB being my main server and the other two for various other tasks. All have about 500 GB of disk storage. I also have a Synology NAS with built-in DB/web server and a RAID-1 configuration giving me 16 TB of data storage. (Mostly for backup, database and large media served as CDN.) I also have a Gigabit Internet connection which I need anyways. So I don't even need Docker or Kubernetes for most of the things I do. (I use a docker container for the PHPMyAdmin site as I don't want PHP on my main server.) This system is pretty powerful already, compared to what you get on those Cloud services.
When you have to choose what kind of architecture you need, you should consider doing self-hosting too. An MSI Cubi 5 12M with 8 GB RAM and Intel Core i3 and 256 GB of SSD storage would cost less than 500 euros and should last at least three or more years. Just make sure you make daily incremental backups and weekly full backups to a NAS or other storage. (Which doesn't need to take much space if your database isn't too big. Your code should be in Git somewhere, so that's safe too.) Remove Windows and install Ubuntu on this hexa core system and you have a very good server already...
Disadvantages of self-hosting are that you have to do backups yourself too, and you need to be able to replace your server if it breaks for whatever reason. But this system would not need a keyboard, mouse, or monitor once Ubuntu Server is installed on it. All you need is SSH for a terminal connection, or to install Webmin or Cockpit on it for remote management.
Seriously, many developers keep looking at cloud services for hosting, but if you are a software developer and learning, self-hosting is far easier.
Cool, this improved my ability to compare those different architectures. Thanks, Ryan.
The first video on the Internet that says these things can coexist :D
Where have you been, lol - more content, please 🙂
Hey Mark! I’ll try to post more often now. Got a new computer setup so it’ll be a lot easier for me moving forward!
Yowza! I think I just found a new fav channel. Noice! 🎉
In serverless, the best thing you can do is limit your code to under 50 MB, basically abandoning complex frameworks.
If it's more than 250 MB, it's better off in a container.
I think the hybrid approach is the best; relying on request patterns to decide whether something belongs in a microservice or in serverless is good thinking.
Well, talking about reliability, it's only fair to compare the systems as a whole. To calculate it you need to multiply the reliability of every block in the request path, so the more blocks you have, the lower the reliability goes (you are multiplying numbers between 0 and 1, as with probabilities). It means that even if the reliability of a single monolith instance is lower, having more parts in a microservices setup can make overall reliability lower still. Duplication helps, but with cost overhead.
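A worked instance of that multiplication, with hypothetical per-block availabilities:

```latex
% Serial reliability: a request succeeds only if every block in its path succeeds
R_{\text{total}} = \prod_{i=1}^{n} R_i
% e.g. a request path through three blocks, each 99.9% available:
R_{\text{total}} = 0.999^{3} \approx 0.9970 \quad \text{vs. } 0.999 \text{ for a single-block monolith}
```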
Also, it's worth mentioning master/slave or write/read replicas for a monolith.
I agree that you just trade one complexity for another. With a monolith you need better software quality; with microservices you need better infrastructure management. Which high-quality experts are easier to find: devs or devops? :)
great comparison, thank you
It seems like all of the problems of microservices can be solved if you just learn kubernetes and run your own server.
With the monolith and the serverless functions, there are limitations in the infrastructure that physically prevent you from scaling, so getting smarter and more skilled doesn't solve the problem.
That's why I like microservices. You front the effort and pain of learning in the beginning, but once you get good, everything becomes easy.
21:52 modular monolith service
Thank you Ryan! That was a very nice explanation!
Awesome! Thank you so much for your explanation
Easy to understand and follow. Thanks
One point on microservices: reliability is not easy to obtain, I guess, because if one service is down, the services that depend on it will be affected. This has happened several times.
What a great video ryan. Subscribed 👍🏻
Thx for the sub! 😊
Ryan, cool videos, thank you!
Apparently Spring Boot 3 largely solves that Serverless "cold start" problem, from what I've read. You agree?
Serverless is more than just functions that scale horizontally. It is about abstracting the whole runtime environment away from the application. And you can also deploy a monolith into a serverless environment, if it is designed to be compatible. So monolith and serverless don't exclude each other.
Nice vid bro! 💪
Clean & informative. Subscribed.
This video was gold thank you!
Great video, clear and to the point, I learned something. Thank you! 😀
Such a great video, I had to sub!
It might be worth mentioning that Erlang & Elixir sort of blur the boundaries here. The BEAM is at least as reliable as the Kubernetes cluster manager, and you can run millions of preemptively scheduled processes in it that communicate via message passing, and which can be managed in more detail as well, so that you get the logical separation without needing them to run in separate containers. So you can have something that is a monolith from a devops point of view but decoupled microservices or serverless functions from an architectural point of view.
Also, I am kind of disappointed to see REST or gRPC as the communication between microservices. RPC is not the only game in town, and event streams can often be nicer, especially if you are building something that is architecturally a pipeline. Websockets in particular are a great communication mode.
Finally, for serverless, cost is a trap imho. It is cheap when you get going but becomes way more expensive if you scale up.
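On the event-stream point above, a minimal sketch with the ws package (the service roles and event shape are hypothetical): the producer pushes events over a persistent socket instead of being polled via RPC.

```ts
import WebSocket, { WebSocketServer } from "ws";

// "orders" side: pushes events to whoever is connected
const wss = new WebSocketServer({ port: 8080 });
wss.on("connection", (socket) => {
  socket.send(JSON.stringify({ type: "order.placed", orderId: "o-42" }));
});

// "shipping" side: consumes the stream and reacts to events it cares about
const client = new WebSocket("ws://localhost:8080");
client.on("message", (data) => {
  const event = JSON.parse(data.toString());
  if (event.type === "order.placed") {
    console.log(`preparing shipment for ${event.orderId}`);
  }
});
```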
We used gRPC for binary message transport on a project I was on. The frustrating part was converting away from the monolithic, splintered systems: they kept wanting to add items before we had even completed a microservice. It bothered me that we did not consider what we needed to transport, service by service, as we added more transferred functionality and grew the new applications. Microservices by the book, or by popular opinion within the community, never seemed to produce a smooth transition or agreement within the projects. Drove me nuts.
High quality video, good job! :)
Thanks ❤
@@codewithryan No problem. :)