Links + Errata
Get the mind map - calcur.tech/mindmap
Mentorship to land six figure engineering roles - calcur.tech/mentorship
Timestamps:
00:51 - Backend Frameworks
01:43 - Language vs Framework
03:40 - Example Learning Roadmap
04:16 - JavaScript
06:33 - C#
07:14 - Java
07:26 - Kotlin
07:44 - PHP
08:04 - Rust
09:00 - Go
09:19 - Elixir and Ruby
10:07 - Swift
11:23 - Popularity of a Language
12:09 - WebAssembly
14:03 - ORMs and Database Libraries
20:45 - Content Management Systems (CMS)
22:17 - Static Site Generators (SSG)
23:26 - Databases
25:00 - SQL
25:24 - Data Warehouses
28:40 - Transactional Databases
35:47 - NoSQL
49:56 - Hosting
51:12 - Shared Hosting
55:26 - PaaS
58:39 - IaaS
59:39 - Clients and Servers
59:53 - Servers
01:01:25 - Browsers (client)
01:05:13 - CDNs
01:08:25 - ISPs
01:09:22 - Communication Protocols and APIs
01:10:08 - APIs
01:10:55 - APIs
01:16:39 - Network Protocols
01:22:33 - Notation
01:25:00 - App Dev Lifecycle
01:25:27 - Local Dev
01:27:25 - Source Control
01:27:57 - Containerization
01:29:41 - Kubernetes
01:31:23 - CI/CD
01:33:16 - Testing
01:36:58 - Issues/Tasks
01:37:49 - Monitoring
01:38:41 - End-to-End App Dev Review
01:39:13 - Cloud Services
01:41:41 - Services - Monitoring
01:41:54 - Services - Managed DBs
01:42:11 - Services - Storage
01:42:25 - Services - Compute
01:42:45 - Services - Serverless Functions
01:43:11 - Services - Identity
01:43:34 - Services - DNS
01:43:44 - Services - Virtual Cloud
01:43:51 - Services - CDN
01:43:57 - Services - CI/CD
01:44:05 - Services - Certificate Management
01:44:19 - Services - Containers
01:44:41 - Services - Serverless Compute
01:45:10 - Services - Kubernetes
01:45:17 - Services - IaC
01:45:59 - Services - Load Balancing
Errata / corrections
While Supabase is known as a Firebase alternative, it is actually relational (Postgres). I mentally grouped it with Firebase and accidentally left it in the NoSQL section
Woooow you're still around!! I saw this suggested on YouTube. I watched you back when I was 12, and now I'm 25 and working as a platform engineer. Awesome channel!
Yes, I learned to code with him 10 years ago. He was so important to me back in those days.
Drop your LinkedIn, man!
this is what I call a really valuable YouTube video
Only if you put up with the unorganized mess that is the software engineering field
You don’t need so many flavors of the same snack
Had to click because this content is underrated in how it conveys a lot of information smoothly and builds intuition quickly, beyond words. And it shows you put a lot of thought into it. Mind maps FTW. Thanks Caleb!
Appreciate that a lot! Glad you enjoyed the content
Please, if you are a junior just starting out, know that simplicity is the most important goal you should aim for. Fight tooth and nail to keep another technology out of the stack. A simple Postgres database and a monolithic application on Linux, without Docker, Kubernetes, etc., can take you much further than it looks. I speak from experience. It is much more painful to recover from overengineering than to introduce a new complication once you have already exhausted all your other options. Also be very critical. If someone tells you gRPC is faster than JSON and REST, benchmark it in real-world situations.
Be very critical of any new thing. And if you can avoid adding it by just doing a few manual steps, do those steps.
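To make that concrete, here's the kind of quick check I mean - a minimal Go sketch that times repeated requests against two candidate endpoints and prints latency percentiles. The URLs are placeholders I made up; point them at your actual REST service and gRPC gateway, in an environment that resembles production:

```go
package main

import (
	"fmt"
	"net/http"
	"sort"
	"time"
)

// measure issues n sequential GETs and returns the sorted latencies
// of the requests that succeeded.
func measure(url string, n int) []time.Duration {
	lat := make([]time.Duration, 0, n)
	for i := 0; i < n; i++ {
		start := time.Now()
		resp, err := http.Get(url)
		if err != nil {
			continue // a real test should count and report errors too
		}
		resp.Body.Close()
		lat = append(lat, time.Since(start))
	}
	sort.Slice(lat, func(i, j int) bool { return lat[i] < lat[j] })
	return lat
}

func percentile(sorted []time.Duration, p float64) time.Duration {
	if len(sorted) == 0 {
		return 0
	}
	return sorted[int(p*float64(len(sorted)-1))]
}

func main() {
	// Placeholder URLs - substitute the two implementations you're comparing.
	candidates := map[string]string{
		"rest-json": "http://localhost:8080/orders/42",
		"grpc-gw":   "http://localhost:8081/orders/42",
	}
	for name, url := range candidates {
		lat := measure(url, 1000)
		fmt.Printf("%-10s ok=%d p50=%v p99=%v\n",
			name, len(lat), percentile(lat, 0.50), percentile(lat, 0.99))
	}
}
```

Sequential requests understate what happens under concurrency, so treat a harness like this as a starting point for your own measurements, not a verdict.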
Speak for yourself, man. My repo is 68% Go, 21% Docker, and 11% Bash scripts, and I'm having a blast.
I have developed an innovative antipattern: The Modular Monolith Monorepo
@@Dom-zy1qy why do you feel we are talking about different things? I am talking about introducing microservices, Kubernetes, Terraform, RabbitMQ, Logstash, Elasticsearch, Kafka, and gRPC to the same project.
Your project structure actually seems pretty conservative when it comes to backend.
@@Dom-zy1qy wth is wrong with you? The guy wants to learn, and you turn it into a flexing contest. Who cares about your 21% Docker 😅
Depends on scale. Your comment makes me think you never had to run something at scale, or with strict formal service level requirements.
We, a team of four - or more like three and a half - run a system with thousands of pods on AKS, consisting of several apps. Apps get frequent updates, even if releases to production only happen once every month or two - but dev and test platforms need to be updated several times a day. We are the final gatekeepers preventing anything improper from reaching production, so we maintain several test platforms alongside the production platform: one for each development team, one for our final tests, and one for running the entire tooling used across teams (CI, static analysis, backups, more complex system-level tests, various batch jobs not related to apps, and so on). We get thousands of HTTP requests per second from external clients alone, sometimes tens of thousands, with large body sizes not typical for web applications, and latency and uptime are critical for some of our apps. For other apps, used only internally, millions of messages are sent through a message broker for one batch job, and dozens of batch jobs need to run each day. We'd be lost without the Prometheus stack, Terraform, a managed Kubernetes service, Helm, managed database services, external IAM, message brokers, and quite a few other things. Out of necessity we have developed some things ourselves, on top of what's available off the shelf, using the most appropriate tech in each case.
We don't manage anything hands-on, except in extremely rare cases of failures in production, of which we haven't had one in years - our uptime over the last year was better than five nines, and the tiny bit of downtime is due to critical external systems failing, such as IAM, or brief transient platform failures. We do all upgrades with zero downtime, using various techniques depending on what part of the system needs to be updated. What we spend our time on is almost exclusively automation and research on how to automate and harden our setup even further, plus some development of what we maintain ourselves, in order to adapt to changing external systems. We managed to switch the platform provider earlier this year with literally zero downtime: we temporarily ran the system on two distinct platforms, having also temporarily set up replication of data, advertised the new system to clients once it was up, and shut down and removed replication once there were no more clients using the old system. It's a somewhat large system that has grown historically, but we're very close to the point where we'd be able to tear everything down and set it up again with just a few clicks (not a single click, because we don't want to move key management online - but we might externalize it). (Right now we're at the point where we'd have to spend a few more clicks, but we'd still not do anything manually.) You can't do that without using many distinct tools.
Still, I wouldn't say our system is unnecessarily complex. It's as complex as needed to fulfill the requirements coming from the business side of things. Latency and uptime impact sales for the company, with one hour of downtime representing many millions in lost sales, and they also impact production, since what we run is on the critical path of systems that drive the assembly line. (Compare this to just tens of thousands in runtime costs per month. Salaries might increase this to hundreds of thousands, but it's still _at least_ one order of magnitude less than _one single hour_ of lost sales and production delays. This might give you an understanding of why managing a complex system - required to run things reliably, controlled, continuously supervised and watched, with zero downtime and low latency - is absolutely worth the added cost to the business, compared to just running barebones applications on dedicated machines and hoping for the best.) We did what was necessary to deliver what the business asked for. Tech aspects should never limit business requirements, unless business requirements are too expensive to fulfill - which is absolutely not the case for us.
The system we maintain is the result of breaking up a monolith. The effort started some 6-7 years ago. Compared to that monolith, we run several times cheaper, with lower latency and a few orders of magnitude less downtime - maintenance windows of hours every few months were required for the monolith, while we don't accumulate 5 minutes of downtime per year; in fact, our less-than-100% uptime results only from randomly failed requests spread out over the entire year. Developers could not afford more than two new releases per year for the monolith. They haven't kept up with the platform modernization, so they're not yet as agile, but they still release every application about every second month, getting new features and bugfixes to production several times faster than they did with the monolith. I'd say the added complexity fully pays off.
Simplicity is indeed valuable. But one should heed a saying Einstein is credited with: everything should be made as simple as possible, but not simpler than that. Especially for large and intrinsically complex systems, microservices, while adding complexity, also significantly increase resilience and scalability. When done right, they also increase flexibility and the ability to change things fast _without_ breaking them. Kubernetes may very well be the most expensive way to run a distributed workload - but only when you have to maintain Kubernetes yourself. When you need to run something at scale, it's simply cheaper to pay for a managed Kubernetes service. Containers do add complexity, but they also add isolation, which, when you need to run tons of different workloads, is way cheaper than running many distinct and small servers or VMs yourself, in order to avoid one application starving another one of resources. For distributed applications, depending on their nature, a message broker is unavoidable. Depending on your needs, Kafka or RabbitMQ may be a better solution. If your system is truly large, it may be best to run both, or even run multiple instances of each. Running any distributed computing workload without monitoring, distributed tracing and centralized logging is plain stupid - you effectively deny yourself any possibility of thorough failure analysis, and components may stay in a failed state for a long time before you even notice that something is broken.
@@a0flj0 yeah, I see your point. Let me clarify: if you actually have requirements that call for breaking up a monolith and can see the benefits, by all means do so immediately. But if you start with Kubernetes, microservices, etc., you absolutely get f**ed the second requirements change - and they change as often as they do - and every mistake you make in modeling the domain, you pay for ten times over each time you have to change it across twenty applications.
Also, there are solutions other than adding the complexity of Kubernetes, and all that comes with it, that one can try in order to improve reliability and scalability. I advocate that one is better off achieving the same results with those instead of increasing complexity to that level. If those simpler options are exhausted or turn out to be even more complex, definitely go with the heavier tools. If nothing works, even go with a unikernel if you have to. I'm just saying that oftentimes, for most people, Kubernetes and multiple databases are not required. And they may not recover from those mistakes as easily as recovering from under-engineering, like what you explained here. Oftentimes even five nines are not required for most businesses, yet the price paid for those 11 minutes per year of uptime is massive. Most businesses can survive even two days of downtime, but they may not be able to survive two months of migration each time a major change in requirements happens.
This RUclips channel is underrated
Massively agree
He's on his way
And his courses are overpriced
600k subs is underrated to you?
this guy is lady-like... he even blocked downloading the mind map... guys sense it and they do not take him seriously
Absolutely fantastic job, Caleb. This should be the first video of any student pursuing a career in backend Software Engineering.
you're a legend, i'm giving it full attention after work
This video really explains everything I'm curious about and it's perfect. As a junior frontend developer, that explanation is very basic and understandable
Thank you very much for this video! I am actually in college and taking a database development & design class! Seeing this video reinforces my energy, and mindset. 🔥
Dude, I came across your channel while trying to understand file streams in C++ during my early days of college. Stayed with the channel ever since.
This is golden content. I'm glad I came across this video, as someone who has been in the field for some time I can agree that knowing all this as a developer will make your life super easy. Subscribed so that I can get more of this.
My guy Caleb, you are worth your weight in gold. Thank you
This is the first time I finished a 2 hour youtube video. Bless
samee
Agreed. This channel is underrated. I say that as an engineer. Outstanding work!
WOW. This is the first time I've seen your videos and the dedication you put into this one is massive and admirable. Would definitely stick around for more content
We were waiting for the step-by-step roadmap. Please continue this type of content - you could do it for the frontend, and after that full-stack, DevOps, etc. You would have one video for the mind map and one for the roadmap for each. Keep up the good work!
Thanks a lot for such insight. It's really refreshing to find information like this for free. It's really important to have a mental picture of what is required, or generally what one has to do. A lot to learn and get familiar with, but I guess that's the fun part of it all. Thanks again.
That's what I needed
I was thinking of starting to learn the backend
Thank you so much, Caleb. I have had a long career in technology, and this is the most valuable video I have EVER watched. I have taught in corporate universities and mainstream universities. The world must be mad if this doesn't become the definitive back-end reference video. Looking forward to seeing you speaking at conferences :-)
You cleared all my doubts in a single video! Thanks a lot
I never found webhooks explained so easily. I really appreciated that.
Crazy stuff!!!
Hats off Dude.
Keep it up
1:33:20 LOL this guy is a comic too 😂 “testing? I don’t do that because I don’t make software that has bugs, but you guys probably do.” 😂💀
Quality content! Thank you for the mindmap Caleb
Great video concept. I’d like to see another iteration where you take all the languages, frameworks, and other concepts you don’t know well and reference the docs for a more accurate and concise explanation. Your audience are engineers after all.
What a tremendous amount of work done here
I watched this video to learn about the core concepts used in backend development, not just technologies. I mean concepts like message queues and design patterns - whether they are part of this roadmap or not - along with a quick example for each concept. But it turned out that it was literally about the technologies used.
Maybe it's my fault for assuming that the complete roadmap for backend engineers would cover more than just technologies.
Anyway, I don’t mean to belittle your effort. Thank you. I just hope that you focus on teaching concepts rather than technologies, because technology is always evolving, whereas deeply rooted concepts remain unchanged.
If anyone here has any idea about what I was searching for in this video but couldn’t find, please don’t hesitate to share, even a little.
I thought about creating an equivalent video on core concepts, however I felt this one was long enough that they should be separate. We’ll see!
Pls do work on that video m8 @@codebreakthrough
Thank you so much for the effort you made to put all of this information together ... Please keep sharing 🙏
Great tutorial Caleb! I've been learning webdev for a few years now and have come across most of these terms you cover, but this is the first time I've had them all explained from a higher-level viewpoint. The penny drops when I see how you've grouped things together, since I may already know, for example, what a webhook is, but didn't realise its place in the webdev world is alongside REST. Very helpful - thank you. Duncan.😁
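For anyone else who had the same penny-drop moment: a webhook is essentially REST in reverse - your server exposes an HTTP endpoint and some other service calls it when an event happens, which is why the mind map groups them together. A minimal Go sketch; the path and payload shape are invented for illustration (real providers document their own):

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// event is a made-up payload shape for this sketch.
type event struct {
	Type string `json:"type"`
	ID   string `json:"id"`
}

func main() {
	// The provider is configured (on their side) to POST here on each event.
	http.HandleFunc("/hooks/payments", func(w http.ResponseWriter, r *http.Request) {
		var e event
		if err := json.NewDecoder(r.Body).Decode(&e); err != nil {
			http.Error(w, "bad payload", http.StatusBadRequest)
			return
		}
		log.Printf("received %s event %s", e.Type, e.ID)
		w.WriteHeader(http.StatusNoContent) // acknowledge fast, process async
	})
	log.Fatal(http.ListenAndServe(":9000", nil))
}
```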
Great video, start to finish.
I use Go for backend and Flutter/Next for front end. This is the golden balance IMO.
Thanks, you explained it really well. It's good to hear something that connects all those topics.
I remember following your first c++ class, amazing channel and content.
Amazing stuff Caleb.
Great video. Must have taken ages to put this altogether. Thanks for sharing.
Amazing, well done, sir!
Very good overview, hard and amazing work!
Btw, Cassandra is open source (43:07). It is used by other companies as well, including Uber, Walmart, and Netflix.
Epic journey for a mobile dev! Cheers
Excellent, I love info presented this way
Great video! Would also love to see a separate mind map of the analytics based SQL DBs
this is gold, thanks mate!!!!!!
Welcome Back ♥
Delphi / Lazarus are very powerful contenders: a blazing fast compiler and binary, and a very expressive language (built-in readability) that is great for managing large code bases, yet their back-end development capabilities are simply not talked about. I often wonder why.
My choice for backend technology is Swift and Hummingbird. Server-side Swift is not necessarily tied to Apple, although developers who know Swift tend to come from iOS/macOS. Swift has evolved really nicely as an open-source programming language for many years. It is a very modern language with amazing performance. I've chosen Hummingbird as it is much simpler than Vapor; they are both based on Apple's SwiftNIO framework. I believe this choice is much easier to implement while achieving better performance than most others, if not all. I must admit that a big part of my choice is the fact that I'm an iOS/macOS developer, and using the same language for both client and server is a big advantage, especially since in Swift you have the Codable protocol, which allows you to easily transfer any Codable struct between client and server using the same Swift code for defining the models.
For my personal projects (outside my day job), as the only developer, I'd be more than happy to have a successful app targeting iOS/macOS only, without support for Windows or Android.
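For readers who don't write Swift, here is roughly what that shared-model idea looks like translated to Go, with struct tags and encoding/json standing in for Codable - the type and fields are invented for illustration, and this is only an analogue, not the Swift mechanism itself:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

// Profile is one model definition shared by client and server code
// (the rough equivalent of a single Codable struct in a shared Swift target).
type Profile struct {
	Name  string `json:"name"`
	Score int    `json:"score"`
}

func main() {
	// "Client" side: encode the shared type to JSON.
	var wire bytes.Buffer
	if err := json.NewEncoder(&wire).Encode(Profile{Name: "ada", Score: 42}); err != nil {
		panic(err)
	}

	// "Server" side: decode into the very same type - no duplicated models.
	var p Profile
	if err := json.NewDecoder(&wire).Decode(&p); err != nil {
		panic(err)
	}
	fmt.Println(p.Name, p.Score)
}
```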
This looks like the requirements list for every "Full-stack" developer job advertised recently.
This video has been very informative 😊😁
Super content rich video! Awesome. 🎉
1:33:21 so much confidence
That might mean he doesn't make much software to get bugs ... :
This is a brilliant video
Wonderful! Thank you!
Fire video 🔥🔥🔥
Very informative.
Thank you very much 🥰🥰
Amazing Video
man thank you very much, hope you doing well !
Excellent , Thanks
Mind map to depression
Second this 😂
Great video. For PHP, you forgot Symfony - it is one of the best for backend development.
Thanks Caleb ♥
Very very underrated
wow this is really helpful!
I like PHP and Flask for backend. I use PHP more heavily.
You're my second favorite Curry, mate!
Happy to see you after a few years, from the database design course.
I FUCKING LOVE YOU, YOU HELP SO MUCH WITH MY DEVELOPMENT AS A SOFTWARE DEV
Excellent 👌
Excellent video, Thank you 10:10
Thanks !
Brilliant!🥰☺❤
This guy has come a long way. He used to use a whiteboard to explain. Seeing him using modern tools feels different. Good for him, though.
superb
🥰
Thanks for the video! What tool did you use to create diagrams?
very informative
THANKS.
my hero
Excellent stuff
Quarkus and Micronaut are missing. They have a tiny market share but are important because they are both new implementations of cloud-specific concepts in Java, unlike Spring.
Also, I can't see any message broker in the mind map. Protocols aren't everything.
This is a very informative video. Thank you for the explanation!
Though I have a question:
At 49:22 Supabase was mentioned as a NoSQL solution.
But it is running Postgres - shouldn't it be considered an RDB?
Yea, the mind map link has been updated 👍🏻
Damn, I found a world-class engineer now!!!
If I may ask, how much does your 1:1 mentorship cost?
Hi, Caleb. I wanted to apply for the mentorship, but I won't qualify because I'm from South Africa. Still, thank you for this amazing work here, especially this roadmap.
LET'S GO
Thanks for all the knowledge, any chance for Zig tutorials?
Hello, this video is super useful. Can you create a video like this about microservices as well?
the Zac drip is doing its influence
I only wish the background was not so bright, but I'm not complaining!
Agreed, next time I'll try to make it dark mode
Hi, this was great. May you please do the same for frontend?
Prisma support for MongoDB is very limited, especially if you want to do something a little complex like searching across a whole table or a combination of tables (joins).
Can u do it for front end
I will send this to all HR
192.xxxx is not a loopback IP address (1:19:00)
Apologies, I meant 127.
My experience as a backend developer is that you have to do a lot of frontend work...
WOW! THANK YOU!
Sorry, but isn't Supabase built on top of Postgres?
Yes, that’s corrected on the mindmap
Hey Caleb. May you please do Flutter tutorials😪😪
We need a follow-up video asap 😊
Thanks for the great video. I'm just confused about Supabase - why is it categorized as NoSQL?
Oversight on my part - I just mentally grouped it with Firebase. Thank you!
A classic example of a rabbit hole 😅
Actually, for Java there is "also" Java Enterprise Edition and its free and commercial implementations.
A framework is more than a lib: it has an execution loop you then add things onto.
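A tiny Go sketch of that distinction, under the usual "inversion of control" definition: with a library your code calls the function and keeps control; with a framework you register a callback and the framework's loop decides when to call you. The port and payload here are arbitrary:

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

func main() {
	// Library: your code drives; encoding/json is a passive tool you call.
	out, _ := json.Marshal(map[string]int{"answer": 42})
	fmt.Println(string(out))

	// Framework-style inversion of control: you hand net/http a callback,
	// and its accept/dispatch loop decides when that callback runs.
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write(out)
	})
	log.Fatal(http.ListenAndServe(":8080", nil)) // the loop owns main from here on
}
```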