Why I’ll never deploy to a VM again

  • Published: 18 Jan 2024
  • My Courses
    📘 T3 Stack Tutorial: 1017897100294.gumroad.com/l/j...
    My Products
    📖 ProjectPlannerAI: projectplannerai.com
    🤖 IconGeneratorAI: icongeneratorai.com/
    Useful Links
    💬 Discord: / discord
    🔔 Newsletter: newsletter.webdevcody.com/
    📁 GitHub: github.com/webdevcody
    📺 Twitch: / webdevcody
    🤖 Website: webdevcody.com
    🐦 Twitter: / webdevcody

Comments • 268

  • @GanryuMVP 6 months ago +468

    Save you 10m: His argument is that he dislikes manual orchestration and likes services that do it automagically for you in their own VMs (serverless).

    • @lpanebr 6 months ago +15

      I figured that out after 7 min. lol.

    • @jonasbadstubner2905 6 months ago +49

      You get vendor-locked so easily there. And you have to throw a lot of money at this… so it’s not for me, and I would not recommend this to anyone if they asked me.

    • @jamesblack2719 6 months ago +18

      EC2 is a good choice if your app runs more than 12 hours a day. I have an app that runs every minute to pull data and store it in a Mongo database. EC2 running a Docker image makes sense, as running it in ECS would cost a lot more money. I don’t need to scale, as I have six small EC2 instances, so it runs about every ten seconds.
      We build and push to a repository, then have the same script log into each EC2 instance, update the version, and restart.
      I don’t get why he can’t figure out how to automate it; we use a Gradle script and our jump server has a shell script, so it is easy.
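The build-push-then-update loop this comment describes could be sketched roughly as below. Everything here is hypothetical (host IPs, image name, container name); by default the script only prints the commands it would run.

```shell
#!/usr/bin/env bash
# Hypothetical rolling update over a fixed fleet of EC2 instances.
# DRY_RUN=1 (the default) prints each command instead of executing it.
set -euo pipefail

VERSION="${1:-1.0.0}"                        # release tag to roll out
HOSTS=("10.0.1.10" "10.0.1.11" "10.0.1.12")  # hypothetical instance IPs
IMAGE="registry.example.com/data-puller"     # hypothetical image name
DRY_RUN="${DRY_RUN:-1}"

CMDS=()                                      # commands we would run, in order
for host in "${HOSTS[@]}"; do
  # Pull first so the stop/start window stays short, then replace the container.
  CMDS+=("ssh deploy@${host} 'docker pull ${IMAGE}:${VERSION}'")
  CMDS+=("ssh deploy@${host} 'docker rm -f app; docker run -d --name app ${IMAGE}:${VERSION}'")
done

for cmd in "${CMDS[@]}"; do
  if [[ "$DRY_RUN" == "1" ]]; then echo "DRY RUN: $cmd"; else eval "$cmd"; fi
done
```

Because the instances are updated one at a time, the other five keep polling while each one restarts, which is presumably why the commenter doesn't need anything fancier.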

    • @phineas6871 6 months ago

      What’s the scale of your ops and how has the cost been?

    • @jamesblack2719 6 months ago

      @phineas6871 The EC2 is to pull in changes so customers can see them in near real time. We have a few million assets we track, with repairs and orders among other pieces of data on them, which is why we run our queries to get any updates so often. The cost for EC2 is just a few dollars for each per day. We do use ECS for the web application parts though, as that is cheaper with ECS.
      I don’t like Kubernetes, but that is a preference.

  • @avwie132 6 months ago +29

    I fail to see why this is a VM problem and not just a “we made it too complicated ourselves” problem.

    • @WebDevCody 6 months ago +2

      How would you have simplified this back in 2015?

    • @avwie132 6 months ago +22

      @WebDevCody So because of your experience in 2015, 9 years ago, you decided that deploying to VMs is something you’ll never do again?
      How was your JS dev experience in 2015? Why not make a video “why I’ll never use JS anymore”?
      But, to get back to your question: you made it too complicated by assuming you needed microservices and micro UIs in the first place. If you have a complex application, you’ll have a complex deployment. The microservice and micro UI fad was a solution for the 0.1% of companies.
      The problem wasn’t the VMs, the problem was the architecture.

    • @indramal 6 months ago

      @avwie132 What is your recommended method?

    • @abheykaul 3 months ago

      Clusters, webhooks, nginx, and a little bit of shell script.

  • @noherczeg 6 months ago +41

    EC2 is not bad just because you had a convoluted CI/CD process. Nowadays you can do at least half of what you described with GHA + Ansible / Terraform.

    • @MrRecorder1 6 months ago +1

      That is what I thought. Just like packaging containers (and if you can ship those, you are done here!), you can build the container images locally. I find it no problem to just reprovision microservices a whole machine at a time, tbh. What is on there? 3-4 GB? That is like 15 minutes of VM packaging at most. Just replace the entire box, done!

  • @etagh 6 months ago +73

    You can do docker compose if it is a single VM, or go Kubernetes for a bigger system.

    • @drprdcts 6 months ago

      My single VPS with 2 cores and 4 GB RAM handles thousands of daily users easily. My recipe: Cloudflare + Next.js ISR with the pages dir + CapRover.
      Honestly, using CapRover feels like a cheat code. It's a self-hosted Heroku alternative and it scales infinitely, both horizontally and vertically. I wish more people talked about it...

    • @sergiosoares5045 6 months ago +3

      K3s on a single VM works well, and better, because you already have blue/green and rollouts to deploy.
      And it uses Traefik by default, which is great.

    • @90hijacked 6 months ago

      or just docker swarm, blue/green with docker service update yadda_yadda --image blah/blah:blah

    • @InfiniteQuest86 6 months ago

      Those didn't exist when he did this.

    • @Patrk38 6 months ago +1

      @InfiniteQuest86 docker compose didn’t exist? 😂
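Spelled out a bit, the swarm flow that reply alludes to looks something like the following (service and image names are made up for illustration):

```shell
# Hypothetical: create the service once, then every deploy is a rolling update.
docker service create --name web --replicas 3 -p 80:8080 \
  registry.example.com/web:1.0.0

# Swarm replaces replicas one at a time, so old and new versions briefly
# coexist behind the routing mesh; --update-failure-action rollback reverts
# to the previous image automatically if the new tasks fail to start.
docker service update \
  --image registry.example.com/web:1.1.0 \
  --update-parallelism 1 \
  --update-failure-action rollback \
  web
```

These are command fragments that need a running swarm, so they are shown for reference rather than as a runnable script.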

  • @user-qr4jf4tv2x 6 months ago +69

    we made web development complicated

    • @lennarthammarstrom1321 6 months ago +5

      Finally we've gone full circle with RSC: HTML from the server is back and simple app services are in style.

    • @InfiniteQuest86 6 months ago +1

      Yeah, he's not the one that did this. Everyone made web development complicated, and now everyone is forced to do this nonsense.

    • @codingprograms2078 5 months ago

      😂😂😂😂😂😂😂😂

  • @siwakotisaurav 6 months ago +47

    It's not that difficult, tbh. Docker Compose for single-machine hosting, Docker Swarm for something medium scale, and by the time you make it big, you can just hire someone to do Kubernetes for you.
    With Vercel I found the bandwidth and function call costs too much for a non-SaaS product.
    I run a site with 30M visits a month on a $200-a-month server, which would cost like $1-2k+ on Vercel.

    • @paulbornuat5655 6 months ago +1

      Is the cost of the time / people required to do what you describe not higher than the extra $1-2k you pay on Vercel? Sounds like "just hiring someone" would cost more than $800 each month.

    • @johnmcway6120 6 months ago

      @paulbornuat5655 I keep hearing this argument but I just can't buy it. You hire somebody, yes, and they would have it set up in N amount of time, but then once it works, it works. Then you have available resources to perform other important tasks tailored to your specific business needs.

    • @123mrfarid 6 months ago +1

      @paulbornuat5655 Nah, Docker Compose or Swarm is pretty easy nowadays. You can use panels or even ask AI for reference scripts.

  • @EER0000 6 months ago +13

    For one of my projects in those days I was deploying on a Windows VM with a PowerShell script: basically it downloaded the zip file and updated the folder the webserver was serving. Almost no downtime, and easy rollbacks too, since all previous versions remained on the server. Later on we were using Kubernetes in the worst possible way; my specific project worked fine, but the other services we were hosting were really awful on containers.
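The keep-every-version scheme that comment describes maps naturally onto a symlink layout on any OS. A minimal sketch in shell, with entirely hypothetical paths and release names:

```shell
#!/usr/bin/env bash
# Hypothetical versioned deploy: each release is unpacked into its own
# directory and a "current" symlink is flipped to it. Rollback is just
# pointing the symlink back at an older release directory.
set -euo pipefail

BASE="${BASE:-/tmp/demo-site}"   # hypothetical web root

deploy() {
  local version="$1" src="$2"
  mkdir -p "$BASE/releases/$version"
  cp -r "$src"/. "$BASE/releases/$version/"
  # -sfn replaces the existing symlink itself rather than descending into
  # the directory it points at, so readers never see a half-updated tree.
  ln -sfn "$BASE/releases/$version" "$BASE/current"
}

rollback() {
  ln -sfn "$BASE/releases/$1" "$BASE/current"
}

# Demo: ship v1, ship v2, then roll back to v1.
mkdir -p /tmp/src1 /tmp/src2
echo "hello v1" > /tmp/src1/index.html
echo "hello v2" > /tmp/src2/index.html
deploy v1 /tmp/src1
deploy v2 /tmp/src2
rollback v1
cat "$BASE/current/index.html"   # → hello v1
```

The webserver is pointed at `current/` once; deploys and rollbacks never touch its config.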

  • @sarabwt 6 months ago +7

    Ansible scripts would handle this for you. And a load balancer/reverse proxy was missing. When doing an update, you update the config on the load balancer to stop requests coming in, do the update, and update the config again. Or you could provision a new VM, set it up, put it in rotation, destroy one of the old ones, and repeat.
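With nginx as the balancer, "updating the config to stop requests coming in" usually means marking a backend `down` in the upstream block and reloading. A hypothetical sketch that regenerates such a block (paths and backend IPs made up; a real setup would write under /etc/nginx/ and run `nginx -s reload`):

```shell
#!/usr/bin/env bash
# Hypothetical: regenerate an nginx upstream block with one backend drained,
# so that box can be updated while the other keeps serving traffic.
set -euo pipefail

CONF="${CONF:-/tmp/upstream.conf}"
DRAIN="${1:-10.0.1.10}"          # backend being taken out of rotation

{
  echo "upstream app {"
  for backend in 10.0.1.10 10.0.1.11; do
    if [[ "$backend" == "$DRAIN" ]]; then
      echo "    server ${backend}:8080 down;"   # drained: gets no new requests
    else
      echo "    server ${backend}:8080;"
    fi
  done
  echo "}"
} > "$CONF"

cat "$CONF"
```

Run once per backend (drain, update, regenerate without the `down` flag) and you get the rolling update the comment describes, no orchestrator required.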

  • @esc-sh 6 months ago +5

    There is one huge downside to this: vendor lock-in. This is not a problem when you are starting out, but all of it will come back with interest if/when the scale becomes large enough.
    If you are building something that you know will be large down the line, in my opinion, keeping the apps in platform-agnostic orchestration tools like Kubernetes will give you the best of both worlds. A fully managed Kubernetes solution will make most things easy while avoiding vendor lock-in in the future.

  • @williamschaefermeyer7007 6 months ago +1

    I have a TypeScript MERN stack app with the frontend and backend in separate repos, where the frontend just proxies to the server's port. I'm curious where/how you'd recommend deploying a project like this, because right now I just have a DigitalOcean droplet I run them on with pm2. I'm worried about autoscaling like you mentioned, but haven't been able to find a better solution with the limited time I have to look into deployment processes. Loved the video, thanks!

  • @favanzzo 6 months ago

    any plans for a react course?

  • @userasd360 6 months ago +2

    Can't you create a single Docker Hub repository and push the microservice images, then use those images in Kubernetes or Swarm?

    • @WebDevCody 6 months ago +1

      Yeah, but then you’re no longer doing manual orchestration, which was the whole point of this video; you’re using an orchestration manager 😂 I don’t think Kubernetes was as big in 2015 (or if it even existed yet).

    • @userasd360 6 months ago

      @WebDevCody Any suggestions on how to improve solutions architecture capabilities?

  • @kumardeepanshu8503 6 months ago +1

    And how are you doing it right now? Can you give an example, like how to actually deploy an API to prod and connect it to the frontend?

    • @WebDevCody 6 months ago

      I use serverless for my api, but there are better options now, including container hosts.

    • @kumardeepanshu8503 6 months ago +1

      @WebDevCody So in my company they insist on using AWS. We have a Django API which I wanted to deploy; currently I have deployed it to EC2 and everything is manual. Till now it is working fine, because we only use the Django API for one part of our application, everything else is on tRPC, and EC2 is handling everything very well, but I have some concerns about the scaling of it, like I have deployed Redis on a different EC2 and connected it to the main EC2. Do you recommend any other way to do it?

    • @MuneebR7 6 months ago

      @kumardeepanshu8503 You can use Docker and upload your Docker image to AWS and use serverless...

    • @WebDevCody 6 months ago +2

      @kumardeepanshu8503 I mean, just monitor the box memory and CPU. You can vertically scale for a long time without needing more machines. If you end up needing to horizontally scale, I’d try to use some type of managed host that auto-scales your services. Redis is probably best on its own separate machine. Also you’d want to set up a VPC so that only your services can access that Redis machine, and don’t allow public access to your Redis or database instances.

    • @kumardeepanshu8503 6 months ago

      @WebDevCody thank you so much for the insight.

  • @b_wheel 6 months ago +2

    Sysadmins. Because even developers need heroes.

  • @IvanRandomDude 6 months ago +5

    But how would you deploy microservices on Vercel?

    • @CourageToGroww 6 months ago +1

      You would use serverless functions.

    • @aspinwallx 6 months ago +1

      You don't. It's meant for full-stack applications. If you want to deploy microservices you need other tools.

  • @nimmneun 6 months ago +3

    I haven't encountered any deployment+execution setups that didn't use symlinking in ages... seems hard/complicated to achieve six 9s and instant rollbacks without it 😅 I did encounter Jenkins in a past job, but it was only used for worker scheduling there. Can't say I'm a fan 😂

  • @stefandecimelli5241 6 months ago +20

    The Jenkins+Artifactory combo is a match made in hell, and we (anon big co) are STILL using it for everything

  • @richardhoppe4991 6 months ago +16

    Really awesome video. You hit on so many different services and concepts in 10 mins. I am curious as to why your org was using Puppet over Terraform if you guys were already using Consul. It looks like Terraform came out in 2014, so it might have been too new.

    • @WebDevCody 6 months ago +4

      A lot of the existing system was using Puppet. As you know, it’s hard to switch to new tech on a dime, especially on larger projects or at companies where everything goes through a tech review and approval process. Also, I don’t think Terraform is for orchestration; it’s more for defining infrastructure. Puppet and Chef are for configuring those machines after you set them up using Terraform.

    • @richardhoppe4991 6 months ago

      @WebDevCody Ah, you're right, I mistakenly lumped Chef / Puppet into the IaC tools camp. Cheers and appreciate the response.

    • @ivanmaglica264 6 months ago

      Puppet used to be the thing back then. The problem was it made you install Ruby, which turned me away from it, since I did not want to install a Ruby environment just to run Puppet.

  • @levyroth 6 months ago +3

    Your first problem was using AWS when you can self-host.

    • @WebDevCody 6 months ago +5

      Nice, that’s taking my complaints to the ultimate pain.

  • @luthecoder 6 months ago

    Can you make a video about how you manage authentication with SST in Next.js 14? What do you use to auth users? Amplify? Or do you use a custom solution?

    • @WebDevCody 6 months ago

      Next auth works fine.

  • @jly_dev 6 months ago +4

    I count my blessings that I started after Docker, ECS, and Terraform were already well established -- older methods sound terrifying 😂

  • @zoranProCode 6 months ago

    So what are the better options and how expensive are they?

    • @WebDevCody 6 months ago

      I'd personally just use serverless, or a container host like Elastic Beanstalk, DigitalOcean App Platform, Heroku, etc. Something where I don't have to maintain a VM myself and the autoscaling is already set up for me.

  • @kirilmilanov1096 6 months ago

    Great video man. Maybe make one about serverless (lambdas), or service discovery tools like Consul/Linkerd

  • @CadisDiEtrama000 6 months ago +1

    I had to learn most of what you are talking about on my internship, for just a webapp + actually building the webapp... So I feel you 😅
    However, I am still happy I did all of it, because I learned a lot, it was still useful in all jobs after, and it made me appreciate how easy it is if you just use services instead.
    And this project you did doesn't even seem overengineered... Those are just the problems inherent in doing all of that right, and if you don't go all in you will have problems sooner or later.

  • @banafish 5 months ago

    Big fan of your channel. Been watching for a while and now am working as a junior, though we're probably the same age. :) The videos you've done on deployment and Next.js/AWS/SSR have been some of the most interesting. Super fun for me to watch and learn from. I would love if you dug more into Remix and what the deploy looks like on that side vs. Next. I am extremely hesitant to use Next.js at the moment, but still don't know if using v13 is still a reasonable option.

  • @madimetja-M 6 months ago

    Trying to learn system design and don't know where to begin, any suggestions?

  • @oryankibandi3556 6 months ago +16

    This looks like something that could be handled with Kubernetes + ArgoCD + Istio. Not sure whether these tools were available when you worked on that project at the time, but I can confirm that I never enjoyed using Puppet or Chef.

    • @WebDevCody 6 months ago +1

      I think k8s came out in 2015, when I was working on this project.

    • @leularia 6 months ago

      Hey bro, how can I contact you for questions?

    • @sergiosoares5045 6 months ago +1

      Besides being more modern, K8s + ArgoCD + Istio can be very complex too, but today with managed k8s it's easier to operate.

  • @ultrasive 6 months ago

    I mean, I think it's fine to run in VMs as long as you use containers, because it makes it much easier to run CI/CD, especially if you don't want to hand off your private version control repos to a platform. You can just set a webhook on the container registry and it would automatically blue/green the ingress Kubernetes resource type.

  • @underflowexception 6 months ago

    I personally use Deployer (PHP) to set up CI in such situations, and it works well for both complex and simple setups. Not for everyone, but I prefer the VM/cloud instance approach.

  • @JonBrookes 6 months ago

    Thanks for posting this, very interesting to hear your views. The infra you described reminded me of an on-prem setup that has been 'uplifted' to the cloud, and was the way some folks did CI/CD back in the day when everything was hand-crafted in the DC. Some things became a sort of labour of love, sometimes by necessity, as this was the way things were then. I would echo what is said below also: Ansible, Terraform and other tools have made a lot of the older tools redundant / able to be refactored away. I understand fully that that sort of complexity / friction would put anyone off rolling your own VMs in preference to BaaS. Costs can mount up though as your projects grow, and we can be back in the room, devopsing a solution again. Ha!

  • @iskandar149 6 months ago

    Can you make a video about how you do CI/CD nowadays in your job? I mean, I am a new developer and I thought that this was the typical approach for CI/CD; I didn't know it was what you used back in 2015 lol

  • @thanhquachable 6 months ago

    Thanks for sharing. For my API I am using GitHub Actions, Docker and AWS Lambda, which is quite straightforward and convenient; for the frontend, Vercel makes things simple and quick to go live.

  • @thomastang2587 6 months ago

    What about a Windows 10 VM?

  • @gordonta 6 months ago +1

    Used Jenkins for a week before replacing it with a simple GH Actions workflow. Jenkins was eating 40% of my VPS RAM for some reason ._.

  • @kumardeepanshu8503 6 months ago +2

    Does EC2 come under VM?

    • @dhananjay7513 6 months ago +1

      Pretty sure by VM he means an IaaS platform (Infrastructure as a Service), so yes, EC2 is IaaS/VM.

    • @kumardeepanshu8503 6 months ago

      @dhananjay7513 Can you suggest any alternative to it? I have a backend API written in Django with Docker, and I wanted to host it; currently I am using EC2 for it. Is there any other way to do it?

    • @IvanRandomDude 6 months ago +2

      @kumardeepanshu8503 Koyeb, Fly.io, Elastic Beanstalk

    • @WebDevCody 6 months ago +1

      Use a container host that has scaling.

    • @dhananjay7513 6 months ago

      @WebDevCody Basically any PaaS: Heroku, Railway, Render?

  • @CouchProgrammer 6 months ago

    The main idea is to be able to buy your own server blades and maintain them yourself in the future. This works well if you are doing something non-web related. And it's almost free compared to cloud solutions. Kubernetes is a good solution, but only once the question of needing more system administrators arises. That means you need to hire a devops specialist. If you need 2 devops, you either hired a bad devops, or you need technical support for clients.
    In my experience, the problem is usually not caused by the infrastructure, but by the fact that the services are not separated correctly. And you have to understand this diagram only because, due to the strong coupling of services, they no longer deploy independently.

  • @yaaaayeet745 6 months ago +2

    Thumbnail suggestion: the diagram is confusing (too many small elements), and it doesn't depict a virtual machine anyway. Instead, you could add a CPU image or something simpler. Also, the black color of your T-shirt is merging with the background of the diagram. I think both elements' colors should be a little different (yes, I am a certified 🤓)

  • @yassinesafraoui 6 months ago

    Just a few days ago (maybe weeks) I actually commented asking why you didn't use a VM and what overhead stopped you from doing so. Thank you for making this video; I now understand what you're talking about. If these tools are necessary in a project, I certainly agree that it's too much overhead; I didn't know all this was needed just to get something deployed on a VM. Which begs the question of whether deploying on a VM is simpler nowadays. I expect it will be simpler, but not sufficiently so to justify transitioning to a VM, especially when we're talking about a big team like it was in your case. But I still think maybe one day we may have the next Vercel (pun not intended lol) who will make deploying on VMs worth a shot. Maybe I'm wrong (probably haha).

    • @WebDevCody 6 months ago +2

      Deploying is easier with containers and Kubernetes now. If you have to use a VM, set up a Kubernetes cluster and deploy using that. It still requires a skilled devops engineer to set up, but it saves you the time of needing to use Puppet or Chef to manage these VMs.

    • @LtSich 6 months ago

      @WebDevCody You don't always need a k8s cluster... It all depends on the project and the needs...

  • @KerimWillem 6 months ago

    Very interesting!

  • @indramal 6 months ago

    Why not use GitHub Actions? It will build, test and upload code directly to a VM using FTP or OIDC. And serverless is more costly. Doing GitHub Actions or a manual method is better than paying a high cost each month. Any comments?

    • @WebDevCody 6 months ago

      GitHub Actions came out in 2019 😂 Serverless is only more costly if your requests per month cross the threshold of running your own VM, which actually takes a lot of requests to do.

    • @indramal 6 months ago

      @WebDevCody Yes, all functions can run 1 million times per month free on AWS Lambda, so that is too small. If you mean a small site, it is ok, but you were talking about large sites. Why did you say “GitHub Actions came out in 2019”? Is it too old? Or missing features?

    • @volkan8583 5 months ago

      @indramal The timeline is 2015 😋

    • @indramal 3 months ago

      @volkan8583 Where is 2015? I don’t understand.

  • @jeffreysmith9837 6 months ago +2

    it's so much cheaper to do it manually. Monoliths are easy to manual deploy. Distributed systems with logging takes a long time to learn but it's so rewarding

  • @ChuckNorris-lf6vo 6 months ago

    Nice content. Helpful. Thank you.

  • @colyndev 6 months ago +1

    I was part of many teams in the 2008-2018 timeframe that did exactly this kind of thing. I remember thinking it sucked (e.g. when asked to make sure all the SSL certs were updated). Once AWS was a thing, I think it opened the flood gates and people convinced themselves all of this was necessary.

    • @WebDevCody 6 months ago

      I mean, what was the alternative? Rent a single VPS with a ton of memory and CPU and host your entire service on one machine?

  • @ThisFiniteWorld 6 months ago

    I had a similar experience back in 2018. It was built by people who didn't know much about it, and the pipeline was one huge mess: a single job which deployed everything.
    Vercel and similar services made it so much easier, so we can focus on the app and get to profit way faster.

  • @Taddy_Mason 6 months ago

    That bit on Jenkins restarted my PTSD.

  • @CharcoalDaddyBBQ 6 months ago

    Spent a day learning Terraform & Ansible and I haven't had to touch a VM in months... plus it's dirt cheap to spin up another node if I need it, compared to a 'cloud' company.

  • @donbernie9346 5 months ago

    I’ve done this in the past, manual VM configuration and manual orchestration of services, but guess what: cloud services are becoming super expensive, and now I believe that cloud is good to start and iterate quickly, but you should eventually move back to your own VMs, or even bare metal, to reduce cost in the long run.

  • @alarice2136 6 months ago +4

    where's my boy galactus at?

    • @Lexaire 6 months ago +2

      Love Galactus.

  • @roganl 6 months ago

    Been there, done that. Unfortunately, all that automation still needs to happen on MANY platforms. Vercel & Elastic Beanstalk don't scale adequately to "moderately" large (3M instantaneous active users), and then these PaaS Rube Goldbergs get rebuilt "in-house". Not usually necessary for modest user counts, but VERY much still needed for "web" scale.

  • @HAMYLABS 6 months ago

    I'm with you. I think in most cases you just want a box to run your code on and nothing else. I've had a good experience with serverless containers like Google Cloud Run or DigitalOcean App Platform.

  • @trapfethen 6 months ago +2

    Just keep in mind what scale you're building for. Until you need multi-machine scale, just deploy to a single VM using an autorun script/GitHub Action. As you require more flexibility, then pull in these other tools (either managed or self-orchestrated; you know your skillset and pain tolerance for learning better than I do). A single VM can easily manage hundreds of thousands of daily active users, provided your site / app is running on good old-fashioned HTTPS requests. You have less headroom if you're planning on using websockets instead.

    • @WebDevCody 6 months ago +1

      Sure, but what if your client says you must have “multi-region support” and three 9s of uptime, or else you won’t win the contract? That throws a wrench in a lot of things.

    • @trapfethen 6 months ago

      @WebDevCody It does. My general rule of thumb is based around the nature of the business (are they already established, and do they have a sizable customer base?). If they already demonstrate the need, or it's likely in the next five years, then you should be crunching the numbers with them to determine the best way forward, assuming that scale is coming. If they can't demonstrate that need, especially if they are a younger / not yet proven venture, I would advise them against it. Too many businesses burn through all their runway capital by being too eager and optimistic about the demand for their product. They find their pockets empty after a few years, just when they have figured out the pivots needed to survive. Some companies manage to hold on through that, but most don't. If the client is one that will listen to you lay all this out and insist on the scale anyway, without thoughtful answers to these problems, then I would honestly pass on the project (something I understand not everyone has the luxury of doing, either needing the money or being part of a team where you don't get a say in these decisions). You don't want to be involved when things fall apart; it is never pretty.
      All this to say, you should be studying and understanding these tools if you want to improve your skillset as a go-to dev, but you shouldn't be afraid to look at someone and say "You're making it more complicated than your product requires, thereby jeopardizing the long-term viability of the business / product line". Again, keeping in mind that most people are not really in a position to do the latter. This is a lesson I learned the hard way throughout my career, and I wish someone had impressed it upon me sooner.
      I'm not really arguing against your point at all. I'm more pulling focus onto something that has become an assumption in our industry: that everything needs to scale. There are things that require the ability to scale quickly on short notice, but they are the minority (though they represent a disproportionate amount of the jobs in the industry, because the size and complexity of those endeavors require more devs). Just food for thought.

    • @sarabwt 6 months ago +1

      @WebDevCody What is meant by "multi-region support"?

    • @WebDevCody 6 months ago +1

      @sarabwt If your east region dies for whatever reason, people can still connect to your west region servers.

    • @sarabwt 6 months ago +1

      @WebDevCody Don't you get into a ton of complexity with state by doing that? For me, deployments seem less problematic than syncing DBs and dealing with stale data and DB failover. Were DBs in 2015 capable of handling this? Afaik, this is mostly unsolved to this day, at least in comparison to the complexity of deployments.

  • @RafaelMilewski 6 months ago

    What you just described would have been very easy to do "back then", as you mention, if you had used Docker Swarm; pretty sure Terraform existed at that time as well, and probably Traefik...

  • @justinstorm 6 months ago

    This is kinda mixed for me. I like this devops stuff, but I agree it can be a complex nightmare to manage sometimes. I guess it depends on how much infrastructure stuff the developer wants to manage in addition to developing the software. I wish there was a second half to this: what do you use now, if not VMs?

    • @WebDevCody 6 months ago +1

      Serverless (AWS Lambda), and write the code in a way that makes it easy to switch off if needed.

  • @ivanmaglica264 6 months ago

    This is pre-Docker. I get it, I've done something similar. Now we still use VMs (dev and stage mostly), but we deploy our stuff pretty much exclusively through Docker. I don't want to touch any other deployment method. For DEV and STAGE it's best, because you can debug much more easily and have more direct control and visibility into the system.

  • @proevilz 6 months ago

    I typically used Laravel's Forge service back in the day, even if I wasn't doing anything with Laravel.

    • @travisricker 5 months ago

      I've been loving Laravel Forge. I use it to provision and deploy Laravel+Inertia+Vue3 apps.

  • @yassinesafraoui 6 months ago

    As for Jenkins, I've never used it, but I feel you; even its name feels bad lol

  • @joswayski 6 months ago +1

    All of this is like 100 lines of CDK with Fargate now. Wild

    • @WebDevCody 6 months ago +1

      Yeah, everything is easier now compared to what I mentioned 😂 containers make life simple

  • @TheMoisex01 6 months ago

    Sounds like not knowing the tools needed to solve the issue; there are many ways of achieving little to no downtime with nginx and upstreams to microservices.

  • @toddfisher8248
    @toddfisher8248 6 months ago

    hrm… if you're just deploying a new service, EC2, SSH, and systemd behind an ELB is pretty solid
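A sketch of the systemd half of that setup (unit name, paths, and user are hypothetical); the ELB side is just a health check against the service's port:

```shell
# systemd keeps the process alive and restarts it on crash; the ELB
# health check decides when the instance receives traffic again.
cat > /etc/systemd/system/myservice.service <<'EOF'
[Unit]
Description=My API service
After=network.target

[Service]
ExecStart=/opt/myservice/bin/server
Restart=always
User=appuser

[Install]
WantedBy=multi-user.target
EOF

# Pick up the new unit and start it now plus on every boot.
systemctl daemon-reload
systemctl enable --now myservice
```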

  • @squareparticle
    @squareparticle 6 months ago

    Before Beanstalk was a thing, I was just imaging the production EC2 and launching the AMI into a load balancer.

    • @WebDevCody
      @WebDevCody  6 months ago

      that's a good idea, what tool did you use to create the EC2 image at the time?

    • @squareparticle
      @squareparticle 6 months ago

      Honestly, I would just right-click the EC2 and choose "Create Image". I did find out the hard way that this works better when the EC2 is not running.
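The CLI equivalent of that right-click, as a sketch (instance ID and names are hypothetical). The "works better when not running" lesson lines up with how imaging behaves: by default `create-image` reboots the instance to get a consistent snapshot, and `--no-reboot` skips the reboot at the risk of filesystem inconsistency.

```shell
# Safest: stop the instance so the disk is quiesced before imaging.
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 wait instance-stopped --instance-ids i-0123456789abcdef0

# Create the AMI; this is what "Create Image" does in the console.
aws ec2 create-image \
  --instance-id i-0123456789abcdef0 \
  --name "prod-app-golden-image" \
  --description "Golden image to launch behind the load balancer"

# (--no-reboot would image a running instance without restarting it,
#  but the filesystem may be captured in an inconsistent state.)
```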

  • @moneymaker7307
    @moneymaker7307 6 months ago +1

    This has to be a promotional video by Vercel.
    What does a VM have to do with the build and release pipeline? Even if you're using serverless, don't you still have a build and release pipeline???

    • @WebDevCody
      @WebDevCody  6 months ago +1

      I don't deploy using Vercel. The point of this video was to give insight into how we attempted microservices in 2015 and how orchestrating and provisioning virtual machines isn't a simple task. Using existing deployment services can allow us to focus on the real product and not waste time on scaling, monitoring, etc. I threw Jenkins in there because it highlighted the context of 2015, where we had to deploy and maintain our own Jenkins instance on, yet again, our own virtual machines.

  • @Goyo_MGC
    @Goyo_MGC 6 months ago +2

    I'm not gonna lie, I could not really keep up with what you were trying to explain with this one. To me it just sounds like you were not happy with your old stack. Maybe explaining with a comparison of a fullstack app hosted on a VM versus one hosted on managed services might help. Thanks for touching on those more complex and real production problems.

  • @georgesmith9178
    @georgesmith9178 6 months ago

    Well explained. It would have been even better if you had ended the talk with a solution that addresses all of the shortcomings you listed. Probably Docker, Kubernetes...?

  • @pacoserpico
    @pacoserpico 6 months ago +9

    This is why devs need a good devops team. Devs shouldn't have to worry about anything other than writing and testing code, to increase velocity. Also, Puppet and Jenkins are painful as hell; sorry you had to use those turds.

    • @IvanRandomDude
      @IvanRandomDude 6 months ago +3

      95% of devs work on simple projects that don't require any devops or infrastructure setup nowadays. Also, thanks to SaaS tools and serverless, web development is so simplified nowadays that even my grandpa could do it.

    • @levyroth
      @levyroth 6 months ago +2

      Or you could man up and self-host like we did before AWS created the mess we're in today. Processing power is dirt cheap nowadays.

    • @WebDevCody
      @WebDevCody  6 months ago +1

      Self-hosting has the exact same issues I described, and more, because now you need failure strategies for when your CPU, memory, disk, or power fails

    • @AmirLatypov
      @AmirLatypov 6 months ago +1

      Most of the time you don't need DevOps. Developers need to understand how it works anyway.
      I've set up everything myself on bare metal using Nomad and Docker

    • @furycorp
      @furycorp 6 months ago

      I've found that when devs don't think about this, they're not real full-stack devs, in that they risk only considering (or even being aware of) application-level solutions to problems that are easily, if not trivially, solved if one considers the FULL stack, which includes what can be done at the "infra" level. I saw inside a large startup that worked the way you say, and "velocity" meant write unmaintainable spaghetti as fast as possible, and "devs don't think about infrastructure" meant there was Ruby on Rails bs implementing DIY versions of features like moving files from one S3 bucket to another, or polling buckets for changes. No kidding.

  • @exapsy
    @exapsy 6 months ago +2

    Okay, now I get why, 5 years ago at an older company, we outsourced the whole devops deployment to a devops company.
    It's a mess. And worse, I've heard of all those names, but by the time I'd heard them, most of them, Jenkins excluded, had started being "deprecated" by the market.
    I'm kinda "glad" that this isn't the norm anymore? But also, it's not like everything is so much better today.
    You've got Docker, containerd, Docker Swarm, Kubernetes, Terraform, a thousand services in AWS. Like, objectively, it's still a mess. Maybe less of one, and it's so much easier to scale up horizontally or vertically. But today's tech is unfortunately not the pill that solves everything, and unfortunately I've had to learn about all of those too.

  • @kamiljanowski7236
    @kamiljanowski7236 6 months ago

    I've seen that kind of setup, but ECS has existed for 10 years at least.

  • @nguyenat6454
    @nguyenat6454 6 months ago

    hope you will make a devops series

  • @bshelling8922
    @bshelling8922 6 months ago

    Now there's containerization. Just package, build, deploy, done.

  • @Optimistas777
    @Optimistas777 6 months ago

    Why not just use Dokku????
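For anyone who hasn't seen it, Dokku gives you a Heroku-style `git push` deploy on a single VM. A sketch of the workflow; the app name and host are hypothetical:

```shell
# On the server (one-time): create the app.
dokku apps:create myapp

# On your machine: add the server as a git remote and push to deploy.
git remote add dokku dokku@my-server.example.com:myapp
git push dokku main

# Dokku builds the app (buildpack or Dockerfile), starts the new
# container, health-checks it, then switches nginx over to it.
```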

  • @sfalpha
    @sfalpha 6 months ago

    I think there is a wrong understanding of VMs if you're not working as a System Engineer or Infrastructure Engineer.
    A VM is not designed to run a single service; you should not restart or deploy a VM for every version you push. That's just wrong. You could do that for containers running inside the VM, but don't use VMs for that purpose.
    When you migrate or clone a VM, you do not shut it down or stop it. You freeze it, migrate it over to another physical machine, and let it continue its work. That's the whole point of VMs and VM orchestration: scale-out, migrations, or even disaster recovery.

  • @cheskoxd
    @cheskoxd 6 months ago

    Love your content, Cody. Thinking of starting YT as well, doing small and easy tutorials. Wish me luck ❤

  • @snippletrap
    @snippletrap 6 months ago

    ML people hosting local LLMs are learning these lessons for the first time

  • @leilei1129
    @leilei1129 6 months ago

    Way back in 2010, this was a common setup

  • @CallousCoder
    @CallousCoder 6 months ago

    And then he needs to interface with hardware... then a VM with hardware passthrough is very nice, or bare metal even better. Never say never, especially in IT :D We actually want to run as much in Kubernetes as possible, but we have hardware like access card readers and ID card readers to onboard people; they all need to run on a physical machine, with USB passthrough to their VMs, and they all run build agents to deploy.

  • @aardvarkansas7500
    @aardvarkansas7500 6 months ago

    Great video! Your project is definitely not a one-off (lots of folks used similar workflows), but serverless and Beanstalk were heavily used well before 2019... you didn't have to do it that way!

  • @blipojones2114
    @blipojones2114 6 months ago +2

    This is basically it at my current company.
    It's simpler tho cause they're doing it fkin wrong lol.
    Doing it "correctly" in a redundant fashion like this is complex...
    Like my answer to a lot of this is a banner that says "website maintenance from x - y", i.e. f off for 20 mins while I spin this stuff down and back up.
    High availability is overrated until those 20 mins cost 6 figures.

    • @WebDevCody
      @WebDevCody  6 months ago +1

      Yeah, most products should just have downtime if possible. It makes everything much easier

    • @LtSich
      @LtSich 6 months ago

      @@WebDevCody As I say to my clients:
      How much do 1 or 2 downtimes of < 30 min a year cost you?
      And those downtimes can be at night or during off hours, not always during the day.
      Now, look at how much the high-availability solution costs...
      If the downtime costs more, you can look at those shiny solutions.
      If the shiny solution costs 5x more... just accept that sometimes you will have short downtime...
      The goal is to make money, nothing more...

    • @WebDevCody
      @WebDevCody  6 months ago

      @@LtSich that's just deploy downtime. What if the memory in your VM dies, or the data center has an entire blackout? How long would it take to get everything back into operation? But yes, this requirement for little downtime and resilience comes with a price

    • @LtSich
      @LtSich 6 months ago

      @@WebDevCody In the last 5 years I've had maybe 2 or 3 memory issues.
      These are generally fixed in 1 or 2 hours.
      A whole DC going down happened once in 15 years (the whole DC was on fire)... I brought back 6k websites, on 20 servers, in 3 days.
      And a few times in those 15 years there were, yes, issues on the operator's side, but this is very, very rare and generally fixed very quickly.
      As I say, you need to evaluate the risk: how much the downtime costs, how much the shiny solution costs.
      If a 4-hour downtime in a year doesn't cost you more than high availability in the same year, then it's not worth the price.
      And a 4-hour downtime is a massive issue that is very rare... This happens once every 3-4 years per client for me... even less... In the last 5 years bare metal has become very, very reliable. And the cost of running bare metal is a fraction of SaaS solutions...
      Yes, you need to know what you are doing... This is why my clients pay me... They don't have an internal sysadmin, only me as support to manage their services...
      Of course a big company making thousands of € per hour on its web service needs high availability... But this is not for everyone...

  • @aslkdjfzxcv9779
    @aslkdjfzxcv9779 6 months ago

    docker is a fantastic abstraction.

  • @dumpling_byte
    @dumpling_byte 6 months ago +1

    This is why Kubernetes is the standard player for microservices.

  • @nickross4059
    @nickross4059 5 months ago

    Yes, rolling your own orchestration is not easy. But once you have a common recipe you just apply it over and over again, until something better comes along.

  • @colbyberger1881
    @colbyberger1881 6 months ago +1

    Then DevOps was born, so developers can code while others secretly cry with the pain of configuring Ansible, k8s, Jenkins, and cloud setups

  • @Kabodanki
    @Kabodanki 6 months ago +1

    Imagine this: our company had 800+ VMs with an 800+ line user-data script that would run an Ansible script that would pull artifacts from S3 based on the tags of the machine... wait, there's more: it would also run Chef and Puppet in the same goddamn f*ing script. One of my co-workers thought everything could be solved by adding more stuff to that script. Prod broke nearly every month. ECS streamlined everything.

    • @WebDevCody
      @WebDevCody  6 months ago

      Containers solved so many issues

    • @K9Megahertz
      @K9Megahertz 6 months ago +2

      @@WebDevCody and added so many more. Pick your poison, I guess. Kubernetes and the like have their place, but I feel like it's one of those things where everything looks like a nail and everyone is holding a Kubernetes hammer. Not everything should be put on Kubernetes.

  • @codeagency
    @codeagency 6 months ago

    These are pretty much ancient methods 🤣 I remember them too from 15+ years ago. But these days, it's very easy with just GitHub Actions + ArgoCD + a Kubernetes cluster. Zero downtime, auto rolling updates, auto rollback if there are issues... It's a completely different era these days, even for self-hosting with VMs
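The rolling-update and rollback part of such a pipeline boils down to a few kubectl commands once a Deployment exists; the deployment name, container name, and image here are hypothetical:

```shell
# Trigger a rolling update: pods are replaced gradually while the
# Service keeps routing traffic to ready pods only.
kubectl set image deployment/api api=registry.example.com/api:v2

# Block until the rollout finishes (or fails its readiness checks).
kubectl rollout status deployment/api

# One-command rollback to the previous ReplicaSet if v2 misbehaves.
kubectl rollout undo deployment/api
```

In a GitOps setup like ArgoCD, the first command is replaced by a commit that bumps the image tag; the controller applies the same rolling update automatically.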

  • @BandanazX
    @BandanazX 6 months ago +1

    Fire the sysadmins they said. It'll be fun they said.

  • @georgeb.6162
    @georgeb.6162 6 months ago

    We use k8s on an EC2 instance to deploy our main backend and a bunch of microservices. And it's basically all auto-generated from the solution structure; it's black magic, I don't touch it. That's why the devops magicians get paid.

    • @WebDevCody
      @WebDevCody  6 months ago

      Why not use a fully managed k8s service such as what AWS or DO provides? Maintaining your own k8s cluster doesn't sound like fun either, unless you have the money to hire dedicated devops people

    • @georgeb.6162
      @georgeb.6162 6 months ago

      @@WebDevCody We do have the money. But the main crux has been that we've had a pretty beefy bare-metal staging environment for some time, and we've done testing on it for over a year, so migrating the already-configured cluster was pretty painless, and we figured we don't have to complicate things with new pipelines.

  • @cook5436
    @cook5436 6 months ago

    Most apps and sites out there don't need anything more than one EC2 VM. A CRUD app does not need all the CI/CD nightmare you had in your past project.

  • @LeighB420
    @LeighB420 5 months ago

    Appreciate your view, but man, you've still got a lot to learn. What would you do if you didn't have the cloud? There's nothing wrong with doing things manually; it's how you learn what the automation is doing and how to fix it when it goes wrong, because it will. On-prem will never die, it's just a fact. The only thing that will kill on-prem is if/when cloud costs are cheaper than on-prem. (hint: never going to happen)

  • @harvenius
    @harvenius 6 months ago +1

    Damn that looks complicated as shit

  • @tom_marsden
    @tom_marsden 6 months ago +1

    Bare metal for life.

  • @michaelharrington5860
    @michaelharrington5860 6 months ago +1

    I've been learning webdev for a year now, and most of this sounds like a foreign language

  • @noext7001
    @noext7001 6 months ago

    Rancher says hello

  • @LanceBryantGrigg
    @LanceBryantGrigg 6 months ago

    This is an age-old discussion and frankly doesn't need to be listened to if you are experienced in the industry.
    If you are not experienced in the industry, however, this is a great way to understand how orchestration systems work and what the alternatives are.
    FWIW, this doesn't go anywhere near a proper Docker environment on ECS or Kube, where all the pain he talked about goes away, while you still get the feeling of serverless without the "pain of cost overruns" that a true serverless env causes.

  • @Nocare89
    @Nocare89 6 months ago

    In 2018 I had a git-build-deploy pipeline completely in AWS without any Jenkins/Puppet/whatever, complete with building a new VM and hot-swapping it with the old VM in the load balancer. The old VM would die and the new VM would live, while still allowing for rollbacks. No, it was not Beanstalk.
    To me it sounds like you didn't know what was available in AWS, so you hacked something together based on some old practices/searches. Which happens. CodeBuild & CodeDeploy are very straightforward, as is building a target VM image to pipe into. It really was quite nice. And for the record, I'm in the docker-is-lame crowd. I understand it and refuse it. It's just some silly overhead. It has value in some situations but shouldn't be the go-to.
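A sketch of that hot-swap step with the AWS CLI (the target group ARN and instance IDs are hypothetical; the commenter's exact tooling isn't specified):

```shell
TG=arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/app/abc123

# Attach the freshly baked VM and wait for it to pass health checks.
aws elbv2 register-targets --target-group-arn "$TG" \
  --targets Id=i-0aaaaaaaaaaaaaaa1
aws elbv2 wait target-in-service --target-group-arn "$TG" \
  --targets Id=i-0aaaaaaaaaaaaaaa1

# Drain the old VM out of the balancer, then terminate it.
aws elbv2 deregister-targets --target-group-arn "$TG" \
  --targets Id=i-0bbbbbbbbbbbbbbb2
aws ec2 terminate-instances --instance-ids i-0bbbbbbbbbbbbbbb2
```

Because the old instance is only deregistered after the new one is in service, the swap is zero-downtime, and keeping the old AMI around allows rollbacks.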

    • @WebDevCody
      @WebDevCody  6 months ago +1

      It was more that the company had a committee of "senior engineers" who called all the shots, and we had to use the garbage they created

    • @Nocare89
      @Nocare89 6 months ago

      @@WebDevCody Definitely is a thing that happens too xD

  • @andreroodt4647
    @andreroodt4647 6 months ago

    I think every experience can lead us down a certain path and make us come to certain conclusions. What I have found working with microservices is that we have moved the complexity from the code into the operations. Developers are now DevOps, and Ops, no matter which way you cut it, is difficult and frustrating. You are dealing with dev, staging, and prod environments. We want zero downtime, backward compatibility, progressive rollouts, A/B testing, infinite scalability, friendly neighbors, etc., and we often find ourselves spending more time configuring/running services than coding them. It's the nature of the beast, and that's why we get paid the big bucks (just kidding).

  • @SeibertSwirl
    @SeibertSwirl 6 months ago +1

    Good job babe! Also first 👸🏿

  • @rafamuttoni
    @rafamuttoni 6 months ago

    0:37 - 0:48 ouch 😅

  • @misterpizzaman3581
    @misterpizzaman3581 6 months ago

    Simple: load balancers, CodeDeploy, golden image, blue/green for upgrades... of course serverless is much better

  • @georgesmith9178
    @georgesmith9178 6 months ago

    Do you expect AI to take care of all of this? I once watched a video using some sort of "AI" to control Ansible. It wasn't pretty. Well, systems have a way to go to accommodate even basic deployment automation. Of course, you can always try a platform like Cloud Foundry.

    • @kelvinxg6754
      @kelvinxg6754 2 months ago

      Probably. AI will take care of most of the hassle.

  • @algonix11
    @algonix11 6 months ago

    complexity hell

  • @LtSich
    @LtSich 6 months ago

    To make it short, we have a dev trying to do a sysadmin's job...
    And, of course, he hates it, because it's not his job and he is not good at it (and that's normal, when it's not your job)...

  • @sizur
    @sizur 6 months ago

    Hello, new world. You think these new one-push "solutions" you mentioned a few times will make this easier? Wait till one breaks and you are at the total mercy of the vendor.