I watched Anthony and now I can run 6 miles every morning without sweating
You know Anthony is an OG when his intro is , "yeah I got into coding by making Call of Duty Clan websites as a teen".
Counter strike*
"if you turned your code like this you would have a pyramid scheme." Dude, Anthony's one-liners are so underrated
I watched Anthony and now I can bench press 80 kg
Thats what my side hoe benches
This is not the flex you think it is
@@xmorsewith no training whatsoever? That’s not bad at all. Especially if you’re considering body weight.
20:22 I don't know if you already found a solution but within the AWS ECS world, you can just define a task that executes a special "task" and then dies.
AWS ECS tasks can be compared to Kubernetes "Jobs" in the context of special migration tasks. With Kubernetes, you can create Jobs that run to completion and are triggered manually rather than automatically alongside other services. This allows you to run specific tasks like database migrations or batch processing without affecting the rest of your running services. It's a flexible way to handle one-time or periodic tasks separately from your main application workloads.
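A minimal sketch of such a Kubernetes Job; the image name, command, and metadata are placeholders, not anything from the episode:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migrate          # placeholder name
spec:
  backoffLimit: 0           # fail fast instead of retrying a half-applied run
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: registry.example.com/myapp:latest   # placeholder image
          command: ["/app/migrate", "up"]            # placeholder entrypoint
```

You trigger it manually (or from CI) before rolling out the new Deployment, and `kubectl wait --for=condition=complete job/db-migrate` tells you when it's safe to proceed.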
Or just use a lock via Redis, so only one of your services runs the migration, and defer removing the lock from Redis.
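A runnable sketch of that pattern. The Redis call is stubbed with an in-memory locker so this compiles on its own; in production `TryLock`/`Unlock` would be backed by something like SETNX with a TTL plus a DEL on completion (all names here are illustrative):

```go
package main

import (
	"fmt"
	"sync"
)

// Locker abstracts an atomic "set if not exists" lock. In production this
// would be Redis (SETNX with a TTL); the in-memory fake below stands in so
// the example runs standalone.
type Locker interface {
	TryLock(key string) bool
	Unlock(key string)
}

type memLocker struct {
	mu   sync.Mutex
	held map[string]bool
}

func newMemLocker() *memLocker { return &memLocker{held: map[string]bool{}} }

func (m *memLocker) TryLock(key string) bool {
	m.mu.Lock()
	defer m.mu.Unlock()
	if m.held[key] {
		return false // another replica is migrating right now
	}
	m.held[key] = true
	return true
}

func (m *memLocker) Unlock(key string) {
	m.mu.Lock()
	defer m.mu.Unlock()
	delete(m.held, key)
}

// migrateOnce is what each replica calls at startup. The lock prevents two
// replicas migrating concurrently; the migration tool's own version table
// (goose etc.) makes a later, repeated run a no-op.
func migrateOnce(l Locker, migrate func()) bool {
	if !l.TryLock("db-migration") {
		return false
	}
	defer l.Unlock("db-migration")
	migrate()
	return true
}

func main() {
	l := newMemLocker()
	runs := 0
	applied := false
	migrate := func() {
		if !applied { // models the tool's "already applied" check
			runs++
			applied = true
		}
	}
	for replica := 0; replica < 3; replica++ { // three replicas starting up
		migrateOnce(l, migrate)
	}
	fmt.Println("schema changes applied:", runs) // prints 1
}
```

The lock handles the concurrent-startup race; idempotent migrations handle the "runs again later" case.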
Db migrations should be backwards compatible in most cases. So don't delete columns; just mark them as nullable (remove them manually a few versions down the line), and make sure new columns have default values so that your old deployments can still continue to work.
That way you can always safely deploy and roll back without destroying your db. Rolling deployments can work now, too. For more complex migrations, accept the downtime. But those cases should be rare.
As for deploying migrations in k8s, you want to start a migration job first, separate from the deployment. Once that job finishes successfully, update the deployment.
You definitely don't want race conditions in your db migrations. That is scary af.
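The expand/contract flow described above, sketched as hypothetical Postgres migrations (table and column names are invented, and in practice each step would ship as its own versioned migration file):

```sql
-- deploy N ("expand"): add the new column with a default so old pods
-- that never set it keep working
ALTER TABLE users ADD COLUMN display_name text NOT NULL DEFAULT '';

-- deploy N+1: new code no longer writes the old column; relax it
-- instead of dropping it so a rollback still works
ALTER TABLE users ALTER COLUMN nickname DROP NOT NULL;

-- a few versions later ("contract"), once no running deployment reads it
ALTER TABLE users DROP COLUMN nickname;
```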
great advice thanks!
The way I’ve done this in the past: have an environment variable for a migration instance, which solely runs the migrations.
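A minimal Go sketch of that setup, assuming a hypothetical `APP_ROLE` variable (the name is invented): the same image is deployed twice, once as a short-lived migrator and once as the long-running server.

```go
package main

import (
	"fmt"
	"os"
)

// role maps the APP_ROLE environment variable (a made-up name) to what this
// instance should do. One one-off container runs with APP_ROLE=migrate;
// the normal deployment leaves it unset and just serves.
func role(appRole string) string {
	if appRole == "migrate" {
		return "migrate" // run migrations, then exit 0
	}
	return "serve" // skip migrations entirely, start the server
}

func main() {
	switch role(os.Getenv("APP_ROLE")) {
	case "migrate":
		fmt.Println("running migrations, then exiting")
		// runMigrations(); os.Exit(0)  -- app-specific, omitted here
	default:
		fmt.Println("starting server")
		// startServer()
	}
}
```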
Separating app from migrations sounds like good advice thanks
20:51 I’m the opposite. Manual migrations make me so nervous. I’m slow, and what if I f up? I’ll be slow to unwind it. Automated migrations are fast and precise… exactly as programmed.
In terms of your migration race condition problem, I personally think it's because you're running the migrations on service startup. At my work we run our migrations as part of the CI/CD. After all our tests have run, docker images get built, then the very last thing is migrations are run and then docker images get deployed to the cluster. That way it only happens once and doesn't even require a microservice.
But what about the version out of sync problem that he mentioned.... Like pod deployment fails but the migration is successful? Just curious to know what happens in that case
@@moveonvillain1080 We make our migrations backwards compatible, so that if our migration runs but the new pods don't spin up, the old ones will still work as normal. For bigger changes in our database (like changing the name of a column) we do it in multiple steps and even support two different columns at the same time until the old one is decommissioned.
27:12 It's like a fake sum type, and it's a great pattern! This is how http errors are handled in Elm as well. The sucky thing about it in Go is you need to opt in to wrapping your values in these interfaces, much like the "Effect" package for javascript, a great pattern, but at the end of the day it's a wrapper.
I tend to have a docker-compose with 2 services. One runs the migration script, and the other starts the app once the migrations have completed.
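A sketch of that layout; image names and commands are placeholders. Compose's `service_completed_successfully` condition is what sequences the two:

```yaml
services:
  migrate:
    image: myapp:latest              # placeholder image
    command: ["/app/migrate", "up"]
  app:
    image: myapp:latest              # same image, different entrypoint
    command: ["/app/server"]
    depends_on:
      migrate:
        condition: service_completed_successfully  # wait for exit code 0
```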
Why not gitignore the generated sqlc code and have air run the sqlc command?
In any case I don't think sqlc is perfect but it definitely speeds up development. Thanks for all the go content.
He was still a timmy back then
Enjoying these podcasts, keep up the great work. Thanks to Anthony for his tutorials too.
The episode everyone was waiting for. 😊
docker compose, makefile, air, htmx with go template make me feel so productive in go
Nice pipeline, but adding air is bloated.
I use a similar setup for my go work.
@@cariyaputta Do you suggest any alternative? I'm not that good at frontend so auto reload is pretty helpful for me.
@@trietang2304 if your app is already using Docker then you can just use compose watch.
use magefile/mage instead of make for your go apps, it's a banger
technically, he has a CS degree. (Counter Strike)
I have a Counter Science degree
I watched Anthony and now I can do lateral raises with 80kg
compaany - love his accent!
Come to belgium, everybody speaks like that here...
@@dionverbeke1975 what's up with the 'ish' at the end of some words? for example, 'hereish', lol
Dude has the same origin story as I did. CS, gaming, websites, Drupal and PHP, later consulting and Golang
For the DB race condition issue you can take a couple of approaches:
1. Bring down all services and start one after the other (downtime)
2. Do a rolling deployment. There are multiple issues in this. Two different pods with different schemas can run simultaneously. You can maintain a schema version table to handle this at the app layer. The new pods should process requests only according to the correct schema version.
3. Offload schema handling to PostgreSQL's ALTER command. This will ensure old and new schemas are active at the same time
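A sketch of the app-layer gate from option 2, with invented version numbers: each pod knows which schema version it was built for and only serves while the database's recorded version is one it can handle.

```go
package main

import "fmt"

// compatible reports whether a pod built against schema version podV can
// safely serve while the database's schema_version table reports dbV.
// During a rolling deploy the DB may be one backwards-compatible version
// ahead of the old pods; anything further apart means refuse traffic.
func compatible(podV, dbV int) bool {
	return dbV == podV || dbV == podV+1
}

func main() {
	fmt.Println(compatible(3, 3)) // true: pod and DB agree
	fmt.Println(compatible(3, 4)) // true: DB migrated ahead, still compatible
	fmt.Println(compatible(4, 3)) // false: new pod started before the migration ran
}
```

In practice the pod would read `dbV` from a schema version table at startup (and on a health check) and fail readiness when the check returns false.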
Ah, very good, very good
You can also run a Helm hook to migrate before deploying the new pods
@@andreffrosa yes I just know these few basic techniques
Or use an init container that does the migration, and only do a rolling deploy with a max of 1.
Could also elect a "leader" pod responsible for migrations
Migrations: your changes should not break the existing code, you should be able to have two code versions running at once. That goes beyond databases but to all external services
I remember when Dreamweaver CS2 was a god tier editor. Could push to straight to the server from the FTP tab or whatever. Shit was nuts.
I was wondering when this would happen!
Lets gooooooo, Im surprised you brought anthony ❤
Bun has a router too. I’m using it and it’s pretty nice. I’ve enjoyed their orm as well.
Finally 🥂
we need anthony course on bootdev, lets make it
you need to do a rollout deployment in kubernetes for your race condition issue, it’s not a postgres issue
He never said it's a Postgres issue or a goose issue. He just wondered how he could tackle it and if the goose team knows about it.
migrations is a thing for ci/cd pipeline
This is a good show.
Anthony 100% has a CS Degree, with a concentration in head shots!
After watching Andrew I have started adding 'esh' to the end of every word.
Magento, which is now Adobe Commerce, is still going strong. I do a lot of Magento 2 development and it can be a nightmare. Lol
It's a good experience but the way they built up the framework honestly sucks
X stands for eXperimental and not eXtended
LeGGendary GG Tony
he is the coolest gg
Anthony Huge W guy!!
banger
I didn't know George St. Pierre is a programmer
liquibase ftw
Oh shit
What Is This, a Crossover Episode?
go god 😂 nice title
K8s, golang, http, and I'm good
YavaScript
Should've mentioned ELIXEEEERRR
I tried this guy’s course and got a refund. Could have been me, could have been the teaching style. Cool guy, but I would recommend a trial first to make sure the style works for you.
It is fine for YouTube videos, but I never expected that style for a formal course. But that’s on me.
As someone currently working on a magento project... ya... it fucking sucks
This is a weird podcast episode. You jump right into niche questions about the Go db migration tooling you use and problems you’ve run into in prod. You’re trying to leverage his expertise to solve your own problems instead of learning about his experience
I don't see it like that. Go lacks opinions so it's cool to see how people handle things. I often wonder how others handle migrations, passing user context around, dependency injection, etc.
love the show but man that has to be the worst intro theme song ever