3:36 I would trigger "Adding 1 minute to the auction" only when < 1 minute (or some other low amount) is left on the auction, to avoid very long-running auctions.
The implementation is sick but I think that the auction house interface could look less like a corporate mobile app. Great content!
I just assume the UI will go through several iterations before it's polished and this can be done last
@@NotesNNotes Yeah I believe that the UI was just a quick prototype so they could implement and test the functionality first.
It looks a bit mobile but I'm not sure I would say it' looks "corporate".
It looks like a web/corporate app, doesn't feel like an auction house in a game
fair point, but the core functionality is a simpler version of the WoW AH many of us (like probably Ben) have used for years. The missing piece is the design aesthetic feeling more like you're in a game world, but Ben's a developer at heart and probably doesn't have an army of designers to manufacture game assets to make it look "gamey". For example, the WoW AH has customized borders, currency, cursors and fonts that all provide a common design language that might make sense in a sort of medieval-themed game. Ben's app looks like a pretty "corporate webapp" because it's very minimalist in comparison, which is pretty common in modern UI design language these days.
In other words: you guys are looking for this to look like it came from a professional game studio, which it didn't haha
It'd be typical to model bids (and most financial information) as a transaction log, where the current price is calculated as max(bid) grouped by item. You then cache the result of that query and invalidate it using a before-insert trigger on the table.
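A minimal sketch of that idea in plain Node (the table and trigger live in the database; here the "log" is just an in-memory array so the derived-price part is visible):

```javascript
// Bids are only ever appended, never updated: an append-only log.
// The "current price" of an item is derived from the log, not stored.
const bids = [
  { itemId: "sword", amount: 100 },
  { itemId: "sword", amount: 120 },
  { itemId: "shield", amount: 40 },
];

// Equivalent of: SELECT "itemId", max(amount) FROM bids GROUP BY "itemId"
function currentPrices(log) {
  const prices = new Map();
  for (const { itemId, amount } of log) {
    prices.set(itemId, Math.max(prices.get(itemId) ?? 0, amount));
  }
  return prices;
}
```

In the real schema the cache would be the stored result of that query, invalidated by the trigger whenever a new bid row is inserted.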
>The auction takes a cut
>Because this is the free market
Lol
I really missed this type of content. Welcome back champ.
Since you're running postgres, you could avoid some ravioli in the auction house by using locks with skip locked (inside a transaction):
a bit like this: "select * from "AuctionItem" where "endsAt" < now() for update skip locked limit 50"
this code is safe to run concurrently (will not return the same row in two separate queries), so you can remove the sleeps and redis ravioli ;)
There still might be an issue. Take this example:
1. node1 queries database gets 50 items
2. node1 finishes query and starts processing those items
3. node2 sends db query just after node1 finishes the query and before starting processing
4. node2 gets same result as node1 and starts processing same items
So yeah, as you can see, a db lock solves one part of the problem, but the lock needs to cover the whole processing part rather than just the query itself in order to solve the race condition.
@@themisir In this situation you would definitely need to handle the data inside the transaction. Sorry if I wasn't clear on that ;) Cheers!
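To make that concrete, a hedged sketch of the whole thing in one transaction (the table/column names come from the query above; the processing step is just a placeholder):

```sql
BEGIN;
-- Rows locked here stay locked until COMMIT, so another worker
-- running the same query skips them instead of seeing them again.
SELECT * FROM "AuctionItem"
WHERE "endsAt" < now()
LIMIT 50
FOR UPDATE SKIP LOCKED;

-- ...process the returned rows here, while the locks are held...

COMMIT;
```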
You should use a timeout that calls itself; intervals can get into a situation where, if the job is slow and takes over a minute, the next job starts before the previous one finishes, slowing down your DB even more.
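A rough sketch of that pattern (the delay and the job are made up; the point is that the timer is re-armed only after the job finishes):

```javascript
// Re-arm the timer only after the job finishes, so a slow job can
// never overlap the next run (unlike setInterval, which keeps firing
// on schedule even while a previous run is still going).
function scheduleJob(job, delayMs) {
  let stopped = false;
  async function tick() {
    if (stopped) return;
    await job();                              // wait out the job, however slow
    if (!stopped) setTimeout(tick, delayMs);  // then wait the full delay again
  }
  setTimeout(tick, delayMs);
  return () => { stopped = true; };           // call this to stop the chain
}

// Example usage: record each run's start time.
const runs = [];
const stop = scheduleJob(async () => runs.push(Date.now()), 50);
```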
If the number of bids placed per minute exceeds 50, then there will be lag. Because of the way the distributed lock is made, you can't scale to multiple servers. Consider storing the key of the item(s) each server is handling; the pk of the row is a common distributed lock key. Thanks for making this Ben.
For the love of god, don't be lazy with the tables Ben. My company had a seminar where one of the consultants gave a SQL crash course and taught us everything we should NOT do. The biggest mistake he made in his 10+ years as a developer was being lazy when designing his databases. He said "being lazy with the databases saved me 1 day of work, but after a while the software had to scale and what was 1 day of work turned into 1 week of work. The database should always reflect the real world. Instead of setting up each home containing one sensor, I ended up making sensors-in-homes one table. Down the line, they wanted to keep track of when each sensor was replaced, which became a huge issue as we had none of that important data".
Also, if you are using Node you can capture the kill signal (process.on) as well as hold on to the setInterval return value; call clearInterval when the kill signal arrives and the setInterval will stop.
Why does it look like every single app I see when I was interviewing bootcamp candidates?
It's so satisfying when things work in real time
its fake
Lower case SQL verbs…
When I see a new Ben Awad video, I watch the new Ben Awad video
so now he creates new project inside this so it doesn't look like he moved to new project
You would make your life so much easier by using a scheduler. With AWS or GCP you can schedule a cron job on a serverless function. That way you don't have to worry about having a main runner and concurrency. But hey, whatever works best depending on your context 😅.
Exactly..
U got it
Noooooo I don't want to leave my comfy cozy react workspace to handle this -- PROJECT CANCELED.
Can someone dumb this comment down for a noob pls
Serverless and cloud is for soy devs
Keep your health a priority too Ben. I just burnt out (anxiety and acid reflux) and had to take a week-long leave just to relax and get my health back. I'm still weak, but the setback helped put into perspective what's important. Everything in moderation, and disengaging from work, is key.
0:25 that HR department really should start doing HR stuff
98% sure that microphone doesn't even work
serious ben why u doing this 😅😂
fr
???????? because it's a project that makes him happy to work on????
+billion dollar unicorn gaming company potential
Programing is fun
Doing what?
Working on an angular project right now and I'd rather die.. Send help
babe wake up, new ben awad just dropped 🏃
Hey Ben, may I know what shaving blade/tool did you use? Btw, you always make the logic much easier and efficient! 🍻
Why not update the current bid in redis and asynchronously update the db with the redis current bid? If there are 100 concurrent bids it will otherwise cost 100 db updates
Actually if the bidding it is for days it may be good enough to just use postgres. WDY guys?
You should never ever ever update a cache directly. A cache should only be invalidated or populated. Trying to do anything "clever" like this will lead to development hell and horrific race conditions.
The cost of updating an item in the DB will probably be tiny. If it starts to get expensive then you might want to use an in-memory database, which redis can be used for, as can postgres.
@@xzero01501 i think they are suggesting using it more like a queue than a cache.
@@xzero01501 I mean updating the cache from a dedicated server and not directly from the client, if that's what you're thinking.
The cost may be tiny, but in a scenario where there are many users and time is very critical for the bidding, waiting on the db update can be very problematic. Just imagine 100 concurrent requests to update the db exhausting the connection pool — and then what?
@@LawZist Even with blocking io and ~300 req/sec postgres would handle the updates just fine. It's generally performant up to ~30ms response times.
If you want faster than that then I'd rather have a fully in-memory database and drop postgres completely.
As for redis as a write-ahead queue with eventual consistency to disk: that seems crazily over-engineered
clear the interval as well so it's not continuously running in the background.
I've been waiting for this for a very long time.
This will be another amazing journey.
Finally a programming video
You can use redis locks or any other lock to lock the update job and release when done. That way you’ll be 100% sure they don’t overlap.
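The shape of that idea, sketched with a toy in-process lock instead of redis (a real setup would use something like redis SET with NX and a TTL so a crashed holder can't hold the lock forever):

```javascript
// A toy lock standing in for a redis lock: only one holder at a time
// for a given key, always released when the job finishes.
const locks = new Set();

function withLock(key, job) {
  if (locks.has(key)) return false; // someone else holds it: skip this run
  locks.add(key);
  try {
    job();
  } finally {
    locks.delete(key);              // release even if the job throws
  }
  return true;
}
```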
Wtf, there's just so much I need to learn to be a dev ;-;
start with small projects and you'll get there, no one was born an expert developer
it takes time, hundreds/thousands of debugging hours and a lot of googling, but with time you'll be able to build that and more
just be patient and enjoy your time -unless you're debugging ofc-
yes... everywhere you go there's an infinite amount of known and unknown stuff to learn about... but to become a dev you have many guidelines online to learn enough to make awesome projects like voidpet
Well it's not like Ben learned all he knows in a weekend, y'know
Practice and you'll get there
what is that db modeling lang called? at 1:20
I'm new here, so sorry in advance, but if the game has a limit on the inventory, then could you use the auction house as an unlimited storage by putting your items in it for extremely high prices and a very long expiration date?
Was listening to this in the background. Thought you were describing ebay 😂
I will try to use actor frameworks for concurrency.
Good content by the way
Really enjoying this series, the high level overview with some code is nice to follow. Best of luck Ben. Remember us when this takes off and you make a portable VoidConsole.
Your intros never get old
five percent is everything
As a former Neopet connoisseur (aren't we all?), I can see how you're implementing the most important aspects of Neopets--selling stuff--with tons of improvements. No race conditions, maybe? No unlimited time, impossibly-high prices simply to show off your dope paintbrushes? Next you'll be telling me I don't need to play 3 crappy flashgames a day for my penance of 3,000 neopoints
Finally!! Some classic Ben Awad content🤌✨
Calling hacky code a background job! Nice!
If this feature gets popular, I don't think we will be able to update every transaction.
i wish i could be as close to you as the mic
Ben thanks for sharing with us :D
the Prisma schema is just beautiful
Where's your corporate necktie?
“The pessimist sees difficulty in every opportunity. The optimist sees opportunity in every difficulty." - Winston Churchill $
Babe wake up Ben Awad uploaded a video
In order to check which server should run the background job, you can use environment variables or a prop file; when you start the app in a docker container, just pass the env variable as TRUE where you need it. I think redis is over-engineering for that.
I don't think it's possible to dynamically change environment variables after the process starts.
@@themisir Nope, but you can set the env vars for each container from start, which should suffice.
You can achieve this with Kubernetes by using a replica with the env var set to true and another deployment with as many replicas as you want without that variable enabled.
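A minimal sketch of the env-var approach (the variable name is made up):

```javascript
// Only the replica started with RUN_BACKGROUND_JOBS=true runs the job;
// every other replica just serves requests.
function shouldRunBackgroundJobs(env = process.env) {
  return env.RUN_BACKGROUND_JOBS === "true";
}

if (shouldRunBackgroundJobs()) {
  // e.g. setInterval(checkEndedAuctions, 60_000);
}
```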
This just sems like modern neopets
Regarding your background jobs: if you're hosting somewhere like CloudRun/Firebase or whatever, just use a cron job. The cron will hit 1 server via the ingress delegation. Set the cron to 1 minute and it effectively runs the interval for you.
1:04 i was really hoping for him to say hypixel skyblock. it would be pretty funny
Yo Ben, how do you handle item returns if the user has filled up their inventory during the duration of the auction?
You have changed the shirt for the very first joke!
If you're selling something, instead of starting the bid at 1000vm, start the bid at 1vm, then outbid yourself with 2 other accounts, incrementing by 1vm until you hit 1000vm. BOOM: 1000 free extra minutes (16 hours) on the auction.
Easy fix. Only add 1min if there is less than 1min remaining
@@tomich20 I was just making a joke but yea that would be a good idea
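That fix is tiny; a sketch with the threshold and extension values from this thread (both are just examples):

```javascript
const THRESHOLD_MS = 60_000; // extend only if less than 1 minute remains
const EXTENSION_MS = 60_000; // ...and then add exactly 1 minute

// Returns the (possibly extended) end time after a bid lands at `now`.
function extendOnBid(endsAt, now = Date.now()) {
  const remaining = endsAt - now;
  return remaining < THRESHOLD_MS ? endsAt + EXTENSION_MS : endsAt;
}
```

With this rule, self-outbidding early in the auction no longer buys any extra time, because bids placed with more than a minute left don't extend anything.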
If you make the query on your background job idempotent, you won't have to worry about concurrency.
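For example, an auction-closing step written so that running it twice has the same effect as running it once (a toy in-memory version; in SQL the same guard would be an atomic `UPDATE ... WHERE status = 'open'`):

```javascript
// Closing is guarded by the current status, so a repeated run finds
// nothing left to do. (In a real multi-server setup the check-and-set
// must be atomic in the database, not in application memory.)
function closeAuction(auction) {
  if (auction.status !== "open") return false; // already handled: no-op
  auction.status = "closed";
  return true;
}
```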
I think I'm 10 years older than you, but I want to be you when I grow up.
Is there going to be a "Going once, going twice..... sold, to the green elf with the white suit!" ???
These videos are good
Hey! You can use ".unref()" on a running timer in Node so it doesn't keep the process alive.
What a very good video. I liked it very much. The variety of colors in this video makes it interesting. Thank you very much for the good work. I wish you happiness my frien👍👍👍👍👍👋👋👋👋👋👋
broo this is great.... wow
Yay. Welcome back
It's hard to see a query with select *. So painful.
You should have used RxDB for state/replication etc
I thought of exactly the same solution for the concurrent background jobs before you described it
was waiting for Swyx to plug Temporal but he left
Oh yeah, I'm subscribed to this guy...
What ORM is Ben using? 1:25
Prisma
After suffering in startups for a few years, I've decided to jump into the water and create my own startup. I am the sole developer, and my partner is actually from China! This will be a project in China, which is very challenging, but I've got courage.
I'm not the greatest developer, hell sometimes I write shit code, but if you can make something work, that's what matters, right?
Cheers
Is your startup company registered in China, or in some other country ?
@@PierreMiniggio will be registered there by the partner
@@avivshvitzky2459 Be careful then, there are plenty of stories of foreigners joining ventures with a Chinese national who then took advantage of the Chinese judicial system not being advantageous to foreigners to kick the foreigner out of the company when it grows.
Legally speaking you're not on comfortable ground at all.
Great attitude! All today's gurus were once shitty developers so no need to worry about that :)
It would be better to register it in a country where the judicial system is more fair.
I'd suggest talking your partner into that possibility (find another reason to justify it, maybe mention some other countries that allow you to pay less tax or no tax, idk).
Your partner being a Chinese national, and you not being one, means he could, if he wants to, take over the company whenever he wants, either by his own will, or by being pressured into it if you happen to build something that the Chinese government either doesn't like, or wants to take over.
It happens on many levels; there have been many small language schools in China opened by foreigner + Chinese pairs and then taken over by the Chinese partners.
You could look up how Uber got out of China because it was forced into selling cheap to a Chinese company, etc.
It's a real risk that could happen to you.
I strongly recommend you do research on that stuff.
You're putting yourself in a situation where you might invest time and money into something that will then disappear from your life completely, without notice or your consent.
Really good content there! Your advice is one of the main reasons I swapped to the tech industry and started a RUclips channel myself. Thanks a lot!
poggers
ok
good stuff
Hey Ben! For bidding race conditions you can write a test to make sure it works. Cheers!
i like how you are making a game which isn't NFT-based, as a lot of companies would have instantly made this one that way; there are a few games that run similar to it. also love your content
Why does HR department make business decisions?
nice work!
Excuse me copper ore in WoW costs 100 gold now!? Holy shit, I still remember when 1 gold was a lot.
Postgres locks rows, not tables... Locking tables is too expensive (not saying it's never done)...
Where can I get that T-shirt?
Now I am trying to create an auction house, hoping for the best
how can i do the concurrent background task for ending an auction using Node.js ?
Are you using prisma for the ORM ?
nice
Howdy! What language is this? I'm new to programming and want to develop games in the future, thanks.
Isn't TemTem a Pokemon MMORPG?
this is a lowkey discord endorsement
i'd prefer signing up with a "GMAIL"
Hey bro, you should get jacked for the 1milly on tiktok and surprise everyone with the lights off trend. If you need help getting in shape hmu.
So basically….eBay?
What ORM are you using?
judging from previous videos, typeorm
If the end goal is to become a Pokemon MMO like game and if there's already a Pokemon MMO like game i.e Pokemon MMO, Why would someone play this game?
If you don't set your transaction isolation at the serializable level you can get race conditions, since the default is read committed (it will only see commits from before the transaction started, so concurrent updates won't see each other). You could also use a row lock with a SELECT FOR UPDATE, or you could use an advisory lock. In any case, I would keep the logic in the DB (create a plpgsql function worst case) to avoid the round trip to the application.
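For the row-lock option, a rough sketch (the "AuctionItem" table name appears elsewhere in this thread; "currentBid" and the bid parameters are made up):

```sql
BEGIN;
-- Lock the row so concurrent bids on the same item serialize.
SELECT "currentBid" FROM "AuctionItem" WHERE id = $1 FOR UPDATE;

-- The application (or a plpgsql function) verifies the new bid is
-- higher, then updates while still holding the lock.
UPDATE "AuctionItem" SET "currentBid" = $2 WHERE id = $1;
COMMIT;
```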
I missed you bro
If you want help with server-side stuff I’m working with Go.
For item expiration you could have used
dynamoDB ttl and set up dynamoDB streams.
A stream record is emitted on the stream when items are deleted which you can have trigger lambda. The lambda can handle the transaction of who gets the item / who gets the money / etc.
For the redis cache which facilitates a single server doing the background job - look up the leader election algorithm
The dynamoDB TTL function doesn’t quite work with this. It tells dynamo that a record may be deleted but doesn’t force the delete at the TTL time. In my experience with a small table 15-30 minutes beyond the TTL time is normal but the dynamo docs say it may take up to 48 hours. I think he’s looking for more up to date actions here.
@@bjlapp very good point. Forgot the TTL feature is variable
DynamoDB's TTL is variable, but MongoDB's TTL would be a better, simpler alternative. It deletes items within a minute, and its API is also a lot more node friendly. It would have solved the whole problem without needing redis or keeping track of the running job
@@georgemunyoro does Mongos TTL give you an event stream though? An action needs to be performed on every delete. That’s why Dynamo was a good candidate because of DDB stream records
This seems really well thought out, the devs of Hypixel Skyblock could definitely learn a thing or two from you
The prodigal son has returned from Tik Tok
So in theory, could people collude to indefinitely extend the time of an auction by bidding and adding 1 minute to the timer? If they had enough money to do so?
Yes, but they gain nothing from that, and most attempts will end quickly because of the amount of money needed to keep going. Once it ends, the last bidder spends their money on it and wastes a lot if they inflated the price. Even if the bidder knew the seller, the auction house would take a cut, which makes the idea less worth it. Maybe someone would overpay a friend to give them more money (until a trading system exists), but I see 0 reasons why multiple people would want to go back and forth extending the timer.
what are your origins??
comment for the algorithm
Am I the only one with a crush on Ben? 🤠🙈
How many of you guys are looking to use this app?